(Note: all opinions are my own)
I have wanted to certify my Data Science skills for the last few years, and was thus excited to discover that Andrew Ng, one of the world's leading AI experts and a co-founder of Coursera, had rolled out additional Data Science & AI course material following his world-famous Machine Learning course.
Being an avid Coursera student, I jumped at the opportunity. Here’s what I found.
Taking the Specialization has been an incredible experience in terms of learning clarity and knowledge absorption.
Everyone wants to ride the Deep Learning wave these days, yet neural networks are often treated as total black boxes. Andrew Ng’s ability to unpack the topic and present it in a digestible yet comprehensive way is disarming.
I particularly enjoyed the first two courses, the most foundational of the five, in which the student builds both theoretical and practical knowledge of the inner workings of neural nets. The third course then acts as a bridge between courses 1 & 2 and 4 & 5, applying a strategic lens to deep learning and illustrating common project strategies for performance improvement and error analysis.
The series then closes with more complex applications, which expose the student to real-world use cases and leave the learner with just the right level of excitement to progress further with the theory, or to immediately start thinking about personal projects where these newly acquired, forward-looking skills can be applied.
So that you can judge for yourself, I have summarized below what to expect if you decide to take the Deep Learning Specialization on Coursera, to help you decide whether to proceed with the investment of time and money (a £36/month subscription until completion; prices may vary depending on your location).
Structure, Course Topics and Tech Stack
Deeplearning.ai’s Deep Learning Specialization is structured across 5 courses.
Total indicative duration is 4 months at a pace of 5 hours per week. If you are keen on dedicating consistent time to studying and going through the materials, you can definitely cut that down to 2 or 3 months, especially if it is not your first Data Science/AI course series.
The course list is the following:
- Neural Networks and Deep Learning
- Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
- Structuring Machine Learning Projects
- Convolutional Neural Networks
- Sequence Models
As prerequisites, knowledge of Python programming, basic linear algebra, and Machine Learning (ideally from Andrew Ng’s above-mentioned Machine Learning course) is recommended.
During the course practical exercises, you will mainly make use of the following:
- Tools: Jupyter Notebook
- Main libraries: NumPy, TensorFlow, Keras
1. World-class teaching and deep learning expertise
Few people are as qualified as Andrew Ng when it comes to AI and Deep Learning. He combines academic and industry experience with great teaching and communication skills.
You can tell he is as passionate about the topic as he is about teaching it to students, and this mix makes for a superb overall learning experience. You can “test him out” by taking his separate but related Machine Learning course to get familiar with the topic and his teaching method, but chances are you will love it and come back for more.
This knowledge is delivered via a combination of video lectures and Jupyter notebooks, which the student completes with Python code in order to work towards course and series completion.
A pleasant aspect of the graded exercises is that the Jupyter notebooks are heavily documented with theoretical summaries of that particular week’s contents, providing the student with helpful refreshers and equation details right when they are needed.
The icing on the cake is the set of optional video interviews Andrew conducts with AI and deep learning experts from academia and industry, which garnish the course with useful contextual knowledge and a direct taste of Deep Learning from best-in-class practitioners.
2. Develop great intuition by building from scratch
A lot of current deep learning courses are implementation-focused; while practical knowledge is fine and indeed very useful, that alone leaves knowledge-hungry students wanting more.
Especially in courses 1, 2 and 4, that desire is definitely satisfied, as students learn to work out the ins and outs of neural nets from scratch, without initially relying on any high-level deep learning framework. Everything is implemented in NumPy and put together using a modular approach that solidifies understanding.
The student thus learns not only what the input and output variables represent in this context, but also the intermediate steps that map the former to the latter: from forward propagation through different activation functions, to the backpropagation of gradients and the gradient-descent updates of the model’s weights and biases.
This modular approach to learning is best exemplified by the use of single-purpose Python functions, which the student writes and stacks together into a final “model” function representing all of the steps necessary for a working neural network implementation.
This foundational approach is the heart and soul of the Specialization, and it fosters strong practical knowledge of NumPy and its linear algebra methods.
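To give a flavour of that modular, from-scratch style, here is my own minimal sketch in NumPy: a single-layer logistic-regression “network” trained on a toy OR-style dataset. The function names, dataset, and hyperparameters are mine for illustration, not the course’s actual notebook code, which builds the same pattern out to multi-layer networks.

```python
import numpy as np

def sigmoid(z):
    # Activation function: squashes any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W, b):
    # Forward propagation: linear step followed by the activation.
    return sigmoid(W @ X + b)

def backward(X, Y, A):
    # Gradients of the cross-entropy loss w.r.t. the weights and bias.
    m = X.shape[1]
    dZ = A - Y
    dW = (dZ @ X.T) / m
    db = np.sum(dZ) / m
    return dW, db

def model(X, Y, lr=0.5, iters=2000):
    # Stack the single-purpose functions into one training loop
    # (plain gradient descent), mirroring the course's modular pattern.
    W = np.zeros((1, X.shape[0]))
    b = 0.0
    for _ in range(iters):
        A = forward(X, W, b)
        dW, db = backward(X, Y, A)
        W -= lr * dW
        b -= lr * db
    return W, b

# Toy dataset (illustrative): learn the logical OR of two inputs.
X = np.array([[0., 0., 1., 1.],
              [0., 1., 0., 1.]])
Y = np.array([[0., 1., 1., 1.]])

W, b = model(X, Y)
preds = (forward(X, W, b) > 0.5).astype(float)  # → matches Y
```

The course assignments follow exactly this kind of decomposition, extending it with per-layer forward and backward helpers before everything is assembled into the final “model” function.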
3. Mix of depth and breadth
Another great value-add of the series is the combination of foundational knowledge and holistic illustration of real-world use cases, which comes to light in courses 4 and 5, as the student is presented with Convolutional Neural Networks and Sequence models.
This is a great aspect, as the student builds on what they learned during the previous three courses and is then exposed to a variety of interesting applications, from face recognition to object detection and machine translation. These applications are presented in sufficient depth without dwelling excessively on any single one, which rings especially true for the last course in the series.
Overall, this has the welcome effect of nudging students towards their preferred application area, while providing enough grounding to dive in.
Finding things to improve in such a well-crafted specialization is really hard. If I had to mention some aspects, it would be the following two, with the disclaimer that overall they hold little weight compared to the key selling points of this course.
1. Occasional excessive handholding during exercises
While the programming exercises at the end of each week are great, on some occasions I felt they provided too much guidance on how to get from A to Z. Some of the Python functions needed only a couple of lines of code, and sometimes the in-notebook instructions gave away the answer, or at least a near-complete indication of how to get there.
While this supports a solid understanding of the overall neural network process, it risks leaving the student a bit disoriented by the end. This applies mainly to the more complex exercises of the series, which occur during the final two courses, so do not take it as a general trait.
However, if you are expecting a bit more freedom to implement your own solutions during the exercises, the rigorous structure of the notebooks might leave you slightly underwhelmed.
2. No final capstone project
I would have loved the opportunity to work on a personal project implementation to submit in a cloud environment, perhaps at the cost of sacrificing some lecture time along the way.
There are probably computational and organizational reasons why this does not happen, but I feel that, at the end of such a course, even a small deep learning implementation would have let the student leave on better terms, having just developed their own neural network model.
Nonetheless, I still leave inspired to pursue related interests, even if a closing project would have solidified the learning further, especially as neural nets can be “customized” so much in terms of architecture and hyperparameter tuning. Even so, the series’ closing does a great job of leaving you hopeful and eager to put what you have learned into practice at the first possible occasion.
All in all, I would still recommend taking this course if you are:
- a Data Scientist looking to take the next step in your career by diving into the Deep Learning area of AI and moving beyond classical Machine Learning methods and applications
- looking to get certified and to prepare for or apply to entry-level Machine/Deep Learning engineer positions
- looking to build enough ground knowledge to get started with personal Deep Learning projects, with the possibility of implementing neural networks either from scratch or via the Tensorflow/Keras frameworks
I would not recommend taking this course if you are:
- expecting high-level, framework-centric knowledge of neural networks; the course is quite foundational and makes little use of Keras, focusing instead on building intuition via NumPy and, later, TensorFlow
- looking for a deep dive into sequence models, even if you are planning to take course five in isolation; I found that course to offer a broader and more general overview than the others in the specialization, and you would probably be better served by looking for sequence-learning-specific applications and papers