
So I took that Machine Learning Course Everyone Talks About

There’s been all this buzz around AI and machine learning and whatnot, so I’m not gonna lie, I haven’t been that much of a stranger to the concept. Initially, though, it really was just that: a buzz I’d heard and knew nothing about. Or so I thought.

When they touched on machine learning in my impact evaluation course (and I really mean touched on, btw; they spent ONE lecture on it), I realised that, in a simplified way, it was levelled-up regression. An oversimplified, naive understanding, but something to take away from one short lecture nonetheless.
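To make that "levelled-up regression" idea concrete: the specialisation itself opens with linear regression fit by gradient descent, which is exactly where the two worlds meet. Here's a minimal sketch of that first model in plain Python (the toy data and learning rate are my own illustrative picks, not from the course):

```python
# Fit y = w*x + b by gradient descent on mean squared error --
# the "hello world" regression the course starts with.
def fit_line(xs, ys, lr=0.05, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        dw = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        db = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        w -= lr * dw
        b -= lr * db
    return w, b

# Toy data generated from y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

Same least-squares idea I knew from Stata, just with the optimiser written out by hand instead of hidden behind a `regress` command.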


Something about it stuck. Then, while working in D&T at The HEINEKEN Company, I was exposed to real-life ML algorithms at play in the digital products they were building.

Now, a beer company investing heavily in tech and using ML is bound to pique my curiosity, especially when I was able to somewhat follow along with what was happening.

I spoke to so many people with their hands deep in product, data, and strategy, and after several genuinely interesting conversations, I was almost sure I wanted to explore and get my hands dirty myself. During a casual coffee chat with the Chief AI Officer, I asked where I'd even begin, and he pointed me straight to the foundations:


The Machine Learning Specialisation offered by DeepLearning.AI and Stanford University. Andrew Ng is a legend, so I’m sure you’ve heard of the course too.


When I started, I knew just enough Python to get by. Like hellaaaaa basic. Most of my experience was with my bachelor’s thesis, where I worked in Stata with a dataset of hundreds of thousands of entries. It was a solid starting point, but still a very guided experience. Definitely not enough to call myself a model builder, especially in Python. I hadn’t really built anything from scratch or understood the “why” behind each line of an ML model.


The course is broken into three parts: supervised learning, advanced learning algorithms, and unsupervised learning.


Here’s what stuck with me:

  • The bias-variance tradeoff shows up constantly, in practically every real modelling decision you make.

  • Evaluation metrics actually matter. 95% accuracy might sound great, but not if you’re predicting cancer diagnoses or detecting fraud. Context is everything.

  • ML success has less to do with having the smartest model, and more to do with knowing your data, your users, and your goals.

  • And my favourite, the real-world examples: spam filters, facial recognition, basic neural networks. It wasn’t just code anymore; it felt real. Tangible. Relevant. And, as a visual thinker, the perfect learning aid.
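The accuracy point is easy to demonstrate: when the positive class is rare, a "model" that only ever predicts the majority class scores brilliantly on accuracy and catches nothing. A quick sketch (the 5% positive rate is an illustrative assumption of mine, not a figure from the course):

```python
# 1000 cases, only 50 true positives (think: a rare diagnosis).
y_true = [1] * 50 + [0] * 950
# A lazy "model" that always predicts the negative class.
y_pred = [0] * 1000

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
true_pos = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
recall = true_pos / sum(y_true)  # of the real positives, how many did we catch?

print(accuracy, recall)  # 0.95 and 0.0: great accuracy, misses every case
```

Which is exactly why the course pushes precision and recall alongside accuracy for skewed datasets.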


But it wasn’t always smooth, of course. Diving in with basic Python meant the labs with a lot of pre-written code were both a blessing and a trap. It was easy to skim, hit run, and feel like I got it. The graded labs at the end of each module forced me to slow down, write from scratch, and debug through trial and error. They were also the reason I would go back to the weekly labs and force myself to read each line of code and tinker with it.


In a surprising turn of events, the math was the easy part. It was mostly stuff I’d learned during undergrad, now applied in new contexts. That isn’t to say some videos didn’t take a few rewatches to properly land. But I actually liked that. It took me back to being a student, only this time I enjoyed it so much more, unburdened by the pressure of exams or grades hovering over me.


If I could do it again, and moving forward, I’d probably apply each topic to a real-world dataset right away. Make use of Kaggle and maybe actually build something. Not necessarily a useful model, just enough to get my hands dirty and say “yeah, I get the flow of work.” When I started my own capstone project after finishing the course, I realised that while I was comfortable with the core model-building part, the setup, Git messiness, and data cleaning were a whole new challenge.


I also started seeing ML differently; less like a checklist of skills and more like a set of tools that need to be used in the right context.

There’s a big difference between building something cool and building something useful.

And while I still have a lot to learn, I’ve realised that it’s not just about writing the code, it’s about understanding the problem well enough to know whether you should be writing code at all.


I signed up to understand machine learning, and I did. But I also got clarity on the kind of problem solver (and builder) I actually want to be. Basically, Andrew Ng can now add “life coach” to his resume as well lol.
