My 100 Days Of ML Code Journey

“The science of getting computers to learn, without being explicitly programmed”

Machine learning is a subset of artificial intelligence in the field of computer science that often uses statistical techniques to give computers the ability to “learn” (i.e., progressively improve performance on a specific task) with data, without being explicitly programmed. ~that’s what Wikipedia says

Machine Learning and Artificial Intelligence are currently big trends. They turn out to be very useful in solving real-world problems by making predictions from trained models (e.g., breast cancer prediction).
And the best part of Machine Learning is that it is evolving every day.

About this Challenge

Siraj Raval recently proposed a challenge #100DaysOfMLCode.
Anyone who wants to contribute can participate in this journey of 100 days.
It’s a pledge to dedicate 1 hour each day (for 100 days) to studying or coding Machine Learning.

This is my Journey

I’ve decided to be a part of this movement, dedicating an hour each day for 100 days to studying or coding Machine Learning, and I will keep track of each day’s progress through this blog as well as my GitHub repository.

Day 0 (22nd July 2018):

  • I’ve collected the study materials from Siraj’s YouTube channel and other online resources, started with Google’s Machine Learning Crash Course, installed all the dependencies, and set up the Programming Environment.

Day 1 (23rd July 2018):

  • Studied Linear and Logistic Regression

  • Applied Logistic Regression to the Breast Cancer dataset (you can find it here)

  • Used the scikit-learn library’s implementation of Logistic Regression (a minimal sketch follows below)

  • Learned how to track the shapes of matrices, though dealing with rank-1 arrays was a hectic task

Results: Got 96.66666667% accuracy on the test set
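
Here is a minimal sketch of how that workflow might look with scikit-learn, using the breast cancer dataset bundled with the library; the split ratio, random seed, and max_iter are my assumptions for illustration, not the exact code from that day:

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Dataset bundled with scikit-learn, standing in for the dataset linked above
X, y = load_breast_cancer(return_X_y=True)

# 80/20 split; the ratio and seed are assumptions
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A high max_iter so the solver converges on the unscaled features
model = LogisticRegression(max_iter=10000)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))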

Day 2 (24th July 2018):

Learned about vectorization and its importance in improving the performance of the Machine Learning Models.
Here’s what I’ve observed:
Using Python’s NumPy library, I implemented a simple dot product of two NumPy arrays, once using vectorization and once using an explicit for loop:

import time
import numpy as np

a = np.random.rand(3000000)
b = np.random.rand(3000000)

# Vectorized dot product
init = time.time()
c = np.dot(a, b)
fin = time.time()
print(c)
print("time: " + str(1000 * (fin - init)) + "ms")

# Explicit for loop
c = 0
init = time.time()
for i in range(3000000):
    c += a[i] * b[i]
fin = time.time()
print(c)
print("time: " + str(1000 * (fin - init)) + "ms")

Here, I have two NumPy arrays named ‘a’ and ‘b’. Using the time functions, we can track how long each dot-product computation takes to execute.

So, the “np.dot()” function uses a vectorized implementation of the dot product, whereas in the second part, we use an explicit for loop to iterate through the elements of the arrays and multiply them pairwise.

Results:

749824.6073387377
time: 2.269268035888672ms
749824.6073387286
time: 1075.3190517425537ms

We can observe that the vectorized implementation takes far less time than the explicit for loop to compute the same dot product.

Also, I made some progress with the Machine Learning Crash Course, up to the topic “Generalization”.

Day 3 (25th July 2018):

  • I made some progress with the second course (Improving Deep Neural Networks: Hyperparameter Tuning, Regularization, and Optimization) of the Deep Learning Specialization offered on Coursera.

  • Learned about Gradient Descent and how it works (a small refresher sketch follows this list).

  • Studied various methods of regularizing the Neural Network to prevent overfitting.
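
As a refresher, here is a minimal sketch of batch gradient descent for linear regression, written from the update rule w := w − α·∇J(w) rather than from any course assignment; the learning rate, epoch count, and synthetic data are assumptions:

import numpy as np

def gradient_descent(X, y, lr=0.5, epochs=2000):
    m, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(epochs):
        y_hat = X.dot(w) + b                # predictions
        grad_w = X.T.dot(y_hat - y) / m     # gradient of the MSE w.r.t. the weights
        grad_b = np.sum(y_hat - y) / m      # gradient w.r.t. the bias
        w -= lr * grad_w                    # step against the gradient
        b -= lr * grad_b
    return w, b

# Toy usage: recover y = 2x + 1 from synthetic data
X = np.random.rand(200, 1)
y = 2 * X[:, 0] + 1
w, b = gradient_descent(X, y)
print(w, b)   # should approach [2.] and 1.0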

Day 4 (26th July 2018):

  • Learned about Mini-batch Gradient Descent and a bit about Stochastic Gradient Descent (see the sketch after this list).

  • Made some progress with the Machine Learning Crash Course.
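
A minimal sketch of the mini-batch idea, shuffling the data once per epoch and taking fixed-size slices; the batch size of 32 is an arbitrary assumption:

import numpy as np

def minibatches(X, y, batch_size=32):
    idx = np.random.permutation(len(X))    # reshuffle so batches differ each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

# Each (X_batch, y_batch) pair gets its own gradient step. With batch_size=1
# this degenerates into Stochastic Gradient Descent, and with
# batch_size=len(X) it is ordinary batch gradient descent.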

Day 5 (27th July 2018):

  • Implemented a Linear Regression model on the Boston Housing dataset.

  • Used the scikit-learn library for both the dataset and the Linear Regression model.

  • Used Matplotlib to plot the predictions (a sketch of the workflow follows the results below).

Results: Got a mean squared error of 21.47121262684534
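
A minimal sketch of that day’s workflow. Note that load_boston has since been removed from scikit-learn (in version 1.2), so this sketch substitutes the bundled California Housing dataset; the split and plot details are assumptions:

import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Stand-in dataset: load_boston was removed in scikit-learn 1.2
X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LinearRegression()
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MSE:", mean_squared_error(y_test, pred))

# Predicted vs. actual target values
plt.scatter(y_test, pred, s=5)
plt.xlabel("actual")
plt.ylabel("predicted")
plt.show()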

Day 6 (28th July 2018):

  • Made some progress with the Deep Learning Specialization on Coursera.

Day 7 (29th July 2018):

  • Made some progress with the ML Crash Course by Google.

  • Learned about Exponentially Weighted Averages and the bias correction used with them (a small sketch follows).
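
A minimal sketch of both ideas, written directly from the formulas v_t = β·v_{t−1} + (1 − β)·θ_t and v̂_t = v_t / (1 − β^t); the β value and the toy signal are assumptions:

import numpy as np

def ewa(values, beta=0.9):
    v = 0.0
    averaged, corrected = [], []
    for t, theta in enumerate(values, start=1):
        v = beta * v + (1 - beta) * theta        # exponentially weighted average
        averaged.append(v)
        corrected.append(v / (1 - beta ** t))    # bias correction for early steps
    return averaged, corrected

# Toy usage: smooth a noisy sine wave
raw = np.sin(np.linspace(0, 3, 50)) + 0.1 * np.random.randn(50)
smooth, smooth_corrected = ewa(raw)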

Day 8 (30th July 2018):

  • Learned about the Adam Optimization Algorithm

  • Implemented the Adam Optimization Algorithm in the Deep Learning course assignments (a one-step sketch follows)
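
For reference, here is a one-step sketch of the Adam update written from the published algorithm (Kingma & Ba, 2015), not the course’s assignment code; the hyperparameters are the usual defaults:

import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad             # 1st-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2        # 2nd-moment (RMS) estimate
    m_hat = m / (1 - beta1 ** t)                   # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)    # parameter update
    return w, m, v

# Toy usage: minimize (w - 3)^2 starting from w = 0
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 101):
    grad = 2 * (w - 3.0)                           # gradient of the toy loss
    w, m, v = adam_step(w, grad, m, v, t, lr=0.1)
print(w)   # approaches 3.0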

Day 9 (31st July 2018):

  • Learned and revised the basics of building a Neural Network: how it works, forward propagation, and backward propagation (an end-to-end sketch follows this list)

  • Made some progress with the ML Crash Course
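
A minimal end-to-end sketch of forward and backward propagation for a one-hidden-layer network with sigmoid activations, trained on XOR as a toy problem; the layer sizes, learning rate, and iteration count are assumptions:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# XOR as a toy problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

for _ in range(10000):
    # forward propagation
    A1 = sigmoid(X @ W1 + b1)
    A2 = sigmoid(A1 @ W2 + b2)
    # backward propagation (sigmoid output + cross-entropy loss gives dZ2 = A2 - y)
    dZ2 = A2 - y
    dW2, db2 = A1.T @ dZ2, dZ2.sum(axis=0, keepdims=True)
    dZ1 = (dZ2 @ W2.T) * A1 * (1 - A1)   # chain rule through the hidden sigmoid
    dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0, keepdims=True)
    # gradient descent step on every parameter (in-place updates)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.5 * g

print(A2.round(2))   # should approach [[0], [1], [1], [0]]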

Day 10 (1st August 2018):

  • Implemented Deep Dream on a live video stream using OpenCV
    This was a challenge posed by Siraj Raval on his YouTube channel, where he taught Google’s DeepDream and set a weekly challenge to apply it to a video stream.

The output of Google’s Deep Dream

Live Stream with Deep Dream

Day 11 (2nd August 2018):

Day 12 (3rd August 2018):

  • Started with the TensorFlow framework

  • Learned the basics of Tensors and how to run a session in TensorFlow (a small sketch follows)

Deep Learning frameworks help a lot: a single line of code can do the work of many, reducing both code size and complexity.
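
A minimal sketch of the session workflow described here. TensorFlow 2 now runs eagerly by default, so this sketch uses the compat.v1 API to reproduce the graph-and-session style the framework used in 2018:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()   # restore the 2018-era graph-and-session behavior

a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b                      # only builds a graph node; nothing runs yet

with tf.Session() as sess:     # the session actually executes the graph
    print(sess.run(c))         # 6.0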

Day 13 (4th August 2018):

Day 14-Day 21 (5th August 2018–12th August 2018):

Day 22-Day 29 (13th August 2018–20th August 2018):

  • Made some progress with the Machine Learning Crash Course

  • Watched some videos on Machine Learning with Python

Not much for the week (PS: university exams won’t let you do anything interesting and beneficial).

Day 30 (21st August 2018):

  • Started with the concepts of Convolutional Neural Networks

Day 31 (22nd August 2018):

  • Learned about the pooling and padding concepts in CNNs (a small pooling sketch follows)
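
A minimal sketch of 2×2 max pooling with stride 2 on a single-channel feature map, written from the definition; the window size and stride are the common defaults, not anything specific from that day:

import numpy as np

def max_pool(fmap, size=2, stride=2):
    h, w = fmap.shape
    out = np.empty(((h - size) // stride + 1, (w - size) // stride + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = fmap[i*stride:i*stride + size, j*stride:j*stride + size]
            out[i, j] = window.max()      # keep only the strongest activation
    return out

fmap = np.arange(16).reshape(4, 4)
print(max_pool(fmap))   # [[ 5.  7.] [13. 15.]]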

Day 32-Day 38 (23rd August 2018–29th August 2018):

  • Prepared for a hackathon project on Deep Learning with Android.

Applied for Hitsathon V2.0 (Hindustan University Hackathon) in Chennai, India.

Day 39-Day 40 (30th August 2018–31st August 2018):

These were remarkable days, as we won this hackathon, and it was our first win. It was amazing to hear our team name announced in 1st position among the top 10 shortlisted teams.
After a short 60-second presentation about the project we had been working on, they finally announced the winners of the Hackathon…
And the winning team is…. Scary Bugs

It was an amazing experience to work on a Deep Learning project with Android.

Here’s the link to the Github Repo

Team Scary Bugs

Day 41-Day 44 (1st September 2018–4th September 2018):

Learned about the classic architectures of CNNs:

  • LeNet-5

  • AlexNet

  • VGG-16

  • ResNets

Day 45-Day 74 (5th September 2018–4th October 2018):

Finally, I completed the Deep Learning Specialization provided by deeplearning.ai.

Thanks to Coursera and Andrew Ng for this course and the amazing content.

Day 75 (1st November 2018):

  • Getting started with TensorFlow.js

  • Learned how to create tensors and perform some basic operations

I got the motivation to use TensorFlow.js from already-deployed applications like PoseNet.

Day 76 (1st December 2018):

  • Implemented the XOR function using a Neural Network in TensorFlow

  • Learned about the use of one-hot encoding in machine learning (a sketch follows the repository link below)

You can find the source code in this repository.
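
For illustration only (not the repository’s exact code), here is a minimal Keras sketch of XOR with the two output classes one-hot encoded; the layer sizes, activations, and epoch count are assumptions:

import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
labels = np.array([0, 1, 1, 0])
# one-hot encoding: class 0 -> [1, 0], class 1 -> [0, 1]
y = tf.keras.utils.to_categorical(labels, num_classes=2)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="tanh"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2000, verbose=0)

print(model.predict(X).argmax(axis=1))   # expect [0, 1, 1, 0]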