
Sprint 2 - Update 4

​

white space week - bytehacks

​

30 November 2020

It is the start of Sprint 2, and it is ByteHacks week, so we have our final (for this sprint) product to showcase: a camera to capture faces for facial recognition, the Raspberry Pi as the brain of the safe gantry, and an LCD screen to display the information. We also built a website that lets the admin view the data for decision making, as well as upload images for image processing.

Link to website: https://sentry-p2.herokuapp.com/

 

Update


Firstly, facial images are stored in Firebase Storage, and details such as names are stored in the database. The webpage we created is used to upload each face image, together with the user's information, to Firebase. We then wrote a script that retrieves the images and updates the machine learning model to register the user's face.
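A minimal sketch of what that retrieval-and-registration script could look like, assuming the `firebase_admin` SDK and the `face_recognition` library; the service-account path, bucket name, collection name, and field names below are illustrative placeholders, not our actual configuration:

```python
# Sketch: pull uploaded face images from Firebase and register their
# encodings. Bucket/collection/field names are illustrative placeholders.
import io


def build_known_faces(records):
    """Turn (name, encoding) records into the parallel lists that
    face-matching helpers expect. Records without an encoding are skipped."""
    kept = [r for r in records if r.get("encoding") is not None]
    names = [r["name"] for r in kept]
    encodings = [r["encoding"] for r in kept]
    return names, encodings


def register_faces_from_firebase():
    # Imported here so the pure helper above is usable without the SDKs.
    import firebase_admin
    from firebase_admin import credentials, firestore, storage
    import face_recognition

    cred = credentials.Certificate("serviceAccount.json")  # placeholder path
    firebase_admin.initialize_app(cred, {"storageBucket": "sentry-p2.appspot.com"})
    db = firestore.client()
    bucket = storage.bucket()

    records = []
    for doc in db.collection("users").stream():      # placeholder collection
        data = doc.to_dict()
        blob = bucket.blob(data["image_path"])       # e.g. "faces/alice.jpg"
        image = face_recognition.load_image_file(
            io.BytesIO(blob.download_as_bytes()))
        boxes = face_recognition.face_locations(image)
        if boxes:  # register only if a face was actually detected
            data["encoding"] = face_recognition.face_encodings(image, boxes)[0]
            records.append(data)
    return build_known_faces(records)


if __name__ == "__main__":
    names, encodings = register_faces_from_firebase()
    print(f"registered {len(names)} faces")
```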

​

This week's update might seem smooth, but many issues actually started to surface.

​

We decided to deploy the machine learning model on our local machine to do the facial recognition, because the Raspberry Pi could not recognize faces reliably due to low accuracy. As we know, the Raspberry Pi is NOT meant for training a machine learning model: it heats up, which also causes the OS to stop responding.

 

Therefore, we set up a local server using ZeroMQ to communicate between the local machine and the Raspberry Pi, as shown in the proposed solution. As for the website, we also encountered multiple errors, such as the webpage crashing after a certain number of photos were uploaded to the Firebase database.
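A minimal sketch of that setup using the `pyzmq` binding with a simple REQ/REP pattern; the port, the `name:confidence` wire format, and the `recognize` callback are illustrative assumptions, not our exact protocol:

```python
# Sketch: the Raspberry Pi sends a captured frame to the local machine
# over ZeroMQ (REQ/REP); the local machine replies with "name:confidence".
# Port number and message framing are illustrative assumptions.


def format_reply(name, confidence):
    """Encode a recognition result in the assumed 'name:confidence' format."""
    return f"{name}:{confidence:.2f}"


def parse_reply(reply):
    """Decode 'name:confidence' back into (name, confidence)."""
    name, conf = reply.rsplit(":", 1)
    return name, float(conf)


def serve(recognize, port=5555):
    """Runs on the local machine: receive JPEG bytes, reply with the result."""
    import zmq  # imported here so the pure helpers work without pyzmq
    sock = zmq.Context().socket(zmq.REP)
    sock.bind(f"tcp://*:{port}")
    while True:
        frame = sock.recv()                # raw JPEG bytes from the Pi
        name, conf = recognize(frame)      # the heavy ML runs here, not on the Pi
        sock.send_string(format_reply(name, conf))


def ask(frame_bytes, server_ip, port=5555):
    """Runs on the Raspberry Pi: send one frame, return (name, confidence)."""
    import zmq
    sock = zmq.Context().socket(zmq.REQ)
    sock.connect(f"tcp://{server_ip}:{port}")
    sock.send(frame_bytes)
    return parse_reply(sock.recv_string())
```

The Pi stays cool because it only captures and ships frames; all recognition happens on the stronger local machine.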

 

HOG was very inconsistent as well, so we had to change to a different machine learning model to improve the accuracy. The Convolutional Neural Network was a big improvement as a model; it raised the accuracy level noticeably.
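With the `face_recognition` library, that switch is a one-parameter change: `face_locations` accepts `model="hog"` or `model="cnn"`. A sketch of the idea; the `best_match` helper and its tolerance value are our own illustration, not a library API:

```python
def best_match(distances, names, tolerance=0.6):
    """Pick the closest known face, or None if nothing is within tolerance.
    Pure helper over a list of face distances (smaller = more similar)."""
    if not distances:
        return None
    i = min(range(len(distances)), key=lambda k: distances[k])
    return names[i] if distances[i] <= tolerance else None


def recognize(image_path, known_encodings, known_names, detector="cnn"):
    # face_recognition wraps dlib; detector is "hog" (fast, less accurate)
    # or "cnn" (slower, more accurate) -- the switch described above.
    import face_recognition
    image = face_recognition.load_image_file(image_path)
    boxes = face_recognition.face_locations(image, model=detector)
    results = []
    for encoding in face_recognition.face_encodings(image, boxes):
        distances = face_recognition.face_distance(known_encodings, encoding)
        results.append(best_match(list(distances), known_names))
    return results
```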

​

​

Based on our research, the differences between the HOG and CNN algorithms are:

 

Histogram of oriented gradients

HOG is based on first-order image gradients. The gradients are pooled into overlapping orientation bins in a dense manner.

​

HOG is:

  1. Based on first-order image gradients pooled in orientation bins.

  2. Dense (evaluated all over the image).

  3. Hand-engineered; no learning algorithm is involved in producing HOG features.

​

Convolutional neural network

A CNN is based on repeated convolution operations which filter the signal at each stage. The filters are trainable; that is, they learn to adapt to the task at hand during training.

​

CNNs are:

  1. Trainable feature detectors, which makes them highly adaptive. That is why they can achieve high accuracy in most applications, such as image recognition, and they can be trained end-to-end.

  2. Mainly supervised deep learning models, inspired by the primary visual cortex, with alternating convolution and pooling layers.

  3. Able to learn low-level features similar to SIFT and HOG from training examples alone, which is remarkable. Thus feature engineering can be minimized when using CNNs.
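To illustrate the "repeated convolution" idea, here is a single 2D convolution in NumPy (our illustrative choice). In a real CNN the filter values below would be learned weights updated by backpropagation rather than hand-picked numbers:

```python
import numpy as np


def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as deep-learning
    libraries implement it): slide the filter over the image and take a
    weighted sum at each position. In a CNN, `kernel` is trainable."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out


if __name__ == "__main__":
    # Hand-picked vertical-edge filter; a CNN learns such filters itself.
    edge_filter = np.array([[1.0, 0.0, -1.0]] * 3)
    image = np.zeros((5, 5))
    image[:, 3:] = 1.0
    print(conv2d(image, edge_filter))  # responds strongly at the edge
```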

​

 

Thoughts after first sprint

​

This week has been a tough one for our team. Despite everything, I enjoyed working with my teammates. This project experience has shown me the real meaning of Scrum, unlike the lectures I could not relate to at the start. After our sprint retrospective, I realized how much more we could have done to help each other improve.

​

​

To be continued... when the sprint resumes...