In this code example I created a background mood changer using the ml5 feature extractor. The end user trains the pretrained model to detect whether they are smiling or frowning, and based on that facial expression the background changes to either rain or sunshine.
The ml5 feature extractor is a pretrained model that takes advantage of transfer learning: you reuse the feature-extraction layers of a pretrained model, which lets you retrain the model for a new task. In this instance, the task is determining whether the user is smiling or frowning.
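The setup described above can be sketched as follows; this is a minimal illustration of the ml5 API, not the exact project code, and the variable names are assumptions:

```javascript
// Load MobileNet through ml5's feature extractor and build a classifier
// on top of it for our two custom labels (transfer learning).
let featureExtractor;
let classifier;
let video;

function modelReady() {
  console.log('MobileNet loaded and ready for transfer learning');
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  // MobileNet's learned features are reused; only the final
  // classification layer is trained on our happy/sad images.
  featureExtractor = ml5.featureExtractor('MobileNet', modelReady);
  classifier = featureExtractor.classification(video);
}
```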
I added instructions so the end user knows how to use the ml5 feature extractor. I also added three audio files: rain plays when the user looks sad, crickets play when the user looks happy, and a sound plays when the user clicks the train button. I created buttons for the user to click on to train the model.
For the design of the ml5 feature extractor project, I added CSS to center the video, set an initial background image for the page (before the background changes), mimic paper for the directions portion of the page, and change the button icon color.
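The styling described above might look something like this; the class names and image file are assumptions, not the project's actual stylesheet:

```css
/* Center the video/canvas on the page. */
video, canvas {
  display: block;
  margin: 0 auto;
}

/* Initial background image, shown before the model changes it. */
body {
  background-image: url('sunny-day.jpg');
  background-size: cover;
}

/* Paper-like panel for the directions section. */
.directions {
  background: #fffdf0;
  box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3);
  padding: 1em;
}

/* Button icon color. */
button i {
  color: #f5a623;
}
```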
I grabbed the body and audio elements and assigned them to variables: body, rainAudio, and cricketAudio.
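A sketch of those element lookups, assuming the audio elements carry matching ids in the HTML:

```javascript
// Grab the body and the two audio elements so the classification
// result can later switch the background and play a sound.
const body = document.querySelector('body');
const rainAudio = document.getElementById('rainAudio');
const cricketAudio = document.getElementById('cricketAudio');

// e.g. when the model reports "sad":
//   body.style.backgroundImage = "url('rain.jpg')";
//   rainAudio.play();
```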
During training, the ml5 feature extractor computes how far off its predictions are (the loss). As training continues the loss gets lower and lower, and when training finishes, ml5 calls the whileTraining callback one last time with a loss of null. In the whileTraining function we watch for that null value, and once we see it we classify our results using the gotResults function.
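That callback logic can be sketched like this; the function names come from the text, while the continuous-classification loop in gotResults is an assumption about how the sketch keeps updating:

```javascript
// ml5 calls whileTraining repeatedly with the current loss; when
// training is finished it calls it one last time with loss === null.
function whileTraining(loss) {
  if (loss === null) {
    console.log('Training finished');
    classifier.classify(gotResults); // start classifying the video feed
  } else {
    console.log('Loss: ' + loss);    // loss shrinks as training progresses
  }
}

function gotResults(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // results[0].label is "happy" or "sad" (ml5 0.4+ result format);
  // update the background image and audio here.
  classifier.classify(gotResults);   // keep classifying continuously
}
```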
In the setup function, I grab the sad and happy button elements and set onclick functions that capture an image from the video and add it to the training set whenever a button is pressed. The sad and happy buttons track the number of clicks and display the count on the button, so the user knows how many sad and happy images they have saved for training the feature extractor. I also grab the train button, which starts the training process.
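The button wiring described above might look like this; the element ids, labels, and counter names are assumptions:

```javascript
// Track how many training images the user has saved per label.
let happyCount = 0;
let sadCount = 0;

function setup() {
  // ... canvas, video, and featureExtractor setup as shown earlier ...
  const sadButton = document.getElementById('sad');
  const happyButton = document.getElementById('happy');
  const trainButton = document.getElementById('train');

  sadButton.onclick = function () {
    classifier.addImage('sad');      // save a frame labeled "sad"
    sadCount++;
    sadButton.textContent = 'Sad (' + sadCount + ')';
  };

  happyButton.onclick = function () {
    classifier.addImage('happy');    // save a frame labeled "happy"
    happyCount++;
    happyButton.textContent = 'Happy (' + happyCount + ')';
  };

  trainButton.onclick = function () {
    classifier.train(whileTraining); // kick off training
  };
}
```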
In the draw function I created a canvas and placed the video inside it. I reversed the video so the user would see a reflection of themselves.
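The mirroring step can be sketched with p5's translate and scale; this assumes video is the capture created in setup:

```javascript
function draw() {
  // Move the origin to the right edge, then flip horizontally so
  // the user sees a mirror image of themselves.
  translate(width, 0);
  scale(-1, 1);
  image(video, 0, 0, width, height);
}
```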