In this code example I created a background mood changer using the ml5 feature extractor. The end user will train the pretrained model to detect whether they are smiling or frowning, and based on their facial expression, the background will change to either rain or sunshine.

The ml5 feature extractor is a pretrained model that takes advantage of transfer learning. You reuse the feature-detecting part of a pretrained model, which lets you retrain the model for a new task. In this instance, the task will be determining whether the user is smiling or frowning.

HTML

To create the base of the ml5 feature extractor project I added some external CSS and JavaScript files: p5.js, ml5.js, Google Fonts, Bootstrap, Font Awesome, an external weather CSS file, and my own CSS file. p5.js lets me create a canvas, ml5.js provides the ml5 feature extractor, and the weather CSS is what renders the raindrops.

I added some instructions so the end user knows how to use the ml5 feature extractor. I also added three audio files: if the user is sad it will play the rain file, if the user is happy it will play the cricket file, and when the user clicks the train button it will make a noise. I created buttons for the user to click on to train the model.
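A minimal sketch of the page skeleton described above. The library CDN versions, file names (weather.css, style.css, rain.mp3, cricket.mp3, click.mp3), and element ids are my assumptions, not the exact markup from the project:

```html
<!-- Sketch: base page for the ml5 feature extractor project.
     Google Fonts and Font Awesome links omitted for brevity. -->
<head>
  <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.6.2/dist/css/bootstrap.min.css">
  <link rel="stylesheet" href="weather.css">  <!-- raindrop effect -->
  <link rel="stylesheet" href="style.css">    <!-- my own styles -->
  <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.0/p5.js"></script>
  <script src="https://unpkg.com/ml5@0.12.2/dist/ml5.min.js"></script>
</head>
<body>
  <section id="directions">
    <p>Click Smile or Frown while making that face, then click Train.</p>
  </section>

  <!-- sounds: rain for sad, crickets for happy, a click for training -->
  <audio id="rain-audio" src="rain.mp3"></audio>
  <audio id="cricket-audio" src="cricket.mp3"></audio>
  <audio id="train-audio" src="click.mp3"></audio>

  <!-- buttons the user clicks to collect samples and train -->
  <button id="happy">Smile</button>
  <button id="sad">Frown</button>
  <button id="train">Train</button>

  <script src="sketch.js"></script>
</body>
```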


CSS

To create the design of the ml5 feature extractor project I added CSS to center the video. I added an initial background image to the page, shown before the mood changes it. I added CSS to mimic paper for the directions portion of the page. Lastly, I added some CSS to change the button icon color.
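A sketch of that styling. The selectors, image file name, and colors here are assumptions standing in for the project's actual stylesheet:

```css
/* Center the video/canvas on the page */
video, canvas {
  display: block;
  margin: 0 auto;
}

/* Initial background, before the mood changer swaps it */
body {
  background-image: url("sunny-start.jpg");
  background-size: cover;
}

/* Mimic paper for the directions section */
#directions {
  background: #fffef0;
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);
  padding: 1rem;
}

/* Change the Font Awesome button icon color */
button .fa {
  color: #ffc107;
}
```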


JavaScript

I grabbed the body and audio elements and stored them in variables: body, rainAudio, and cricketAudio.
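Caching those elements might look like the sketch below. The element ids are assumptions; I've wrapped the lookups in a helper so the lookup logic is easy to exercise:

```javascript
// Sketch: cache the body and audio elements once at startup.
// The ids 'rain-audio' and 'cricket-audio' are assumed, not confirmed.
let body, rainAudio, cricketAudio;

function cacheElements(doc) {
  body = doc.body;
  rainAudio = doc.getElementById('rain-audio');
  cricketAudio = doc.getElementById('cricket-audio');
}
```

In the browser you would call `cacheElements(document)` once the page has loaded.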

During training, the ml5 feature extractor reports its loss, a measure of how far off its predictions currently are. As training continues, the loss gets lower and lower; when training finishes, the value passed to the callback is null. In the whileTraining function we check whether the loss has become null, and once it has, we classify our results using the gotResults function.
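The training callback could be sketched like this. It assumes ml5's classifier calls the callback repeatedly with the current loss and passes null when training is complete, as described above; `classifier` and `gotResults` are defined elsewhere in the sketch:

```javascript
// Sketch: training callback for the ml5 feature extractor classifier.
// A null loss signals that training has finished (assumption from the text).
function trainingDone(loss) {
  return loss === null;
}

function whileTraining(loss) {
  if (trainingDone(loss)) {
    // Training finished: classify the current video frame
    classifier.classify(gotResults);
  } else {
    // Still training: log the shrinking loss
    console.log('loss: ' + loss);
  }
}
```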

The gotResults function receives an error if one occurred; otherwise it receives the classification in the variable I named results. If the result label is equal to sad, I change the background image to a rainy photo, play the rain audio file, pause the cricket audio file if it is playing, and add a weather rain class to the body tag. If the label is happy, I set the background to a sunny photo and pause the rain audio file. I check whether the weather rain class is present on the body tag and, if so, remove it, getting rid of the rain.
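A sketch of that handler. The labels 'sad'/'happy', the class name 'weather-rain', the image file names, and the cached rainAudio/cricketAudio variables are all assumptions from the write-up; a small pure helper maps each label to a mood so the branching is easy to check:

```javascript
// Sketch: map a classification label to a background mood.
// Anything that isn't 'sad' is treated as sunshine.
function moodFor(label) {
  return label === 'sad' ? 'rain' : 'sun';
}

// Sketch: results handler, assuming rainAudio/cricketAudio were
// cached earlier and ml5 passes results as [{ label, confidence }].
function gotResults(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  const label = results[0].label;
  if (moodFor(label) === 'rain') {
    document.body.style.backgroundImage = "url('rainy.jpg')";
    document.body.classList.add('weather-rain'); // start the CSS rain
    cricketAudio.pause();
    rainAudio.play();
  } else {
    document.body.style.backgroundImage = "url('sunny.jpg')";
    document.body.classList.remove('weather-rain'); // stop the rain
    rainAudio.pause();
    cricketAudio.play();
  }
}
```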

In the setup function, I grab the sad and happy button elements and set onclick handlers that save a labeled image from the video whenever a button is pressed. The sad and happy buttons also track their click counts and display them on the button elements, so the user knows how many sad and happy images they have saved to train the feature extractor. I also grab the train button, which starts the training process.
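That setup step could look like the sketch below. The button ids and labels are assumptions; `classifier.addImage(label)` and `classifier.train(callback)` follow ml5's feature extractor classifier API, and p5.js calls `setup()` automatically in the browser:

```javascript
// Sketch: p5 setup wiring the sample-collection and train buttons.
let sadClicks = 0;
let happyClicks = 0;

function setup() {
  const sadBtn = document.getElementById('sad');
  const happyBtn = document.getElementById('happy');
  const trainBtn = document.getElementById('train');

  sadBtn.onclick = () => {
    classifier.addImage('sad');                  // save a frowning frame
    sadBtn.textContent = `Frown (${++sadClicks})`;
  };

  happyBtn.onclick = () => {
    classifier.addImage('happy');                // save a smiling frame
    happyBtn.textContent = `Smile (${++happyClicks})`;
  };

  // Start training; whileTraining is called with the loss each step
  trainBtn.onclick = () => classifier.train(whileTraining);
}
```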

In the draw function I created a canvas and placed the video inside it. I reversed the video so the user would see a reflection of themselves.
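Mirroring the feed in p5.js is typically done by flipping the drawing context before painting the frame, as in this sketch (the canvas and video capture are usually created once in setup(); `video` is assumed to be that capture):

```javascript
// Sketch: draw the webcam frame mirrored, so the user sees a reflection.
function draw() {
  // Flip the x-axis: move the origin to the right edge, then scale by -1
  translate(width, 0);
  scale(-1, 1);
  image(video, 0, 0, width, height);
}
```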
