Face, emotion and object detection

This app uses machine learning to perform facial detection, emotion detection, and object detection. When performing facial analysis, it tries to determine the gender, age, and emotions of the people in a selfie. When performing object detection, it attempts to classify the objects in a picture.

 

Emotion and facial detection

  1.  Take a selfie, alone or with friends. For best results, ensure that the faces are not obscured.
  2. Follow @mwambaanalytics.
  3. Attach the selfie and tweet it to @mwambaanalytics, with the hashtag #facedetect.
  4. The app will respond with a tweet listing the gender, age, and emotions for each face in the selfie. The confidence that the algorithm has in its answer is given as a percentage. Only the highest-confidence emotions are displayed. See the example above.
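Behind the scenes, a face-analysis service such as Amazon Rekognition returns each detected face as a set of attributes with confidence scores. The sketch below shows how a reply like the one described in step 4 might be assembled; the response shape follows Rekognition's documented `DetectFaces` output, but the sample data and the `summarize_faces` helper are illustrative assumptions, not the app's actual code.

```python
# Sketch: turn a Rekognition-style DetectFaces response into reply text.
# The response structure mirrors AWS Rekognition's documented output;
# the sample data and helper function are illustrative, not the app's code.

def summarize_faces(response):
    """Build one line per face: gender, age range, and top emotion."""
    lines = []
    for i, face in enumerate(response["FaceDetails"], start=1):
        gender = face["Gender"]["Value"]
        age = face["AgeRange"]
        # Emotions come back with confidences; keep only the strongest one.
        top = max(face["Emotions"], key=lambda e: e["Confidence"])
        lines.append(
            f"Face {i}: {gender}, age {age['Low']}-{age['High']}, "
            f"{top['Type'].lower()} ({top['Confidence']:.0f}%)"
        )
    return "\n".join(lines)

# Hypothetical response for one face, shaped like Rekognition's output.
sample = {
    "FaceDetails": [
        {
            "Gender": {"Value": "Female", "Confidence": 99.1},
            "AgeRange": {"Low": 25, "High": 35},
            "Emotions": [
                {"Type": "HAPPY", "Confidence": 92.4},
                {"Type": "CALM", "Confidence": 5.1},
            ],
        }
    ]
}

print(summarize_faces(sample))
# Face 1: Female, age 25-35, happy (92%)
```

Keeping only the strongest emotion per face matches the behavior described above: the app displays only the highest-confidence emotions.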

 

Object detection

  1. Take a picture of any scene (office, outdoors, room, etc).
  2. Follow @mwambaanalytics.
  3. Attach the picture and tweet it to @mwambaanalytics, with the hashtag #objectdetect.
  4. The app will attempt to classify different objects in the picture, and respond in a tweet. The confidence that the algorithm has in its answer is given as a percentage.
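For object detection, a labeling service such as Amazon Rekognition returns a list of candidate labels, each with a confidence score. A minimal sketch of how the reply in step 4 might be built, filtering out low-confidence labels; the response shape follows Rekognition's documented `DetectLabels` output, while the threshold value and the `summarize_labels` helper are our own illustrative choices:

```python
# Sketch: filter a Rekognition-style DetectLabels response by confidence.
# The response shape follows AWS Rekognition's documented output; the
# threshold and helper name are illustrative assumptions.

def summarize_labels(response, min_confidence=70.0):
    """Keep labels at or above the threshold, highest confidence first."""
    labels = [
        (label["Name"], label["Confidence"])
        for label in response["Labels"]
        if label["Confidence"] >= min_confidence
    ]
    labels.sort(key=lambda pair: pair[1], reverse=True)
    return ", ".join(f"{name} ({conf:.0f}%)" for name, conf in labels)

# Hypothetical response for an office scene.
sample = {
    "Labels": [
        {"Name": "Desk", "Confidence": 96.2},
        {"Name": "Laptop", "Confidence": 88.7},
        {"Name": "Plant", "Confidence": 54.3},  # below threshold, dropped
    ]
}

print(summarize_labels(sample))
# Desk (96%), Laptop (89%)
```

A confidence cutoff like this is one common way to keep the reply tweet short and avoid listing dubious guesses.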

 

Additional examples

Tech details

- This app uses the Twitter API, Python, and several Amazon Web Services (AWS) offerings.
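The glue between the Twitter API and the analysis services is essentially a hashtag dispatcher: the app reads a mention, checks whether it carries #facedetect or #objectdetect, and routes the attached image accordingly. A minimal sketch of that routing logic, where the tweet structure and handler names are assumptions for illustration (in the real app the handlers would call AWS and post a reply through the Twitter API):

```python
# Sketch: route an incoming mention to the right analysis by hashtag.
# The tweet dict layout and handler functions are illustrative assumptions,
# not the app's actual code.

HASHTAG_HANDLERS = {
    "facedetect": lambda url: f"running face analysis on {url}",
    "objectdetect": lambda url: f"running object detection on {url}",
}

def dispatch(tweet):
    """Pick a handler from the tweet's hashtags; ignore unknown tags."""
    for tag in tweet.get("hashtags", []):
        handler = HASHTAG_HANDLERS.get(tag.lower())
        if handler:
            return handler(tweet["media_url"])
    return None  # no recognized hashtag: the app stays silent

tweet = {"hashtags": ["FaceDetect"], "media_url": "https://example.com/selfie.jpg"}
print(dispatch(tweet))
# running face analysis on https://example.com/selfie.jpg
```

Lower-casing the hashtag before lookup means #FaceDetect, #facedetect, and #FACEDETECT all trigger the same handler.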

- Facial detection is not especially challenging for machine learning, since all faces share the same basic structure (eyes, mouth, nose, etc.). A task like differentiating between cats and dogs is more difficult.

- Emotion detection, by contrast, is nuanced, and the results from this basic app may not always impress. Even a toddler can read facial expressions easily, yet algorithms often struggle: the hard-to-quantify aspects of expression (a half-raised eyebrow, pursed lips, a clenched jaw) make emotion detection genuinely challenging.