What Face-Reading Computer Software Can Tell Us About Our Emotions

Is it possible for computer software to understand the human face? After 10 years of research, Fernando de la Torre and his team of computer scientists, engineers, and psychologists at Carnegie Mellon University’s Human Sensing Laboratory (HSL) believe they can finally say "yes."

This spring, the HSL released a piece of software called IntraFace to the public. Anyone with an iPhone or Android device can use the tool to characterize facial features through IntraFace-powered mobile and desktop applications. For several years, the software has been tested in a wide variety of applications, from autism and depression research to detecting driver distraction.

“Facial expression provides cues about emotion, intention, alertness, pain and personality,” de la Torre tells mental_floss. “We wanted to make artificial intelligence and algorithm-trained computers learn to understand expression and emotion. That was the ultimate goal."

HOW TO READ A FACE

Scientists have been trying to create automated facial recognition technology since as early as 1964, when Woody Bledsoe, Helen Chan Wolf, and Charles Bisson first started programming a computer to identify specific coordinates of facial features taken from photographs. According to the International Journal of Computer Science and Information [PDF], Bledsoe said the unique difficulties involved with facial recognition included a "great variability in head rotation and tilt, lighting intensity and angle, facial expression, aging, etc."

The team at Carnegie Mellon University’s Human Sensing Laboratory made their breakthrough roughly two to three years ago, when the lab first worked out how to reliably detect the key points of the face.

"If we don’t know where the mouth or eyes are, we can’t understand anything about expression," de la Torre says. In order to create IntraFace, the HSL’s team of computer scientists had to develop algorithms that interpret changes in facial expression in real time while compensating for deviations in angle, position, and image quality.

That's why, he says, their work "is a breakthrough—a big revelation in facial image analysis. The first step in detection is the image: locating the eyes, nose and mouth. The second step is classification: identifying whether the person is smiling, frowning, male, female, etc. How does the computer know that? We learn from examples. All that we do to understand faces is from examples. We use image samples, label them, and train the computers through algorithms.”
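
The workflow de la Torre describes, locating the face first and then learning to classify it from labeled examples, can be sketched in a few lines of Python. The snippet below is only an illustration on invented data: the feature vectors, labels, and classifier are placeholders, not IntraFace's actual algorithms.

```python
# Hypothetical sketch of "learn from examples": feature vectors extracted from
# face images are paired with human-supplied labels, and a classifier is trained.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Step 1 (stand-in): assume each face image has already been reduced to a
# feature vector by the detection step (eyes, nose, and mouth located).
features = rng.normal(size=(200, 20))

# Step 2: labels supplied by human coders (0 = not smiling, 1 = smiling).
labels = rng.integers(0, 2, size=200)

# Train on the labeled examples, then query the model on a new face.
classifier = LogisticRegression(max_iter=1000).fit(features, labels)
new_face = rng.normal(size=(1, 20))
print("smiling?", bool(classifier.predict(new_face)[0]))
```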

Wen-Shang Chu, a computer scientist on the IntraFace team, develops the algorithms for interpreting these expressions. “From our demo alone, we developed face tracking, where we localized facial landmarks automatically,” Chu tells mental_floss. “We taught the computers to read the faces through 49 defined points on the faces.”
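
To make those "49 defined points" concrete, here is a toy illustration in Python. The point indices and the mouth-width measurement are invented for the example; the article does not describe IntraFace's actual point layout.

```python
import numpy as np

# A tracked face represented as 49 (x, y) landmark coordinates, in pixels.
landmarks = np.zeros((49, 2))
landmarks[31] = [210.0, 340.0]   # hypothetical left mouth corner
landmarks[37] = [268.0, 338.0]   # hypothetical right mouth corner

# One simple geometric cue derived from the points: the distance between the
# mouth corners, which tends to increase when a person smiles.
mouth_width = np.linalg.norm(landmarks[37] - landmarks[31])
print(f"mouth-corner distance: {mouth_width:.1f} px")
```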

Equipped with the ability to identify facial features, the program was trained to interpret them using videos of facial expressions that had been manually labeled by experts, collected from data sets available through CMU and several other universities. Thousands of images and hundreds of subjects—a mix of people of Asian, Caucasian, and African descent—were part of the data set, with more added over time. The researchers tested and refined the software’s abilities on these images, which video recordings supply at a rate of 30 per second.
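
As a rough sketch of how labeled video becomes training images, the snippet below reads a clip frame by frame with OpenCV and attaches a human-supplied label to each frame. The file name, label, and training-set helper are hypothetical; only the frame-reading calls are real OpenCV.

```python
import cv2  # OpenCV, used here only as a generic video reader


def labeled_frames(video_path, label):
    """Yield (timestamp_seconds, frame, label) for every frame of a coded clip."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back to 30 fps if unknown
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield index / fps, frame, label
        index += 1
    cap.release()


# Hypothetical usage: every frame of a clip coded as "smile" becomes one example.
# for timestamp, frame, label in labeled_frames("subject01_smile.avi", "smile"):
#     training_set.append((frame, label))
```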

“We learned that registration and facial landmark detection is an important step for facial expression analysis,” de la Torre says. “Also, we learned that it is better to train with more images of different people rather than many images of the same subject to improve generalization.”
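
De la Torre's point about generalization can be made concrete with a subject-wise split: always evaluate the model on people it never saw during training. The sketch below uses scikit-learn's GroupKFold on synthetic placeholder data; the features and labels merely stand in for real expression data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 98))            # e.g. flattened landmark features
y = rng.integers(0, 2, size=300)          # e.g. smiling vs. not smiling
subjects = rng.integers(0, 30, size=300)  # which person each frame came from

# Each fold holds out entire subjects, so test faces are always unfamiliar.
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=subjects):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    print("held-out-subject accuracy:", round(model.score(X[test_idx], y[test_idx]), 2))
```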

EMOTIONAL INVESTMENT

“Evolutionarily, we [humans] recognize faces and emotions on other human beings,” de la Torre says. Between the 1950s and 1990s, psychologist Paul Ekman identified a set of expressions used by people all over the world. The subtle motions that define facial expression were divided between the upper and lower parts of the face and tied to movements of the underlying facial muscles, known as "facial action units." Ekman developed a taxonomy for facial expression called the Facial Action Coding System (FACS), and it is often used by psychologists today.
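
For a heavily simplified taste of FACS, the snippet below lists a handful of well-known action units and two commonly cited prototype combinations. It is an illustration of the idea, not the full coding system.

```python
# A small, partial sample of FACS action units (the full system is much larger).
ACTION_UNITS = {
    1: "inner brow raiser",
    4: "brow lowerer",
    6: "cheek raiser",
    12: "lip corner puller",
    15: "lip corner depressor",
}

# Commonly cited prototype combinations, given here as an approximation.
PROTOTYPES = {
    "happiness": [6, 12],
    "sadness": [1, 4, 15],
}

for emotion, aus in PROTOTYPES.items():
    described = ", ".join(ACTION_UNITS[au] for au in aus)
    print(f"{emotion}: AUs {aus} ({described})")
```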

IntraFace's algorithms are taught to use Ekman's system as well as data from newer research conducted by Du Shichuan and Aleix Martinez on compound emotions (blends such as the happy surprise we feel at a surprise birthday party, as opposed to single, internally felt emotions). They identified 17 compound expressions [PDF], and IntraFace takes these into account.
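
A few of those compound categories can be written as blends of basic emotions. The three examples below are drawn from Du and Martinez's taxonomy purely for illustration; the full set contains 17.

```python
BASIC = {"happiness", "surprise", "sadness", "anger", "fear", "disgust"}

# Illustrative subset of the 17 compound expressions.
COMPOUND = {
    "happily surprised": ("happiness", "surprise"),
    "sadly angry": ("sadness", "anger"),
    "angrily surprised": ("anger", "surprise"),
}

for name, (first, second) in COMPOUND.items():
    assert first in BASIC and second in BASIC
    print(f"{name} = {first} + {second}")
```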

WHAT FACIAL RECOGNITION IS GOOD FOR

“With algorithms we can build emotionally aware machines that will be instrumental in many domains, from healthcare to autonomous driving,” de la Torre says, and a variety of companies and organizations are interested in using facial recognition technology.

For example, an automobile company that IntraFace is working with (and which the researchers declined to identify) wants to incorporate IntraFace technology into the front panel screens of cars to extract information about the driver’s expression. IntraFace can monitor whether the driver is distracted and detect fatigue; an intelligent car could compensate by alerting the driver, or even taking control, when attention lapses.
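
One hypothetical way such an in-car system could act on per-frame output from the face tracker: keep a rolling window of attention estimates and alert when they stay low. The scoring, window size, and threshold below are invented for illustration and are not part of IntraFace.

```python
from collections import deque

WINDOW = 30        # frames to consider (about one second at 30 frames per second)
THRESHOLD = 0.4    # below this average, treat the driver as distracted

recent = deque(maxlen=WINDOW)


def update(attention_score: float) -> bool:
    """Add one per-frame attention estimate; return True if the car should alert."""
    recent.append(attention_score)
    return len(recent) == WINDOW and sum(recent) / WINDOW < THRESHOLD


# A full second of low attention scores triggers the alert.
alert = False
for _ in range(WINDOW):
    alert = update(0.2)
print("alert driver:", alert)
```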

The developers see potential commercial uses for their technology, such as market research analysis. For example, a company would be able to monitor focus groups in a noninvasive way, picking up previously hard-to-measure signals such as subtle smiles, attentiveness, and facial microexpressions.

But it's IntraFace's potential in the world of medicine that has the researchers most excited.

THE DOCTOR (AND HER COMPUTER) WILL SEE YOU NOW

In collaboration with the Physical Medicine Group in New York City, the HSL has a proposal under review with the National Institutes of Health to use IntraFace to measure the intensity and dynamics of pain in patients.

IntraFace was also used in a clinical trial for the treatment of depression, where it helped researchers better understand the role of emotion in the disorder. So far, IntraFace’s interpretation of facial features can account for 30 to 40 percent of the variance in the Hamilton Depression Rating Scale, the industry standard for measuring depression severity.
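
For readers unfamiliar with "variance accounted for": it is the R-squared statistic, the share of the spread in Hamilton scores that the facial measurements can predict. The numbers below are invented solely to show the computation; they are not data from the study.

```python
import numpy as np

# Invented stand-ins: observed Hamilton scores and scores predicted from
# facial measurements, chosen only to demonstrate the statistic.
actual    = np.array([22., 14., 30.,  9., 18., 25., 12., 20.])
predicted = np.array([16., 20., 24., 14., 22., 20., 17., 15.])

ss_residual = np.sum((actual - predicted) ** 2)      # variation left unexplained
ss_total    = np.sum((actual - actual.mean()) ** 2)  # total variation in scores
r_squared   = 1 - ss_residual / ss_total
print(f"share of variance explained: {r_squared:.2f}")  # about 0.34 here
```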

In addition, the clinical trial allowed the researchers to uncover something new about depression. As expected, people with depression showed diminished positive moods. But IntraFace helped researchers discover that depressed patients exhibited increased expressions of anger, disgust, and contempt but decreased expressions of sadness, while people with less severe depression expressed less anger and disgust but more sadness. This study was published [PDF] in 2014 in the journal Image and Vision Computing.

“Sadness is about affiliation; expressing sadness is a way of asking others for help,” Jeffrey Cohn, a professor of psychology and psychiatry at the University of Pittsburgh and an adjunct professor in CMU’s Robotics Institute, explains to mental_floss. “That, for me, is even more exciting than being able to detect depression or severity; we’re using [IntraFace] to really learn something about the disorder.”

IntraFace is also being used to develop and test treatments for post-traumatic stress disorder. And in fall 2015, IntraFace’s facial feature detection technology was incorporated into an iOS application called Autism & Beyond, built with ResearchKit, an open source framework that turns an iOS app into a tool for medical research.

Autism & Beyond was created by a team of researchers and software developers from Duke University. “We have developed and patented technology that includes the [IntraFace] design on video stimuli to create certain emotions and expressions in children, and then correlate those emotions with developmental disorders,” Guillermo Sapiro, a professor of electrical and computer engineering at Duke University, tells mental_floss. The app can potentially be used by parents to screen young children for autism and mental health challenges, such as anxiety or tantrums.

The HSL team hopes the public release of the program will spark even more uses. De la Torre is convinced that others will build on his team’s product. (The source code, however, is not distributed.)

“We want to bring this technology to the people,” de la Torre says. “We have limited resources in our studies and students. We want to bring it out there and see what kind of interesting applications people will find with IntraFace.”