
What Face-Reading Computer Software Can Tell Us About Our Emotions


Is it possible for computer software to understand the human face? After 10 years of research, Fernando de la Torre and his team of computer scientists, engineers, and psychologists at Carnegie Mellon University’s Human Sensing Laboratory (HSL) believe they can finally say "yes."

This spring, the HSL released a piece of software called IntraFace to the public. Anyone with an iPhone or Android device can use this tool to characterize facial features through IntraFace-powered mobile and desktop applications. For several years, the software has been tested in a wide variety of settings, including studies of autism, depression, and driver distraction.

“Facial expression provides cues about emotion, intention, alertness, pain and personality,” de la Torre tells mental_floss. “We wanted to make artificial intelligence and algorithm-trained computers learn to understand expression and emotion. That was the ultimate goal."

HOW TO READ A FACE


Scientists have been working on automated facial recognition technology since as early as 1964, when Woody Bledsoe, Helen Chan Wolf, and Charles Bisson first started programming a computer to identify specific coordinates of facial features taken from photographs. According to the International Journal of Computer Science and Information [PDF], Bledsoe said the unique difficulties involved with facial recognition included a "great variability in head rotation and tilt, lighting intensity and angle, facial expression, aging, etc."

The team at Carnegie Mellon University’s Human Sensing Laboratory made their breakthrough roughly two to three years ago, when the lab first achieved reliable detection of the key points of the face.

"If we don’t know here the mouth or eyes are, we can’t understand anything about expression," de le Torre says. In order to create IntraFace, the HSL’s team of computer scientists had to develop algorithms to interpret changes in facial expressions in real-time while compensating for deviations in angles, positions, and image quality.

That's why, he says, their work "is a breakthrough—a big revelation in facial image analysis. The first step in detection is the image: locating the eyes, nose and mouth. The second step is classification: identifying whether the person is smiling, frowning, male, female, etc. How does the computer know that? We learn from examples. All that we do to understand faces is from examples. We use image samples, label them, and train the computers through algorithms.”
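IntraFace’s source code isn’t distributed, but the two-step flow de la Torre describes can be sketched with off-the-shelf tools. In the minimal sketch below, OpenCV’s stock face detector stands in for step one, and classify_expression is a hypothetical placeholder for step two; none of this is the team’s actual code.

```python
# A rough sketch of the detect-then-classify flow, using OpenCV's bundled
# Haar-cascade face detector as a stand-in for IntraFace's (unreleased) one.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_expression(face_pixels):
    """Hypothetical placeholder: a real system runs a trained classifier here."""
    return "smiling"  # e.g. one of {smiling, frowning, neutral, ...}

def analyze(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Step 1 (detection): locate each face in the image.
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Step 2 (classification): label each detected face.
    return [classify_expression(gray[y:y + h, x:x + w]) for (x, y, w, h) in faces]
```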

Wen-Shang Chu, an IntraFace developer and computer scientist, designs the algorithms that interpret these expressions. “From our demo alone, we developed face tracking, where we localized facial landmarks automatically,” Chu tells mental_floss. “We taught the computers to read the faces through 49 defined points on the faces.”
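Chu’s landmark step can be approximated with public libraries as well. The sketch below uses dlib’s freely available 68-point shape predictor as a stand-in; IntraFace’s own 49-point scheme is not public, so the point count and model file here are assumptions, not the team’s.

```python
# Landmark localization in the spirit of Chu's description, via dlib's
# 68-point model (IntraFace's 49-point tracker is not publicly available).
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(image):
    points = []
    for rect in detector(image):  # image: a numpy uint8 array
        shape = predictor(image, rect)
        points.append([(shape.part(i).x, shape.part(i).y)
                       for i in range(shape.num_parts)])
    return points  # one list of (x, y) landmark coordinates per face
```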

Equipped with the ability to identify facial features, the program was trained to interpret them using videos of facial expressions that were manually labeled by experts, collected from data sets available through CMU and several other universities. Thousands of images and hundreds of subjects (a mix of people of Asian, Caucasian, and African descent) made up the data set, with more added over time. The researchers tested and refined the software’s abilities on these images, which could be processed at 30 images per second.
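The training recipe de la Torre outlines (collect examples, label them, fit a model) looks roughly like this in scikit-learn. The feature matrix X and the expert labels y are hypothetical stand-ins for the labeled data sets, not the lab’s actual pipeline.

```python
# Sketch of the training step: expert-labeled examples in, classifier out.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_expression_classifier(X, y):
    # X: per-image landmark coordinates, e.g. shape (n_images, 49 * 2)
    # y: expert labels such as ["smile", "frown", "neutral", ...]
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X, y)
    return model
```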

“We learned that registration and facial landmark detection is an important step for facial expression analysis,” de la Torre says. “Also, we learned that it is better to train with more images of different people rather than many images of the same subject to improve generalization.”
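That lesson has a standard expression in machine-learning practice: split the data by subject rather than by image, so the model is always scored on people it has never seen. A minimal sketch, reusing the hypothetical X and y above plus per-image subject IDs:

```python
# Evaluate on held-out *people*, not held-out *images*: GroupKFold keeps
# every subject entirely in either the training fold or the test fold.
from sklearn.model_selection import GroupKFold, cross_val_score

def subject_wise_score(model, X, y, subject_ids):
    cv = GroupKFold(n_splits=5)  # no subject appears in both train and test
    return cross_val_score(model, X, y, groups=subject_ids, cv=cv).mean()
```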

EMOTIONAL INVESTMENT

“Evolutionarily, we [humans] recognize faces and emotions on other human beings,” de la Torre says. Between the 1950s and 1990s, psychologist Paul Ekman found a set of expressions used by people all over the world. The subtle motions and placements that define facial expression were divided into the upper and lower parts of the face and associated with major muscle groups called "facial action units." Ekman developed a taxonomy for facial expression called the Facial Action Coding System (FACS), and it is often used by psychologists today.

IntraFace's algorithms are taught to use Ekman's system as well as data from newer research conducted by Du Shichuan and Aleix Martinez on compound emotions, combinations such as the happy surprise we feel at a surprise birthday party, as opposed to single, internally felt emotions. The pair identified 17 compound expressions [PDF], and IntraFace takes these into account.
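To make the coding concrete, here is an illustration of how emotions map onto FACS action units and how a compound expression combines them. The AU sets below are commonly cited EMFACS-style examples, not IntraFace’s internal tables, and exact sets vary across sources.

```python
# Illustrative emotion-to-action-unit mappings (AU numbers follow Ekman's
# FACS; these are textbook-style examples, and exact sets vary by source).
BASIC = {
    "happiness": {6, 12},        # AU6 cheek raiser + AU12 lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers, upper lid raiser, jaw drop
}

# Per Du and Martinez, a compound expression largely combines the action
# units of its component emotions (dropping AUs that physically conflict).
happily_surprised = BASIC["happiness"] | BASIC["surprise"]
```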

WHAT FACIAL RECOGNITION IS GOOD FOR

“With algorithms we can build emotionally aware machines that will be instrumental in many domains, from healthcare to autonomous driving,” de la Torre says, and a variety of companies and organizations are interested in using facial recognition technology.

For example, an automobile company working with IntraFace (the lab declined to identify it) wants to incorporate the technology into the front panel screens of cars to extract information about the driver’s expression. IntraFace can monitor whether the driver is distracted and detect fatigue; an intelligent car could compensate by alerting the driver or taking control when the driver is distracted.
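Neither the lab nor the carmaker has said how such a system would score fatigue, but one standard measure in drowsiness research is PERCLOS, the fraction of recent frames in which the eyes are mostly closed. A hedged sketch, with the per-frame openness score assumed to come from a landmark tracker and alert_driver standing in for the car’s real alerting hook:

```python
# PERCLOS sketch: flag drowsiness when the eyes have been mostly closed for
# too large a share of the last minute of video. All thresholds illustrative.
from collections import deque

WINDOW = 30 * 60     # one minute of frames at 30 fps
CLOSED_BELOW = 0.2   # an eye less than 20% open counts as closed
recent = deque(maxlen=WINDOW)

def alert_driver():
    print("Drowsiness warning: take a break.")  # stand-in for a real alert

def update(openness):
    recent.append(openness < CLOSED_BELOW)
    perclos = sum(recent) / len(recent)
    if perclos > 0.15:  # alerting threshold, chosen here for illustration
        alert_driver()
```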

The developers see potential commercial uses for their technology, such as market research analysis. For example, a company would be able to monitor focus groups in a noninvasive way for previously undetectable cues such as subtle smiles, attentiveness, and facial microexpressions.

But it's IntraFace's potential in the world of medicine that has the researchers most excited.

THE DOCTOR (AND HER COMPUTER) WILL SEE YOU NOW

In collaboration with the Physical Medicine Group in New York City, the HSL has a proposal under review with the National Institutes of Health to use IntraFace to measure the intensity and dynamics of pain in patients.

IntraFace was also used in a clinical trial for the treatment of depression, where it helped researchers better understand the role of emotion in the disorder. So far, IntraFace’s interpretation of facial features can account for 30 to 40 percent of the variance in the Hamilton Depression Rating Scale, the industry standard for measuring the severity of depression.
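“Accounts for 30 to 40 percent of the variance” is a statement about R², the fit of a regression from facial features onto Hamilton scores. A sketch of that calculation, with features and hamilton_scores as hypothetical stand-ins for the trial data:

```python
# Variance explained (R^2): regress Hamilton scores on facial features.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def variance_explained(features, hamilton_scores):
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, hamilton_scores, test_size=0.3, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    return r2_score(y_te, model.predict(X_te))  # ~0.3-0.4 per the study
```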

In addition, the researchers in the clinical trial were able to uncover previously unknown information about depression. As expected, people with depression showed diminished positive moods. But IntraFace also revealed that depressed patients exhibited increased expressions of anger, disgust, and contempt, yet decreased expressions of sadness; people with less severe depression expressed less anger and disgust, but more sadness. This study was published [PDF] in 2014 in the journal Image and Vision Computing.

“Sadness is about affiliation; expressing sadness is a way of asking others for help,” Jeffrey Cohn, a professor of psychology and psychiatry at the University of Pittsburgh and an adjunct professor in CMU’s Robotics Institute, explains to mental_floss. “That, for me, is even more exciting than being able to detect depression or severity; we’re using [IntraFace] to really learn something about the disorder.”

IntraFace is also being used to develop and test treatments for post-traumatic stress disorder. In fall 2015, its facial feature detection technology was incorporated into an iOS application called Autism & Beyond using ResearchKit, an open source framework that turns an iOS app into a tool for medical research.

Autism & Beyond was created by a team of researchers and software developers from Duke University. “We have developed and patented technology that includes the [IntraFace] design on video stimuli to create certain emotions and expressions in children, and then correlate those emotions with developmental disorders,” Guillermo Sapiro, a professor of electrical and computer engineering at Duke University, tells mental_floss. The app can potentially be used by parents to screen young children for autism and mental health challenges, such as anxiety or tantrums.

The HSL team hopes the public release of the program will spark even more uses. De la Torre is convinced that others will build on his team’s product. (The source code, however, is not distributed.)

“We want to bring this technology to the people,” de la Torre says. “We have limited resources in our studies and students. We want to bring it out there and see what kind of interesting applications people will find with IntraFace.”

'Lime Disease' Could Give You a Nasty Rash This Summer

A cold Corona or virgin margarita is best enjoyed by the pool, but watch where you’re squeezing those limes. As Slate illustrates in a new video, there’s a lesser-known “lime disease,” and it can give you a nasty skin rash if you’re not careful.

When lime juice comes into contact with your skin and is then exposed to UV rays, it can cause a chemical reaction that results in phytophotodermatitis. It looks a little like a poison ivy reaction or sun poisoning, and its symptoms include redness, blistering, and inflammation. It’s the same reaction caused by the corrosive sap of giant hogweed, an invasive weed that’s spreading throughout the U.S.

"Lime disease" may sound random, but it’s a lot more common than you might think. Dermatologist Barry D. Goldman tells Slate he sees cases of the skin condition almost daily in the summer. Some people have even reported receiving second-degree burns as a result of the citric acid from lime juice. According to the Mayo Clinic, the chemical that causes phytophotodermatitis can also be found in wild parsnip, wild dill, wild parsley, buttercups, and other citrus fruits.

To play it safe, keep your limes confined to the great indoors or wash your hands with soap after handling the fruit. You can learn more about phytophotodermatitis by checking out Slate’s video below.

[h/t Slate]

Why Eating From a Smaller Plate Might Not Be an Effective Dieting Trick 

It might be time to rewrite the diet books. Israeli psychologists have cast doubt on the widespread belief that eating from smaller plates helps you control food portions and feel fuller, Scientific American reports.

Past studies have shown that this mind trick, called the Delboeuf illusion, influences the amount of food that people eat. In one 2012 study, participants who were given larger bowls ended up eating more soup overall than those given smaller bowls.

However, researchers from Ben-Gurion University of the Negev in Israel concluded in a study published in the journal Appetite that the effectiveness of the illusion depends on how empty your stomach is. The team studied two groups of participants: one that ate three hours before the experiment, and another that ate one hour prior. When participants were shown images of pizzas on serving trays of varying sizes, the group that hadn’t eaten in several hours was more accurate in assessing the size of the pizzas. In other words, the hungrier they were, the less likely they were to be fooled by the different trays.

However, both groups were equally tricked by the illusion when they were asked to estimate the size of non-food objects, such as black circles inside of white circles and hubcaps within tires. The researchers say this demonstrates that motivational factors, like appetite, affect how we perceive food. The findings also dovetail with the results of an earlier study, which concluded that overweight people are less likely to fall for the illusion than people of normal weight.

So go ahead and get a large plate every now and then. At the very least, it may save you a second trip to the buffet table.

[h/t Scientific American]
