Your face reveals how you feel–even to a computer

My last post dealt with lying—not an unimportant topic given that around 60% of people lie at least once during a 10-minute conversation. It is therefore perhaps concerning that people are by and large quite poor at detecting deception, including law-enforcement personnel such as members of the CIA, FBI, NSA, and DEA (to name but a few acronyms). One exception involves members of the U.S. Secret Service, who perform better than other professionals and are the only ones to perform above chance.

But how do they do it?

What does it mean to detect lying?

It turns out that when people lie, their faces tend to reveal “micro expressions” that are absent in people who tell the truth. Those micro expressions last only a brief fraction of a second, and they occur when a person either deliberately or unconsciously conceals a feeling.

Analysis of facial expressions can be highly diagnostic in a number of arenas beyond detecting deception and lies: For example, facial expression analysis can identify which depressed patients are at greatest risk for reattempting suicide; it constitutes an index of physical pain; and it distinguishes different types of adolescent behavior problems, to name but a few.

The most informative studies have used anatomically based coding of facial expression rather than the more commonly used method of subjective judgment. A recent article by Jeffrey Girard and colleagues in Behavior Research Methods surveyed some of the findings for anatomically based measurement and concluded that “These findings have offered glimpses into critical areas of human behavior that were not possible using existing methods [such as subjective judgment] of assessment, often generating considerable research excitement and media attention.”

But Girard and colleagues then went on to note that, however striking those findings, there has been remarkably little follow-up work in these areas using anatomically based measures. The reasons are straightforward: It takes 6 months of training for a researcher to learn how to code all the combinations of movements that might make up a facial expression. For instance, a Duchenne smile is indicated by simultaneous contraction of the zygomatic major and orbicularis oculi pars lateralis muscles. And even after all that training, coding a single minute of video for all facial movements can take over an hour. Not exactly the type of methodology that permits rapid research progress.
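
To make “combinations of movements” concrete: in the Facial Action Coding System, a Duchenne smile is conventionally coded as the co-occurrence of action unit 6 (cheek raiser) and action unit 12 (lip corner puller). The sketch below is purely illustrative (it is not the authors’ software) and simply flags frames in which both action units have been coded as present.

```python
# Illustrative only: flag Duchenne-smile frames in a sequence of FACS codes,
# where each frame is represented by the set of action units coded as present.
# AU6 = cheek raiser, AU12 = lip corner puller (the conventional Duchenne pair).

from typing import List, Set

def duchenne_frames(frame_aus: List[Set[int]]) -> List[int]:
    """Return indices of frames where AU6 and AU12 co-occur."""
    return [i for i, aus in enumerate(frame_aus) if {6, 12} <= aus]

# Toy example: only the third frame shows both action units together.
coded = [{12}, {6, 25}, {6, 12}, set()]
print(duchenne_frames(coded))  # -> [2]
```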

This was the departure point for the research by Girard and colleagues.

Their contribution was to examine the accuracy of an automated computerized system for the coding of facial expressions. Their investigation used a database of over 400,000 video frames taken from 80 people of a variety of ethnic backgrounds.

For the experiment, participants were randomly assigned to one of three conditions: one involving the consumption of alcoholic beverages, one involving a placebo drink, and a non-alcoholic control. Within each condition, participants were assigned to groups of 3 at random, and they then spent some time getting acquainted with each other (none had known each other before) before performing some cognitive tasks.

Emphasis in this study was on the unstructured time during which people got to know each other. Participants were video recorded, and their facial expressions were then coded in two ways: first by two expert human coders (whose agreement with each other was sufficiently high to ensure that they were picking up a meaningful signal), and then by the computer.

The accuracy of the computer program was assessed by comparing its classifications to those of the human coders in two ways: First, at the level of the session overall—how comparable were the overall patterns of behavior measured throughout the entire period of observation? Second, at the “micro” level of each frame—how much did the computer agree with the coders at any given time (i.e., frame of the video)?

The former measure (“session-level”) is critical when average rates of actions are of interest, and the latter measure (“frame-level”) is relevant when one wants to know when particular actions occur in the ongoing stream of behavior.
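
The distinction is easy to see in code. The sketch below uses made-up data and plain Pearson correlations rather than the formal reliability statistics reported in the paper: at the session level the two coders’ per-session rates are compared, whereas at the frame level every individual frame judgment is compared.

```python
# Toy illustration of session-level vs. frame-level agreement (not the
# authors' actual analysis). Data are fabricated: each session has its own
# base rate of an action unit, and the "machine" agrees with the human
# coder on roughly 80% of frames.

import numpy as np

rng = np.random.default_rng(0)
n_sessions, n_frames = 10, 300

rates = rng.uniform(0.1, 0.9, size=n_sessions)
human = (rng.random((n_sessions, n_frames)) < rates[:, None]).astype(int)
agree = rng.random((n_sessions, n_frames)) < 0.8
machine = np.where(agree, human, 1 - human)

# Session-level: aggregate first, then compare (one rate per session).
session_r = np.corrcoef(human.mean(axis=1), machine.mean(axis=1))[0, 1]

# Frame-level: compare every individual frame judgment.
frame_r = np.corrcoef(human.ravel(), machine.ravel())[0, 1]

print(f"session-level r = {session_r:.2f}, frame-level r = {frame_r:.2f}")
```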

Session-level reliability was very strong, with a correlation of .89 between the automated and manual coding (where 1 would indicate perfect agreement). Frame-level coding is far more demanding, and reliability was accordingly lower, with a correlation of .60, which is still comparable to what is often achieved between manual coders.

Girard and colleagues conclude that their work has opened up new avenues for research that hitherto had been barred by pragmatic considerations: An experiment that might take hundreds of person hours to code often simply cannot be done, whereas the same experiment will become routine if it can be analyzed in a day or two. They also suggest that computers may be able to measure aspects of behavior that humans struggle to quantify, such as the velocity of facial movements or an individual’s overall level of facial expressivity.
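
As a hypothetical illustration of such measures (the landmark-based metrics below are my own stand-ins, not anything from the paper), movement velocity and overall expressivity could be approximated from tracked facial-landmark coordinates:

```python
# Hypothetical stand-in measures, assuming tracked facial landmarks are
# available as an array of shape (n_frames, n_landmarks, 2).

import numpy as np

def movement_velocity(landmarks: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """Mean landmark displacement per second for each frame-to-frame transition."""
    step = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)  # (n_frames-1, n_landmarks)
    return step.mean(axis=1) * fps

def overall_expressivity(landmarks: np.ndarray, fps: float = 30.0) -> float:
    """One crude summary of expressivity: average velocity over the recording."""
    return float(movement_velocity(landmarks, fps).mean())

# Toy example: 90 frames of 68 landmarks drifting with small random jitter.
traj = np.cumsum(np.random.default_rng(1).normal(0, 0.1, size=(90, 68, 2)), axis=0)
print(overall_expressivity(traj))
```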

We can therefore look forward to quite a bit more knowledge about people’s facial expressions, and about how our emotions can be detected by researchers.

There is, however, another side to this opportunity: If psychological researchers can automatically code emotions for the betterment of humanity, then so can the people who are affiliated with those acronyms from above—and a few others, such as FSB or MI6 or whatever. So anyone who thinks cognitive research has little impact on the world might want to rethink.

