
True Lies

A new detection model more reliably flags falsehoods

Featured Researcher

Ming Hsu

Associate Professor of Business Administration, Marketing

By Katia Savchuk

Illustration by Chris Gash

Illustration of a lie detector. The readout shows the profile of a person with a long nose, indicating falsehoods.

Ever since a Berkeley researcher developed the polygraph in 1921, each successive technique—from measuring perspiration to brain scans—has failed to reliably detect lying.

But a new study published in the Proceedings of the National Academy of Sciences from a cross-disciplinary team including Associate Professor Ming Hsu offers a novel approach that not only reveals why detection has been unreliable but also promises better accuracy. 

Using brain scans and the latest machine learning techniques, the researchers built a model that predicts with relatively high accuracy when people are lying. Yet the researchers discovered something else: The brain signatures of people telling falsehoods were the same as those of people who were simply being selfish, confirming for the first time that what lie detectors pick up may not be falsehoods themselves.

“People have always worried about the possibility that we are not detecting the lie, but something merely associated with the lie,” Hsu says. “In the past, research in this area has largely sidestepped this important issue. We decided to finally confront this question directly.”

After fine-tuning their algorithm, the team was able to show that it is possible to remove confounding signals and bring the field closer to a more scientifically valid lie detector. 

A variety of falsehoods

Lying is a complex process that isn’t housed in a single part of the brain, which makes it challenging to separate activity linked to lying from anxiety, self-interest, or other factors. Falsehoods themselves come in many flavors, from white lies to omissions to hedges, which could all look different in the brain. Plus, brain activity may vary across individuals depending on how often they lie or whether they know they’re lying.  

“The suspicion people have had for a hundred years that lie detection tests could be committing this type of error is right, but until now there was no clear evidence,” Hsu says. 

Reweighting the algorithm to emphasize signals across the brain that occurred only with lying produced a model that no longer predicted selfishness yet could still correctly identify lying 70% of the time.

The technique the researchers developed could be used to fine-tune lie-predictor algorithms to account for other confounding factors. “This is a big conceptual breakthrough,” Hsu says. “You’re going to get more and more refined predictors and understand the biological basis of deception in much more granular form.”
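The core idea of removing a confound from a predictor can be shown with a toy sketch. This is not the study's actual pipeline; the simulated data, the single "signal" feature, and the least-squares residualization step are all illustrative assumptions standing in for the team's machine learning approach:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical setup: a single brain "signal" driven by both
# lying and selfishness, so a naive detector confuses the two.
lying = rng.integers(0, 2, n)
selfish = rng.integers(0, 2, n)
signal = 1.5 * lying + 1.5 * selfish + rng.normal(0, 1, n)

# Residualize: subtract the component of the signal that is
# predictable from selfishness (intercept + selfishness column).
X = np.column_stack([np.ones(n), selfish]).astype(float)
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
residual = signal - X @ beta

def corr(a, b):
    """Pearson correlation between two 1-D arrays."""
    return np.corrcoef(a, b)[0, 1]

# The residual no longer tracks selfishness (correlation ~0)
# but still carries information about lying.
print(round(abs(corr(residual, selfish)), 3))  # → 0.0
print(round(corr(residual, lying), 2))         # well above zero
```

Least-squares residuals are, by construction, orthogonal to the columns of the design matrix, which is why the selfishness correlation vanishes exactly while the lying-related variance survives; the same logic extends to removing other confounding factors from richer predictors.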

Beyond lie detection

The method can be applied in other fields, Hsu notes, such as potentially using brain scans combined with machine learning algorithms to identify distinctive brain activity associated with certain psychiatric illnesses. It could also help researchers gauge reactions to marketing campaigns by going beyond focus groups to track signals in the brain. 

The technique is a vast leap from the original polygraph, invented by a UC Berkeley alum in 1921. Police officer and eventual psychiatrist John Augustus Larson, PhD 20 (physiology), pioneered the device that continuously measured blood pressure and pulse as part of Berkeley Police Chief August Vollmer’s crusade to make interrogations more scientific. Leonarde Keeler, who worked for the Berkeley Police Department in high school, later made the test more portable and reliable, and the device was sold to the FBI. Over the years, measures of respiration, involuntary eye movements, and eventually brain wave tests have all been used for lie detection.

Despite their progress, Hsu and fellow researchers agree that a general-purpose lie detector based on their method is, at best, many years away. And in the end, that holy grail might still prove elusive. 

Either way, the research team has taken an important step in improving lie detection. “After decades of going in circles, we think we have finally identified at least a path forward,” Hsu says. 
