December 10, 2024

Brain scans help researchers untangle the truth about lie detection

Featured Researcher: Ming Hsu, Associate Professor of Business Administration, Marketing

By Katia Savchuk

A vintage polygraph machine (Photo by Mezzalira Davide/AdobeStock)

Ever since people began trying to use machines to detect lies, they’ve met with frustration. When a new technique emerges—polygraphs, brain wave tests, magnetic resonance imaging—scientists assume better technology will solve the problem. Yet inevitably, hype gives way to skepticism, as each new tool struggles to reliably tell when someone is lying.

Because of the challenge of linking mendacity to unique biological markers, research into the science of deception has made little progress over the past century. But a new study published in the Proceedings of the National Academy of Sciences from a cross-disciplinary team—including lead author Sangil “Arthur” Lee, postdoctoral researcher at UC Berkeley’s Helen Wills Neuroscience Institute; Berkeley Haas professor Ming Hsu; and UCSF neurology professor Andrew Kayser—offers hope that not all is lost.

Using brain scans and the latest machine learning techniques, the researchers built a model that could predict with relatively high accuracy when participants in their study were lying. Yet their initial model had a critical shortcoming: the brain signatures of those telling falsehoods were the same as those of people who were simply being selfish. The finding provided the first empirical confirmation of a longstanding suspicion that what lie detectors pick up may not be falsehoods themselves.

“People have always worried about the possibility that we are not detecting the lie, but something merely associated with the lie,” Hsu said. “In the past, research in this area has largely sidestepped this important issue. We decided to finally confront this question directly.”

But the team didn’t want to stop at proving how hard lie detection was. After fine-tuning their algorithm, they were able to show that it is possible to remove confounding signals and bring the field closer to a more scientifically valid lie detector.

The experiments

During the experiments, researchers put subjects in a functional magnetic resonance imaging (fMRI) scanner, which measures brain activity by tracking changes in blood flow, and had them play two games. In the first, participants saw a screen with two different dollar amounts. By pressing a button, they could tell another person, who couldn’t see the numbers, which reward to give them and which to keep. In making their request, they could choose from messages that told the truth or a lie (e.g., “Option A will earn you more money than B.”). In the second version of the game, participants selected from messages that didn’t include lies but allowed them to be either selfish or altruistic (e.g., “I prefer that you choose option A.”).

After training an algorithm on brain scans of people being deceitful or honest, the researchers produced a model that could predict from an image of someone’s brain whether they were lying with 79% accuracy. Yet using images from the second game, the model could also predict whether someone was being selfish at the same rate, meaning it was failing to separate lying from selfishness.
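For readers who want a concrete picture, here is a minimal sketch in Python of the logic behind that cross-task check. It uses scikit-learn and randomly generated placeholder data, not the study’s actual searchlight pipeline: train a classifier on trials from the deception game, then test whether it also “predicts” selfishness in the second game.

```python
# Minimal sketch of the cross-task confound check (placeholder data,
# not the study's pipeline or results).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical fMRI features: rows are trials, columns are voxel activations.
X_deception = rng.normal(size=(200, 500))    # game 1: truth-vs.-lie trials
y_deception = rng.integers(0, 2, size=200)   # 1 = lied, 0 = told the truth
X_selfish = rng.normal(size=(200, 500))      # game 2: selfish-vs.-altruistic
y_selfish = rng.integers(0, 2, size=200)     # 1 = selfish, 0 = altruistic

clf = LogisticRegression(max_iter=1000)

# Within-task accuracy on the deception game (the study reports ~79%).
acc_lie = cross_val_score(clf, X_deception, y_deception, cv=5).mean()

# Cross-task check: a classifier that had isolated lying itself should
# score near chance (50%) on selfishness.
clf.fit(X_deception, y_deception)
acc_selfish = clf.score(X_selfish, y_selfish)

print(f"deception accuracy: {acc_lie:.2f}")
print(f"selfishness 'accuracy' (confound check): {acc_selfish:.2f}")
```

On random placeholder data both numbers hover near chance; the study’s striking result was that, on real scans, the confound check came out just as high as the within-task accuracy.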

“The algorithm is, in a way, confusing liars with jerks,” Hsu said. “So, the suspicion people have had for a hundred years that lie detection tests could be committing this type of error is right, but until now there was no clear evidence.”

Figure 4 from the published paper. (A) Searchlight analysis for regions that predict deceptive choices: a searchlight with a radius of two voxels was run across the entire brain to identify regions that significantly predict deceptive choices, as assessed by leave-one-out cross-validation; regions with predictive performance significantly above 50% at the whole-brain correction level (permutation-tested TFCE, P < 0.05) are shown. (B) Cross-task generalization performance for the regions identified in (A); the ratio of the t-test statistics is shown. Several regions with high predictive power in (A) also show high generalization in (B). (C) Dual-goal tuning applied at each searchlight to eliminate cross-task generalization and thereby identify regions that significantly predict deceptive but not selfish choices (P < 0.05).

The many flavors of falsehoods

One reason it’s so hard to isolate signals of deception is that lying is a complex process that isn’t housed in a single part of the brain. Even if scientists can identify the most relevant regions, it’s challenging to separate activity linked to lying from that reflecting anxiety, self-interest, or other factors. Falsehoods themselves come in many flavors, from white lies to omissions to hedges, which could all look different in the brain. Plus, brain activity may vary across individuals depending on how often they lie or whether they know they’re lying. 

Determined to move the field forward, the research team decided to try improving their model so that it could still predict lying to some degree without confusing deception with selfishness. Simply removing brain signals that showed up with selfishness but not with lying didn’t work. “So much of the brain activity is similar, you start killing the predictive power for deception as well,” Lee said.

A technique that did prove effective involved using an algorithm to give more weight to signals across the brain that occurred only with lying, less to those that appeared with both lying and selfishness, and even less to those that popped up only with selfishness. As a result, the model no longer predicted selfishness but could still correctly identify lying 70% of the time.
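A rough sketch of that reweighting idea, again with placeholder data and a deliberately crude per-voxel scoring rule rather than the paper’s actual dual-goal tuning procedure, might look like this:

```python
# Illustrative reweighting sketch (hypothetical scoring rule and data;
# not the paper's dual-goal tuning method).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Placeholder fMRI features, as in the earlier sketch.
X_deception = rng.normal(size=(200, 500))
y_deception = rng.integers(0, 2, size=200)   # 1 = lied
X_selfish = rng.normal(size=(200, 500))
y_selfish = rng.integers(0, 2, size=200)     # 1 = selfish

def signal_strength(X, y):
    """Crude per-voxel score: absolute difference in mean activation
    between the two trial types."""
    return np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))

lie_signal = signal_strength(X_deception, y_deception)
selfish_signal = signal_strength(X_selfish, y_selfish)

# Voxels carrying mostly lying signal get weights near 1, voxels shared
# by both behaviors sit near 0.5, and selfishness-only voxels approach 0.
weights = lie_signal / (lie_signal + selfish_signal + 1e-9)

clf = LogisticRegression(max_iter=1000)

# Goal: keep deception prediction above chance (the study reports ~70%)
# while driving the selfishness "prediction" back toward 50%.
acc_lie = cross_val_score(clf, X_deception * weights, y_deception, cv=5).mean()
clf.fit(X_deception * weights, y_deception)
acc_selfish = clf.score(X_selfish * weights, y_selfish)

print(f"deception accuracy after reweighting: {acc_lie:.2f}")
print(f"selfishness check after reweighting:  {acc_selfish:.2f}")
```

The key design choice is the graded weighting: rather than zeroing out every signal shared with selfishness, which, as Lee notes above, destroys the predictive power for deception as well, shared signals are merely downweighted.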

“We are still some ways from primetime,” Kayser said. “But the fact that we can rescue the predictor to be better than chance shows that there is something distinctive about lies compared to selfish decisions. There is a deception signal, potentially.”

The technique the researchers developed can be used to fine-tune lie-predictor algorithms to account for other confounding factors. “This is a big conceptual breakthrough,” Hsu said. “You’re going to get more and more refined predictors and understand the biological basis of deception in much more granular form.”

New applications

The method can be applied in other fields, Hsu noted, such as potentially combining brain scans with machine learning algorithms to identify distinctive brain activity associated with certain psychiatric illnesses. It could also help researchers gauge reactions to marketing campaigns by going beyond focus groups to track signals in the brain.

The technique is a vast leap from the original polygraph, invented in Berkeley, Calif., in 1921. Police officer and physiologist John Larson pioneered the device, based on a systolic blood pressure test, as part of police chief August Vollmer’s crusade to make interrogations more scientific. Scientist Leonarde Keeler, who worked for the Berkeley Police Department while in high school, later improved the test, which eventually measured other signals, such as heart rate and breathing.

Despite their progress, the researchers agree that a general-purpose lie detector based on their method is, at best, many years away. And in the end, that holy grail might still prove elusive. “Unlike before, when researchers speculated about the presence of confounding factors, we can now continuously make improvements,” Lee said. “Of course, it is always possible that if you remove too many confounds, deception disappears: it’s just a conglomerate of all these things.”

Either way, the research team has taken an important step toward answering that question. “After decades of going in circles, we think we have finally identified at least a path forward,” Hsu said.

Read the full study:

“Distinguishing deception from its confounds by improving the validity of fMRI-based neural prediction”

By Sangil Lee, Runxuan Niu, Lusha Zhu, Andrew S. Kayser, and Ming Hsu

Proceedings of the National Academy of Sciences, December 2024