Haas researchers delve into the science of why we make the decisions we do.
Nearly every minute, we’re faced with choices. Should we focus on an upcoming project or plan for the future? Take a chance on a risky investment or play it safe? Go to the deli or try that new Thai place for lunch? For the most part, we feel in control of those decisions, exercising free will to choose based on what we want or need. Economists long assumed as much, expecting human beings to act rationally, choosing what’s in their best interests based on the available information and their capacity to process it. For decades, economists preached that humans might not always choose the best option available, but they will choose one that’s “good enough” for them in the moment.
More recently, however, the burgeoning field of behavioral decision research has been calling those assumptions into question. Using a combination of economics, neuroscience, psychology, and machine learning, decision scientists have shown that we humans aren’t very rational at all when it comes to the choices we make. Errors in judgment, emotional responses, impulsiveness, and lack of perspective all skew our decision-making abilities, frequently causing us to choose poorly even when better options are available.
“Anyone who has studied the economics of decision-making will have encountered the basic concept that individuals should choose the option with the highest expected value,” says Professor Don Moore, associate dean for academic affairs. Moore, who has become one of the leaders in the field for his work on overconfidence, has just released his second book, Decision Leadership (see sidebar, p. 18). “In real life, however, it can be complicated to calculate expected value, so we end up relying on our intuition, which is imperfect.”
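The textbook model Moore describes can be sketched in a few lines of Python. The gambles below are illustrative assumptions, not examples from the article:

```python
# Textbook-rational choice: pick the option with the highest expected
# value, i.e., the probability-weighted average of the payoffs.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs summing to 1."""
    return sum(p * payoff for p, payoff in outcomes)

safe = [(1.0, 100)]              # a guaranteed $100
risky = [(0.5, 250), (0.5, 0)]   # a coin flip for $250 or nothing

print(expected_value(safe))   # 100.0
print(expected_value(risky))  # 125.0
# The rational model says take the gamble. Moore's point is that real
# decisions rarely come with probabilities and payoffs this tidy.
```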
Despite our lack of rationality, humans still tend to act in predictable ways that can be studied scientifically. Haas researchers are using tools from a variety of disciplines to better understand the predictably bad choices that people make—and what might be done to push them toward better outcomes. In many cases, they’ve found, there is a “right” answer that will produce a better end result, if people understand how to recognize it. These are critical skills for managers: Understanding the latest decision research can help them not only make better decisions at work but also set up environments that help employees and customers make better decisions.
The Truth About Consequences
Say you’re a doctor with two patients, but you only have resources to operate on one. Patient A has an 85% chance of surviving, but if you operate, you can increase it to 90%. Patient B has only a 20% chance of surviving, but if you operate, you’ll increase it to 30%. Whom would you choose? “If you care about saving lives, you should operate on Patient B, because you have the most chance of increasing their survival,” says Associate Professor Ellen Evers. Yet, when she and Haas PhD students Stephen Baum and William Ryan posed this question in the lab, participants overwhelmingly chose Patient A.
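A quick back-of-the-envelope calculation, using the survival probabilities from the scenario above, shows why Patient B is the expected-value answer:

```python
# Survival probabilities from the scenario: (without surgery, with surgery)
patients = {
    "Patient A": (0.85, 0.90),
    "Patient B": (0.20, 0.30),
}

for name, (baseline, with_surgery) in patients.items():
    gain = with_surgery - baseline
    print(f"{name}: surgery improves survival odds by {gain:.0%}")
# Patient A: surgery improves survival odds by 5%
# Patient B: surgery improves survival odds by 10%
```

In expected-lives-saved terms, the scarce operation does twice as much good for Patient B, even though Patient A is far more likely to walk out of the hospital.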
The reason is that people are much more apt to focus on the negative consequences of their actions rather than the positive. “If you don’t operate on Patient B and they die, you say, ‘Hey, I couldn’t have done much about that anyway,’” says Evers. “But if I don’t operate on Patient A and they die, then you think, ‘Oh man, I could have prevented their death.’” Such emotional responses are frequently undervalued by economists when it comes to decision-making, and yet they can have huge effects on the choices people make, especially when evaluating risk. “Most economic models don’t see those kinds of emotional ‘negative-values’ as true inputs,” Evers says, “but as human beings, we experience those emotions.”
In gambling experiments, Evers finds that people frequently pay too much for insurance to cover their losses, beyond what the probability of losing warrants. They tend to behave the same in risky hypothetical situations—for example, when deciding whether to buy backup tickets to an indoor theme park in case they get rained out of an outdoor park—paying the same whether there’s an 80%, 50%, or 20% chance of rain. “They’re so worried about feeling regret if something bad happens, they don’t consider whether the chance of something bad happening is minuscule,” says Evers. As a result, people frequently overinvest in a backup plan when there’s little chance they’ll need it, but they also underinvest in Plan B when chances are high that they will. Similarly, we overinvest in projects likely to be successful but don’t invest enough in projects that are long shots. “The more important decisions become, the worse we are at accurately considering their chances of success because we care too much,” Evers says.
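The theme-park example maps onto a simple expected-loss calculation. Assuming, purely for illustration, that a rained-out day costs $50, a risk-neutral buyer’s ceiling for a backup ticket should scale directly with the rain probability:

```python
# Hypothetical stake: what a rained-out day at the outdoor park costs.
rained_out_loss = 50.0

for rain_chance in (0.80, 0.50, 0.20):
    fair_price = rain_chance * rained_out_loss  # expected loss
    print(f"{rain_chance:.0%} rain -> backup ticket worth ${fair_price:.2f}")
# 80% rain -> backup ticket worth $40.00
# 50% rain -> backup ticket worth $25.00
# 20% rain -> backup ticket worth $10.00
# Evers finds people instead pay about the same regardless of the odds,
# driven by anticipated regret rather than by probability.
```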
Identifying regret as the cause of poor decision-making can aid leaders in helping people make better choices, she says. Have people focus on external causes of negative consequences rather than on themselves. “If people are less likely to say, ‘I am at fault for doing this,’ then their decisions become more optimal,” Evers says.
The Value of Memory
Picking your favorite fast-food restaurant seems like an easy enough task. But when Associate Professor Ming Hsu asked people to do just that, 30% of respondents picked McDonald’s. Yet half of those people changed their selection to a different favorite fast-food chain when they were later given a list to pick from. While it seems strange that people forget their favorite brand, Hsu found the same thing happened when he asked people about their favorite fruit, salad dressing, and other categories.
“According to the rational economic model, if you didn’t buy something, it must be because you didn’t like it,” says Hsu, the William Halford Jr. Family Chair in Marketing. “We found that it’s possible people don’t buy things because they forgot about it.” Hsu’s research combines economics with neuroscience, scanning the brains of study participants using functional magnetic resonance imaging (fMRI) to see what’s going on when they make decisions. His lab found that when people made open-ended choices, they activated a part of the brain associated with memory, but when they chose from a list, that part of the brain remained dormant.
Interestingly, Hsu’s lab didn’t see the same result for running shoes, where people chose Nike for both open-ended and multiple-choice questions. “People chose McDonald’s because they couldn’t think of anything else, whereas with Nike, people really do like Nike,” he says. Such tools can help companies better understand the value of their brands in the marketplace. In the future, Hsu plans to look at how those choices change over time. “If you’re McDonald’s or another category leader, you may be benefitting from associations that were built up 20 years ago. But if you’re not putting any brand value in the bank, then ten years from now, you may be dead. It’s important how much you’re willing to pay for a brand, but it’s also important how much it sticks in your mind.”
For decades, psychologists have been aware of a phenomenon called “anchoring.” In numeric judgments (e.g., What will Amazon’s stock price be in a year?), people often “anchor” on a starting value (e.g., today’s stock price) and tend not to adjust far enough in their final answer. Good negotiators use anchoring by offering a very high or very low opening bid to influence the outcome.
Professor Clayton Critcher, the Joe Shoong Chair of Business, has shown that people are not only anchored by starting numbers but are also influenced by other focal values, “attractors,” that seem to draw judgments toward them.
In recent research, he found that round numbers served as attractors when people predicted, for example, airfare increases. “If airfare from L.A. to New York is $360 but has been rising, then a round number like $400 serves as a natural focal point, an ‘attractor,’” says Critcher. Asked where airfare was likely to go in the coming days, study participants estimated a relatively big upward jump. When Critcher changed the current airfare to $380, however, participants still chose numbers close to $400—forecasting a much smaller increase. “The attractor ends up shaping people’s subjective sense of what is a big or small possible change,” he says.
The phenomenon could have many implications in business. In other studies, Critcher examined the different ways investment firms construct graphs that illustrate how, say, mutual fund values have evolved. Making different incidental numbers salient on these graphs had predictable consequences for how potential retail investors thought the funds would perform.
“You are able to nudge people into interpreting trends as more or less significant depending on how you present the information,” Critcher says.
The idea can also be applied to effect social good. For example, if health officials want people to take a rise in COVID cases seriously, they could depict the upward trajectory on a chart whose nearest y-axis label sits well above the current case count. This could lead people to predict that cases will continue to increase, perhaps encouraging more precautions.
The Power of Perception
When Juliana Schroeder is searching for a new research topic, she often looks at the world around her. “Every paper starts with a puzzle,” says Schroeder, an associate professor and the Harold Furst Chair in Management Philosophy & Values. “I’m particularly interested in social inference and what people get wrong in their social judgments.” One recent study, for example, started with the observation that people often appreciate constructive criticism from others but are often reluctant to give it themselves. “It could range from being told you have a stain on your shirt to more consequential feedback between spouses,” she says. “Most of us want good feedback in our lives and aren’t getting enough of it.”
In a series of hypothetical and actual scenarios, she found that people consistently said they wanted constructive criticism when they were in the position of receiver but wouldn’t offer it when in the position of giver. Drilling down into the causes of that paradox, Schroeder found it was only partially due to anxiety about how the feedback would be perceived. In fact, the biggest impediment was a failure to realize just how much the other person desired it. “When we intervened to cause people to think about a time when they wanted feedback, it was enough to trigger them to realize, ‘Okay, maybe I should give feedback to others as well.’”
For another group of studies, Schroeder has looked at the power of rituals in the workplace—finding that simple actions, such as how managers conduct meetings, become imbued with importance over time. “With almost every activity, you can add small physical features, making it more rigid and formal, and people will start to add meaning to it,” she says. At the same time, people tend to resist change to rituals, ostracizing those who don’t follow established formalities.
Schroeder recommends that managers think about how they construct rituals to ensure they reflect the values of the company. “It could be as simple as starting a meeting by having everyone share something they did in their personal lives, which says something quite different than asking everyone to say something about what they did at work.” But they should also think hard before changing established rituals in the workplace. “You always have the new boss who comes in and wants to change everything, and I would be careful about that,” she says. “Conveying your good intentions helps a little, but once people get used to rituals, they don’t want them changed.”
Decision science encompasses a wide range of inquiries: how customers choose products, how we insure ourselves against risk, how we offer criticism to a co-worker, and more. Yet despite this variety, the research comes down to the same basic premise: Human beings may not be rational, but our irrationality itself is predictable.
By better understanding the misperceptions and emotions that routinely lie behind people’s decision-making processes, managers can anticipate some of the common pitfalls that lead to bad decisions and negative consequences. They can learn to recognize the biases that creep into their own judgments and the judgments of others, and combat them with processes that rely on facts and probabilities rather than faulty intuition or simple heuristics.
Good leaders can even use decision science to their advantage by setting up an environment to help people choose more wisely. Whether that means using an attractor to better frame a problem, instituting rituals to bolster company culture, or presenting “nudges” to encourage people to act in their best interest, the predictable way humans respond to choices can be harnessed as a force for positive decision-making.
By more fully understanding how and why people decide the way they do, we can all learn to make better decisions in the end.