Can racism, sexism, and other biases be quantified?

Berkeley Haas Assoc. Prof. Ming Hsu built a model to quantify stereotypes.

When a Starbucks employee recently called the police on two black men who asked for a bathroom key but hadn’t yet ordered anything, it seemed a clear-cut case of racism leading directly to unfair treatment. Many outraged white customers publicly contrasted it with their years of hassle-free Starbucks pit stops.

But from a scientific perspective, making a direct connection between people’s biases and the degree to which they treat others differently is tricky. There are thousands of ways people stereotype different social groups—whether it’s assuming an Asian student is good at math or thinking an Irish colleague would make a good drinking buddy—and with so many variables, it’s incredibly challenging to trace how someone is treated to any one particular characteristic.

“There is a tendency for people to think of stereotypes, biases, and their effects as inherently subjective. Depending on where one is standing, the responses can range from ‘this is obvious’ to ‘don’t be a snowflake,’” said Berkeley Haas Assoc. Prof. Ming Hsu. “What we found is that these subjective beliefs can be quantified and studied in ways that we take for granted in other scientific disciplines.”

How do stereotypes influence behavior?

A new paper published today in the Proceedings of the National Academy of Sciences cuts to the heart of messy social interactions with a computational model to quantify and predict unequal treatment based on perceptions of warmth and competence. Hsu and post-doctoral researcher Adrianna C. Jenkins—now an assistant professor at the University of Pennsylvania—drew on social psychology and behavioral economics in a series of lab experiments and analyses of field work. (The paper was co-written by Berkeley researcher Pierre Karashchuk and Lusha Zhu of Peking University.)

“There’s been lots of work showing that people have stereotypes and that they treat members of different social groups differently,” said Jenkins, the paper’s lead author. “But there’s quite a bit we still don’t know about how stereotypes influence people’s behavior.”

It’s more than an academic issue: University admissions officers, for example, have long struggled with how to fairly consider an applicant’s race, ethnicity, or other qualities that may have presented obstacles to success. How much weight should be given to the obstacles faced by African Americans compared with those faced by Central American immigrants or women?

Eye-opening findings

While these are much larger questions, Hsu said the paper’s contribution is a better way to quantify and compare different types of discrimination across social groups—a common challenge facing applied researchers.

“What was so eye-opening is that we found that variations in how people are perceived translated quantitatively into differences in how they are treated,” said Hsu, who holds a dual appointment with UC Berkeley’s Helen Wills Neuroscience Institute and the Neuroeconomics Lab. “This was as true in laboratory studies where subjects decided how to divide a few dollars as it was in the real world where employers decided whom to interview for a job.”

The model offers a way to establish a direct connection between widely held stereotypes and entrenched societal inequities. Kellie McElhaney, founding executive director of the Center for Equity, Gender and Leadership (EGAL), said this is the kind of fundamental research that informs the mission of the center, which aims to “develop equity fluent leaders who ignite and accelerate change.”

“This research continues to advance critical knowledge and solutions around the significant and negative impact of biases, and in particular, the consequences in the business world,” she said.

Rather than analyzing whether the stereotypes were justified, the researchers took stereotypes as a starting point and looked at how they translated into behavior among more than 1,200 participants across five studies. The first study used the classic “Dictator Game,” in which a player is given $10 and asked to decide how much of it to give to a counterpart. The researchers found that people gave widely disparate amounts based on just one piece of information about the recipient (e.g., occupation, ethnicity, or nationality). For example, people on average gave $5.10 to recipients described as “homeless,” while those described as “lawyer” got a measly $1.70—even less than an “addict,” who got $1.90.

To look at how stereotypes about the groups drove people’s choices to pay out differing amounts, the researchers drew on an established social psychology framework that categorizes all stereotypes along two dimensions: those that relate to a person’s warmth (or how nice they are seen to be), and those that relate to a person’s competence (or how intelligent they are seen to be). These ratings, they found, could be used to accurately predict how much money people distributed to different groups. For example, “Irish” people were perceived as warmer but slightly less competent than “British,” and received slightly more money on average.
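
To make this concrete, here is a minimal sketch, in Python, of how ratings on those two dimensions can be turned into a quantitative prediction. The warmth and competence scores below are invented for illustration, and only the “addict” and “lawyer” dollar amounts come from the averages reported above; the paper’s actual model and data are more elaborate.

```python
import numpy as np

# Hypothetical warmth/competence ratings (1-5 scale) for a few group labels.
# Only the "addict" ($1.90) and "lawyer" ($1.70) allocations are the averages
# reported in the article; every other number here is invented for illustration.
groups     = ["elderly", "Irish", "British", "addict", "lawyer"]
warmth     = np.array([4.5, 4.2, 3.8, 1.2, 2.5])
competence = np.array([2.2, 3.0, 3.3, 2.0, 4.2])
dollars    = np.array([5.60, 4.60, 3.90, 1.90, 1.70])  # amount given out of $10

# Fit: dollars ~ intercept + b_w * warmth + b_c * competence
X = np.column_stack([np.ones_like(warmth), warmth, competence])
(intercept, b_w, b_c), *_ = np.linalg.lstsq(X, dollars, rcond=None)

print(f"warmth coefficient:     {b_w:+.2f} dollars per rating point")
print(f"competence coefficient: {b_c:+.2f} dollars per rating point")
print(f"predicted for 'Irish':   ${intercept + b_w*4.2 + b_c*3.0:.2f}")
print(f"predicted for 'British': ${intercept + b_w*3.8 + b_c*3.3:.2f}")
```

In this toy dataset the fitted warmth coefficient comes out positive and the competence coefficient negative, which is one way to express the “warmer by X units, Y dollars more” relationship Hsu describes below.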

“We found that people don’t just see certain groups as warmer or nicer, but if you’re warmer by X unit, you get Y dollars more,” Hsu said.

Specifically, the researchers found that disparate treatment results not just from how people perceive others, but from how they see others relative to themselves. In allocating money to a partner viewed as very warm, people were reluctant to offer them less than half of the pot. Yet with a partner viewed as more competent, they were less willing to end up with a smaller share of the money than the other person. For example, people were OK with having less than an “elderly” counterpart, but not less than a “lawyer.”
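
One way to picture that asymmetry is a stylized utility function in the spirit of inequity-aversion models from behavioral economics: guilt about coming out ahead is scaled by the partner’s perceived warmth, and distaste for falling behind is scaled by their perceived competence. This is a sketch of the intuition, not the model estimated in the paper, and all ratings and constants below are hypothetical.

```python
def social_utility(keep, give, warmth, competence,
                   guilt_scale=0.2, envy_scale=0.2):
    """Stylized utility of a (keep, give) split of a $10 pot.

    Guilt about coming out ahead grows with the partner's perceived warmth;
    distaste for falling behind grows with their perceived competence.
    The functional form and all constants are invented for illustration.
    """
    guilt = guilt_scale * warmth * max(keep - give, 0)    # penalty for having more
    envy = envy_scale * competence * max(give - keep, 0)  # penalty for having less
    return keep - guilt - envy

# Hypothetical 1-5 ratings: "elderly" rated warm but less competent, "lawyer" the reverse.
partners = {"elderly": dict(warmth=4.5, competence=2.0),
            "lawyer": dict(warmth=2.0, competence=4.5)}

for name, ratings in partners.items():
    for keep, give in [(7, 3), (5, 5), (3, 7)]:
        u = social_utility(keep, give, **ratings)
        print(f"{name:8s} keep ${keep} / give ${give}: utility {u:5.2f}")
```

In this toy version, the even split scores best against the warm “elderly” partner and ending up behind them is only moderately costly, while against the “lawyer” the sketch prefers keeping the larger share and rates falling behind as by far the worst outcome—the same qualitative pattern the researchers report.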

Predicting job callbacks

It’s one thing to predict how people behave in carefully controlled laboratory experiments, but what about in the messy real world? To see whether their findings generalized beyond the lab, Hsu and colleagues tested whether their model could predict treatment disparities in the context of two high-profile studies of discrimination. The first was a Canadian labor market study that found a huge variation in job callbacks based on the perceived race, gender, and ethnicity of the names on resumes. Hsu and colleagues found that the perceived warmth and competence of the applicants—the stereotype based solely on their names—could predict the likelihood that an applicant received a callback.

They tried it again with data from a U.S. study on how professors responded to mentorship requests from students with different ethnic names and found the same results.
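
The structure of that test can be sketched in a few lines: estimate the warmth-and-competence relationship on part of the data, then check whether it predicts outcomes for groups it has not seen. The name labels below are placeholders and every number is made up; the actual field datasets are not reproduced here.

```python
import numpy as np

# Placeholder name groups with hypothetical warmth/competence ratings (1-5 scale)
# and made-up callback rates; none of these figures come from the field studies.
names         = ["name_A", "name_B", "name_C", "name_D", "name_E", "name_F"]
warmth        = np.array([3.6, 4.1, 3.0, 2.9, 3.2, 4.0])
competence    = np.array([3.8, 3.7, 2.8, 2.9, 4.0, 3.1])
callback_rate = np.array([0.15, 0.16, 0.08, 0.07, 0.14, 0.12])

# Fit callback_rate ~ warmth + competence on the first four name groups,
# then predict callback rates for the two held-out groups.
X = np.column_stack([np.ones(len(names)), warmth, competence])
train, test = slice(0, 4), slice(4, None)
coefs, *_ = np.linalg.lstsq(X[train], callback_rate[train], rcond=None)

for name, pred, seen in zip(names[4:], X[test] @ coefs, callback_rate[test]):
    print(f"{name}: predicted callback rate {pred:.2f}, observed {seen:.2f}")
```

Real data are of course far noisier; the point is simply that the same two stereotype dimensions measured in the lab can be carried over to score outcomes in the field.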

“The way the human mind structures social information has specific, systemic, and powerful effects on how people value what happens to others,” the researchers wrote. “Social stereotypes are so powerful that it’s possible to predict treatment disparities based on just these two dimensions (warmth and competence).”

Future applications

Hsu said the model’s predictive power could be useful in a wide range of applications, such as identifying patterns of discrimination across large populations or building an algorithm that can detect and rate racism or sexism across the internet—work the authors are now pursuing.

“Our hope is that this scientific approach can provide a more rational, factual basis for discussions and policies on some of the most emotionally fraught topics in today’s society,” Hsu said.

 
