October 24, 2023

Sins of the machine: Fighting AI bias

By Michael Blanding

In a new paper, Haas postdoctoral scholar Merrick Osborne examines how bias occurs in AI, and what can be done about it.


While artificial intelligence can be a powerful tool to help people work more productively and cheaply, it comes with a dark side: Trained on vast repositories of data on the internet, it tends to reflect the racist, sexist, and homophobic biases embedded in its source material. To protect against those biases, creators of AI models must be highly vigilant, says Merrick Osborne, a postdoctoral scholar in racial equity at Haas School of Business.


Postdoc Scholar Merrick Osborne’s new paper explores the dark side of AI

Osborne investigates the origins of the phenomenon, and how to combat it, in a new paper, “The Sins of the Parents Are to Be Laid Upon the Children: Biased Humans, Biased Data, Biased Models,” published in the journal Perspectives on Psychological Science.

“People have flaws and very natural biases that impact how these models are created,” says Osborne, who wrote the paper along with computer scientists Ali Omrani and Morteza Dehghani of the University of Southern California. “We need to think about how their behaviors and psychologies impact the way these really useful tools are constructed.”

Osborne joined Haas earlier this year as the first scholar in a postdoctoral program supporting academic work focused on racial inequities in business. Before coming to Haas, he earned a PhD in business administration last year at the University of Southern California’s Marshall School of Business. In their new paper, he and his co-authors apply lessons from social psychology to examine how bias occurs and what can be done to combat it.

Representation bias

Bias starts with the data that programmers use to train AI systems, says Osborne. That data often reflects stereotypes of marginalized groups, but it can just as often leave those groups out entirely, creating “representation bias” that privileges a white, male, heterosexual worldview by default. “One of the most pernicious biases for computer scientists in terms of the dataset is just how well-represented, or under-represented, different groups of people are,” Osborne says.
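To make the idea concrete, a representation audit can begin as simply as tallying each group’s share of the training examples. The sketch below is illustrative only; the “group” field and the 10% flagging threshold are assumptions, not anything prescribed by the paper.

```python
# Hypothetical sketch: audit how well different groups are represented in a
# training dataset. The "group" key and the 10% floor are illustrative choices.
from collections import Counter

def representation_report(records, group_key="group", floor=0.10):
    """Return each group's count and share, flagging groups below the floor."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {
            "count": n,
            "share": round(n / total, 3),
            "under_represented": n / total < floor,
        }
        for group, n in counts.items()
    }

# Toy data: group "C" makes up only 5% of the records and gets flagged.
data = [{"group": "A"}] * 80 + [{"group": "B"}] * 15 + [{"group": "C"}] * 5
print(representation_report(data))
```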

Adding to problems with the data, AI engineers often rely on annotators, humans who go through the data and label it according to a desired set of categories. “That’s not a dispassionate process,” Osborne says. “Maybe even without knowing, they are applying subjective values to the process.” Without explicitly recognizing the need for fairer representation, for example, annotators may inadvertently leave out certain groups, leading to skewed outputs in the AI model. “It’s really important for organizations to invest in a way to help annotators identify the bias that they and their colleagues are putting in.”
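One way an organization might make that investment is to track how differently individual annotators apply the same label, so that large gaps can be surfaced and discussed. The snippet below is a hypothetical illustration with invented annotator IDs and labels, not a procedure from the paper.

```python
# Hypothetical sketch: compare how often each annotator assigns a given label.
# Annotator IDs, label names, and data are invented for illustration.
from collections import defaultdict

def label_rates_by_annotator(annotations, label="toxic"):
    """annotations: iterable of (annotator_id, assigned_label) pairs."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for annotator, assigned in annotations:
        totals[annotator] += 1
        hits[annotator] += int(assigned == label)
    return {annotator: hits[annotator] / totals[annotator] for annotator in totals}

annotations = [
    ("ann1", "toxic"), ("ann1", "ok"), ("ann1", "ok"),
    ("ann2", "toxic"), ("ann2", "toxic"), ("ann2", "toxic"),
]
# Large gaps between annotators can signal subjective judgments worth reviewing.
print(label_rates_by_annotator(annotations))  # {'ann1': 0.33..., 'ann2': 1.0}
```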

Privileged programmers

Programmers themselves are not immune to implicit bias, he continues. By virtue of their position, the computer engineers constructing AI models tend to be relatively privileged, and the high status they are granted within their organizations can increase their sense of psychological power. “That higher sense of societal and psychological power can reduce their inhibitions and mean they’re less likely to stop and really concentrate on what could be going wrong.”

Osborne believes we’re at a critical fork in the road: We can continue to use these models without examining and addressing their flaws and rely on computer scientists to try to mitigate them on their own. Or we can turn to those with expertise in biases to work collaboratively with programmers on fighting racism, sexism, and all other biases in AI models.

First, says Osborne, it’s important for programmers and those managing them to go through training that makes them aware of their biases, so they can take measures to account for gaps or stereotypes in the data when designing models. “Programmers may not know how to look for it, or to look for it at all,” Osborne says. “There’s a lot that could be done just from simply having discussions within a company or team on how our model could end up hurting people, or helping people.”

AI fairness

Moreover, computer scientists have recently taken measures to combat bias within machine learning systems, establishing a new field of research known as AI fairness. As Osborne and his colleagues describe in their paper, fairness research uses mathematical criteria to evaluate and constrain machine learning systems with respect to sensitive attributes such as gender, ethnicity, sexual orientation, and disability, to make sure that the algorithm behind the model treats different groups equally. Other criteria aim to ensure that individuals are treated fairly within groups and that all groups are fairly represented.
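One widely used group-fairness check of this kind, often called demographic parity, compares the rate of positive decisions a model makes for each group. The sketch below uses invented predictions and group labels purely to illustrate the calculation.

```python
# Illustrative demographic-parity check: compare positive-decision rates across
# groups. The predictions and group labels here are invented toy data.
def selection_rates(predictions, groups):
    """predictions: 0/1 model decisions; groups: group label for each decision."""
    rates = {}
    for group in set(groups):
        members = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(members) / len(members)
    return rates

preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates = selection_rates(preds, grps)
print(rates)                                      # {'A': 0.75, 'B': 0.25}
print(max(rates.values()) - min(rates.values()))  # parity gap; 0 = equal rates
```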

Organizations can help improve their models by making sure that programmers are aware of this latest research and by sponsoring courses that introduce them to fairness tools such as IBM’s AI Fairness 360 Toolkit, Google’s What-If Tool, Microsoft’s Fairlearn, or Aequitas. Because each model is different, organizations should also work with experts in algorithmic fairness to understand how bias can manifest in their specific applications. “We aren’t born knowing how to create a fair machine-learning model,” Osborne says. “It’s knowledge that we must acquire.”
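As one illustration of what these toolkits offer, Fairlearn provides a MetricFrame object that reports metrics broken out by group. The example below is a minimal sketch with invented data; exact behavior may vary across library versions.

```python
# Minimal sketch using Fairlearn (pip install fairlearn). The labels,
# predictions, and group memberships below are invented toy data.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=groups,
)
print(mf.by_group)      # per-group accuracy and selection rate
print(mf.difference())  # largest between-group gap for each metric
```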

More broadly, he says, companies can encourage a culture of awareness around bias in AI, such that individual employees who notice biased outcomes can feel supported in reporting them to their supervisors. Managers, in turn, can go back to programmers to give them feedback to tweak their models or design different queries that can root out biased outcomes.

“Models aren’t perfect, and until the programming catches up and creates better ones, this is something we are all going to be dealing with as AI becomes more prevalent,” Osborne says. “Organizational leaders play a really important role in improving the fairness of a model’s output.”