Well into her career leading data science teams in artificial intelligence, Beena Ammanath often found herself the only woman at the table.
“As I set up AI teams, there were never enough women,” said Ammanath, who is co-president of the new Alliance for Inclusive Artificial Intelligence (AIAI), launched this month by the Fisher Center for Business Analytics at Haas. “I couldn’t hire them and I couldn’t find them, and it became really obvious that we did not have enough women or underrepresented minorities. I knew that AI wouldn’t be as robust or safe if you didn’t have them involved in this process, where we’re encoding human intelligence and human bias.”
To help solve the problem, Ammanath, a managing director at Deloitte and founder of the nonprofit Humans for AI, joined Gauthier Vasseur, the executive director of the Fisher Center for Business Analytics, to launch the new AIAI initiative at Berkeley Haas.
The alliance is focused on building a more inclusive future for women and underrepresented minorities in AI.
Plans include fundraising for scholarships and educational activities in AI and analytics; programs that promote awareness of AI and inclusion; conferences on AI and analytics; new courses and access to learning within AI research projects; professional training, coaching, and career support; and internships, job placement, and ongoing mentoring from AI experts.
“AI is the future and it has the potential to profoundly transform the work that people will do,” said Assoc. Prof. Zsolt Katona, faculty director for the Fisher Center for Business Analytics, which is part of the Institute for Business Innovation (IBI). “Haas plays a unique role within the Berkeley artificial intelligence ecosystem, with its focus on educating the next generation of business managers and leaders. With the sweeping changes that AI will bring, it’s critical that everybody has the opportunity to weigh in on how AI is used and who uses it, not just one group.”
Associate Adjunct Prof. Thomas Y. Lee, the Fisher Center’s director of data science, notes that Haas’ comparative advantage is not in building better algorithms. By training leaders, Haas will influence how AIs are trained and how they are applied to benefit society. “This is about how you encourage people to get involved, how you think about collecting the data, how you train systems, and how you evaluate impact,” he said. “As this world evolves and matures, there are a lot more ways to be inclusive and build a better society and the idea is that the technology can be used in positive and negative ways. Haas is in the business of preparing leaders to take on the bias.”
The new AIAI initiative was founded last spring after Ammanath and Vasseur met and realized that they shared similar views about AI—computer systems that interpret and learn from data to perform tasks that normally require human intelligence.
Vasseur, who is co-president of the alliance, said people on teams often aren’t asking the right questions, and are susceptible to groupthink.
“This is counterproductive for many companies and closes so many doors—when they are not being diverse in their analytics process, their data collection, and their analysis,” said Vasseur, a data analytics industry veteran and teacher who previously worked at Google and Oracle. “They need to see the big picture and pay attention to details. You won’t get that with a team of only white males.”
A lack of diversity is a well-documented problem at tech companies: a recent study by WIRED magazine and Montreal startup Element AI estimated that only about 12% of leading machine learning researchers are women. That estimate came from tallying the men and women who had contributed work at three top machine learning conferences in 2017.
Hoping for better representation of both women and underrepresented minorities in AI, Celeste Fa’ai’uaso, MBA 20, a mechanical engineer who is co-president of the Haas Data Science Club, said she’s planning to meet with AIAI organizers.
The potential problem with AI, she says, is that people train these algorithms on data without considering whether the data could be biased, whether systemic issues caused the patterns we see today, or whether the data is accurate and representative of many groups. “There’s been a lot of hype and excitement around AI and the thing I worry about is garbage in, garbage out,” she said. “We live in a world today where there are inequities and biases. If you take the data from that world to train machine learning and AI algorithms, we risk reinforcing these inequities even more.”
Ammanath, who was honored as Businesswoman of the Year at last year’s Fisher Center for Business Analytics summit, wrote in a recent LinkedIn blog post that to avoid bias, an artificial intelligence system should ideally have many data sources, from as wide a range of viewpoints as possible.
Examples she cited include a Nikon camera that could detect whether someone blinked at the crucial moment—but did not take into account that not all people around the world have the same shaped eyes—and auditory tests built by Winterlight Labs for neurological diseases like Alzheimer’s, Parkinson’s, and multiple sclerosis that only worked for English speakers of a particular Canadian dialect.
“The problem with AI comes when we knowingly or unknowingly expose the engine to data which only tells one story or does not show the whole picture,” she wrote.