Is it ethical? New undergrad class trains students to think critically about artificial intelligence

Berkeley Haas undergraduate students Hunter Esqueda (left) and Sohan Dhanesh (right) are enrolled in Genevieve Smith’s Responsible AI Innovation & Management class. Photo: Noah Berger

 

“Classified” is an occasional series spotlighting some of the more powerful lessons being taught in classrooms around Haas.

On a recent Monday afternoon, Sohan Dhanesh, BS 24, joined a team of students to consider whether startup Moneytree is using machine learning ethically to determine creditworthiness among its customers.

After reading the case, Dhanesh, one of 54 undergraduates enrolled in a new Berkeley Haas course called Responsible AI Innovation & Management, said he was concerned about Moneytree’s unlimited access to users’ phone data and questioned whether customers even know what data the company taps to inform its credit-scoring algorithm. Accountability is also an issue, he said, since Silicon Valley-based Moneytree’s customers live in India and Africa.

“Credit is a huge thing, and whether it’s given to a person or not has a huge impact on their life,” Dhanesh said. “If this credit card [algorithm] is biased against me, it will affect my quality of life.”

Dhanesh, who came into the class opposed to guardrails for AI companies, says he’s surprised by how much his opinions about regulation have changed. He isn’t playing devil’s advocate, he said; the shift came from the eye-opening data, cases, and readings provided by Lecturer Genevieve Smith.

A contentious debate

Smith, who is also the founding co-director of the Responsible & Equitable AI Initiative at the Berkeley AI Research Lab and former associate director of the Berkeley Haas Center for Equity, Gender, & Leadership, created the course with an aim to teach students both sides of the AI debate.

Lecturer Genevieve Smith says the goal of her class is to train aspiring leaders to understand, think critically about, and implement strategies for responsible AI innovation and management. Photo: Noah Berger

Her goal is to train aspiring leaders to think critically about artificial intelligence and implement strategies for responsible AI innovation and management. “While AI can carry immense opportunities, it also poses immense risks to both society and business linked to pervasive issues of bias and discrimination, data privacy violations, and more,” Smith said. “Given the current state of the AI landscape and its expected global growth, profit potential, and impact, it is imperative that aspiring business leaders understand responsible AI innovation and management.”


During the semester, Smith covers the business and economic potential of AI to boost productivity and efficiency. But she also explores its immense potential for harm, such as the risk of embedding inequality, infringing on human rights, amplifying misinformation, eroding transparency, and reshaping the future of work and the climate.

Smith said she expects all of her students will interact with AI as they launch careers, particularly in entrepreneurship and tech. To that end, the class prepares them to articulate what “responsible AI” means and understand and define ethical AI principles, design, and management approaches. 

Learning through mini-cases

Today, Smith kicked off class with a review of the day’s AI headlines, showing an interview with OpenAI CTO Mira Murati, who was asked where the company got the training data for Sora, OpenAI’s new generative AI model that creates realistic video from text. Murati contended that the company used publicly available data to train Sora but didn’t provide details in the interview. Smith asked the students what they thought about her answer, noting the “huge issue” of a lack of transparency around training data, as well as the copyright and consent implications.

Throughout the semester, students will develop a responsible AI strategy for a real or fictitious company. Photo: Noah Berger

Afterward, Smith introduced the topic of “AI for good” before the students split into groups to act as responsible AI advisors to three startups described in mini cases: Moneytree, HealthNow, and MyWeather. They worked to answer Smith’s questions: “What concerns do you have? What questions would you ask? And what recommendations might you provide?” The teams explored these questions across five core responsible AI principles, including privacy, fairness, and accountability.

Julianna De Paula, BS 24, whose team was assigned the Moneytree case, asked whether the company had adequately addressed the potential for bias when approving customers for credit (about 60% of loans in East Africa and 70% of loans in India go to men, the case noted), and whether the app’s users give clear consent for the use of their data when they download it.
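The kind of fairness check De Paula’s question points to can be made concrete with a few lines of code. The sketch below is purely illustrative and is not drawn from the case or the course; the group labels, the decision data, and the approval_rate_by_group helper are hypothetical. It computes one simple responsible-AI metric: the gap in approval rates across demographic groups, sometimes called the demographic parity difference.

```python
# Illustrative sketch only: hypothetical data and helper, not Moneytree's code.
# Compares credit-approval rates across demographic groups as a first-pass
# fairness screen (demographic parity difference).

from collections import Counter

def approval_rate_by_group(records):
    """records: iterable of (group, approved) pairs, where approved is True/False."""
    approved = Counter()
    total = Counter()
    for group, ok in records:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

# Hypothetical decisions from a credit-scoring model.
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

rates = approval_rate_by_group(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'men': 0.75, 'women': 0.25}
print(f"demographic parity gap: {gap:.2f}")   # 0.50 -- a large gap warrants scrutiny
```

A single number like this doesn’t settle whether a model is fair, but a large gap flags the system for the deeper auditing of data, consent, and accountability that the students went on to discuss.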

Other student teams considered HealthNow, a chatbot that provides health care guidance, but with better performance for men and English speakers; and MyWeather, an app developed for livestock herders by a telecommunications firm in Nairobi, Kenya, that uses weather data from a real-time weather information service provider.

The class found problems with both startups, pointing out the potential for the chatbot to misdiagnose conditions (“Can a doctor be called as a backup?” one student asked) and the possibility that MyWeather’s dependence on a partner vendor could lead to inaccurate weather data.

Preparing future leaders

Throughout the semester, students will go on to develop a responsible AI strategy for a real or fictitious company. They are also encouraged to work with ChatGPT and other generative AI language tools. (One assignment asked them to critique ChatGPT’s own response to a question of bias in generative AI.) Students also get a window into real-world AI use and experiences through guest speakers from Google, Mozilla, Partnership on AI, the U.S. Agency for International Development (USAID), and others. 

All of the students participate in at least one debate, taking sides on topics that include whether university students should be able to use ChatGPT or other generative AI language tools for school; if the OpenAI board of directors was right to fire Sam Altman; and if government regulation of AI technologies stifles innovation and should be limited.

Smith, who has done her share of research into gender and AI, also recommended many readings for the class, including “Data Feminism” by MIT Associate Professor Catherine D’Ignazio and Emory University Professor Lauren Klein; “Unmasking AI: My Mission to Protect What Is Human in a World of Machines” by AI researcher, artist, and advocate Joy Buolamwini; “Weapons of Math Destruction” by algorithmic auditor Cathy O’Neil; and “Your Face Belongs to Us” by New York Times reporter Kashmir Hill.

Smith said she hopes that her course will enable future business leaders to be more responsible stewards and managers of such technologies. “Many people think that making sure AI is ‘responsible’ is a technology task that should be left to data scientists and engineers,” she said. “The reality is, business managers and leaders have a critical role to play as they inform the priorities and values that are embedded into how AI technology is developed and used.”
