Is it ethical? New undergrad class trains students to think critically about artificial intelligence

Berkeley Haas undergraduate students Hunter Esqueda (left) and Sohan Dhanesh (right) are enrolled in Genevieve Smith’s Responsible AI Innovation & Management class. Photo: Noah Berger


“Classified” is an occasional series spotlighting some of the more powerful lessons being taught in classrooms around Haas.

On a recent Monday afternoon, Sohan Dhanesh, BS 24, joined a team of students to consider whether startup Moneytree is using machine learning ethically to determine creditworthiness among its customers.

After reading the case, Dhanesh, one of 54 undergraduates enrolled in a new Berkeley Haas course called Responsible AI Innovation & Management, said he was concerned about Moneytree’s unlimited access to users’ phone data and questioned whether customers even know what data the company taps to inform its credit-scoring algorithm. Accountability is also an issue, he said, since Silicon Valley-based Moneytree’s customers live in India and Africa.

“Credit is a huge thing, and whether it’s given to a person or not has a huge impact on their life,” Dhanesh said. “If this credit card [algorithm] is biased against me, it will affect my quality of life.”

Dhanesh, who came into the class opposed to guardrails for AI companies, says he’s surprised by how much his opinion on regulation has changed. He isn’t playing devil’s advocate, he said; the shift came from the eye-opening data, cases, and readings provided by Lecturer Genevieve Smith.

A contentious debate

Smith, who is also the founding co-director of the Responsible & Equitable AI Initiative at the Berkeley AI Research Lab and former associate director of the Berkeley Haas Center for Equity, Gender, & Leadership, created the course to teach students both sides of the AI debate.

Lecturer Genevieve Smith says the goal of her class is to train aspiring leaders to understand, think critically about, and implement strategies for responsible AI innovation and management. Photo: Noah Berger

Her goal is to train aspiring leaders to think critically about artificial intelligence and implement strategies for responsible AI innovation and management. “While AI can carry immense opportunities, it also poses immense risks to both society and business linked to pervasive issues of bias and discrimination, data privacy violations, and more,” Smith said. “Given the current state of the AI landscape and its expected global growth, profit potential, and impact, it is imperative that aspiring business leaders understand responsible AI innovation and management.”


During the semester, Smith covers the business and economic potential of AI to boost productivity and efficiency. But she also explores its immense potential for harm, such as the risk of embedding inequality, infringing on human rights, amplifying misinformation, operating without transparency, and disrupting the future of work and the climate.

Smith said she expects all of her students will interact with AI as they launch careers, particularly in entrepreneurship and tech. To that end, the class prepares them to articulate what “responsible AI” means and to understand and define ethical AI principles, design, and management approaches.

Learning through mini-cases

Smith kicked off the day’s class with a review of AI headlines, showing an interview with OpenAI CTO Mira Murati, who was asked where the company got the training data for Sora, OpenAI’s new generative AI model that creates realistic video from text. Murati contended that the company used publicly available data to train Sora but didn’t provide details in the interview. Smith asked the students what they thought of that answer, noting the “huge issue” of a lack of transparency around training data, as well as the copyright and consent implications.

Throughout the semester, students will develop a responsible AI strategy for a real or fictitious company. Photo: Noah Berger

Afterward, Smith introduced the topic of “AI for good” before the students split into groups to act as responsible AI advisors to three startups described in mini-cases: Moneytree, HealthNow, and MyWeather. They worked to answer Smith’s questions: “What concerns do you have? What questions would you ask? And what recommendations might you provide?” The teams explored these questions across five core responsible AI principles, including privacy, fairness, and accountability.

Julianna De Paula, BS 24, whose team was assigned the Moneytree case, asked whether the company had adequately addressed the potential for bias when approving customers for credit (about 60% of loans in East Africa and 70% of loans in India go to men, the case noted), and whether users give clear consent to the use of their data when they download the app.
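The kind of bias check De Paula’s team was reaching for can be made concrete in a few lines of code. The sketch below is purely illustrative and not drawn from the case: it uses an invented applicant table and an arbitrary threshold to show one common first-pass audit, comparing a credit model’s approval rates across demographic groups.

```python
# Illustrative only: a first-pass "demographic parity" audit of credit
# decisions. The data, column names, and 0.2 threshold are invented;
# none of this comes from the Moneytree case.
import pandas as pd

def approval_rate_gap(df: pd.DataFrame, group_col: str, approved_col: str) -> float:
    """Return the gap between the highest and lowest group approval rates."""
    rates = df.groupby(group_col)[approved_col].mean()
    print(rates)  # per-group approval rates, e.g. gender -> share approved
    return float(rates.max() - rates.min())

# Toy stand-in for a lender's historical decisions.
applicants = pd.DataFrame({
    "gender":   ["M", "M", "M", "F", "F", "F", "M", "F"],
    "approved": [1,   1,   0,   0,   1,   0,   1,   0],
})

gap = approval_rate_gap(applicants, "gender", "approved")
if gap > 0.2:  # the threshold is arbitrary here; real audits set it deliberately
    print(f"Approval-rate gap of {gap:.0%} warrants a closer look at the model.")
```

A gap alone doesn’t prove discrimination, but it tells an advisor where to dig, which is exactly the role the students were asked to play.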

Other student teams considered HealthNow, a chatbot that provides health care guidance but performs better for men and English speakers, and MyWeather, an app for livestock herders developed by a telecommunications firm in Nairobi, Kenya, that sources its data from a real-time weather information provider.

The class found problems with both startups, pointing out the potential for a chatbot to misdiagnose conditions (“Can a doctor be called as a backup?” one student asked) and the possibility that MyWeather’s dependence on a partner vendor could lead to inaccurate weather data.

Preparing future leaders

Throughout the semester, students develop a responsible AI strategy for a real or fictitious company. They are also encouraged to work with ChatGPT and other generative AI language tools. (One assignment asked them to critique ChatGPT’s own response to a question of bias in generative AI.) Students get a window into real-world AI use through guest speakers from Google, Mozilla, Partnership on AI, the U.S. Agency for International Development (USAID), and others.

All of the students participate in at least one debate, taking sides on topics that include whether university students should be able to use ChatGPT or other generative AI language tools for school; whether the OpenAI board of directors was right to fire Sam Altman; and whether government regulation of AI technologies stifles innovation and should be limited.

Smith, who has done her share of research into gender and AI, also recommended many readings for the class, including “Data Feminism” by MIT Associate Professor Catherine D’Ignazio and Emory University Professor Lauren Klein; “Unmasking AI: My Mission to Protect What Is Human in a World of Machines” by AI researcher, artist, and advocate Joy Buolamwini; “Weapons of Math Destruction” by algorithmic auditor Cathy O’Neil; and “Your Face Belongs to Us” by New York Times reporter Kashmir Hill.

Smith said she hopes that her course will enable future business leaders to be more responsible stewards and managers of such technologies. “Many people think that making sure AI is ‘responsible’ is a technology task that should be left to data scientists and engineers,” she said. “The reality is, business managers and leaders have a critical role to play as they inform the priorities and values that are embedded into how AI technology is developed and used.”

New center aims to create healthcare innovation research-to-impact pipeline

The Center for Healthcare Marketplace Innovation aims to shape the future of AI in healthcare through groundbreaking economic research, data partnerships and more.

Associate Professor Jonathan Kolstad will serve as faculty director of the new center (Photo: Copyright Noah Berger / 2023).

UC Berkeley experts are developing a trailblazing infrastructure to translate cutting-edge healthcare research in AI and behavioral economics into powerful real-world advances: better patient outcomes and drastically lower medical costs.

The Center for Healthcare Marketplace Innovation, announced today by the College of Computing, Data Science, and Society and the Haas School of Business, will act as a force multiplier for top-tier technological innovation and economic insight. By developing and applying research on healthcare innovation incentives, the center aims to create and deploy interventions that meaningfully improve public health.

Artificial intelligence (AI) is widely expected to transform healthcare. The new Berkeley center aims to play an essential role in ensuring those innovations benefit the public. AI tools could enhance care quality by, for example, helping triage patients in emergency rooms, diagnosing diseases, and coaching clinicians. These technologies can also help reduce the 15% to 30% of healthcare spending that goes toward administrative functions each year, said Jonathan Kolstad, the center’s faculty director. That means up to $250 billion less in annual spending and more time focused on improving patient care. Still, this moment also carries risk.

“AI is going to be central to healthcare delivery in 10, 15 years from now,” said Kolstad, a professor of economic analysis and policy at Berkeley’s business school. “We’re at this inflection point. By understanding the technology, the systemic incentives and the human abilities in the healthcare system, we have a tremendous opportunity to help shape those dynamics.”


“I think it matters whether and how those tools get built to actually enhance care delivery and help patients, and whether they are built in equitable, ethical ways because they’re started in places like Berkeley,” he said.

The center’s faculty are the right experts to lead this charge. Kolstad and faculty affiliates like Ziad Obermeyer are already award-winning academics in their respective fields, founders of healthcare innovation startups, and experts called upon by California and federal leaders to inform healthcare policies and regulations. Obermeyer is an associate professor at Berkeley’s School of Public Health.

This expertise enables them to build unique research and data resources, foster interdisciplinary incubation, and forge industry and policy collaborations. Berkeley’s all-around excellence amplifies their potential impact. With connections to ambitious initiatives like the UC San Francisco-UC Berkeley Joint Program in Computational Precision Health and the open platforms initiative recently launched by CDSS, the new center can support other leading thinkers in moving their research from breakthrough papers to real-world public benefit.

“Berkeley’s leadership in disciplines across computing, public health and economics and dedication to making real-world impacts make it the obvious home for this exciting initiative,” said Jennifer Chayes, dean of the College of Computing, Data Science, and Society. “The Center for Healthcare Marketplace Innovation will enable those at the intersection of healthcare economics and policy to join together with clinical and computing researchers to redefine success in healthcare outcomes.” 

“Harnessing AI to make our healthcare system work for people and ensure patients get better care requires a truly interdisciplinary approach,” said Ann Harrison, dean of the Haas School of Business. “I am very excited to see some of Berkeley’s great minds and cutting-edge resources come together at the new Center for Healthcare Marketplace Innovation.”

The center’s foundational development was made possible by a generous philanthropic donation from an anonymous thought partner. CHMI will be housed within the Institute for Business Innovation at Berkeley Haas.

A ‘bench-to-product’ runway

As society shifts to a new era of healthcare in which AI plays a larger role, understanding human decision-making will remain central to discovering and applying useful solutions. The center aims to connect expertise in behavioral economics with the advanced research and development underway at Berkeley to build healthcare solutions that people and companies want and will adopt.

The center will focus on three pillars: conducting research to advance the science of innovation incentives in healthcare; encouraging interdisciplinary collaboration on projects and solutions; and partnering with healthcare providers, insurers, government agencies, and others to test and refine novel interventions.

Kolstad hopes this will be the “bench-to-product” runway that increasingly technical and interdisciplinary work in AI, computer science, and behavioral science needs to move from research into impact.

“There’s a lot of really cool computational stuff happening, but it’s being built with very little understanding of the actual function of the healthcare system – of the complicated incentives of what it would take to have an algorithm, a prediction model, a solution be deployed to really change either healthcare outcomes or costs,” said Kolstad. “This kind of center that works to bridge these mechanisms can be very, very influential.”


Obermeyer’s work offers a blueprint of what the center’s impact could look like in practice. Through his research, Obermeyer found a need to improve how physicians estimate a patient’s probability of a heart attack, an assessment that can trigger tests and other urgent care. Working with a major healthcare system, he developed an algorithm to support emergency room doctors as they screen patients and make crucial life-or-death decisions.

But will that algorithm work in practice? Obermeyer intends to find out. He’s now conducting randomized trials to see if the machine learning method he developed for an academic paper can become a real-world medical solution used in emergency rooms.
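For readers curious what such a system looks like in miniature, here is a heavily simplified, hypothetical sketch of the general technique: fitting a classifier to past emergency room visits and using its predicted probability to flag high-risk patients. The features, synthetic data, and flag threshold below are all invented; this is not Obermeyer’s actual model.

```python
# Hypothetical sketch of an ER risk model, not Obermeyer's actual algorithm.
# Features, synthetic data, and the 30% flag threshold are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Toy stand-in for historical ER visits: [age, systolic BP, troponin level].
X = rng.normal(loc=[60, 130, 0.02], scale=[12, 15, 0.01], size=(500, 3))
# Synthetic label: 1 = heart attack confirmed, 0 = ruled out.
y = (X[:, 2] + 0.005 * rng.normal(size=500) > 0.025).astype(int)

model = LogisticRegression().fit(X, y)

# Score an incoming patient; flag for urgent workup above a risk threshold.
patient = np.array([[72, 145, 0.03]])
risk = model.predict_proba(patient)[0, 1]
print(f"Estimated heart-attack risk: {risk:.1%}")
if risk > 0.30:  # in practice this cutoff would be set clinically
    print("Flag for urgent cardiac workup.")
```

The hard part, as Obermeyer suggests, is not fitting the model but the randomized testing and workflow integration needed before doctors can rely on it.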

“We’re seeing so many papers come out in this area. I don’t think we’ve seen the impacts we want to see from those academic projects,” said Obermeyer, an affiliated faculty member of the Computational Precision Health program. “I think it’s because of that different skill set and because of the difficulties of translating academic ideas into the world.”

“We want to take all of this intense energy and interest in AI and health and make sure that’s turning into benefits for patients and for the healthcare system,” he said. 

Increasing access to industry data, feedback

The Center for Healthcare Marketplace Innovation is just getting started, but already its docket is stacked with ambitious projects. 

For example, the center is close to signing multiple large-scale, multimodal data-access agreements with healthcare partners. Such data is typically tightly held, and it can take years for academics to gain access, Obermeyer said. That limits both the research that can be done to tackle health problems and the usefulness of the resulting AI, which is only as good as the data it is trained on, he said. Making that data easier to access, while keeping it secure and ensuring it is used ethically, will unleash possibilities for research and impact in computational health.

The center is also setting up an industry feedback platform through which large healthcare providers and others can tell researchers what problems they are trying to solve for their patients, clinicians, and systems. This input could seed new research and provide on-the-ground insights to inform the center’s efforts.

Additionally, the center will soon begin piloting a new generative AI model that offers clinical coaching to medical professionals. And it’s hosting an economics and policy conference – the Occasional California Health Economics Workshop – on March 8. 

These initiatives offer a glimpse of the new path the center is trying to create at Berkeley for this research, for these industries, and for society.

“The future of AI and healthcare needs behavioral incentives, technological breakthroughs and data,” said Kolstad. “We’re working to bring those together.”


This article was also published by the College of Computing, Data Science, and Society with the headline “New center aims to create healthcare innovation research-to-impact pipeline.”

Media contact:

Laura Counts, Haas School of Business, [email protected]
