February 4, 2025

New UC Berkeley guide helps business leaders navigate AI ethics amid rapid adoption


By Laura Counts

Speed is king in the rush to capitalize on generative AI. Yet amidst the frenzy, more than three-quarters of AI product managers in a UC Berkeley survey said they are uncertain about how to responsibly navigate high-stakes issues like data privacy, transparency, biases, inaccuracies, and security.

Image: The cover of the “Responsible Use of Generative AI” playbook, featuring a geometric blue and gold illustration.

“Product managers and product teams often end up as gatekeepers for responsible AI implementation,” explains lead author Genevieve Smith, a Berkeley researcher and professional faculty member at the Haas School of Business. “But responsibility can be put to the side when the top-down priority and incentive structure is around speed to market.”

This widespread uncertainty was the impetus behind the launch of a new guide designed to help organizations navigate these choppy waters. Titled “Responsible Use of Generative AI: A Playbook for Product Managers & Business Leaders” and produced by the Responsible AI Initiative of the Berkeley AI Research Lab (BAIR) and Berkeley Haas, it arrives at a critical juncture.

“There have been other periods of rapid technological change, but this is really the first period where we are injecting a fundamental element of our humanity into non-living entities,” says coauthor Merrick Osborne, a postdoctoral scholar at Berkeley Haas. “We’re creating faux-human machines without really talking about how we create these things.”


The project grew out of a partnership with Google, which provided funding. The project team, which includes researchers from Stanford University and the University of Oxford along with UC Berkeley, conducted 25 interviews with product managers and surveyed 300 people working in product management-related roles around the world.

Proactive leadership

The analysis found that 77% of respondents lacked clarity on how to define and implement responsible AI, and reported a general diffusion of responsibility within their organizations. Fewer than one in five reported having incentives for responsible use; incentives were more often tied to shipping products and moving fast.

Leadership is critical, the survey found: Those who worked in organizations where leaders expressed a commitment to ethical AI implementation were nearly four times more likely to have colleagues focused on responsible use, and about 2.5 times more likely to take concrete actions, such as testing for bias.

“There is a clear role for organizational leaders, while also an important place for product managers,” said Smith, founding director of BAIR’s Responsible AI Initiative. “Even in the midst of these highly uncertain environments, many product managers reported taking micro-level actions, such as individual or team-wide reviews, safeguarding standards for customer data, and finding ways to align these actions with existing company values and principles.”

The case for responsible AI

Developed from the paper and with support from Google—including through Googlers helping to prototype early versions—the playbook makes the case that responsible AI practices can foster a positive brand image and customer loyalty, maintain regulatory compliance, and minimize risk. It emphasizes a proactive approach, outlining 10 “plays,” or practical actions leaders and product managers can take to integrate AI responsibility into day-to-day work and products.

Based on the researchers’ interviews, the guide includes real-life scenarios product managers face—such as noticing that AI-designed user personas depict “investors” as men and “budget-conscious shoppers” as women. Key recommendations include establishing clear AI principles, developing governance frameworks, conducting “gut checks” for responsibility risks, and implementing regular risk assessments.

“There’s massive hype around these technologies, which leads to misaligned organizational priorities,” said Osborne, who has studied how biases become ingrained in AI models. At the same time, there is immense opportunity to embed responsible use of generative AI to build trust and better capitalize on its benefits. “What we’re seeing is that a really important part of the process is whether people feel comfortable talking about these ethical issues with their colleagues and supervisors, given just how delicate the social dynamics around some of these topics can be.”

Read the full playbook:

Responsible Use of Generative AI: A Playbook for Product Managers & Business Leaders

By Genevieve Smith (UC Berkeley), Natalia Luka (UC Berkeley), Jessica Newman (UC Berkeley), Merrick Osborne (UC Berkeley), Brandie Nonnecke (UC Berkeley), Brian Lattimore (Stanford University), and Brent Mittelstadt (University of Oxford).

Read the working paper:

Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices