New program gives undergrads space to develop resilience

 

Tarun Galagali, CEO of startup Mandala (far right), with his class of undergraduate students in the new Foundations of Resilient Leadership program.

Julianna De Paula, BS 24, approaches life a little differently since she finished the new undergraduate Foundations of Resilient Leadership program at Berkeley Haas.

First, she pauses to think before having difficult conversations. She also takes time out to breathe, truly paying attention to the inhale and exhale, throughout the school day. She believes that both changes will help her as she gets ready to move to New York City to launch a career at L’Oreal this fall.

“There’s a lot going on with the war in Gaza and the protests, and a lot of my friends are impacted by what’s going on in Palestine,” said De Paula, one of 30 students, largely Haas undergraduates, enrolled in the class. Being a more active listener helps her navigate the stress, she said.

These skills will also make her a more resilient leader, which is the heart of the new six-week certificate program founded by Tarun Galagali, CEO of startup Mandala. The program, also used to train employees at corporations like Microsoft, covers topics ranging from having difficult conversations and navigating imposter syndrome to mindful listening and values-based leadership.

Emma Daftary, assistant dean of the Haas Undergraduate Programs, with Tarun Galagali, CEO of startup Mandala.

Galagali worked with Emma Daftary, assistant dean of the undergraduate program; Lauren Simon, associate director of Student Life & Leadership Development for the undergraduate program; and Katrina Koski, director of inclusion and belonging at Haas, to launch the class this past spring. (Mandala is an ancient Sanskrit word meaning circle, a reference to community and connection.)

Developing “skills to navigate”

The program provides students an open space to discuss their struggles and challenges. In doing so, it normalizes feelings and experiences that can otherwise leave students feeling isolated and alone, Daftary said. 

“It is our role as a business school to help shape and inform inclusive, resilient, effective leaders,” Daftary said. “We launched the program to provide our students with the skills to navigate situations that are personally and professionally triggering.” One catalyst for the program, among others, she said, was the turmoil on campus following the Hamas attacks in Israel on October 7 and the resulting war in Palestine. “We were meeting with students and they were reporting that they were having a really difficult time processing their grief while balancing the demands of their classes,” Daftary said. “They were feeling alone and disconnected.”

“It is our role as a business school to help shape and inform inclusive, resilient, effective leaders.” —Emma Daftary

Tarun Galagali, CEO of startup Mandala, asks students to share “glacier stories” and open up to each other about their struggles.

In class, Galagali starts by sharing his own story, beginning with his childhood as the son of Indian immigrants growing up in Cupertino, Calif. After earning a Harvard MBA, he worked as a product marketing and strategy lead at Google, as a management consultant at EY Parthenon, as director of strategy at online therapy platform Talkspace, and as a senior political advisor to Congressman Ro Khanna. Under the surface of the names on his resume, he said, there are “glacier stories” of feeling isolated, inadequate, or not belonging at times.

“I share (my story) to show that there’s a story behind each of the resume logos and resilience embedded in them,” he said. There are positives to these stories, too, he said, as he used what he learned about leadership and teamwork at Google and from his experience lobbying for kids’ mental health in California to build out the Mandala program.

Balancing stress and anxiety

Coco Zhang, BA 26, who lives with and supports her single mother by working part-time jobs as a full-time student, said she often feels over-committed and burned out at Berkeley. What helped, she said, was learning that she was not alone. “Before I joined (Mandala) I thought I was one of the few who struggled a lot,” she said. “It helped to hear other students’ experiences and to know what they are doing to balance stress and anxiety. It motivates me to see what they have done to handle imposter syndrome and to learn some invaluable mental well-being concepts that have helped me to ground my true self to go beyond my boundaries and rise above the horizons.” 

Jacob Williams, BS 24, who was part of the founding group that worked with Mandala to launch the program, said the principles explored have provided him with tools he has already deployed in daily life. 

Jacob Williams

“I think this semester has been revolutionary for me,” he said. Through the program, he said he has learned to “jump across domains” and make new connections, such as connecting the dots between his cancer research and his DEI efforts, which has made his work a lot more meaningful. “On the first day of Mandala, Tarun explored the concept of an underlying glacier,” he said. “Among the many interpretations shared, the concept of a subconscious root to the way we think, behave, feel, and act really resonated with me. Realizing the deeper motivations behind my intuition and the ways I’ve chosen to govern has allowed me to communicate in a way which ultimately generates greater value, meaning, and impact for the people I work with and the public I’m honored to serve.”

A successful outcome

Galagali said the program is particularly relevant at a time when people are “quiet quitting” at work due to burnout. People lack critical things at work, he said, including psychological safety and a sense of belonging and connection.

Galagali said he would like to expand the Berkeley program, based on the success they’ve had so far: 88% of students who finished the program reported an increase in resilience; 94% of students reported reductions in burnout; and 100% felt the program improved their confidence in entering the workplace. 

“A lot of this is a personal deep desire to create community,” he said, noting that students who have completed the program have reported that they are better able to show up for hard conversations, that they’ve learned something new to make them better at their jobs, and that they have more self-awareness and awareness of others.

De Paula said she hopes the program will continue. “It was surprising to see so many Haas students opening up to each other,” she said. “Tarun is also very inspirational as a mentor, so I have only good things to say about the program.”

Is it ethical? New undergrad class trains students to think critically about artificial intelligence

Berkeley Haas undergraduate students Hunter Esqueda (left) and Sohan Dhanesh (right) are enrolled in Genevieve Smith’s Responsible AI Innovation & Management class. Photo: Noah Berger

 

“Classified” is an occasional series spotlighting some of the more powerful lessons being taught in classrooms around Haas.

On a recent Monday afternoon, Sohan Dhanesh, BS 24, joined a team of students to consider whether startup Moneytree is using machine learning ethically to determine creditworthiness among its customers.

After reading the case, Dhanesh, one of 54 undergraduates enrolled in a new Berkeley Haas course called Responsible AI Innovation & Management, said he was concerned by Moneytree’s unlimited access to users’ phone data and questioned whether customers even know what data the company is tapping to inform its credit-scoring algorithm. Accountability is also an issue, he said, since Silicon Valley-based Moneytree’s customers live in India and Africa.

“Credit is a huge thing, and whether it’s given to a person or not has a huge impact on their life,” Dhanesh said. “If this credit card [algorithm] is biased against me, it will affect my quality of life.”

Dhanesh, who came into the class opposed to guardrails for AI companies, said he’s surprised by how his opinions about regulation have changed. He isn’t playing devil’s advocate, he said; his mind was changed by the eye-opening data, cases, and readings provided by Lecturer Genevieve Smith.

A contentious debate

Smith, who is also the founding co-director of the Responsible & Equitable AI Initiative at the Berkeley AI Research Lab and former associate director of the Berkeley Haas Center for Equity, Gender, & Leadership, created the course with an aim to teach students both sides of the AI debate.

Lecturer Genevieve Smith says the goal of her class is to train aspiring leaders to understand, think critically about, and implement strategies for responsible AI innovation and management. Photo: Noah Berger

Her goal is to train aspiring leaders to think critically about artificial intelligence and implement strategies for responsible AI innovation and management. “While AI can carry immense opportunities, it also poses immense risks to both society and business linked to pervasive issues of bias and discrimination, data privacy violations, and more,” Smith said. “Given the current state of the AI landscape and its expected global growth, profit potential, and impact, it is imperative that aspiring business leaders understand responsible AI innovation and management.”

“While AI can carry immense opportunities, it also poses immense risks to both society and business linked to pervasive issues of bias and discrimination, data privacy violations, and more.” —Genevieve Smith

During the semester, Smith covers the business and economic potential of AI to boost productivity and efficiency. But she also explores its immense potential for harm, such as the risk of embedding inequality, infringing on human rights, amplifying misinformation, undermining transparency, and affecting the future of work and the climate.

Smith said she expects all of her students will interact with AI as they launch careers, particularly in entrepreneurship and tech. To that end, the class prepares them to articulate what “responsible AI” means and understand and define ethical AI principles, design, and management approaches. 

Learning through mini-cases

Smith kicked off that day’s class with a review of the latest AI headlines, showing an interview with OpenAI CTO Mira Murati, who was asked where the company got the training data for Sora, OpenAI’s new generative AI model that creates realistic video from text. Murati contended that the company used publicly available data to train Sora but didn’t provide any details in the interview. Smith asked the students what they thought of her answer, noting the “huge issue” of a lack of transparency around training data, as well as copyright and consent implications.

Throughout the semester, students will develop a responsible AI strategy for a real or fictitious company. Photo: Noah Berger

Afterward, Smith introduced the topic of “AI for good” before the students split into groups to act as responsible AI advisors to three startups, described in mini-cases: Moneytree, HealthNow, and MyWeather. They worked to answer Smith’s questions: “What concerns do you have? What questions would you ask? And what recommendations might you provide?” The teams explored these questions across five core responsible AI principles, including privacy, fairness, and accountability.

Julianna De Paula, BS 24, whose team was assigned the Moneytree case, asked whether the company had adequately addressed the potential for bias when approving customers for credit (about 60% of loans in East Africa and 70% of loans in India go to men, the case noted) and whether the app’s users give clear consent for their data when they download the app.
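
One way to make that bias question concrete is to compare a model’s approval rates across groups. The sketch below is purely illustrative and not taken from the course or the case: the data is invented and the `approval_rates` helper is a hypothetical name. It computes per-group approval rates and the gap between them, a common first-pass fairness check (demographic parity) for a credit model.

```python
# Illustrative only: a first-pass demographic-parity check for a
# credit-approval model. The decisions below are made-up examples,
# not data from the Moneytree case.
from collections import Counter

# Each record: (group, approved) -- hypothetical model outputs.
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

def approval_rates(records):
    """Return the share of approved applicants per group."""
    totals, approved = Counter(), Counter()
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok  # True counts as 1, False as 0
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                                 # {'men': 0.75, 'women': 0.25}
print(f"demographic-parity gap: {gap:.2f}")  # 0.50 -> worth investigating
```

A large gap doesn’t prove discrimination on its own, since groups can differ in legitimate ways, but it is exactly the kind of red flag a responsible AI review is meant to surface before a model ships.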

Other student teams considered HealthNow, a chatbot that provides health care guidance, but with better performance for men and English speakers; and MyWeather, an app developed for livestock herders by a telecommunications firm in Nairobi, Kenya, that uses weather data from a real-time weather information service provider.

The class found problems with both startups, pointing out the potential for a chatbot to misdiagnose conditions (“Can a doctor be called as a backup?” one student asked), and the possibility that MyWeather’s dependence on a partner vendor could lead to inaccurate weather data.

Preparing future leaders

Throughout the semester, students will go on to develop a responsible AI strategy for a real or fictitious company. They are also encouraged to work with ChatGPT and other generative AI language tools. (One assignment asked them to critique ChatGPT’s own response to a question of bias in generative AI.) Students also get a window into real-world AI use and experiences through guest speakers from Google, Mozilla, Partnership on AI, the U.S. Agency for International Development (USAID), and others. 

All of the students participate in at least one debate, taking sides on topics that include whether university students should be able to use ChatGPT or other generative AI language tools for school; whether the OpenAI board of directors was right to fire Sam Altman; and whether government regulation of AI technologies stifles innovation and should be limited.

Smith, who has done her share of research into gender and AI, also recommended many readings for the class, including “Data Feminism” by MIT Associate Professor Catherine D’Ignazio and Emory University Professor Lauren Klein; “Unmasking AI: My Mission to Protect What Is Human in a World of Machines” by AI researcher, artist, and advocate Joy Buolamwini; “Weapons of Math Destruction” by algorithmic auditor Cathy O’Neil; and “Your Face Belongs to Us” by New York Times reporter Kashmir Hill.

Smith said she hopes that her course will enable future business leaders to be more responsible stewards and managers of such technologies. “Many people think that making sure AI is ‘responsible’ is a technology task that should be left to data scientists and engineers,” she said. “The reality is, business managers and leaders have a critical role to play as they inform the priorities and values that are embedded into how AI technology is developed and used.”