Is it ethical? New undergrad class trains students to think critically about artificial intelligence

Berkeley Haas undergraduate students Hunter Esqueda (left) and Sohan Dhanesh (right) are enrolled in Genevieve Smith’s Responsible AI Innovation & Management class. Photo: Noah Berger


“Classified” is an occasional series spotlighting some of the more powerful lessons being taught in classrooms around Haas.

On a recent Monday afternoon, Sohan Dhanesh, BS 24, joined a team of students to consider whether startup Moneytree is using machine learning ethically to determine creditworthiness among its customers.

After reading the case, Dhanesh, one of 54 undergraduates enrolled in a new Berkeley Haas course called Responsible AI Innovation & Management, said he was concerned about Moneytree’s unlimited access to users’ phone data and questioned whether customers even know what data the company is tapping to inform its credit-scoring algorithm. Accountability is also an issue, he said, since Silicon Valley-based Moneytree’s customers live in India and Africa.

“Credit is a huge thing, and whether it’s given to a person or not has a huge impact on their life,” Dhanesh said. “If this credit card [algorithm] is biased against me, it will affect my quality of life.”

Dhanesh, who came into the class opposed to guardrails for AI companies, says he’s surprised by how much his opinions about regulation have changed. He isn’t playing devil’s advocate, he said; the shift came from the eye-opening data, cases, and readings provided by Lecturer Genevieve Smith.

A contentious debate

Smith, who is also the founding co-director of the Responsible & Equitable AI Initiative at the Berkeley AI Research Lab and former associate director of the Berkeley Haas Center for Equity, Gender, & Leadership, created the course with the aim of teaching students both sides of the AI debate.

Lecturer Genevieve Smith says the goal of her class is to train aspiring leaders to understand, think critically about, and implement strategies for responsible AI innovation and management. Photo: Noah Berger

Her goal is to train aspiring leaders to think critically about artificial intelligence and implement strategies for responsible AI innovation and management. “While AI can carry immense opportunities, it also poses immense risks to both society and business linked to pervasive issues of bias and discrimination, data privacy violations, and more,” Smith said. “Given the current state of the AI landscape and its expected global growth, profit potential, and impact, it is imperative that aspiring business leaders understand responsible AI innovation and management.”

“While AI can carry immense opportunities, it also poses immense risks to both society and business linked to pervasive issues of bias and discrimination, data privacy violations, and more.” – Genevieve Smith

During the semester, Smith covers the business and economic potential of AI to boost productivity and efficiency. But she also explores its immense potential for harm: embedding inequality, infringing on human rights, amplifying misinformation, operating without transparency, and reshaping the future of work and the climate.

Smith said she expects all of her students will interact with AI as they launch careers, particularly in entrepreneurship and tech. To that end, the class prepares them to articulate what “responsible AI” means and to understand and define ethical AI principles, design, and management approaches.

Learning through mini-cases

Today, Smith kicked off class with a review of the day’s AI headlines, showing an interview with OpenAI CTO Mira Murati, who was asked where the company got the training data for Sora, OpenAI’s new generative AI model that creates realistic video from text. Murati contended that the company used publicly available data to train Sora but didn’t provide any details in the interview. Smith asked the students what they thought about her answer, noting the “huge issue” of a lack of transparency around training data, as well as the copyright and consent implications.

Throughout the semester, students will develop a responsible AI strategy for a real or fictitious company. Photo: Noah Berger

Afterward, Smith introduced the topic of “AI for good” before the students split into groups to act as responsible AI advisors to three startups described in mini cases: Moneytree, HealthNow, and MyWeather. They worked to answer Smith’s questions: “What concerns do you have? What questions would you ask? And what recommendations might you provide?” The teams explored these questions across five core responsible AI principles, including privacy, fairness, and accountability.

Julianna De Paula, BS 24, whose team was assigned to read about Moneytree, asked whether the company had adequately addressed the potential for bias when approving customers for credit (about 60% of loans in East Africa and 70% of loans in India go to men, the case noted), and whether the app’s users give clear consent to the use of their data when they download it.

Other student teams considered HealthNow, a chatbot that provides health care guidance, but with better performance for men and English speakers; and MyWeather, an app developed for livestock herders by a telecommunications firm in Nairobi, Kenya, that uses weather data from a real-time weather information service provider.

The class found problems with both startups, pointing out the potential for a chatbot to misdiagnose conditions (“Can a doctor be called as a backup?” one student asked), and the possibility that MyWeather’s dependence on a partner vendor could lead to inaccurate climate data.
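The HealthNow concern is easy to miss in aggregate metrics, so it helps to see how such a gap shows up in practice. Below is a hypothetical illustration (all numbers invented, not from the case) of a disaggregated evaluation, the kind of check a responsible AI advisor might run: a chatbot can look reasonably accurate overall while underperforming badly for specific groups.

```python
# Hypothetical disaggregated evaluation: overall accuracy looks fine,
# but specific user groups fare much worse. All numbers are invented.
results = {
    "men, English speakers":       {"correct": 92, "total": 100},
    "women, English speakers":     {"correct": 81, "total": 100},
    "women, non-English speakers": {"correct": 64, "total": 100},
}

overall_correct = sum(r["correct"] for r in results.values())
overall_total = sum(r["total"] for r in results.values())
print(f"overall: {overall_correct / overall_total:.0%} accurate")  # 79%

for group, r in results.items():
    print(f"{group}: {r['correct'] / r['total']:.0%} accurate")  # 92%, 81%, 64%
```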

Preparing future leaders

Throughout the semester, students will go on to develop a responsible AI strategy for a real or fictitious company. They are also encouraged to work with ChatGPT and other generative AI language tools. (One assignment asked them to critique ChatGPT’s own response to a question of bias in generative AI.) Students also get a window into real-world AI use and experiences through guest speakers from Google, Mozilla, Partnership on AI, the U.S. Agency for International Development (USAID), and others. 

All of the students participate in at least one debate, taking sides on topics that include whether university students should be able to use ChatGPT or other generative AI language tools for school; if the OpenAI board of directors was right to fire Sam Altman; and if government regulation of AI technologies stifles innovation and should be limited.

Smith, who has done her share of research into gender and AI, also recommended many readings for the class, including “Data Feminism” by MIT Associate Professor Catherine D’Ignazio and Emory University Professor Lauren Klein; “Unmasking AI: My Mission to Protect What Is Human in a World of Machines” by AI researcher, artist, and advocate Joy Buolamwini; “Weapons of Math Destruction” by algorithmic auditor Cathy O’Neil; and “Your Face Belongs to Us” by New York Times reporter Kashmir Hill.

Smith said she hopes that her course will enable future business leaders to be more responsible stewards and managers of such technologies. “Many people think that making sure AI is ‘responsible’ is a technology task that should be left to data scientists and engineers,” she said. “The reality is, business managers and leaders have a critical role to play as they inform the priorities and values that are embedded into how AI technology is developed and used.”

FlowGPT co-founder on his visionary project that’s speeding ahead in the AI market

Startup: FlowGPT
Co-founders: Lifan Wang, MBA 22, and Jay Dang, a former UC Berkeley Computer Science major

Lifan Wang, MBA 22, co-founder of startup FlowGPT (Wang used Lensa AI to generate the image.)

In this interview, Lifan Wang discusses how he met his FlowGPT co-founder, Jay Dang, at UC Berkeley, and why speed was critical for his startup in entering the AI market.

How did you come up with the idea for FlowGPT?

We started this project in January. We were both power users of ChatGPT when it first came out and would spend around 10 hours a day exploring different use cases of ChatGPT prompts and trying to leverage AI to increase our productivity. As we used it more, we realized that there are so many more use cases that people haven’t discovered. So we started doing extensive research by talking to people who use ChatGPT and prompts. We talked with approximately 100 people from various online communities, such as Discord channels, and found that people constantly post and share ChatGPT prompts with each other, which gave us the idea to create a dedicated platform for prompt creators to share their prompts.

How did you get started in entrepreneurship at Haas? 

Haas is a great place for aspiring entrepreneurs. I’ve taken several entrepreneurship classes, including a class with Rhonda Shrader, executive director of the Berkeley Haas Entrepreneurship Program, that helped me understand the process of launching a startup — from searching for ideas to conducting user research to creating a prototype. 

Haas is a great place for aspiring entrepreneurs.

In the Business of AI course, taught by Pieter Abbeel, a renowned professor in the engineering school, I interacted with generative AI and learned about neural networks and GANs (generative adversarial networks), which pit two deep learning models against each other in a game. I also explored various technical imaging technologies. I firmly believe that AI, especially generative AI, is going to be a significant trend that will revolutionize the world.
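For readers unfamiliar with that adversarial setup, here is a minimal toy sketch in PyTorch (illustrative only, not material from the course): a generator learns to mimic a simple one-dimensional target distribution while a discriminator learns to tell real samples from generated ones, each training against the other.

```python
# Toy GAN: generator G tries to produce samples resembling N(4, 1.25);
# discriminator D tries to distinguish real samples from G's output.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0  # samples from the "real" distribution
    fake = G(torch.randn(64, 8))            # generator's attempts

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: push D(fake) toward 1, i.e., fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```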

Where did you meet your co-founder?

Jay and I met during our time at UC Berkeley SkyDeck, where we attended various events. Jay was seeking funding for his startup in his freshman year. As a part-time venture partner, I was interested in potential investment opportunities. He pitched me his startup, which connected to the work I had previously done in the industry. We had extensive discussions and got to know each other well.

Are you both seeking funding right now?

We secured our seed round of funding in May and are currently preparing to launch a new funding round this month or next. Our user base has experienced robust growth, and based on the data we’ve gathered, now is the perfect time to accelerate expansion.

What are some of your concerns about the future of AI or its impact on work and society?

With every technological advancement, there are inherent risks. When computers were introduced, illegal activities emerged on websites and regulations evolved. Our aim is to empower people to be more productive and generate a positive impact while prioritizing safety. We must ensure the safe use of AI, which will become a powerful tool, similar to the internet and software. Many people are already leveraging new AI tools like ChatGPT, along with prompt engineering, to increase their productivity. At FlowGPT, we use ChatGPT daily for coding, product management, messaging, and marketing, covering various aspects of our operations. AI represents the next generation of powerful tools that elevate human productivity to new heights.

Our aim is to empower people to be more productive and generate a positive impact while prioritizing safety.

Do you have any advice for aspiring entrepreneurs? 

Execution is crucial. That is the most important thing I learned from Jay, my co-founder. 

We launched the product in January, just one and a half months after ChatGPT’s release. Unlike many competitors, who were still in the ideation stage, we were already ahead. When competitors attempted to imitate us, we had already iterated three times and gained a million users. 

My advice is to start building right away. You don’t have to be an expert at product development to get started. During my time at Cal, I noticed many people getting stuck in the same phase. Some might say, “I’ve got all the business plans figured out, and all I need is one programmer to build the product.” However, as time passed, they were still searching for programmers. The ability to launch is crucial, especially in the initial stages.


California Management Review examines how AI will change business

A special summer issue of California Management Review takes an in-depth look at how artificial intelligence is changing business.

Eight articles cover topics such as AI in human resources management, the role of AI in personalized marketing, organizational decision-making in the age of AI, and how AI will launch the “feeling economy,” where interpersonal skills are more valuable than ever.

“Artificial intelligence is a rather fuzzy concept and is actually not that easy to define,” says Andreas Kaplan, a marketing professor at France’s ESCP Business School, who guest-edited the issue with ESCP marketing Prof. Michael Haenlein. “We define artificial intelligence as a system’s ability to interpret external data correctly, to learn from such data, and to use these learnings to achieve specific goals and tasks through flexible adaptation.”


“Managers of the future will need to consider AI and the associated systems of automation as a central part of their future workforce,” Haenlein says in the introduction. “An average employee performs dozens if not hundreds of different tasks in a day, and only some can be taken over by a machine. Instead of talking about job replacement, we should be talking about job enhancement, because AI systems can help employees do their jobs more efficiently.”

Browse the articles here.

California Management Review is Berkeley Haas’ premier management journal. Edited at the University of California for more than 60 years, the journal publishes cutting-edge research useful to management education, and presents new insights into the practice of management.

Minority homebuyers face widespread statistical lending discrimination, study finds


Face-to-face meetings between mortgage officers and homebuyers have been rapidly replaced by online applications and algorithms, but lending discrimination hasn’t gone away.

A new University of California, Berkeley study has found that both online and face-to-face lenders charge higher interest rates to African American and Latino borrowers, earning 11 to 17 percent higher profits on such loans. All told, those homebuyers pay up to half a billion dollars more in interest every year than white borrowers with comparable credit scores do, researchers found.

The findings raise legal questions about the rise of statistical discrimination in the fintech era and point to potentially widespread violations of U.S. fair lending laws, the researchers say. While lending discrimination has historically been driven by human prejudice, pricing disparities are increasingly the result of algorithms that use machine learning to identify applicants who might shop around less and target them with higher-priced loans.

“The mode of lending discrimination has shifted from human bias to algorithmic bias,” said study co-author Adair Morse, a finance professor at UC Berkeley’s Haas School of Business. “Even if the people writing the algorithms intend to create a fair system, their programming is having a disparate impact on minority borrowers—in other words, discriminating under the law.”

First-ever dataset 

A key challenge in studying lending discrimination has been that the only large data source that includes race and ethnicity is Home Mortgage Disclosure Act (HMDA) data, which covers 90 percent of residential mortgages but lacks information on loan structure and property type. Using machine learning techniques, researchers merged HMDA data with three other large datasets—ATTOM, McDash, and Equifax—connecting, for the first time ever, details on interest rates, loan terms and performance, property location, and borrowers’ credit with race and ethnicity.
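Conceptually, the linkage works like a database join on fields the datasets share; the hard part in the study was doing it reliably at scale. A simplified, hypothetical sketch (column names and values invented for illustration):

```python
import pandas as pd

# HMDA-style records contribute race/ethnicity; a servicing-style dataset
# contributes the interest rate. Joining on shared fields connects them.
hmda = pd.DataFrame({
    "census_tract": ["06001A", "06002B"],
    "loan_amount":  [200_000, 310_000],
    "year":         [2012, 2013],
    "ethnicity":    ["Latino", "White"],
})
servicing = pd.DataFrame({
    "census_tract":  ["06001A", "06002B"],
    "loan_amount":   [200_000, 310_000],
    "year":          [2012, 2013],
    "interest_rate": [0.0462, 0.0449],
})

merged = hmda.merge(servicing, on=["census_tract", "loan_amount", "year"])
print(merged)  # each loan now carries both ethnicity and interest rate
```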

The researchers—including professors Nancy Wallace and Richard Stanton of the Haas School of Business and Prof. Robert Bartlett of Berkeley Law—focused on 30-year, fixed-rate, single-family residential loans issued from 2008 to 2015 and guaranteed by Fannie Mae and Freddie Mac.

This ensured that all the loans in the pool were backed by the U.S. government and followed the same rigorous pricing process—based only on a grid of loan-to-value and credit scores—put in place after the financial crisis. Because the private lenders are protected from default by the government guarantee, any additional variations in loan pricing would be due to the lenders’ competitive decisions. The researchers could thus isolate pricing differences that correlate with race and ethnicity apart from credit risk.

The analysis found significant discrimination by both face-to-face and algorithmic lenders:

  • Black and Latino borrowers pay 5.6 to 8.6 basis points more in interest on purchase loans than white and Asian borrowers do, and 3 basis points more on refinance loans.
  • These disparities cost borrowers $250 million to $500 million annually.
  • For lenders, this amounts to 11 percent to 17 percent higher profits on purchase loans to minorities, based on the industry-average 50-basis-point profit on loan issuance (a quick check of the arithmetic appears below).
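That last figure follows directly from the basis-point gap. A back-of-the-envelope check, using only the numbers reported above:

```python
# Extra interest of 5.6-8.6 basis points, measured against the industry-
# average 50-basis-point profit on issuance, implies 11-17% higher profits.
baseline_profit_bps = 50.0
for extra_bps in (5.6, 8.6):
    print(f"{extra_bps} bp extra -> {extra_bps / baseline_profit_bps:.0%} higher profit")
# 5.6 bp extra -> 11% higher profit
# 8.6 bp extra -> 17% higher profit
```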

“Algorithmic strategic pricing”

Morse said the results are consistent with lenders using big data variables and machine learning to infer the extent of competition for customers and price loans accordingly. This pricing might be based on geography—such as targeting areas with fewer financial services—or on characteristics of applicants. If an AI can figure out which applicants might do less comparison shopping and accept higher-priced offerings, the lender has created what Morse calls “algorithmic strategic pricing.”
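To make the idea concrete, here is an entirely hypothetical sketch of what such pricing logic might look like (the study infers this behavior from pricing data; this is not lenders’ actual code). A model’s estimate of an applicant’s propensity to comparison shop feeds directly into the quoted rate:

```python
# Hypothetical "algorithmic strategic pricing": the less likely an applicant
# is predicted to shop around, the higher the markup on the quoted rate.
def quoted_rate(base_rate: float, shop_probability: float,
                max_markup_bps: float = 10.0) -> float:
    """Add a markup, in basis points, inversely tied to shopping propensity."""
    markup_bps = max_markup_bps * (1.0 - shop_probability)
    return base_rate + markup_bps / 10_000

print(f"{quoted_rate(0.045, shop_probability=0.2):.4%}")  # 4.5800% (+8 bp)
print(f"{quoted_rate(0.045, shop_probability=0.9):.4%}")  # 4.5100% (+1 bp)
```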

“There are a number of reasons that ethnic minority groups may shop around less—it could be because they live in financial deserts with less access to a range of products and more monopoly pricing, or it could be that the financial system creates an unfriendly atmosphere for some borrowers,” Morse said. “The lenders may not be specifically targeting minorities in their pricing schemes, but by profiling non-shopping applicants they end up targeting them.”

This is the type of price discrimination that U.S. fair lending laws are designed to prohibit, Bartlett notes. Several U.S. courts have held that loan pricing differences that vary by race or ethnicity can only be legally justified if they are based on borrowers’ creditworthiness. “The novelty of our empirical design is that we can rule out the possibility that these pricing differences are due to differences in credit risk among borrowers,” he said.

Overall decline in lending discrimination

The data did reveal some good news: Lending discrimination overall has been on a steady decline, suggesting that the rise of new fintech platforms and simpler online application processes for traditional lenders has boosted competition and made it easier for people to comparison shop—which bodes well for underserved homebuyers.

The researchers also found that fintech lenders did not discriminate on accepting minority applicants. Traditional face-to-face lenders, however, were still 5 percent more likely to reject them.


CONTACTS & RESOURCES

Read the full paper.

Berkeley Haas Media Relations: Laura Counts, [email protected], (510) 643-9977

Berkeley Law: Prof. Robert Bartlett, [email protected]


Student Startup Roundup: Vidi, Ping, Cryptonite

The Startup Roundup series spotlights students and alumni who are starting a new business or enterprise.

Vidi

Co-founders:

Federico Alvarez del Blanco, MBA 18
John Kim, PhD 18 (UC Berkeley/UCSF Bioengineering)
Hector Neira, PhD 18 (UC Berkeley/UCSF Bioengineering)
Robert Kim, PhD candidate (UCSD MD/PhD, Neuroscience)

Busy surgical teams inadvertently leave an instrument inside a patient an estimated 1,500 times a year in the U.S. alone, according to research. Less frightening, but still problematic, is the considerable cost to hospitals of bringing in instruments that are never used but must still be sterilized or restocked—as well as the delays that happen when required instruments fail to make it to the surgical tray.

Solving those problems is the focus of Vidi, a fledgling company launched last November by Federico Alvarez del Blanco, MBA 18, and three other University of California graduates. “Tracking surgical instruments is slow, manual, and error-prone,” Alvarez del Blanco says.

Team Vidi, left to right: Hector Neira, Federico Alvarez del Blanco, and John J. Kim

The team’s inspiration came while they were attending a workshop on visual recognition sponsored by information technology company NEC on the Cal campus. “We realized that the technology being used to develop self-driving cars could have wider applications in the medical field,” he says.

The heart of the Vidi system is a camera mounted in the operating room and connected to a computer. The system scans the surgical tray, recognizes the instruments on it, and keeps track of them. When the surgery is concluded, the system gives the team a readout of each item that was in the cart at the beginning of the procedure and lets them know if anything is missing.
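The end-of-procedure reconciliation reduces, conceptually, to a set comparison. A simplified, hypothetical sketch (in the real system, camera-based visual recognition builds these inventories; the instrument names here are invented):

```python
# Anything detected on the tray at the start of the procedure but not at
# the end gets flagged before the surgical team closes.
def missing_instruments(detected_at_start: set[str],
                        detected_at_end: set[str]) -> set[str]:
    return detected_at_start - detected_at_end

tray_start = {"scalpel", "forceps", "clamp", "retractor"}
tray_end = {"scalpel", "forceps", "retractor"}
print(missing_instruments(tray_start, tray_end))  # {'clamp'} -> alert the team
```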

The really difficult part of developing the system is training machines to correctly recognize hundreds of instruments, Alvarez del Blanco says. It’s similar to the technology self-driving cars need to recognize objects and react accordingly. That’s why Vidi team members have advanced degrees in fields such as bioengineering, neuroscience, and image recognition.

Although Vidi, which means “to see” in Latin, is very young, it has already gained a good deal of recognition. The team was awarded a Haas Dean’s Seed Fund grant last year; took second place at the University of California Big Ideas Competition in 2018; and won awards from NEC and the National Science Foundation’s I-Corps program.

Alvarez del Blanco says his time in the MBA program helped him build the connections he needed to launch Vidi. “Haas has an interdisciplinary approach that gave me access to ideas and people across the entire University of California system,” he says.


Ping

Co-founders:

Kourosh Zamanizadeh, BS 09, MBA 18
Ryan Alshak, BS 09 (Political Science)
Matt Bordas
Janesh Gupta
Eric Zaarour

If you’ve ever had dealings with a law firm, you’ve probably gotten a detailed bill with line items for everything from reviewing files to drafting documents to answering emails. While it may seem cut-and-dried, billing clients is actually a burdensome, error-prone task that costs law firms potentially billions in wasted time and lost revenue, says Kourosh Zamanizadeh, MBA 18, co-founder and COO of Ping.

A Berkeley Haas-nurtured startup, Ping uses artificial intelligence, machine learning, and cloud computing to automate legal billing. The software tracks, stores, and analyzes the time attorneys spend on a case, and then creates client-ready bills. It’s early days, but Ping has already attracted significant funding from top-tier venture capital firms (a public announcement is pending), along with a $5,000 grant from the Dean’s Seed Fund. It was named “Legal Tech Startup of the Year” in 2017 by the American Bar Association.

Ping has landed its first large client, Mishcon de Reya, a London-based law firm employing more than 800 people, says Zamanizadeh. Ping has already run a successful pilot, and the firm has committed to rolling the software out company-wide within the year. Zamanizadeh also expects to start trials with a number of other global law firms later this year—a business expansion that will require a larger technology team.

The Ping team, left to right: Matt Bordas, Eric Zaarour, Ryan Alshak, Janesh Gupta, and Kourosh Zamanizadeh

Zamanizadeh and co-founder Ryan Alshak met while undergraduates and fraternity brothers at Cal a decade ago. “We always dreamed of starting a company together and decided to take the leap in 2016,” he says. “We both left our careers and just went for it.” The startup team has a deep lineup of relevant talent: Alshak is a former lawyer; Matt Bordas and Janesh Gupta are software engineers; Eric Zaarour is a designer; and Zamanizadeh has experience in business development and investment management.

This is the second startup for the five-member team, who made an earlier, unsuccessful attempt to build a company around an app for exchanging contacts. After they hit upon the idea of focusing on legal technology, they were accepted by SkyDeck, the accelerator run by Berkeley Haas, the College of Engineering, and UC Berkeley, where they had a home base to develop their idea further.

“The startup ecosystem at Berkeley has very much matured since Ryan and I first met as undergrads. It’s truly world-class,” says Zamanizadeh, who credits SkyDeck Executive Director Caroline Winnett and Ikhlaq Sidhu, chief scientist and founding director of the Sutardja Center for Entrepreneurship & Technology, for their extra support. “The environment has been very empowering and the help we’ve received couldn’t be any more genuine.”


Cryptonite

Co-founders:

Dustin Seely, EWMBA 18
Michael Brenndoerfer, M.Eng 18

Efficiently buying and selling bitcoins and hundreds of other cryptocurrencies is not a problem most people have. But as these hypermodern currencies become more of an investment and less of a curiosity, investors will need a simple way to manage their crypto-portfolios.

That’s the market Dustin Seely, EWMBA 18, co-founder of Cryptonite, is going after. “We’re going to give investors a way to invest in the entire cryptocurrency market in one place, and do it in U.S. dollars,” he says.

Dustin Seely

Seely and co-founder Michael Brenndoerfer met in a Berkeley Haas entrepreneurship class, and then took the new, multidisciplinary “Blockchain and the Future of Technology, Business and the Law” course last spring, where they learned more about the technology underlying cryptocurrencies. Their young company was awarded a Dean’s Seed Fund grant and is expected to go live in the fall.

The cryptocurrency market is volatile and expanding, with a market cap of about $250 billion in mid-July (down from a peak of more than $800 billion in January). Although bitcoin is the most valuable and most widely known, there are now more than 1,600 cryptocurrencies traded across almost 12,000 scattered markets, according to CoinMarketCap. What’s more, many of those exchanges do not accept dollars, so doing business with them requires buyers to slog through complicated, multi-step trading procedures. Buying a cryptocurrency called Zilliqa, for example, means buying bitcoin with dollars and then using that bitcoin to purchase Zilliqa, Seely explains.
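That two-hop flow is simple to state but tedious to execute by hand across exchanges, which is the friction Cryptonite aims to remove. A hypothetical sketch of the conversion arithmetic (invented rates, no real exchange API):

```python
# Two-hop purchase: USD -> BTC on a fiat-accepting exchange, then
# BTC -> ZIL on a crypto-only exchange. Rates are invented; fees ignored.
def buy_zilliqa_with_usd(usd: float, btc_per_usd: float,
                         zil_per_btc: float) -> float:
    btc = usd * btc_per_usd    # hop 1: buy bitcoin with dollars
    return btc * zil_per_btc   # hop 2: spend the bitcoin on Zilliqa

print(buy_zilliqa_with_usd(100.0, btc_per_usd=0.00015, zil_per_btc=180_000.0))
# 2700.0 ZIL
```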

Michael Brenndoerfer

Cryptonite will serve as a middleman between investors and other exchanges. Account holders will be able to buy cryptos in dollars without dealing directly with other exchanges, and manage their portfolio on a mobile device, Seely says.

At the moment, cryptocurrencies are only lightly regulated, but Cryptonite is preparing for the future. “Securities regulations are coming to the space and we welcome it,” Seely says. “Regulation will give further legitimacy to the market and we can use it as a competitive advantage when we become fully compliant.”