Laying Waste

The problem with e-waste

E-waste is the world’s fastest-growing solid waste stream, and companies are struggling with a deluge of waste produced by their manufacturing processes and products. Some have been illegally exporting their e-waste—which may contain hazardous substances that need special treatment—or illegally dumping it closer to home.

In 2021, for example, Amazon was caught trashing some 130,000 unsold or returned items in a U.K. warehouse—including laptops, smart TVs, and other electronic devices—in one week. The company acted in line with financial incentives: Destroying these goods was cheaper than storing, repurposing, or recycling them.

These clashing incentives are causing waste processing systems to fall far short of best practices, according to research co-authored by Assistant Professor Sytske Wijnsma. The paper offers recommendations to help regulators improve ineffective laws.

Simulating waste streams

It’s estimated that 75% of e-waste globally is exported, typically from the EU or U.S. to developing countries, where disposal is less regulated. Only slightly over a third of the EU’s e-waste is properly handled.

Wijnsma and her colleagues constructed a model to simulate where waste typically leaks from the waste disposal chain, incorporating two key actors: a manufacturer producing waste and a treatment operator responsible for treating waste within a country.

Waste producers either generate high-quality waste—with resale value from its component parts—or low-quality waste, which is more hazardous and less valuable post-treatment.

Clashing incentives are causing waste processing systems to fall far short of best practices.

Typically, a treatment operator sets a price to manage a batch of waste without knowing whether it’s high or low quality.

The waste producer decides whether to contract with the treatment operator or to export the waste, legally or illegally. Exported waste leaks from the system, often landing in developing countries where environmental regulations are spotty. Many countries prohibit the export of low-quality waste, while the export of higher-quality waste often remains legal.

Even if a producer contracts with a local operator, proper treatment is not guaranteed. The operator might still opt to dump the waste illegally rather than disassemble it, immobilize hazardous substances, and recycle it for revenue. “If an operator thinks there’s a very high probability of only getting bad waste, then they might be more inclined to dump it,” Wijnsma says.

Addressing system breakdowns

The model highlights two key reasons the e-waste treatment chain breaks down. First, there are few if any consequences for waste producers when their contracted treatment operators violate regulations.

Second, current export policy focuses solely on prohibiting the export of low-quality waste. As such, waste with low post-treatment value is increasingly retained locally, causing treatment operators to raise the price of treatment. That, in turn, drives the more valuable waste to be sent abroad where treatment costs are lower. Consequently, local operators are left with primarily low-quality, unprofitable waste and have more incentive to dump it.

When it comes to policymaking, Wijnsma and her colleagues say that regulations that treat high- and low-quality waste dramatically differently create perverse incentives and are likely to backfire. The researchers also recommend holding waste producers partially responsible when their downstream waste is disposed of improperly.

Is it ethical? New undergrad class trains students to think critically about artificial intelligence

Berkeley Haas undergraduate students Hunter Esqueda (left) and Sohan Dhanesh (right) are enrolled in Genevieve Smith’s Responsible AI Innovation & Management class. Photo: Noah Berger


“Classified” is an occasional series spotlighting some of the more powerful lessons being taught in classrooms around Haas.

On a recent Monday afternoon, Sohan Dhanesh, BS 24, joined a team of students to consider whether startup Moneytree is using machine learning ethically to determine creditworthiness among its customers.

After reading the case, Dhanesh, one of 54 undergraduates enrolled in a new Berkeley Haas course called Responsible AI Innovation & Management, said he was concerned about Moneytree’s unlimited access to users’ phone data and questioned whether customers even know what data the company is tapping to inform its credit scoring algorithm. Accountability is also an issue, he said, since Silicon Valley-based Moneytree’s customers live in India and Africa.

“Credit is a huge thing, and whether it’s given to a person or not has a huge impact on their life,” Dhanesh said. “If this credit card [algorithm] is biased against me, it will affect my quality of life.”

Dhanesh, who came into the class believing that he didn’t support guardrails for AI companies, says he’s surprised by how his opinions about regulation have changed. He isn’t playing devil’s advocate, he said; the credit goes to the eye-opening data, cases, and readings provided by Lecturer Genevieve Smith.

A contentious debate

Smith, who is also the founding co-director of the Responsible & Equitable AI Initiative at the Berkeley AI Research Lab and former associate director of the Berkeley Haas Center for Equity, Gender, & Leadership, created the course to teach students both sides of the AI debate.

Lecturer Genevieve Smith says the goal of her class is to train aspiring leaders to understand, think critically about, and implement strategies for responsible AI innovation and management. Photo: Noah Berger

Her goal is to train aspiring leaders to think critically about artificial intelligence and implement strategies for responsible AI innovation and management. “While AI can carry immense opportunities, it also poses immense risks to both society and business linked to pervasive issues of bias and discrimination, data privacy violations, and more,” Smith said. “Given the current state of the AI landscape and its expected global growth, profit potential, and impact, it is imperative that aspiring business leaders understand responsible AI innovation and management.”

“While AI can carry immense opportunities, it also poses immense risks to both society and business linked to pervasive issues of bias and discrimination, data privacy violations, and more.” —Genevieve Smith

During the semester, Smith covers the business and economic potential of AI to boost productivity and efficiency. But she also explores the immense potential for harm, such as the risk of embedding inequality or infringing on human rights, amplifying misinformation, eroding transparency, and disrupting the future of work and the climate.

Smith said she expects all of her students will interact with AI as they launch careers, particularly in entrepreneurship and tech. To that end, the class prepares them to articulate what “responsible AI” means and understand and define ethical AI principles, design, and management approaches. 

Learning through mini-cases

Today, Smith kicked off class with a review of the day’s AI headlines, showing an interview with OpenAI CTO Mira Murati, who was asked where the company gets the training data for Sora, OpenAI’s new generative AI model that creates realistic video from text. Murati contended that the company used publicly available data to train Sora but didn’t provide any details in the interview. Smith asked the students what they thought about that answer, noting the “huge issue” of a lack of transparency around training data, as well as copyright and consent implications.

Throughout the semester, students will develop a responsible AI strategy for a real or fictitious company. Photo: Noah Berger

Afterward, Smith introduced the topic of “AI for good” before the students split into groups to act as responsible AI advisors to three startups, described in three mini cases: Moneytree, HealthNow, and MyWeather. They worked to answer Smith’s questions: “What concerns do you have? What questions would you ask? And what recommendations might you provide?” The teams explored these questions across five core responsible AI principles, including privacy, fairness, and accountability.

Julianna De Paula, BS 24, whose team was assigned to read about Moneytree, asked if the company had adequately addressed the potential for bias when approving customers for credit (about 60% of loans in East Africa go to men, and 70% of loans in India go to men, the case noted), and whether the app’s users are giving clear consent for their data when they download it. 

Other student teams considered HealthNow, a chatbot that provides health care guidance, but with better performance for men and English speakers; and MyWeather, an app developed for livestock herders by a telecommunications firm in Nairobi, Kenya, that uses weather data from a real-time weather information service provider.

The class found problems with both startups, pointing out the potential for a chatbot to misdiagnose conditions (“Can a doctor be called as a backup?” one student asked), and the possibility that MyWeather’s dependence on a partner vendor could lead to inaccurate climate data.

Preparing future leaders

Throughout the semester, students will go on to develop a responsible AI strategy for a real or fictitious company. They are also encouraged to work with ChatGPT and other generative AI language tools. (One assignment asked them to critique ChatGPT’s own response to a question of bias in generative AI.) Students also get a window into real-world AI use and experiences through guest speakers from Google, Mozilla, Partnership on AI, the U.S. Agency for International Development (USAID), and others. 

All of the students participate in at least one debate, taking sides on topics that include whether university students should be able to use ChatGPT or other generative AI language tools for school; if the OpenAI board of directors was right to fire Sam Altman; and if government regulation of AI technologies stifles innovation and should be limited.

Smith, who has done her share of research into gender and AI, also recommended many readings for the class, including “Data Feminism” by MIT Associate Professor Catherine D’Ignazio and Emory University Professor Lauren Klein; “Unmasking AI: My Mission to Protect What Is Human in a World of Machines” by AI researcher, artist, and advocate Joy Buolamwini; “Weapons of Math Destruction” by algorithmic auditor Cathy O’Neil; and “Your Face Belongs to Us” by New York Times reporter Kashmir Hill.

Smith said she hopes that her course will enable future business leaders to be more responsible stewards and managers of such technologies. “Many people think that making sure AI is ‘responsible’ is a technology task that should be left to data scientists and engineers,” she said. “The reality is, business managers and leaders have a critical role to play as they inform the priorities and values that are embedded into how AI technology is developed and used.”

Economist Thomas Marschak, UC Berkeley researcher and teacher for 60 years, dies at 93

Professor Emeritus Thomas Marschak, an economist who influenced generations of students during almost 60 years of active research and teaching at Berkeley Haas, passed away Jan. 31 at his Oakland home. He was 93.

Professor Tom Marschak (Photo: Jane Scherr)

Marschak, the Cora Jane Flood Research Chair Emeritus, was known for his dry humor, his generous mentorship, and his research into the design of efficient organizations.

“In so many ways, Tom was way ahead of his time,” said Professor Rich Lyons, UC Berkeley Associate Vice Chancellor for Innovation & Entrepreneurship and former dean of Berkeley Haas. “When you think about the center of gravity of his work—the informational and incentive aspects of the design of efficient organizations—you realize quickly that these topics are becoming ever more important.”

As a member of Haas’ Economic Analysis & Policy Group and Operations & IT Management group, Marschak continued his boundary-spanning research into his 10th decade. Just two weeks before his death, he had a paper accepted to the Journal of Institutional and Theoretical Economics.

“Tom was one of the sharpest, most insightful, and most admirable economists I have ever seen,” said Dong Wei, PhD 20 (economics), an assistant professor of economics at UC Santa Cruz who co-authored the recent paper with Marschak. “He had a tremendously successful academic career, and at the age of 90, he was still developing novel research ideas, conducting economic analysis with advanced mathematical tools, and writing academic papers with extreme rigor and clarity.”

“In so many ways, Tom was way ahead of his time. When you think about the center of gravity of his work—the informational and incentive aspects of the design of efficient organizations—you realize quickly that these topics are becoming ever more important.” —Professor Rich Lyons

Fleeing Nazi Germany

Marschak was born in Heidelberg, Germany, in 1930. His father, Jacob, who was Jewish and from Kyiv, Ukraine (then part of Russia), was a notable figure: As a 19-year-old student opposed to Lenin and the Bolsheviks, he served as labor secretary in a separatist republic in the Caucasus that lasted less than a year. When the Bolsheviks prevailed, Jacob Marschak—who went on to become a prominent economist—fled to Berlin. There, he met Tom Marschak’s mother Marianne, a journalist who earned her PhD and became an influential psychologist, developing the Marschak Interaction Method for observing the relationship between caregivers and children.

Although Tom’s early life in Germany was sunny, the looming threat of Nazism cast a shadow. In 1933, his father insisted the family flee to the United Kingdom. It was a prescient move, as the family escaped the horrors of the Holocaust.

Marschak spoke about his father’s foresight in an oral history he recorded in 2005. “That was amazing foresight because all the other Jewish people with that kind of position said, ‘It’ll pass, it’s nothing, it’s a civilized country,’” Marschak said in the oral history. “He knew better.”

Tom Marschak in Canada in 2017. (Photo courtesy of Merideth Marschak)

In England, Jacob Marschak was made a fellow of All Souls College at Oxford University while young Tom and his sister were put into school—taught in English, a language he had to learn quickly. In 1939, as the war spread, the family decamped to the United States. As they were not British citizens, and Germany had withdrawn citizenship from Jews, they were stateless for a time. Still, with the help of Tom’s father’s academic friends, they settled in New York, where Jacob Marschak took a position at the New School for Social Research.

In 1943, the family moved to Chicago, where Marschak went to University High School—an experimental school attached to the University of Chicago where students could graduate high school in 10th grade and get a bachelor’s degree by 12th grade. The Marschak home during that period was host to a circle of prominent émigrés, including Leo Szilard, the physicist who discovered the nuclear chain reaction process; atomic physicist Hyman Goldsmith; violinist Isaac Stern; and Edward Teller, the father of the hydrogen bomb.

By age 17, Marschak was a college graduate, with honors. He landed on economics as his field of study and headed to Stanford for his doctorate, followed by a job at RAND Corp. in Santa Monica under Charlie Hitch (later president of the University of California).

In 1960, he was hired as an associate professor at Berkeley Haas. “Things were very different then,” he recalled later. “You dressed in a white shirt and a tie, I can’t believe that. I was one of the very first to grow a beard—almost unheard of.”

Marschak lived in Berkeley with his first wife, Dorothy, and their children Debbie, Madeline, and Timothy. In 1968, Marschak’s life was scarred by tragedy when his eldest daughter Debbie, age 10, died in a car accident.

In 1979, he remarried, and he and his wife Merideth had sons Anthony and Daniel. He was a devoted and deeply involved father. “He took us to film festivals, summer backpacking and river trips, enrolled us in summer programs, monitored our education, and kept us in close contact with his side of the family,” recalled daughter Madeline Marschak. “He offered all four of his children unconditional love and support equally. …Tom Marschak was my hero and the best father anyone could hope to have.”

Academic boundary spanner

Academically, Marschak made his mark in economics theory, studying information gathering, information technology, and network mechanisms—complex work that was ahead of its time, Lyons said.

“Tom was an intellectual boundary-spanner from the get-go, having spanned two academic groups at Haas and having spanned in his work even more areas than these two groups traditionally have done,” Lyons said. “His work covered IT, data science, use of data to drive enterprise value: These are some of the defining issues of our current time.”

Marschak was the co-winner of the Koç University prize in 1996. He was an elected fellow of the Econometric Society and the recipient of a Fulbright-Hays research award, a Guggenheim Fellowship, and a Ford Foundation faculty research fellowship.

“Much of Tom’s work addressed foundational issues of organizational design, such as how the degrees of hierarchy or decentralization affect an organization’s communication costs and ability to achieve its objectives,” said Professor Emeritus Michael Katz, Sarin Chair Emeritus in Strategy and Leadership. “Although this work was abstract, it has important implications for business organizations.”

‘Dry and delicious humor’

Tom Marschak with Merideth in 2017. (Photo courtesy of Merideth Marschak)

His colleagues at Haas remember him as a generous instructor with a wry sense of humor. “Tom taught microeconomics to a generation of Haas undergraduates,” said Professor Emeritus Jonathan Leonard, George Quist Chair in Business Ethics. “If you could get him to raise an eyebrow, you knew you had said something interesting.”

Merideth Marschak also recalled her husband’s “dry and delicious” humor, as well as his love for outdoor hiking adventures and walking the Bay Area hills up until his last months. He was “unbeatable at trivia and could summon up historic facts and arcane knowledge on request” and also loved to cook for friends and family. “A crowded dinner table was the best fun,” she added. He was delighted when he became a grandfather at age 88.

“He was incredibly generous with his insight and his kindness,” Merideth Marschak said. “He taught us all the value of slowing down, enjoying life, and keeping an open mind.”

Marschak is survived by his wife, Merideth; his children, Madeline, Timothy, Anthony, and Daniel; his granddaughters Lucy and Alice; and nieces Emily and Julie Jernberg. He was predeceased by his sister, Ann Jernberg.

As e-waste streams grow, regulations are backfiring, study finds

A broken cell phone lies in a collection container for hazardous materials at a waste sorting facility in Germany. (Photo: Jens Büttner/picture-alliance/dpa/AP Images)

E-waste is the world’s fastest-growing solid waste stream, and companies are struggling with a deluge of waste produced by their manufacturing processes and products. Some have been illegally exporting their e-waste—which may contain hazardous substances that need special treatment—or illegally dumping it in landfills closer to home.

In 2021, for example, Amazon was caught destroying some 130,000 unsold items in a U.K. warehouse over the course of one week. Among the trashed merchandise were smart TVs, laptops, drones, hairdryers, computer drives, and other electronic devices.

The company acted in line with financial incentives: It was cheaper to destroy these goods than store, repurpose, or properly recycle them.

Yet recovering useful materials like precious metals from discarded electronics can reduce mining and forest degradation. It can also allow many jurisdictions to reduce their dependence on raw materials imports from other countries.

These clashing incentives are causing waste processing systems to fall far short of best practices, according to a new paper co-authored by Assistant Professor Sytske Wijnsma and published in the journal Management Science. She and her fellow researchers—Dominique Olié Lauga of the University of Cambridge and L. Beril Toktay of the Georgia Institute of Technology—considered the impacts of various policy interventions on waste treatment and disposal and offered practical recommendations to help regulators better align incentives and improve ineffective laws.

“Research on these systems is important because they are highly complex and not very transparent,” Wijnsma says. “Often, well-intended policy interventions can backfire.”

Simulating waste streams

It’s estimated that 75% of e-waste globally is exported, typically from the EU or the U.S. to developing countries, where recycling is less regulated. Only slightly over a third of e-waste in the EU is handled in line with waste regulations.

To simulate the confounding dynamics within waste processing chains, Wijnsma and her colleagues constructed a model. They drew from real-life scenarios shared with Wijnsma by Europol, the European law enforcement agency responsible for recommending and enforcing several waste management policies in the EU. The model was intended to shed more light on where in the waste chain incentives are misaligned and at which stages waste can leak from the system through local dumping or export to developing countries.

The simulated waste chain contains two key actors: a manufacturer producing waste and a treatment operator responsible for undertaking waste treatment within a country.

Within the model, waste producers either generate high-quality waste—which can create more revenue for treatment operators because of the high resale value of its component parts—or low-quality waste, which comes with higher hazard levels and lower revenue post-treatment.

The simulated waste chain ferries waste producers and waste treatment operators through three stages, representing a common real-world progression. First, a treatment operator sets a price to treat a batch of waste from a producer. Importantly, the operator doesn’t necessarily know whether the waste will be of high or low quality—which has significant repercussions. If the quality is likely to be low, the operator can’t count on recouping any resale value and will want to charge a higher price to treat the batch. If, on the other hand, the quality is expected to be high, the operator can charge a lower price to process and treat it, because it will recoup some value.

Next, the waste producer considers the quoted price and decides whether to contract with the treatment operator or to export the waste—either legally or illegally. Currently, many regulations prohibit the export of low-quality waste, while the export of higher-quality waste often remains legal. As a result, exporting high-quality waste is relatively straightforward and inexpensive, while exporting lower-quality junk requires an expensive and risky circumvention of laws. Most of the electronic waste that currently leaks from the system does so through export.

Finally, if a treatment operator has been contracted, it can opt either to treat the waste or to dump it illegally. The difficulty in that decision lies in the fact that treatment operators typically have to quote a price while the contents of the batch of waste are still a mystery to them.

“You can imagine that operators get containers full of waste and don’t necessarily know the exact quality,” Wijnsma says. “They could sort the waste, immobilize hazardous substances, and recover as much valuable materials as possible, but this is not a profitable endeavor if the waste turns out to be of low-quality.” The decision thus largely depends on a best guess, based on past experiences and market dynamics, Wijnsma explains: “If an operator thinks there’s a very high probability of only getting bad waste, they could be less inclined to properly treat it.”
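The incentive logic described above can be sketched as a toy expected-value calculation. This is a simplified illustration with made-up numbers, not the paper's actual model; all parameters (resale values, costs, penalties) are hypothetical.

```python
def operator_payoff(treat_price, p_high, resale_high=10.0, resale_low=0.0,
                    treatment_cost=8.0, dump_penalty=3.0):
    """Expected payoff to a treatment operator who has been paid `treat_price`.

    The operator quotes its price before knowing the batch quality, so the
    treat-or-dump choice rests on p_high: its belief that the waste is
    high quality (and so carries resale value).
    """
    expected_resale = p_high * resale_high + (1 - p_high) * resale_low
    payoff_if_treat = treat_price + expected_resale - treatment_cost
    payoff_if_dump = treat_price - dump_penalty  # expected fine for illegal dumping
    action = "treat" if payoff_if_treat >= payoff_if_dump else "dump"
    return action, payoff_if_treat, payoff_if_dump

# When mostly high-quality waste stays local, proper treatment pays...
print(operator_payoff(treat_price=5.0, p_high=0.8))  # → ('treat', 5.0, 2.0)

# ...but if export rules siphon off the high-quality waste, the same operator
# expects mostly low-quality batches, and dumping becomes the better bet.
print(operator_payoff(treat_price=5.0, p_high=0.1))  # → ('dump', -2.0, 2.0)
```

Nothing about the operator changes between the two calls except its belief about batch quality, which is exactly the cascading effect the researchers describe: policies that drain high-quality waste from the local system tip operators toward dumping.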

Addressing system breakdowns

The model highlights two key reasons the waste treatment chain breaks down.

  • First, it’s relatively easy for treatment operators to receive payment for treating waste while in fact dumping it—an example of moral hazard, i.e., when an actor faces little or no potential consequence for unwanted behavior.
  • Second, export policy has focused primarily on only prohibiting the export of low-quality waste. This can create situations in which the more valuable, high-quality waste is sent abroad, where treatment is cheaper. The result is that local recycling programs and treatment operators are left with mostly low-quality waste, which creates cascading effects. Operators have a greater incentive to dump the waste they receive since it’s very likely not profitable to treat.

Wijnsma and her colleagues formulated this second dynamic into one of their key recommendations: Regulations that treat high- and low-quality waste dramatically differently are likely to backfire. She calls this pattern the “waste haven effect,” wherein waste exports tend to flow to the countries where regulations and costs are lowest.

“Because of that, there’s been a large focus on trying to even out regulations between countries,” Wijnsma explains. A similar phenomenon occurs when regulations focus on low-quality waste and leave high-quality waste unregulated. “If you strengthen regulations for one waste category too much compared to another, then you also create perverse incentives.”

Another of the research team’s policy recommendations seeks to address the moral hazard problem by holding waste producers responsible when their downstream waste is disposed of improperly.

Notably, new laws in the EU and some U.S. states are trying to enforce that very shift. Extended Producer Responsibility (EPR) regulations place responsibility for the proper management of post-use products that contain hazardous materials with the producers that made them. In practice, this has required producers to simply contract with treatment operators to deal with their waste. But Wijnsma says that the paper’s findings suggest the laws should go even further.

“A still-nascent practice…is fining the manufacturers when they contract with treatment operators who are found to be engaged in dumping,” Wijnsma says. In other words, producers must be held accountable for not only contracting with a treatment operator, but for contracting with a trustworthy one. “Our results support expanding regulations where the producer can be held (partially) responsible for downstream violations,” she says.

Read the paper:  

Treat, Dump, or Export? How Domestic and International Waste Management Policies Shape Waste Chain Outcomes
By Sytske Wijnsma, Dominique Olié Lauga, and L. Beril Toktay
Management Science, December 2023

Augmented Intelligence

The generative artificial intelligence revolution is already happening in the workplace—and it looks nothing like you’d expect.

Since ChatGPT went mainstream this year, many of the news stories about generative artificial intelligence have been full of gloom, if not outright panic. Cautionary tales abound of large language models (LLMs), like ChatGPT, stealing intellectual property or dumbing down creativity, if not putting people out of work entirely. Other news has emphasized the dangers of generative AI—which is capable of responding to queries by generating text, images, and more based on data it’s trained on—such as its propensity to “hallucinate” wrong information or inject bias and toxic content into chats, a potential legal and PR nightmare.

Beyond these legitimate fears, however, many companies are adopting generative AI at a fast clip—and uses inside firms look different from the dire predictions. Companies experimenting with AI have discovered a powerful tool in sales, software development, customer service, and other fields.

On the leading edge of this new frontier, many Berkeley Haas faculty and alumni are discovering how it can augment human intelligence rather than replace human workers, aiming toward increased innovation, creativity, and productivity.

“We’re used to thinking of AI as something that can take repetitive tasks, things humans can do, and just do them a little faster and better,” says Jonathan Heyne, MBA 15, chief operating officer of DeepLearning.AI, an edtech company focused on AI training, who also teaches entrepreneurship at Haas. “But generative AI has the ability to create things that don’t exist—and do it through natural language, so not only software programmers or data scientists can interact with it. That makes it a much more powerful tool.”

More jobs, new jobs

Those capabilities make gen AI ideal for summarizing information, extracting insights from data, and quickly suggesting next steps. A report by a team of researchers from OpenAI, OpenResearch, and the University of Pennsylvania concluded that for 80% of workers, LLMs could affect at least 10% of their tasks, while 20% of workers could see at least 50% of their tasks impacted. Another report by Goldman Sachs predicts two-thirds of jobs could see some degree of AI automation, with gen AI in particular performing a quarter of current work, costing up to 300 million jobs in the U.S. and Europe alone. Yet, the report adds, worker displacement for automation “has historically been offset by creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth.”

That’s in line with the findings of Assistant Professor Anastassia Fedyk, whose research has found that AI has been leading to increased sales and employment. In a forthcoming paper in the Journal of Financial Economics, Fedyk and colleagues found that firms’ use of AI led to increased growth for companies through more innovation and creation of new products, which increased both sales and hiring.

Fedyk says that industries whose tasks are especially amenable to AI, such as auditing, could see workforce reductions over time. For most fields, however, she predicts that the workforce will stay steady while its composition changes. Her new National Bureau of Economic Research working paper studying employment at companies investing in AI found that they were looking for a workforce that was even more highly skilled, highly educated, and technical than other firms. “We’re seeing a lot of growth in jobs like product manager—jobs that help to manage the increase in product varieties and increase in sales,” Fedyk says.

An explosion of possibilities

Company conversations about gen AI exploded this spring, says Amit Paka, MBA 11, founder and COO of Fiddler AI, a five-year-old startup that helps firms build trust into AI by monitoring its operation and explaining its black-box decisions. “Generative AI became a board-level conversation,” he says, “even if folks in the market don’t know how they’ll actually implement it.” For now, firms seem more comfortable using gen AI internally rather than in customer-facing roles where it could open them up to liability if something goes wrong.

Obvious applications are creative—for example, using it to generate marketing copy or press releases. But the most common implementations, says Paka, are internal chatbots to help workers access company data, such as human resources policies or industry-specific knowledge bases. More sophisticated implementations are models trained from scratch on a set of data, like Google’s Med-PaLM, an LLM to answer medical questions, and Bloomberg’s BloombergGPT, trained on 40 years of financial data to answer finance questions. Deciding what type of LLM to implement in a company is a matter of first figuring out the problem you need to solve, Paka says. “You have to find a use case where you have a pain point and where an LLM will give you value.”

For now, firms seem more comfortable using gen AI internally rather than in customer-facing roles where it could open them up to liability if something goes wrong.

The power of video

While many companies are already using gen AI to analyze and generate text, video applications are next. Sunny Nguyen, MBA 18, is lead product manager for multimodal AI at TwelveLabs, which recently launched Pegasus, a video-language foundation model that uses gen AI to understand video and turn its content into summaries, highlights, or customized output. “Video understanding is an extremely complex problem due to the multimodality aspect, and lots of companies still treat videos as a bunch of images and text,” Nguyen says. “Our proprietary multimodal AI is aimed at solving this challenge and powering many applications.” For example, sports leagues could use the technology to generate game highlights for fan engagement; online-learning publishers could generate chapters or highlights instantly; and police officers could get accurate, real-time reports of suspicious activity.

TwelveLabs is launching an interactive chat interface where users can ask questions in an ongoing dialogue about a video. “Just like ChatGPT but for video,” Nguyen says.

Norberto Guimaraes, MBA 09, cofounder and CEO of Talka AI, is focusing video analysis on business-to-business sales conversations, using gen AI to analyze not just verbal content but nonverbal cues as well. Guimaraes says nonverbal factors can account for up to 80% of the impact made in a conversation. Talka’s technology uses AI to analyze 80 different signals, including facial expressions, body language, and tone of voice, to judge whether a conversation is achieving its purpose—usually closing a sale.

Guimaraes says the technology could be used to train salespeople to communicate more effectively and discern clients’ needs. “We’ll be better able to understand what are the key frustrations from your customer, whether you’re taking into account what they’re saying, and whether or not the conversation is landing,” he says.

Talka AI is currently testing the technology with a “very large” company that is “one of the best known for sales,” Guimaraes says. The system currently contains 70,000 conversations and has successfully predicted whether a sale will occur 85% of the time.

Sales and service

Companies are also exploring the use of AI to take part in simple sales. Faculty member Holly Schroth, a distinguished teaching fellow who studies negotiations and influence, has consulted with the company Pactum, which has been working on an AI tool to manage low-level sales—repetitive negotiations involving just a few issues, such as contract length, quantity, and price. In initial studies, Pactum has found that people prefer talking to AI rather than a human. “People like talking with a bot because it’s kinder and friendlier,” says Schroth, “because it can be programmed that way.”

Specifically, AI bots can be programmed to use language that acknowledges what the other side is saying. “Humans sometimes get frustrated and may not be aware of the language they use that may be offensive,” says Schroth. “For example, ‘with all due respect’ is at the top of the rude list.” People may feel like they can get a better deal with AI, she says, since the bot will work to maximize value for both sides, while a human may not be able to calculate best value or may let emotions interfere.

AI is also perfectly positioned to be a coach, says Assistant Professor Park Sinchaisri. He’s explored ways AI can help people work more efficiently, whether they are Uber drivers or physicians. In today’s hybrid environment, where workers are often remote without the benefit of on-the-job training or peer-to-peer learning, a bot can learn best practices from colleagues and identify useful advice to share with others. AI could also help human workers redistribute tasks when a team member leaves. However, Sinchaisri has found that while AI provides good suggestions, humans can struggle to adopt them. In his working paper on AI for human decision-making, workers accepted only 40% of machine-generated suggestions compared to 80% of advice from other humans, saying they didn’t believe the AI advice was effective or didn’t understand how to incorporate it into their workflow.

Sinchaisri is studying ways to make coaching more effective—either by training the AI to give only as much advice as the person might accept or by allowing for human nature. “Research has shown that humans tend to take more advice if they can modify and deviate from it a little,” he says. “Good advice is often counterintuitive, meaning it is difficult for humans to figure it out on their own; AI needs to learn how to effectively deliver such advice to humans to reap its full potential.”

Bias and ethics

As powerful and versatile as AI can be, the warnings are real. Trained on the vastness of the internet, large language models pick up toxic content and racist and sexist language. Then there’s the real problem of hallucinations, in which AI output seems believable but includes false information.

Biases are baked into LLMs, says Merrick Osborne, a postdoc at Haas studying racial equity in business. In a new paper on bias and AI, Osborne explores how biased information results not only from the data a model is trained on but also from the engineers themselves, with their natural biases, and from the human annotators whom engineers employ to fine-tune and subjectively label data.

“You need to create a culture of accepting that generative AI is useful in many stages of work and encouraging people to be transparent with their co-workers about how they’re using it.”

—David Evan Harris

Certainly more diversity in the field of engineering would help. But it’s important, Osborne argues, that engineers and annotators undergo diversity training to make them more aware of their own biases, which in turn could help them train models that are more sensitive to equal representation among groups. Computer programmers have begun implementing more formal techniques in a new field called AI fairness, which employs mathematical frameworks based on social sciences to de-bias embedded data. “We aren’t born knowing how to create a fair machine-learning model,” Osborne says. “It’s knowledge we have to acquire.”

Another way Osborne suggests addressing both bias and hallucinations is to call in outside help. Vijay Karunamurthy, MBA 11, is doing just that as field CTO at Scale AI, a seven-year-old startup that’s worked to make models safer and fairer. “People understand that models come out of the box without any sensitivity or human values, so these base models are pretty dangerous,” he says. Scale AI employs teams of outside experts, including cognitive psychologists with backgrounds in health and safety, who can help decide what information would be too dangerous to include in an LLM—everything from teaching how to build a bomb to telling a minor how to illegally buy alcohol. The company also employs social psychologists, who can spot bias, and subject experts, such as PhDs in history and philosophy, to help correct hallucinations.

Of course, it’s not feasible to have hundreds of PhDs constantly correcting models, so the company uses the information to create what’s called a critique model, which can train the original model and make the whole system self-correcting.

For companies adopting AI, it’s important to develop internal processes to help guide ethical use by employees. One of those guidelines, says faculty member David Evan Harris, a chancellor’s public scholar, is disclosure. “People have a right to know when they’re seeing or interacting with generative AI content,” says Harris, who was formerly on the civic integrity, misinformation, and responsible AI teams at Meta. That goes for both internal use and external use with customers. “When you receive content from a human you probably have more reason to trust it than when it’s coming from AI because of the propensity of the current generation of AI to hallucinate.” That’s especially true, he says, when dealing with sensitive data, like financial or medical information.

Companies may also want to control how gen AI is used internally. For example, Harris says there have been numerous cases in Silicon Valley of managers using it to write peer reviews for regular performance evaluations. While a tempting shortcut, the practice could result in boilerplate verbiage or, worse, wrong information. Harris says it’s better to come up with new strictures for writing reviews, such as using bullet points. On the other hand, banning AI is unlikely to work. “You need to create a culture of accepting that generative AI is useful in many stages of work and encouraging people to be transparent with their co-workers about how they’re using it,” he says.

One practice to avoid when crafting internal policies around gen AI is limiting governance programs to the letter of the law, since the law tends to lag behind ethics, says Ruby Zefo, BS 85, chief privacy officer at Uber. “The law should be the low bar—because you want to do what’s right,” says Zefo. “You have to create policies and programs and documentation that will put you on the right side of the laws you know are coming but aren’t yet here.”

For one, that means developing guidelines around personal or confidential data—being sure not to use others’ personal or proprietary information to train a model and not to feed such information into a model that is, or might become, public. When running algorithms on customers’ personal data, she adds, it’s important to allow for human review. Companies should also limit access to internal gen AI models to those who have a legitimate purpose. More than anything, Zefo says, flexibility is key while the technology is still being developed. “You have to have a process where you’re always evaluating your guidelines, always looking to define what’s the highest risk.”

Planning for the future

That need to stay nimble extends to the workforce as well, says Heyne. In the past, AI was mostly used by technical workers programming models—but gen AI will be used by myriad employees, including creative, sales, and customer-service workers. As gen AI develops, their day-to-day work will likely change. For example, a sales agent interacting with a bot one day may be overseeing a bot negotiating with an AI counterpart the next. In other words, sales or procurement functions in an organization will remain but will look different. “We have to constantly think about the tasks we need to train for now to get the value that is the goal at the end,” Heyne says. “It’s a strategic imperative for any company that wants to stay in business.”

“The law should be the low bar—because you want to do what’s right. You have to create policies and programs and documentation that will put you on the right side of the laws you know are coming but aren’t yet here.”

—Ruby Zefo, BS 85

It’s also an education that needs to start much earlier in life, says Dimple Malkani, BS 98. She founded Glow Up Tech to prepare teenage girls for the tech industry by introducing them to successful female leaders in Silicon Valley. The skills necessary to succeed in a gen AI world aren’t necessarily those emphasized previously in tech, or even in previous iterations of AI, says Malkani, who spent decades working in marketing and business development. “The core skills these girls should be getting when they go to college aren’t just data science but strategy and creativity as well—to figure out what new product innovation we should create,” she says.

One thing she’s sure of as she talks with the next generation about gen AI is that, unlike current workers, they are ready to dive in. “Gen Z is very comfortable using gen AI,” she says. “In fact, they’ve already embraced it and expect it to be part of their working futures.”

Nick Sonnenberg, MFE 07
CEO and Founder, Leverage

If you’ve ever felt overwhelmed with work, you’re not alone. Nick Sonnenberg heard the complaint so often that he wrote a book to solve the problem: Come Up for Air: How Teams Can Leverage Systems and Tools to Stop Drowning in Work.

It provides a framework for eliminating unnecessary tasks and focusing instead on work that drives results. Along with his operational efficiency platform, Leverage, Sonnenberg is reinventing the way people get things done.

Before he became an efficiency expert, Sonnenberg was barely staying afloat himself. He’d originally started a freelancer marketplace called Leverage that scaled very quickly. Then, his business partner walked out, jeopardizing the company’s future. Sonnenberg soldiered on, quickly noticing how much inefficiency there was, specifically in three areas: communication, planning, and resources.

“To have any chance of saving the company, I needed to get some time back,” he says. “Focusing on those buckets, things started turning around.”

Soon, people began contacting him for organizational advice. Eventually, he pivoted the company to become an efficiency training firm.

Sonnenberg says his success with Leverage wasn’t a case of getting lucky when his back was against the wall. He credits his MFE training and his years as a high-frequency trader, where he learned every second matters.

“Being a financial engineer, I’m programmed to find pattern recognition,” Sonnenberg says. “I started connecting the dots that there was this big opportunity to help a lot of people hopefully save millions of hours by teaching best practices of how to leverage all these amazing systems and tools, like Slack and Asana.”

Keeping Company

Know your gig workers to retain them

When done right, the gig economy can mutually benefit companies and workers. Companies can tap into deep and vast labor pools, and workers can create their own schedules. But such flexibility makes it hard for gig platforms to commit to a service capacity. What incentives, then, can entice workers to work more hours more often?

A recent study co-authored by Assistant Professor Park Sinchaisri and published in Manufacturing & Service Operations Management sought to answer that question.

The researchers utilized data from a U.S.-based ride-hailing company that included 358 days of driving activities and financial incentives for thousands of New York City drivers between 2016 and 2017. Perhaps not surprisingly, they found that drivers work toward their income goals and are less likely to work after meeting them.

More surprisingly, Sinchaisri found that workers who have previously worked longer shifts are more likely to start a new shift or work longer than drivers who have worked less. This finding goes against previous research on taxi drivers, who have more of a “time-targeting behavior.”

Sinchaisri says that gig platforms should ask what specific goals workers have and make targeted adjustments. “Once you know your workers’ goals, you can think of better ways to incentivize them,” he says.

There’s No Place Like Work

How place identity enhances engagement

Post-pandemic workspaces have become increasingly fluid, and companies are trying out hot desks and hoteling spaces as they struggle to entice workers back to the office. But new research suggests that leaders wanting to build employee engagement should think less about rearranging the furniture and more about how employees relate the office space to their own work.

“When people feel a sense of self-esteem and distinctiveness derived from their workspace, we found it enhances their engagement,” says professional faculty member Brandi Pearce. “It also increases collaboration and their commitment to the organization.”

Pearce and colleagues from Stanford and Pepperdine universities studied “place identity,” as they refer to this sense of connection, at a software company transitioning workers at sites worldwide from traditional offices to open-plan innovation centers.

The research, published in Organizational Dynamics, found that whether people accepted or rejected the innovation centers didn’t align with their work functions or professional backgrounds, nor with age, gender, location, or other factors. “What seemed to matter more than the space itself was how people felt the space connected to them personally, positively differentiated them, and reflected a sense of belonging to something meaningful to them,” Pearce says.

“When people feel a sense of self-esteem and distinctiveness derived from their workspace…it enhances their engagement.”

What’s more, workers with a distinctive sense of place identity collaborated more actively with one another and were more engaged and committed to the organization.

So how can leaders cultivate place identity? Whether the setting is physical, hybrid, or virtual, Pearce suggests three best practices:

Broadcast the vision.

No matter the setup, leaders should clearly communicate the purpose of the space and what kinds of work are best done in the various workplaces: brainstorming sessions, workshops, and other collaborative tasks in work offices, for example, and focused time in home offices. To help define virtual workspaces, leaders can state whether video conferences are meant for efficiency or connection.

Model enthusiasm.

Equally critical to visioning is the way leaders convey a positive attitude about the space. In a hybrid setting, leaders can express enthusiasm by holding in-person meetings on in-office days and visibly blocking calendar time during remote-work days for solitary work.

Empower employees.

The researchers found place identity was highest when employees were encouraged to tailor their spaces to suit their needs and preferences. In one location, for example, employees were given resources to co-create furniture and other artifacts, enhancing their personal connection to the office. Remote workers could be given materials to customize their home spaces to create a connection to their team or organization, or—if they do visit the office—to create something with co-workers to bring home.