Laying Waste

The problem with e-waste

Pile of e-waste, like computers and circuit boards, in a landfill.

E-waste is the world’s fastest-growing solid waste stream, and companies are struggling with a deluge of waste produced by their manufacturing processes and products. Some have been illegally exporting their e-waste—which may contain hazardous substances that need special treatment—or illegally dumping it closer to home.

In 2021, for example, Amazon was caught trashing some 130,000 unsold or returned items in a U.K. warehouse—including laptops, smart TVs, and other electronic devices—in one week. The company acted in line with financial incentives: Destroying these goods was cheaper than storing, repurposing, or recycling them.

These clashing incentives are causing waste processing systems to fall far short of best practices, according to research co-authored by Assistant Professor Sytske Wijnsma. The paper offers recommendations to help regulators improve ineffective laws.

Simulating waste streams

It’s estimated that 75% of e-waste globally is exported, typically from the EU or U.S. to developing countries, where disposal is less regulated. Only slightly over a third of the EU’s e-waste is properly handled.

Wijnsma and her colleagues constructed a model to simulate where waste typically leaks from the waste disposal chain, incorporating two key actors: a manufacturer producing waste and a treatment operator responsible for treating waste within a country.

Waste producers either generate high-quality waste—with resale value from its component parts—or low-quality waste, which is more hazardous and less valuable post-treatment.

Clashing incentives are causing waste processing systems to fall far short of best practices.

Typically, a treatment operator sets a price to manage a batch of waste without knowing whether it’s high or low quality.

The waste producer decides whether to contract with the treatment operator or to export the waste, legally or illegally. Exported waste leaks from the system, often landing in developing countries where environmental regulations are spotty. Many countries prohibit exporting low-quality waste, while exporting higher-quality waste often remains legal.

Even if a producer contracts with a local operator, proper treatment is not guaranteed. The operator might still opt to dump the waste illegally rather than disassemble it, immobilize hazardous substances, and recycle it for revenue. “If an operator thinks there’s a very high probability of only getting bad waste, then they might be more inclined to dump it,” Wijnsma said.
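That intuition can be made concrete with a small back-of-the-envelope sketch in Python. This is not the researchers’ model; the numbers (treatment price, recycling revenue, treatment cost, dumping penalty, detection probability) are hypothetical and chosen only to illustrate how an operator’s expected profit from proper treatment can fall below the expected profit from illegal dumping once the share of low-quality waste gets high enough.

# Illustrative sketch only -- hypothetical payoffs, not the paper's actual model.

def operator_expected_profit(p_low, price, dump):
    """Expected profit for an operator that accepted a batch at a fixed
    price without knowing its quality.

    p_low -- probability the batch is low-quality (hazardous, low resale value)
    price -- fee the operator charges the waste producer
    dump  -- True: dump illegally; False: treat properly
    """
    recycle_revenue = {"high": 8.0, "low": 1.0}   # resale value after treatment (assumed)
    treatment_cost = {"high": 3.0, "low": 6.0}    # low-quality waste costs more to treat (assumed)
    dump_cost = 0.5                                # illegal dumping is cheap...
    fine, detection_prob = 4.0, 0.3                # ...but carries an expected penalty

    def profit(quality):
        if dump:
            return price - dump_cost - detection_prob * fine
        return price + recycle_revenue[quality] - treatment_cost[quality]

    return p_low * profit("low") + (1 - p_low) * profit("high")


if __name__ == "__main__":
    for p_low in (0.2, 0.5, 0.9):
        treat = operator_expected_profit(p_low, price=2.0, dump=False)
        dump = operator_expected_profit(p_low, price=2.0, dump=True)
        choice = "dump" if dump > treat else "treat"
        print(f"P(low-quality)={p_low:.1f}: treat={treat:.2f}, dump={dump:.2f} -> {choice}")

With these made-up numbers, treating remains more profitable when low-quality waste is rare, but once the operator expects mostly low-quality batches, illegal dumping becomes the profit-maximizing choice.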

Addressing system breakdowns

The model highlights two key reasons the e-waste treatment chain breaks down. First, there are few if any consequences for waste producers when their contracted treatment operators violate regulations.

Second, current export policy focuses solely on prohibiting the export of low-quality waste. As such, waste with low post-treatment value is increasingly retained locally, causing treatment operators to raise the price of treatment. That, in turn, drives the more valuable waste to be sent abroad where treatment costs are lower. Consequently, local operators are left with primarily low-quality, unprofitable waste and have more incentive to dump it.

When it comes to policymaking, Wijnsma and her colleagues say that regulations that treat high- and low-quality waste dramatically differently create perverse incentives and are likely to backfire. The researchers also recommend holding waste producers partially responsible when their downstream waste is disposed of improperly.

Is it ethical? New undergrad class trains students to think critically about artificial intelligence

Berkeley Haas undergraduate students Hunter Esqueda (left) and Sohan Dhanesh (right) are enrolled in Genevieve Smith’s Responsible AI Innovation & Management class. Photo: Noah Berger


“Classified” is an occasional series spotlighting some of the more powerful lessons being taught in classrooms around Haas.

On a recent Monday afternoon, Sohan Dhanesh, BS 24, joined a team of students to consider whether startup Moneytree is using machine learning ethically to determine creditworthiness among its customers.

After reading the case, Dhanesh, one of 54 undergraduates enrolled in a new Berkeley Haas course called Responsible AI Innovation & Management, said he was concerned about Moneytree’s unlimited access to users’ phone data and questioned whether customers even know what data the company is tapping to inform its credit-scoring algorithm. Accountability is also an issue, he said, since Silicon Valley-based Moneytree’s customers live in India and Africa.

“Credit is a huge thing, and whether it’s given to a person or not has a huge impact on their life,” Dhanesh said. “If this credit card [algorithm] is biased against me, it will affect my quality of life.”

Dhanesh, who came into the class opposed to guardrails for AI companies, said he’s surprised by how his opinions about regulation have changed. He isn’t playing devil’s advocate, he said; the shift comes from the eye-opening data, cases, and readings provided by Lecturer Genevieve Smith.

A contentious debate

Smith, who is also the founding co-director of the Responsible & Equitable AI Initiative at the Berkeley AI Research Lab and former associate director of the Berkeley Haas Center for Equity, Gender, & Leadership, created the course with an aim to teach students both sides of the AI debate.

Lecturer Genevieve Smith says the goal of her class is to train aspiring leaders to understand, think critically about, and implement strategies for responsible AI innovation and management. Photo: Noah Berger

Her goal is to train aspiring leaders to think critically about artificial intelligence and implement strategies for responsible AI innovation and management. “While AI can carry immense opportunities, it also poses immense risks to both society and business linked to pervasive issues of bias and discrimination, data privacy violations, and more,” Smith said. “Given the current state of the AI landscape and its expected global growth, profit potential, and impact, it is imperative that aspiring business leaders understand responsible AI innovation and management.”

“While AI can carry immense opportunities, it also poses immense risks to both society and business linked to pervasive issues of bias and discrimination, data privacy violations, and more.” – Genevieve Smith

During the semester, Smith covers the business and economic potential of AI to boost productivity and efficiency. But she also explores the immense potential for harm, such as the risk of embedding inequality, infringing on human rights, amplifying misinformation, reducing transparency, and affecting the future of work and the climate.

Smith said she expects all of her students will interact with AI as they launch careers, particularly in entrepreneurship and tech. To that end, the class prepares them to articulate what “responsible AI” means and to understand and define ethical AI principles, design, and management approaches.

Learning through mini-cases

Today, Smith kicked off class with a review of the day’s AI headlines, showing an interview with OpenAI CTO Mira Murati, who was asked where the company gets its training data for Sora, OpenAI’s new generative AI model that creates realistic video from text. Murati contended that the company used publicly available data to train Sora but didn’t provide any details in the interview. Smith asked the students what they thought about her answer, noting the “huge issue” of a lack of transparency around training data, as well as copyright and consent implications.

Throughout the semester, students will develop a responsible AI strategy for a real or fictitious company. Photo: Noah Berger

Afterward, Smith introduced the topic of “AI for good” before the students split into groups to act as responsible AI advisors to three startups, described in mini cases for Moneytree, HealthNow, and MyWeather. They worked to answer Smith’s questions: “What concerns do you have? What questions would you ask? And what recommendations might you provide?” The teams explored these questions across five core responsible AI principles, including privacy, fairness, and accountability.

Julianna De Paula, BS 24, whose team was assigned to read about Moneytree, asked whether the company had adequately addressed the potential for bias when approving customers for credit (about 60% of loans in East Africa and 70% of loans in India go to men, the case noted), and whether the app’s users give clear consent for the use of their data when they download it.

Other student teams considered HealthNow, a chatbot that provides health care guidance but performs better for men and English speakers; and MyWeather, an app developed for livestock herders by a telecommunications firm in Nairobi, Kenya, that relies on data from a real-time weather information service provider.

The class found problems with both of these startups as well, pointing out the potential for the chatbot to misdiagnose conditions (“Can a doctor be called as a backup?” one student asked) and the possibility that MyWeather’s dependence on a partner vendor could lead to inaccurate climate data.

Preparing future leaders

Throughout the semester, students will go on to develop a responsible AI strategy for a real or fictitious company. They are also encouraged to work with ChatGPT and other generative AI language tools. (One assignment asked them to critique ChatGPT’s own response to a question of bias in generative AI.) Students also get a window into real-world AI use and experiences through guest speakers from Google, Mozilla, Partnership on AI, the U.S. Agency for International Development (USAID), and others. 

All of the students participate in at least one debate, taking sides on topics that include whether university students should be able to use ChatGPT or other generative AI language tools for school; whether the OpenAI board of directors was right to fire Sam Altman; and whether government regulation of AI technologies stifles innovation and should be limited.

Smith, who has done her share of research into gender and AI, also recommended many readings for the class, including “Data Feminism” by MIT Associate Professor Catherine D’Ignazio and Emory University Professor Lauren Klein; “Unmasking AI: My Mission to Protect What Is Human in a World of Machines” by AI researcher, artist, and advocate Joy Buolamwini; “Weapons of Math Destruction” by algorithmic auditor Cathy O’Neil; and “Your Face Belongs to Us” by New York Times reporter Kashmir Hill.

Smith said she hopes that her course will enable future business leaders to be more responsible stewards and managers of such technologies. “Many people think that making sure AI is ‘responsible’ is a technology task that should be left to data scientists and engineers,” she said. “The reality is, business managers and leaders have a critical role to play as they inform the priorities and values that are embedded into how AI technology is developed and used.”