Cutting-edge climate tech takes the stage at 2023 C2M Climate Tech Summit

Promising climate technologies that address everything from water desalination to rare earth element extraction to lightning-fast battery charging took center stage at the 2023 Cleantech to Market (C2M) Climate Tech Summit.

The summit, held at Spieker Forum in Chou Hall on Dec. 1, brought together eight UC Berkeley graduate student teams who presented their findings from a year’s work on entrepreneurial projects for C2M company founders. Each team spent nearly 1,000 hours working with founders, assessing new technologies, and investigating paths to commercialization. 

Brian Steel, co-director of the C2M program, which is part of the Energy Institute at Haas, called this year’s summit the most successful to date and reflected on C2M’s growth since its 2008 founding. 

“One of the things that’s so energizing for us as faculty is that the students come to us now with such wonderful depth and breadth of knowledge because cleantech has been around for so long. We feel so fortunate that the world has caught up with the sustainability work we have been doing for 15 years.”

A total of $70,000 in MetLife Climate Solution Awards went to three startups, each supported by a C2M team. The three teams honored during the summit were:

  • ChemFinity Technologies, which produces high-performing, highly modular porous polymer materials, won $40,000. The team included Chris Burke, MBA 24; Ethan Pezoulas, PhD 26 (chemistry); Kosuke “Taka” Takaishi, MBA 24; Matt Witkin, MBA 24; Mingxin Jia, PhD 24 (mechanical engineering); and Peter Pang, MBA 24. (The team also received the annual Hasler Cleantech to Market Award, given to the audience favorite.)

    Left to right: Kosuke “Taka” Takaishi, MBA 24, explains the catalytic converter recycling process alongside PhD student Ethan Pezoulas and Matt Witkin, MBA 24.


    The students worked with Brooklyn-based ChemFinity co-founders CEO Adam Uliana and CTO Ever Velasquez, both PhD 22 (chemical engineering). Uliana described the membrane filters the company built as “atomic catcher’s mitts that are designed to capture just one type of molecule and can be used to tackle water desalination or mineral recovery.”

    Witkin, who worked in economic consulting on decarbonization projects before coming to Haas, said that he mentioned Cleantech to Market in his application essay, as “the perfect course where I could help these innovative climate companies find and scale their impact.”

    “It was an honor working alongside Adam from ChemFinity and my C2M classmates as we considered how ChemFinity could apply and grow its impressive separation technology,” Witkin said.

    The first-place ChemFinity team: (left to right) Chris Burke, MBA 24, Kosuke “Taka” Takaishi, MBA 24, Mingxin Jia, PhD 24 (mechanical engineering), Peter Pang, MBA 24, Matt Witkin, MBA 24, Ethan Pezoulas, PhD 26 (chemistry).
  • REEgen, which works to reduce the environmental impact of rare earth element production, won $20,000. The team included Carlos Vial, MBA 24; Francisco Aguilar Cisneros, MPP 24; Jeffrey Harris, MBA 24; Kelly McGonigle, MBA 24; Orion Cohen, PhD 24 (physical chemistry); and Sho Tatsuno, MBA 24 (MBA Exchange Program, Columbia Business School). The United States now imports more than 80% of its rare earth needs from China, said Alexa Schmitz, CEO of Ithaca, NY-based REEgen. REEgen is creating a new kind of rare earth element production, using bacteria to leach, recover, and purify rare earth elements domestically.

    Team REEgen: (left to right) Francisco Aguilar, MPP 24, Sho Tatsuno, MBA 24, Orion Cohen, PhD 24, Kelly McGonigle, MBA 24, Jeffrey Harris, MBA 24, and Carlos Vial, MBA 24.
  • Tyfast, a battery technology startup, won $10,000. The team included Ankita Singh, EWMBA 24; Erik Better, MBA 24; Nicholas Landgraf, EWMBA 24; and Sterling Root, EWMBA 25. Tyfast builds high-performance lithium-ion batteries “to make diesel engines obsolete in construction equipment,” said Tyfast CEO GJ la O’, BS 01 (materials science & engineering). San Mateo-based Tyfast uses a raw material that enables a new class of rechargeable battery, promising to deliver 10 times the power and cycle life, with energy density exceeding commercial lithium iron phosphate (LFP) technology.
Team Tyfast: (left to right) Erik Better, MBA 24, Nick Landgraf, EWMBA 24, Ankita Singh, EWMBA 24, Sterling Root, EWMBA 25.

Steel said he’s grateful to all of those who support the program, in particular the C2M alumni who return to Haas to serve as coaches, mentors, judges, or speakers—or just to enjoy being a part of the audience.

This year’s event kicked off with speaker Ryan Hanley, C2M 10 and MBA 11, the founder and CEO of Equilibrium Energy, a 100-employee climate technology startup. Barbara Burger, MBA 94, energy director, advisor, and innovator, and former president of Chevron Technology Ventures, also joined a fireside chat with Harshita Mira Venkatesh, MBA 11, who participated in C2M in 2020 and is one of the first business fellows at Breakthrough Energy, founded by Bill Gates in 2015.

“It’s always gratifying to have alumni who were on stage last year come back to support this year’s teams,” Steel said. “People who have been coming to the summit for years appreciate that we keep raising the bar: that our students’ presentations keep getting better and better. It’s very rewarding to have that acknowledgement and appreciation.”

Ginny Whitelow, a director at MetLife, worked with the C2M program as a mentor. “These UC Berkeley students have been so amazing to partner with and have given me an added sense of purpose in my work at MetLife that goes beyond my day-to-day job,” she said.

Augmented Intelligence

The generative artificial intelligence revolution is already happening in the workplace—and it looks nothing like you’d expect.

Since ChatGPT went mainstream this year, many of the news stories about generative artificial intelligence have been full of gloom, if not outright panic. Cautionary tales abound of large language models (LLMs), like ChatGPT, stealing intellectual property or dumbing down creativity, if not putting people out of work entirely. Other news has emphasized the dangers of generative AI—which is capable of responding to queries by generating text, images, and more based on data it’s trained on—such as its propensity to “hallucinate” wrong information or inject bias and toxic content into chats, a potential legal and PR nightmare.

Beyond these legitimate fears, however, many companies are adopting generative AI at a fast clip—and uses inside firms look different from the dire predictions. Companies experimenting with AI have discovered a powerful tool in sales, software development, customer service, and other fields.

On the leading edge of this new frontier, many Berkeley Haas faculty and alumni are discovering how it can augment human intelligence rather than replace human workers, aiming toward increased innovation, creativity, and productivity.

“We’re used to thinking of AI as something that can take repetitive tasks, things humans can do, and just do them a little faster and better,” says Jonathan Heyne, MBA 15, chief operating officer of DeepLearning.AI, an edtech company focused on AI training, who also teaches entrepreneurship at Haas. “But generative AI has the ability to create things that don’t exist—and do it through natural language, so not only software programmers or data scientists can interact with it. That makes it a much more powerful tool.”

More jobs, new jobs

Those capabilities make gen AI ideal for summarizing information, extracting insights from data, and quickly suggesting next steps. A report by a team of researchers from OpenAI, OpenResearch, and the University of Pennsylvania concluded that for 80% of workers, LLMs could affect at least 10% of their tasks, while 20% of workers could see at least 50% of their tasks impacted. Another report by Goldman Sachs predicts two-thirds of jobs could see some degree of AI automation, with gen AI in particular performing a quarter of current work, affecting up to 300 million jobs in the U.S. and Europe alone. Yet, the report adds, worker displacement from automation “has historically been offset by creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth.”

That’s in line with the findings of Assistant Professor Anastassia Fedyk, whose research has found that AI has been leading to increased sales and employment. In a forthcoming paper in the Journal of Financial Economics, Fedyk and colleagues found that firms’ use of AI led to increased growth for companies through more innovation and creation of new products, which increased both sales and hiring.

Fedyk says that industries with tasks particularly suited to AI, such as auditing, could see workforce reductions over time. For most fields, however, she predicts that the workforce will stay steady but its composition will change. Her new National Bureau of Economic Research working paper studying employment at companies investing in AI found that they were looking for a workforce even more highly skilled, highly educated, and technical than that of other firms. “We’re seeing a lot of growth in jobs like product manager—jobs that help to manage the increase in product varieties and increase in sales,” Fedyk says.

An explosion of possibilities

Company conversations about gen AI exploded this spring, says Amit Paka, MBA 11, founder and COO of Fiddler AI, a five-year-old startup that helps firms build trust into AI by monitoring its operation and explaining its black-box decisions. “Generative AI became a board-level conversation,” he says, “even if folks in the market don’t know how they’ll actually implement it.” For now, firms seem more comfortable using gen AI internally rather than in customer-facing roles where it could open them up to liability if something goes wrong.

Obvious applications are creative—for example, using it to generate marketing copy or press releases. But the most common implementations, says Paka, are internal chatbots to help workers access company data, such as human resources policies or industry-specific knowledge bases. More sophisticated implementations are models trained from scratch on a set of data, like Google’s Med-PaLM, an LLM to answer medical questions, and Bloomberg’s BloombergGPT, trained on 40 years of financial data to answer finance questions. Deciding what type of LLM to implement in a company is a matter of first figuring out the problem you need to solve, Paka says. “You have to find a use case where you have a pain point and where an LLM will give you value.”

The power of video

While many companies are already using gen AI to analyze and generate text, video applications are next. Sunny Nguyen, MBA 18, is lead product manager for multimodal AI at TwelveLabs, which recently launched Pegasus, a video-language foundation model that uses gen AI to understand video and turn its content into summaries, highlights, or customized output. “Video understanding is an extremely complex problem due to the multimodality aspect, and lots of companies still treat videos as a bunch of images and text,” Nguyen says. “Our proprietary multimodal AI is aimed at solving this challenge and powering many applications.” For example, sports leagues could use the technology to generate game highlights for fan engagement; online-learning publishers could generate chapters or highlights instantly; and police officers could get accurate, real-time reports of suspicious activity.

TwelveLabs is launching an interactive chat interface where users could ask questions in an ongoing dialogue about a video. “Just like ChatGPT but for video,” Nguyen says.

Norberto Guimaraes, MBA 09, cofounder and CEO of Talka AI, is focusing video analysis on business-to-business sales conversations, using gen AI to analyze not just verbal content but nonverbal cues as well. Guimaraes says nonverbal factors can account for up to 80% of the impact made in a conversation. Talka’s technology uses AI to analyze 80 different signals, including facial expressions, body language, and tone of voice, to judge whether a conversation is achieving its purpose, usually completing a sale.

Guimaraes says the technology could be used to train salespeople to communicate more effectively and discern clients’ needs. “We’ll be better able to understand what are the key frustrations from your customer, whether you’re taking into account what they’re saying, and whether or not the conversation is landing,” he says.

Talka AI is currently testing the technology with a “very large” company that is “one of the best known for sales,” Guimaraes says. It currently has 70,000 conversations in its system and has been able to successfully predict whether a sale will occur 85% of the time.

Sales and service

Companies are also exploring the use of AI to take part in simple sales. Faculty member Holly Schroth, a distinguished teaching fellow who studies negotiations and influence, has consulted with the company Pactum, which has been working on an AI tool to manage low-level sales—repetitive negotiations that have just a few different issues such as length of contract, quantity, and price. In initial studies, Pactum has found that people prefer talking to AI versus a human. “People like talking with a bot because it’s kinder and friendlier,” says Schroth, “because it can be programmed that way.”

Specifically, AI bots can be programmed to use language that acknowledges what the other side is saying. “Humans sometimes get frustrated and may not be aware of the language they use that may be offensive,” says Schroth. “For example, ‘with all due respect’ is at the top of the rude list.” People may feel like they can get a better deal with AI, she says, since the bot will work to maximize value for both sides, while a human may not be able to calculate best value or may let emotions interfere.

AI is also perfectly positioned to be a coach, says Assistant Professor Park Sinchaisri. He’s explored ways AI can help people work more efficiently, whether they are Uber drivers or physicians. In today’s hybrid environment, where workers are often remote without the benefit of on-the-job training or peer-to-peer learning, a bot can learn best practices from colleagues and identify useful advice to share with others. AI could also help human workers redistribute tasks when a team member leaves. However, Sinchaisri has found that while AI provides good suggestions, humans can struggle to adopt them. In his working paper on AI for human decision-making, workers accepted only 40% of machine-generated suggestions, compared with 80% of advice from other humans, saying they didn’t believe the AI advice was effective or didn’t understand how to incorporate it into their workflow.

Sinchaisri is studying ways to make coaching more effective—either by training the AI to give only as much advice as the person might accept or by allowing for human nature. “Research has shown that humans tend to take more advice if they can modify and deviate from it a little,” he says. “Good advice is often counterintuitive, meaning it is difficult for humans to figure it out on their own; AI needs to learn how to effectively deliver such advice to humans to reap its full potential.”

Bias and ethics

As powerful and versatile as AI can be, the warnings are real. Trained on the vastness of the internet, large language models pick up toxic content and racist and sexist language. Then there’s the real problem of hallucinations, in which AI output seems believable but includes false information.

Biases are baked into LLMs, says Merrick Osborne, a postdoc at Haas studying racial equity in business. In a new paper on bias and AI, Osborne explores how biased information results not only from the data a model is trained on but also from the engineers themselves, with their natural biases, and from the human annotators whom engineers employ to fine-tune and subjectively label data.

Certainly more diversity in the field of engineering would help. But it’s important, Osborne argues, that engineers and annotators undergo diversity training to make them more aware of their own biases, which in turn could help them train models that are more sensitive to equal representation among groups. Computer programmers have begun implementing more formal techniques in a new field called AI fairness, which employs mathematical frameworks based on social sciences to de-bias embedded data. “We aren’t born knowing how to create a fair machine-learning model,” Osborne says. “It’s knowledge we have to acquire.”

Another way Osborne suggests addressing both bias and hallucinations is to call in outside help. Vijay Karunamurthy, MBA 11, is doing just that as field CTO at Scale AI, a seven-year-old startup that’s worked to make models safer and fairer. “People understand that models come out of the box without any sensitivity or human values, so these base models are pretty dangerous,” he says. Scale AI employs teams of outside experts, including cognitive psychologists with backgrounds in health and safety, who can help decide what information would be too dangerous to include in an LLM—everything from teaching how to build a bomb to telling a minor how to illegally buy alcohol. The company also employs social psychologists, who can spot bias, and subject experts, such as PhDs in history and philosophy, to help correct hallucinations.

Of course, it’s not feasible to have hundreds of PhDs constantly correcting models, so the company uses the information to create what’s called a critique model, which can train the original model and make the whole system self-correcting.

For companies adopting AI, it’s important to develop internal processes to help guide ethical use by employees. One of those guidelines, says faculty member David Evan Harris, a chancellor’s public scholar, is disclosure. “People have a right to know when they’re seeing or interacting with generative AI content,” says Harris, who was formerly on the civic integrity, misinformation, and responsible AI teams at Meta. That goes for both internal use and external use with customers. “When you receive content from a human you probably have more reason to trust it than when it’s coming from AI because of the propensity of the current generation of AI to hallucinate.” That’s especially true, he says, when dealing with sensitive data, like financial or medical information.

Companies may also want to control how gen AI is used internally. For example, Harris says there have been numerous cases in Silicon Valley of managers using it to write peer reviews for regular performance evaluations. While a tempting shortcut, it could result in boilerplate verbiage or, worse, wrong information. Harris says it’s better to come up with new strictures for writing reviews, such as using bullet points. On the other hand, banning AI is unlikely to work. “You need to create a culture of accepting that generative AI is useful in many stages of work and encouraging people to be transparent with their co-workers about how they’re using it,” he says.

One practice to avoid when crafting internal policies around gen AI is limiting governance programs to the letter of the law, since the law tends to lag behind ethics, says Ruby Zefo, BS 85, chief privacy officer at Uber. “The law should be the low bar—because you want to do what’s right,” says Zefo. “You have to create policies and programs and documentation that will put you on the right side of the laws you know are coming but aren’t yet here.”

For one, that means developing guidelines around personal or confidential data: refraining from using others’ personal or proprietary information to train a model, and refraining from feeding such information into a model that is or might become public. When running algorithms on customers’ personal data, she adds, it’s important to allow for human review. Companies should also limit access to internal gen AI models to those who have a legitimate purpose. More than anything, Zefo says, flexibility is key while the technology is still being developed. “You have to have a process where you’re always evaluating your guidelines, always looking to define what’s the highest risk.”

Planning for the future

That need to stay nimble extends to the workforce as well, says Heyne. In the past, AI was mostly used by technical workers programming models—but gen AI will be used by myriad employees, including creative, sales, and customer-service workers. As gen AI develops, their day-to-day work will likely change. For example, a sales agent interacting with a bot one day may be overseeing a bot negotiating with an AI counterpart the next. In other words, sales or procurement functions in an organization will remain but will look different. “We have to constantly think about the tasks we need to train for now to get the value that is the goal at the end,” Heyne says. “It’s a strategic imperative for any company that wants to stay in business.”

It’s also an education that needs to start much earlier in life, says Dimple Malkani, BS 98. She founded Glow Up Tech to prepare teenage girls for the tech industry by introducing them to successful female leaders in Silicon Valley. The skills necessary to succeed in a gen AI world aren’t necessarily those emphasized previously in tech, or even in previous iterations of AI, says Malkani, who spent decades working in marketing and business development. “The core skills these girls should be getting when they go to college aren’t just data science but strategy and creativity as well—to figure out what new product innovation we should create,” she says.

One thing she’s sure of as she talks with the next generation about gen AI is that, unlike current workers, they are ready to dive in. “Gen Z is very comfortable using gen AI,” she says. “In fact, they’ve already embraced it and expect it to be part of their working futures.”

Crisis Management

Mastering upheaval and systemic transformations

Olaf Groth doesn’t like to make predictions, but he’s nonetheless become adept at helping executives see around corners and navigate the chaos of recent years.

A member of Haas’ professional faculty and a senior adviser at the Institute for Business Innovation, Groth is also chairman and CEO of think tank Cambrian Futures. In a new book, The Great Remobilization: Strategies and Designs for a Smarter World (MIT Press, 2023), he and co-authors Mark Esposito and Terence Tse focus on what they call the Five Cs—COVID; the cognitive economy, crypto, and web3; cybersecurity; climate change; and China—and show leaders how to power human and economic growth by replacing fragile global systems with smarter, more resilient ones.

Berkeley Haas talked with Groth about how executives can turn turmoil into opportunity.

BH: What will the next era of globalization look like?

The globalization everyone keeps talking about is how many bananas get shipped from Argentina to China. And of course that’s important for jobs today. But what we really have to ask is, who controls the flow of any given thing from one place to another? At the end of the day, the people who have the power in what we call the “cognitive economy” can influence these flows. There are traditional flows—of capital, of intellectual property, of people, of goods and services. But then there are also flows of data and genetic material. And that’s not even including the ecological flows of air and energy.

How do you define the cognitive economy?

In the cognitive economy, cybernetics—essentially smart, digital, command-and-control functions that are currently steered by the Googles and Amazons of this world—is getting injected into everything we do. Take the mobility industry. Electrified, autonomous cars use AI to create applications inside the car, and cars are now tied to a new charging infrastructure around the smart home. You need to design cars like an iPhone: Start with a chip and build everything around that. Our global logistics and supply chains, the movement and tracking of people, carbon, etc., are starting to get that injection of intelligence too.

Your book describes a FLP-IT (forces, logic, phenomena, impact, and triage) model for strategic leadership. How should executives approach the triage step, which requires them to decide what to keep, discard, or build from scratch?

I was at the World Economic Forum meeting in Tianjin, and a senior executive in the petrochemicals industry told me that after 30 years of building megafactories in China, his company now needed to throw that out the window and create chains of “nanofactories” across 12 different markets. To solve that challenge, you need to see the new operating logic of your domain, then decide which existing assets, positions, and capabilities to draw upon and which to divest.