Illustration: two faces facing each other, one with a real eye and one with a computer pointer for an eye, with the words Augmented Intelligence overlaid.

Augmented Intelligence

The generative artificial intelligence revolution is already happening in the workplace—and it looks nothing like you’d expect.

Since ChatGPT went mainstream this year, many of the news stories about generative artificial intelligence have been full of gloom, if not outright panic. Cautionary tales abound of large language models (LLMs), like ChatGPT, stealing intellectual property or dumbing down creativity, if not putting people out of work entirely. Other news has emphasized the dangers of generative AI—which is capable of responding to queries by generating text, images, and more based on data it’s trained on—such as its propensity to “hallucinate” wrong information or inject bias and toxic content into chats, a potential legal and PR nightmare.

Beyond these legitimate fears, however, many companies are adopting generative AI at a fast clip—and uses inside firms look different from the dire predictions. Companies experimenting with AI have discovered a powerful tool in sales, software development, customer service, and other fields.

On the leading edge of this new frontier, many Berkeley Haas faculty and alumni are discovering how it can augment human intelligence rather than replace human workers, aiming toward increased innovation, creativity, and productivity.

“We’re used to thinking of AI as something that can take repetitive tasks, things humans can do, and just do them a little faster and better,” says Jonathan Heyne, MBA 15, chief operating officer of DeepLearning.AI, an edtech company focused on AI training, who also teaches entrepreneurship at Haas. “But generative AI has the ability to create things that don’t exist—and do it through natural language, so not only software programmers or data scientists can interact with it. That makes it a much more powerful tool.”

More jobs, new jobs

Those capabilities make gen AI ideal for summarizing information, extracting insights from data, and quickly suggesting next steps. A report by a team of researchers from OpenAI, OpenResearch, and the University of Pennsylvania concluded that for 80% of workers, LLMs could affect at least 10% of their tasks, while 20% of workers could see at least 50% of their tasks impacted. Another report, by Goldman Sachs, predicts that two-thirds of occupations in the U.S. and Europe could see some degree of AI automation, with gen AI in particular capable of performing a quarter of current work and exposing the equivalent of as many as 300 million full-time jobs worldwide to automation. Yet, the report adds, worker displacement from automation “has historically been offset by creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth.”

That’s in line with the findings of Assistant Professor Anastassia Fedyk, whose research shows that AI has been driving increased sales and employment. In a forthcoming paper in the Journal of Financial Economics, Fedyk and colleagues found that firms’ use of AI fueled growth through more innovation and new products, which in turn increased both sales and hiring.

Fedyk says that industries whose tasks are especially suited to AI, such as auditing, could see their workforces shrink over time. For most fields, however, she predicts that the workforce will hold steady while its composition changes. Her new National Bureau of Economic Research working paper, which studies employment at companies investing in AI, found that those firms seek workers who are even more highly skilled, highly educated, and technical than their peers. “We’re seeing a lot of growth in jobs like product manager—jobs that help to manage the increase in product varieties and increase in sales,” Fedyk says.

Illustration: a typewriter with computer code on paper and keyboard.

An explosion of possibilities

Company conversations about gen AI exploded this spring, says Amit Paka, MBA 11, founder and COO of Fiddler AI, a five-year-old startup that helps firms build trust into AI by monitoring its operation and explaining its black-box decisions. “Generative AI became a board-level conversation,” he says, “even if folks in the market don’t know how they’ll actually implement it.” For now, firms seem more comfortable using gen AI internally rather than in customer-facing roles where it could open them up to liability if something goes wrong.

Obvious applications are creative—for example, using it to generate marketing copy or press releases. But the most common implementations, says Paka, are internal chatbots that help workers access company data, such as human resources policies or industry-specific knowledge bases. More sophisticated implementations are models trained or tuned on domain-specific data, like Google’s Med-PaLM, an LLM that answers medical questions, and Bloomberg’s BloombergGPT, trained on 40 years of financial data to answer finance questions. Deciding what type of LLM to implement in a company is a matter of first figuring out the problem you need to solve, Paka says. “You have to find a use case where you have a pain point and where an LLM will give you value.”
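To make that concrete, here is a minimal sketch of the kind of internal policy chatbot Paka describes, assuming a small set of hypothetical HR documents: retrieve the snippets most relevant to a question, then hand them to a language model as grounding. The documents, the question, and the final send-to-the-LLM step are placeholders, not any particular company’s implementation.

```python
# Minimal sketch of an internal "policy chatbot": retrieve the most relevant
# HR-policy snippets for a question, then assemble a grounded prompt for an LLM.
# The documents and the send-to-LLM step are placeholders, not a real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policy_docs = [
    "Employees accrue 1.5 vacation days per month of full-time service.",
    "Remote work requires manager approval and a signed equipment agreement.",
    "Expense reports must be filed within 30 days of the purchase date.",
]

vectorizer = TfidfVectorizer().fit(policy_docs)
doc_vectors = vectorizer.transform(policy_docs)

def build_prompt(question: str, top_k: int = 2) -> str:
    """Pick the top_k most similar policy snippets and wrap them in a prompt."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    best = sorted(range(len(policy_docs)), key=lambda i: scores[i], reverse=True)[:top_k]
    context = "\n".join(policy_docs[i] for i in best)
    return (
        "Answer using only the policy excerpts below; say 'not covered' otherwise.\n"
        f"Policies:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How many vacation days do I get?"))
# In production, the returned prompt would be sent to whatever LLM the company uses.
```

The detail that matters is the grounding step: the model is asked to answer only from retrieved company text, which keeps it on internal data rather than on whatever it absorbed from the open internet.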


The power of video

While many companies are already using gen AI to analyze and generate text, video applications are next. Sunny Nguyen, MBA 18, is lead product manager for multimodal AI at TwelveLabs, which recently launched Pegasus, a video-language foundation model that uses gen AI to understand video and turn its content into summaries, highlights, or other customized output. “Video understanding is an extremely complex problem due to the multimodality aspect, and lots of companies still treat videos as a bunch of images and text,” Nguyen says. “Our proprietary multimodal AI is aimed at solving this challenge and powering many applications.” For example, sports leagues could use the technology to generate game highlights for fan engagement; online-learning publishers could generate chapters or highlights instantly; and police officers could get accurate, real-time reports of suspicious activity.

TwelveLabs is also launching an interactive chat interface where users can ask questions about a video in an ongoing dialogue. “Just like ChatGPT but for video,” Nguyen says.
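For contrast, here is a rough sketch, in Python with OpenCV, of the “bunch of images” approach Nguyen mentions: sample frames at a fixed interval, then caption and summarize each one separately. The file path and interval are placeholders invented for the example.

```python
# Illustrative only: the naive "bunch of images" approach to video. Sample
# frames at a fixed interval; each frame would then be captioned on its own
# and the captions summarized by a text-only LLM.
import cv2  # OpenCV

def sample_frames(path: str, every_n_seconds: float = 5.0):
    """Return one frame per interval from the video at `path`."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_n_seconds))
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

# frames = sample_frames("sales_demo.mp4")  # placeholder path
```

A video-language model like Pegasus instead reasons over visuals, audio, and timing together, which is what lets it produce coherent summaries and highlights rather than a stitched-together list of frame captions.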

Norberto Guimaraes, MBA 09, cofounder and CEO of Talka AI, is focusing video analysis on business-to-business sales conversations, using gen AI to analyze not just verbal content but nonverbal cues as well. Guimaraes says nonverbal factors can account for up to 80% of the impact of a conversation. Talka’s technology uses AI to analyze 80 different signals, including facial expressions, body language, and tone of voice, to judge whether a conversation is achieving its purpose, usually closing a sale.

Guimaraes says the technology could be used to train salespeople to communicate more effectively and discern clients’ needs. “We’ll be better able to understand what are the key frustrations from your customer, whether you’re taking into account what they’re saying, and whether or not the conversation is landing,” he says.


Talka AI is testing the technology with a “very large” company that is “one of the best known for sales,” Guimaraes says. The system now holds 70,000 conversations and has correctly predicted whether a sale will occur 85% of the time.
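Talka hasn’t published how its model works, but the shape of the prediction task is easy to illustrate: extract per-conversation signals and fit a classifier on past outcomes. Everything below is a hypothetical stand-in: the five features, the synthetic data, and the simple logistic regression are illustrative, not the 80 signals or the model Talka actually uses.

```python
# Not Talka's system: a sketch of the general pattern of fusing per-conversation
# signals (hypothetical features here) into one model that predicts whether
# the conversation ends in a sale.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Placeholder features per conversation: smile rate, interruption count,
# pitch variance, share of talk time, positive-sentiment score.
X = rng.normal(size=(200, 5))
# Synthetic labels: whether the deal closed, loosely tied to a few features.
y = (X[:, 0] + 0.5 * X[:, 4] - 0.3 * X[:, 1] + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y)
new_call = rng.normal(size=(1, 5))
print("Estimated probability the deal closes:", model.predict_proba(new_call)[0, 1])
```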

Illustration: a woman at a laptop shaking a hand coming out of the computer.

Sales and service

Companies are also exploring the use of AI to take part in simple sales. Faculty member Holly Schroth, a distinguished teaching fellow who studies negotiations and influence, has consulted with the company Pactum, which has been working on an AI tool to manage low-level sales—repetitive negotiations involving just a few issues, such as contract length, quantity, and price. In initial studies, Pactum has found that people prefer negotiating with the AI over a human. “People like talking with a bot because it’s kinder and friendlier,” says Schroth, “because it can be programmed that way.”

Specifically, AI bots can be programmed to use language that acknowledges what the other side is saying. “Humans sometimes get frustrated and may not be aware of the language they use that may be offensive,” says Schroth. “For example, ‘with all due respect’ is at the top of the rude list.” People may also feel they can get a better deal with AI, she says, since the bot will work to maximize value for both sides, while a human may not be able to calculate the best value or may let emotions interfere.
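A toy illustration of what “programmed that way” can mean in practice: screen the bot’s draft reply for phrases people read as rude and lead with an acknowledgment of the other side’s last message. The phrase list and wording are invented for the example; a real negotiation system such as Pactum’s would be far more sophisticated.

```python
# Toy guardrail for a negotiation bot: strip phrasing flagged as rude and
# open every reply by acknowledging the counterpart's last message.
RUDE_PHRASES = ["with all due respect", "as I already said", "obviously"]

def polish_reply(counterpart_message: str, draft_reply: str) -> str:
    """Remove flagged phrases, then prepend an acknowledgment."""
    reply = draft_reply
    for phrase in RUDE_PHRASES:
        for variant in (phrase, phrase.capitalize()):
            reply = reply.replace(variant, "")
    reply = reply.lstrip(" ,").strip()
    if reply:
        reply = reply[0].upper() + reply[1:]  # re-capitalize after trimming
    acknowledgment = f'I hear you on "{counterpart_message.strip()}" '
    return acknowledgment + reply

print(polish_reply(
    "We need a 12-month contract at the current price.",
    "With all due respect, we can only do 12 months with a 3% increase.",
))
```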

AI is also perfectly positioned to be a coach, says Assistant Professor Park Sinchaisri. He’s explored ways AI can help people work more efficiently, whether they are Uber drivers or physicians. In today’s hybrid environment, where workers are often remote without the benefit of on-the-job training or peer-to-peer learning, a bot can learn best practices from colleagues and identify useful advice to share with others. AI could also help human workers redistribute tasks when a team member leaves. However, Sinchaisri has found that while AI provides good suggestions, humans can struggle to adopt them. In his working paper on AI for human decision-making, workers accepted only 40% of machine-generated suggestions, compared with 80% of advice from other humans, saying they didn’t believe the AI advice was effective or didn’t understand how to incorporate it into their workflow.

Sinchaisri is studying ways to make coaching more effective—either by training the AI to give only as much advice as the person might accept or by allowing for human nature. “Research has shown that humans tend to take more advice if they can modify and deviate from it a little,” he says. “Good advice is often counterintuitive, meaning it is difficult for humans to figure it out on their own; AI needs to learn how to effectively deliver such advice to humans to reap its full potential.”

Illustration: a hand holding a pencil with a computer pointer icon at the tip.

Bias and ethics

As powerful and versatile as AI can be, the warnings are real. Trained on the vastness of the internet, large language models pick up toxic content and racist and sexist language. Then there’s the real problem of hallucinations, in which AI output seems believable but includes false information.

Biases are baked into LLMs, says Merrick Osborne, a postdoc at Haas studying racial equity in business. In a new paper on bias and AI, Osborne explores how biased information results not only from the data a model is trained on but also from the engineers themselves, with their natural biases, and from the human annotators whom engineers employ to fine-tune and subjectively label data.

“You need to create a culture of accepting that generative AI is useful in many stages of work and encouraging people to be transparent with their co-workers about how they’re using it.”

—David Evan Harris

Certainly more diversity in the field of engineering would help. But it’s important, Osborne argues, that engineers and annotators undergo diversity training to make them more aware of their own biases, which in turn could help them train models that are more sensitive to equal representation among groups. Computer programmers have begun implementing more formal techniques in a new field called AI fairness, which employs mathematical frameworks based on social sciences to de-bias embedded data. “We aren’t born knowing how to create a fair machine-learning model,” Osborne says. “It’s knowledge we have to acquire.”
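One concrete example of the kind of check this AI-fairness work formalizes is demographic parity, which compares a model’s positive-outcome rate across groups. The decisions and group labels below are made up for illustration; real audits use many such metrics, far larger samples, and more careful statistics.

```python
# A minimal fairness check, not a full de-biasing pipeline: demographic parity
# compares how often a model grants the positive outcome to each group.
import numpy as np

# Hypothetical model decisions (1 = approved) and a sensitive attribute.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()
print(f"Group A approval rate: {rate_a:.2f}, Group B approval rate: {rate_b:.2f}")
print(f"Demographic parity gap: {abs(rate_a - rate_b):.2f}")
# A large gap flags the model (or its training data) for re-weighting,
# re-labeling, or other de-biasing before deployment.
```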

Another way Osborne suggests addressing both bias and hallucinations is to call in outside help. Vijay Karunamurthy, MBA 11, is doing just that as field CTO at Scale AI, a seven-year-old startup that’s worked to make models safer and fairer. “People understand that models come out of the box without any sensitivity or human values, so these base models are pretty dangerous,” he says. Scale AI employs teams of outside experts, including cognitive psychologists with backgrounds in health and safety, who can help decide what information would be too dangerous to include in an LLM—everything from teaching how to build a bomb to telling a minor how to illegally buy alcohol. The company also employs social psychologists, who can spot bias, and subject experts, such as PhDs in history and philosophy, to help correct hallucinations.

Of course, it’s not feasible to have hundreds of PhDs constantly correcting models, so the company uses the information to create what’s called a critique model, which can train the original model and make the whole system self-correcting.
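In spirit, a critique model wraps generation in a review loop: draft an answer, critique it against trusted knowledge, revise, and repeat. The toy version below uses two stand-in functions where a production system would use two models; it is meant only to show the control flow, not Scale AI’s implementation.

```python
# Toy critique-and-revise loop. Both "models" are stand-in functions here;
# in practice the generator and the critique model would each be LLM calls.
def generate(question: str, feedback: str = "") -> str:
    draft = "The Eiffel Tower is in Paris and was completed in 1890."
    if "1890" in feedback:
        draft = draft.replace("1890", "1889")  # revise the flagged claim
    return draft

def critique(draft: str) -> str:
    """A critique model checks drafts against trusted references."""
    if "1890" in draft:
        return "The completion year 1890 is wrong; it was 1889."
    return ""  # no objections

question = "When was the Eiffel Tower completed?"
answer = generate(question)
for _ in range(3):  # keep revising until the critique model has no objections
    feedback = critique(answer)
    if not feedback:
        break
    answer = generate(question, feedback)
print(answer)
```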

For companies adopting AI, it’s important to develop internal processes to help guide ethical use by employees. One of those guidelines, says faculty member David Evan Harris, a chancellor’s public scholar, is disclosure. “People have a right to know when they’re seeing or interacting with generative AI content,” says Harris, who was formerly on the civic integrity, misinformation, and responsible AI teams at Meta. That goes for both internal use and external use with customers. “When you receive content from a human you probably have more reason to trust it than when it’s coming from AI because of the propensity of the current generation of AI to hallucinate.” That’s especially true, he says, when dealing with sensitive data, like financial or medical information.

Companies may also want to control how gen AI is used internally. For example, Harris says there have been numerous cases in Silicon Valley of managers using it to write peer reviews for regular performance evaluations. While a tempting shortcut, it could result in boilerplate verbiage or, worse, wrong information. Harris says it’s better to come up with new strictures for writing reviews, such as using bullet points. On the other hand, banning AI is unlikely to work. “You need to create a culture of accepting that generative AI is useful in many stages of work and encouraging people to be transparent with their co-workers about how they’re using it,” he says.

One practice to avoid when crafting internal policies around gen AI is limiting governance programs to the letter of the law, since the law tends to lag behind ethics, says Ruby Zefo, BS 85, chief privacy officer at Uber. “The law should be the low bar—because you want to do what’s right,” says Zefo. “You have to create policies and programs and documentation that will put you on the right side of the laws you know are coming but aren’t yet here.”

For one, that means developing guidelines around personal or confidential data: being sure to recognize others’ personal or proprietary information and to refrain both from using it to train a model and from feeding it into a model that is or might become public. When running algorithms on customers’ personal data, she adds, it’s important to allow for human review. Companies should also limit access to internal gen AI models to those with a legitimate purpose. More than anything, Zefo says, flexibility is key while the technology is still developing. “You have to have a process where you’re always evaluating your guidelines, always looking to define what’s the highest risk.”
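One small example of such a guardrail, assuming a policy of stripping obvious personal identifiers before any text reaches an external model: a redaction pass over email addresses and phone numbers. Real privacy programs go much further (names, account numbers, access controls, audit logs), but the shape is the same.

```python
# Sketch of a pre-submission redaction pass: strip obvious personal identifiers
# before a prompt is sent to any model that is, or might become, public.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace emails and phone numbers with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact Jane at jane.doe@example.com or 415-555-0123 about the refund."))
# -> "Contact Jane at [EMAIL] or [PHONE] about the refund."
```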

Illustration: a circuit-board snake slithering toward a pair of feet.

Planning for the future

That need to stay nimble extends to the workforce as well, says Heyne. In the past, AI was mostly used by technical workers programming models—but gen AI will be used by myriad employees, including creative, sales, and customer-service workers. As gen AI develops, their day-to-day work will likely change. For example, a sales agent interacting with a bot one day may be overseeing a bot negotiating with an AI counterpart the next. In other words, sales or procurement functions in an organization will remain but will look different. “We have to constantly think about the tasks we need to train for now to get the value that is the goal at the end,” Heyne says. “It’s a strategic imperative for any company that wants to stay in business.”

“The law should be the low bar—because you want to do what’s right. You have to create policies and programs and documentation that will put you on the right side of the laws you know are coming but aren’t yet here.”

—Ruby Zefo, BS 85

It’s also an education that needs to start much earlier in life, says Dimple Malkani, BS 98. She founded Glow Up Tech to prepare teenage girls for the tech industry by introducing them to successful female leaders in Silicon Valley. The skills necessary to succeed in a gen AI world aren’t necessarily those emphasized previously in tech, or even in previous iterations of AI, says Malkani, who spent decades working in marketing and business development. “The core skills these girls should be getting when they go to college aren’t just data science but strategy and creativity as well—to figure out what new product innovation we should create,” she says.

One thing she’s sure of as she talks with the next generation about gen AI is that, unlike current workers, they are ready to dive in. “Gen Z is very comfortable using gen AI,” she says. “In fact, they’ve already embraced it and expect it to be part of their working futures.”
