Managing in the age of AI


Adrien Lopez Lanusse, MBA 99
Consultant & Adviser, ALL Insights

As the child of immigrants, Adrien Lopez Lanusse recalls being a “cultural ambassador” to his French and Mexican parents.

“I was an interpreter—not just with language. I tried to understand how they could be relevant to different audiences,” Lanusse says. “My dad was a gardener, and there was an entrepreneurial spirit in the home, which helped me understand target audiences and not try to be one thing to everybody.”

It’s a skill Lanusse has used to considerable success.

From 2012 to 2021, he was the first vice president of consumer insights at Netflix, where he used consumer research to help grow the company from $2 billion in annual revenue to $25 billion. He also helped the streaming giant expand to 190 countries.

“Traditionally, research data came from surveys or interviews. Now there are additional sources, whether it’s behavioral data or unstructured textual data in the social media space,” Lanusse says. “The expanded tool chest gives an accurate depiction of customers. It’s not just who they are and what they do but what motivates them.”

These days, he’s a consultant and adviser at ALL Insights in San Mateo, Calif., where he helps companies ranging from startups to large tech firms better understand audiences and develop strategies to drive business. The work makes a transaction feel “personal” to customers, he says.

“Algorithms allow us to create a product that adapts to an individual’s needs, but that alone doesn’t make it feel personal,” he adds. “Having a product that is relevant but feels like it comes from a human and not a machine is something a lot of companies strive to do.”

linkedin.com/in/adrienlanusse

To err is human. And in the age of AI, it may be humanizing.

A man's hands type on a laptop showing an AI chatbot. Speech bubbles say: "Hello!" and "Hello! How can I help you?"
Image: AdobeStock

A study co-authored by Associate Professor Juliana Schroeder found that people view customer service agents that make typographical errors—and correct them—as more human and sometimes even more helpful.

“For decades, people worked to make machines smarter and less prone to errors,” Schroeder says. “Now that we’re living through real-world Turing tests in most of our online interactions, an error can actually be a beneficial cue for signaling humanness.”

In a paper published in the Journal of the Association for Consumer Research, Schroeder and colleagues from Yeshiva University, Stanford, and the University of Colorado Boulder developed their own chatbot—named Angela—and conducted five studies involving over 3,000 participants. Across all studies, participants rated agents that made and corrected typos as more human than those that made no typos or left typos uncorrected. They also viewed them more warmly.

The effect was strongest when participants did not know if the agent was a bot or a human, but interestingly, it held even when participants were told this information. “Seeing an agent correct a typo led people to expect the agent would be more helpful,” Schroeder says.

Prior research dating back to the 1960s—dubbed the “Pratfall Effect”—showed that under certain conditions, making mistakes can increase a person’s likability. But other studies have shown that communicators who make typos, spelling mistakes, or grammatical errors are seen as less intelligent or competent than those who don’t. Schroeder and her co-authors suggest it’s what happens after an error is made that can make the difference.

“We suspect that correcting an error is humanizing because it shows an engaged mind,” she says. “It’s a sign that the communicator cares about how they’re perceived.”

The researchers, who include Shirley Bluvstein from Yeshiva University, Xuan Zhao from Stanford, and Alixandra Barasch from the University of Colorado Boulder, do not suggest that companies intentionally program their chatbots to insert typos, which could be seen as manipulative and raise ethical questions. Recent policy efforts in some states require bots to disclose their identities or require companies to watermark AI-generated content. Yet if a genuine mistake is made and the chatbot (or person) has the wherewithal to address it, this may impress customers.

Overall, the findings suggest that it may be possible to improve chatbots by implementing humanizing cues, such as fixing mistakes, while still being transparent to consumers. These cues, the researchers say, “can signal a company’s dedication to connecting with consumers, potentially offsetting the impersonal and dehumanizing nature of text-based interactions.”

Read the full paper:

Imperfectly Human: The Humanizing Potential of (Corrected) Errors in Text-Based Communication
Journal of the Association for Consumer Research
By Shirley Bluvstein, Xuan Zhao, Alixandra Barasch, and Juliana Schroeder
July 2024

Will AI replace marketing managers? Q&A with Professor Zsolt Katona

A photo shows a man in a purple dress shirt behind computer monitors
Professor Zsolt Katona

Even though AI assistants such as Apple’s Siri and Amazon’s Alexa have been in widespread use for years, the release of ChatGPT in late 2022 was astonishing. The bot’s advanced language understanding and human-like responses opened the public’s eyes to the rapid pace of AI development, set off a wave of excitement and apprehension, and ramped up competition among tech giants and upstarts to deploy new AI technologies. Apple joined the race this week with the announcement of Apple Intelligence, which will put a powerful AI chatbot into the pockets of hundreds of millions of iPhone users worldwide.

Professor Zsolt Katona, who holds PhDs in both computer science and marketing, began using generative AI in 2019 to write scripts for his Berkeley Executive Education course. That year, Katona also developed and began teaching the Business of AI course for MBA students, and he continues to teach the AI for Executives course. He has recently focused his marketing research on AI as well.

We talked with Katona, the Cheryl and Christian Valentine Professor, about misconceptions, business applications, and how AI is influencing marketing.

Berkeley Haas: Right now, what do you think are the biggest misconceptions about AI?

Zsolt Katona: I guess one of the biggest misconceptions is just the word ‘generative’ because people are applying it to everything—that’s the hype—and definitions aren’t clear.

“…One of the biggest misconceptions is just the word ‘generative’ because people are applying it to everything—that’s the hype—and definitions aren’t clear.”

Tell me more, because I could be one of those people.

People use generative to mean that the application generates something. But many of the uses people are familiar with are more like a kind of search. And in terms of beliefs about how valuable different applications are, the family of generative AI is overestimated. Most of the applications that are the most lucrative are not generative in nature.

I read an article that said Mastercard uses generative AI for fraud detection. But it turns out that they use a transformer model, the kind of neural network behind language models. Transformers were originally designed for generative applications, but they have nongenerative uses, and Mastercard’s use was nongenerative. It really is just a tool that detects outliers: suspicious transactions.

Other than confusion about terminology, the biggest misconception is the belief that these things can work fully autonomously. That’s essentially nonexistent in most applications. What you have to do for pretty much every application is figure out how to make the AI portion and the humans work together.
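
To make the fraud-detection example concrete, here is a minimal sketch of a transformer used nongeneratively. It is not Mastercard's system; the architecture, feature count, and threshold are all invented for illustration. The encoder reads a sequence of transactions and scores each one for how anomalous it looks, rather than generating anything.

```python
# Illustrative only: a transformer encoder used nongeneratively to score
# transactions for anomaly. Not Mastercard's system; all sizes invented.
import torch
import torch.nn as nn

class TransactionScorer(nn.Module):
    def __init__(self, n_features=8, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)  # project raw features
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)            # one anomaly logit per transaction

    def forward(self, x):  # x: (batch, seq_len, n_features)
        h = self.encoder(self.embed(x))
        return self.head(h).squeeze(-1)              # (batch, seq_len) logits

model = TransactionScorer()
history = torch.randn(1, 20, 8)        # 20 recent transactions, 8 features each
scores = torch.sigmoid(model(history))
flagged = scores > 0.9                 # threshold set by the business's risk tolerance
```

The point of the sketch is the shape of the task: the same architecture family that powers chatbots here emits a score, not text.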

I’ve heard it said that we’re still in the hype phase, and many companies are still trying to figure out lucrative commercial applications.

They’re figuring it out, and it’s just a matter of time. Some of the fancy stuff is not there yet because companies are having problems getting their data infrastructure ready. Their data is messy; it’s not in a format that allows them to easily use simple AI applications, or they might not even own the data. But it’s the nonflashy stuff that’s most lucrative. For example, using cameras in a factory to detect manufacturing defects. There’s nothing generative about it; it’s just looking at little differences in those pictures.
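
The factory example is simple enough to sketch. The toy version below (with tolerances invented for illustration) compares each photo against a pristine reference image; a real system would add camera alignment, lighting normalization, and usually a learned model.

```python
# Toy defect detector: flag a part when its photo differs from a
# "golden" reference image. Tolerances are invented for illustration.
import numpy as np

def defect_mask(photo: np.ndarray, reference: np.ndarray,
                pixel_tol: float = 0.1) -> np.ndarray:
    """Boolean mask of pixels that deviate noticeably from the reference."""
    diff = np.abs(photo.astype(float) - reference.astype(float)) / 255.0
    return diff.max(axis=-1) > pixel_tol               # max over color channels

def is_defective(photo, reference, area_tol: float = 0.001) -> bool:
    return defect_mask(photo, reference).mean() > area_tol  # >0.1% of pixels off

reference = np.full((64, 64, 3), 200, dtype=np.uint8)  # pristine part
photo = reference.copy()
photo[10:14, 10:14] = 0                                # simulated scratch
print(is_defective(photo, reference))                  # True
```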

Illustration shows hands typing on a laptop with a digital image of computer code and the letters AI
Image: AdobeStock

What’s an example of something businesses are doing well, or a significant change, in marketing?

That depends on whether you mean something that’s widely used versus something that’s flashy and innovative. I like the ones that are useful, such as all the personalization that’s happening on a large scale. For example, customized videos for each product where somebody explains exactly what the product does. It’s happening on Chinese ecommerce sites, but it’s coming to Amazon very soon. It’s just impossible to do something very personalized to the consumer without this kind of technology.

In your recent research, you looked at whether market researchers can replace human participants with ‘synthetic respondents’ generated through a large language model (LLM). What did you find?

We only tried a couple of product categories, but it worked pretty well. We had just three variables—age, gender, and income. There was 75% to 90% agreement with human data.

Are we almost at the point where market researchers can accurately use AI for product research?

They’re already doing it. Our paper was about validating that this works, and for that, you still need human data to compare. The promise of these ‘synthetic respondents’ is that you can get very specific types of responses that would be otherwise hard to get from humans, and at a much lower cost. Let’s say, a person who earns a million-plus dollars and lives in a fully electrified home and drives a Cybertruck. You can ask the AI to pretend to be that person and answer questions about perceptions of cars. You’ll get a response, but it’s still hard to validate because you have to find that human to compare it to.

“The promise of these ‘synthetic respondents’ is that you can get very specific types of responses that would be otherwise hard to get from humans, and at a much lower cost. Let’s say, a person who earns a million-plus dollars and lives in a fully electrified home and drives a Cybertruck.”
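
To show the mechanics of a synthetic respondent, here is a rough sketch of persona-conditioned prompting. It is not the paper's code: `call_llm` is a hypothetical stand-in for whatever chat-model API is used, the canned reply just keeps the script self-contained, and the persona fields echo the study's three variables (age, gender, income).

```python
# Rough sketch of a "synthetic respondent" (not the paper's code).
def call_llm(system: str, user: str) -> str:
    """Hypothetical placeholder; swap in a real chat-model API call."""
    return "4 - It seems practical, but the price gives me pause."

def persona_prompt(age: int, gender: str, income: str) -> str:
    # The three conditioning variables from the study: age, gender, income.
    return (f"You are a {age}-year-old {gender} with an annual household "
            f"income of {income}. Answer the survey question as that person, "
            f"giving a rating from 1 to 7 and a one-sentence reason.")

def ask_synthetic_respondent(age, gender, income, question) -> str:
    return call_llm(system=persona_prompt(age, gender, income), user=question)

# Sample many synthetic respondents to approximate a response distribution,
# then compare against a matched human panel before trusting the numbers.
answers = [
    ask_synthetic_respondent(35, "woman", "$70,000",
                             "How appealing is this electric SUV to you?")
    for _ in range(100)
]
```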

So the more specific you get, the less accurate it might be because it would be hard to create a sample of people like that.

The question becomes, how bad would it get? Would it be totally random, or would it give you some idea?

How would this work with a new product—such as a brand-new car brand with no information from humans on the internet, or a really innovative product that doesn’t exist yet?

In theory, if you can accurately describe the new brand or new product, the language model could make those inferences. For example, you can tell AI to create an image of a cat with a mask on, right? You don’t need examples of cats with masks on. You need examples of cats, and examples of masks, and examples of some kind of human or animal with a mask on so that it understands these three things.

How does AI do in coming up with new products?

We actually have a working paper on creativity for product ideas with humans versus AI. We do find that AI is better, but if you look only at the top answers, the difference is much smaller. And again, it’s more of a human data problem because, with the AI, we can get the best models to generate ideas, but we can’t ensure the most creative humans are going to be in our pool of human participants. It’s still very likely that if you somehow managed to get all the humans in the world to participate in our study, the best ones would be better than AI.

You teach the business of AI to MBA students and through Berkeley Executive Education. With things moving so fast, what skills do managers working with AI technology need?

They need to understand the fundamentals of how it works. That’s the No. 1 thing. It’s even better if they have some coding skills, and I do make them go through an exercise with code, so they have at least a feeling for what it looks like and for the building blocks. Obviously, anyone who studies engineering will understand it in great detail, but for nontechnical people, just understanding how it works helps them a lot. Other than that, they need to understand how to manage technology, which is not that specific to AI. If you put together those skills with some understanding of how the specific technology works, it’s tremendously helpful.

Can nontechnical people learn enough to be effective?

My colleague who teaches marketing analytics likes to say that it’s easier to teach managers analytics than to teach data scientists to be good managers. I share that thought, and again, they don’t have to be as technically advanced as the engineers. But they should understand how the data goes in and how it results in a desired outcome.

My advice is that managers should learn enough about how it works to talk to the people who make these things, especially with respect to the data needs. What I call the “objective function” of the model is what it should do. That’s just an equation, but translating that equation to the business objective is a critical task that somebody has to do, and it’s not going to be a data scientist. It’s rarely going to be the engineer.

Is that pretty much the same as for technology management in general?

Well, the difference is in how AI works, and specifically that it learns from examples. So you have to think hard about what those examples are, and you have to think hard about how you train it. How should the so-called error or loss function be specified? What are you aiming for, and when do you say it’s good enough?
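
One way to picture "specifying the loss function" from the business side is a toy fraud example: if a missed fraud is judged, say, 20 times as costly as a false alarm (a made-up ratio that a manager, not the model, would have to supply), the training loss weights the two error types accordingly.

```python
# Toy example: translating a business judgment (a missed fraud costs 20x
# a false alarm; the ratio is invented) into a weighted cross-entropy loss.
import numpy as np

def weighted_log_loss(y_true, p_pred, cost_fn=20.0, cost_fp=1.0):
    """Binary cross-entropy with business-chosen costs for each error type."""
    p = np.clip(p_pred, 1e-7, 1 - 1e-7)
    missed_fraud = cost_fn * y_true * np.log(p)             # heavily penalized
    false_alarm = cost_fp * (1 - y_true) * np.log(1 - p)    # lightly penalized
    return -np.mean(missed_fraud + false_alarm)

y = np.array([1, 0, 0, 1])          # 1 = fraud
p = np.array([0.2, 0.1, 0.3, 0.9])  # model's predicted fraud probabilities
print(weighted_log_loss(y, p))      # the missed fraud (p=0.2) dominates the loss
```

The "good enough" question Katona raises then becomes an explicit target, for example a maximum acceptable expected cost on held-out data.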

Will there be jobs for marketing managers without engineering backgrounds?

I think there will be. Marketing is such a subjective topic that it’s hard to evaluate all the things AI needs to do. It comes down to a lot of human judgment. If AI can do every job in the world, then yes, marketing people will be replaced as well. But it’s very hard to show that a machine can do the work better than a human.

“Marketing is such a subjective topic that it’s hard to evaluate all the things AI needs to do. It comes down to a lot of human judgment.”

Because of all the different aspects of it?

Yes, it’s a very complex type of work. And then, because of the complexity, you need a lot of people who know how to use these tools. So, their jobs might change, but they will have jobs for sure. Everybody was saying a few years ago how the blue-collar jobs would be replaced, and now they are instead talking about all the white-collar jobs. But neither is happening, really. Some tasks are being replaced, yes, but people will still have jobs—although they will be different.