Even though AI assistants such as Apple’s Siri and Amazon’s Alexa have been in widespread use for years, the release of ChatGPT in late 2022 was astonishing. The bot’s advanced language understanding and human-like responses opened the public’s eyes to the rapid pace of AI development, set off a wave of excitement and apprehension, and ramped up competition among tech giants and upstarts to deploy new AI technologies. Apple joined the race this week with the announcement of Apple Intelligence, which will put a powerful AI chatbot into the pockets of hundreds of millions of iPhone users worldwide.
Professor Zsolt Katona, who holds PhDs in both computer science and marketing, began using generative AI in 2019 to write scripts for his Berkeley Executive Education course. That year, Katona also developed and began teaching the Business of AI course for MBA students, and he continues to teach the AI for Executives course. He has recently focused his marketing research on AI as well.
We talked with Katona, the Cheryl and Christian Valentine Professor, about misconceptions, business applications, and how AI is influencing marketing.
Berkeley Haas: Right now, what do you think are the biggest misconceptions about AI?
Zsolt Katona: I guess one of the biggest misconceptions is just the word ‘generative’ because people are applying it to everything—that’s the hype—and definitions aren’t clear.
“…One of the biggest misconceptions is just the word ‘generative’ because people are applying it to everything—that’s the hype—and definitions aren’t clear.”
Tell me more, because I could be one of those people.
People use ‘generative’ to mean that the application generates something. But many of the uses people are familiar with are really more like a kind of search. And when it comes to how valuable different applications are believed to be, the generative family of AI is overestimated. Most of the most lucrative applications are not generative in nature.
I read an article that said Mastercard uses generative AI for fraud detection. But it turns out that they use a transformer model—the kind of neural network behind language models. Transformers were originally designed for generative applications, but they have nongenerative uses, and Mastercard’s use was nongenerative. It really is just a tool that detects outliers: suspicious transactions.
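The details of Mastercard’s system aren’t public, and the interview only describes it as a transformer used to flag outliers. Purely as an illustrative sketch of what nongenerative “AI for fraud detection” can look like, the toy example below scores made-up transaction features with an off-the-shelf outlier detector (scikit-learn’s IsolationForest, standing in for whatever model a real payment network would use):

```python
# Toy illustration only: scoring transactions for anomalies with a
# non-generative model. This is not Mastercard's system; it just shows
# that "AI for fraud detection" can mean flagging outliers rather than
# generating anything.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per transaction: amount, hour of day, and
# distance from the cardholder's home (all invented for illustration).
normal = rng.normal(loc=[50, 14, 5], scale=[30, 4, 3], size=(1000, 3))
suspicious = np.array([[4200, 3, 900], [3800, 4, 750]])  # unusual on every axis
transactions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(transactions)

labels = detector.predict(transactions)  # -1 means flagged as an outlier
print("Flagged transaction indices:", np.where(labels == -1)[0])
```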
Other than the confusion about terminology, the biggest misconception is the belief that these things can work fully autonomously. Full autonomy is essentially nonexistent in most applications. For pretty much every application, what you have to do is figure out how to make the AI portion and the humans work together.
I’ve heard it said that we’re still in the hype phase, and many companies are still trying to figure out lucrative commercial applications.
They’re figuring it out, and it’s just a matter of time. Some of the fancy stuff is not there yet because companies are having problems getting their data infrastructure ready. Their data is messy; it’s not in a format that allows them to easily use simple AI applications—or they might not even own the data. But it’s the nonflashy stuff that’s most lucrative. For example, using cameras in a factory to detect manufacturing defects. There’s nothing generative about it—it’s just looking at little differences in those pictures.
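To make this concrete, here is a deliberately simple sketch of camera-based defect detection. Production systems typically use trained vision models, but the core task is the one Katona describes (spotting small differences between pictures), so this toy version just compares each frame to a known-good reference image. Every image and threshold here is invented:

```python
# Minimal sketch of camera-based defect detection: compare each captured
# frame against a known-good reference image and flag parts whose
# pixel-level difference is too large.
import numpy as np

def defect_score(reference: np.ndarray, frame: np.ndarray) -> float:
    """Mean absolute pixel difference between a frame and the reference."""
    return float(np.mean(np.abs(reference.astype(float) - frame.astype(float))))

def is_defective(reference: np.ndarray, frame: np.ndarray, threshold: float = 5.0) -> bool:
    return defect_score(reference, frame) > threshold

# Fake 64x64 grayscale images standing in for camera frames.
rng = np.random.default_rng(1)
reference = rng.integers(0, 256, size=(64, 64))
good_part = reference + rng.integers(-2, 3, size=(64, 64))  # sensor noise only
bad_part = good_part.copy()
bad_part[20:40, 20:40] = 0                                   # a visible flaw

print(is_defective(reference, good_part))  # False: noise alone stays under the threshold
print(is_defective(reference, bad_part))   # True: the flaw pushes the score over it
```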
What’s an example of something businesses are doing well, or a significant change, in marketing?
That depends on whether you mean something that’s widely used versus something that’s flashy and innovative. I like the ones that are useful, such as all the personalization that’s happening on a large scale. For example, customized videos for each product where somebody explains exactly what the product does. It’s happening on Chinese ecommerce sites, but it’s coming to Amazon very soon. It’s just impossible to do something very personalized to the consumer without this kind of technology.
In your recent research, you looked at whether market researchers can replace human participants with ‘synthetic respondents’ generated through a large language model (LLM). What did you find?
We only tried a couple of product categories, but it worked pretty well. We had just three variables—age, gender, and income. There was 75% to 90% agreement with human data.
Are we almost at the point where market researchers can accurately use AI for product research?
They’re already doing it. Our paper was about validating that this works, and for that, you still need human data to compare against. The promise of these ‘synthetic respondents’ is that you can get very specific types of responses that would otherwise be hard to get from humans, and at a much lower cost. Let’s say, a person who earns a million-plus dollars, lives in a fully electrified home, and drives a Cybertruck. You can ask the AI to pretend to be that person and answer questions about their perceptions of cars. You’ll get a response, but it’s still hard to validate because you have to find that human to compare it to.
“The promise of these ‘synthetic respondents’ is that you can get very specific types of responses that would otherwise be hard to get from humans, and at a much lower cost. Let’s say, a person who earns a million-plus dollars, lives in a fully electrified home, and drives a Cybertruck.”
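The interview does not describe the prompts or models used in the study, but the general “synthetic respondent” idea can be sketched roughly as below. The client library, model name, persona, and survey question are all illustrative assumptions, not the researchers’ actual setup:

```python
# Rough sketch of the "synthetic respondent" idea: prompt a large
# language model to answer a survey question as a specific persona.
# The model, prompts, and persona are illustrative assumptions, not
# the setup used in the study discussed in the interview.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

persona = (
    "You are answering a market-research survey. Respond as this person: "
    "58 years old, female, household income around $150,000 per year."
)
question = (
    "On a scale of 1 to 7, how likely are you to buy an electric vehicle "
    "in the next two years, and why?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```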
So the more specific you get, the less accurate it might be because it would be hard to create a sample of people like that.
The question becomes: how bad would it get? Would it be totally random, or would it still give you some idea?
How would this work with a new product—such as a brand-new car brand with no information from humans on the internet, or a really innovative product that doesn’t exist yet?
In theory, if you can accurately describe the new brand or new product, the language model could make those inferences. For example, you can tell AI to create an image of a cat with a mask on, right? You don’t need examples of cats with masks on. You need examples of cats, and examples of masks, and examples of some kind of human or animal with a mask on so that it understands these three things.
How does AI do in coming up with new products?
We actually have a working paper comparing humans and AI on creativity in generating product ideas. We do find that AI is better, but if you look only at the top answers, the difference is much smaller. And again, it’s more of a human data problem: with AI, we can get the best models to generate ideas, but we can’t ensure that the most creative humans are going to be in our pool of human participants. It’s still very likely that if you somehow managed to get all the humans in the world to participate in our study, the best ones would be better than the AI.
You teach the business of AI to MBA students and through Berkeley Executive Education. With things moving so fast, what skills do managers working with AI technology need?
They need to understand the fundamentals of how it works. That’s the No. 1 thing. It’s even better if they have some coding skills, and I do make them go through an exercise with code, so they have at least a feeling for what it looks like and for the building blocks. Obviously, anyone who studies engineering will understand it in great detail, but for nontechnical people, just understanding how it works helps them a lot. Other than that, they need to understand how to manage technology, which is not that specific to AI. If you put together those skills with some understanding of how the specific technology works, it’s tremendously helpful.
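Katona’s actual classroom exercise is not shown here, but an exercise of the kind he describes, just enough code to get a feel for the building blocks, might look something like this sketch: example data goes in, a simple model is fit, and predictions come out.

```python
# A guess at the flavor of a "feel for the building blocks" exercise
# (not Katona's actual course material): data in, model fit, predictions out.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Data: the examples the model learns from.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# 2. Model: a simple classifier standing in for "the AI".
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 3. Outcome: how well the learned model does on data it has not seen.
print("Accuracy on held-out data:", model.score(X_test, y_test))
```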
Can nontechnical people learn enough to be effective?
A marketing colleague of mine who teaches analytics likes to say that it’s easier to teach managers analytics than to teach data scientists to be good managers. I share that view, and again, managers don’t have to be as technically advanced as the engineers. But they should understand how the data goes in and how it results in a desired outcome.
My advice is that managers should learn enough about how it works to talk to the people who build these things, especially with respect to the data needs. The model’s “objective function,” as I call it, specifies what the model should do. That’s just an equation, but translating the business objective into that equation is a critical task that somebody has to do, and it’s not going to be a data scientist. It’s rarely going to be the engineer.
Is that pretty much the same as for technology management in general?
Well, the difference is in how AI works, and specifically that it learns from examples. So you have to think hard about what those examples are, and you have to think hard about how you train it. How should the so-called error or loss function be specified? What are you aiming for, and when do you say it’s good enough?
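As a hypothetical illustration of that point, the sketch below trains the same simple classifier twice on synthetic data: once with equal error costs, and once with mistakes on the rare “fraud” class weighted ten times as heavily in the loss. The numbers are invented; the point is that someone has to choose them, and that choice is the translation from business objective to loss function that Katona is describing.

```python
# Illustrative sketch: encoding a business objective in the loss function.
# Here, missing a "fraud" case (label 1) is treated as ten times as costly
# as a false alarm, via class weights. All data and weights are made up.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=5000, weights=[0.97, 0.03], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Plain objective: every mistake costs the same.
plain = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Business-aware objective: errors on the rare class are weighted 10x in the loss.
weighted = LogisticRegression(max_iter=1000, class_weight={0: 1, 1: 10}).fit(X_train, y_train)

for name, model in [("equal costs", plain), ("rare class weighted 10x", weighted)]:
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    print(f"{name}: missed positives={fn}, false alarms={fp}")
```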
Will there be jobs for marketing managers without engineering backgrounds?
I think there will be. Marketing is such a subjective topic that it’s hard to evaluate all the things AI needs to do. It comes down to a lot of human judgment. If AI can do every job in the world, then yes, marketing people will be replaced as well. But it’s very hard to show that a machine can do the work better than a human.
“Marketing is such a subjective topic that it’s hard to evaluate all the things AI needs to do. It comes down to a lot of human judgment.”
Because of all the different aspects of it?
Yes, it’s a very complex type of work. And because of that complexity, you need a lot of people who know how to use these tools. So their jobs might change, but they will have jobs for sure. A few years ago everybody was saying that blue-collar jobs would be replaced, and now they’re instead talking about all the white-collar jobs. But neither is really happening. Some tasks are being replaced, yes, but people will still have jobs—although those jobs will be different.