The race to control the gen AI market has begun. Who will come out on top?

Generative AI is at an inflection point between consolidation and true competition, argues Associate Professor Abhishek Nagaraj.

An AI-generated image shows a futuristic data center. (AdobeStock)

While generative AI (gen AI) is spurring a quantum leap of innovation in fields from consumer marketing to protein discovery, a rapid consolidation is taking place behind the scenes. A few major players may soon gain tight control over the future of the field—unless policymakers act fast to promote more balance and competitiveness, argues Berkeley Haas Associate Professor Abhishek Nagaraj.

“There’s no doubt in my mind that the market will be dominated by a few key players,” says Nagaraj, who sounds a warning in a recent National Bureau of Economic Research working paper, Old Moats for New Models: Openness, Control, and Competition in Generative AI. “The big question is how concentrated are we talking about? Even turning the dial on that a little will be really beneficial.”

The stakes are high: A landscape controlled by a few AI overlords could mean less transparency, innovation, and efficiency overall, stifling potentially more transformative versions of gen AI technology in the future. However, the traditional “moats” that protect new technology, such as patent protections and secrecy around intellectual property, are unlikely to be what shapes the gen AI landscape, given the massive edge that large companies already enjoy, Nagaraj says.

How firms gain advantage

In the paper, cowritten with MIT Sloan’s Pierre Azoulay and Harvard Business School’s Joshua Krieger, the authors draw upon the pioneering work of Haas professor David Teece, who has examined innovation and competition for more than 40 years in industries such as computing, pharmaceuticals, and the internet. Teece distinguishes between two ways to gain advantage in a competitive environment: appropriability, the ability to guard an innovation’s core technology against copying; and complementary assets, control over the ability to transform innovative know-how into a value proposition that customers will pay for.

“If I come up with the idea for a drug, for example, I can protect that idea with a strong patent, even if you can already see the idea,” Nagaraj says. “But I can also protect it by controlling distribution, or the capacity to gain approval or convince doctors to prescribe it.”

With gen AI, the foundational model architecture is well understood, essentially published in openly available research articles, making the core technology difficult to protect. And even if it weren’t, the rapid turnover among Silicon Valley firms makes secrecy impossible. “In California, noncompete clauses are illegal, so it’s quite common for firms to hire from rival companies,” Nagaraj says. “There’s also a relative willingness to discuss, even informally, how these things work.” Pioneering AI firms therefore cannot rely on intellectual property alone to protect themselves from competition the way their counterparts in the pharmaceutical industry can.

Massive computing power needed

On the other hand, he and his co-authors argue, the complementary assets big gen AI firms already enjoy are impressive. Chief among them is the massive amount of computing infrastructure needed to run these systems, what they call the “compute environment,” which requires Herculean levels of data crunching. Meta alone is acquiring hundreds of thousands of Nvidia’s state-of-the-art H100 graphics cards at a cost of billions of dollars, straining supply (and driving up Nvidia’s share price). “The scale required is mind-boggling,” Nagaraj says.

In addition, big firms are scraping the internet for immense amounts of training data at a scale prohibitive for smaller companies. Large players can use that data to set their own performance benchmarks and ethical standards in a way that other companies have little choice but to follow, giving the big fish an advantage in how their AI systems are ranked. “These benchmarks are all super-subjective, and tied to the training data that firms use, so they are implicitly designed in a way that makes the market leaders look good,” Nagaraj says.

The role of open source

Ironically, the gen AI environment has so far remained competitive thanks to one of the big players itself. Bucking the trend among its rivals, Meta released an open-source version of its gen AI model called LLaMA, whose spread was accelerated by an accidental leak last year. In its wake, multiple offshoots, including Berkeley’s Vicuna, Stanford’s Alpaca, and other “spawns of LLaMA,” flooded the market, instantly creating a renaissance in the field, the authors argue. “There are so many ways people are experimenting with it that wouldn’t be possible with just OpenAI,” Nagaraj says. While the move is promising, Meta hasn’t been completely open with its training data, and Nagaraj speculates it may follow the path Google took with Android, exerting control over the platform through other means.

A national AI infrastructure?

In the meantime, Nagaraj and his coauthors argue for a more hands-on role for policymakers in managing the complementary assets that give big companies an advantage—before it’s too late. One intriguing idea is a national AI infrastructure open for any company to use, similar to the way a national highway system aids interstate commerce. “It could really lower the bar to democratize the compute environment,” Nagaraj says. California’s controversial SB 1047 bill includes a provision for a system called CalCompute, which would be similar in spirit. Beyond that, regulators could standardize benchmarks for performance and safety to create greater transparency, setting more objective measures that could level the playing field and allow more innovative companies to showcase their abilities. “We can’t let a small number of companies decide what’s good or what’s safe,” Nagaraj says.

Even as they implement curbs, Nagaraj warns, policymakers must be careful not to overregulate in ways that could reduce competition. Requiring companies to reimburse content providers for the use of their data, for example, would be welcome from the perspective of compensating content creators. From a competitive perspective, however, it could hand an advantage to larger firms with the deeper pockets to pay, unless a provision exempted firms under a certain size. Policymakers must balance these competing effects.

In the end, the first-mover advantage of larger companies may be too great to overcome, leaving smaller startups to innovate on the margins of foundational models, Nagaraj says. After all, computers, cell phones, and cloud computing all came to be dominated by just a few firms. On the other hand, the early internet offers an example of a much more democratic model, full of messiness and possibility. To the extent that gen AI can follow suit, its potential to be a truly transformative technology will only grow.

Read the paper:

“Old Moats for New Models: Openness, Control, and Competition in Generative AI”
By Pierre Azoulay, MIT Sloan; Joshua Krieger, Harvard Business School; and Abhishek Nagaraj, Haas School of Business, UC Berkeley
NBER Working Paper
