The power of groupthink: Study shows why ideas spread in social networks


There’s a reason that ideas—even erroneous ones—catch fire on social media or in popular culture: groupthink.

New research co-authored by Berkeley Haas Asst. Prof. Douglas Guilbeault shows that large groups of people all tend to think alike, and also illustrates how easily people’s opinions can be swayed by social media—even by artificial users known as bots.

In a series of experiments, published in the journal Nature Communications, Guilbeault and co-authors Damon Centola of the University of Pennsylvania and Andrea Baronchelli of City, University of London created an online game that asked participants, in groups of varying sizes, to identify what they saw in Rorschach inkblots.

Bigger groups, fewer categories

“In small groups, there was a ton of variation in how people described the shapes,” says Guilbeault, who studies collective intelligence and creativity, categorization, and social media policy. “As you increase the size of the group, however, rather than creating unpredictability, you could actually increase your ability to predict the categories.”

It’s not that there was a lack of ideas in the large groups—in fact, the larger the group, the more categories for blots were initially proposed. However, some categories just seemed to appeal to more people than others. As more people communicated with each other, the slightly more popular categories won out. The large groups consistently settled on just a handful of categories, including “crab,” “bunny,” “frog,” and “couch”—even when the blots themselves varied.

“When you’re in a small group, it’s more likely for unique perspectives to end up taking off and getting adopted,” Guilbeault explains. “Whereas in large groups, you consistently see ‘crab’ win out because multiple people are introducing it, and you get a cascade.”
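The dynamic Guilbeault describes is closely related to “naming game” models of convention formation, a line of research Baronchelli is known for. As a rough sketch only, and not the authors’ experimental design, the toy Python simulation below pairs agents at random and lets them coordinate on labels; the category names and the “appeal” weights that make some labels slightly catchier are invented for the example:

```python
import random
from collections import Counter

# Hypothetical "appeal" weights: we assume some labels are slightly
# catchier than others. Names and numbers are invented for illustration;
# the real experiment used participants' own free-form labels.
APPEAL = {"crab": 1.3, "bunny": 1.1, "frog": 1.0, "couch": 0.9, "mask": 0.7}

def invent():
    """Propose a fresh label, biased toward the catchier ones."""
    names, weights = zip(*APPEAL.items())
    return random.choices(names, weights=weights)[0]

def converge(n_agents, n_rounds=20000):
    """Minimal naming-game-style dynamic: random pairs coordinate on labels."""
    inventories = [set() for _ in range(n_agents)]
    for _ in range(n_rounds):
        speaker, listener = random.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add(invent())   # first utterance
        name = random.choice(sorted(inventories[speaker]))
        if name in inventories[listener]:
            inventories[speaker] = {name}        # coordination success:
            inventories[listener] = {name}       # both keep only the winner
        else:
            inventories[listener].add(name)      # listener learns the label
    # Report the most common surviving label as this group's "winner."
    return Counter(n for inv in inventories for n in inv).most_common(1)[0][0]

# Small groups tend to lock in on varied labels across runs; large groups
# more reliably land on the catchiest one.
for size in (4, 24, 96):
    print(size, Counter(converge(size) for _ in range(40)))
```

In this kind of model, larger groups draw more independent proposals, so the intrinsically catchier label more reliably starts as the plurality and then cascades, which mirrors the scale effect the study reports.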


The influence of bots

Interestingly, however, he and his colleagues were able to manipulate the choices people made by introducing “bots” with an agenda into the system. These automated participants continually implanted the idea that the blots looked like a sumo wrestler, an otherwise unpopular category. Sure enough, when a critical mass of bots pushed the idea, human participants also started adopting it.

The researchers found that once more than a third (37%) of participants advocated for “sumo wrestler,” the group was likely to adopt it over other categories. What’s more, when they afterwards showed those participants the image most consistently deemed a crab by other groups, the participants were now much more likely to call it a sumo wrestler as well. “We showed people the crabbiest crab, and now people said it looked like a wrestler. No one described it as looking like a sumo wrestler, let alone like a person, in the large groups without bots,” Guilbeault says.
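The bot manipulation can be sketched in the same toy model by adding “committed” agents that always say “sumo wrestler” and never change their minds. This is only an illustration of the tipping-point idea, not the study’s protocol; the ORGANIC label list is assumed for the example, and the toy model’s critical fraction will not necessarily match the 37% reported above:

```python
import random

ORGANIC = ["crab", "bunny", "frog", "couch"]  # hypothetical human labels

def sumo_adoption(n_agents=96, bot_fraction=0.37, n_rounds=40000):
    """Toy tipping-point demo: committed bots always say "sumo wrestler"
    and never update; humans follow the pairwise coordination rule."""
    n_bots = int(n_agents * bot_fraction)
    committed = set(range(n_bots))
    inventories = [{"sumo wrestler"} if i in committed else set()
                   for i in range(n_agents)]
    for _ in range(n_rounds):
        speaker, listener = random.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add(random.choice(ORGANIC))
        name = random.choice(sorted(inventories[speaker]))
        if name in inventories[listener]:      # coordination success
            if speaker not in committed:
                inventories[speaker] = {name}
            if listener not in committed:
                inventories[listener] = {name}
        elif listener not in committed:
            inventories[listener].add(name)    # listener learns the label
    # Fraction of humans whose vocabulary now includes the bots' label.
    humans = inventories[n_bots:]
    return sum("sumo wrestler" in inv for inv in humans) / len(humans)

# Sweep the bot fraction to look for a tipping point.
for frac in (0.10, 0.25, 0.40):
    print(f"bots={frac:.0%}  human sumo adoption={sumo_adoption(bot_fraction=frac):.0%}")
```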

The same phenomenon happens on social media, says Guilbeault, who has previously researched the influence of Twitter bots, including their role in the 2016 election. By pushing an idea over and over, both real and automated users are able to sway the majority to use their terms. “In some sense, Trump’s presidency was a war over categories,” Guilbeault says. “Ten years ago, no one was talking about ‘fake news,’ and now everyone is trying to categorize whether news media is fake or not.”


For that reason, he says, content moderation by social media platforms that relies on identifying the difference between real and fake news may actually be doing more harm than good, by subtly validating the very categories it criticizes. “Just by trying to put out the fire in the moment, they are sending the message that this is the right category,” he says, “while a different category system may allow for more nuance and subtlety.”

A better approach may be to focus on getting rid of the bots spreading the categories in the first place, he says, or to create more accurate categories that are also appealing enough to spread. Take the naming of the more contagious coronavirus variants that have appeared, for example. Small groups of scientists devised highly technical names, such as 501Y.V2 and B.1.1.7, to describe the new strains. Meanwhile, the public has adopted easier-to-remember geographical names such as the South Africa and UK variants, respectively.

The problem is that those names risk stigmatizing regions and misrepresenting how widespread these variants really are. A scientific body could test new naming systems that might be both more accurate and more successful in getting adopted.

“You could do market research in a networked focus group,” says Guilbeault, to discover or create the social media equivalent of “crab” and spread more benign ideas. Those strategies might ultimately prove more effective than flags or warnings in changing the way people communicate, he says, leading to a more civil public discourse overall.
