
Like-Minded

The power and vulnerability of social networks

Featured Researcher

Douglas Guilbeault

Assistant Professor, Management of Organizations

By Michael Blanding

Illustration by Daniel Hertzberg

Illustration: A brain formed of images of people with outstretched arms.

There’s a reason ideas—even erroneous ones—catch fire on social media, says Berkeley Haas Assistant Professor Douglas Guilbeault: groupthink. His new research, published in Nature Communications, shows that large groups all tend to think alike and illustrates how easily people’s opinions can be swayed by social media—even by artificial users known as bots.

In an experiment, Guilbeault and colleagues asked people, in groups of varying sizes, to identify what they saw in Rorschach inkblots.

“In small populations, there was a ton of variation in how people described the shapes,” says Guilbeault. “As you increase the size of the population, however, rather than creating unpredictability, you could actually increase your ability to predict the categories they’d decide on.”

The large groups consistently settled on just a handful of ways to describe the numerous different blots, including “crab,” “bunny,” “frog,” and “couch.”

“When you’re in a small group, it’s more likely for unique perspectives to end up taking off and getting adopted,” Guilbeault explains. “Whereas in large groups, you consistently see ‘crab’ win out because separate individuals are introducing it, and you get a cascade.”

Interestingly, he and his colleagues were able to manipulate people’s choices by introducing bots with an agenda into the system. These automated participants repeatedly pushed the idea that the inkblots looked like a sumo wrestler, an otherwise unpopular category. Sure enough, when bots accounting for 37% of participants promoted the idea, human users also started adopting it over other categories.
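To make the cascade dynamic concrete, here is a minimal, hypothetical simulation sketch of how a committed minority can tip a group’s labels. It assumes a simplified copy-the-speaker adoption rule, and the agent count, round count, and label set are illustrative stand-ins; it is not the protocol used in the actual study.

```python
import random

def simulate(n_agents=100, bot_fraction=0.37, rounds=5000, seed=0):
    """Toy model of label cascades with a committed bot minority.

    Bots always say "sumo" and never change; humans start with one of a
    few popular labels and copy whatever label they last heard. This is
    an illustrative sketch, not the experiment's actual design.
    """
    rng = random.Random(seed)
    n_bots = int(n_agents * bot_fraction)
    starting_labels = ["crab", "bunny", "frog", "couch"]

    humans = [{"label": rng.choice(starting_labels), "bot": False}
              for _ in range(n_agents - n_bots)]
    bots = [{"label": "sumo", "bot": True} for _ in range(n_bots)]
    agents = humans + bots

    for _ in range(rounds):
        speaker, listener = rng.sample(agents, 2)
        if not listener["bot"]:
            # Simplified adoption rule: a human adopts the speaker's label.
            listener["label"] = speaker["label"]

    # Tally which labels the human participants end up using.
    counts = {}
    for agent in humans:
        counts[agent["label"]] = counts.get(agent["label"], 0) + 1
    return counts

if __name__ == "__main__":
    print(simulate())  # prints the final distribution of labels among human agents
```

Because the bots never abandon their label while every other label can be abandoned, repeated pairwise exchanges tend to pull the human population toward the bots’ category over time.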


What’s more, when the researchers afterward showed those participants the image most commonly deemed a crab by other groups, they were much more likely to call it a sumo as well. “We showed people the crabbiest crab,” Guilbeault says, “but now plenty of people described it as looking like a sumo.”

The same phenomenon happens on social media, says Guilbeault, who has previously researched the influence of Twitter bots. By pushing an idea continuously, both real and automated users are able to sway the majority to use their terms. “In some sense, Trump’s presidency was a war over categories,” Guilbeault says. “Ten years ago, no one was talking about ‘fake news,’ and now everyone is trying to categorize whether news media is fake or not.”

For that reason, he says, content moderation by social media platforms that relies on identifying the difference between real and fake news may actually be doing more harm than good by subtly validating those very categories. A better approach, Guilbeault says, may be to focus on eliminating the bots spreading the categories in the first place—or to create more accurate categories that are also appealing enough to spread.

“You could do market research in a networked focus group,” says Guilbeault, as a way to discover and spread more benign ideas. Such strategies might ultimately prove more effective than flags or warnings at changing the way people communicate, leading to a more civil public discourse overall.
