Systems scientists find clues to why false news snowballs on social media

The spread of misinformation on social media is a pressing societal problem that tech companies and policy makers continue to confront, yet those who study the issue still do not deeply understand why and how fake news spreads.

To shed some light on this murky topic, researchers at the Massachusetts Institute of Technology developed a theoretical model of a Twitter-like social network to study how news is shared and to explore situations in which a non-credible news item will spread more widely than the truth. Agents in the model are driven by a desire to persuade others to take on their point of view: the key assumption in the model is that people bother to share something with their followers only if they think it is persuasive and likely to move others closer to their mindset. Otherwise, they won't share.

The researchers found that in such a situation, when the network is highly connected or the opinions of its members are sharply polarized, potentially false news spreads more widely and travels deeper into the network than higher-credibility news.

This theoretical work could inform empirical studies of the relationship between news credibility and its reach, which could help social media companies adapt networks to limit the spread of misinformation.

“We show that, even if people are rational in how they decide to share news, this can still lead to the amplification of information with low credibility. With this persuasion motive, no matter how extreme my beliefs are (given that the more extreme they are, the more I gain by moving others’ opinions), there is always someone who would amplify [the information],” says senior author Ali Jadbabaie, professor and head of the Department of Civil and Environmental Engineering, a core faculty member of the Institute for Data, Systems, and Society (IDSS), and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).

Jadbabaie is joined on the paper by first author Chin-Chia Hsu, a graduate student in the Social and Engineering Systems program at IDSS, and Amir Ajorlou, a LIDS research scientist. The research will be presented this week at the IEEE Conference on Decision and Control.

Pondering persuasion

This research builds on a 2018 study by Sinan Aral, the David Austin Professor of Management at the MIT Sloan School of Management; Deb Roy, a professor of media arts and sciences at the MIT Media Lab; and former postdoc Soroush Vosoughi, now an assistant professor of computer science at Dartmouth College. Their empirical study of Twitter data found that false news spreads more widely, faster, and deeper than real news.

Jadbabaie and his collaborators wanted to drill down on why this happens.

They hypothesized that persuasion might be a powerful motivator for sharing news—agents in the network might want to persuade others to take their point of view—and decided to build a theoretical model that would allow them to explore this possibility.

In their model, agents have some preconceived belief about politics, and their goal is to persuade followers to move their beliefs closer to the agent’s side of the spectrum.

A news item is initially released to a small, random subset of agents, each of which must decide whether to share it with their followers. An agent weighs the newsworthiness and credibility of the item, and updates its belief based on how surprising or convincing the news is.

“They will make a cost-benefit analysis to see if, on average, this piece of news will move people closer to what they think, or move them away,” Jadbabaie says. “We include a nominal cost for sharing. For instance, if you are scrolling on social media, you have to stop to take that action. Think of that as a cost. Or a reputation cost might come if I share something that is embarrassing. Everyone has this cost, so the more extreme and the more interesting the news is, the more you want to share it.”

If the news confirms the agent’s point of view and has persuasive power beyond the nominal cost, the agent will always share the news. But if the agent believes the news is something that others may already have seen, the agent is not motivated to share it.

Since an agent's willingness to share news is a product of its point of view and how persuasive the news is, the more extreme an agent's perspective or the more surprising the news, the more likely the agent is to share it.
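
That decision rule is simple enough to sketch in code. The snippet below is a minimal, illustrative reading of it; the variable names, the [-1, 1] belief scale, the "surprise" score, and the flat cost are all assumptions made for illustration, not the paper's actual utility model:

```python
def will_share(belief: float, news_position: float,
               surprise: float, cost: float = 0.05) -> bool:
    """Decide whether an agent shares a news item.

    belief:        agent's position on [-1, 1]; larger magnitude = more extreme
    news_position: which side of the spectrum the item supports
    surprise:      how novel or attention-grabbing the item is, on [0, 1]
    cost:          nominal cost of sharing (time, reputation risk)
    """
    # Only news that pulls followers toward the agent's own side carries a
    # persuasion benefit, and that benefit grows with the extremity of the
    # agent's belief and with how surprising the item is.
    aligned = (belief > 0) == (news_position > 0)
    benefit = abs(belief) * surprise if aligned else 0.0
    return benefit > cost
```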

The researchers used this model to study how information spreads through a news cascade, an unbroken chain of sharing that rapidly permeates the network.
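
One way to see how such cascades arise is to run the sharing rule sketched above on a toy follower graph. The simulator below uses networkx and reuses that will_share function; the graph model, the belief distribution, and every parameter are illustrative choices, not the paper's:

```python
import random

import networkx as nx  # third-party: pip install networkx


def simulate_cascade(n=2000, p=0.01, polarization=0.2, news_position=1.0,
                     surprise=0.8, cost=0.05, seeds=5):
    """Seed a news item to a few random agents and count how many share it."""
    g = nx.gnp_random_graph(n, p, directed=True)  # edge u -> v: v follows u

    # Beliefs on [-1, 1]; raising `polarization` pushes every agent's
    # belief magnitude toward the extremes (an illustrative distribution).
    beliefs = {v: random.choice([-1, 1]) * random.uniform(polarization, 1.0)
               for v in g}

    exposed = random.sample(list(g), seeds)  # agents who have seen the item
    decided = set()
    sharers = set()
    while exposed:
        v = exposed.pop()
        if v in decided:
            continue
        decided.add(v)
        # Same rule as the sketch above: share only if the persuasion
        # benefit of the item exceeds the nominal cost of sharing.
        if will_share(beliefs[v], news_position, surprise, cost):
            sharers.add(v)
            exposed.extend(g.successors(v))  # v's followers now see the item
    return len(sharers)
```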

Connectivity and polarization

The team found that when a network has high connectivity and the news is surprising, the credibility threshold for starting a news cascade is lower. High connectivity means that there are many connections between many users in the network.

Likewise, when the network is highly polarized, there are many agents with extreme views who want to share the news item, so they start the news cascade. In both instances, news with low credibility creates the largest cascades.
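
A quick sweep with the toy simulator above illustrates both effects: raising connectivity (more follower links) or polarization (more extreme beliefs) enlarges the cascade of a surprising item. The numbers are illustrative only:

```python
# Illustrative sweep over connectivity (p) and polarization; averages of a
# few runs per setting, so expect noisy but directionally consistent output.
for p in (0.002, 0.01, 0.05):
    for pol in (0.0, 0.4, 0.8):
        sizes = [simulate_cascade(p=p, polarization=pol) for _ in range(10)]
        print(f"p={p}, polarization={pol}: mean sharers = "
              f"{sum(sizes) / len(sizes):.0f}")
```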

“For any piece of news, there is a natural network speed limit, a range of connectivity, that facilitates good transmission of information, where the size of the cascade is maximized by true news. But if you exceed that speed limit, you will get into situations where inaccurate news or news with low credibility has a larger cascade size,” Jadbabaie says.

If the views of users in the network become more diverse, a piece of news with low credibility is less likely to spread more widely than the truth.

Jadbabaie and his colleagues designed the agents in the network to behave rationally, so the model would better capture the actions real humans might take if they want to persuade others.

“Someone might say that is not why people share, and that is valid. Why people do certain things is a subject of intense debate in cognitive science, social psychology, neuroscience, economics, and political science,” he says. “Depending on your assumptions, you end up getting different results. But I feel like this assumption of persuasion being the motive is a natural assumption.”

Their model also shows how costs can be manipulated to reduce the spread of misinformation. Agents perform a cost-benefit analysis and will not share news if the cost of doing so outweighs the benefit of sharing.

“We don’t offer any policy prescriptions, but one thing this work suggests is that, perhaps, having some cost associated with sharing news is not a bad idea. The reason you get so many of these cascades is because the cost of sharing news is actually very low,” he says.
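
In the toy simulator above, that intervention is a one-parameter experiment: as the nominal sharing cost rises, fewer agents clear the cost-benefit bar, so cascades shrink. Again, the parameter values are illustrative only:

```python
# Illustrative: as the sharing cost rises, only the most extreme agents
# still find sharing worthwhile, and cascades shrink accordingly.
for cost in (0.01, 0.05, 0.2, 0.5):
    sizes = [simulate_cascade(cost=cost) for _ in range(10)]
    print(f"cost={cost}: mean sharers = {sum(sizes) / len(sizes):.0f}")
```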

“The study of the role of social networks in shaping opinions and influencing behavior has received widespread attention. Empirical research by Sinan Aral and his co-authors at MIT shows that fake news gets transmitted more widely than real news,” says Sanjeev Goyal, a professor of economics at the University of Cambridge, who was not involved with this research. “In their new paper, Ali Jadbabaie and his collaborators offer us an explanation for this puzzle with the help of an elegant model.”

This work was supported by an Army Research Office Multidisciplinary University Research Initiative grant and a Vannevar Bush Fellowship from the Office of the Secretary of Defense.
