As extremist groups and fringe movements like QAnon have gained mainstream awareness, their ability to rapidly spread misinformation and conspiracy theories has put social media platforms under intense public scrutiny. Facebook, Twitter, and other technology companies have been reprimanded by Congress and the media alike for failing to tackle online extremism among their users. With increasing political polarization in the United States, the question of whether these platforms' algorithms, inadvertently or by design, steer users toward extremist and misleading content is becoming more pressing.
But as Homa Hosseinmardi points out, one major platform has, surprisingly, received less attention: YouTube. Hosseinmardi, a senior research scientist and lead researcher on the PennMAP project at Penn's Computational Social Science Lab (CSSLab), a joint effort of the School of Engineering and Applied Science, the Annenberg School for Communication, and the Wharton School, notes that while YouTube is often seen as an entertainment channel rather than a news source, it is perhaps the world's largest media consumption platform.
"Researchers have overlooked YouTube because we didn't think it was a place for news," Hosseinmardi says. "But if you look at the metrics, it has more than two billion users. If you take that population and multiply it by even the small fraction of content on YouTube that is news, you realize that the amount of information people consume on YouTube is far greater than on Twitter."
Hosseinmardi's research is driven by questions about human behavior, particularly in online spaces. In 2019, she joined the CSSLab, directed by Stevens University Professor Duncan Watts. In her work with the lab, Hosseinmardi uses large-scale data and computational methods to gain insights into issues including media polarization, algorithmic bias, and how social networks affect our lives.
Several years ago, a team of researchers including Hosseinmardi and Watts became interested in the relationship between online extremism and news consumption on YouTube. To what extent do YouTube's algorithms drive engagement with highly partisan or extremist content, and to what extent does an individual's own online behavior play a role? Their work aims to answer a core question: if people start from different places on YouTube, will they, after watching a sequence of recommended videos, end up at the same destination?
This story was written by Alina Lidginsky. Read more at the Annenberg School for Communication.