Can Effective Regulation Reduce the Impact of Divisive Content on Social Networks?

Amid a fresh storm of controversy sparked by The Facebook Files, a showcase of several internal research projects which, in some ways, suggest that Facebook isn’t doing enough to protect users from harm, a fundamental question that needs to be addressed is often being distorted by inherent bias, and by the targeting of Facebook specifically, as opposed to social media, and algorithmic content amplification, as a broader concept.

That is: what do we do to fix it? What can realistically be done that will actually make a difference? What changes to regulations or policies could feasibly be implemented to reduce the amplification of harmful, divisive posts, and the societal angst being fueled by the expanding influence of social media apps?

Here it’s important to look at social media more broadly, because every social platform uses algorithms to determine content distribution and reach. Facebook is just the largest, and it has more influence over key elements, like news content – and of course, the research insights in this case came from Facebook itself.

Focusing on Facebook, specifically, makes sense, but Twitter also amplifies content that sparks more engagement, LinkedIn ranks its feed based on what it determines will be most engaging, and TikTok’s algorithm is highly attuned to your personal interests.

The problem, as Facebook whistleblower Frances Haugen points out, lies in algorithmic distribution itself, not in Facebook specifically – so what ideas are there that could realistically improve that element?

The other question is whether social platforms would be willing to make such changes at all, especially if they pose a risk to their engagement and user activity levels.

Haugen, an expert in algorithmic content ranking, has suggested that social networks should be forced to stop using engagement-based algorithms altogether, via reforms to Section 230 laws, which currently shield social media companies from legal liability for what users share in their apps.

As Haugen explained:

“If we had appropriate oversight, or if we reformed [Section] 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking.”

The concept here is that Facebook – and by extension, all social platforms – would be held liable for the ways in which they amplify certain content. So if more people ended up seeing, say, COVID misinformation as a result of algorithmic intervention, Facebook could be held legally responsible for any resulting harm.

That would add significant risk to any decision-making around building such algorithms, and as Haugen notes, it would then likely force the platforms to scale back processes that boost the reach of posts based on how users engage with that content.

Essentially, that would likely push social media platforms back to the pre-algorithm days, when Facebook and other apps simply showed you a listing of content from the Pages and people you follow, in reverse chronological order, based on when each post was published. That, in turn, would reduce the motivation for people and brands to share more controversial and divisive content in order to play to the whims of the algorithm.
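For illustration, here’s a minimal sketch of the two feed models being contrasted – reverse-chronological versus engagement-ranked. The post structure and scoring are assumptions for demonstration, not any platform’s actual implementation:

```python
from dataclasses import dataclass

# A hypothetical post structure -- the fields are assumptions for
# illustration, not any platform's actual data model.
@dataclass
class Post:
    author: str
    timestamp: int   # Unix epoch seconds
    likes: int
    comments: int
    shares: int

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Pre-algorithm model: newest posts from followed accounts first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    """Algorithmic model: posts that provoke the most interaction first."""
    return sorted(posts, key=lambda p: p.likes + p.comments + p.shares,
                  reverse=True)
```

Under the first model, reach is a function of recency alone, so there’s no ranking incentive to provoke; under the second, provocation pays.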

The idea has some merit – as various studies have shown, triggering an emotional response with your social posts is key to maximizing engagement and, therefore, reach via algorithmic amplification, and the most effective emotions in this respect are humor and anger. Jokes and funny videos still perform well on all platforms, fueled by algorithmic boosting, but so do the infuriating hot takes that news outlets and partisan figures trade in, which can be a major source of the division and angst we now see online.

To be clear, Facebook alone can’t be held responsible for this. Partisan publishers and controversial figures have always played a role in the broader discourse, generating attention and engagement with their divisive views long before Facebook arrived. The difference now is that social networks facilitate that broad reach, while also, through Likes and other forms of engagement, providing a direct incentive for it – individual users get a dopamine hit from sparking responses, while publishers gain referral traffic and greater exposure through provocation.

Indeed, one of the key problems in considering this is that everyone now has a voice, and when everyone has a platform to share their thoughts and opinions, we’re all far more exposed to such, and more aware of it. In times past, you may have had no idea what your uncle’s political convictions were, but now you know, because social media reminds you every day – and this kind of peer engagement also plays a part in the broader division.

However, Haugen’s argument is that Facebook incentivizes this – for example, one report that Haugen leaked to the Wall Street Journal explains how Facebook updated its News Feed algorithm in 2018 to put more emphasis on interactions between users, and to reduce political discussion, which had become an increasingly divisive element in the app. Facebook did this by changing its weightings for different types of engagement with posts.

The idea was that this would spur more discussion, by weighting replies more heavily – but as you can imagine, by giving more value to comments as a driver of reach, the update also prompted more publishers and Pages to share increasingly divisive, emotionally charged posts in order to provoke more reactions and comments, and gain higher ranking as a result. With this update, Likes were no longer the key driver of reach, as they had been, with Facebook instead giving growing weight to comments and reactions (including “Angry”). As such, posts sparking political debate actually became more prominent, exposing more users to such content in their feeds.
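To see how such a reweighting changes what wins reach, here’s a hedged sketch. The weight values are invented for demonstration – the leaked reports describe the direction of the change (comments and reactions up relative to Likes), not exact figures:

```python
# Hypothetical engagement weights before and after a 2018-style update.
# The exact values are invented; only the direction of the change
# (comments/reactions counting more than Likes) follows the reporting.
PRE_UPDATE_WEIGHTS  = {"like": 1.0, "comment": 1.0, "reaction": 1.0}
POST_UPDATE_WEIGHTS = {"like": 1.0, "comment": 4.0, "reaction": 2.0}

def rank_score(post: dict, weights: dict) -> float:
    """Weighted sum of engagement signals used to order the feed."""
    return (weights["like"] * post["likes"]
            + weights["comment"] * post["comments"]
            + weights["reaction"] * post["reactions"])

# A pleasant, low-friction post vs. a divisive, argument-starting one.
benign      = {"likes": 500, "comments": 20,  "reactions": 30}
provocative = {"likes": 150, "comments": 180, "reactions": 120}

print(rank_score(benign, PRE_UPDATE_WEIGHTS),        # 550.0 -- benign wins
      rank_score(provocative, PRE_UPDATE_WEIGHTS))   # 450.0
print(rank_score(benign, POST_UPDATE_WEIGHTS),       # 640.0
      rank_score(provocative, POST_UPDATE_WEIGHTS))  # 1110.0 -- provocative wins
```

Same posts, same audience behavior – but the reweighting flips which one the feed rewards, which is exactly the dynamic publishers then optimize for.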

The suggestion, then, based on this internal data, is that Facebook knew this, and knew that the change had led to an increase in divisive content, but chose not to reverse it, or implement another update, because engagement, a key measure of its business success, had risen as a result.

In this sense, removing algorithmic incentive entirely would make sense – or perhaps you could look to remove algorithmic incentive for certain types of posts, like political discussion, while still boosting the reach of more engaging posts from friends, addressing both engagement goals and division concerns.

This was noted by Facebook’s Dave Gillis, who works on the platform’s product safety team, in a tweet thread posted in response to the revelations.

According to Gillis:

“At the end of the Wall Street Journal’s article on algorithmic ranking, it’s mentioned – almost in passing – that we’ve moved away from engagement-based ranking for civic and health content in News Feed. But wait – that’s kind of a big deal, right? It’s probably fine to rank, say, videos and baby photos by Likes etc., but to treat other kinds of content more carefully. This is, in fact, what our team advocated doing: using different ranking signals for health and civic content, prioritizing quality and trustworthiness over engagement. We worked hard to understand the impact, and to get leadership on board – yes, Mark too – and it’s an important change.”

This could be a way forward – using different ranking signals for different types of content, which might enable beneficial amplification, promoting useful user engagement, while reducing the incentive for some actors to post divisive material simply to game the algorithm.
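As a rough sketch of what that two-track approach could look like in principle – the topic classification, signal names, and weights below are all assumptions for illustration, not Facebook’s actual system:

```python
# Hypothetical two-track ranking: engagement signals for everyday content,
# quality/trust signals for civic and health content. All names and
# weights here are illustrative assumptions.
SENSITIVE_TOPICS = {"civic", "health"}

def rank_score(post: dict) -> float:
    if post["topic"] in SENSITIVE_TOPICS:
        # Rank on assessed quality and source trustworthiness, not on
        # reactions, so provoking outrage no longer buys extra reach.
        return 0.6 * post["quality_score"] + 0.4 * post["source_trust"]
    # Everyday content (jokes, family photos) can still rank on engagement.
    return post["likes"] + 4 * post["comments"] + 2 * post["reactions"]
```

The key design point is that, for the sensitive categories, the score doesn’t rise with reactions at all, so there’s nothing left for provocateurs to farm.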

Would that work? Again, it’s hard to say, because people would still be able to share posts, they’d still be able to comment on and redistribute material online, and there are many ways amplification can occur outside of the algorithm itself.

Essentially, there’s merit to both proposals – that social platforms could treat different types of content differently, or that the algorithms could be removed altogether to reduce the amplification of such material.

As Haugen notes, focusing on the systems themselves is important, because content-based solutions run into a range of added complexities when material is posted in other languages and regions.
