Amid the new storm of controversy sparked by The Facebook Files, a showcase of internal research projects which, in various ways, suggest that Facebook is not doing enough to protect users from harm, the fundamental question that needs to be addressed often gets distorted by inherent bias, both in targeting Facebook specifically, as opposed to social media more broadly, and in how we view algorithmic content amplification as a concept.
That is: what do we do to fix it? What can realistically be done that will actually make a difference? What changes to regulations or policies could feasibly be implemented to reduce the amplification of harmful, divisive posts, which fuel broader societal angst as a result of the increasing influence of social media apps?
Here it's important to look at social media more broadly, because every social platform uses algorithms to determine content distribution and reach. Facebook is the largest, and it has more influence over key elements, such as news content, and of course, the research insights in this case came from Facebook itself.
Focusing on Facebook, specifically, makes sense, but Twitter also amplifies content that sparks more engagement, LinkedIn ranks its feed based on what it determines will be most engaging, and TikTok's algorithm is highly attuned to your interests.
The problem, as Facebook whistleblower Frances Haugen points out, lies in algorithmic distribution itself, not in Facebook specifically. So what ideas do we have that could realistically improve that element?
The other question is whether social platforms would be willing to make such changes, especially if they pose a risk to their levels of engagement and user activity.
Haugen, an expert in algorithmic content matching, has suggested that social networks should be forced to stop using engagement-based ranking algorithms altogether, through reform of Section 230, the law which currently protects social media companies from legal liability for what users share in their apps.
As Haugen explained:
“If we had proper oversight, or if we reformed [Section] 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I believe they would get rid of engagement-based ranking.”
The concept here is that Facebook, and by extension all social platforms, would be held responsible for the ways in which they amplify certain content. So if more people end up seeing, say, COVID misinformation as a result of algorithmic intervention, Facebook could be held legally liable for any resulting impacts.
That would attach significant risk to any decision-making around building such algorithms and, as Haugen notes, could then force the platforms to move away from systems that boost the reach of posts based on how users interact with that content.
Essentially, this would likely push social media platforms back to the pre-algorithm days, when Facebook and other apps simply showed you a chronological list of content from the Pages and people you follow, based on when it was posted. That, in turn, would reduce the incentive for people and brands to share more controversial, divisive content in order to play to the whims of the algorithm.
The idea has merit. As various studies have shown, triggering an emotional response with your social posts is key to maximizing engagement, and therefore reach, under algorithmic amplification, and the most effective emotions in this respect are humor and anger. Jokes and funny videos still perform well across all platforms, boosted by algorithmic reach, but so do infuriating hot takes, which news outlets and partisan figures lean into, and which can be a key source of the division and angst we now see online.
To be clear, Facebook alone can't be held responsible for this. Partisan publishers and controversial figures have always played a role in broader discourse, generating attention and engagement with their divisive views long before Facebook arrived. The difference now is that social networks facilitate that much broader reach, while also, through Likes and other forms of engagement, providing a direct incentive for it: individual users get a dopamine hit from sparking responses, publishers gain more referral traffic, and provocateurs gain more exposure through provocation.
Indeed, one of the core problems in weighing this up is that everyone now has a voice, and when everyone has a platform to share their thoughts and opinions, we're all far more exposed to such views, and more aware of them. In the past, you may have had no idea about your uncle's political leanings, but now you know, because social media reminds you every day, and this kind of peer engagement also plays a part in the broader divide.
However, Haugen's argument is that Facebook incentivizes it. For example, one of the reports Haugen leaked to the Wall Street Journal explains how Facebook updated its News Feed algorithm in 2018 to put more emphasis on interactions between users, partly in order to reduce political discussion, which was becoming an increasingly divisive element in the app. Facebook did this by changing its weighting for different types of engagement with posts.
The idea was that this would spur more discussion, by weighting replies more heavily, but as you can imagine, assigning more value to comments as a driver of reach also prompted more publishers and Pages to share increasingly emotive, divisive posts, in order to provoke more reactions and comments, and earn higher reach as a result. Under this update, Likes were no longer the key driver of reach as they had been, with Facebook making comments and reactions (including 'Angry') increasingly important. As such, content sparking debate around political trends actually became more prominent, exposing more users to such posts in their feeds.
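The mechanics of a weighting change like this can be sketched with a toy scoring function. The weights below are hypothetical, chosen purely for illustration, not Facebook's actual values; the point is that once comments and reactions count for much more than Likes, a post that provokes argument outranks a post that people merely like.

```python
# Toy engagement-weighted ranking. The weights are hypothetical,
# chosen only to illustrate the kind of 2018-style shift described above.
WEIGHTS = {"like": 1, "reaction": 5, "comment": 15, "reshare": 30}

def score(post):
    """Sum the weighted engagement counts for a post."""
    return sum(WEIGHTS[k] * post.get(k, 0) for k in WEIGHTS)

posts = [
    {"id": "baby_photo", "like": 300, "reaction": 20, "comment": 10, "reshare": 2},
    {"id": "hot_take",   "like": 50,  "reaction": 90, "comment": 80, "reshare": 40},
]

ranked = sorted(posts, key=score, reverse=True)
# The argumentative post wins despite having far fewer Likes:
# score(hot_take)   = 50 + 450 + 1200 + 1200 = 2900
# score(baby_photo) = 300 + 100 + 150  + 60   = 610
```

Any publisher who works out (or simply observes) these relative weights has an obvious incentive to optimize for comments and reactions, which is exactly the dynamic the leaked report describes.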
The suggestion, based on this internal data, is that Facebook knew this, and knew that the change had led to a rise in divisive content, but chose not to roll it back, or implement another update, because engagement, the key measure of its business success, had increased as a result.
In this sense, removing the algorithmic incentive would make sense. Or perhaps you could look to remove the algorithmic incentive for certain post types, like political discussion, while still boosting the reach of more engaging posts from friends, addressing both engagement goals and concerns around division.
This is a point raised by Facebook's Dave Gillis, who works on the platform's product safety team, in a tweet thread posted in response to the revelations.

According to Gillis:
“At the end of the Wall Street Journal's article about algorithmic ranking, it's mentioned, almost in passing, that we moved away from engagement-based ranking for civic and health content in News Feed. But wait, that's kind of a big deal, right? It's perhaps okay to rank, say, cat videos and baby photos by Likes and so on, but to treat other kinds of content with more care. And this is, in fact, what our teams advocated for: using different ranking signals for health and civic content, prioritizing quality and trustworthiness over engagement. We did the hard work to understand the impact, and to get buy-in from leadership, yes, including Mark, and it's an important change.”
This could be a way forward: using different ranking signals for different types of content, which might enable the beneficial side of amplification, facilitating valuable user engagement, while reducing the incentive for some actors to post divisive material in order to feed algorithmic reach.
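One way to picture Gillis's proposal is to route each post through a signal set chosen by content type, so civic and health content is ranked on quality and trust signals rather than raw engagement. Everything below (the content categories, signal names, and example values) is a hypothetical sketch of the idea, not Facebook's implementation.

```python
# Hypothetical sketch: different ranking signals per content type.
ENGAGEMENT_SIGNALS = ("likes", "comments", "reshares")
QUALITY_SIGNALS = ("source_trust", "info_quality")  # assumed 0.0-1.0 ratings

def rank_score(post):
    """Score entertainment content on engagement, but score civic and
    health content on quality/trustworthiness signals instead."""
    if post["type"] in ("civic", "health"):
        return sum(post.get(s, 0.0) for s in QUALITY_SIGNALS)
    return sum(post.get(s, 0) for s in ENGAGEMENT_SIGNALS)

viral_rumor = {"type": "health", "likes": 9000, "comments": 4000,
               "reshares": 2000, "source_trust": 0.1, "info_quality": 0.2}
agency_post = {"type": "health", "likes": 40, "comments": 5,
               "reshares": 10, "source_trust": 0.9, "info_quality": 0.95}

# Under quality-based ranking, the trusted post outranks the viral rumor,
# even though the rumor has vastly more engagement.
assert rank_score(agency_post) > rank_score(viral_rumor)
```

The design choice here is that the engagement numbers on civic and health posts simply never enter the score, so there's nothing for outrage-bait to optimize against in those categories.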
Would that work? Again, it's hard to say, because people would still be able to share posts, they'd still be able to comment on and redistribute material online, and there are still many ways amplification can occur outside of the algorithm itself.
Essentially, there are merits to both proposals: that social platforms could treat different types of content differently, or that engagement-based algorithms could be eliminated entirely to reduce the amplification of such material.
As Haugen notes, focusing on the systems themselves is important, because content-based solutions run into a range of added complexities when material is posted in other languages and regions:
“In the case of Ethiopia, there are 100 million people and six languages. Facebook only supports two of those languages for its integrity systems. This strategy of focusing on language-specific, content-specific systems, in order for AI to save us, is doomed to fail.”
Perhaps, then, removing the algorithms, or at least changing the regulations around how algorithms can operate, would be the better solution, which could help reduce the impact of rage-inducing, negative content across the social media space.
But then we come back to the original problem that Facebook's algorithm was designed to solve. Back in 2015, Facebook explained that it needed the News Feed algorithm not only to maximize user engagement, but also to help ensure that people saw all the updates most relevant to them.
As Facebook explained, the average user, at the time, had around 1,500 posts eligible to appear in their News Feed on any given day, based on the Pages they'd Liked and their personal connections, while for some of the most active users, that number was more like 15,000. It's simply not possible for people to read every one of those updates each day, so Facebook's key focus with its initial algorithm was to build a system that surfaced the best, most relevant content for each individual, in order to keep users as engaged as possible in the experience, and keep them coming back.
As Facebook Product Manager Chris Cox explained to Time magazine:
“If you could take stock of everything that happened on Earth today, posted anywhere by any of your friends, any of your family members, or any news source, and then pick the ten that were most important for you to know about, that would be a really great service for us to build. That's truly what we aspire to make News Feed.”
The News Feed approach has evolved a lot since then, but the fundamental challenge it was designed to solve remains. People have too many connections, follow too many Pages, and are members of too many groups to see every update each day. Without the feed algorithm, they'd miss important posts and updates, like family announcements and birthdays, and simply wouldn't be as engaged in the Facebook experience.
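The underlying filtering problem is simple to state: from roughly 1,500 eligible posts per day, select a small feed a user can actually read. A chronological feed just truncates by recency, while a relevance-ranked feed picks the top posts by some predicted-interest score. A minimal sketch, where the relevance values simply stand in for whatever prediction model a platform actually uses:

```python
import heapq

def chronological_feed(posts, limit=20):
    # Newest first: you see whatever was posted most recently, and miss the rest.
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)[:limit]

def ranked_feed(posts, limit=20):
    # Top posts by predicted relevance, regardless of when they were posted.
    return heapq.nlargest(limit, posts, key=lambda p: p["relevance"])

# ~1,500 eligible posts; one important item (say, a family announcement)
# was posted early in the day.
posts = [{"id": i, "timestamp": i, "relevance": 0.1} for i in range(1500)]
posts[3]["relevance"] = 0.99  # old, but highly relevant to this user

chrono = chronological_feed(posts)
ranked = ranked_feed(posts)

assert posts[3]["id"] not in [p["id"] for p in chrono]  # buried by recency
assert posts[3]["id"] in [p["id"] for p in ranked]      # surfaced by ranking
```

This is the trade-off at the heart of the debate: the chronological version removes the incentive to game engagement, but it also buries exactly the kind of posts the algorithm was originally built to surface.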
Without the algorithm, Facebook would lose out by failing to optimize for what its audience wants to see, and as shown in another report shared as part of The Facebook Files, it's already seeing declines in engagement among some user subsets.
You can imagine that if Facebook scrapped the algorithm, or was forced to change its approach in this respect, those declines would only worsen over time.
So it's unlikely that Zuck and Co. will be keen on this solution, which means a compromise, like the one suggested by Gillis, may be the best that can be hoped for. But that comes with its own drawbacks and risks.
Either way, it's worth noting that the focus of the debate needs to shift to algorithms more broadly, not Facebook alone, and to whether there's a truly viable, practical way to change the incentives around algorithm-based systems in order to reduce the distribution of more divisive elements.
Because that’s a problem, no matter how Facebook or anyone else tries to spin it, and that’s why Haugen’s position is so important, because it could be the spark that leads us to a new, more nuanced debate about this key element.