The recent ban of Trump from Twitter fuelled the simmering debate about the regulation of social media platforms. Never before had Twitter, or any other company, banned such a major figure, but under intense social pressure and in the heat of the moment, it saw no option but its last resort. In a statement, Twitter pointed to two tweets by Trump which – according to the company – were “highly likely to encourage and inspire people to replicate the criminal acts that took place at the U.S. Capitol on January 6, 2021” and therefore constituted a violation of its Glorification of Violence policy.
While many applauded Twitter’s rigorous action, plenty of people simultaneously expressed concern about the trend of social media platforms banning users. Among them is German chancellor Angela Merkel, who stated plainly that she found the ban “problematic”. Through her spokesman, she told the German press that “this fundamental right [freedom of speech] can be intervened in, but according to the law and within the framework defined by legislators – not according to a decision by the management of social media platforms.” Germany itself has what can be considered one of the strictest laws on online hate speech, and Merkel indicated that the United States should follow this example.
The EU’s top diplomat, Josep Borrell, called the events of the 6th of January “the climax of very worrying developments happening globally in recent years. It must be a wake-up call for all democracy advocates”. France’s finance minister, Bruno Le Maire, and the Russian opposition leader Alexei Navalny also attacked the decision. Mr. Navalny wrote on Twitter that the ban “is a decision of people we don’t know in accordance with a procedure we don’t know based on emotions and based on personal political preferences.”
These statements highlight a fundamental disagreement between the United States and Europe on how social media should be regulated: by governmental law or by the internet companies themselves. Whichever party ends up responsible, it faces an incredibly difficult and important task. Social media has gained an undeniably large influence on the political landscape and on elections all over the world. Conspiracies, fake news and incitement to violence often lie in a grey area and are more complicated than they are portrayed. Such a nuanced matter – whether content is fake or incites violence – calls for equally nuanced solutions.
Deleting questionable content or banning users outright is not the only possible solution; other measures can be tried before resorting to it. The most obvious and simple one might be placing warnings on posts before the user can see certain content. Instagram already does this with all posts that contain some kind of information on Covid-19, adding a link to official government websites. Instagram likewise puts a warning over potentially disturbing photos or videos before the user can view them. Why not do the same with content in the grey area of fake news and violence-inciting posts? This would at least warn the user that certain content is highly questionable, lacks scientific evidence, or is dangerous or contentious.
Merkel’s spokesman said she had no objection to these kinds of warnings informing users that some information might be misleading. Content that is evidently fake or leads to violence can of course still be removed; removing content for specific reasons is already routine for most big tech companies. Facebook, for example, has a – rather controversial – office in Berlin that works solely on deleting inappropriate content from its platform (Tagesschau, 2017). This office employs 500 people, whose job is to scroll through the endless stream of posts and remove the inappropriate ones. Besides the companies monitoring their users’ behaviour, governments and other external actors also have a finger in the pie. To illustrate: according to research by Freedom House – an NGO that strives for a freer and more democratic world – 40 of the 65 governments investigated engaged in social media surveillance, including “the leader of the free world” and the People’s Republic of China. The result is a booming market for software developers specialised in surveillance. These facts may not come as a surprise, but they do undermine the worn-out argument that a warning policy is inconceivable for lack of oversight capacity.
Another problem with the current measures is that they put too much power in the hands of either the government or the major internet companies. A solution might be an independent organisation or committee – with no political or economic affiliations – that rules on the more ambiguous cases. Mr. Navalny, the Russian dissident, is also positive about creating such a committee, adding that “we need to know the names of the members of this committee, understand how it works, how its members vote and how we can appeal against their decisions.” The latter is crucial for the committee to function properly: a clear and transparent procedure ensures trust in, and acceptance of, its decisions.
While these solutions might solve part of the problem, we must also look at the core of the companies’ business models, since these partly fuel the issue. Most platforms provide their services “for free”, but we pay a rather high price through a side door. To understand this, consider how social media platforms generate income. Their biggest source of revenue is advertisements sold to third parties: in the case of Facebook, a study shows that in 2019, 98.5 (!) per cent of its revenue traced back to ads. The more engagement on a platform, the higher the price of advertising, which gives the tech companies an incentive to keep users online as long as possible. Here the psychology of the human brain comes into play. Research has shown that controversy spikes interest and – combined with anonymity and other factors embedded in the platforms’ design – generates higher engagement than civil posts do. This clarifies why misinformation and controversial posts are profitable for the platforms, and why they would rather look away than address the issue: “viral content pays”. A possible solution could be changing the profit model, shifting the centre of gravity from advertisements to paid subscriptions, to keep the companies lucrative while preserving their integrity.
The right to remove users from their platforms puts a great deal of power and responsibility in the hands of social media companies. We can no longer imagine an election, campaign or political party without social media. Rules that once applied so strictly to the traditional printing press are simply impossible to implement in the digital world. Nonetheless, given that social media increasingly influences the political landscape, and given that its market has a strongly oligopolistic structure dominated by a few major internet companies, it is all the more important that new, adapted regulations be institutionalized. Because while Trump’s ban may have been supported by many, the future will bring more contested cases – on the right, the left, the conservative and the progressive side of the political spectrum.
This article is written by Imke Bruining and Loisanne op ’t Land
Photo by Brett Jordan on Unsplash