Something strange is happening in American politics at this moment — a rare consensus between Democrats and Republicans on the need to reform Section 230 of the 1996 Communications Decency Act. The CDA concerns content moderation on the Internet, and its controversial Section 230 shields online platforms from liability for what their users post. Rather than being treated as publishers in the eyes of the law, they are viewed as hosts, a legal categorisation that allows social media platforms like Facebook and Twitter to avoid accountability for the posts and actions of their users.
The debate in Washington is as follows: politicians on both the left and the right agree that Section 230 must be reformed or abolished, but they disagree on the reasons why. Republicans believe that large technology firms are manipulating the law to censor conservatives and their opinions, while Democrats argue that too little moderation is allowing the spread of far-right conspiracy theories, hate speech, and disinformation.
Calls to reform Section 230 have sounded from all corners for years. But recent events — the tumult of the Trump presidency and the rise of disinformation concerning vaccines and election outcomes — have made it clear that government action and legal intervention are needed to ensure effective and ethical regulation of the Internet.
Section 230 means that Internet platforms are largely left to their own devices when it comes to content regulation. Over time, as their user bases have grown, they have come to act as their own moderators. Larger companies have developed their own moderation rules, but these vary widely between platforms and are open to interpretation by the moderators who apply them.
This past summer, when Donald Trump tweeted “when the looting starts, the shooting starts”, Twitter hid the tweet behind a banner stating that it violated the platform’s rules against glorifying violence. The same message posted on Facebook was left up without a content warning. Mark Zuckerberg, founder and CEO of Facebook, has defended his company’s policy of non-interference, arguing that “Facebook shouldn’t be the arbiter of truth of everything that people say online”.
Zuckerberg is right. A private commercial company like Facebook should not have the responsibility or the power to define what is and is not true, to decide what constitutes morality in online content, or to shape free speech norms to the extent that it currently does. That is for the law to do, and for these companies to abide by through a legally regulated standard. Without a clear industry standard or legal regulation, the definition and enforcement of moderation rules are left open to company bias and to arbitrary application and oversight.
The immunity provided by Section 230 is too broad. Social media companies in particular have come to play a large role in the spread of information, opinion, and news. This gives a small number of large technology firms too much power and influence over our online reality and the way we perceive the real world. The online discourse that circulates on these platforms has come to have a very real and very powerful impact on human action.
A BMJ Global Health study exploring the relationship between vaccine disinformation, social media campaigns, and public attitudes towards vaccine safety found that the prevalence of anti-vaccination propaganda increased the belief that vaccinations are unsafe. Donald Trump’s campaign to undermine the result of the 2020 United States presidential election by spreading fraudulent claims and conspiracy theories on social media likewise had the potential to erode faith in systems of government and in the democratic process itself.
Governments have started to put pressure on large social media companies to do more to combat the spread of offensive and potentially dangerous content. In 2016, the European Union agreed to a “Code of Conduct” with Facebook, Microsoft, Twitter, and YouTube, which committed these platforms to review and remove the most offensive content within 24 hours. In 2018, Congress passed a law that created an exception in Section 230 for platforms that knowingly assist or facilitate sex trafficking. Some viewed this move as setting a precedent for further immunity exceptions under the law.
Most recently, in December 2020, Senate Majority Leader Mitch McConnell attempted to tie a repeal of Section 230 to the higher stimulus payments proposed as part of the US government’s COVID-19 relief aid. There is growing political desire from all sides to tackle the issue of Section 230, and it will be interesting to see how tech companies and governments navigate the balance between freedom of expression and content regulation under the incoming Biden-Harris administration.
Large technology companies themselves are broadly in favour of keeping Section 230. They argue that its removal would stifle online discussion and free speech on the Internet because of heightened litigation risks. If held liable for what their users post, websites would be more likely to err on the side of caution, restricting content on their platforms and blocking or removing far more posts. Platforms newer and smaller than the Twitters and Facebooks of the Internet might fail without the protection of Section 230, unable to afford the cost of legal action brought against them because of the actions of their users.
Freedom of speech and expression is an important right that facilitates online discussion. However, companies must take responsibility for the communities they foster on their platforms. Their failure to curtail and regulate harmful discourse and misinformation translates into real and profound consequences for modern society. Technology firms must accept that the circulation of propaganda and the posting of false information and hate speech on their platforms have a powerful impact on our social and political landscape. Nuanced reform proposals that would create uniform legal standards for content moderation on the Internet represent potential solutions.
For example, a bill co-sponsored by Senators John Thune and Brian Schatz, Republican and Democrat respectively, seeks to require Internet firms to explain their content moderation policies to their users and to provide statistics on which items were removed or hidden. It would also require platforms to remove, within 24 hours, content that a court has determined to be unlawful. Another alternative, put forth by the University of Chicago’s Booth School of Business, would be to adopt a “quid pro quo” approach to Section 230, under which platforms would either take on greater content moderation duties or lose some of the protections the law has afforded them thus far.
The debate surrounding Section 230 has raised fundamental questions concerning the boundaries of freedom of speech online, the ways in which truth and reality can be manipulated and distorted in the rabbit hole of social media, and the power these platforms have in shaping the online narrative. Section 230 allowed a nascent Internet community to grow into a prolific network of blogs, websites, and platforms that established a global forum for discussion and expression. It was an important tool that protected developing Internet companies from the constraints of excessive litigation and allowed for innovation and growth.
However, it is now clear that social media has far-reaching social, political, and ethical consequences for the way in which reality is presented online, in turn influencing human thought and action. The power of misinformation on social media came to a dangerous and violent head in January 2021 in Washington DC, where months of unsubstantiated claims of election fraud and provocative rhetoric culminated in an insurrectionist riot at the Capitol Building that left five people dead. In response to the role Donald Trump played in provoking the storming of the Capitol, Twitter and several other social media platforms banned his accounts, citing violations of their rules prohibiting users from inciting violence. Perhaps it was the right decision: The Washington Post reported that mentions of election fraud on social media dropped 73 percent following President Trump’s suspension. However, it was also an editorial decision that raised more questions about how Internet platforms themselves should be regulated.
Technology is evolving faster than ever, and it is imperative that the law catch up. For online platforms and communities to remain sustainable and ethical in their influence, Section 230 needs further reform to hold platforms accountable for what their users post. It must provide a standardised framework for moderation guidelines that tackle harmful disinformation, fake news, and offensive speech on the World Wide Web.