Rebecca Tennant

The Online Safety Bill: A New Era of Digital Communities

Last year, the St Andrews Law Review explored some of the potential and actual harms caused by content posted on social media platforms. That article highlighted the need for online content moderation consistent with free speech rights. The United Kingdom's proposed Online Safety Bill 2021 seeks to achieve this within a human rights framework.


Context


Technology reaches into our homes, to the dinner table, into family conversations, and through Twitter scrolling before bed. We are sometimes more connected to our phones than to people, and more intimate with apps than with real individuals. While we have allowed this familiarity to blossom, we have not always done enough to safeguard ourselves or others from harm, or to educate ourselves about online risks before it is too late.


For instance, the pandemic has highlighted both how much we rely on the digital world and the extent to which certain harms proliferate online. In a month-long period during the UK's national lockdown, the Internet Watch Foundation and its partners blocked at least 8.8 million attempts by UK Internet users to access videos and images of children suffering sexual abuse. Young people increasingly use social media to gather information on self-harm, and media reports suggest that "teen terrorism inspired by social media is on the rise".


Social media platforms and other large hosts of user-generated content have consequently come under intense scrutiny, prompting the government's Online Harms White Paper. These platforms have faced a wave of criticism for hosting illegal content and allowing it to "fester" online, in the words of the British Home Secretary, Priti Patel.


To unlock digital innovation, competition, and growth online, there needs to be consumer trust in technology. Current statistics and reports show that the UK has succeeded in digitalising much of its economy, but the corresponding protections and safeguards have not advanced at the same speed. Given the influence of tech companies in our lives, there must be a matching level of responsibility from those companies. Meanwhile, in response to criticism of mainstream platforms, new, more private, and/or encrypted platforms such as Telegram have sprouted up.


The Proposed Online Safety Bill


Digital Secretary Oliver Dowden and the Home Secretary released a press statement in December 2020 confirming the government's plans to introduce "proportionate yet effective" legislation to make the UK a pioneer in online safety. At first glance, the bill covers four key areas:


  1. the safety of children online,

  2. making popular platforms accountable to the Office of Communications (Ofcom) for tackling certain activities,

  3. increased measures to protect free speech, and

  4. an increased obligation on companies to protect users online.


It is important to distinguish between illegal content and legal but harmful content so that we can appreciate the extent of what the Online Safety Bill could achieve. An example of illegal content would be material inciting people to commit acts of terrorism. Conversely, an example of legal but harmful content would be bullying or misinformation around Coronavirus vaccinations. The consequences of both can be significant and devastating.


One aspect of the new legislation looks at the proportionality of the content and the harm it can potentially cause. According to the fact sheet, the legislation would define harmful content and activity as that which "gives rise to a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals". This victim-oriented approach is quite welcome.


Social media platforms and other companies that host user interaction must abide by the laws of the jurisdictions in which their users operate and in which their licences are registered. For some platforms, such as Twitter, this currently amounts to a general statement in the user policy, varying by jurisdiction, that all users must abide by their own domestic laws. In practice, however, Twitter only moderates content that falls within the categories defined in its own rules, which do not necessarily reflect local laws. For example, in Germany defamation is a crime, but Twitter will not remove defamatory tweets made by a German individual in Germany about a German person. The UK's position on the bill could be pivotal, paving the way for other countries to follow. This could result in a more cohesive approach, linking local laws with online rules more clearly and transparently through one neutral regulator.


This bill therefore follows in the footsteps of Australia by placing an obligation on companies to introduce frameworks that increase protections while simultaneously safeguarding freedom of expression. The European Commission has been considering some of the same issues through its proposed Digital Services Act, which also imposes due diligence obligations on online platforms.


Creating a Duty of Care


Another positive aspect of the Online Safety Bill is that it will commit companies to a duty of care covering both illegal content and legal but harmful content, as outlined in the government's fact sheet on the White Paper. For example, it will no longer be enough for companies to rely on a "report and take-down" approach to harmful tweets. A duty of care would oblige companies to take significantly more responsibility for the content their users see when that content falls within the defined scope of the bill. Companies will not be responsible for the generation of the content but for the way they address it.


Those impacted by the legislation include:

  • companies that host user-generated content accessed by users in the UK,

  • companies that facilitate public or private online interaction between service users (one or more of whom is in the UK), and

  • search engines.

A small group of companies with the largest online presence and high-risk features, likely to include Facebook, TikTok, Instagram, and Twitter, will fall into Category 1: services with high reach and a high potential threat level.


In April 2019, the UK government published its first White Paper, defining 27 types of online harms. Following a period of consultation, the categories have been redefined and condensed. According to a House of Commons Library briefing, the categories of harmful content would be set out in secondary legislation:

  • criminal offences (e.g. child sexual exploitation and abuse, terrorism, hate crimes, and the sale of illegal drugs and weapons),

  • harmful content and activity affecting children (e.g. pornography), and

  • harmful content and activity that is legal when accessed by adults, but which may be harmful to them (e.g. content about eating disorders, self-harm, or suicide).

The most popular social media sites will need to set and enforce clear terms and conditions, explicitly stating how they will handle content that is legal but could cause significant physical or psychological harm to adults.


Freedom of Speech


A duty of care related to content rightly generates some questions about freedom of speech. Free speech is a legislated right in all European Union member states, is upheld in the European Convention on Human Rights, and is embodied in many domestic laws, such as the UK's Human Rights Act 1998, which incorporates the Convention into UK law. Even where it has not been transposed into domestic legislation, the Convention binds its signatory states.


Just as domestic laws apply both online and offline, so do our rights. Richard Pursey, Group CEO and Co-Founder of the technology safety company SafeToNet, has said that "online safety is a fundamental human right". Further mechanisms are being put in place to maintain freedom of expression after concerns were raised by the Open Rights Group and Index on Censorship.


The government's response to the consultation has also stated that freedom of speech and media freedoms will be upheld and protected. It gives some practical examples of how this will be achieved, such as allowing users to appeal against the removal of their content.


The government is also working with the Law Commission on several areas, including whether the promotion of self-harm should be made illegal. The government says: "We will carefully consider using the online harms legislation to bring the Law Commission's final recommendations into law, where it is necessary and appropriate to do so". Our criminal law has continuously evolved over the last century to serve modern society. For it to continue to do so, it must become more applicable to the digital sphere.


Conclusion


The forthcoming legislation marks a pivotal moment for online safety, one that will hopefully mean social platforms are made safe by design. This action cannot come soon enough. As our lives become ever more digital, we are increasingly exposed to online threats. The bill appears to be heading very much in the right direction and placing emphasis on the right areas. The government has listened to the public and to professionals, and, as a result, the UK seems to stand on the cusp of a new, safer digital age. It remains to be seen how Ofcom will balance the reduction of harms with the protection of rights, but it is reassuring that this topic is at the forefront of the conversations surrounding the new legislation.
