Online Safety Bill – everything you need to know about UK safety legislation

As digital technologies continue to shape our daily lives, governments around the world are developing new rules to regulate online tools and platforms. While some of these regulations focus on how security software – like VPNs or antivirus services – handles user data, others seek to establish a legal framework for how content posted online is moderated.

The UK Online Safety Bill falls into the latter category. First published in draft form in 2021, the bill was presented to Parliament in a revised version in March 2022 to begin the review process.

Its objective is quite ambitious: making the UK the safest place in the world to be online. The bill aims to tackle a wide range of harmful content – with particular attention to protecting children – while holding tech giants accountable.

However, many commentators have warned that the tighter controls introduced by these new rules could end up undermining internet freedom. Free speech and end-to-end encryption appear to be the areas most at risk, according to civil liberties groups.

Here’s everything you need to know about the Online Safety Bill.

What is the Online Safety Bill?

Considered a “world first”, the Online Safety Bill is a sweeping piece of legislation that aims to regulate the digital space and protect internet users from harm online.

The bill introduces a “duty of care” for big tech companies, which will have to follow its regulations to ensure a safe environment for their users. This includes the responsibility to modify their terms and conditions to comply with the new guidelines, as well as to remove harmful content posted on their platforms.

Specifically, the law applies to user-generated platforms – these include social media such as Facebook and Twitter, online forums and messaging apps such as WhatsApp – as well as large search engines like Google.

A group of cubes all displaying social media logos

(Image credit: Shutterstock/Bloomicon)

What is the Online Safety Bill for?

As mentioned earlier, big tech companies will have a responsibility to protect users from harmful content. Their duties include:

  • Preventing the dissemination of illegal content by requiring organizations to remove it as soon as they become aware of it. Examples include messages and images related to child sexual abuse, terrorism, cyberflashing and content encouraging self-harm.
  • Protecting children by ensuring they are not exposed to inappropriate content online. This includes stricter age-verification processes for accessing certain websites – such as porn sites – and, in some cases, the requirement to monitor private chats for child sexual abuse material.
  • Safeguarding adults against “legal but harmful” content by removing it from their platforms. This rule applies to major social media platforms like Instagram (already in the spotlight for harming mental health) and TikTok. Although the specifics have yet to be defined, the categories are likely to include abuse, harassment, self-harm and eating disorders.
  • Tackling online fraud by forcing the biggest platforms to act against fraudulent paid advertisements published or hosted on their services.

The body responsible for ensuring these regulations are enforced is Ofcom, the UK communications regulator. Among other things, the regulator will have the power to gather information to support its investigations, as well as to take action to get companies to change their behavior.

In a major change from the draft version presented last year, the UK government has cut the implementation period from 22 months to just two – meaning companies will have just over eight weeks from royal assent to ensure they are fully compliant or face penalties. Company executives could face up to two years in prison if found guilty of obstructing an Ofcom investigation in any way.

In addition, companies that fail to comply with their responsibilities could face fines of up to £18 million or 10% of their annual worldwide turnover, whichever is higher.
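The "whichever is higher" rule means a firm's exposure scales with its size. As a rough illustration only (a sketch of the cap as described in this article, not an official calculation – the function name and figures used here are illustrative):

```python
def max_fine(annual_worldwide_turnover_gbp: float) -> float:
    """Maximum fine under the bill: the greater of GBP 18 million
    or 10% of annual worldwide turnover."""
    return max(18_000_000, 0.10 * annual_worldwide_turnover_gbp)

# A company with GBP 50m turnover hits the flat GBP 18m floor;
# one with GBP 1bn turnover is exposed to 10%, i.e. GBP 100m.
print(max_fine(50_000_000))     # → 18000000
print(max_fine(1_000_000_000))  # → 100000000.0
```

For the very largest platforms, the 10% figure dwarfs the £18 million floor, which is why the turnover-based cap is the number most often quoted.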

Overhead view of little boy sitting alone on sofa holding tablet feeling frustrated reading bad comments

(Image credit: Shutterstock)


The good…

The Online Safety Bill represents an important first step in trying to minimize a wide range of harms online – from online fraud and cyberbullying to child abuse – by making businesses more proactive and accountable in dealing with these problems.

In particular, the bill will legally require social media platforms and search engines to be more transparent with their users. Their terms and conditions will need to set out what types of legal content are and are not allowed, in a complete, clear and accessible way. This way, adults will be able to make informed decisions before joining a platform. Companies will also need to be more transparent in enforcing these terms.

In an attempt to defend freedom of expression and pluralism of voices, the bill clearly sets out the obligation of these platforms to protect journalism and democratic content.

To promote press freedom, all news outlets and individuals disseminating journalistic material online will be exempt from regulation under the bill. At the same time, to contribute to the British political debate, online platforms will have the duty to take into account the democratic importance of the content posted online while ensuring that any political opinions are respected.

…and the bad

The Online Safety Bill has drawn widespread criticism from individuals and civil liberties groups, who fear that these regulations will harm users’ privacy and freedom of expression.

Of particular concern is its “legal but harmful” content provision, which has the potential to drastically shape what we can see online. After conducting a legal analysis of the bill’s impact on free speech, the charity Index on Censorship concluded that it would “severely restrict” freedom of expression.

The vagueness over which categories count as legal but harmful, as well as the fact that politicians will have a say in what social media platforms should censor, has raised concerns among internet users. A petition against this provision has already gathered more than 50,000 signatures.

Provisions that require companies to actively monitor private chats for child abuse imagery or terrorist material have also been criticized. As with similar plans in the EU, critics fear this will effectively ban end-to-end encryption technology.

Many faces creating two large faces and a red pencil writing a red cross on a mouth

(Image credit: Shutterstock)

What’s next for the Online Safety Bill?

Currently pending in the House of Commons, the Online Safety Bill is expected to come into force by the end of the year. However, many believe this date could be pushed back further.

Regulating the online world is, in any case, a difficult task. Branded “illiberal and impractical” by critics, the new law will surely change the internet as we know it.

What is certain is that the stakes are high. Whether lawmakers manage to minimize harm online without restricting people’s free speech remains to be seen. For now, many doubts remain.
