
The British State’s Online Safety Act, which began life as the “Online Harms” Bill under freedom-loving liberal Boris Johnson and his champion-of-all-things-British government, is much in the news, generating a lot of debate and not a little fear. On its website, the Government says that “The Online Safety Act will help protect young people and clamp down on racist abuse online, while safeguarding freedom of expression”. Just recently, our revered leader, ex-DPP and lawyer, told a sceptical Donald Trump that he is proud of our free speech, of which, he says, we have plenty. Surely we can take him and his government at face value? Right?
On paper, the UK’s Online Safety Act 2023 (the Act) promises to usher in a safer, more civil internet. Framed as a well-intentioned legislative response to rising digital harms, especially to children, it hands sweeping new powers to Ofcom, which reassuringly heads its website introduction by declaring that “Network neutrality is the principle that you control what you see and do online, not the broadband provider who connects you to the internet”. It’s odd, therefore, that the Act compels tech companies to act as proactive gatekeepers of online content. Sceptics, numerous but, surely, paranoid, say that if you scratch beneath the surface you’ll find a raft of provisions that risk killing off free speech, outsourcing censorship to algorithms and eroding the open web in the name of safety.
So, let’s have a brief look at the Act and its likely consequences, intended or otherwise, for freedom of expression, democratic discourse, and the internet as a public place. The Act certainly imposes new legal obligations on tech companies, especially social media platforms and search services, to:
Prevent illegal content from being shared or accessible.
Shield children from “harmful” or “age-inappropriate” content.
Introduce effective age verification systems for access to adult content.
Empower Ofcom to enforce these duties with investigatory powers, fines, and enforcement actions.
As of 17 March 2025, platforms have been legally required to take down illegal content and to have systems in place that reduce the risk of such content appearing at all. From 25 July 2025, they are also required to implement safeguards for children, including blocking access to pornography and to “harmful” material, defined broadly to include content promoting self-harm, bullying, and dangerous challenges.
The Act does not just regulate what people do online; it fundamentally reshapes what people can say and see. It also marks a significant change in how speech is regulated: not by courts, not even by Parliament, but by private companies under the threat of punishment.
Yes, it is reasonable, and already long-standing law, that content such as child sexual abuse imagery, terrorist propaganda, or incitement to violence is prohibited. These are not protected forms of speech under UK law, and the internet should not be a safe haven for them. But the Act goes much further. It creates a class of “harmful” content that platforms must also act upon, even if the speech in question is not illegal. For adults, some of these duties were weakened during legislative revisions, but for children, the bar remains extremely low. Content that is distressing, offensive, or “inappropriate” for certain age groups may be targeted for restriction.
This is not just a child protection measure; it is a mechanism of speech control, and one with profound implications. Platforms, facing the threat of hefty fines from Ofcom (up to 10% of global turnover), are incentivised to err on the side of censorship. The result is growing pressure on companies to over-censor and to remove content just to be safe.
The Act mandates the use of “highly effective age assurance” tools to prevent children – defined in the Act as those under 18 – from accessing adult material. At first glance this seems sensible; nobody wants children stumbling upon pornography or suicide forums. But the practical effect is the end of anonymous browsing, at least for age-gated content.
To comply, platforms will be pushed to implement biometric ID checks, facial recognition, government ID uploads, or other invasive verification tools – a ticking time bomb, since it is a matter of when, not if, such information finds its way into criminal hands. This introduces a mass surveillance regime in which private users must prove their age and identity just to view or post content. The risk is twofold: privacy erosion, because the data gathered could be misused, leaked, or hacked; and access inequality, because those unwilling or unable to verify their age may be excluded from large parts of the internet, even for legal, harmless content.
Civil liberties groups like the Open Rights Group and Big Brother Watch have already warned that such measures could amount to a de facto ban on anonymous speech, a cornerstone of democratic engagement, particularly for whistleblowers, vulnerable minorities, and political dissidents.
At the heart of the Act is Ofcom, the UK’s media regulator, which now finds itself with extraordinary powers over online speech. Ofcom can investigate, fine, or restrict access to platforms that fail to meet their obligations under the Act. It can issue binding Codes of Practice, setting standards for how companies must moderate content, handle complaints, and design their services.
This fundamentally alters the British regulatory landscape. Ofcom, an organisation entirely outside democratic accountability, is now both rule-maker and enforcer, with discretion to define what “compliance” means in practice. While the Codes are not laws per se, non-compliance opens the door to legal action, effectively giving them the force of law. Critics have rightly raised concerns about accountability and transparency. Ofcom is not elected. It is not directly answerable to the public. Yet it now plays a pivotal role in determining what can be said online, by whom, and under what conditions.
Given the sheer scale of online content, companies will inevitably turn to AI moderation systems to comply with the Act. But such systems are notoriously imprecise, flagging satire, contextually appropriate material, or even legal speech as harmful or illegal. This is already happening. YouTube has demonetised videos discussing mental health; Facebook routinely removes posts about historical events misinterpreted by algorithms; X (formerly Twitter) often suspends users for quoting offensive language, even in protest.
The Act exacerbates these problems. The pressure to pre-emptively detect and remove “harmful” content leads platforms to adopt overly aggressive filters. These errors are not bugs; they are features of a compliance-first, freedom-last regime.
Once platforms embed the values of the Act into their community guidelines, the norms of online speech will shift from open discourse to risk-averse blandness. Speech that is provocative, edgy, controversial, or emotionally intense—hallmarks of free societies—may be removed. The chilling effect is real: knowing your posts are monitored not just by companies but by state-backed regulators leads people to self-censor.
Moreover, the UK’s approach could become a template for other liberal democracies. Already, Canada, Australia, and the EU have floated or implemented similar legislation. The danger is the emergence of a global censorship consensus, where speech is filtered not by what is legal or true, but by what is safe, appropriate, emotionally soothing – and of course acceptable to government and the Establishment that controls it.
As of 25 July 2025, platforms are now required to use age assurance to prevent access to a broad set of content deemed harmful to children, including pornography, content promoting suicide or eating disorders, bullying, hateful content, and more. Let us be charitable and agree that the intentions are noble, but we must also note that these provisions rest on vague definitions of harm and empower both platforms and regulators to act with impunity. The result is sweeping removal of content, aggressive age verification, and the effective erasure of whole categories of expression—not because they are illegal, but because they might cause "emotional harm."
In practice, educational material, artistic content, or even political commentary might be restricted for fear of violating compliance. We risk raising a generation not only shielded from risk, but from reality.
Free Speech Backlash was always intended to be a participatory forum dedicated to free speech. It has always stood for liberty and dissent. Under the Online Safety Act, you, the reader and participant, can play an active role in keeping FSB online and in shaping how this law is enforced.
Now there is no defence against maliciousness, but taking the law at face value, we need a set of guidelines, sadly lacking in the legislation. I’m not a lawyer, but I have worked with lawyers and barristers in cases involving possible breaches of contract or infringements of maritime law by, for example, shipowners. I was always told that the yardstick is what a reasonable person would expect a reasonable, reasonably competent shipowner, acting with due diligence, to do in the circumstances, and that seems to me a sensible approach to this legislation too.
I think we can all agree that we want nothing on FSB that can reasonably be regarded as harmful to children. In fact, I would go so far as to say that the best thing we can do to protect children in this country would be to reassure them that they are not about to be boiled, the climate change story doing the rounds being a hoax; that they are not toxic because they are boys; and that they have not inherited original sin because they have white skin – together with pressing the State to hold a genuine enquiry into the ‘grooming gangs’.
But that said, I do not think FSB meets the threshold of being ‘likely to be accessed by children’ – in fact, I think it very, very unlikely. As part of FSB’s ‘due diligence’ we have assessed the probable demographic of our readership and concluded that it is overwhelmingly mature, mostly middle-aged to post-retirement, and of robust mind. On the basis of the legislation, therefore, I do not think it necessary to introduce intrusive personal data scanning against the remote possibility that an under-age reader might chance upon our site. To expect anything different is, surely, unreasonable.
Likewise, FSB most definitely opposes incitements to violence and racial hatred. Your writer is married to a racially different woman and has mixed-race children. Inciting racial hatred is the last thing we want. We encourage integration and the formation of one big, happy, British-identifying family. In fact, I would say that the biggest promoter of racial antagonism is the State, with its perceived anti-whiteness and policies of mass immigration and multiculturalism. Being opposed to further mass immigration or multiculturalism is not stirring up racial hatred, but in my opinion the very opposite – and plenty in the ethnic minorities agree. We need their voices on our side.
But if you do spot a post or comment that you think takes criticism of mass immigration or immigrants too far, into incitement to violence or open hatred, use the complaints mechanism. For the record, in the eleven months we’ve been online, we have received no formal complaints. We’ve had a handful of emails about what was perceived as personal abuse, and these were all dealt with, mostly amicably.
But you, dear reader, can help police this and the other issues by spotting illegal content. If you come across content that clearly violates UK law – such as child exploitation, terrorist incitement, support for illegal immigration, or targeted threats – report it to us using the platform’s built-in tools, i.e. the Complaints function, and/or by email. By doing so you help us meet our legal duties and make it harder for the authorities to justify broader censorship.
I want to emphasise that you should be aware of, and make use of, the Complaints System. It is now one of our Terms & Conditions for accessing FSB that a user’s first approach, on seeing something they think might be illegal, must be to FSB via the complaints mechanism. Platforms are now legally required to offer a clear, timely complaints process. If you believe your content was unfairly removed, or that harmful content is present and has not been dealt with properly, file a complaint.
The Online Safety Act 2023 is a landmark law. But it is also a deeply flawed one. While its goal of protecting children and tackling illegal content is laudable, its methods are heavy-handed, opaque, and potentially dangerous.
It outsources censorship to corporations, incentivises over-removal, empowers an unelected regulator, and paves the way for intrusive surveillance under the guise of age verification. Its vague definitions of harm and safety may come to haunt – even harm – an entire generation raised under its regime. If free speech is the foundation of a free society, then "safety" must not become the Trojan horse for silencing it. There is a vast and vital difference between protecting citizens and infantilising them; between removing genuine threats and removing uncomfortable truths.
As readers of Free Speech Backlash, it is not enough to observe—we must act. Participate in enforcement in cases of true harm, but speak loudly and often where safety becomes censorship. Because once protection becomes prohibition, we may never get our voices back.
Please sign the petition to have this sinister legislation repealed – and vote Reform, as they have pledged to repeal it. https://petition.parliament.uk/petitions/722903
Coming soon, an article on the entirely coincidental new National Internet Intelligence Investigations Team.