A Comprehensive Analysis of the UK’s Online Safety Act 2023: Legislative Scope, Impact, and International Comparisons



Abstract

The Online Safety Act 2023 (OSA) stands as a landmark legislative instrument enacted by the United Kingdom, aiming to fundamentally reshape the regulation of online content and significantly bolster user safety. Receiving Royal Assent in October 2023, the OSA imposes far-reaching legal obligations on a diverse array of online service providers, mandating proactive measures to protect users from illegal and harmful content, with a particularly stringent focus on safeguarding children. This detailed research paper undertakes an exhaustive examination of the OSA’s intricate legislative framework, delving into its expansive scope, nuanced definitions, and the specific duties it places upon digital platforms. It further explores the Act’s profound impact across various digital services, including social media, messaging, and search engines, while also conducting a rigorous comparative analysis with seminal international regulations such as the EU’s Digital Services Act (DSA) and Australia’s Online Safety Act. The paper critically assesses the OSA’s tiered regulatory requirements, the myriad legal challenges it has encountered, the adaptive responses from the market, and its contentious extraterritorial reach, ultimately offering a comprehensive understanding of its immediate and projected implications for the global digital ecosystem and fundamental rights.


1. Introduction: The Imperative for Online Safety in a Digital Age

The digital revolution has ushered in an era of unprecedented connectivity, transforming the fabric of global communication, commerce, and information exchange. From instant messaging to vast social networks, online platforms have become indispensable conduits for human interaction and societal functioning. However, this rapid technological advancement, while offering immense benefits, has simultaneously fostered a complex landscape rife with significant challenges to online safety. The proliferation of illegal content, ranging from child sexual abuse material (CSAM) to terrorist propaganda, alongside pervasive issues such as cyberbullying, online harassment, misinformation, and various forms of exploitation, has underscored a pressing need for robust regulatory intervention. Public concern, often galvanized by tragic incidents of online harm, exerted considerable pressure on governments to act, prompting a global shift towards greater accountability for digital service providers.

In this context, the United Kingdom government embarked on a protracted legislative journey, culminating in the introduction of the Online Safety Bill. After extensive parliamentary debate, numerous amendments, and significant public consultation, the Bill received Royal Assent on 26 October 2023, officially becoming the Online Safety Act 2023. The Act articulates an ambitious vision: to make the UK ‘the safest place in the world to be online’ (gov.uk, 2023). This aspiration underpins the OSA’s comprehensive framework, which seeks to impose stringent duties of care on online service providers, compelling them to proactively mitigate and address a spectrum of online harms, thereby recalibrating the balance between digital innovation and user protection.

This paper aims to dissect the OSA, moving beyond a superficial overview to provide an in-depth understanding of its construction, implementation challenges, and anticipated consequences. It scrutinises the legislative intent, the practical implications for diverse online services, the regulatory machinery established for its enforcement, and the broader international context in which it operates. By examining its core tenets, the evolving legal landscape, and the responses from industry and civil society, this analysis endeavours to offer a holistic perspective on one of the world’s most comprehensive pieces of internet regulation.


2. Legislative Framework of the Online Safety Act 2023: Architecture and Mandates

2.1 Genesis and Evolution of the Legislation

The journey towards the Online Safety Act was neither swift nor straightforward. The groundwork for the OSA was laid by the UK government’s 2019 Online Harms White Paper, which identified a wide array of online harms and proposed a new regulatory framework. This paper initiated a protracted period of policy development, public consultation, and parliamentary scrutiny, reflecting the inherent complexities and sensitivities involved in regulating the internet. Key debates during the Bill’s passage through Parliament revolved around the scope of ‘legal but harmful’ content for adults, the protection of freedom of expression, the practicality of age verification, and the contentious issue of encryption. The final Act reflects numerous compromises and refinements, seeking to balance the government’s safety objectives with concerns raised by industry, civil liberties groups, and cross-party parliamentarians (parliament.uk, 2023).

2.2 Scope and Definitional Precision

The effectiveness and reach of the OSA are fundamentally determined by its carefully crafted definitions, which delineate the types of services and content falling within its regulatory ambit.

2.2.1 User-to-User Services (U2U) and Search Services

The OSA casts a wide net, defining ‘user-to-user services’ as internet services that allow users to generate, upload, or share content that can be encountered by other users. This broad classification is critical, encompassing a vast and evolving ecosystem of digital platforms. Examples include, but are not limited to, major social media networks (e.g., Facebook, X, Instagram), online forums and discussion boards, messaging applications that permit public or group content sharing (e.g., WhatsApp and Telegram, insofar as they offer such features), video-sharing platforms (e.g., YouTube, TikTok), online dating services, gaming platforms with interactive communication features (e.g., in-game chat, voice channels), and even online marketplaces that facilitate user-generated reviews or direct peer-to-peer communication. The defining characteristic is the enablement of user-generated content visible or accessible to other users.

Crucially, the Act also extends its scope to ‘search services’, encompassing search engines that enable users to find content and information online. Their duties primarily revolve around preventing illegal content from appearing prominently in search results and ensuring transparency regarding their content handling practices.

2.2.2 Defining Harmful Content Categories

The Act distinguishes between several critical categories of harmful content, each with specific regulatory implications:

  • Illegal Content: This category forms the bedrock of the Act’s content duties. It refers to content that constitutes a criminal offence under UK law. The OSA explicitly lists numerous such offences, providing a robust legal basis for content removal. Examples include: child sexual abuse material (CSAM), terrorism content (incitement, glorification), hate crime offences, fraud, promotion or facilitation of self-harm, incitement to violence, revenge porn, controlling or coercive behaviour, and harassment. The Act also criminalised ‘cyberflashing’ – the unsolicited sending of explicit images – with the first conviction occurring in March 2024 (apnews.com, 2024). Service providers are expected to understand the nuances of these criminal offences and implement systems to detect and remove them promptly.

  • Content Harmful to Children: Recognising the unique vulnerability of minors, the OSA imposes particularly stringent duties concerning content harmful to children. This includes, but is not limited to, pornography, content encouraging self-harm, suicide, or eating disorders. For these specific categories, platforms are mandated to prevent children from encountering them, often necessitating robust age verification or age-gating mechanisms. The Act defines a ‘child’ as anyone under the age of 18, aligning with international children’s rights conventions. The definition of ‘harm’ in this context is broad, encompassing psychological and physical harm, and platforms must conduct assessments to identify such risks.

  • ‘Legal but Harmful’ Content for Adults (Historical Context): Initially, the Online Safety Bill aimed to regulate ‘legal but harmful’ content for adults. This proposal faced significant opposition, with critics arguing it would lead to censorship, chill freedom of expression, and be practically impossible to define and regulate without infringing on fundamental rights. Consequently, this aspect was substantially scaled back. The final Act primarily focuses on illegal content for adults and specific harms to children. However, platforms retain obligations to enforce their own terms of service regarding content deemed harmful but not illegal, and to provide adults with tools to filter content they do not wish to see. This represents a nuanced approach, shifting the burden of defining and moderating ‘legal but harmful’ content for adults back to platform policies and user choice, rather than direct government mandate (en.wikipedia.org, 2023).

2.3 Core Duties of Online Service Providers: A Comprehensive Framework

The OSA imposes a series of interconnected duties designed to embed online safety into the operational fabric of digital services:

2.3.1 Risk Assessment and Mitigation

A foundational duty is the requirement for online service providers to conduct thorough risk assessments of their services. This involves identifying potential risks of illegal content appearing on their platforms and the risk of children encountering harmful content. These assessments must be specific, regular, and consider the design, functionality, and user base of the service. Based on these assessments, platforms must implement ‘proportionate systems and processes’ to mitigate identified risks. These systems can include, but are not limited to, advanced artificial intelligence (AI) and machine learning (ML) tools for automated content detection, robust human moderation teams, proactive content filtering, and innovative safety-by-design principles incorporated into platform architecture.

2.3.2 Duties Regarding Illegal Content

For all categories of services falling under the Act, there is a clear and unequivocal duty to combat illegal content. This involves:

  • Proactive Measures: Implementing systems to reduce the risk of illegal content appearing or being shared on their services in the first place, such as upload filters for known CSAM or terrorist material, or proactive scanning of public content (a minimal hash-matching sketch follows this list).
  • Reactive Measures: Taking down illegal content promptly upon becoming aware of its presence. This necessitates efficient user reporting mechanisms, effective internal flagging systems, and rapid response protocols to remove, block access to, or disable illegal material. The expectation is for a swift, decisive response, often within hours for severe harms.
  • Reporting to Law Enforcement: A statutory duty to refer identified CSAM to the National Crime Agency (NCA) and other illegal content to relevant law enforcement agencies.
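
To illustrate the kind of ‘proactive measure’ referred to in the first bullet above, the following minimal Python sketch checks the cryptographic hash of an uploaded file against a blocklist of hashes of previously identified illegal material. It is an assumption-laden simplification: real deployments typically rely on shared industry hash databases and perceptual hashing rather than the exact-match approach shown, and every name here is illustrative.

```python
import hashlib

# Illustrative blocklist: hex SHA-256 digests of files previously confirmed
# as illegal (real systems use shared industry hash databases, not a literal set).
KNOWN_ILLEGAL_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of(file_bytes: bytes) -> str:
    """Return the hex SHA-256 digest of an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()

def screen_upload(file_bytes: bytes) -> str:
    """Block an upload whose hash matches known illegal material; otherwise
    let it proceed to the service's normal publication checks."""
    if sha256_of(file_bytes) in KNOWN_ILLEGAL_HASHES:
        # A real system would also quarantine the file and generate a report
        # to the relevant authority (e.g., the NCA in the case of CSAM).
        return "blocked"
    return "allowed"

print(screen_upload(b"example upload"))  # -> allowed
```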

2.3.3 Child Protection Duties

Protecting children online is a central pillar of the OSA. Service providers must:

  • Prevent Access to Harmful Content: Implement measures to prevent children from encountering content that is harmful by default for minors, such as pornography or content promoting self-harm, suicide, or eating disorders. This often requires age assurance or age verification systems.
  • Design Services Safely for Children: Ensure that services likely to be accessed by children are designed with appropriate safety measures and default settings. This includes age-appropriate content curation, robust parental controls, and stringent privacy settings for minors.
  • Enforce Age Limits: Where a service has an age limit, take proportionate steps to prevent children below that age from accessing it.

2.3.4 Terms of Service Duties

Service providers are required to clearly communicate their terms of service (ToS) to users, outlining what content and behaviour are prohibited on their platforms. Critically, they must then consistently enforce these ToS and explain how they will do so. This duty aims to foster transparency and predictability in content moderation, giving users a clear understanding of platform rules.

2.3.5 User Reporting and Redress Mechanisms

The Act mandates that platforms provide users with clear, accessible, and effective ways to report problems online, including illegal content or content harmful to children. Furthermore, platforms must establish effective complaints and appeals procedures for users whose content has been removed or accounts restricted, ensuring a degree of due process and accountability in moderation decisions.
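
As an indication of the record-keeping these reporting and redress duties imply, the sketch below models a single user report and its lifecycle as a small data structure. The field names and status values are assumptions chosen for illustration; they are not terms drawn from the Act or from any platform’s actual systems.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class UserReport:
    """Minimal record of a user complaint and any subsequent appeal."""
    content_id: str
    reporter_id: str
    reason: str                           # e.g. "illegal" or "harmful-to-children"
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    decision: Optional[str] = None        # "removed", "restricted" or "no-action"
    appeal_outcome: Optional[str] = None  # set only if the user appeals

    def decide(self, decision: str) -> None:
        self.decision = decision

    def appeal(self, outcome: str) -> None:
        # An appeals route lets users contest removals or restrictions.
        self.appeal_outcome = outcome

report = UserReport(content_id="post-123", reporter_id="user-42", reason="illegal")
report.decide("removed")
report.appeal("upheld")
print(report.decision, report.appeal_outcome)  # removed upheld
```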

2.3.6 Transparency and Accountability

Platforms are required to publish regular transparency reports detailing their efforts to combat online harms, including statistics on content removals, user reports, and proactive detection. This data is intended to provide greater public and regulatory oversight of platforms’ safety performance.

2.4 Enforcement, Oversight, and Penalties: Ofcom’s Regulatory Might

2.4.1 Ofcom’s Mandate and Powers

Ofcom, the UK’s independent communications regulator, has been designated as the online safety regulator under the OSA. Its mandate is extensive, granting it significant powers to oversee compliance and enforce the Act. Ofcom’s responsibilities include assessing and enforcing providers’ adherence to their safety duties, investigating non-compliance, and imposing sanctions.

To facilitate compliance, Ofcom is tasked with issuing detailed ‘codes of practice’. These codes will provide practical guidance and outline steps that providers can take to fulfil their statutory safety duties. They will cover areas such as age verification, content moderation processes, risk assessment methodologies, and user reporting mechanisms. While the codes are not legally binding in themselves, failure to adhere to them can be used as evidence of non-compliance with the overarching duties of the Act (UK Online Safety Act 2023, lw.com).

2.4.2 Financial Penalties and Service Restrictions

Non-compliance with the OSA carries severe penalties. Ofcom has the authority to impose fines of up to £18 million or 10% of the provider’s global annual turnover, whichever figure is higher. For large multinational tech companies, this could amount to billions of pounds, serving as a powerful deterrent. In cases of persistent or egregious non-compliance, particularly for illegal content, Ofcom is empowered to seek court orders to block access to particular websites or services within the UK (en.wikipedia.org, 2023). This site-blocking power is a significant escalatory measure, raising concerns about its potential impact on internet freedom and its effectiveness against determined non-compliant actors.
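
As a simple arithmetic illustration of this penalty ceiling, the maximum fine is the greater of £18 million and 10% of global annual turnover. The sketch below merely computes that ceiling for a hypothetical turnover figure.

```python
def max_osa_fine(global_annual_turnover_gbp: float) -> float:
    """Upper bound on an OSA fine: the greater of GBP 18 million or
    10% of global annual turnover (illustrative calculation only)."""
    return max(18_000_000.0, 0.10 * global_annual_turnover_gbp)

# For a hypothetical provider with GBP 80bn in global turnover, the
# ceiling is GBP 8bn, far above the GBP 18m floor.
print(f"{max_osa_fine(80_000_000_000):,.0f}")  # 8,000,000,000
```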

2.4.3 Executive Liability

Further demonstrating the Act’s teeth, senior managers of non-compliant companies can face criminal charges, including imprisonment, if they consistently fail to comply with Ofcom’s information requests or obstruct its investigations. This provision aims to ensure accountability at the highest levels of corporate governance, compelling executives to take online safety seriously and integrate it into their business strategies.


3. Impact on Various Digital Services and Technologies

The OSA’s broad scope ensures that its impact reverberates across nearly every segment of the digital landscape, necessitating significant operational and technological adjustments for diverse service providers.

3.1 Social Media Platforms

Social media platforms, due to their vast user bases, reliance on user-generated content, and widespread influence, are arguably the most heavily impacted by the OSA. They face multifaceted challenges:

  • Content Moderation at Scale: The sheer volume of content uploaded daily (billions of posts, images, videos) presents an immense challenge. Platforms must implement robust and sophisticated content moderation systems, combining advanced AI/ML algorithms with large teams of human moderators. This requires substantial investment in technology, multilingual capabilities, and mental health support for moderators exposed to harmful content (a schematic triage example follows this list).
  • Age Verification and Gating: The duty to protect children from harmful content necessitates effective age verification or age-gating mechanisms. This is particularly challenging for platforms designed for general audiences. Methods range from self-declaration with AI-backed estimation to third-party ID verification or biometric scanning. Each method carries its own issues concerning accuracy, user friction, privacy implications, and the potential for circumvention, such as users exploiting photo modes in video games to pass facial verification, as noted by some critics (lemonde.fr, 2025).
  • Balancing Freedom of Expression and Safety: A constant tension arises between the Act’s safety mandates and the protection of freedom of expression. Critics express concerns that platforms, facing severe penalties, may adopt overly cautious moderation policies, leading to the ‘over-removal’ or ‘chilling effect’ on legitimate speech. Platforms must meticulously craft and enforce their terms of service to reflect the OSA’s requirements while upholding principles of free speech, which is a delicate balance.
  • Transparency and Reporting: Social media platforms will be required to publish detailed transparency reports on their content moderation activities, including data on proactive detections, user reports, content removals, and the reasons for such actions. This level of granular reporting aims to enhance accountability but also presents an administrative burden.
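
To illustrate the AI-plus-human moderation pattern described in the first bullet above, the schematic sketch below routes content according to a classifier’s confidence score: high-confidence violations are removed automatically, borderline cases are queued for human review, and the remainder is left untouched. The thresholds and the toy classifier are placeholders, not a description of any specific platform’s system.

```python
from typing import Callable

def triage(text: str,
           classify: Callable[[str], float],
           remove_threshold: float = 0.95,
           review_threshold: float = 0.60) -> str:
    """Route content based on a model's estimated probability that it
    violates policy; both thresholds are illustrative."""
    score = classify(text)
    if score >= remove_threshold:
        return "auto-remove"      # high-confidence violation
    if score >= review_threshold:
        return "human-review"     # uncertain case: escalate to a moderator
    return "keep"                 # likely benign

# Toy stand-in for a trained classifier (a keyword heuristic, for the demo only).
def toy_classifier(text: str) -> float:
    return 0.97 if "banned-term" in text else 0.10

print(triage("hello world", toy_classifier))                 # keep
print(triage("this contains banned-term", toy_classifier))   # auto-remove
```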

3.2 End-to-End Encrypted Messaging Services

End-to-end encrypted (E2E) messaging services face perhaps the most contentious challenges under the OSA, particularly concerning the detection of CSAM and terrorism-related content.

  • The Encryption Dilemma: E2E encryption is designed to ensure that only the sender and intended recipient can read messages, rendering content inaccessible to the service provider and third parties. This strong privacy protection fundamentally conflicts with the Act’s explicit requirement for platforms to scan for illegal content, particularly CSAM. Critics argue that any attempt to scan content on an E2E encrypted service would inherently compromise the encryption, creating a ‘backdoor’ that could be exploited by malicious actors or authoritarian regimes, thereby undermining global digital security and privacy.
  • Technical Feasibility Clause: The government has acknowledged this dilemma by including a clause stating that Ofcom will not exercise its power to mandate scanning on E2E encrypted services until it is ‘technically feasible’ to do so ‘without undermining the security and privacy of encrypted communications’ (UK Online Safety Act 2023, lw.com). This clause has been met with skepticism by privacy advocates, cryptographers, and tech companies, who argue that no such ‘technically feasible’ solution exists that would not fundamentally weaken encryption. Proposals like client-side scanning (scanning content on a user’s device before encryption) have been widely criticised as a form of mass surveillance that could lead to widespread privacy infringements.
  • Impact on User Trust and Global Standards: The OSA’s stance on encryption has drawn significant international attention. Major encrypted messaging providers have voiced strong opposition, with some threatening to withdraw services from the UK if forced to implement scanning capabilities that compromise encryption. This aspect of the Act could set a precedent that influences global debates on encryption and digital rights, potentially fragmenting the internet and leading to a ‘race to the bottom’ on privacy.

3.3 Search Engines

Search services are included in the Act’s scope, with specific duties related to illegal content:

  • Preventing Illegal Content in Results: Search engines are required to implement systems to prevent illegal content, particularly CSAM and extremist material, from appearing in their search results. This involves ongoing algorithmic adjustments and prompt removal of indexed illegal content upon notification.
  • Transparency: They must also publish information about how they handle harmful content, ensuring greater accountability for the discoverability of such material online.

3.4 File-Sharing and Cloud Storage Services

Platforms enabling users to store and share files, such as cloud storage providers and peer-to-peer file-sharing services, are also subject to the OSA:

  • Identification and Removal of Illegal Content: They must implement measures to detect and remove illegal content, especially CSAM, from their platforms. This often relies on hashing technologies and automated scanning of uploaded files.
  • Reporting Mechanisms: Clear and accessible reporting tools are necessary for users who encounter illegal material shared via these services.

3.5 Online Gaming Platforms

Interactive online gaming environments, particularly those with in-game chat, voice communication, and user-generated content features, fall under the U2U definition:

  • Moderation of In-game Abuse: Platforms must moderate hate speech, cyberbullying, grooming, and other forms of online abuse occurring within their gaming communities.
  • Age-Appropriate Content and Interaction: Ensuring that game content, communication features, and user interactions are suitable for the age groups playing and taking steps to prevent age-inappropriate interactions or exposure to harmful content for children.

3.6 Adult Content Platforms

Providers of commercially available pornography and other adult content face specific, stringent requirements:

  • Mandatory Age Verification: The OSA mandates robust age verification for sites hosting pornography. This crucial provision, which began phased implementation, requires platforms to use reliable methods to ensure users are 18 or over, such as third-party age verification services, biometric checks, or government-issued ID checks (lemonde.fr, 2025). The challenges lie in maintaining user privacy, ensuring accessibility, and preventing circumvention (a privacy-preserving attestation pattern is sketched after this list).
  • Removal of Illegal Content: Standard duties regarding the removal of illegal content, such as CSAM, also apply.
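
To show how a third-party age check might be integrated without the adult-content site ever handling identity documents, the hypothetical sketch below has the site accept only a signed ‘over 18’ attestation from an external age-assurance provider. The HMAC-based token format and the shared-secret arrangement are illustrative assumptions; a production design would more plausibly use public-key signatures issued by an accredited provider.

```python
import base64
import hashlib
import hmac
import json

SHARED_SECRET = b"demo-secret"  # stand-in for the provider's signing key

def issue_attestation(over_18: bool) -> str:
    """Hypothetical age-assurance provider: returns a signed claim only,
    never the user's identity document or date of birth."""
    claim = json.dumps({"over_18": over_18}).encode()
    signature = hmac.new(SHARED_SECRET, claim, hashlib.sha256).hexdigest()
    return base64.b64encode(claim).decode() + "." + signature

def verify_attestation(token: str) -> bool:
    """Adult-content site: admit the user only if the signed claim verifies."""
    claim_b64, signature = token.split(".")
    claim = base64.b64decode(claim_b64)
    expected = hmac.new(SHARED_SECRET, claim, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected) and json.loads(claim)["over_18"]

token = issue_attestation(over_18=True)
print(verify_attestation(token))  # True
```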


4. Comparative Analysis with International Regulatory Frameworks

The UK’s Online Safety Act does not exist in a vacuum. It is part of a growing global trend towards greater regulation of online platforms, sharing common objectives and divergent approaches with similar legislation in other jurisdictions.

4.1 EU’s Digital Services Act (DSA)

The European Union’s Digital Services Act (DSA), which became fully applicable in early 2024, is arguably the most significant contemporary counterpart to the OSA. Both laws represent pioneering efforts to hold online platforms accountable for content, but they exhibit important similarities and critical differences.

4.1.1 Similarities

  • Shared Objectives: Both the OSA and the DSA aim to enhance user safety, combat illegal content, increase platform accountability, and promote transparency in content moderation. Both seek to create a safer digital environment.
  • Tiered Approach: Both regulations adopt a tiered approach, imposing more stringent obligations on larger platforms with significant reach and systemic risk. The DSA’s Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) are analogous to the OSA’s Category 1 services.
  • Risk Assessments: Both mandate platforms to conduct comprehensive risk assessments of their services to identify and mitigate potential harms.
  • Content Moderation and User Rights: Both require platforms to establish clear content moderation policies, provide accessible reporting mechanisms, and offer effective appeal processes for users whose content has been removed.
  • Transparency Reports: Both laws compel platforms to publish regular transparency reports on their content moderation activities and safety measures.
  • Extraterritorial Reach: Both apply to services operating in their respective jurisdictions, regardless of the provider’s physical location.

4.1.2 Key Differences

  • Scope of Services: The DSA has a broader scope, covering a wider range of ‘intermediary services’, including ‘mere conduit’ services (e.g., ISPs), ‘caching’ services, and ‘hosting’ services, in addition to online platforms. The OSA primarily focuses on ‘user-to-user’ services and search engines.
  • Definition of Harm: The DSA’s primary focus is on ‘illegal content’ as defined by EU or national law. While it indirectly addresses some harms, it largely avoids a direct mandate on ‘legal but harmful’ content for adults. The OSA, particularly for children, has a more expansive definition of ‘harmful content’ (e.g., content encouraging self-harm for children) that goes beyond what is strictly illegal. The OSA also criminalises specific harms like cyberflashing.
  • Encryption: This is perhaps the most significant divergence. The DSA largely respects and does not directly challenge end-to-end encryption. The OSA, with its powers to mandate scanning for CSAM on E2E encrypted services (albeit with the ‘technical feasibility’ caveat), stands in stark contrast and has generated significant international privacy concerns.
  • Enforcement Structure: The DSA establishes a network of Digital Services Coordinators in each Member State, with the European Commission having direct oversight for VLOPs and VLOSEs. The OSA centralizes regulatory power in a single body, Ofcom, for all covered services.
  • Geopolitical Context: The DSA is a single market regulation, harmonizing rules across 27 Member States. The OSA is a national regulation post-Brexit, reflecting the UK’s independent regulatory path.
  • Crisis Response Mechanism: The DSA includes a crisis response mechanism for widespread dissemination of illegal content or systemic risks, which the OSA does not explicitly mirror in the same way.

For global platforms operating in both jurisdictions, navigating these differences can be complex, potentially requiring tailored compliance strategies for each market or leading to the adoption of the strictest common denominator.

4.2 Australia’s Online Safety Act (2021)

Australia’s Online Safety Act (2021) also provides a comparative lens, sharing some philosophical underpinnings with the OSA, particularly its strong emphasis on child safety.

  • Similarities: Both acts establish a dedicated online safety regulator (Australia’s eSafety Commissioner), empower the regulator to issue content removal notices, and focus heavily on the protection of children from online harms, including CSAM and cyberbullying.
  • Key Differences: The Australian Act is more directly focused on ‘abhorrent violent material’ and specific types of online abuse. It includes a proactive cyber-bullying scheme for children, enabling the eSafety Commissioner to order the removal of cyberbullying content directed at an Australian child. While it addresses illegal content, its overall scope and specific duties differ from the comprehensive nature of the OSA for all online harms.

4.3 Other International Approaches

  • Germany’s NetzDG (Network Enforcement Act): An earlier regulatory effort, NetzDG (enacted in 2017 and fully in force from 2018) required social media platforms to swiftly remove ‘manifestly unlawful’ content, primarily hate speech, within strict timeframes. It was an influential precursor to broader online safety regulations but was narrower in scope than the OSA or DSA.
  • Ireland’s Online Safety and Media Regulation Act (OSMRA): Similar to the UK and EU, Ireland has also enacted legislation creating a new online safety regulator (Coimisiún na Meán) to address online harms, particularly for children, reflecting the broader European regulatory trend.
  • United States: The US approach to online content regulation has historically been distinct, primarily relying on Section 230 of the Communications Decency Act, which grants broad immunity to platforms from liability for third-party content. While there are ongoing debates about reforming Section 230 and imposing more accountability, the US generally lacks a comprehensive, overarching online safety law akin to the OSA or DSA, instead addressing specific harms through sector-specific legislation or content-based laws.

These international efforts highlight a global trend towards greater online accountability, but also the challenges of achieving regulatory harmonization in a globalized digital space, with each jurisdiction tailoring its approach to national legal traditions and specific societal concerns.


5. Tiered Regulatory Requirements and Compliance Obligations

The OSA adopts a sophisticated tiered approach to regulation, recognizing that a one-size-fits-all model would be disproportionate and impractical for the diverse range of online services. This tiered system aims to align regulatory obligations with the size, functionality, and risk profile of each service, ensuring that smaller platforms are not unduly burdened while larger, higher-risk platforms face the most stringent requirements.

5.1 Classification of Services

Ofcom is responsible for designating services into specific categories based on several criteria:

  • User Numbers: The number of UK users is a primary factor. Platforms with a significant number of UK users will face higher obligations.
  • Functionality and Content: The nature of the service, particularly whether it hosts user-generated content or allows public interaction, and the types of content typically found on the platform, will influence its risk assessment.
  • Systemic Risk: Ofcom will assess the potential for a service to cause serious harm, considering its design, algorithms, and reach.

5.1.1 Category 1 Services (Highest Risk)

These are the largest platforms, with the most significant user bases and the highest potential for spreading harmful content; they are broadly analogous to the ‘Very Large Online Platforms’ (VLOPs) and ‘Very Large Online Search Engines’ (VLOSEs) designated under the EU’s DSA. Examples include major social media platforms (e.g., Facebook, X, Instagram, TikTok) and prominent search engines (e.g., Google, Bing).

Obligations for Category 1 Services:

  • Most Stringent Duties: Subject to the highest level of regulatory scrutiny and the most extensive obligations.
  • Comprehensive Risk Assessments: Required to conduct detailed, regular, and granular risk assessments encompassing all types of illegal content and content harmful to children.
  • Robust Content Moderation: Mandated to implement highly sophisticated content moderation systems, including advanced AI/ML tools, large human moderation teams, and proactive detection mechanisms.
  • Age Verification/Gating: Strict requirements for age verification to prevent children from accessing harmful content.
  • Transparency: Extensive transparency reporting duties, including detailed statistics on content removals, user reports, and proactive detection rates.
  • Terms of Service Enforcement: Must clearly articulate and rigorously enforce their terms of service regarding prohibited content.
  • Safety-by-Design: Expected to integrate safety considerations into the very design and development of their services.

5.1.2 Category 2 Services (Medium Risk)

These services have substantial user interaction but are generally smaller in scale or pose a lower systemic risk compared to Category 1. Examples might include smaller social networks, online forums, certain dating apps, or specific video-sharing platforms.

Obligations for Category 2 Services:

  • Significant Duties: Subject to substantial obligations, tailored to their specific risk profile.
  • Tailored Risk Assessments: Required to conduct proportionate risk assessments focused on their specific functionalities and potential harms.
  • Child Protection Measures: Must implement specific measures to protect children from harmful content, potentially including age assurance or content filtering.
  • Illegal Content Mitigation: Robust duties to address illegal content, including effective reporting and removal mechanisms.

5.1.3 Category 3 Services (Lower Risk)

These are smaller user-to-user services with limited user bases or functionalities that pose a lower risk of harm. Examples could include niche online communities, small discussion forums, or local noticeboard services.

Obligations for Category 3 Services:

  • Basic Obligations: Primarily subject to fundamental duties related to illegal content.
  • Clear Reporting: Must provide accessible mechanisms for users to report illegal content.
  • Prompt Removal: Duty to remove illegal content once identified.

5.1.4 Non-Designated Services

It is important to note that even services not formally designated into these categories still retain a fundamental legal duty to remove illegal content if they host it. The tiered system primarily dictates the extent of proactive measures, risk assessment, and transparency required.

5.2 Compliance Obligations and Best Practices for Providers

To meet their obligations, providers across all tiers must consider a range of strategies:

  • Integration of Safety by Design: Embedding safety considerations into the initial design and development phases of new products and features, rather than retrofitting them.
  • Investment in Safety Technologies: Deploying advanced AI/ML tools for automated detection of specific harms (e.g., CSAM matching, hate speech detection), coupled with human oversight and continuous model training.
  • Robust Human Moderation: Building and supporting diverse, well-trained human moderation teams capable of handling complex content, cultural nuances, and language variations. Providing mental health support for moderators is crucial.
  • Clear and Enforceable Terms of Service: Developing unambiguous ToS that explicitly prohibit illegal and harmful content, and consistently applying these rules across the platform.
  • Effective User Reporting Systems: Creating intuitive, easy-to-use reporting tools for users to flag problematic content or behaviour, with transparent feedback mechanisms.
  • Age Assurance Solutions: Implementing proportionate and privacy-preserving age verification or age-gating technologies where required, for example, through third-party services, biometric analysis, or robust self-declaration systems.
  • Transparency and Auditing: Regularly publishing detailed transparency reports and preparing for potential audits of their systems and processes by Ofcom (a simple report-aggregation example follows this list).
  • Collaboration with Law Enforcement: Establishing clear channels and protocols for cooperating with law enforcement agencies on requests for user data related to illegal content and criminal investigations.
  • Data Protection Impact Assessments (DPIAs): Ensuring compliance with GDPR and other data protection laws, especially when collecting age verification data or implementing new content moderation technologies that process user data.
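
Finally, much of the transparency-reporting duty reduces to aggregating moderation decisions into publishable statistics. The sketch below tallies a handful of hypothetical moderation actions by harm category and detection route; the categories and field names are assumptions for illustration only.

```python
from collections import Counter

# Hypothetical log of moderation actions: (harm_category, detection_route).
actions = [
    ("csam", "proactive"),
    ("hate_speech", "user_report"),
    ("hate_speech", "proactive"),
    ("fraud", "user_report"),
]

def transparency_summary(action_log):
    """Count removals by harm category and by how they were detected."""
    by_category = Counter(category for category, _ in action_log)
    by_route = Counter(route for _, route in action_log)
    return {
        "removals_by_category": dict(by_category),
        "removals_by_detection_route": dict(by_route),
    }

print(transparency_summary(actions))
```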


6. Legal Challenges, Criticisms, and Market Responses

The Online Safety Act, despite its stated goal of making the UK a safer online space, has faced considerable scrutiny and legal challenges, reflecting the complex interplay between safety, freedom, and innovation in the digital realm.

6.1 Fundamental Rights Concerns

Numerous civil liberties groups, human rights organisations, and academic experts have raised significant concerns about the potential for the Act to infringe upon fundamental rights.

  • Freedom of Expression: A primary criticism is that the Act, particularly in its duties to address ‘harmful content for children’ and its broad scope, could lead to platforms over-removing legitimate content (the ‘chilling effect’) to avoid hefty fines. This is exacerbated by the legal uncertainties surrounding what constitutes ‘harmful by default’ content for minors, which could encourage platforms to err on the side of caution. Critics argue that such broad requirements could stifle public discourse, limit access to information, and lead to disproportionate restrictions on speech, even for adult users who share content that might be deemed unsuitable for children (en.wikipedia.org, 2023).
  • Privacy Rights: Concerns about privacy are particularly acute regarding age verification mandates and the provisions related to end-to-end encrypted messaging. Age verification systems often require users to provide sensitive personal data, including biometric information or government ID, raising questions about data storage, security, and potential for misuse. The demand for scanning content on E2E encrypted services is seen by many as a direct assault on privacy, fundamentally weakening a crucial technology designed to protect personal communications from surveillance. Critics argue it represents a move towards ‘mass surveillance’ and sets a dangerous precedent for other nations (youtube.com, 2025).
  • Due Process and Transparency: Some critics argue that the Act grants Ofcom extensive powers with insufficient checks and balances, potentially leading to arbitrary enforcement. Concerns have also been raised about the transparency of content moderation decisions and the effectiveness of appeal mechanisms, fearing that platforms might become de facto arbiters of speech without adequate judicial oversight.

6.2 Specific Legal Challenges and Notable Cases

  • Wikimedia Foundation’s Judicial Review: A prominent legal challenge emerged from the Wikimedia Foundation, the non-profit organization behind Wikipedia. The Foundation initiated a judicial review against the potential designation of Wikipedia as a ‘Category One’ service under the Act. Their core argument is that Wikipedia, as an open-editing, collaborative online encyclopedia, does not function as a ‘user-to-user service’ in the same manner as social media platforms. They expressed profound concerns that complying with Category One obligations, such as extensive content moderation requirements, would fundamentally compromise Wikipedia’s open, volunteer-driven editing model, invite state-driven censorship, and potentially force them to curtail services in the UK (en.wikipedia.org, 2023). This challenge underscores the difficulty in applying broad regulatory frameworks to diverse digital services.
  • Tech Industry Apprehension: Beyond direct legal challenges, the tech industry has expressed widespread apprehension regarding the Act’s practical implications. Concerns include the immense compliance costs, the technical feasibility of certain mandates (especially regarding encryption), and the potential for regulatory fragmentation globally. Some have warned that stringent requirements could disincentivize innovation or lead to platforms withdrawing certain services from the UK market.

6.3 Market Adjustments and Industry Responses

In anticipation and response to the OSA, several platforms have begun to adjust their operations:

  • Proactive Compliance: Microsoft, for instance, announced the implementation of new age verification procedures for Xbox users in the UK, directly aligning with the Act’s requirements for child protection. Other platforms have invested in enhancing their content moderation capabilities and improving reporting tools.
  • Age Verification Rollout: The mandatory age verification for commercial pornography sites represents a significant shift. While some sites have already adopted such measures, the Act’s enforcement will push for widespread implementation, driving innovation in age assurance technologies and potentially leading to a more secure online environment for minors (lemonde.fr, 2025).
  • Circumvention Challenges: Despite efforts, challenges remain in enforcing age verification. Reports indicate that users have found ways to circumvent systems, such as exploiting photo modes in video games to produce images that meet facial verification requirements. This highlights the ongoing ‘cat and mouse’ game between regulators/platforms and users seeking to bypass restrictions.
  • Potential for Service Withdrawal: The most significant potential market response, particularly for E2E encrypted services, is the threat of withdrawal. Companies like Signal and WhatsApp have indicated that they would rather exit the UK market than compromise their encryption standards, which they view as fundamental to user privacy and security (youtube.com, 2025).
  • Innovation in Safety Tech: The Act could also spur innovation in the ‘safety tech’ sector, driving the development of new tools and solutions for content moderation, age verification, and risk assessment.

6.4 Economic and Societal Impact

  • Compliance Costs: The Act imposes significant compliance costs on businesses, particularly for smaller platforms and startups that may lack the resources of large tech giants. This could disproportionately affect SMEs and potentially stifle innovation.
  • Impact on User Experience: Users may experience changes in how they interact with online services, including increased friction due to age verification, altered content accessibility, and potentially more cautious moderation.
  • Digital Divide Concerns: Concerns exist that sophisticated age verification technologies might inadvertently exclude individuals without specific forms of ID or digital literacy, exacerbating a digital divide.


7. Extraterritorial Reach and International Implications

One of the most complex and potentially contentious aspects of the Online Safety Act is its extraterritorial reach, meaning its provisions can apply to services operating outside the United Kingdom.

7.1 Criteria for Extraterritorial Application

The OSA applies to any online service that has a ‘significant number of UK users’ or that ‘targets UK users’, regardless of where the service provider is based. This broad application is designed to ensure that foreign companies cannot evade their responsibilities simply by operating from outside the UK. Key criteria for determining extraterritoriality include:

  • Significant UK Users: Ofcom will assess the volume and engagement of UK users to determine if this threshold is met. The exact definition of ‘significant’ will likely be clarified through Ofcom’s guidance.
  • Targeting UK Users: This can be inferred from various factors, such as the use of the English language, offering services in Pound Sterling, having a UK-specific domain or marketing campaigns directed at UK residents, or the nature of content being relevant to the UK population.

7.2 Jurisdictional Challenges and Conflicts of Law

The extraterritorial application raises several profound questions and potential challenges:

  • Enforcement Against Foreign Entities: The practicalities of enforcing UK law, including imposing fines or blocking orders, on companies located in other sovereign jurisdictions can be complex. While large multinational companies with a physical presence or significant operations in the UK are more susceptible to enforcement, smaller foreign entities might prove harder to bring into compliance.
  • Clash with Other National Laws: The OSA’s requirements could directly conflict with laws in other countries, particularly concerning freedom of speech or data privacy. For example, content deemed ‘harmful to children’ in the UK might be considered legitimate expression elsewhere. This creates a dilemma for global platforms that must navigate a patchwork of conflicting national regulations.
  • Sovereignty and Digital Borders: The assertion of national jurisdiction over global digital services underscores the ongoing struggle to define ‘digital borders’ in an inherently borderless internet. It raises questions about whose values and legal norms should prevail in the digital commons.
  • US Trade Negotiations: The UK government has reiterated that it will not alter the Online Safety Act as part of US trade negotiations, indicating its resolve to uphold its regulatory stance despite potential international pressures (reuters.com, 2025).

7.3 Shaping Global Standards and Regulatory Harmonization

Despite the challenges, the OSA’s extraterritorial reach positions the UK as a significant player in the global online safety debate:

  • UK’s Influence: The Act contributes to the growing international movement towards greater platform accountability. Its comprehensive nature and explicit focus on child safety and illegal content may influence legislative efforts in other countries.
  • ‘Brussels Effect’ vs. ‘UK Effect’: The EU’s GDPR and DSA have demonstrated a ‘Brussels Effect,’ where global companies adopt EU standards due to the size of the single market. The OSA, while powerful, will test whether the UK, post-Brexit, can exert a similar ‘UK Effect’ on global tech companies, or if companies will primarily adapt to the largest common denominator (e.g., the DSA) and then make specific adjustments for the UK.
  • Potential for International Cooperation: Addressing global online harms ultimately requires international cooperation. The OSA could serve as a model or a catalyst for enhanced dialogue and collaboration between regulators worldwide to develop harmonized standards and cross-border enforcement mechanisms.


8. Conclusion and Future Outlook

The Online Safety Act 2023 represents a pivotal and ambitious legislative undertaking by the United Kingdom, designed to confront the pervasive challenges of online harm and establish a world-leading framework for digital accountability. The Act’s comprehensive scope, its meticulous definitions of harmful content, and its multi-tiered duties on online service providers underscore a determined effort to enhance user safety, particularly for children, and to compel platforms to take proactive responsibility for the content circulating on their services. The designation of Ofcom as a powerful, well-resourced regulator, equipped with substantial enforcement powers including significant fines and executive liability, signals a new era of robust online governance.

However, the implementation of the OSA is not without its complexities and inherent tensions. The ongoing debates surrounding freedom of expression, the practicalities and privacy implications of age verification, and most critically, the controversial stance on scanning end-to-end encrypted communications, highlight the profound challenges in balancing competing fundamental rights and technological realities. The legal challenges, such as the Wikimedia Foundation’s judicial review, and the varied market responses, ranging from proactive compliance by some platforms to threats of service withdrawal by others, demonstrate the profound impact of the Act and the significant adjustments required across the digital ecosystem.

Looking ahead, the successful implementation of the OSA will hinge on several critical factors: Ofcom’s development of clear, proportionate, and technically feasible codes of practice; the judiciary’s interpretation of the Act in forthcoming legal challenges; the adaptability and innovation of the tech industry in developing effective safety solutions without compromising user rights; and the ongoing international dialogue to address online harms on a global scale. The Act’s extraterritorial reach, while asserting UK sovereignty in the digital space, will inevitably lead to complex jurisdictional questions and potential conflicts of law, necessitating careful diplomatic engagement.

Ultimately, the Online Safety Act 2023 is not merely a static piece of legislation but a dynamic framework poised to evolve in response to technological advancements, emerging online harms, and societal needs. Its journey will serve as a crucial test case for how democracies can regulate the internet to foster safety and accountability, while striving to uphold the foundational principles of privacy and freedom of expression in an increasingly digital world. Its long-term legacy will be determined by its ability to demonstrably make the UK ‘the safest place in the world to be online’ without inadvertently stifling innovation or undermining fundamental digital rights.


References

  • AP News. (2024, March 19). ‘The first ‘cyberflasher’ is convicted under England’s new law and gets more than 5 years in prison’. Retrieved from apnews.com
  • Gov.uk. (2023, October 26). ‘UK children and adults to be safer online as world-leading bill becomes law’. Retrieved from gov.uk
  • Lemonde.fr. (2025, July 20). ‘Age verification becomes mandatory on porn sites in the UK, and gradually in France’. Retrieved from lemonde.fr
  • Parliament.uk. (2023, September 20). ‘Online Safety Bill completes passage through parliament’. Retrieved from parliament.uk
  • Reuters. (2025, April 9). ‘UK will not change online safety law as part of US trade negotiations’. Retrieved from reuters.com
  • UK Online Safety Act 2023. (n.d.). Retrieved from lw.com
  • Wikipedia. (n.d.). ‘Online Safety Act 2023’. Retrieved from en.wikipedia.org
  • YouTube. (2025, August 19). ‘The UK Online Safety Act Just Got DESTROYED!’. Retrieved from youtube.com
