The Online Safety Act 2023 received Royal Assent in October 2023, and its key child-safety duties came fully into force on 25 July 2025. The law sets out the legal responsibilities of tech firms that provide online services to UK users, regardless of where those firms are based.
The Act obliges platforms to protect users from both illegal content (such as terrorism, child abuse, hate speech and fraud) and content that is harmful to children (such as pornography, self-harm or eating-disorder material). The UK's communications regulator, Ofcom, now has significantly stronger powers to police these rules, and platforms that fail to comply face serious consequences.
How the Act Works and the Role of Ofcom
The law gives Ofcom broad enforcement powers. The regulator can audit platforms for compliance and impose fines of up to £18 million or 10% of global revenue, whichever is greater.
Ofcom can also seek court orders to disrupt non-compliant services, for example by cutting off their advertising or payment providers. In the most serious cases, access to a site can be blocked in the UK.
Age Verification and Browsing Safety Measures
Since 25 July 2025, sites that allow users to access online pornography or self-harm content must apply highly effective age assurance, so that such material is only available after users confirm they are of legal age.
Ofcom expects platforms to use secure methods such as facial age estimation, photo ID verification or credit card checks, while respecting users' privacy and collecting no more information than necessary.
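To make the requirement concrete, here is a minimal, purely hypothetical TypeScript sketch of how a service might gate restricted content behind a prior age-assurance result. The AgeAssuranceResult shape and the 30-day re-check window are illustrative assumptions, not anything specified by Ofcom; the verification itself (photo ID, facial estimation or credit card check) would be handled by a specialist provider.

```ts
// Hypothetical sketch of an age gate, assuming a third-party age-assurance
// provider has already issued a result. All names here are illustrative.

interface AgeAssuranceResult {
  verified: boolean;                                        // did the provider confirm the user is 18+?
  method: "photo_id" | "facial_estimation" | "credit_card"; // how the check was performed
  issuedAt: Date;                                           // when the check was performed
}

// Only serve restricted content if a recent, positive assurance result exists.
function mayAccessRestrictedContent(result: AgeAssuranceResult | null): boolean {
  if (!result || !result.verified) return false;
  const maxAgeMs = 1000 * 60 * 60 * 24 * 30; // assumed 30-day re-check window
  return Date.now() - result.issuedAt.getTime() < maxAgeMs;
}
```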
Social media platforms are not exempt. They must not expose children to bullying, hate speech, exploitation, dangerous challenges, self-harm triggers or other harmful content. They must also provide accessible reporting tools and prevent strangers from sending direct messages to children.
Against this backdrop, browser fingerprinting is becoming an important part of the online safety toolkit as the Online Safety Act tightens restrictions on harmful and unlawful material. Fingerprinting tools combine technical details exposed by a browser, such as screen size, time zone, installed plugins and hardware characteristics, into an identifier that is close to unique for a given device.
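To illustrate the idea, the following TypeScript sketch combines a handful of those browser signals into a hashed identifier. It is a simplified example of the general technique, not a description of how any particular product works; real tools use far more signals (canvas and WebGL rendering, fonts, audio processing) and correlate them server-side.

```ts
// A minimal sketch of client-side fingerprint collection, for illustration only.
async function collectFingerprint(): Promise<string> {
  // Gather a few browser attributes that vary between devices.
  const signals = [
    navigator.userAgent,
    navigator.language,
    Intl.DateTimeFormat().resolvedOptions().timeZone,        // time zone
    `${screen.width}x${screen.height}x${screen.colorDepth}`, // screen size
    String(navigator.hardwareConcurrency ?? "unknown"),      // hardware detail: CPU cores
    Array.from(navigator.plugins, (p) => p.name).join(","),  // installed plugins
  ].join("|");

  // Hash the combined signals into a stable, compact identifier.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

collectFingerprint().then((id) => console.log("fingerprint:", id));
```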
For instance, a tool such as Pixelscan lets you analyse your own browser's fingerprint so you can see what information it reveals and take more control over your browsing privacy. On the platform side, fingerprinting can help protect user identities, detect suspicious activity, and make it harder for bad actors to evade detection with VPNs or throwaway accounts.
With browser fingerprinting, companies can spot patterns associated with bots, fraud or underage users attempting to reach restricted content. That supports part of the legislation's goal: better protection for children and safer online spaces.
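As a simple illustration of that kind of pattern-spotting, the sketch below flags device fingerprints that turn up across an unusually large number of accounts, a pattern often associated with bot farms or fraudulent sign-ups. The threshold and data shapes are assumptions made for the example, not anything specified by the Act or Ofcom.

```ts
// Illustrative sketch: flag fingerprints shared by suspiciously many accounts.
const SUSPICIOUS_ACCOUNT_THRESHOLD = 10; // assumed cut-off for this example

function findSuspiciousFingerprints(
  events: { fingerprint: string; accountId: string }[]
): string[] {
  // Count distinct accounts seen per device fingerprint.
  const accountsPerFingerprint = new Map<string, Set<string>>();
  for (const { fingerprint, accountId } of events) {
    if (!accountsPerFingerprint.has(fingerprint)) {
      accountsPerFingerprint.set(fingerprint, new Set());
    }
    accountsPerFingerprint.get(fingerprint)!.add(accountId);
  }
  // A single device driving many accounts is worth a closer look.
  return Array.from(accountsPerFingerprint.entries())
    .filter(([, accounts]) => accounts.size >= SUSPICIOUS_ACCOUNT_THRESHOLD)
    .map(([fingerprint]) => fingerprint);
}
```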
Weighing Safety vs. Freedom
The Act also protects content of democratic importance, including news articles and political views, even when they are posted by ordinary users, and platforms must take that protection into account when moderating.
According to a recent YouGov survey, around 70% of adults support the law, but 64% are sceptical that it will actually stop under-18s gaining access. Privacy has also become a point of debate, since third-party age checks operate without a central overseeing authority. Critics argue that the legislation risks censorship and regulatory overreach.
Free-speech advocates, including the platform X (formerly Twitter), caution that a broad interpretation could chill lawful speech. Privacy advocates warn that the Act may undermine encryption, since services could be required to scan encrypted messages for child sexual abuse content, something that is technically impossible without breaking users' privacy.
Tech investors such as Marc Andreessen have denounced the law's broad powers, including invasive age checks, and criticised its impact on free speech.
Looking Ahead
The Online Safety Act is now in force and enforcement is underway, with Ofcom applying the rules in full. The law's growing impact features in headlines about tech platforms, in statements from rights groups, and in government reports.
Will it succeed in safeguarding children and the victims of harmful material, or will it create new privacy risks, censorship and compliance burdens? The success or failure of the UK's regulation is being watched internationally, and other countries may follow its lead.
As the law beds in, the real test will be whether it can make the internet safer without undermining rights and privacy. Watch this space: the future of online safety will depend on how the law plays out over the coming months.