Blocking Access to Harmful Content Will Not Protect Children Online, No Matter How Many Times UK Politicians Say So

The UK is having a moment. In late July, new rules took effect that require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.

During the four years that the legislation behind these changes, the Online Safety Act (OSA), was debated in Parliament, and in the two years since, as the UK’s independent online regulator Ofcom devised the implementing regulations, experts from across civil society repeatedly flagged concerns about the law’s impact on both adults’ and children’s rights. Yet politicians in the UK pushed ahead and enacted one of the most contentious age verification mandates we have seen.

No one, no matter their age, should have to hand over their passport or driver’s license just to access legal information and speak freely. As we have been saying for many years, the approach that UK politicians have taken with the Online Safety Act is reckless and short-sighted, and it will cause more harm to the very children it is trying to protect. Here are five reasons why:

Age Verification Systems Lead to Less Privacy 

Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy. To keep children away from a website or from certain content, online services must confirm the ages of all their visitors, not just children. That means, for example, asking for government-issued documentation, or collecting biometric data such as face scans, which are shared with third-party services like Yoti or Persona to estimate whether the user is over 18. Adults and children alike must hand over their most sensitive personal information just to access a website.
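To make that data flow concrete, here is a minimal sketch of what an age-gate handler on such a site might look like. Every name in it is hypothetical (this is not any real vendor’s API), but the shape is the point: the visitor’s biometric data leaves the site’s control before a single page is served.

```python
import requests  # standard third-party HTTP client

# Hypothetical third-party estimation endpoint -- not a real vendor's API.
ESTIMATOR_URL = "https://age-estimator.example.com/v1/estimate"

def gate_visitor(face_scan: bytes) -> bool:
    """Decide whether a visitor may view age-restricted content.

    Note what this function actually does: it transmits the visitor's
    face scan, in full, to an external company. The site and the visitor
    must simply trust how that company retains, shares, or monetises
    the image afterwards.
    """
    response = requests.post(
        ESTIMATOR_URL,
        files={"image": ("scan.jpg", face_scan, "image/jpeg")},
        timeout=10,
    )
    response.raise_for_status()
    # The vendor returns an estimate. The site only needed a yes/no,
    # but a biometric sample has already changed hands.
    return response.json()["estimated_age"] >= 18
```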

Once this information is shared to verify a user’s age, there is no way for people to know how it will be retained or used by that company, including whether it will be sold or shared with yet more third parties like data brokers or law enforcement. The more information a website collects, the more chances there are for it to end up in the hands of a marketing company, a bad actor, or a state actor that has filed a legal request for it. If a website, or one of the intermediaries it uses, misuses or mishandles the data, the visitor might never find out. There is also a risk that this data, once collected, can be linked to other, unrelated web activity, creating an aggregated profile of the user that grows more valuable with each new data point.

As we argued extensively during the passage of the Online Safety Act, any attempt to protect children online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. But under the Online Safety Act, users are forced to trust that platforms (and whatever third-party verification services they choose to partner with) are safeguarding users’ most sensitive information, not selling it through the opaque supply chains that allow corporations and data brokers to make millions. The solution is not more sophisticated technology; it is to not collect the data in the first place.

This Isn’t Just About Safety—It’s Censorship

Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government deciding what speech is permissible. But under the Online Safety Act, the UK government, working through Ofcom, is deciding what speech young people can access, and is forcing platforms to remove any content it considers harmful. As part of this, platforms are required to build “safer algorithms” to ensure that children do not encounter harmful content, and to introduce effective content moderation systems that remove harmful content once platforms become aware of it.

Because the OSA threatens large fines, or even jail time, for non-compliance, platforms are pushed to over-censor content to avoid any such liability. Reports already show content being censored that falls well outside the OSA’s intended scope: footage of police attacking pro-Palestinian protestors blocked on X, the subreddit r/cider (yes, the beverage) asking users for photo ID, and smaller websites closing down entirely. The UK-based organisation Open Rights Group is tracking this censorship with its tool, Blocked.

The scope of so-called “harmful content” is subjective and arbitrary, and it often sweeps up content like pro-LGBTQ+ speech. Policies like the OSA that claim to “protect children” or keep sites “family-friendly” often label LGBTQ+ content as “adult” or “harmful,” while similar content that does not involve the LGBTQ+ community is left untouched. Sometimes this impact, the censorship of LGBTQ+ content, is implicit and only becomes clear once the policies are implemented. Other times it is explicitly spelled out in the text of the policies themselves. In every scenario, legal content is being removed at the discretion of government agencies and online platforms, all under the guise of protecting children.

People Do Not Want This 

Users in the UK have made clear that they do not want this. Just days after age checks came into effect, VPN apps became the most downloaded on Apple’s App Store in the UK, and the BBC reported that one provider, Proton VPN, saw an 1,800% spike in UK daily sign-ups after the age check rules took effect. A similar spike in searches for VPNs appeared in January, when Florida joined the ever-growing list of U.S. states implementing age verification mandates on sites that host adult content, including pornography websites like Pornhub.

Whilst VPNs may be able to disguise the source of your internet activity, they are neither foolproof nor a solution to age verification laws. Ofcom has already begun discouraging their use, and over time it will become increasingly difficult for VPNs to circumvent age verification requirements as enforcement of the OSA adapts and deepens. VPN providers will struggle to keep pace with constantly changing laws, especially as more sophisticated detection systems are introduced to identify and block VPN traffic.

Some politicians in the Labour Party have argued that a ban on VPNs would be essential to prevent users from circumventing age verification checks. But banning VPNs, just like introducing age verification measures, will not achieve this goal. It will, however, function as an authoritarian control on access to information in the UK. If you are working out how to protect your privacy, or want to learn more about VPNs, EFF provides a comprehensive guide on using VPNs and protecting digital privacy, a valuable resource for anyone looking to use these tools.

Alongside increased VPN usage, a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures. In its official response to the petition, the UK government said that it “has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections.” This is not good enough: the government must immediately treat the reasonable concerns of people in the UK with respect, not disdain, and revisit the OSA.

Users Will Be Exposed to Amplified Discrimination 

To check users’ ages, three types of systems are typically deployed: age verification, which requires a person to prove both their age and their identity; age assurance, in which users prove only that they are of a certain age or within an age range, such as over 18; and age estimation, which uses technology to estimate a user’s age within a certain range. The OSA requires platforms to use age assurance to establish that those accessing restricted content are over 18, but leaves the specific tool to the platforms’ discretion. In practice this may mean uploading a government-issued ID, or submitting a face scan to an app that then uses a third-party service to “estimate” your age. The sketch below illustrates the difference.
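The differences are easier to see side by side. This is an illustrative sketch only; the function names and return values are ours, not Ofcom’s or any vendor’s, but it shows how much data each approach demands relative to what the platform actually needs to know.

```python
from datetime import date

def years_since(dob: date) -> int:
    """Age in whole years from a date of birth."""
    today = date.today()
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def age_verification(document: dict) -> dict:
    """Verification: establishes who you are and exactly how old you are.
    The platform ends up holding your full identity document."""
    return {"name": document["name"], "date_of_birth": document["dob"]}

def age_assurance(document: dict, threshold: int = 18) -> bool:
    """Assurance: establishes only that you clear a threshold -- but the
    proof still usually means handing over an ID or a biometric sample."""
    return years_since(document["dob"]) >= threshold

def age_estimation(face_scan: bytes) -> int:
    """Estimation: guesses an age from biometric data, typically by sending
    the scan to a third-party model, with an error margin that can exceed
    a year in either direction."""
    raise NotImplementedError("delegated to an external vendor model")
```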

From what we know about face scanning in other contexts, such as the face recognition technology used by law enforcement, even the best systems are susceptible to mistakes and misidentification. Just last year, a legal challenge was launched against the Met Police after a community worker was wrongly identified, and then detained, by the Met’s live facial recognition system.

For age assurance purposes, the technology at best has an error margin of more than a year, which means users risk being wrongly blocked or locked out of content by erroneous estimates of their age, whether through random error or through discriminatory algorithmic patterns that systematically misjudge certain groups of people. These algorithms are not always reliable, and even if the technology were somehow 100% accurate, it would still be an unacceptable tool of invasive surveillance that people should not be subjected to just to access content the government considers harmful.
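A one-line worked example (with illustrative numbers only) shows why an error margin of a year or more matters most at the threshold itself, and why it cuts both ways:

```python
def passes_gate(true_age: float, estimation_error: float, threshold: int = 18) -> bool:
    """Illustrative only: with errors exceeding a year, misclassification
    near the threshold is routine in both directions."""
    return true_age + estimation_error >= threshold

print(passes_gate(18.5, -1.0))  # False: a legal adult is wrongly locked out
print(passes_gate(17.5, +1.0))  # True: a child is wrongly let through
```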

Not Everyone Has Access to an ID or Personal Device 

Many advocates of the ‘digital transition’ introduce document-based verification requirements or device-based age verification systems on the assumption that every individual has access to a form of identification or their own smartphone. This is not true. In the UK, millions of people don’t hold any form of identification or own a personal mobile device, and instead share devices with family members or use public ones, such as those at a library or internet cafe. Because age checks under the OSA rely on government-issued ID documents or face scans on a personal device, millions of people will be excluded from online speech and will lose access to much of the internet.

These are primarily lower-income or older people who are often already marginalized, and for whom the internet may be a critical part of life. We need to push back against age verification mandates like the Online Safety Act, not just because they make children less safe online, but because they risk undermining crucial access to digital services, eroding privacy and data protection, and limiting freedom of expression. 

The Way Forward 

The case of safety online is not solved through technology alone, and children deserve a more intentional and holistic approach to protecting their safety and privacy online, not this lazy strategy that causes more harm than it solves. Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protecting all people from online harms. We encourage politicians in the UK to pursue what is best, not what is easy.


