Technologists and policymakers are confronting a generation-defining problem on the internet. While it can be a revolutionary force for education and global connection, it can also pose serious dangers to children with unfettered access. There is no simple way to monitor children’s internet use without also surveilling adults, which paves the way for disastrous online privacy violations.
While some advocates praise new laws as victories for children’s safety, many security experts warn that these laws are being proposed and passed with flawed implementation plans, and that those flaws pose dangerous security risks for adult users as well. In the United States alone, twenty-three states had enacted age verification laws as of August 2025, with two more slated to follow in September. Meanwhile, the United Kingdom’s Online Safety Act, which took effect in July 2025, requires many online platforms to verify users’ identities before granting access.
When we talk about age verification laws today, we are not referring to the simple checkboxes of the past, like those used for Neopets accounts that asked users to affirm they were at least thirteen years old. Those types of age checks were a product of the Children’s Online Privacy Protection Act, or COPPA, an internet safety law passed in 1998. As many know from experience, such checks are trivially easy to bypass: a user simply checks the box, regardless of their actual age.
In the context of the laws that have emerged during the 2020s, age verification usually refers to a user uploading an official ID to a third-party verification system to prove their identity. Users might also upload biometric facial scans, similar to the technology that powers Face ID on iPhones.
Age verification is meant to address serious concerns from parents and lawmakers alike. The goal is to keep minors away from potentially harmful content, such as online pornography, information about illicit drug use, and social media sites where they may encounter strangers with bad intentions.
These concerns are not unfounded. Parents have shared horrific stories of how their children died after purchasing fentanyl-laced drugs on Facebook, or how they took their own lives after facing incessant bullying on Snapchat. As technology becomes more sophisticated, the problem is getting worse. Meta’s AI chatbots have reportedly flirted with children, while Character.AI and OpenAI are facing lawsuits over the suicides of children that were allegedly encouraged by the companies’ chatbots.
We know the internet is not all bad. Without leaving home or spending money, a person can learn to play guitar or write code. They can form meaningful friendships with people from the other side of the world, access specialized telehealth care, or find the answer to just about any question at any moment.
This is how global lawmakers have arrived at what they believe to be a sound compromise. They will not remove the whole internet, but they will put certain content behind a gate that can only be unlocked by proving one is an adult. In this case, that does not mean just clicking a box; it means uploading a government ID or scanning biometric data.
The safety of any digital security measure depends entirely on its implementation. Apple builds products like Face ID so that biometric scans of a user’s face never leave the iPhone; because the data is never sent over the cloud, hackers have far less opportunity to intercept it. But once verification data must travel to or be stored on someone else’s servers, identity verification becomes risky. We have already seen how badly these measures can play out when the technology is not rock-solid.
As the Electronic Frontier Foundation writes, no method of age verification is both privacy-protective and entirely accurate; each method simply trades one kind of danger for another.
Recent examples show how badly things can go when a company fails on security. On the app Tea, which women use to share information about men they meet on dating apps, users had to upload selfies and photos of their IDs to prove their identity. Users on the web forum 4chan found that Tea left users’ data exposed, meaning bad actors could access tens of thousands of government IDs, selfies, and direct messages where women shared sensitive dating experiences. What was purported to be an app for women’s safety ended up exposing its users to vicious harassment and giving bad actors access to personal information like home addresses. These hacks occurred despite Tea’s promise that the images were not stored and were deleted immediately.
This kind of breach happens all the time, and not just to new apps like Tea; world governments and trillion-dollar tech giants are not exempt either.
Some may wonder whether losing internet anonymity really matters if they are not doing anything shady. But the backlash to these laws is about far more than reluctance to link one’s porn viewing to a government ID.
In places where people can be prosecuted for political speech, anonymity is vital. It allows people to meaningfully discuss current events and critique those in power without fear of retribution. Corporate whistleblowers could be unable to report a company’s wrongdoing if all their online activity were linked to their identity, and victims of domestic abuse would find it even harder to flee dangerous situations.
In the U.S., the idea of being prosecuted for one’s political beliefs is becoming less theoretical. President Trump has threatened to send his political opponents to prison, and the government has revoked visas from international students who have criticized the Israeli government or participated in protests against the country’s military actions.
In the United States, twenty-three states have enacted age verification laws as of August 2025, while two more states have laws slated to take effect in late September. These laws mostly impact websites that host certain percentages of material deemed “sexual material harmful to minors,” a definition which varies from state to state.
In practice, this means pornographic websites must verify a user’s identity before granting access. But some sites, like Pornhub, have opted to simply block traffic from those states instead. Pornhub argued that because age verification software requires users to hand over extremely sensitive information, it invites data breaches, and that governments have historically struggled to secure this kind of data.
The definition of “sexual material harmful to minors” varies depending on who is enforcing the law. At a time when LGBTQ rights are under attack in the U.S., activists have warned that such laws could be used to classify non-pornographic information about the LGBTQ community, as well as basic sex education, as harmful material. These concerns appear well-founded, given that the Trump administration has removed references to civil rights movements and LGBTQ history from some government websites.
Texas’s age verification law was upheld in a Supreme Court ruling in June. It was passed around the same time the state imposed other legal restrictions on the LGBTQ community, including limits on public drag shows and bans on gender-affirming care for minors. The drag show law was later deemed unconstitutional for violating the First Amendment.
The United Kingdom enacted the Online Safety Act in July 2025. It requires many online platforms to verify a user’s identity before allowing access. If a user is identified as a minor, they are blocked from certain websites. The Act applies to search engines, social media platforms, video-sharing platforms, instant messaging services, and cloud storage sites.
In practice, this means websites like YouTube, Spotify, Google, X, and Reddit require U.K. users to verify their identity before accessing certain content. These requirements do not apply only to pornographic or violent content; people in the U.K. have been barred from viewing vital educational and news sources, making it difficult to access information without exposing themselves to privacy risks.
The U.K. does not mandate one specific verification method; individual websites decide what mechanism to use, with oversight from the communications regulator, Ofcom. But as the Tea example showed, we cannot assume that any given verification tool is safe. Users must now choose between freely accessing information and exposing themselves to privacy risks.
Even if you do not live in the U.K., you may be impacted by tech platforms that are pre-complying with these regulations. In the U.S., YouTube has already begun to roll out technology that estimates users’ ages based on their activity, regardless of the age listed when they registered their account.
It is possible to use a VPN to get around these barriers. After the Online Safety Act took effect, half of the top ten free apps on iOS in the U.K. were VPNs. VPN downloads also spiked after Pornhub access was blocked in many U.S. states. When Pornhub was suspended in France, one VPN provider reported that registrations spiked by 1000% within half an hour.
This introduces another issue: free VPNs do not always have strong privacy practices, even when they are advertised that way. Many free providers monetize their services by logging and selling users’ browsing data, which is precisely the information their users are trying to protect.