By Katie Paul, Supantha Mukherjee and Byron Kaye
NEW YORK/STOCKHOLM/SYDNEY, March 9 (Reuters) – For years, tech companies successfully resisted pressure from child safety advocates to do more to keep kids off their services, claiming technical limitations would make any attempt to restrict access for teens impractical, overly broad or a security risk.
Now, a growing list of governments is concluding those hurdles are not insurmountable, and pushing ahead with aggressive new age-checking requirements for social networks, AI chatbots and porn purveyors alike.
Three months after Australia launched a landmark ban on teen social media accounts, regulators across Europe, in Brazil and in a handful of U.S. states are moving to emulate it. California Governor Gavin Newsom – seen as a likely Democratic candidate for president in 2028 – joined the call last month, while Republican President Donald Trump is also reportedly “taking an interest” in age limits, according to his daughter-in-law.
Spurring them along are escalating concerns over online abuse and teen mental health, a recent outcry over the spread of AI-generated child sexual images, and increased confidence in the capabilities of “age assurance” software that backers say can suss out a person’s approximate age using facial analysis, parental approval, ID checks and other digital clues.
Recent advancements in artificial intelligence have boosted the effectiveness and slashed the cost of those age-gating tools, according to Reuters interviews with more than a dozen regulators, child safety advocates, independent researchers and vendors who perform the age checks for big tech companies, including TikTok, Facebook owner Meta and OpenAI.
“The age-assurance market has matured a lot in the last couple of years,” said Ariel Fox Johnson, a senior adviser to San Francisco-based Common Sense Media, a children’s online advocacy group. She pointed to improving technology, as well as the establishment of trade groups, technical protocols and certification schemes standardizing evaluation of the various tools’ effectiveness.
AGE-ASSURANCE MARKET MATURES
Social media companies now can often confidently guess a person’s age group using digital breadcrumbs like the year an account was established or the type of content it views, they said, while a burgeoning industry of age assurance vendors like Yoti, k-ID and Persona offer additional layers of checks via automated tools like face scans and machine-based analysis of government IDs.
At the app-store level, too, Apple and Alphabet’s Google have rolled out tools that allow parents to indicate their child’s age range to app developers.
“The tech definitely has gotten better, not just for age verification specifically but for overall identity verification,” said Merritt Maxim, a vice president at Massachusetts-based research firm Forrester. “That, in turn, has driven down the average cost of verification, so that where you were using it five years ago only for higher-value types of transactions, now you can use it for pretty much anything without a significant financial impact.”
Vendors generally charge well under $1 per check for basic machine-only age assurance tools, though for large volumes the price is often as low as single-digit cents, said industry executives. More costly traditional processes like human confirmation and triangulation of personal data that were standard a decade ago are still available at a premium, but are needed less frequently, the executives said.
Independent evaluations back up executives’ descriptions of rapid progress. According to an ongoing study run by the U.S. National Institute of Standards and Technology (NIST), face-scanning software from firms including Yoti – which performs checks for TikTok and Meta’s Facebook, Instagram and Threads – was off in its age estimations by an average of 4.1 years as of initial testing in 2014; by 2024 that average had dropped to 3.1 years, and it currently stands at 2.5 years.
FACE-SCANNING GAINS PRECISION
UK-based Yoti said the performance of its latest face analysis model due out in April surpasses that of models it submitted for the NIST and Australian studies, with an average error of only 1.04 years for kids in regulators’ target age range of 14 to 18. Persona, a San Francisco-based identity verification firm used by OpenAI and Reddit, touts a similar average error of 1.77 years for the 13-to-17-year-old age range.
A report commissioned by the Australian government likewise determined last year that photo-based age estimation products were broadly accurate, although it acknowledged that users within three years of the law’s age cutoff of 16 were in a “grey zone where system uncertainty is higher” and recommended they be diverted to “supplementary assurance methods, such as ID-based verification or parental consent.”
The systems also struggle more with certain skin tones, with grainier imagery captured by older phones, and when using privacy-protective “on-device” data processing, which entails performing a check entirely on the person’s phone without sending their data out to a cloud server, executives said.
For instance, systems using on-device processing are less likely to catch attempts by enterprising youngsters to appear older than they are, said Rick Song, CEO of San Francisco-based Persona. Common tricks used by teens include donning masks, applying heavy makeup or fake facial hair, or scanning the plastic faces of action figures instead of their own, he said.
Still, said executives, facial age estimation can provide a digital version of the kind of screening performed daily at bars and liquor stores in the offline world.
“If you look young, you can be challenged, and you may have to provide your ID,” said Robin Tombs, CEO of London-based Yoti.
He added that social media services generally require fewer face scans and ID checks than porn or gambling sites because they already have reams of personal information on their users. This means they can lean more on an age assurance method called “inference” — involving analysis of online activities, connected financial information and other signals — to satisfy regulators’ requirements.
The 10 social media companies included in Australia’s teen ban all declined Reuters requests for data on the effectiveness of their age assurance tools.
EARLY IMPLEMENTATION RESULTS
Australia’s internet regulator, the eSafety commissioner, has said it will collect population data for two years to assess the ban’s impact and publish first results later this year. Already, companies have locked 4.7 million suspected underage accounts since the law came into effect in December, it said, although industry participants have told Reuters that some of the accounts were likely underage Google accounts that were prevented from logging in to YouTube, regardless of whether they were active.
Meta said it took down about 550,000 Instagram, Facebook and Threads accounts suspected to be underage in the first weeks of the Australian ban. Snapchat said it took down about 415,000.
Regulators elsewhere are watching carefully. European Commission President Ursula von der Leyen is set to discuss age verification during an upcoming visit to Canberra, according to a European lawmaker briefed on her agenda. The United Kingdom, which requires age verification for porn websites and is considering tightening child safety rules for social media and AI chatbots as well, is likewise swapping notes with Australian counterparts.
Early results from the Australian experiment should be taken with a grain of salt, as companies affected by the ban generally were doing the bare minimum to comply with legal requirements, said Iain Corby, the executive director of the Age Verification Providers Association, a trade association that represents about three dozen vendors including Yoti and Persona.
In some cases, he added, the social media companies asked AVPA member firms to turn off controls that make the age checks more robust.
“They are extremely worried this is going to be contagious and be a policy that is adopted around the world, so they are not really motivated for it to be a glowing success,” he said.
“They are testing the regulator’s patience to see what they can get away with.”
(Reporting by Katie Paul in New York, Supantha Mukherjee in Stockholm and Byron Kaye in Sydney; Editing by Kenneth Li and Matthew Lewis)
Disclaimer: This report is auto generated from the Reuters news service. ThePrint holds no responsibility for its content.

