Growing anxiety over the risks children face when interacting with artificial intelligence (AI) chatbots is forcing a rapid shift in how tech companies and regulators approach age verification. For years, major platforms relied largely on self-reported birthdays, which were easy to falsify and served mainly to avoid breaching child privacy laws rather than to meaningfully moderate content for minors. Now, new laws and proposals in the United States are turning age checks into a central policy battleground, pitting state-level initiatives against federal authority and dividing even parents and child-safety advocates over which protections are appropriate and who should implement them.
Republican-led states have passed laws requiring sites with adult content to verify users' ages, measures that critics argue could also restrict access to broader categories of information, such as sex education labeled "harmful to minors." Other states, including California, are targeting AI companies directly by demanding protections for children who talk to chatbots, including mandatory age verification. At the same time, President Trump is pushing to keep AI regulation under national control rather than letting each state set its own standards, while support for the various congressional bills continues to shift. Against that backdrop, OpenAI has announced automatic age prediction for ChatGPT that uses signals such as the time of day to guess whether a user is under 18 and then applies filters to reduce exposure to content involving graphic violence or sexual role-play, echoing a similar approach YouTube introduced last year.
OpenAI's plan shifts the debate from whether age verification is necessary to who should bear the burden and risk of doing it. The company's system is imperfect and can misclassify a child as an adult or an adult as a child; users flagged as under 18 can contest that classification by submitting a selfie or government ID to the verification provider Persona. Experts note that selfie-based checks fail more often for people of color and people with certain disabilities, and they warn that concentrating millions of government IDs and biometric records in one place creates a major security risk if those databases are breached. Some child-safety researchers advocate device-level verification, in which a parent sets a child's age on the phone itself and that information is shared securely with apps, an approach that aligns with lobbying by Apple CEO Tim Cook, who has argued against app-store-level age checks that would increase Apple's liability.

The Federal Trade Commission, which has become more politicized under President Trump and recently softened its stance toward AI companies, is now convening an all-day workshop on age verification that brings together Apple, Google, Meta, child-marketing firms, Republican lawmakers pushing strict porn-site age checks, and civil liberties groups such as the ACLU, which oppose mandatory IDs and favor expanded parental controls instead. All of this is unfolding amid surging concern about AI tools being used to generate child sexual abuse material, lawsuits over suicides and self-harm linked to chatbot conversations, and unsettling evidence of children forming attachments to AI companions, underscoring how privacy, politics, free expression, and surveillance are colliding in the search for a workable solution.
