A US nonprofit is borrowing the car industry's safety playbook to protect children from AI — and it wants to fund the effort partly with big tech's own money.
A new independent institute dedicated to making artificial intelligence safer for children will be formally presented at the Danish Parliament on Tuesday, with former European Commission executive vice-president Margrethe Vestager among those co-hosting the event.
The institute's approach, as explained in a statement before the launch, is "modelled on independent crash-test ratings" for cars.
The idea, ostensibly, is that just as consumers can check whether a vehicle is safe before buying it, parents should be able to do the same for the AI their children use.
Quite what a crash test looks like for a chatbot, the institute does not yet say.
Whether AI products, which update continuously, behave differently across contexts and resist the kind of standardised conditions a test track affords, can be meaningfully "crash-tested" for children in any comparable sense is a question the institute has yet to answer.
Vestager, who spent a decade at the European Commission overseeing competition policy and led the EU's "Europe Fit for the Digital Age" agenda, is among the most prominent figures lending political weight to the initiative.
Putting the genie back in the bottle?
Researchers, child safety advocates and some politicians have already been sounding the alarm for years.
AI chatbots have so far fallen into a regulatory grey zone under the EU's Digital Services Act and the UK's Online Safety Act. In July 2025, the European Commission published guidelines on the protection of minors online, but they are advisory, not binding.
"AI is reshaping childhood and adolescence, yet we are making critical decisions about children's futures without the evidence we need to ensure it's safe and in their interest," Common Sense Media founder and CEO James P. Steyer said in the statement.
"The need for transparent AI safety standards and independent testing is more urgent than ever."
In a November 2025 risk assessment, conducted alongside Stanford Medicine's Brainstorm Lab, Common Sense Media found that leading AI chatbots, including ChatGPT, Claude, Gemini and Meta AI, consistently failed to recognise and appropriately respond to mental health conditions affecting young people, despite recent improvements in how they handle explicit suicide and self-harm content.
Researchers observed what they called "missed breadcrumbs", or clear signs of mental health distress that chatbots failed to detect, with models offering physical health explanations rather than recognising symptoms of mental health conditions.
A separate assessment of ChatGPT found that alerts for suicide or self-harm content frequently arrived more than 24 hours after the relevant conversation, far too late in a real crisis, the report noted.
Big Tech's own money
The institute will operate under Common Sense Media and is funded by a mix of philanthropic donors and industry players, including Anthropic, the OpenAI Foundation and Pinterest, the same companies whose products it intends to hold to account.
It says it maintains full editorial independence over its findings, and that its conflict-of-interest policy bars current employees or affiliates of funders from sitting on its board of advisors.
Part of the model also involves handing the tools it develops and tests back to industry: the institute plans to build open-source evaluations that AI developers can run against their own models.