Exclusive: Doctors and education experts who studied AI’s impact on the young call for a 5-year moratorium in schools
Researchers, doctors, and child development experts have studied what generative AI does to developing brains. Their conclusion: It shouldn’t be anywhere near a classroom, and action should be taken immediately.
“We just don’t want to waste another 10 years in which our kids’ education is undermined,” Leonie Haimson, cochair of the Parent Coalition for Student Privacy, told Fortune. “It took more than 10 years to ban cell phones from schools. We can’t afford that again.”
Boston-based child advocacy nonprofit Fairplay is leading a coalition of more than 250 experts and organizations in calling for a five-year moratorium on all student-facing generative AI products in pre-K through 12 schools in the U.S. and Canada. The group, made up of mental health experts, parents, educators, and organizations geared toward protecting children online, warned that any product that fails safety testing during that pause should be permanently banned. The report, shared exclusively with Fortune, will be released as advocates rally in front of New York’s City Hall to push for a two-year ban in the city’s public schools specifically.
Fairplay last month led a similar coalition of experts in penning a letter to YouTube and its parent company, Alphabet, to stop the spread of “AI slop” in YouTube Kids videos. The report was coauthored by members of the Screen Time Action Network’s Screens in Schools work group, including Emily Cherkin, a screen time consultant and professor at the University of Washington’s Evans School of Public Policy, along with other experts.
University of Washington professor Emily Cherkin
“It’s an unproven, untested product, and we’re giving it to children in the name of improving education or equity or cognition, when none of those things have been proven,” Cherkin told Fortune. “If a local children’s hospital told parents, ‘We’ve got this new drug, it has potential to save lives, just trust us,’ people would be horrified. We have vetting processes for all kinds of industries, and yet somehow we’re allowing generative AI companies access to our most vulnerable population.”
The experts’ core finding is that AI doesn’t just distract children: It actively interferes with the developmental work they need to do. The human brain isn’t fully formed until the mid-twenties, and the prefrontal cortex, used in planning, reasoning, emotion regulation, and critical thinking, is among the last regions to mature. “The problem with giving children generative AI is not just that they will cognitively offload the skill building,” Cherkin said. “It’s that they will displace the building of those skills even in the first place. If they’re never building skills, they have none to offload.”
The report pointed to a joint MIT and Harvard study finding that AI use accumulates “cognitive debt,” impairing independent thinking over time. Similarly, OECD research found that students who use ChatGPT as a study tool actually perform worse on tests than peers without access, even when the AI tutor has been programmed not to provide direct answers.
The mental health findings are equally stark. Google and Character.AI are currently facing lawsuits alleging their chatbots contributed to user suicides and induced children to harm family members. The American Psychological Association has issued a health advisory on AI and adolescent well-being. The report notes that teachers, therapists, and counselors must maintain licensure and follow ethics codes to work with children, but generative AI products face none of those requirements, and have been found to violate ethical standards when providing mental health support.
Under-resourced schools are more likely to rely on AI as a substitute for human teachers, while well-resourced schools retain human instruction. Because AI training datasets contain historical bias, the report warns, these products are likely to amplify existing educational inequities rather than close them. A February 2026 Pew Research Center survey found that 60% of teenagers say students at their school use chatbots to cheat “very often” or “somewhat often.”
The report is also pointed about what remains unknown. There is no proven educational benefit to generative AI in schools: It is marketed purely on “potential,” which the authors define as “literally what something is not.” Long-term effects on children’s cognitive and social-emotional development are entirely uncharted. “Giving children untested generative AI products based on future potential is dangerous,” the report states.
“The precautionary principle must be employed,” Cherkin said. “The best preparation for a digital future is an analog childhood. If we want kids to navigate generative AI someday, we should be doubling down on the skills that help them think critically, and that’s not happening at all.”
In New York City, Haimson, who is also a member of the Department of Education’s own AI working group, said Mayor Zohran Mamdani has failed to d