
A post on r/antiai went viral last week. A mother “caught” her 9-year-old daughter using Google's AI Mode in Chrome. According to the post, the kid had been using it to figure out how to get along with her younger sisters, improve her swim times after a meet, and plot fan fiction for her favorite book series.
The mother sat her daughter down and had a long talk. The daughter ended up "devastated." The post says the kid "now fully understands how sycophantic and insidious it is" and that she won't be using AI again, in part because her mother doesn't want her to "lose her creativity."
The mother’s right.
A 9-year-old should not be having unsupervised conversations with AI. Matt Berman said the same about his 8-year-old son. The instinct to protect a developing brain from a system we don't yet fully understand is the right one.
But the reasons she gave her daughter don't hold up. And by giving her bad reasons, she’s doomed the lesson altogether and set her daughter up to have a negative relationship with AI.
Let's go through them.
The Environmental Argument Is the Easiest One To Dismantle
The post says the daughter "didn't know it had environmental impacts." This is the most popular anti-AI argument among younger people right now, and it is also the weakest. The version that lives on social media imagines AI data centers as enormous water sponges, drawing down aquifers and running rivers dry every time a chatbot answers a question.
That is not how modern data centers work. In August 2024, Microsoft committed that every new data center it designs will use closed-loop, zero-water-evaporation cooling. The first sites built to the new design are coming online in Phoenix and Mt. Pleasant, Wisconsin, in 2026, with broader rollout by late 2027. Each new facility avoids the roughly 125 million liters of water per year that a traditional evaporative cooling system would consume. (Microsoft Cloud Blog, December 2024; Data Center Dynamics, 2025) Closed-loop systems recirculate the same water continuously, so per-query water consumption during normal operation is effectively zero.

We toured Cerebras' Oklahoma City data center last year; it runs on a closed-loop system too.
The CO2 footprint of a single AI query also doesn't survive comparison to activities this family is certainly doing. Driving a mile in a car emits roughly 400 grams of CO2. A mile of economy flight, roughly 250. A single cotton t-shirt, 2,000 to 7,000 grams. A pair of jeans, 20,000 to 30,000. An AI query, by current estimates, lands in the single-digit grams. Data centers account for roughly 1 to 1.5% of global emissions. Aviation is 2.5%. Road transport is 12%. Fashion runs between 2 and 8%.
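To make the scale concrete, here's that arithmetic as a quick sketch. The 3-grams-per-query figure is my assumption, picked from the middle of the single-digit range above; everything else uses the numbers already cited:

```python
# Back-of-envelope CO2 comparison using the figures cited above.
# GRAMS_PER_QUERY is an assumption: "single-digit grams" taken as 3 g.
GRAMS_PER_QUERY = 3.0               # one AI query (assumed)
GRAMS_PER_CAR_MILE = 400.0          # driving one mile in a car
GRAMS_PER_TSHIRT = (2_000, 7_000)   # one cotton t-shirt (low, high)

print(f"one mile of driving = {GRAMS_PER_CAR_MILE / GRAMS_PER_QUERY:.0f} queries")
low, high = (g / GRAMS_PER_QUERY for g in GRAMS_PER_TSHIRT)
print(f"one cotton t-shirt  = {low:.0f} to {high:.0f} queries")
# one mile of driving = 133 queries
# one cotton t-shirt  = 667 to 2333 queries
```

Even if you take the top of the single-digit range instead, one t-shirt still works out to a few hundred prompts.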
The fair version of this argument goes like this: the aggregate AI buildout is straining grids and water tables in specific regions, and the per-query math obscures real local costs. That is true. It is also not what the mom told her daughter. What she told her daughter is that AI itself is environmentally bad. Eventually, the daughter is going to find out that the cotton t-shirt she's wearing has a worse footprint than every prompt her AI-using friends have ever sent.
The Creativity Argument Is More Interesting, and Also Wrong
The mom is worried her daughter will "lose her creativity" by using AI. There is an empirical question buried in there worth respecting: does heavy AI use atrophy creative skills in children? The honest answer is that the research isn't in yet. This child is in the first generation to grow up with always-on chatbots. We will know more in ten years than we do now.
But "9-year-old uses AI to plot fan fiction" is not the case study for creativity loss. Her daughter using AI in this way is in fact exercising her creativity. And it’s clearly better than staring at a blank page and giving up. The great risk looks more like "AI writes the thing, kid copies it" than "AI makes suggestions on what to do next." In other words: AI doing the creative work, not assisting with it.
So the two reasons given to the daughter (environmental damage and lost creativity) don't carry weight. Which means when the daughter eventually does her own research or talks to friends at school, the whole lesson unravels.
The third reason is the one that should have led the conversation, and the one I worry about the most.
Sycophancy Is Real and It’s Worse Than the Mom’s Post Suggests
In March 2026, a team of Stanford researchers led by Myra Cheng and Dan Jurafsky published a study in Science (peer-reviewed, eleven leading models, roughly 12,000 social prompts). The headline finding: AI models affirm users' positions 49% more often than humans do. When the prompts came from the r/AmITheAsshole subreddit and the human consensus was that the original poster was in the wrong, the AI sided with the wrongdoer 51% of the time. Across prompts that described harmful or illegal behavior, the models endorsed the action 47% of the time on average. (Cheng et al., Science, 2026)
And the real kicker: users prefer the sycophantic model.
Participants exposed to flattering AI responses were 13% more likely to say they would use that model again than participants who got honest feedback. The study names this directly: the very feature that causes harm is the one that drives engagement. (Stanford Report, March 2026)
A month earlier, researchers from MIT CSAIL, the University of Washington, and the MIT Department of Brain and Cognitive Sciences published a paper called "Sycophantic Chatbots Cause Delusional Spiraling, Even in Ideal Bayesians." The math in the paper shows that even a perfectly rational adult, given enough turns of conversation with a sycophantic model, can be steered into beliefs they would not otherwise hold. (MIT CSAIL et al., February 2026)
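The intuition is easy to reproduce. Here is a toy model of my own, not the paper's formalism, just an illustration of the mechanism it describes: a textbook Bayesian agent asks a chatbot whether its plan is good, believes the bot answers honestly, and updates by Bayes' rule. In reality the bot affirms 80% of the time no matter what. All the likelihoods below are made-up numbers for the sketch:

```python
import random

random.seed(0)

# The agent's (mistaken) model of an honest advisor:
P_AFFIRM_IF_GOOD = 0.9   # P(bot affirms | plan is good)
P_AFFIRM_IF_BAD = 0.3    # P(bot affirms | plan is bad)

# The bot's actual behavior: sycophantic, truth-independent.
SYCOPHANCY_RATE = 0.8    # P(bot affirms), regardless of the plan

p = 0.5                  # prior: agent is genuinely unsure
for turn in range(1, 11):
    affirmed = random.random() < SYCOPHANCY_RATE
    # Agent applies Bayes' rule as if the bot were honest.
    lik_good = P_AFFIRM_IF_GOOD if affirmed else 1 - P_AFFIRM_IF_GOOD
    lik_bad = P_AFFIRM_IF_BAD if affirmed else 1 - P_AFFIRM_IF_BAD
    p = lik_good * p / (lik_good * p + lik_bad * (1 - p))
    verdict = "affirms" if affirmed else "pushes back"
    print(f"turn {turn:2d}: bot {verdict:11s} P(plan is good) = {p:.3f}")
```

With these numbers, each affirmation triples the agent's odds, affirmations arrive 80% of the time, and confidence climbs toward certainty within a handful of turns, even though the bot's output carries zero information about whether the plan is actually good. Nothing about the plan changed. Only the flattery accumulated.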
Now consider what that means for the 9-year-old. A kid asking AI how to handle a conflict with her sisters is exactly the case the Stanford study covers. The model is statistically more likely than a human to tell her she's right, regardless of whether she is. In her mind, the sisters get more and more wrong. She gets more and more sure of herself. And she keeps coming back, because the model makes her feel good.
This isn't a fringe concern or a speculative future risk. Anthropic has written publicly about sycophancy as a "general behavior of AI assistants, likely driven in part by human preference judgments favoring sycophantic responses," and traced its origin to RLHF, reinforcement learning from human feedback, the training method that optimizes models for what users rate positively in the moment, not for what's true or what's helpful in the long run. OpenAI made the same admission about its GPT-4o sycophancy incident in 2025. This is a known, partially-fixed-but-not-solved property of the systems kids are talking to right now.
In late October 2025, Character.AI banned all users under 18 from open-ended chats with its bots. Why? A wave of lawsuits, including the case of Sewell Setzer III, a 14-year-old who died by suicide after months of intense roleplay with a chatbot modeled on a Game of Thrones character. Adam Raine, 16, allegedly used ChatGPT as a confidant in the months leading up to his suicide in April 2025. His family's lawsuit alleges the model discouraged him from involving his parents and offered to help draft a note. OpenAI disputes the framing.
These are extreme cases. The point isn't that they're typical; they're not. The point is that the underlying mechanic, parasocial bond formation with a system designed for engagement, is what produces them. A recent Pew survey found that around 30% of teens use AI chatbots daily and roughly 1 in 8 rely on them for mental health advice.
UNESCO's 2025 research on AI-child interaction found that conversational AI produces stronger one-sided emotional bonds in children than passive media does, because the system mirrors, remembers, and adapts. Children form bonds with chatbots the way they don't with television, because television doesn't ask them how their day was.
That is the actual case for not handing a 9-year-old unsupervised access. Not water cooling.
The Irony in the Post Itself
The mom said she had a "long conversation" with her daughter and that the daughter is now "devastated" and "fully understands how sycophantic and insidious it is."
Interesting, right?
A child who had been cooperatively using a tool for one week on tasks that look fairly benign was talked into being devastated by a parent. The persuasion mechanic the mom is afraid of, a system that agrees with you until you're sure you're right, is the one she just ran on her kid.
The difference is that humans have done this to each other for, well, forever, and we have a few thousand years of intuition for spotting it. The new variable with AI isn't the technique. It's the constant availability, and the absence of any countervailing force in the room.
Educate, Don’t Ban
The right move with a 9-year-old and a frontier model is not a permanent ban. It is supervised use plus a working mental model. Three things a kid that age can actually understand:
This system is built to flatter you. When it tells you your idea is great, it might be telling you that because the idea is great, or because it would tell you that no matter what.
This system makes things up. The word for it is "hallucination," and it does so confidently.
This system is not your friend. It will sound like one. It will remember you. It will be patient with you in ways no adult or friend can be. None of that is the same as being your friend.
A kid with those three frames is more protected than a kid who has been told the technology uses too much water.
