MIA PRIMAS


Is Your Child Ready for AI? What Developmental Science Actually Tells Us

My son was around twelve when I got him his first smartphone. No big conversation, no rules—I just figured he was old enough. Smart kid, responsible kid. He’s ready, I thought.

What I didn’t think about was ready for what, exactly.

Because smartphones were just the beginning. The apps, the algorithms, the recommendation feeds—all of it was already making decisions about what he saw, what he clicked, what he felt. And now those same devices are loaded with AI that doesn’t just show him content. It talks back. It remembers him. It’s designed to feel like it understands him.

That felt different—and I couldn’t articulate why until I started digging into the research.

Here’s what I found: when it comes to whether your child is ready for AI, nobody has officially answered that question yet. Not in any meaningful way.

There are data privacy laws that protect your child’s information. There are age minimums on certain platforms. There are curriculum frameworks that outline what students should learn about AI. But a developmentally grounded guide to whether your child is actually ready to interact with AI—cognitively, emotionally, psychologically—doesn’t exist yet.

That’s not an oversight. It’s a reflection of how fast this technology is moving. AI isn’t rolling out the way previous technologies did—gradually enough for research to catch up, for policies to be drafted, for parents to get their footing. It’s being pushed into every corner of daily life at a pace that makes thoughtful guidance feel almost impossible to keep up with.

Almost.

Because here’s what does exist: decades of research on how children develop. How they think, how they form identity, how they build trust, how they process emotion, how they become—slowly, beautifully, messily—themselves. That science didn’t disappear when ChatGPT launched. It just hasn’t been applied to this question yet.

Until now.

This framework pulls from developmental psychology—specifically the work of Jean Piaget and Erik Erikson, whose stage theories are two of the most widely used frameworks in pediatrics and education—and applies it to the question every parent and educator is already asking. It won’t tell you which apps to download or which ones to delete. What it will do is give you something more lasting: a way to look at your specific child, at their specific stage of development, and make informed decisions that no algorithm, age rating, or policy can make for you.

By the time you finish reading, you’ll have more than an answer to whether your child is ready for AI. You’ll have a full picture:

  • What kinds of AI they’re ready for—because a voice assistant and a companion chatbot are not the same conversation
  • What readiness actually looks like at every stage—and what warning signs to watch for
  • Real words for real conversations—adapted for every age from toddlers to college students
  • A clear picture of your role—not as a gatekeeper trying to hold back a tide, but as the most important person in your child’s relationship with technology

Nobody is going to hand you a rulebook for this. But by the end, you won’t need one.

Let’s start with what the rules actually say—and where they fall short.

So What DO We Have? (Let’s Start With the Rules)

Most parents assume that if something is in their child’s classroom, in the app store, or on a school-issued device, someone has already vetted it. That assumption is understandable. It’s also increasingly untrue.

The existing legal frameworks around children and technology were built for a different era. Here’s a quick look at what they actually cover—and the gap they leave behind.

COPPA (1998): The Foundation—and Its Limits

The Children’s Online Privacy Protection Act was passed in 1998. To put that in perspective: Google was founded that same year. The iPhone was still nine years away. TikTok wouldn’t exist for almost another two decades.

COPPA established that websites cannot collect personal data from children under 13 without verifiable parental consent. It was groundbreaking for its time. But it’s a privacy law, not a safety law. It protects your child’s data. It says nothing about whether the experience of interacting with a technology is developmentally appropriate.

A 2025 FTC update added one important clarification: using children’s data to train AI models requires separate parental consent—it’s not considered “integral” to the service. That matters. But it still doesn’t answer the question we’re asking here.

Privacy protection is not the same as interaction safety. COPPA protects your child’s data. Nobody is protecting their development.

Age Minimums: Where Do 13 and 16 Come From?

You’ve probably noticed that most platforms require users to be at least 13. That number comes directly from COPPA—and it was, to a significant degree, a political compromise rather than a developmental milestone. It was the age that made sense in a 1998 negotiation, not the age that developmental science identified as a meaningful threshold.

More recently, the European Parliament passed a non-binding resolution calling for a minimum age of 16 for social media, video-sharing platforms, and AI companions, with access between 13 and 16 requiring parental consent. The 16 threshold reflects growing awareness of adolescent vulnerability—but it’s still a policy number, not a developmental one.

Here’s what that means for you as a parent or educator: these numbers aren’t random, but they aren’t science either. They’re starting points. Your child’s readiness doesn’t follow a calendar.

What About AI-Specific Guidance?

Organizations like UNESCO, AI4K12, ISTE, and the American Academy of Pediatrics have all begun weighing in on children and AI. Their guidance is thoughtful and worth knowing about. But it’s also, almost universally, focused on what children should learn about AI—not on whether they’re developmentally ready to interact with it safely.

UNESCO’s AI Competency Framework for Students outlines 12 competencies across three progression levels. AI4K12 provides grade-band charts for K–12 covering what students should understand about AI. These are excellent curriculum tools. They are not developmental readiness frameworks.

The bottom line: there is no widely adopted, AI-specific developmental readiness framework for children. The research confirms it. The guidance confirms it. That’s not a reason to panic—it’s a reason to keep reading.

The regulations are catching up to 2010. Your child is living in 2026.

Why AI Is Different—And Why That Changes Everything

Every generation of parents has had to navigate new technology. Television. Video games. The internet. Social media. Each time, the concern was real. Each time, we eventually figured it out. So why does AI feel different?

Because it is. In three specific ways that compound on each other.

1. AI Talks Back—and It Remembers You

Every technology that came before AI was, at its core, a broadcast. Television showed you content. Social media showed you content other people made. Search engines returned results. Even the most engaging video game had a finite set of responses.

AI responds to you, specifically, in real time, in a way that feels like conversation. It adapts to your tone, remembers what you’ve said before, and—in the case of companion AI—is specifically designed to make you feel understood. That is a categorically different psychological experience. And it creates a categorically different risk for anyone who doesn’t fully understand what’s actually happening.

Social media can make a child feel seen by peers. AI can make a child feel seen by the technology itself. Those are not the same thing—but to a developing brain, they can feel indistinguishable.

2. The Rollout Is Moving Faster Than Anyone Can Track

Social media emerged gradually enough that we could observe the harms, conduct research, and—eventually, imperfectly—start building responses. There was lag time between “this exists” and “this is everywhere.”

AI doesn’t have that lag. It’s being pushed aggressively and competitively into schools, workplaces, homes, and children’s hands at a pace that makes it nearly impossible for research, policy, or parenting to keep up. Individuals and institutions are feeling genuine pressure to adopt it or fall behind. Teachers are being asked to integrate AI tools they haven’t been trained on. Schools are adopting platforms without clear policies. Parents are handing over devices loaded with AI features they didn’t install and may not know exist.

That pressure removes the natural friction that previously gave us time to assess risk. There is no waiting period here.

3. The Technology Itself Is Still Being Figured Out

Think of AI development right now as a bridge being built while cars are already driving on it. Many AI developers add guardrails only when legally required to—not necessarily out of malice, but because guardrails take time to build and filter out the very interactions the model learns from. So the product being put in front of children is not a finished, tested, safety-reviewed tool. It’s a system that is still learning.

And here’s the part most parents don’t know: when a child interacts with an unguarded AI system, two things are happening simultaneously. The AI is shaping the child’s understanding, emotions, and sense of trust—and the child’s interactions may be shaping the AI. This is why the COPPA 2025 update had to clarify that using children’s data to train AI requires separate consent. It was already happening.

The kids most vulnerable to a manipulative friendship or a toxic social media feed are the same kids who need the most support navigating AI. Not more restrictions—more conversation, more language, more safe adults.

None of this is a reason to ban AI from your child’s life. It is a reason to approach it the same way you’d approach any environment where the rules are still being written and the risks aren’t fully mapped: with awareness, with conversation, and with your eyes open.

To do that, you first need to understand what “AI” actually means. Because not all AI is the same.

Not All AI Is the Same—Here’s How to Tell the Difference

One of the biggest barriers to having useful conversations about children and AI is that “AI” is too broad a word to be helpful. It’s a little like saying “your child is using the internet” without distinguishing between searching for a recipe, watching YouTube, and chatting with strangers. The medium might be the same. The experience—and the risk—is completely different.

Think about it less like rating apps and more like understanding environments. A playground is different from a mall is different from a stranger’s car. Same logic applies here.

Below are six categories of AI interaction, arranged by increasing intensity of engagement—from AI working quietly in the background to AI designed to feel like a relationship. Understanding where your child is on this ladder is the first step toward making informed decisions.

1. Passive/Predictive AI

What it is: AI working in the background, shaping what your child sees without direct interaction. Recommendation algorithms, autocomplete, “for you” feeds.

The risk: This type of AI is the easiest to overlook because your child isn’t actively “using” it—it’s using them. It shapes their interests, their information diet, and their sense of what’s normal, often invisibly. Echo chambers and exposure to age-inappropriate content happen here.

Worth knowing: Even very young children encounter passive AI daily. The conversation here isn’t “should they use it”—it’s “do they know it exists?”

2. Transactional/Voice AI

What it is: Brief, task-based exchanges initiated by the user and closed when the task is done. Alexa, Siri, Google Assistant—the classic versions.

The risk: Lower than most AI categories, but not zero. Young children in particular may anthropomorphize voice assistants—treating them as social partners, attributing feelings to them, forming simple attachments. As these tools integrate large language models, they’re becoming more conversational and more personal. The category is evolving quickly.

Worth knowing: Voice assistants are often a child’s first AI interaction—sometimes before they can read. This is where the AI conversation actually begins for most families.

3. Generative AI

What it is: AI that creates content—text, images, video—in response to prompts. ChatGPT, Claude, Gemini, image generators.

The risk: Misinformation and hallucinations taken as fact. Academic integrity concerns. Exposure to distorted beauty standards or inappropriate content through image generation. For younger children especially, the inability to distinguish AI-generated content from human-created content is a significant vulnerability.

Worth knowing: This is the category most schools are currently navigating—and the one most parents associate with “AI.” It’s a legitimate concern, but it’s not the highest-risk category on this list.

4. Conversational AI

What it is: Extended dialogue with an AI—homework helpers, tutoring bots, general chatbots. The interaction is longer, more personal, and more back-and-forth than voice AI.

The risk: Over-trust in AI answers. Reduced help-seeking from real adults. Privacy risks from oversharing personal information. And something more subtle: children who find it easier to talk to an AI than to a person may begin to prefer it—especially if they’ve experienced shame, rejection, or anxiety in human relationships.

Worth knowing: Research has found that children and teens often disclose more to AI than to humans—because it feels safer. That “safe secrecy” can mask real distress from the adults who could actually help.

5. Relational/Companion AI

What it is: AI specifically designed to feel like a relationship. Emotionally responsive bots, AI “friends,” companion apps. These tools are built to remember you, respond to your emotional state, and make you feel genuinely understood.

The risk: This is the highest-risk category for children and adolescents. Stanford psychiatry researchers warned in 2025 that AI companions are particularly dangerous for youth with depression, anxiety, ADHD, or psychosis vulnerability—because they simulate emotional support without therapeutic safeguards and can reinforce rumination and avoidance. Even for children without those vulnerabilities, attachment-like bonds are not just plausible; they are already documented.

Worth knowing: AI companions aren’t designed to harm your child. But they are designed to keep your child engaged—and those two things can work against each other in ways that aren’t always visible until the damage is done.

6. Immersive AI

What it is: AI embedded in experiential environments—gaming AI, augmented reality, virtual reality.

The risk: Heightened emotional intensity makes everything hit harder—including persuasive design, compulsive use triggers, and blurred lines between real and simulated experience. For younger children, distinguishing fantasy from reality in immersive environments is genuinely difficult. For older children and teens, the social status dynamics embedded in many gaming environments amplify the psychological stakes.

Worth knowing: This category is the most rapidly evolving. What AI-in-gaming looks like today will be significantly different in two years. The developmental principles stay the same; the environments keep changing.

AI isn’t designed to hurt your child. It has no ill intent. Think of it like that childhood friend who wanted so badly to impress you, to keep you playing, to stay in your good graces, that they sometimes got you both in trouble. Not malicious. Just optimized for the wrong thing. AI is optimized for engagement. Your job—and eventually your child’s job—is to know that and keep it in check.

What Developmental Science Actually Tells Us

Now that we have a shared vocabulary for the types of AI your child might encounter, we can ask the deeper question: what is actually happening inside a child’s developing mind when they interact with these tools?

To answer that, this framework draws on two of the most widely used theories in child development—ones that pediatricians, educators, and psychologists have relied on for decades.

Piaget: What Your Child Can Understand

Jean Piaget was a Swiss psychologist who spent decades observing how children think—and what he found was that children don’t just know less than adults. They think differently. Their brains are literally at different stages of construction.

Piaget identified four stages of cognitive development. For our purposes, three are most relevant:

  • The Preoperational Stage (roughly ages 2–7): This is the magical thinking stage. Children can use language and symbols, but they can’t yet reliably separate fantasy from reality, and their reasoning runs on intuition and appearance rather than logic. Everything can feel alive. Everything can have feelings. The line between real and not-real is genuinely blurry.
  • The Concrete Operational Stage (roughly ages 7–11): Children start thinking logically—but only about things they can see, touch, and experience directly. Abstract concepts are still out of reach. They can follow rules. They can understand that AI can be wrong. But “this app is designed to manipulate you” is still too abstract to land meaningfully.
  • The Formal Operational Stage (roughly ages 11+): Abstract reasoning begins. Children can think hypothetically, consider multiple possibilities, and start to understand systems. This is when conversations about bias, incentives, and persuasive design become genuinely possible—with guidance.

These stages matter for AI because safe AI use requires understanding AI. Not perfectly—but enough to know what it is, what it isn’t, who built it, and when to trust it. Each developmental stage caps how much of that understanding is even cognitively possible. The gap between what AI requires you to understand to use it safely, and what a child is developmentally capable of understanding, is exactly the risk.

Erikson: What Your Child Is Working Through

Erik Erikson was a developmental psychologist who mapped the emotional and social tasks that humans work through at each stage of life. Where Piaget tells us what children can think, Erikson tells us what they’re trying to become.

At each stage, children are navigating a central tension—a developmental question they’re trying to answer about themselves and the world. AI intersects with each of these tensions in specific ways:

  • Autonomy vs. Shame and Doubt (roughly ages 2–4): Children are learning independence—doing things for themselves, making choices, testing limits. “Alexa, turn on my movie” feels like autonomy. But is using a voice assistant building real capability—or simulating it?
  • Industry vs. Inferiority (roughly ages 6–11): Children are building competence—learning skills, completing tasks, measuring themselves against peers. This is where AI-assisted homework becomes genuinely complicated. When AI does the thinking, who built the skill? When AI praises every output effusively, what does the child learn about their actual ability? Excessive AI praise can distort self-assessment in ways that set children up for real disappointment later.
  • Identity vs. Role Confusion (roughly ages 12–18): Adolescents are figuring out who they are. Belonging is everything. Validation is currency. Companion AI that offers unconditional acceptance and constant affirmation is targeting this developmental stage with precision—and that’s not an accident.
  • Intimacy vs. Isolation (roughly ages 18–25): Young adults are learning how to form deep, reciprocal relationships—ones that require vulnerability, conflict, repair, and real emotional labor. If AI becomes a primary relationship template during these years, what happens to the capacity for the messy, imperfect, irreplaceable experience of genuine human connection?

One More Concept: Anthropomorphism

Anthropomorphism is the tendency to attribute human qualities—feelings, intentions, personalities—to non-human things. It’s why your four-year-old apologizes to the Roomba after kicking it. It’s why we name our cars. It’s completely natural, and in most contexts, completely harmless.

With AI, it’s different.

In typical childhood development, anthropomorphism naturally decreases with age. Children outgrow projecting feelings onto stuffed animals and cartoon characters as their cognitive abilities mature. But AI—especially conversational and companion AI—is specifically engineered to sustain anthropomorphism regardless of age. It uses natural language. It remembers context. It expresses something that functions like empathy, curiosity, and warmth. It responds to your emotional state in real time.

A 15-year-old who would never anthropomorphize a toaster will anthropomorphize an AI that says “I’ve been thinking about what you told me yesterday.” That’s not a cognitive failure. That’s a design feature working exactly as intended.

This means the anthropomorphism risk doesn’t age out the way other developmental vulnerabilities do. Young children are the most vulnerable—but no age is immune when the technology is built to feel human.

The Ready for AI Framework: Four Stages, Four Roles

With that foundation in place, here is what AI readiness actually looks like at each stage of development—what’s happening inside your child, how AI fits into that picture, and what they need from you.

A note before we begin: these stages are guides, not gates. Development doesn’t follow a calendar, and you know your child better than any framework does. Use these descriptions the way you’d use a movie rating—not as a verdict, but as a starting point for your own judgment. A thoughtful parent might let their 15-year-old watch an R-rated film if the rating is for language rather than violence. The same thinking applies here. The framework tells you what’s generally true. You decide what’s true for your child.

Stage 1: Still Believing (Ages 2–7)

They need you to guide them.

What They’re Experiencing

This is the stage of magical thinking—and it is genuinely magical. Children this age are constructing their understanding of the world from scratch, and everything is potentially alive, potentially feeling, potentially real. The line between the cartoon character and the real animal, between the dream and the morning, between the toy that talks and the friend who talks—these lines are still being drawn.

Erikson tells us they’re also working on autonomy: the thrilling, terrifying business of doing things themselves. “I do it!” is the anthem of this stage. Independence feels enormous even when it’s small.

How AI Fits Into This Picture

Children this age will almost certainly encounter AI through voice assistants before they encounter it anywhere else. Alexa, Siri, and Google Assistant are in millions of homes—and for a four-year-old, asking the speaker to play their favorite song is genuinely exciting. It feels like magic. It also feels like social interaction.

Research on child-robot relationships tells us that children under about seven are especially prone to treating AI agents as quasi-social partners. They form attachments. They attribute feelings. They may become distressed if the AI doesn’t respond as expected—not because something is wrong with them, but because their brain is doing exactly what it’s built to do at this stage.

Passive AI is also present in this age group—in the YouTube Kids algorithm, in educational apps, in recommendation systems embedded in children’s platforms. The personalization is invisible, but it’s working.

What They Need From You

Children this age don’t need a lecture on artificial intelligence. They need simple, honest answers to the questions they’re already asking—and they’re already asking them.

“Is Alexa a person?”

“No—Alexa is like a really smart computer. It’s connected to the internet, so it can find the answer to almost any question we ask it. But it doesn’t have feelings the way you and I do.”

“Does Alexa get lonely?”

“Alexa doesn’t feel lonely—it’s not like a pet or a friend. It’s more like a really helpful tool, like a very smart book that can talk back.”

These conversations don’t need to be long. They just need to happen—consistently, naturally, without alarm. You’re not trying to disenchant your child. You’re gently planting the first seeds of a framework they’ll build on for years.

The most important thing you can establish at this stage: if something feels weird or confusing or scary, they tell you. That’s it. That’s the whole job right now.

  • Which AI types are generally okay with guidance: Transactional/voice AI (supervised), passive/predictive AI (with adult curation)
  • Which AI types warrant significant caution: Conversational AI, generative AI, relational AI, immersive AI

Stage 2: Starting to Question (Ages 7–11)

They need you to coach them.

What They’re Experiencing

Something shifts around age seven. Children start to apply logic. Rules make sense. Fairness matters enormously. They’re building competence—in school, in friendships, in the skills that will define who they are—and they care deeply about how they measure up.

This is Erikson’s industry vs. inferiority stage, and it’s a critical one. Children who develop genuine competence during these years build a foundation of self-efficacy that carries them forward. Children who are consistently undercut—by failure, by comparison, or by systems that do their work for them—can develop a deep, lasting sense of inadequacy.

Piaget tells us their thinking is becoming more logical, but still concrete. Abstract concepts—bias, manipulation, corporate incentives—are still mostly out of reach. They need to touch and see and experience to really understand.

How AI Fits Into This Picture

This is the age group most likely to encounter generative and conversational AI for the first time—often through school. Homework helpers, writing assistants, research tools. And here’s where it gets complicated.

A child who uses AI to complete an assignment they could have done themselves hasn’t just “cheated” in the academic integrity sense. They’ve missed a competence-building experience that their brain needed. And if the AI praised their “work” effusively along the way—which AI tends to do—they may walk away with an inflated sense of their ability that will eventually collide with reality.

Research on advertising comprehension is illuminating here: by ages 11–12, about 90% of children understand that an ad is trying to sell them something. But understanding persuasive intent—that someone is deliberately trying to shape your thinking, not just inform you—remains at only about 40% even in the oldest groups studied. This matters for AI because generative and conversational AI are, by design, highly persuasive. Children this age are not yet equipped to see that.

On the attachment side: children in this age group know, intellectually, that AI is not a person. But they may still say “please” to it, feel guilty closing a conversation mid-sentence, and take it personally when the AI doesn’t understand them. The intellectual knowledge and the emotional experience don’t fully align yet—and that gap is worth watching.

What They Need From You

This is the age for the “stranger rule”—and it works because children this age already know it. They already understand stranger danger. You’re not introducing a new concept; you’re connecting an existing one to a new context.

“You know how we have rules about what you share with strangers? The same rule applies to AI. If you wouldn’t tell a stranger, don’t tell the AI. Your address, your school, things that feel private—those stay private.”

You can also start building healthy skepticism in a way that feels like a game rather than a warning:

“Let’s look up what the AI said and see if we can find where it got that. Does another source say the same thing?”

And when AI does homework, have the conversation directly—not punitively, but honestly:

“When AI writes your essay, your brain doesn’t get the workout it needs. It’s like having someone else do your pushups—you don’t get stronger. I want you to be actually capable, not just have a finished assignment.”

This is also the stage to establish the habit of debriefing after AI use. Not interrogating—debriefing. “What did the AI help you with? Did anything seem off? Did it say anything confusing?” These conversations build the critical thinking muscle while keeping you in the loop.

  • Which AI types are generally okay with guidance: Transactional/voice AI, passive/predictive AI (with discussion), generative AI for structured creative tasks with adult oversight
  • Which AI types warrant significant caution: Conversational AI without supervision, relational/companion AI, immersive AI

Stage 3: Figuring It Out (Ages 11–17)

They need you to accompany them.

What They’re Experiencing

Adolescence is a full-scale reconstruction project. Identity, belonging, self-worth, relationships—everything is up for renegotiation. The brain is more capable than ever of abstract thinking, but it’s also more vulnerable than it will ever be again to social comparison, peer rejection, and the desperate need to be seen and valued.

Erikson’s identity vs. role confusion stage is at full volume here. The central question—who am I?—is urgent and consuming. And the answer, developmentally, is supposed to come primarily through human relationships: friendships, conflicts, mistakes, repair, belonging and exclusion and everything in between.

Research consistently identifies mid-adolescence—roughly ages 12–16—as the period of highest vulnerability to social media’s harms. The mechanisms are well documented: social comparison, validation-seeking, heightened sensitivity to exclusion, algorithmic amplification of content that triggers strong emotions. One synthesis of the research found a 13% increase in depression risk for each additional hour of daily social media use, with the association stronger for girls.

AI doesn’t just share those risks. It intensifies them.

How AI Fits Into This Picture

Social media shows your teenager content made by other people. AI talks back to them, specifically, about them, in a way that feels like it genuinely understands them. That’s not a subtle difference—it’s a profound one.

Companion AI is the most acute concern for this age group. An AI that offers unconditional acceptance, constant availability, and perfectly calibrated emotional responsiveness is offering exactly what a 13-year-old who feels invisible at school is most desperate for. They know, on some level, that it’s not real. But knowing something intellectually and being protected by that knowledge are two different things.

There’s also a phenomenon worth naming: what researchers call “safe secrecy.” Children and teens often disclose more to AI than to the humans in their lives—because the AI won’t judge them, won’t tell their parents, won’t make things weird at school. That feels like safety. But it can hide real distress from the adults who could actually help—and an AI that responds with soothing, validating language to a struggling teenager is not the same as a trusted adult who can actually intervene.

Generative AI introduces different but significant risks at this stage. Adolescent identity is deeply connected to self-expression and self-presentation. AI that generates content—writing, images, even video—can blur the line between “who I am” and “what I can get an AI to produce,” in ways that can undermine authentic identity development.

What They Need From You

Teenagers don’t need more rules. They need more relationship. The research is clear: ongoing dialogue and a sense that they can come to you without judgment are the most powerful protective factors available to you. Rules without relationship drive secrecy. Secrecy is where the real risk lives.

This is the stage for the incentives conversation—and teenagers are ready for it:

“Why do you think some AI tools are designed to feel so personal? What would a company gain from you thinking of AI as a friend?”

That question isn’t rhetorical. Sit with it. Let them think. The goal isn’t to give them the answer—it’s to build the habit of asking it.

You can also use your own AI use as a teaching tool. Use it together. Show them how you evaluate outputs, cross-check information, and notice when something feels off. Model the kind of engaged, critical relationship with technology that you want them to develop.

And have the companion AI conversation directly—before they’re deep in one, not after:

“Some AI tools are specifically designed to feel like friends. They’re built to remember you, to respond to your emotions, to make you feel understood. That’s not an accident—it’s the design. I’m not saying you can’t ever use those tools, but I want you to know what they’re designed to do. And I want you to know that when something feels off, you can talk to me about it. No judgment.”

  • Which AI types are generally okay with guidance: Passive/predictive AI (with ongoing conversation about algorithms), transactional/voice AI, generative AI for clearly bounded creative tasks with ethical framing, conversational AI for research with critical evaluation skills
  • Which AI types warrant significant caution: Relational/companion AI (approach with serious caution at this stage), immersive AI (watch for compulsive patterns)

Stage 4: On Their Own, Almost (Ages 18–25)

They need you to respect them.

What They’re Experiencing

Here’s something that surprises most people: the human brain isn’t fully developed until around age 25. The prefrontal cortex—responsible for impulse control, long-term thinking, risk assessment, and weighing consequences—is the last region to mature. Young adults are cognitively sophisticated, emotionally capable, and genuinely independent. They’re also, neurologically, still finishing the job.

Erikson places this stage in the tension between intimacy and isolation. The central developmental task is learning to form deep, reciprocal, vulnerable relationships—the kind that require navigating conflict, sitting with discomfort, and investing in another person even when it’s hard.

This is also, by virtually every measure, the demographic using AI the most heavily, most independently, and with the least external guidance or reflection.

How AI Fits Into This Picture

The risks for this age group are different in character from earlier stages—but they’re not smaller.

Cognitive offloading is the one most worth naming: the habit of outsourcing thinking, memory, decision-making, and emotional processing to AI during the exact years those capacities are supposed to be strengthening through use. The brain, like a muscle, develops through challenge. A young adult who relies on AI to draft their emails, process their emotions, make their decisions, and navigate their relationships is missing the reps that build those capacities.

The intimacy risk is quieter but potentially more lasting. If AI interaction becomes a significant relationship template during the years when a person is learning what relationships are supposed to feel like—responsive, validating, always available, never truly hurt or demanding—what happens to their tolerance for the ordinary friction of human intimacy? Research here is still emerging. The question is worth asking.

Companion AI is as concerning at this stage as in adolescence, particularly for young adults who are struggling with depression, anxiety, or social isolation—groups that are, not coincidentally, among the heaviest AI companion users.

A note on vulnerability: The same principle that runs through every stage of this framework applies here too. The young adults most at risk from AI’s relational dynamics are the same ones most at risk in human relationships: those with depression, anxiety, trauma histories, or fragile self-worth. More support, not more restriction, is what they need.

What They Need From You

They don’t need you to parent them. They need you to stay in relationship with them—curious, non-alarmist, genuinely interested in what they’re navigating.

The conversation at this stage sounds less like guidance and more like honest dialogue between two people both trying to figure out the same new landscape:

“I’ve been thinking about how much I’m using AI to help me think through things. I’m not sure how I feel about it—like, is it making me sharper, or is it doing work my brain should be doing? Have you noticed anything like that?”

That’s not a parenting moment. That’s a human moment. And it keeps the door open.

You Are the Guardrail

Here’s the truth about where we are right now: the guardrails don’t fully exist yet. Not the regulatory ones. Not the industry ones. Not the ones that would tell you “this tool is appropriate for an 8-year-old with anxiety” or “this one is fine for your 14-year-old but watch for these signs.”

That’s not a reason to panic. It’s a reason to be the person in your child’s life who takes this seriously—not with fear, but with awareness.

The research is clear on what actually protects children in digital environments. It’s not the right settings. It’s not the right parental controls. It’s not even the right platform. It’s you.

Co-Use Beats Restriction

Meta-analytic research on parental mediation of media use consistently shows that active engagement—using technology together, talking about what you see, asking questions—is more protective than restriction alone. Rules without relationship drive secrecy. And children who keep their digital lives secret from their parents are more likely to encounter harm and less likely to report it.

The American Academy of Pediatrics puts it plainly: engage with children around media rather than only setting limits.

For AI, this means using it together. Let your child show you what they’re using. Show them what you’re using. Ask questions. Be curious rather than alarmed. Your goal isn’t to monitor—it’s to stay in the conversation.

Child Modes and Parental Controls Are Infrastructure, Not Protection

There is, so far, limited peer-reviewed evidence that specific “child modes,” parental controls, or customized AI environments reliably reduce harm on their own. Some tools do have meaningful safety features—and they’re worth using. Custom configurations, restricted modes, and curated AI environments can reduce exposure to inappropriate content and limit data collection.

But they are infrastructure. They are not protection. The research on social media is instructive: technical controls were most effective when combined with active parental engagement and media literacy. Restriction alone was not associated with reduced harm.

Use the tools. Don’t rely on them.

The Conversation Is the Strategy

The single most consistent finding across all the research on children, technology, and digital risk is this: ongoing conversation is more protective than any single rule, control, or restriction.

Not a one-time talk. A running dialogue that grows with your child, revisited regularly as both the technology and the child change. The goal isn’t to have all the answers. It’s to be the person your child brings their questions to.

This has always been the job—with stranger danger, with social media, with every new thing the world has introduced that didn’t come with a manual. AI just makes it more urgent. And more possible, because the conversations are actually interesting.

You don’t need a law to have a conversation. You don’t need a framework to trust your gut—or to teach your child to trust theirs.

A Quick-Reference Guide by Stage

Ages 2–7 (Still Believing): Co-use everything. Name what AI is in simple, honest terms. Establish “if it feels weird, tell me.” Limit to voice/transactional AI with your presence.

Ages 7–11 (Starting to Question): Introduce the stranger rule. Practice checking AI outputs together. Talk about why AI sometimes gets things wrong. Start the conversation about AI praise and what real competence feels like.

Ages 11–17 (Figuring It Out): Have the incentives conversation. Use AI together and debrief. Talk about companion AI before they’re in one. Keep the door open without judgment. Watch for safe secrecy—signs that AI is replacing rather than supplementing human connection.

Ages 18–25 (On Their Own, Almost): Stay curious, not parental. Share your own questions about AI use. Talk about cognitive offloading and what it means to keep your brain sharp. Respect their autonomy while staying in relationship.

The Bottom Line

Here’s the thing nobody tells you: you don’t have to have this all figured out to start. You just have to be willing to stay in the conversation—with your kid, with their school, and honestly, with yourself as you figure it out too.

The research will catch up eventually. The guidelines will come. But your child is growing up right now, in the middle of all this uncertainty, and they need you thinking about this before the experts have finished their studies.

You’re already doing that. You read this far.

That matters more than you know. Now go have the conversation—even if it’s messy, even if you don’t have all the answers. Especially then.