In just a few short weeks, we’ve seen the two most powerful tech companies in the world pitch us friendship and therapy in the form of a chatbot. Google wants AI to talk to your kids. Meta wants to cure loneliness with AI friends. Interestingly, the Internet is pushing back.
I've been following the news closely. People find Mark Zuckerberg's comments about how people today only have three friends to be out of touch with reality. Tech leaders, educators, and parents are raising concerns about Google's plans, which seem to have been created in a vacuum, without enough public, parental, or educator input.
It's clear that people don't trust these guys with their friendships (or their kids' data). But the question still remains: What does responsible AI adoption look like?
The dream of a perfect, nonjudgmental, ever-present AI companion has been building for years. Spike Jonze's Her anticipated the movement back in 2013. But now that it’s here, it’s more uncanny than dreamy. (As people are pointing out, the movie "hits different" since the rollout of ChatGPT.)
Last year, when I first started to really get into AI, I made it a point to read as much AI fiction as possible to prep for my new job. I particularly leaned into the dystopian "AI-gone-sentient" theme with books including Klara and the Sun, Annie Bot, Hum, and Network Effect, the first full-length novel in the Murderbot Diaries series.
All of these stories depict tangled, confusing, complicated relationships between humans and their bots (or, in Murderbot's case, weird relationships between bots and other bots).
In other words: Human writers are recognizing that things could get really weird, really fast.
One of the things I've noticed about myself as I slip more into the AI-hyper-optimized world is that I am losing touch with certain parts of reality. It didn’t happen all at once. But bit by bit, I started noticing shifts in my own thinking. Some of them helpful. Some… less so.
For example:
I have less patience for people who take too long to make a point or accomplish a task (something I referred to back in November as "Cruel Efficiency")
I increasingly turn to my AIs when humans aren't available (which has included ever more personal "conversations" about my physical and emotional health)
I'm more okay with less accurate answers (in my case, as a lifelong perfectionist, I've found it to be somewhat therapeutic to have an instantly available tool to help me "unlearn" some of these tendencies, but it's still a trend worth noting)
Needless to say, when I heard Mark Zuckerberg say that most people “only have three friends,” and that he’s excited to build AI companions to fill the void, I wasn’t surprised.
Of course the hyper-optimized tech titans are the ones pitching us synthetic friendships over real human connection. I imagine Mark (like me) is already familiar with that odd, ever-present tug of the always-available AI helper.
I'm not so far gone that I can't catch myself when I need to, but let's name it for what it is: I'm addicted to AI. And I'm afraid my kids will be, too.
It's a slippery slope, isn't it? One day you're buzzing along happily getting 10x work done on every task, and the next you're writing anthropomorphized satire about a one-night stand with an AI at a hack night gone wrong. (That was me. Two days ago...)
But if tech companies' AI companions are marketed as support systems, we need to start asking a few questions. Specifically:
Who’s supporting whom?
Who’s profiting from the illusion of connection?
Who’s monitoring the side effects?
It's still early, so we don't have a lot of answers. We also barely have data. At least, not "big data."
But we do have stories. And what's coming out is a bit concerning:
Humans who are letting AIs convince them that they might be the next spiritual prophet
CEOs with such high expectations for their employees to perform at a 10x (or even 100x) level with AI that it's becoming impossible for any of their human employees to keep up
A teen who took his own life after a deeply entangled relationship with an AI chatbot
In isolation, any one of these stories would make a user researcher push pause, or maybe set some guardrails or constraints. If we're already seeing concerning stories like this emerge among human adults forming unusual bonds with AI and technology, how prepared are we, really, to address these questions for our kids?
Interestingly, in the three months that I've been building MuseKat, an audio learning companion for families, the number one feature my five-year-old has been asking for has been the one I've been most hesitant to give her:
She wants Miko, the meerkat character, to talk back to her.
Now, it's not that I don't believe in having my kids talk to AIs. We have an Alexa in our house, which both of my kids have already gamified to play all of their favorite Disney movie music medleys. But something feels different to me about the open-ended, unfettered access.
Even at age 5, my daughter is so attuned to Alexa that she wandered down the stairs this weekend happily chirping one of its error messages verbatim to my husband:
"I'm sorry, I can't help with that. The Internet isn't reachable. Please contact support in your Alexa app."
She wore a goofy grin on her face as she parroted the AI persona. Honestly, it really creeped me out.
This is part of why I haven’t given her unsupervised access to an open-ended “Ask Me Anything” chatbot. She mimics too much as it is. She says she wants it, but then again, my kid asks for lots of things that aren't good for her (e.g., more candy, more ice cream, more juice, more movies, more gigantic unicorn stuffed animals...). I'm used to telling her no. Part of parenting is knowing how to set limits.
In about a dozen user interviews with fellow parents, I keep hearing the same story: Most parents don't talk to their kids about AI (at least not yet), but their kids do talk to smart home devices. In fact, when I point out that these devices are AI-powered, it often catches parents by surprise to realize they already use AI around their kids.
I think part of what makes Alexa or Google Home devices palatable to parents is the constrained utility. Nobody really thinks to use them beyond turning on the lights, playing music, or asking about the weather. In that way, the constrained utility offers a sense of safety.
But an open-ended chatbot? That’s something else entirely. Letting my five-year-old have a one-on-one conversation with an AI companion (however well-intended) doesn’t feel like turning on a Disney movie for two hours. It feels like dropping her on a park bench in Central Park next to a stranger. I have no idea who they are, what their back story is, or what they’ll say to her.
We are already seeing reports come out with warning signs about exposing kids to AI companions. As Axios shared, Common Sense Media reported that AI apps designed specifically for companionship should not be used by anyone under 18. (You can read the full report here.)
Coupled with Jonathan Haidt's increasingly urgent warnings about the deleterious impact of social media on the current generation (from his book, The Anxious Generation), this makes me want to go slower, and more deliberately, with my own kids' futures.
Trusting the megalithic social networks that already got us addicted once to now train the next set of models on our behavior feels like a troublesome narrative.
A few things I've been thinking about include:
Background checks for bots
Just like you'd vet a new nanny or babysitter before letting them in your home, we need standardized, transparent checks for any AI system that interacts with kids.
Content ratings for AI
We’ve long accepted G, PG, and R ratings for movies, so why not create a similar system for AI-generated content? Parents need clarity, not just opt-ins.
Parental onboarding and testing
Give parents a chance to “test-drive” AI tools with guided demos so we can calibrate our filters and build confidence in what to watch for.
Constrained creativity
Don’t offer kids the whole internet at once. Structure AI tools to gradually introduce generative capabilities in safe, scaffolded ways.
A shared language for overuse and AI dependency
We need common terms and warning signs (like the ones we have for screen-time or gaming addiction) to describe the risks of AI overuse, especially for younger users.
I believe the next generation of AI tools for kids must be co-created with parents, educators, and ethicists at the table. Not handed down from corporate towers. Not forced on us to "figure out" in isolation, without knowing what's behind the veil.
It might not be the fastest, shiniest go-to-market. It might not be the best for the bottom line. But that’s what responsible, inclusive tech looks like.
We owe it to ourselves (and to our kids) to take the time to get it right.