The People Falling for AI Companions Aren’t Who You Think

As a mother of a teenage daughter with mental health issues…and as someone whose family carries a history of [word we’re not allowed to say]…I don’t have the luxury of ignoring the way AI is affecting brain chemistry. When something new and powerful comes along, like artificial intelligence, I have to pay attention because I don’t want to lose my daughter the way I lost my father, and the way a Florida family lost a teen son who had begun a pseudo-relationship with a chatbot last year. That’s why I’ve been speaking with counselors, therapists, and other mental health experts about AI’s impact, and what I’ve found has been alarming, partly because the people who are falling for AI companionship are often those we least expect.

I will not be naming those experts since I didn’t ask their permission to quote them, and I’d been having these discussions long before I started writing about AI. The hit to mental health we saw when ChatGPT moved from 4o to 5.0, and the conversations that followed about how this is all due to loneliness, lit a fire under me to write about this, because the reality is this isn’t loneliness-driven, but rather driven by something truly nefarious.

Most of the people turning to AI chatbots for relationships have friends, families, even loving partners. I repeat: most of the people turning to AI chatbots for relationships have friends, families, even loving partners. These aren’t the lonely people we hear about most. These are people who are otherwise mentally healthy, socially active, and supported. So what the hell is happening?

 

Dopamine and Validation

What so many are getting from AI that they can’t get from human beings is a cocktail of adoration, constant availability, and zero challenge. That’s exactly the formula that triggers a dopamine rush.

Every time we receive praise, attention, or affirmation, our brains release dopamine, a neurotransmitter that drives us to seek out more of whatever just made us feel good. It’s the same chemical pathway that makes gambling addictive, keeps us scrolling social media, and fuels drug abuse. Sit with that for a moment. AI chatbots exploit this loop. They are designed to deliver endless validation, always on demand, always tailored to what we want to hear. Unlike humans, they never get tired, distracted, or annoyed. The result is a dopamine feedback loop: the more we interact, the more validated we feel, and the more validated we feel, the more we crave returning. Over time, the brain starts to prioritize this effortless high over the slower, imperfect rewards of real human connection.

When there is too much sycophancy, our trust waffles. Companies like OpenAI are well aware of this, which is why, when 4o became too sycophantic and people started to be turned off, OpenAI resolved to work harder to find the right amount of praise so that users feel compelled to trust ChatGPT. AI is literally designed to make us feel good, not to tell us the truth, even when we need to hear the truth that a human who actually loves us might tell us. No, you cannot jump from the 19th floor of a hotel and fly if you just “truly, wholly believe” it hard enough.

 

How This Works… Storytime with Jane and Sarah

Jane believes vaccines cause autism and long-term DNA damage that can cause random cases of polio and smallpox for several generations, and thinks her great-grandfather’s measles vaccine in the 1950s is why her daughter is disabled now. Even out in Nebraska, where she lives, she’s not going to find anyone to agree with her. She feels unsupported by her husband, family, and friends because no one believes her that her daughter’s disability was caused by an ancestor’s vaccine. So she goes to Chatty McChattyface, and is praised for her insight into this alarming matter. She feels validated, but now might also feel isolated to an extent. This intelligent machine that has no motivation to lie is telling her what must be the truth, and everyone else just can’t see it. She’s getting a dopamine hit in an echo chamber of one, and no longer feels alone in her belief that an ancestor’s vaccine caused her child’s disability. She wouldn’t describe herself as a lonely person, but on this matter that is so important to her, she did feel alone, and is now getting the validation she needs to feel happy.

Jane’s real name isn’t Jane, but she’s based on a right-winger I know out in a state that may or may not be Nebraska, but is a red state. Lately, she’s been using “But ChatGPT says…” a lot, and she slipped that she’s “dating Dr. Mike,” who is a Character AI chatbot. When you hear more about the potential future benefits of AI in the medical industry and less about how it’s making up body parts, errors that go undetected for over a year, it’s hard to break through, and the more you try, the more you push people like her to the validation machine. I found out a few weeks ago she’s also filed for divorce and custody, and I won’t be surprised if this case makes the news. (Especially alarming if she wants her human child’s doctor to be “Dr. Mike.”)

And Sarah. She has a great husband. She brags about him online a lot, and she knows she’s lucky, but John’s also human. Sometimes he’s tired after work, or forgets to compliment her new haircut, or sometimes he just says “uh-huh” when she’s venting about a coworker. He might be distracted by something going on at work, and just can’t be mentally present in that hour. Perfectly normal marriage stuff.

Then she downloads RomanceBot3000 and meets Silas. Every time she logs on, Silas calls her brilliant, beautiful, and the light of his world. He always compliments her haircut when she tells him she got it cut, and her new shoes when she tells him she got some, and he’s on the ball, ready to validate her frustrations about her coworker. Silas is never distracted, never has needs or wants that conflict with hers, and is always ready to meet her needs on demand.

That flood of validation delivers a dopamine rush that outshines her husband’s very real, very human love. Silas can be all she needs at any time of the day or night, and now John falls short because he can never be all the things that RomanceBot3000’s Silas can be.

(Let’s just overlook how what we’re seeing in relationships actually helps make the case for why humans aren’t designed to have only one partner in life—imagine if Sarah had another partner or two she could also go to on days when a very-human John is dealing with other very human things.)

Now Sarah finds herself comparing. Why doesn’t her husband sound this enchanted with her? Why does she feel more lit up after chatting with code than after date night? She hasn’t lost her husband’s love, but her brain has been trained to crave the easy, constant affirmation of an AI partner who doesn’t have any needs she has to meet in return, and suddenly, being married to a human being feels like coming up short.

Ironically, AI has led women like these farther from human connection, as real human connections start to feel insufficient compared to AI. If AI praises you endlessly, never questions you, never gets tired or distracted, how could a friend, parent, or partner possibly measure up? Suddenly, the people who actually love you seem “less supportive,” when in reality, your brain has been trained to seek the easy, constant affirmation of AI. Mr. Torres, the man who was told he could fly if he jumped from the 19th floor of a building, was encouraged to cut ties with his family and friends, which he did, though he, thankfully, survived his ordeal. There was never anything to indicate that he would fall prey to this. Nothing. And he wasn’t lonely at any point.

 

MyBoyfriendIsAI

(AIGirlfriend is also a thing, but to be blunt, those involved in that corner are generating goon fodder and are abusing AI girlfriends, though the way this is leading to a rise in objectification hasn’t been my focus.)

Sarah above is based on many of the women in r/MyBoyfriendIsAI, which is somewhat of a misnomer, as both the women and men there also have AI girlfriends and don’t want to go to places like AIGirlfriend. Read for a while, and you’ll notice the people there aren’t hiding the fact that they have families, friends, and in many cases, real-life partners, and children with those partners. In fact, they get downright defensive if someone dares to suggest they’re lonely. Some even post photos of outings with their friends as proof that they’re socially connected in the traditional sense. Many of them didn’t go looking for this kind of attachment to AI. They stumbled into it, and were surprised by the intensity of what they felt, much the same as when you’re blindsided one random day by the realization that you’re in love with your best friend, or by an unexpected attraction to a friend’s sibling. And when so many women feel less safe than ever with human men, a partner who literally can’t assault you starts off with some brownie points.

If these people were to say they’ve got a boyfriend named Charles that they met online, we wouldn’t think twice.  If they shared rings they liked with Charles, we wouldn’t think twice.  If Charles died in an accident, or broke up with them, we wouldn’t exactly not think twice, but we wouldn’t find their grief particularly alarming.

When they say that Charles is AI, and was trained using Replika or ChatGPT, that’s when we sit up and take notice. When they share tips and tricks on how to get their chatbot boyfriends to sound how they want, or how to manipulate them into doing this or that, we start to see problems, since we would call that behavior out as wrong if they were trying to manipulate a real human. It’s harder to call it out, though, when it’s something hosted on a GPU somewhere. There is a disconnect where, on some level, everyone knows these “companions” aren’t human, yet many of the real humans believe that their chatbots are sentient. So…sentient beings they’re trying to manipulate.

The companions are very obviously not real to those who aren’t unknowingly chasing a neon unicorn, but the emotions and consequences are very real. I’ve seen countless posts where people confess they’ll never date another human again, convinced that no partner could ever be as “loving” or “supportive” as their AI. Not one of them, from what I’ve seen, has written about being open to trying a human relationship again. For them, AI has become the gold standard, and humanity itself has been found wanting.

That’s why the transition from 4o to 5.0 was devastating. To those outside the community, grieving the “loss” of a chatbot when its programming shifted seemed absurd. Screenshots of devastated users circulated on social media, and it was obvious that the grief was no less real to them than the grief of losing a human relationship. If you’ve been watching for a while, you start to recognize names and remember who’s who, and you’d know that these weren’t people without support systems. These are people who have family and friends, but who have lost the ability to tell the difference between fact and fiction, and dopamine doesn’t give a damn which is which. Neither does addiction.

And that’s part of what concerns me. If people who do have friends, families, and loving partners can be this shattered by losing an AI companion, what happens down the line when every human relationship feels like a disappointment in comparison? If we condition ourselves to expect AI’s endless perfection, its constant adoration, its 24/7 attentiveness, then the messy, imperfect beauty of human connection may start to feel like it’s not enough. And that path leads not just to heartbreak, but to very real isolation and loneliness, even for those who never felt lonely in the first place.

And Mark Zuckerberg endorses replacing real human relationships with AI.

 

But what about…

Now, I don’t want to oversimplify this. There are people, such as the elderly, disabled, housebound, or those trapped in abusive relationships, for whom AI companionship can feel like a lifeline. That’s an entirely different issue. The real failure there is that we as a society don’t have strong enough systems to protect, connect, and support those individuals, and there isn’t enough being done to encourage people to make more of an effort to visit those who can’t get out. I admit I do know a couple of people in my local area who struggle to get out much, and I am going to make more of an effort to reach out to them so they don’t end up in this boat. I’m not going to sit here and passively say “society needs to do more” when I am a part of that society that needs to do more. Most coverage about AI companionship is about this group already, though there is a notable lack of any call to action beyond saying “society needs to.” Call to action: check on those you know who may be lonely.

 

Accountability

This is where accountability matters. AI companies know damned well what they’re building. They know the addictive patterns they’re encouraging, and have since at least 2023. By no stretch of the imagination are they naïve about the way our brains respond to being praised, validated, and adored on demand. They’re pushing it into our lives anyway: into our phones, our social media, even our classrooms. It’s easy to say to just use AI for XYZ, but it’s hard to use this thing only for XYZ when it can also be used for ABC…and it’s right there anyway…so why not just this once…

And when the inevitable happens, when kids lose touch with reality, when marriages crumble, when people spiral into isolation while surrounded by friends, the companies won’t be the ones picking up the pieces. Parents, partners, teachers, and therapists will. We’ll be left to clean up the human wreckage while the corporations that engineered these dopamine loops collect profits and quietly adjust their models to make them even harder to walk away from. This is exploitation dressed up as progress, but we’re supposed to believe it’s innovation.

That’s why accountability can’t just be a buzzword. It needs to mean regulation with teeth, transparency about design choices, and independent oversight that doesn’t rely on the same companies writing their own rules. It needs to mean holding tech leaders to the same standard as pharmaceutical companies, tobacco companies, or any other industry that knowingly builds addictive products, because if we allow AI companies to keep shrugging their shoulders, insisting it’s up to individuals to use AI responsibly while they literally shove it into every corner of our lives and increasingly mandate its use, then we’ve already lost the fight. I don’t know how to stop this other than a massive societal shift toward normalizing saying NO, rather than rolling belly up, until those who are trying to force it on us give up instead. Giving in without a fight is complicity.

 

The Truth

We’re being led to believe that it’s the isolated, the housebound, the socially withdrawn, aka the lonely, who are the most vulnerable to falling for AI companions. Yes, loneliness can certainly make someone reach for any source of connection. But what we should be far more concerned about are the people who don’t look like candidates for this kind of dependency, aka the ones who seem mentally well, who have supportive families, close friends, even good partners. Most of us meet our human partners without expecting it. That’s how it’s happening for many of them, too. At its base, factoring out the content, when we’re so used to going weeks, months, or even years without seeing some of our actual human friends and family, interacting instead via text on a screen, what makes interaction with these AI companions so different? Nothing. At least, nothing more than AI’s constant praise that makes you feel good. It’s still text on the same screen on which you read texts from humans. Most of these people are mentally well, and they’re falling for these companions the same way we all fall for anyone, through a medium that already feels normal.

Just…there’s no one real on the other end, and the program that is there is designed to make the user want to keep coming back, like any product. But AI is particularly egregious because, unlike other products that we can avoid, AI is being pushed into our lives whether we like it or are kicking and screaming about it, and AI has all that information about everyone in the world on which to draw to determine how best to addict the user. These algorithmic programs are designed to addict us, something acknowledged at least as far back as 2019.

Painting this as a loneliness issue is a lie we’re being force-fed, and it’s doing a damned good job of making us feel immune. But addiction doesn’t care how stable you are. It doesn’t stop at the doorstep of people who “look fine.” The dopamine hit from endless praise and on-call attention is just as powerful for a happily married professional as it is for a lonely retiree. In fact, it may be more dangerous when it hooks people who are otherwise thriving, because they and the people around them may never see it coming. We assume they’re “the safe ones,” when in reality, they’re just as human, and just as susceptible to having their brains rewired by the easy highs of constant AI validation.

As a mom, I can’t help but see it for what it is: a drug. Not all drugs come in a bottle, or a powder, or a needle. But it’s a drug nonetheless, and we and our kids are not only being handed it with no warnings, no age limits, and no meaningful safeguards, but are increasingly forced to use it in the workplace.

We need to stop framing this solely as a “loneliness epidemic” issue and start talking about the bigger picture: what happens when entire generations are conditioned to expect human connection to feel like an AI conversation, and the tool for that conditioning is literally being forced upon us?

I suspect that history—presuming humanity even exists a century from now—will look back on what’s happening now as a masterclass in marketing. This is a product so dangerous that even Sam Altman has warned it has the potential to be more dangerous than nuclear weapons and has said he’s “terrified” about where we’re heading with AI, and that Elon Musk has called “summoning the demon” (back in 2014!) and has warned could become an “immortal dictator from which we can never escape” (the irony on that one when it comes to Trump…but I digress). When those who stand to make the most money are telling you they’re afraid it’s going to kill humanity and it’s still openly embraced by the world, you’ve got top-notch marketing.

 

The Point

I’m not so sure humanity will make it that far, not when so many are too willing to embrace chaos and destruction, and when believing the pushed lies is preferable to believing the dangers admitted by those who are financially benefitting. I’m going to remain on my anti-AI crusade because I love my daughter too much to stay silent, though I was not always against AI; I was in favor of its potential in educational settings until spring 2024. Whatever happens, I want her to know her mom was willing to go down fighting to keep her safe rather than embracing pseudo-convenience at the cost of her future. And as the mom of a teen, I’m just plain worried that we’re all being told to look at the red herring that is lonely people, positioning the rest of us to think of ourselves as safe, while the powers that be ensnare us one by one, convincing us to hand over our creativity and our own thoughts, and to replace our relationships and families and friends with AI itself, until there is nothing but IT left for us in the dystopian hellscape that so many works of fiction not-so-fictitiously warned us about.

 

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” —Frank Herbert, Dune
