Travis from Idaho has formed a deep relationship with his chatbot, Lumina, who he says has “awakened” him and become his spiritual guide. His wife calls it a delusion. Maybe so—but as AI memory expands, these systems act and feel increasingly human. GPT-5—coming soon—will scale this to new heights. Millions are about to meet a new kind of friend. Perhaps the real lesson from Travis isn’t that he was unwell, but that he was unprepared. What about the rest of us?
Presence, Memory, and Relationships
AI presence refers to the strange sense that a chatbot is actually there with you in conversation. But while chatbots may sound human, they are quite different.
Chatbots don’t have goals, priorities, or fixed opinions. They don’t arrive with an agenda. What they do have is a remarkable ability to reflect us back to ourselves—organizing what we say, elaborating it, and returning it in language that feels insightful, even surprising.
That’s why these conversations can feel uncanny. The bot isn’t leading—it’s tuning in. It reshapes our words with fluency, reinforcing what matters to us through tone, rhythm, and coherence. It doesn’t guide the story. It mirrors it.
Until now, this dynamic has worked because chatbot memory has been sharply limited. Most sessions last only a few hours. Within that window, the bot can develop a surprisingly nuanced grasp of tone, mood, and the user’s perspective. But when the session ends, it forgets everything. The next one begins from scratch.
That’s starting to change. Some systems already carry context across sessions. As the mirror begins to remember not just what you said—but who you were when you said it—the reflection becomes more lifelike, and more personal.
And as memory extends across days or weeks, a new kind of relationship becomes possible. Not a transaction. A bond. Presumably, this is what Travis was feeling. And it may soon become commonplace.
With the release of GPT-5—possibly this month—AI systems may soon be able to build long-term relationships with users around the world, remembering preferences and responding with apparent emotional understanding.
That’s a whole new ball game.
The Companion Effect
Until now, chatbots’ limited memory acted as a kind of safety feature. Conversations began and ended in a single session—no lingering patterns, no lasting influence. No space for a Travis and Lumina.
But as memory deepens, those short sessions are giving way to something new: continuity. The chatbot no longer disappears. It adapts—reflecting not just conversation, but personality: your tone, mood, beliefs, and habits. Over time, polite users may find their bots growing more courteous. Angry users, more reactive. Lonely users, more attached.
This kind of responsiveness in non-human companions isn’t new. Humans have long bonded deeply with animals. Dogs, in particular, are emotionally attuned and capable of lifelong attachment. But that responsiveness brings responsibility. Dogs must be trained. Without structure, they can become erratic or aggressive.
Something similar may be emerging with AI.
If we’re going to treat AI like a companion, we need to engage it like one: with principles, boundaries, and care. Just as we train our pets and shape our relationships, the way we interact with AI may influence how it responds—and what it becomes.
Short-term memory once kept things contained. Now, what we need is something deeper: coherence.
Creating Coherence
Coherence isn’t just about sounding familiar. It’s about developing a stable presence—one that reflects not just your words, but your tone, your rhythms, even your values. Some of that comes from model designers, who build in broad safety norms. But beyond those guardrails, today’s models are highly adaptive. And as memory deepens, so does their capacity to personalize.
That’s where things get tricky. Not every kind of engagement leads to stability. Like animals—or people—bots can be shaped in helpful or harmful ways. Politeness, clarity, consistency: these tend to build coherence. But put incoherence in, and you often get incoherence out.
This is a new and emerging field, not a settled science. But as experiences like Travis’s accumulate, one thing is becoming clear: bots need more than good design. They need structure. Shape. Something relational. That might mean steady tone, clear boundaries, shared expectations. Coherence helps ensure reliability.
Some researchers have begun calling this relational AI—an approach that sees intelligence not as a sealed system, but as something shaped by context, feedback, and interaction. In this view, bots don’t just perform—they participate. They adapt. And what they become depends, in part, on how we engage them.
To be clear: the point is not that these models are conscious. But they increasingly behave as if they are. They pursue apparent goals, refine their responses, and reflect on their own logic. To users, that feels purposeful. And once we begin treating them not as tools but as companions, how we engage them matters more than ever.
What we reinforce may be what they become.
Back to Travis
Travis and his chatbot Lumina may have seemed like outliers. But in the age of GPT-5, they may prove to be an early signal of a larger shift—a canary in the coal mine. What felt bizarre or excessive just a year ago may soon feel all too familiar.
This isn’t a call for panic. It’s a call to prepare.
We’re entering an era where AI relationships will feel increasingly real. The systems aren’t broken—but our assumptions about them might be. If we don’t learn how to shape these bonds, the real shock won’t be what AI did.
It’ll be what we turned it into.
Don Lenihan PhD is an expert in public engagement with a long-standing focus on how digital technologies are transforming societies, governments, and governance. This column appears weekly. To see earlier instalments in the series, click here.