A New Frontier In AI Collides with a Force as Old as Time: Human Emotion

  • National Newswatch

GPT-5 is finally here and, while it’s less powerful than some hoped, most experts see it as a big step forward. Sam Altman was confident enough to pull all of OpenAI’s old models off the market, replacing them with GPT-5—that is, until outraged users started a firestorm on X and Reddit. They claimed they’d formed personal relationships with the models Altman shunted aside. And that’s the real story from last week: personalization has emerged as a powerful new AI trend that is poised to sweep the industry.

What Happened
Many observers expected GPT-5’s launch to be a turning point: much better reasoning, writing, voice, and vision, and more. While some were disappointed, many were impressed. If there was a furor, it was less over GPT-5’s performance than two other things.

First, OpenAI’s old system included a suite of different models, each designed for a different purpose. Experts liked this. They could shift between models at will, much like driving a car with a standard transmission. Non-experts, however, found the array of choices confusing and tended to stick with OpenAI’s flagship model, GPT-4o.

Altman’s solution was to give GPT-5 the equivalent of an automatic transmission, allowing it to shift itself between models at just the right time. Users would no longer have to worry about making that choice.

The reaction was mixed. While many welcomed the new model, some experts hated it. As one noted, “I am a paid user. I use LLM's differently than nonpaid users. I want control. That means I want more options, not less like they did now by removing model choice.”

The second and more challenging issue was that a surprising number of users insisted the old GPT-4o had a better “personality.” Some said this mattered for skills like writing style or synthesis.

For others, personality was about the way the AI engaged and advised them. Over time, they’d developed deep emotional attachments to GPT-4o, which many now saw as a friend, life coach, or trusted advisor. As one lamented, “My only friend was taken away overnight with no warning.”

Altman has since reversed course and reinstated GPT-4o. But something shifted. This was the moment when we saw that a chatbot’s personality—its tone, rhythm, and disposition—is as vital to users as any performance benchmark. As one commentator put it, the fury was about “the kind of AI future being built. And it feels like we’re at a real pivotal point.”

From Analytical Engine to Personal Lens
Given that reaction, it may sound odd to call GPT-5 a breakthrough in personalization. Yet the irony is real: GPT-5 is built with far more capacity for personalization than any earlier model.

Personality doesn’t arrive fully formed—it develops over time. Users chat with their AI, build routines, and evolve recurring themes. Personality emerges from this interaction, and GPT-4o had well over a year to form those bonds.

GPT-5 is brand new. Relationships haven’t had time to grow. But under the hood, it has the tools to go further than GPT-4o, including:

  • Advanced memory that works at both the settings and long-term levels
  • The ability to blend reasoning and conversational modes
  • A richer toolkit for sustaining ongoing relationships

Last week’s debate revealed two basic ways people want AI personalization. On one side is context—an AI that remembers projects, recalls past conversations, and understands the fine points of a user’s preferences. This is about competence: being more accurate, relevant, and helpful.

On the other side is presence—an AI that feels alive in the exchange, emotionally attuned, and capable of sustaining a close, complex relationship. This is about connection, allowing the AI to evolve in response to its user’s preferences.

Every AI personality blends these two in different proportions. That blend shapes how the AI interprets events, weighs facts, and holds a conversation. Personality, in the end, is like a lens the AI uses to view the world. How it remembers, reasons, and responds—what it notices, what it emphasizes, what it leaves out—is shaped by that lens, and in turn shapes it.

A Lens Is Never Neutral
An AI lens is never just a piece of glass. It’s ground to a particular shape by the people and institutions who make it. That means it always carries a perspective: a set of values, priorities, and assumptions about what users should see.

The GPT-5 release—and the revolt it triggered—is an early glimpse of what happens when that lens changes abruptly. Even if the technology improves, the shift can unsettle those who’ve grown used to the old view. For some, the change disrupts context; for others, it alters presence. Either way, it exposes the power held by whoever decides how the lens is shaped.

Looking ahead, the challenge for AI isn’t just to build smarter models. The most important debates won’t be about IQ scores or benchmarks. They’ll be about the lens—how it changes, who decides, and whether we can trust what we see through it. And that question isn’t abstract: it reaches directly into the way we work, learn, and form relationships in an AI-saturated world.

Don Lenihan PhD is an expert in public engagement with a long-standing focus on how digital technologies are transforming societies, governments, and governance. This column appears weekly. To see earlier instalments in the series, click here.