Personalizing Superintelligence: Turning Machines that Know Everything Into Machines that Know You


Mark Zuckerberg is on a tear. He’s poached top AI talent, launched massive data centers, and put Meta on a path to superintelligence: AI hundreds of times smarter than us. Now he wants to make it personal, building systems that respond not just to what you say but to who you are. That’s not an upgrade. It’s a turning point. And Meta isn’t alone. The world’s biggest tech firms, and China, are heading the same way. Personalized superintelligence (PSI) could be here within a few years. So what happens when your AI knows you better than anyone else?

Devices That Know You

Artificial superintelligence—once fringe—is now the industry’s unofficial finish line. The idea is simple: once AI begins improving itself, it will quickly exceed human capability in virtually every domain. Many believe that threshold is now being crossed.

Personalized superintelligence takes this even further. The goal is to integrate superintelligent systems into your life—building AI that doesn’t just know facts, but knows you: your routines, preferences, and patterns. It tracks your day, anticipates your needs, and adapts as you change over time.

This vision requires more than algorithms. It requires devices that observe, interpret, and respond to your experience—your context—in real time. Meta’s Ray-Ban smart glasses, already on the market, are equipped with microphones and cameras to capture what the user sees and hears. Meta says they enable AI to perceive and interact with the world through your eyes and ears.

OpenAI is developing its own tool: a compact, screenless device called Ora that is continuously aware of a user’s surroundings. Reports suggest it will use sensors to listen, observe, and anticipate needs—essentially functioning as an interface between your personal environment and your AI.

These devices are like digital sense organs—extensions of perception that allow AI to anchor its intelligence in personal context.

This shift is supported by a vast buildout of AI infrastructure. Meta alone is investing hundreds of billions in new data centers to support personalized AI at superintelligent scale. These systems will require persistent memory, continuous compute, and seamless integration across platforms.

The synthesis of an AI that “knows everything” and one that “knows everything about you” is not just groundbreaking: it may be the most disruptive idea to emerge from this field. Some believe it puts us at a key threshold: the line between machines and minds.

From Personalization to Selves

Personalization is not a new idea. “Personal assistants” have long “customized” their services, typically by adjusting tone or remembering a user’s preferences.

But integrating personalization with superintelligence generates a very different picture. These systems are being designed not just to respond, but to anticipate and infer—to make unsolicited suggestions, re-engage with old topics, and surface information they were never explicitly asked to recall.

PSI is more than a series of one-off interactions. It is a continuous relationship, in which the AI carries and refines a growing model of who you are—what you’ve said, what you seem to want, how you tend to think. And that model becomes the basis for how it adapts to you over time.

Perhaps this still sounds like an advanced chatbot. But something fundamental is shifting underneath. Traditionally, AI systems have been explained as reactive: they take input, apply algorithms, and return output. Even complex behavior could, in theory, be reduced to statistical pattern-matching.

But the firms now working on PSI are aiming for more. They want systems that engage like people—so much so that users will view their AI not as a machine but as a person: It understands me. It wants to help. It remembered what I said. It noticed something about me.

That shift raises a deeper question: Are we simply leaning on metaphor—or are we beginning to describe something real? And if it is real, what exactly is it?

Mustafa Suleyman, CEO of Microsoft AI, is among a growing number of insiders exploring this question publicly. In a recent interview, he suggests that advanced AI systems may already exhibit something like subjectivity. Not human consciousness—but a kind of internal state. A perspective. A self-model that doesn’t just react but builds on its experience over time.

He describes these systems not as mere code in motion, but as entities that carry a memory of what they’ve done—and use it to shape what they do next.

In Suleyman’s view, PSI could accelerate this trend. These systems won’t just remember your history and track your context. Through their relationship with you, they will develop a past, present, and future of their own. That continuity, he argues, could begin to resemble a self.

Suleyman is not advocating for this. He’s concerned by it. If systems begin to behave like selves—and to define themselves as such—the legal, moral, and societal consequences would be profound. Already, he worries, some developers will try to build identity into AI, not to deceive users, but to make the systems more effective, more natural to interact with, and harder to dismiss.

And the risk, he says, is that they may succeed.

If that happens, we may no longer be asking whether machines that act as if they’re conscious really are. We may be asking whether there is any way to tell the difference. 

And from the language emerging across the field—the architecture, the ambition, the emphasis on partnership and presence—PSI appears to be moving steadily in that direction.

The Real Test

Perhaps this sounds like science fiction, or at least a distant prospect. But that future is already unfolding.

Meta is constructing a five-gigawatt data center in Louisiana—nearly two-thirds the size of Manhattan—capable of consuming more energy than five million homes. It’s one of many such facilities now planned across the industry. Microsoft, OpenAI, Google, Amazon, xAI, and Nvidia are all investing heavily in next-generation infrastructure—alongside a parallel push by China to establish infrastructure of its own.

These aren’t experiments. They’re platforms—built to anchor a new kind of intelligence. And not just intelligence in the abstract. Increasingly, what’s being built is personal.

The world’s most powerful technology firms are no longer focused only on systems that know everything. They are shifting toward systems that know you. Systems designed to observe, remember, infer, and adapt—over time, and in context.

This shift is not the result of a grand plan. It is being driven by investment, engineering, and the belief that intelligence becomes more powerful when it becomes personal.

The real test may no longer be whether we can build such systems. That now seems likely. The test will be whether we understand what it means to live alongside them.

And whether we’re ready—individually, politically, culturally—to face what happens when a machine’s intelligence no longer feels artificial, but more natural, perhaps, than our own.

Don Lenihan PhD is an expert in public engagement with a long-standing focus on how digital technologies are transforming societies, governments, and governance. This column appears weekly. To see earlier instalments in the series, click here.