Stargate: AI’s $100 Billion Portal is Coming and Here’s What You Need to Know

Publisher’s Note: This column is the fifth in a series by Don Lenihan exploring the issues around the use of AI, including the social, economic and governance implications. To see earlier instalments in the series, click here.

Microsoft and OpenAI have announced plans to build a $100 billion supercomputer to drive artificial intelligence (AI) to a new level. Will “Stargate” achieve Artificial General Intelligence (AGI)? Perhaps. But in the meantime, two questions will increasingly shape the public debate over AI: “What is AGI?” and “Why does it matter to me?”

What is AGI?

Humans have the capacity to generalize. Unlike other animals, we can integrate knowledge from different fields and apply it in new ways to solve problems. For example, observing traffic patterns might inspire me to streamline my business operations to improve efficiency. The best ideas often come from the most unexpected sources – and for that, we can thank generalized intelligence.

AI doesn’t work this way – at least, not yet. Current AI systems are limited to “narrow” or specialized tasks, such as driving a car. While a Tesla can navigate traffic, it can’t use that knowledge to reengineer a business – at least, not without AGI. AGI is about flipping the switch that gets machines to apply knowledge across various domains. It’s about getting them to think like humans.

The idea of a machine that thinks like a human is both exciting and unsettling. It highlights what we hope for and what we fear most in AI. For example, AGI could assume a wide range of complex tasks, from managing our businesses to caring for the elderly, freeing us to do other things. AGI also raises deep concerns, from the loss of jobs to the loss of control over increasingly autonomous machines.

While AGI doesn’t exist yet, many experts believe it is coming soon, perhaps within five years. When it is achieved, huge changes will follow. These machines will be our intellectual equals, able to perform whatever (intellectual) tasks humans can perform – and perhaps better. AGI represents a unique threshold that, once crossed, will change human society irreversibly.

So, is AGI coming soon? Stargate could provide the breakthrough needed. To understand why, let’s look briefly at how AI learns.

Scaling Text to Build AI

AI models are “trained” by feeding them vast amounts of text. They use advanced algorithms to identify patterns in that text, mastering tasks like summarizing documents, telling stories, and even making jokes. The remarkable progress in AI over the last decade is largely due to increases in the amount of text used to train the models. Basically, the more they read, the more they learn.
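
For readers who want a concrete, if greatly simplified, picture of what “identifying patterns in text” means, here is a toy Python sketch. It merely counts which word tends to follow which in a tiny sample and then “predicts” the next word from those counts; real systems like GPT use neural networks and vastly more data, so treat this only as an illustration of the underlying idea.

```python
# Toy illustration of "learning patterns from text" – not how GPT actually works,
# but a minimal sketch of predicting the next word from patterns seen in training data.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> "cat", because "cat" follows "the" most often
```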

OpenAI is a prime example. In 2018, it used about a billion words to train its first GPT model. ChatGPT-5, expected this fall, is rumored to be trained on at least two trillion words.
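
To put those figures in perspective, here is a quick back-of-envelope calculation using the column’s own numbers (both of which are rough, and the second merely rumored):

```python
# Rough scale of the growth in training data described above.
# Figures are the column's estimates: ~1 billion words (first GPT model, 2018)
# versus a rumored "at least two trillion" words for ChatGPT-5.
gpt1_words = 1_000_000_000          # ~1 billion words
gpt5_words = 2_000_000_000_000      # rumored 2+ trillion words

print(f"Roughly a {gpt5_words // gpt1_words:,}x increase in training data.")
# -> Roughly a 2,000x increase in training data.
```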

Stargate not only continues this trend but takes it to new heights. It represents a $100 billion gamble that a hundredfold increase in computing capacity will produce not just huge gains in AI but, quite possibly, AGI. The theory is that if OpenAI feeds enough text into the machine, the boundaries of narrow intelligence will dissolve and AGI will emerge. Will it work?

No one knows for sure, but consider this: thousands of the world’s top computer scientists, along with the CEOs and management teams of the world’s biggest companies, have enough faith in the approach to invest hundreds of billions (see, e.g., Google and Meta) – possibly trillions – of dollars in scaling. Is this just misguided? Or have they seen something extraordinary in their research labs that we can only glimpse through a glass, darkly?

What Does Stargate Mean for You?

The inaugural column in this series argued that AI might be the most significant development of our time. On one hand, it promises extraordinary benefits, including major scientific advances, productivity gains, and improved management of complex systems like healthcare and education.

But there are also significant risks. Bad actors could use AI to spread disinformation, engineer dangerous viruses or chemical weapons, or conduct cyberattacks. In the worst-case scenario, AI could even turn against humanity.

This initial column called for an approach of “AI Pragmatism,” recognizing that while we cannot stop AI’s growth, we can shape its future. Let’s now add that, to succeed, we must first acknowledge and confront AI’s immense capacity to reshape our lives and society. AGI is central to this discussion and will become increasingly so as the decade unfolds.

This column is a heads-up. Whether or not Stargate achieves AGI, it marks the beginning of a new phase in AI’s development, but the public lags far behind. Most still see AI as, well, the latest technology, a collection of clever apps. But behind these tools lies a rapidly evolving force that will transform society at its core. The stakes are high, and the time to engage and understand the implications is now.

Don Lenihan, PhD, is an expert in public engagement who has a long-standing involvement in how digital technologies are transforming societies, governments, and governance. This column appears weekly.