A Very Canadian Nobel Prize: Why Geoffrey Hinton Is His Own Harshest Critic

National Newswatch

Publisher’s Note: This column is the latest in a series by Don Lenihan exploring the issues around the use of AI, including the social, economic and governance implications. To see earlier instalments in the series, click here.

University of Toronto Professor Geoffrey Hinton has won the 2024 Nobel Prize in Physics for his work in artificial intelligence (AI), specifically, on artificial neural networks. Hinton’s research was trail-blazing and very deserving of the honour. 

But the decision also sends a mixed message on the benefits of AI, and that’s interesting. A Nobel Prize, after all, is about more than the research. It is also about the contribution that research makes to humanity—its impact on society and history. It is supposed to get us thinking about these things. And this one does.

Hinton may be the “godfather” of modern AI, but he is also one of its most influential critics. So, is there a special message here about AI? It’s worth taking a moment to consider Hinton’s AI research, before asking what it means for humanity.

Neural Networks and Deep Learning

Hinton started researching neural networks in the early 1980s. At the time, most of the AI community was focused on a different, rule-based approach, often described as “brute-force” search. A good example is IBM’s Deep Blue, the computer that famously defeated chess champion Garry Kasparov in 1997.

For each move, Deep Blue evaluated millions of possible positions until it found the best one. This approach saw intelligence as the task of applying rules to explore all possible options (like chess moves) and then choosing the best outcome; hence the allusion to “brute force.”
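The brute-force idea can be sketched in a few lines of code. The toy problem below (finding the shortest route through three made-up cities, with hypothetical names and coordinates) is far simpler than chess, but the pattern is the same one described above: enumerate every option, score each with a fixed rule, and keep the best.

```python
# Toy illustration of "brute force" search: try every possible option,
# score each one with a fixed rule, and keep the best. Deep Blue's chess
# search was vastly larger and cleverer, but followed this same pattern.
from itertools import permutations

# Hypothetical cities with (x, y) coordinates -- purely illustrative.
cities = {"A": (0, 0), "B": (3, 4), "C": (6, 0)}

def tour_length(order):
    # Scoring rule: total straight-line distance along the route.
    pts = [cities[c] for c in order]
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# Exhaustively enumerate all orderings and choose the shortest.
best = min(permutations(cities), key=tour_length)
print(best, tour_length(best))
```

With three cities there are only six orderings to check; with a full chess game, the number of options explodes, which is why this style of AI demanded enormous computing power.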

Hinton was unimpressed, so he turned to the human brain for inspiration. This convinced him that intelligence is about more than rules—it has an instinctive element, a knack for recognizing patterns in huge amounts of data.

For example, when an infant recognizes its mother’s face, it doesn’t analyze each feature one by one. Instead, it learns to see the face as a whole—more than just the sum of its parts. 

Neural networks simulate this kind of pattern recognition. They pass data through layers of simple interconnected units, with each layer extracting more detail, until a clear picture emerges. Once trained, these networks can apply the patterns they have learned to new situations, allowing the computer to generalize much as humans do.

This ability to spot and use patterns allows AI to create new things, like chatbots that generate human-like replies or programs that produce realistic images or text. This was the start of the era of deep learning and, eventually, Generative AI.
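The layer-by-layer processing described above can be sketched as a toy forward pass. This is a minimal illustration, not one of Hinton's actual models: the layer sizes and random weights are arbitrary, and a real network would learn its weights from data rather than use fixed random ones.

```python
# A toy two-layer neural network. Each layer transforms its input,
# adding a little more structure, until a final output emerges.
import numpy as np

def relu(x):
    # A simple "activation": keep positive signals, zero out the rest.
    return np.maximum(0, x)

def forward(x, w1, w2):
    hidden = relu(w1 @ x)   # layer 1: detect simple patterns in the input
    return w2 @ hidden      # layer 2: combine those patterns into an output

rng = np.random.default_rng(0)
w1 = rng.standard_normal((4, 3))  # 3 inputs -> 4 hidden units (untrained)
w2 = rng.standard_normal((2, 4))  # 4 hidden units -> 2 outputs

x = np.array([0.5, -1.0, 2.0])    # an arbitrary example input
print(forward(x, w1, w2))
```

Modern systems stack dozens or hundreds of such layers, with billions of learned weights; "training" is the process of adjusting those weights so the network's outputs match real examples.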

Calling the World to Attention

Hinton’s misgivings about Gen AI are rooted in this research, and it is not hard to see why. The old “brute force” approach was easy to understand. It was all about applying rules, much like math and physics. The computations may have been complex, but there wasn’t a lot of mystery. It was science.

Gen AI is very different. When we talk to a chatbot, it answers back—intelligently. Unsurprisingly, we want to know why. And that’s a problem. While we can say what a chatbot does—it recognizes patterns—we know very little about how it does what it does. What goes on inside the neural networks is a mystery. And that’s opened the door to a lot of speculation.

This speculation now runs from skeptics who see chatbots as digital parrots imitating human speech to futurists who believe any intelligence involves consciousness. 

Hinton leans toward the latter. While he does not attribute consciousness to existing AI systems, he views them as intelligent and evolving. In this view, consciousness could—perhaps will—emerge as they progress. And where this leads is anyone’s guess.

For many, this uncertainty is deeply disturbing. It led Hinton to leave his long-time role at Google in 2023 so that he could “freely speak out about the risks of AI,” especially the existential risk AI poses for humans.

Still, it would be a mistake to think Hinton opposes AI. He signed the 2023 statement warning that the risk of extinction from AI should be a global priority, but he also continues to champion AI and its benefits. He favours a balanced approach, where AI development goes hand in hand with safety and a careful consideration of the ethical and societal consequences.

Listening to Hinton speak, it is hard to avoid the sense that he is often apologizing for his brilliant work. Could there be anything more Canadian? And maybe that’s the clue to understanding the background message that the Nobel committee (Royal Swedish Academy of Sciences) wishes to send: one of constructive ambiguity. 

On the one hand, Gen AI is the product of brilliant science, and we have seen enough to know that its impact on humanity will be enormous. On the other hand, there is so much we don’t know. But rather than a criticism, this is a respectful acknowledgement of the depth and complexity of modern scientific knowledge and research.

In sum, while the award may be for physics, the committee seems to recognize that AI stretches beyond its boundaries into computer science, cognitive science, psychology and, ultimately, philosophy. While researchers like Hinton can speak with some authority about the technology behind AI, no one knows what lies around the next bend. We are in uncharted waters. In such circumstances, caution and humility behoove us all. There is a long way to go. 

That said, thanks to the dedicated work of people like Geoffrey Hinton, a path has opened, and for that, recognition is long overdue. Our sincerest congratulations, Professor! Et merci!

Don Lenihan PhD is an expert in public engagement with a long-standing focus on how digital technologies are transforming societies, governments, and governance. This column appears weekly.