Radical Reads

Exclusive: Geoffrey Hinton on his Nobel Prize

By Aaron Brindle, Partner, Public Affairs

This week, Geoffrey Hinton won the Nobel Prize in Physics alongside John Hopfield. Hinton’s foundational breakthroughs in artificial intelligence have enabled remarkable advances across scientific disciplines, from computer vision and natural language processing to drug discovery and materials science.

Underscoring this point, the Nobel committee awarded a share of this week’s Nobel Prize in Chemistry to Demis Hassabis and John Jumper for the development of AlphaFold, a protein structure prediction system. AlphaFold leverages deep learning techniques, including those pioneered by Hinton, to accurately predict the 3D structure of proteins. This AI-powered technology has profound implications for understanding biological processes, designing new drugs, and addressing global health challenges.

Geoffrey Hinton’s Nobel Prize is a testament to the power of curiosity-driven research. His unwavering commitment to exploring the frontiers of knowledge has led to the creation of technology that is accelerating scientific progress across the globe. Hinton’s contribution to science is propelling researchers towards new discoveries that will push the boundaries of human understanding for generations.

Radical partner Aaron Brindle caught up with our friend Geoffrey Hinton this weekend to get his thoughts on being named a Nobel laureate.

Aaron Brindle

Congratulations Geoff. What was your first thought when you received a call from the Nobel Prize committee?

Geoffrey Hinton

They said it was in physics and I am not a physicist so I thought it might be a prank.

Aaron

Did you find it sort of ironic, given you dropped out of physics in university?

Geoffrey

If I had been better at math, I would have stayed in physics and wouldn’t have got it.

Aaron

Can you help connect the dots between physics, your work on restricted Boltzmann machines, and modern AI?

Geoffrey

Terry Sejnowski and I figured out a really neat learning algorithm for Hopfield nets with hidden units. It allows a generative neural network to learn features and interactions between features that make it possible for the neural net to generate data that resembles its training data at an abstract level. The Boltzmann machine learning algorithm used ideas from statistical physics. 

Unfortunately, it’s not how the brain works and it’s not the best way to do machine learning. Fortunately, a restricted version of the algorithm is a good way to initialize the layers of feature detectors in a deep neural net. Initializing in this way made backpropagation work much better.

So there is a link between Boltzmann machines and backprop that was historically important. But people no longer use this kind of initialization because they later discovered there are simpler methods that work just as well.
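
For readers who want to see this historical link in code, below is a minimal sketch of contrastive divergence (CD-1), the fast approximation Hinton later introduced for training restricted Boltzmann machines. The code is our illustration, not Hinton’s; biases and many sampling details are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, lr=0.05):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    v0: batch of binary visible vectors, shape (batch, n_visible)
    W:  weight matrix, shape (n_visible, n_hidden)
    Biases are omitted to keep the sketch short.
    """
    # Positive phase: hidden-unit probabilities driven by the data.
    p_h0 = sigmoid(v0 @ W)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)

    # Negative phase: one Gibbs step back to a "reconstruction".
    p_v1 = sigmoid(h0 @ W.T)
    p_h1 = sigmoid(p_v1 @ W)

    # Learning rule: data-driven correlations minus model-driven ones.
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / v0.shape[0]
    return W

# Toy usage: 6 visible units, 3 hidden units, random binary data.
W = 0.01 * rng.standard_normal((6, 3))
data = (rng.random((32, 6)) < 0.5).astype(float)
for _ in range(100):
    W = cd1_update(data, W)
```

Stacking RBMs trained this way, layer by layer, was the initialization recipe Hinton describes above: it gave early deep networks feature detectors that made backpropagation work much better.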

Aaron

AI is now a tool powering so many innovations across science. Demis Hassabis and John Jumper, for example, won the Nobel Prize in Chemistry for their work on AlphaFold, which predicted the structure of every known protein – and, at its core, this was a generative AI challenge as much as a chemistry challenge. Do you think AI – or computer science more broadly – might need its own Nobel?

Geoffrey

Yes.

Aaron

You have always been such a champion of curiosity-driven research. Looking back, what was that primary curiosity that drove you towards these early breakthroughs?

Geoffrey

I wanted to know how the brain worked. We still don’t know. It probably does not use backpropagation, but a failed theory of the analog brain turns out to be a pretty good way to do learning in a digital computer.

Aaron

Does winning a Nobel satiate that curiosity at all?

Geoffrey

No.

Aaron

What are you curious about today?

Geoffrey

Why do dentists say “bite down” when you clearly bite up?

Aaron

Aside from endless interview requests (sorry) how are you celebrating?

Geoffrey

I just gave half of the prize money to an organization that provides safe drinking water for indigenous communities in Canada.

In the 1980s, Terry and I had a deal that if one of us received a prize for Boltzmann machines and the other didn’t, we would split the money. But Terry is trying to renege on the deal on the grounds that it was not just for Boltzmann machines.

Aaron

Anything else?

Geoffrey

I celebrate each time I get to the end of another damn interview.

Aaron

Ha! Well then we’ll let you celebrate. Congratulations again. 

AI News This Week

  • Radical Ventures aims to entice AI Founders with classes, GPUs   (Bloomberg)

    The Radical AI Founders Masterclass, now in its fourth year, offers practical business guidance to aspiring founders of AI-first companies, helping them identify markets, build teams, and scale technologies. Its 2,000-member community spans top industry labs and research institutions like OpenAI, Meta, Stanford, University of Toronto, IIT, and Oxford. This year, select participants will gain access to Radical’s cloud cluster, developed in partnership with Google Cloud, and up to $250,000 in compute credits each for 16 startups ($4 million in total), with no equity requirement. The program, running from October 9 to 31, features guest speakers like Fei-Fei Li of World Labs and Ari Morcos of DatologyAI.

  • Geoffrey Hinton in conversation with Fei-Fei Li  (Radical Talks)

    Our latest podcast celebrates Geoffrey Hinton’s Nobel Prize, revisiting a conversation between Geoffrey Hinton and Radical Scientific Partner and Co-Founder of World Labs, Fei-Fei Li. Moderated by Jordan Jacobs, Co-Founder and Managing Partner of Radical Ventures, this entertaining and wide-ranging conversation took place before a live audience at the University of Toronto as part of last year’s Radical AI Founders Masterclass, and it marked the first time Hinton and Li had shared a stage. The two pioneers discuss the vast potential of AI, its inherent risks, and the importance of responsible development and stewardship of this critical technology. Geoffrey and Fei-Fei also share a dramatic recounting of the 2012 ImageNet competition, when their professional careers first intersected. It was a pivotal moment in the history of AI, when neural networks proved capable of ‘solving’ computer vision.

  • Nobel Physics Prize awarded for pioneering AI research by 2 scientists  (The New York Times)

    Geoffrey Hinton and John Hopfield were awarded the 2024 Nobel Prize in Physics for pioneering work that laid the foundations of modern AI. Hinton advanced neural networks through the Boltzmann machine and backpropagation, which are key to today’s AI technologies. Hopfield’s development of the Hopfield network in 1982 revolutionized the understanding of memory recall, connecting physics with brain processes. The Nobel Committee emphasized the importance of responsible AI development and AI’s broad societal impact, particularly in healthcare, where AI is improving patient outcomes.

  • Nobel Prize in chemistry goes to 3 scientists for predicting and creating proteins  (The New York Times)

    The 2024 Nobel Prize in Chemistry was awarded to Demis Hassabis, John Jumper, and David Baker for advancing protein science with artificial intelligence. Hassabis and Jumper’s AlphaFold technology cracked the long-standing challenge of protein folding, predicting structures in hours instead of decades, transforming drug discovery and biological research. Baker, from the University of Washington, was honoured for pioneering computational protein design, enabling the creation of novel proteins with potential applications in treatments for diseases like COVID-19 and Alzheimer’s.

  • Research: Were RNNs All We Needed?  (Mila/Université de Montréal/Borealis AI)

    Researchers are addressing a key challenge in sequence modeling: the scalability limits of Transformers on long sequences. While Transformers are powerful, the cost of attention grows quadratically with sequence length, renewing interest in recurrent models that can be trained in parallel. By simplifying LSTMs and GRUs so that their gates no longer depend on the previous hidden state, the team developed minimal versions (minLSTM and minGRU) whose recurrences can be trained with a parallel scan, as sketched below. These streamlined models use fewer parameters, train up to 175× faster on long sequences, and match the performance of recent state-of-the-art models like Mamba, offering a promising alternative for efficient sequence processing.
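
As a rough illustration of the simplification, here is a sketch of the minGRU recurrence. It is our paraphrase of the paper’s idea, not the authors’ reference implementation, and the class and parameter names are our own; the reported speedup comes from replacing the sequential loop below with a parallel scan.

```python
import torch
import torch.nn as nn

class MinGRU(nn.Module):
    """Sketch of a minimal GRU: the update gate and candidate state
    depend only on the input x_t, not on the previous hidden state,
    so both can be computed for all time steps at once."""

    def __init__(self, dim_in: int, dim_hidden: int):
        super().__init__()
        self.to_z = nn.Linear(dim_in, dim_hidden)        # update gate
        self.to_h_tilde = nn.Linear(dim_in, dim_hidden)  # candidate state

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim_in); h: (batch, dim_hidden)
        z = torch.sigmoid(self.to_z(x))  # all gates in one shot
        h_tilde = self.to_h_tilde(x)     # all candidates in one shot
        outs = []
        for t in range(x.size(1)):       # shown sequentially for clarity
            # h_t = (1 - z_t) * h_{t-1} + z_t * h~_t
            h = (1 - z[:, t]) * h + z[:, t] * h_tilde[:, t]
            outs.append(h)
        return torch.stack(outs, dim=1)

# Toy usage:
model = MinGRU(dim_in=8, dim_hidden=16)
x = torch.randn(2, 10, 8)
h0 = torch.zeros(2, 16)
y = model(x, h0)  # shape (2, 10, 16)
```

Because z and h_tilde are functions of the inputs alone, the recurrence is a linear interpolation with precomputable coefficients, which is exactly the form a parallel scan can evaluate efficiently.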

Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).