Radical Blog

Geoffrey Hinton and Jeff Dean in Conversation

By Jordan Jacobs, Co-Founder & Managing Partner

This week we have a holiday treat! During NeurIPS 2025, I was privileged to moderate a discussion between Geoffrey Hinton, the Nobel Prize-winning “Godfather of AI,” and Jeff Dean, Google’s Chief Scientist and Gemini Lead, who is widely considered the greatest software engineer of all time.

The story of Geoff and Jeff’s friendship is, in many ways, the story of modern AI itself — the fusion of a powerful idea with the infrastructure and data needed to realize its potential. Their collaboration ranks among the most productive in modern history.

Our conversation spanned over three decades of innovation, including:

  • How Jeff Dean’s 1990 undergraduate thesis tackled parallelizing neural network training long before the world was paying attention.
  • The inside story of how Geoff, Ilya Sutskever and Alex Krizhevsky auctioned their early startup (DNN Research) and what it was like when Geoff joined Google as a “64-year-old intern” under Jeff’s supervision.
  • A candid moment where Geoff challenged Jeff on whether Google regrets publishing the seminal Transformer paper that would underpin advances in generative AI, including the launch of ChatGPT (Jeff’s answer: “No.”).
  • Their thoughts on the future of AI, moving beyond the Transformer, the development of Google’s TPUs, and how AI is now being used to design the very hardware it runs on.

It is a friendship that has changed the world, leading us into an era in which AI will impact every aspect of society and our lives. I hope you enjoy the conversation as much as I did. Happy holidays!

You can listen to the full discussion on our podcast on Spotify and Apple Podcasts, or watch the full episode on YouTube.

AI News This Week

  • Fei-Fei Li of World Labs: AI is Incomplete Without Spatial Intelligence  (FT)

    Radical Ventures Scientific Partner Fei-Fei Li, CEO and Co-founder of Radical Ventures portfolio company World Labs, discusses how world models like the company’s recent release, Marble, enable users to create 3D worlds from photo or video prompts. Fei-Fei argues that spatial intelligence, the ability to understand, reason about, and interact within physical space, is essential for AI’s evolution beyond language models.

  • Trump Signs Executive Order to Curtail State AI Laws  (WSJ)

    President Trump signed an executive order aimed at overriding state AI laws to establish a unified federal standard, with the Justice Department empowered to challenge state regulations deemed restrictive. The move follows lobbying from Silicon Valley executives who argued that diverse state-level AI regulations could hamper U.S. competitiveness against China in the AI race. 

  • Job Apocalypse? Not Yet. AI is Creating Brand New Occupations  (The Economist)

    AI is creating new roles that emphasize human skills, from data annotators who train advanced models in finance, law, and medicine, to forward-deployed engineers who customize and embed AI tools within organizations. Among the fastest-growing roles are AI risk-and-governance specialists, who ensure bots don’t leak data or crash operations, while the newly established chief AI officer role oversees the integration of multiple AI systems across enterprises.

  • For the First Time, AI Analyzes Language as Well as a Human Expert  (Wired)

    UC Berkeley researchers tested large language models on sophisticated linguistic tasks, finding that some models demonstrated “metalinguistic” ability, the capacity to analyze language itself rather than simply use it. The models successfully parsed complex recursive sentences and correctly inferred phonological rules from 30 newly invented mini-languages never encountered during training, challenging linguist Noam Chomsky’s view that AI systems cannot perform sophisticated language analysis.

  • Research: On the Interplay of Pre-Training, Mid-Training, and RL on Reasoning Language Models  (CMU)

    Researchers developed a framework to isolate the contributions of pre-training, mid-training, and reinforcement learning to language model reasoning capabilities. The study found that RL produces genuine capability gains only when tasks fall at the model’s “edge of competence” and when pre-training leaves sufficient room for improvement. Introducing a mid-training phase between pre-training and RL substantially improved performance under fixed compute budgets, highlighting mid-training as an underexplored but powerful component of training pipelines.

Radical Reads is edited by Ebin Tomy (Analyst, Radical Ventures)