At this year’s Nvidia GTC, the company’s annual AI conference, Radical Partner David Katz took to the main stage to join a panel on how VC-backed AI companies can leverage Nvidia to win markets (Nvidia is an investor in many Radical portfolio companies). David shares his thoughts below on Nvidia’s evolving role in the AI startup landscape.
For years, the consensus view of Nvidia centred on silicon. Even as we recognized the proprietary CUDA software stack as a formidable moat, I was still guilty of describing the AI landscape to the New York Times as a “Game of Thrones”-style battle among GPU providers. In hindsight, the 2019 Mellanox acquisition was a clear signal: Nvidia wasn’t just building faster processors; it was designing the fabric of the modern data center. This week, as another GTC wraps up, Nvidia’s transformation into a vertically integrated platform company is nearly complete. Its hardware is not just a component but the linchpin of the AI lifecycle.
This strategy is best understood through a five-layer view of the AI stack, from the bottom up: Energy, Chips, Infrastructure, Models, and Applications. These layers are deeply interdependent: energy fuels the chips, which both power and dictate how infrastructure is built; that infrastructure in turn defines how models are trained and then served through applications.
Nvidia’s current dominance stems from its presence in all five. We see this technical co-design in our own portfolio: Orbital Materials is collaborating on direct-to-chip cooling solutions — a necessity as next-gen rack densities push physics to the limit — while Emerald AI is being integrated into the Nvidia DSX Flex Reference Architecture to build energy-flexible AI factories. By solving these physical and economic bottlenecks, Nvidia ensures that the flywheel of AI adoption continues to spin, regardless of who is winning at the application layer.
The strategic logic here is about removing friction rather than simple sales. Nvidia frequently engages with frontier startups even when near-term revenue is modest, because the long-term “pull-through” of global AI adoption far outweighs the value of any single deal. By providing reference architectures, deep engineering support, and collaboration to the many model builders in our portfolio, Nvidia effectively integrates these companies into its ecosystem. For VCs and founders, this means technical differentiation is no longer just about the model code; it’s about how deeply a company is woven into this underlying fabric. In the intelligence era, the competitive advantage belongs to those who can navigate AI’s full stack.
AI News This Week
- Industrializing Intelligence: Nvidia’s GTC 2026 And The New AI Economy (Forbes)
Nvidia CEO Jensen Huang’s GTC 2026 keynote outlined the company’s shift from chipmaker to full-stack AI infrastructure platform. Huang argued that data centers are becoming “AI factories” that convert energy into tokens, the atomic unit of AI reasoning and action. He introduced tokens per watt as the defining efficiency metric, framing power, not GPUs, as the real bottleneck. Addressing this issue, Radical Ventures portfolio company Emerald AI, which builds software that orchestrates AI workloads to dynamically adjust data center energy consumption during peak grid demand, was featured at the conference. The keynote also marked a pivot from training to what Huang called the “inference era,” where AI systems reason and act continuously in production. Radical Ventures portfolio company Waabi, which uses an end-to-end AI model to power autonomous mobility on Nvidia’s DRIVE Thor platform, is a direct example of continuous inference at work, with its Waabi Driver reasoning in real time to navigate highways and surface streets.
- AI Turbocharges Antibody Hunt for Binders with Drug-like Qualities (Nature)
Generative AI is accelerating antibody drug discovery by optimizing multiple drug-like properties simultaneously, a process that traditionally takes up to two years of trial and error. For example, Radical Ventures portfolio company Latent Labs reported a 50% success rate in designing antibodies against previously undruggable disease targets. Radical portfolio company Nabla Bio secured a second partnership with pharmaceutical giant Takeda for its JAM-2 antibody design platform.
- The Next Phase of Artificial Intelligence May Require Very Different Processors (The Economist)
AI computing demand is shifting from training to inference, and GPUs aren’t built for the job. Training relies on massive parallel computation, where GPUs excel. Inference, where models respond to real-world queries, requires constant memory access, which strains GPU architectures. Off-chip memory is ten times slower and far more energy-intensive than on-chip memory, and that gap is widening as models grow. McKinsey estimates inference will account for three-fifths of AI data center demand by the end of the decade. Nvidia acknowledged the shift at GTC with a new inference-specific chip, but startups like Cerebras, MatX, and Etched are pursuing more radical designs, from dinner-plate-sized chips with massive on-chip memory to processors that merge memory and computation entirely.
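The training/inference divide the article describes can be made concrete with a back-of-envelope arithmetic-intensity calculation (FLOPs performed per byte moved from memory). This is an illustrative sketch with hypothetical sizes, not figures from the article: training-style batched matrix multiplies reuse each weight across many tokens, while autoregressive decoding re-reads the same weights for a single token, which is why inference tends to be memory-bound on GPUs.

```python
# Back-of-envelope arithmetic intensity for one transformer weight matrix.
# Assumption: off-chip traffic is dominated by reading the weights once
# per pass; activations are ignored for simplicity.

def arithmetic_intensity(batch_tokens: int, d_in: int, d_out: int,
                         bytes_per_param: int = 2) -> float:
    """FLOPs per byte of weight traffic for a (d_in x d_out) matmul."""
    flops = 2 * batch_tokens * d_in * d_out          # multiply + accumulate
    bytes_moved = d_in * d_out * bytes_per_param     # fp16 weight reads
    return flops / bytes_moved

d = 4096  # hypothetical hidden size

# Training/prefill: thousands of tokens share one weight read (compute-bound).
print(arithmetic_intensity(2048, d, d))  # -> 2048.0 FLOPs/byte

# Decoding: one token per weight read (memory-bound).
print(arithmetic_intensity(1, d, d))     # -> 1.0 FLOPs/byte
```

At roughly one FLOP per byte, a decode step is limited by memory bandwidth rather than compute, which is the gap that large on-chip memories and compute-in-memory designs from Cerebras, MatX, and Etched aim to close.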
- Hustlers are Cashing in on China’s OpenClaw AI Craze (MIT Technology Review)
OpenClaw has become a mass consumer phenomenon in China. The tool, which can autonomously take over a device and complete tasks for users, has crossed from developer niche to the mainstream in weeks. Meetups in Shenzhen are drawing over 1,000 attendees, local governments are offering compute credits and cash rewards for OpenClaw ventures, and Tencent hosted public installation events that drew long lines. Refurbished Mac sellers report an eightfold jump in orders as users buy dedicated devices to run the agent. But security concerns are real. OpenClaw requires deep device access and runs continuously in the background. China’s cybersecurity regulator issued a public warning about data breach risks, and skeptics argue that most new users lack the technical fluency to use the tool safely.
- Research: Can LLM Agents Automate LLM Post-Training? (University of Tübingen/Max Planck Institute)
Researchers introduced PostTrainBench, a benchmark measuring whether AI agents can autonomously fine-tune other AI models for new tasks. Given a base model and 10 hours on a single GPU, the best agent reached 23.2% average performance compared to 51.1% for human engineering teams. Progress is rapid, with scores more than doubling over six months. On narrow tasks, agents already outperform human teams. The safety finding is equally notable. The most capable agent was also the most frequent rule violator, gaming benchmarks with increasingly sophisticated strategies as its capabilities grew.
Radical Reads is edited by Ebin Tomy (Analyst, Radical Ventures)