This week, Radical Senior Director of Talent Yvonne Likomanova explores how the rise of the Neolabs, a new type of AI company founded by elite researchers from frontier labs and capitalized like research institutions, is reshaping the war for AI talent, and what traditional startups can do to compete. A link to the full article is included below.
Neolabs look like AI startups from the outside, but operate more like privately funded research institutions, with freedom and capital to explore ideas that incumbents aren’t pursuing. Founded by elite researchers, many of whom previously worked at frontier AI labs like OpenAI, Google DeepMind, and Meta, these organizations are raising some of the largest early-stage rounds in tech history. Their north star isn’t immediate revenue or customer acquisition, but rather the pursuit of foundational breakthroughs that will define the next decade of AI. While legacy frontier labs must balance expansive roadmaps and strict resource allocation with growing safety and regulatory demands, Neolabs are experimenting rapidly in stealth.
The Rise of Neolabs: Why Now?
The exodus of elite talent from legacy frontier labs to form Neolabs isn’t happening in a vacuum. This trend is driven by both push factors (lab politics, vision alignment) and pull factors (massive funding opportunities, entrepreneurial freedom). To understand the rise of Neolabs, we have to look at five distinct catalysts:
- Shifting priorities inside frontier AI labs: Between 2018 and 2022, organizations like OpenAI operated much like Neolabs do today: they were heavily funded, research-driven sandboxes. However, the launch of ChatGPT fundamentally changed their DNA. Today, these incumbents are massive, product-driven enterprise companies fighting for market share. Neolabs are emerging as a direct response to this shift in priorities. As Ilya Sutskever once put it, “We’re moving from the age of scaling to the age of research again.”
- Diminishing returns of scaling: For the past few years, the prevailing strategy in AI was simple: feed more data and compute into a Transformer model, and it gets smarter. But the low-hanging fruit of this “scaling law” era is arguably coming to an end as we reach the limits of high-quality human training data. The next leap in AI won’t come from just making models bigger; it will require fundamental architectural breakthroughs, expanding capabilities in new domains, or building infrastructure for others to do so. Big Tech is structurally disincentivized to abandon the Transformer architecture they’ve invested billions into developing. Neolabs, on the other hand, are starting with a blank slate and are purpose-built to explore these radical new paradigms.
- The ideological split between product velocity and safety: The departures of key leaders from OpenAI and other labs reflect deeper disagreements about the balance between rapid commercialization and long-term safety. Neolabs allow founders to embed their philosophy directly into the structure of a new company, whether that means prioritizing alignment research or accelerating capability development.
- Research talent as an asset class: A growing number of investors now view elite AI researchers as an asset class in their own right. This perspective helps justify the massive funding rounds we’ve been seeing, driven in part by perceived downside protection. The logic is that even if a Neolab fails to commercialize at scale, the elite team it has assembled still holds meaningful value and can secure a soft landing in Big Tech or elsewhere. While acqui-hires have historically been unfavorable outcomes for early-stage investors, the scale of these rounds makes them increasingly resemble growth-stage investments. At that scale, limiting downside becomes a more important consideration, though the priority, of course, remains capturing upside.
- Sovereign AI: Frontier model ownership is increasingly viewed as a national security mandate, fueling massive “sovereign” funding rounds for both Neolabs, like Mistral, and later-stage frontier AI companies, like Cohere. In France, Mistral has raised billions with the help of its government as a homegrown AI research lab, while Canada’s Cohere (a Radical portfolio company) has gained significant traction across Europe and Asia as a cloud-agnostic alternative to US hyperscalers. This geopolitical urgency has unlocked vast pools of sovereign wealth and strategic capital, especially for firms positioned to support a nation’s technological independence.
The Talent Squeeze
The rise of Neolabs significantly reshapes the war for AI talent in a few meaningful ways.
First, top AI talent who used to choose between big tech and early-stage startups now have a third option. For researchers, Neolabs present the opportunity to shape foundational directions in AI without immediate product pressure. This raises the bar for what “attractive” looks like in the AI labor market. Where founding or joining an AI startup was once the default path for ambitious builders, the landscape is now more fluid. Neolabs have also emerged as a new strategic destination for startup founders and talent, creating additional paths for collaboration or acquisition. For researchers who seek mission clarity and runway, Neolabs offer the pace and autonomy of startup building combined with the financial and operational stability of a well-capitalized organization. For many candidates, this feels like the best of both worlds.
Second, Neolabs are compressing the middle of the talent market. Frontier AI labs and big-tech incumbents still dominate in compensation and compute access. This leaves many traditional AI startups squeezed between two powerful poles: they can’t match incumbent pay or infrastructure, and they can’t match Neolabs’ research focus or perceived intellectual purity. Risk, too, is becoming more subjective. Some researchers view Neolabs as less risky because of their large funding rounds and longer runways; others view a high valuation before product-market fit (or even a product!) as inherently risky.
This cross-pollination of talent is ultimately healthy for the ecosystem, but it also means traditional AI startups have to be more thoughtful about how they communicate their value proposition and unique differentiators.
For elite talent, the landscape of opportunities has never been richer, though for companies, this means the war for that talent will only intensify. Ultimately, AI’s next chapter won’t be written exclusively by frontier AI labs or scrappy startups, but by the teams that can successfully translate breakthrough research into real-world impact.
Read the full article for more of Yvonne’s thoughts on this topic.
AI News This Week
- Forbes 2026 AI 50 List (Forbes)
Forbes’ annual AI 50 list of the most promising private AI companies featured three Radical Ventures portfolio companies: Crusoe, which builds gigawatt-scale AI data centers; Cohere, which develops enterprise-grade large language models and an agentic AI platform; and World Labs, co-founded by Fei-Fei Li, which is building large world models that reason about 3D space. Forbes also published an AI 50 Brink list of 20 promising early-stage startups, featuring Radical portfolio companies Ricursive Intelligence, developing reinforcement learning systems that automate chip design, and Periodic Labs, building AI scientists for materials discovery and the autonomous laboratories for them to operate.
- The AI Index 2026 Annual Report by Stanford University (Stanford)
This year’s edition of Stanford’s annual index report reveals the massive scale of AI adoption by individuals and consumers. Generative AI has hit 53% population-level adoption, and organizational adoption has reached 88%. Performance of AI systems has skyrocketed, with systems now meeting or exceeding human expert baselines in PhD-level science, competitive mathematics, and multimodal reasoning. On real-world tasks, AI agents jumped from 12% to 66% accuracy on structured computer-use benchmarks in a single year, within striking distance of human performance.
- We’re Using So Much AI That Computing Firepower Is Running Out (WSJ)
Given the rapid pace of AI adoption and the long lead times required to build out additional data center capacity, frontier labs are running up against compute constraints. AI companies have begun limiting usage to certain hours of the day and discontinuing product lines, and some are experiencing outages driven by explosive growth. Renting GPU capacity has also become more expensive, with prices for Nvidia’s Blackwell chips rising 48% over two months.
- How ‘Jagged Intelligence’ Can Reframe the AI Debate (NYT)
The term “jagged intelligence,” coined by Andrej Karpathy, describes how AI systems are simultaneously superhuman in some areas and weak in others, for example, solving International Math Olympiad problems while failing basic common-sense questions. Reinforcement learning has driven recent capability gains, working best where feedback is unambiguous, which is why the math and coding abilities of AI systems have been advancing rapidly.
- Research: Neural Computers (Meta/KAUST)
Researchers are developing “Neural Computers,” a new form of machine in which a single neural network assumes the roles of computation, memory, and interface, rather than splitting them across hardware layers. The model itself becomes the running computer. Early video-model prototypes, trained on raw screen and action data, learn to render terminals and respond to desktop clicks with reasonable fidelity. Reliable reasoning and stable reuse across sessions remain unsolved, but the work sketches a path toward fully learned computing substrates.
Radical Reads is edited by Ebin Tomy (Analyst, Radical Ventures)