Radical Reads

DeepSeek’s Wake-Up Call—Five Implications for AI Startups in 2025

By Ryan Shannon, Partner


Credit: Radical Ventures

The emergence of DeepSeek’s latest model last month surprised many of those who follow the AI industry. You can read more of our initial analysis of DeepSeek’s technical innovations and implications for the AI ecosystem in the February 3rd edition of Radical Reads. Today we take a further look at the ripple effects of DeepSeek’s arrival and what it means for startups looking to survive and thrive in 2025.

  1. The AGI Frontier Remains a Big Tech Game

Pursuing AGI still requires vast resources, proprietary data, and infrastructure. While DeepSeek showcased resourcefulness in the face of constraints, the pursuit of AGI remains the focus of tech giants (Google, OpenAI/Microsoft, Anthropic/Amazon). With a few exceptions, new startups should avoid competing head-on here and instead focus on applying frontier models to specific problems.

  2. Selling “Just a Model” Is a Challenging Strategy

DeepSeek’s release—and allegations of model distillation—highlight a harsh truth: standalone model APIs have diminishing defensibility. As replication costs plummet and open-source alternatives proliferate, value must come from product differentiation, vertical expertise, unique data, or tailored workflows. 

  3. Full-Stack, Product-Centric AI Wins

The most successful AI companies aren’t selling raw compute or generic models. They’re building end-to-end solutions that solve specific pain points (e.g., hiring bottlenecks, marketing workflows). Radical portfolio companies like Cohere and Writer are already pursuing this strategy effectively. In 2025, startups must own the last mile—delivering intuitive, industry-specific products that customers need.

  4. Efficiency Is the New Battleground

DeepSeek’s engineering breakthroughs—optimizing software stacks to bypass hardware limitations—prove that cost-effective training is achievable. Startups can replicate this mindset: smaller, domain-specific models augmented with RAG or fine-tuning often outperform larger, generic ones. Platforms like CentML, which optimize both models and compute resources, are lowering barriers further, enabling startups to deploy powerful AI without massive upfront investment.
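To make the RAG pattern concrete, here is a minimal sketch of how a smaller model can be augmented with retrieval: relevant documents are pulled from a domain corpus and prepended to the prompt as grounding context. Everything here is illustrative—the toy word-overlap scorer stands in for the vector-embedding search a production system would use, and the corpus and helper names are hypothetical.

```python
# Illustrative RAG sketch: retrieve relevant documents from a domain corpus
# and fold them into the prompt before calling a (smaller) language model.

def score(query: str, doc: str) -> float:
    """Toy relevance score via word overlap (real systems use embeddings)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def build_rag_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """Retrieve the top-k documents and assemble a grounded prompt."""
    top = sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]
    context = "\n".join(f"- {doc}" for doc in top)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context above."

# Hypothetical domain corpus (e.g., internal company policies).
corpus = [
    "Invoice disputes must be resolved within 30 days of billing.",
    "Quarterly marketing reports are due on the first Monday of each quarter.",
    "New hires complete security training during their first week.",
]
prompt = build_rag_prompt("When are invoice disputes resolved?", corpus)
print(prompt)
```

The assembled prompt would then be sent to a compact, domain-tuned model; because the answer is grounded in retrieved text rather than the model’s parameters, a much smaller model can compete with a larger generic one on in-domain questions.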

  5. The Golden Age of AI-Native Apps Has Arrived

As model access democratizes, value shifts to applications. Startups that leverage open-source or cost-efficient models to build task-specific tools—think AI copilots for healthcare diagnostics or supply chain agents—can dominate. The key? Solving concrete problems with razor-sharp focus, not chasing vague “AI-powered” buzzwords.

2025: The Year of “Non-BS AI”

The market is done with hype. Buyers demand solutions that “just work”—tools with clear ROI, domain expertise, and seamless integration. For startups, this means:

Vertical depth > horizontal breadth

Product velocity > model size

Customer obsession > technical novelty

The race is on. The winners will be those who ship.

 

Find more of Ryan Shannon’s analysis of AI startup investing on his Substack.

AI News This Week

  • A.I. is prompting an evolution, not extinction, for coders   (The New York Times)

    Software developers are emerging as early adopters of AI coding assistants, with tools capable of automating up to 80-90% of code creation in corporate environments. KPMG reports that AI co-pilot tools boosted developer productivity by 10-30% and saved developers 4.5 hours weekly. This shift is prompting training programs to adapt their curricula toward AI-assisted programming fundamentals. Rather than technical mastery of specific languages, future developers will need to cultivate creativity, critical thinking, and management capabilities to remain competitive.

  • Cohere has a plan to win the AI race—without burning piles of money  (The Logic)

    Co-founder Nick Frosst outlines how Radical portfolio company Cohere is pursuing a strategy focused on embedding customized AI securely within enterprises and government. The company’s North platform works across client applications and databases, serving regulated industries like banking, healthcare, and government for which security and privacy are crucial. The company is also focusing on international expansion and has developed state-of-the-art multilingual LLMs, having recently launched Takane with Fujitsu for the Japanese market. Unlike competitors spending billions on larger models, Cohere achieves success through smaller, data-efficient models focused on quality over computational power. 

  • AI cracks superbug problem in two days that took scientists years  (BBC)

    A new AI system demonstrated breakthrough capabilities by solving, in just two days, a complex microbiology problem that Imperial College London researchers had spent a decade investigating. Professor José R Penadés, whose team was studying how superbugs develop antibiotic resistance through viral tails that enable bacterial spread between species, described the result as a “shock”. The AI independently reached the team’s conclusion and suggested additional research avenues, despite having no access to their unpublished findings.

  • Biggest-ever AI biology model writes DNA on demand  (Nature)

    Scientists at the Arc Institute and Stanford University have released Evo-2, the largest AI model for biology to date, trained on 128,000 genomes spanning the entire tree of life. The open-source model can write chromosomes and small genomes from scratch, and interpret complex non-coding DNA sequences linked to disease. Unlike previous protein-focused models, Evo-2 handles both coding and non-coding DNA sequences, processing patterns up to 1 million base pairs apart. While showing promise in predicting disease-causing mutations and generating synthetic genomes, researchers emphasize that further improvements are needed before artificially designed genomes could function in living cells. 

  • Research: MuJoCo Playground simplifies robot learning and deployment  (UC Berkeley/Google DeepMind/University of Toronto/University of Cambridge)

    Researchers have introduced MuJoCo Playground, an open-source framework that streamlines AI-powered robotics development. The system enables training robotic policies in minutes on a single GPU, a process that previously required days on multiple machines. Three factors make its timing significant: the availability of affordable, high-performing robots; matured reinforcement learning for real-world transfer; and sufficient GPU power for local training. With simple installation and cross-platform support, this development could democratize AI robotics and catalyze experimentation by developers.

Radical Reads is edited by Ebin Tomy (Analyst, Velocity Program, Radical Ventures).