Radical Reads

How Accurate Were Our 2024 AI Predictions?

By Rob Toews, Partner

At the end of last year, Radical Ventures partner Rob Toews published in Forbes a list of 10 predictions about the world of artificial intelligence in 2024. With 2024 drawing to a close, Rob published a follow-up retrospective evaluating how those predictions played out.

This week, we share the scorecard for Rob’s 2024 AI predictions. In the interest of brevity, we have included commentary for only three of the 10 predictions. To read the full analysis, check out the article in Forbes.

Prediction 1: Nvidia will dramatically ramp up its efforts to become a cloud provider.

Outcome: Right

Prediction 2: Stability AI will shut down.

Outcome: Wrong

Prediction 3: The terms “large language model” and “LLM” will become less common.

Outcome: Wrong

Prediction 4: The most advanced closed models will continue to outperform the most advanced open models by a meaningful margin.

Outcome: Right

The release of Meta’s state-of-the-art open-weight Llama 3 models this year, combined with persistent delays in the release of OpenAI’s GPT-5, may have made it tempting to conclude that the performance gap between closed and open models is closing.

Then OpenAI dropped o1.

o1, OpenAI’s new reasoning model, has opened up an entirely new vista in artificial intelligence research.

In his seminal 2019 essay The Bitter Lesson, Rich Sutton emphasized “the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great.” Sutton posited that “the two methods that seem to scale arbitrarily in this way are search and learning.”

Before o1, frontier AI models relied heavily on the second method and largely neglected the first. o1 has changed this. Rather than improving AI by massively scaling learning during training, o1 introduces a new paradigm of massively scaling search during inference.

Because it is so recent and so novel, o1’s full significance is not yet widely appreciated. But it will profoundly impact AI’s trajectory in the years ahead.

It is true that open-weight alternatives to o1 have already emerged, mere months after o1’s launch, including some—like Alibaba’s QwQ-32B-Preview—that appear to rival o1 in performance.

This is not surprising, and it does not change the fact that the most important zero-to-one innovations in AI continue to come out of the closed labs. As we wrote last year:

“As in many other domains, catching up to the frontier as a fast follower, after another group has defined it, is easier to achieve than establishing a new frontier before anyone else has shown it is possible. For instance, it was considerably riskier, more challenging and more expensive for OpenAI to build GPT-4 using a mixture-of-experts architecture, when this approach had not previously been shown to work at this scale, than it was for Mistral to follow in OpenAI’s footsteps several months later with its own mixture-of-experts model.”

Prediction 5: A number of Fortune 500 companies will create a new C-suite position: Chief AI Officer.

Outcome: Right

Prediction 6: An alternative to the transformer architecture will see meaningful adoption.

Outcome: Right

The transformer remains the dominant AI architecture today, by far. But 2024 proved to be the year that, to quote last year’s article, “a challenger architecture broke through and won real adoption, transitioning from a mere research novelty to a credible alternative AI approach used in production.”

That alternative architecture is the state space model (SSM).

Mamba, today’s most prominent state space model, has been downloaded hundreds of thousands of times on Hugging Face since its publication about a year ago. Mamba has inspired a number of variants that are in wide use today, from Vision Mamba to Mixture-of-Experts Mamba to MambaByte. As one example, well-funded Israeli startup AI21 Labs built its flagship model (named Jamba) on the Mamba architecture.

Cartesia, a young startup out of Chris Ré’s Stanford lab focused on productizing and commercializing SSMs, has seen significant growth this year. Its generative audio models—built on the SSM architecture—have emerged as a serious challenger to industry leaders ElevenLabs and OpenAI thanks to their superior efficiency, latency and ability to handle long inputs.

(Other challenger architectures also made progress this year—for instance liquid neural networks—but none have yet achieved the real-world adoption that state space models have.)

Prediction 7: Strategic investments from cloud providers into AI startups, and the associated accounting implications, will be challenged by regulators.

Outcome: Right

Prediction 8: The Microsoft/OpenAI relationship will begin to fray.

Outcome: Right

Prediction 9: Some of the hype and herd mentality behavior that shifted from crypto to AI in 2023 will shift back to crypto in 2024.

Outcome: Wrong

Prediction 10: At least one U.S. court will rule that generative AI models trained on the internet represent a violation of copyright. The issue will begin working its way up to the U.S. Supreme Court.

Outcome: Wrong

Nearly three dozen lawsuits are underway in the United States today on the issue of whether generative AI models trained on internet data represent a violation of copyright, or conversely are protected by the fair use doctrine. Every major AI provider, from OpenAI to Anthropic to Meta, has been caught up in the litigation.

But a meaningful ruling has not yet been issued in a single one of these cases.

In other words, expecting this prediction to come true in 2024 reflected an overoptimistic assessment of how quickly U.S. courts move.

But substantive rulings from the courts on these cases are coming soon. Summary judgment decisions are expected within months in two different cases on this topic, one against Nvidia and another against legal AI startup ROSS Intelligence. By the first or second quarter of 2025, we will have much more signal (though by no means the final verdict) about where courts are landing on this critical issue.

Read the article as published in Forbes. We will share Rob’s 2025 predictions in an upcoming edition of Radical Reads.

AI News This Week

  • World Labs Unveils Groundbreaking AI: Transforms single photos into 3D interactive scenes  (World Labs)

World Labs, a Radical Ventures portfolio company, has achieved a breakthrough in spatial intelligence with an AI system that transforms a single image into an immersive, interactive, high-fidelity 3D world. Unlike other AI systems, World Labs’ scenes are spatially consistent and enable camera control. The company is giving a few creators an early sneak peek at the technology to begin experimenting with the possibilities enabled by a 3D-native generative AI workflow.

  • Amazon to pilot AI-designed material for carbon removal  (Reuters)

    Amazon Web Services (AWS) is partnering with Radical Ventures portfolio company Orbital Materials to pilot a carbon-removal system for its data centers. The AI-designed material, described by Orbital CEO Jonathan Godwin as an “atomic-level sponge,” targets CO2 molecules at a fraction of traditional carbon offset costs. Starting in 2025, the three-year partnership will test the material in one AWS data center while exploring additional solutions for water usage and chip cooling challenges in AI infrastructure.

  • How close is AI to human-level intelligence?  (Nature)

While transformer-based large language models excel at pattern recognition across different data types, their performance degrades on tasks that require abstract reasoning and recombining knowledge in novel ways. Researchers are making rapid strides in key areas, including pattern recognition across multiple domains and internal feedback mechanisms. World models, like those in active development at Radical portfolio company World Labs, are also critical to creating more complex knowledge that intersects with the real world. While not artificial general intelligence (AGI), promising new architectures such as generative flow networks are attempts to move beyond next-token prediction and toward greater agency in data processing.

  • Best workplaces for innovators 2024: Biotech  (Fast Company)

    Radical portfolio company Nabla Bio has been named the top biotech workplace in Fast Company’s 2024 Best Workplaces for Innovators awards. Nabla distinguishes itself with a zero-hierarchy culture that values ideas over credentials. The company has eliminated degree requirements, including PhDs for scientists, and implemented a streamlined process for proposing new experiments through cloud-shared documents that require team feedback within 24 hours.

  • Research: Boltz-1 democratizing biomolecular interaction modelling  (MIT CSAIL, MIT Jameel Clinic, Genesis Therapeutics)

    Researchers from MIT and Radical portfolio company Genesis Therapeutics have introduced Boltz-1, an open-source deep learning model that matches commercial solutions in predicting 3D structures of biomolecular complexes. Boltz-1 achieves accuracy similar to the state-of-the-art AlphaFold3 model while incorporating innovations in architecture, speed optimization, and data processing. The breakthrough aims to democratize access to advanced biomolecular modelling tools to accelerate drug discovery and protein design through global scientific collaboration.

Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).