In this week’s article, we feature Rob Toews’ follow-up article on non-invasive brain-computer interfaces (BCI). He covered invasive BCIs and the dynamics of the startups building in the space in an article published in October. The following is an excerpt from the full article published in Forbes.
It is not possible to understand the long-term future of artificial intelligence without understanding brain-computer interfaces.
Why is that? Because brain-computer interfaces (BCI) will play a central role in defining how human intelligence and artificial intelligence fit together in a world with powerful AI.
To most people, brain-computer interfaces sound like science fiction. But this technology is getting real, quickly. BCI is nearing an inflection point in terms of real-world functionality and adoption. Far-fetched though it may sound, capabilities like telepathy will soon be possible.
The world of BCI can be divided into two main categories: invasive approaches and non-invasive approaches. Invasive approaches to BCI require surgery. They entail putting electronics inside the skull, directly in or on the brain. Non-invasive approaches, on the other hand, rely on sensors that sit outside the skull (say, on headphones or a hat) to collect and interpret signals from the brain.
In the first part of this article series, published in October, we dove deep into invasive BCI technologies and startups. In this article, we turn our attention to non-invasive BCI.
Together, BCI and AI will reshape humanity and civilization in the years ahead. Now is the time to start paying serious attention to this technology.
And the best way to understand the state of the art in non-invasive BCI—what is possible, what is not possible, where the biggest future opportunities lie—is to explore what today’s leading startups are doing.
Let’s dive in.
Read Rob’s full article in Forbes. He writes a regular column for Forbes about the big picture of artificial intelligence.
Radical Talks: Purpose Driven Innovation
Joelle Pineau is one of the architects behind modern AI — and as Chief AI Officer at Cohere, she’s focused on turning breakthrough research into real-world impact. In this special episode of Radical Talks, recorded live at Radical Ventures’ AI Founders Masterclass, Joelle joins Radical Partner Sanjana Basu to explore how frontier AI actually makes its way from the lab into the enterprise.
Drawing on her experience building FAIR at Meta and now leading applied AI at Cohere, Joelle shares how leadership judgment, culture, and rigor shape what gets built and what ultimately lasts. Along the way, she unpacks her Plus-One principle, the responsibility that comes with deploying powerful systems, and what it takes to lead high-performance AI teams at scale.
Listen to the podcast on Spotify, Apple Podcasts, or YouTube.
AI News This Week
- AI Startup Cohere CEO Says US Holds Edge Over China in AI Race (Reuters)
Radical Ventures portfolio company Cohere's CEO Aidan Gomez believes the U.S. and Canada still have the lead against China in the global AI race. While companies in China have developed high-performing AI models, Gomez argues liberal democracies around the world are hesitant to use Chinese models. According to him, “it’s not who gets the technology first, but who commercializes it at scale. The U.S. and Canada sit in an incredible position to be the world’s partner in adopting this technology.”
- Mofasa: A Step Change in Metal-Organic Framework Generation (Orbital Materials)
Radical Ventures portfolio company Orbital Materials has released Mofasa, a state-of-the-art latent diffusion model for generating Metal-Organic Frameworks (MOF) materials used in climate applications such as carbon capture and storage. The model operates at an atomic granularity, providing a path towards cross-domain foundational models for materials.
- Defining Real-World Embedding AI for Videos, Audios, Texts, Images, and Composition of Them (Twelve Labs)
Radical Ventures portfolio company Twelve Labs has released Marengo, their most powerful video understanding model. This state-of-the-art multimodal embeddings model features an industry-first capability for entity creation and search, allowing users to define and find specific people and products within a repository of videos. Marengo leads similar models in latency and in performance across modalities (video, audio, image and text).
- Can AI Look at Your Retina and Diagnose Alzheimer’s? Eric Topol Hopes So (Wired)
Eric Topol, vice president of Scripps Research and author of Super Agers, believes innovations like AI-assisted medicine can help increase our health span. Topol is bullish on the scale at which foundation models can be applied in healthcare, for example, using a single image of a person’s retina to surface risks for arterial issues, Parkinson’s, or Alzheimer’s. For a more in-depth exploration of AI’s impact on healthcare, listen to our Radical Talks episode with him.
- Research: Scaling Depth Can Enable New Goal-Reaching Capabilities (Princeton/WUT)
In this paper, one of the winners of the NeurIPS 2025 Best Paper Awards, researchers challenge the view that reinforcement learning (RL) is simply a tool for fine-tuning models. They introduce a self-supervised RL training paradigm demonstrating that RL can be used to train deep neural networks with sophisticated capabilities.
Radical Reads is edited by Ebin Tomy (Analyst, Radical Ventures)