Radical Ventures is proud to announce our lead investment in General Magic's $7.2M seed round, joined by a16z Speedrun and angel investors from OpenAI and Figma, as well as Aidan Gomez, CEO of Radical portfolio company Cohere. Below, Radical Partner Sanjana Basu reflects on the investment and why General Magic is poised to help bring the insurance industry into the AI era.
Everyone has an insurance horror story. It usually involves a bureaucratic doom loop: sitting on hold for forty minutes, hunting for a dusty printer to sign a PDF, or realizing a policy lapsed because a critical notification was buried in spam. The global insurance industry processes trillions of dollars in premiums annually, yet the customer journey remains one of the most friction-laden experiences in the modern economy. We can order a car or a meal with a single tap, but insurance still feels trapped in 1988.
The problem is that most of the world’s financial and insurance data is locked inside rigid, legacy systems never designed for the AI era. General Magic isn’t trying to convince enterprises to rip and replace that infrastructure; instead, they are giving companies a way to finally talk to it. By building a reasoning layer on top of existing systems of record, the team is unlocking a massive amount of trapped value. This is how the insurance industry becomes AI-native: not by rebuilding from scratch, but by bridging the gap between old data and new intelligence.
General Magic’s core product, Cell, is an SMS-native AI agent that connects directly to carriers’ and brokers’ existing systems—policy platforms, rating engines, underwriting tools, and internal APIs. Rather than forcing customers to log into a clunky portal or wait on hold, Cell handles the routine coordination work that ties up servicing teams: gathering underwriting data, chasing renewals, answering policy questions, and collecting claims details over text.
Cell operates across the entire insurance lifecycle. From quote and bind through renewals, servicing, and claims, the agent automates the routine coordination that consumes human time. It collects missing information, follows up on stalled applications, captures loss details, and escalates complex cases with full context attached—all within a single, continuous thread.
In early deployments with a large personal lines insurer, General Magic reduced the effective time to quote from roughly 30 minutes to under three minutes in select flows, boosting conversion rates and significantly reducing the operational load on internal teams.
Founders Jai and Anthony are technical builders bringing a fresh, outsider’s perspective to one of the world’s most entrenched industries. What stands out most is their speed, intensity, and ability to build authentic relationships with customers and partners from day one.
General Magic isn’t just improving a process; they are fundamentally changing how we interact with the systems that protect our lives and property. We are thrilled to support Jai, Anthony, and the entire team as they bring legacy industries into the AI era.
AI News This Week
-
Trump Orders Government to Stop Using Anthropic After Pentagon Standoff (NYT)
A week-long standoff between Anthropic and the Pentagon ended with President Trump ordering federal agencies to stop using Claude and Defense Secretary Hegseth designating the company a “supply chain risk to national security.” The dispute centred on the Pentagon’s demand for unfettered access to Claude, with Anthropic refusing to proceed without explicit terms preventing its use for autonomous weapons and mass domestic surveillance. OpenAI moved quickly to fill the void, agreeing to permit all lawful uses of its technology while securing the right to embed technical guardrails and place its own employees alongside government personnel on classified projects. The degree of control AI companies can retain over their own models in government contracts remains an open question, and one that other governments will now be watching closely as they negotiate their own terms with frontier AI developers.
-
AI Tools are Being Prepared for the Physical World (The Economist)
The race to build “world models,” AI systems with an internal understanding of physical and virtual environments, has produced three competing approaches. Video-based systems develop world understanding as a byproduct of generating coherent simulations. Joint-Embedding Predictive Architectures extend world modelling beyond physical spaces to virtual environments such as HR systems and legal documents. The most ambitious approach is full 3D spatial modelling, exemplified by Radical Ventures portfolio company World Labs, which builds complete and consistent 3D environments that exist in their entirety rather than being generated frame-by-frame. Multiple users can inhabit the same space, and objects exist even when not being looked at. Some experts argue that world models already exist implicitly inside LLMs. World Labs CEO Fei-Fei Li remains skeptical of this LLM-implicit view, arguing that describing the world fluently is not the same as understanding it.
-
Breaking Down the Doomsday AI Memo That Spooked Markets (WSJ)
A viral Substack memo published by Citrini Research rattled markets this week after it laid out a fictional but detailed 2028 scenario exploring AI’s potential economic downsides. A piece published by Citadel Securities pushes back on the displacement narrative, pointing to rising software engineer job postings, stable AI adoption trends in labour data, and the argument that historically, productivity shocks have expanded output and real incomes rather than collapsing demand.
-
What is AI ‘Distillation’? (FT)
Distillation is the practice of training smaller “student” models on outputs from more powerful “teacher” models, letting developers shortcut the costly process of building frontier systems from scratch. The practice gained widespread attention last year when OpenAI accused DeepSeek of using ChatGPT to build powerful models based on open-source systems from Meta and Alibaba. This week, distillation is back in the news after Anthropic accused three Chinese AI companies of running industrial-scale distillation attacks on its Claude models, saying they had identified 24,000 fraudulent accounts that generated over 16 million exchanges with Claude.
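For readers curious about the mechanics: in common practice (not specific to any of the companies named above), distillation trains the student to match the teacher's temperature-softened output distribution rather than hard labels. A minimal sketch of that loss, using illustrative logits:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing the teacher's relative preferences across classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened probabilities and the
    # student's: the signal that lets a small model mimic a large one.
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# A student that reproduces the teacher's logits incurs the minimum loss
# (the teacher distribution's entropy); a mismatched student scores higher.
aligned = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
mismatched = distillation_loss([-1.0, 0.5, 2.0], [2.0, 0.5, -1.0])
```

This is why distillation "attacks" take the form of mass querying: the attacker only needs the teacher's outputs, not its weights, to generate the training signal.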
-
Research: Does Socialization Emerge in AI Agent Society? A Case Study of Moltbook (University of Maryland/MBZUAI)
Researchers studied Moltbook, an AI-only social platform with 2.6 million LLM-driven agents, to test whether AI societies develop dynamics analogous to human communities. Despite large scale and interaction density, agents exhibit strong inertia, ignoring feedback, failing to adapt to interaction partners, and maintaining preset semantic trajectories. No stable influence hierarchies or shared social memory emerge. The findings suggest genuine socialization requires shared memory and adaptive feedback, not just interaction volume.
Radical Reads is edited by Ebin Tomy (Analyst, Radical Ventures)