This week, Radical Ventures announced our lead investment in Yutori, a company building AI agents that can reliably perform everyday tasks on the web. Founded by world-class researchers, Yutori is reimagining the interface between consumers and the digital world with an agent-first approach.
The web is simultaneously one of humanity’s greatest inventions and a mess overdue for an overhaul: a fractured landscape of tabs, forms, and friction. Both the floor and the ceiling of human productivity on the web need to be raised — not by cramming more into your day, but by helping you reclaim your attention for what truly matters. This is the guiding insight behind Yutori, the company we launched this week following a lead investment from Radical Ventures and over a dozen stellar angel investors.
At Yutori, our vision is simple: to give everyone an AI chief-of-staff capable of reliably handling everyday digital tasks, from ordering groceries or tracking campground reservations to more complex scenarios like planning a group trip with friends across continents.
The web is dynamic, non-deterministic, and noisy, which means some mistakes are inevitable, and the key agentic skill is resilience. Today’s AI agents, built atop general-purpose language models, struggle with this. They hallucinate, propagate errors across multi-step tasks, and lack the ability to recover from mistakes. We believe that fundamental research and technology advances are required to unlock superhuman performance in this ultimate digital environment.
We are reimagining, from scratch, the interface between consumers and the web, pioneering an agent-first approach. To do this, we are co-designing models, training techniques, and interfaces from the ground up. Unlike LLM-centric solutions, our system combines specialized post-training, reinforcement learning, test-time search, multi-agent collaboration, and generative interfaces that adapt to user behavior.
Our team has decades of experience building AI systems, including Llama 3 and 4 at Meta and Gemini at Google. Yutori is one of the few teams on the planet with the technical depth to build a truly production-ready AI agent, enabling assistants that don’t just answer questions but can complete tasks with superhuman ability.
Imagine a world where no human has to directly interact with the web again. Where teams of AI assistants coordinate to book flights, manage budgets, or file paperwork, proactively surfacing insights and course-correcting errors. We’re not just automating tasks; we’re rebuilding the web’s infrastructure around human agency.
We’ll be opening up early access to our product soon. To join the waitlist, visit yutori.com. You can read more about our launch in Reuters.
AI News This Week
- How artificial intelligence reasons (The New York Times)
A new generation of AI chatbots can “reason” through complex problems, excelling particularly in math, science, and programming. Unlike earlier models that required explicit prompting to explain their work, these systems autonomously break problems into steps, attempt multiple solution methods, and validate previous work without being asked. They learn these habits through months of trial and error, guided by feedback on which answers turn out to be correct. This shift emerged as frontier model developers exhausted internet training data in 2024, compelling them to adopt reinforcement learning approaches.
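The pattern described above — attempt multiple solution methods, then validate the answer before accepting it — can be sketched in miniature. This is a toy illustration of the reasoning loop, not any lab’s actual system:

```python
# Toy sketch of the reasoning pattern: try several solution
# methods in turn, and only accept an answer that survives a
# verification step (substituting it back into the problem).

def solve_by_trial(a, b, c):
    """Method 1: search for an integer root of ax^2 + bx + c."""
    for x in range(-100, 101):
        if a * x * x + b * x + c == 0:
            return x
    return None

def solve_by_formula(a, b, c):
    """Method 2: quadratic formula; returns one real root, if any."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    return (-b + disc ** 0.5) / (2 * a)

def verify(a, b, c, x, tol=1e-9):
    """Validation step: check the candidate actually solves the equation."""
    return x is not None and abs(a * x * x + b * x + c) < tol

def solve(a, b, c):
    # Keep the first answer that passes verification.
    for method in (solve_by_trial, solve_by_formula):
        x = method(a, b, c)
        if verify(a, b, c, x):
            return x
    return None

print(solve(1, -5, 6))  # x^2 - 5x + 6 = 0 has roots 2 and 3
```

The key design point mirrored here is that verification is cheap relative to solving, so checking every candidate before committing to it costs little and catches the errors any single method makes.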
- Doctors told him he was going to die. Then A.I. saved his life. (The New York Times)
Machine learning is being used in the treatment of rare diseases by identifying novel uses for existing medications. Joseph Coates’s recovery from terminal POEMS syndrome through an AI-suggested drug combination demonstrates this life-saving potential. Researchers can now systematically screen existing drugs at scale using AI. Dr. Fajgenbaum’s platform at UPenn compares 4,000 drugs against 18,500 diseases, addressing a gap where 90% of rare diseases lack approved treatments.
- Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks (TechCrunch)
A California policy working group co-led by Radical Ventures Scientific Partner Fei-Fei Li has released a report on AI regulatory policies. The report seeks to strike a balance between innovation and safety, recommending increased transparency from frontier AI labs through mandatory safety testing disclosures, third-party verification systems, and enhanced whistleblower protections. This represents a middle ground in California’s regulatory approach, incorporating input from safety advocates like Yoshua Bengio and industry representatives from frontier AI labs.
- Robots are getting smarter — and that’s a good thing (Financial Times)
AI is enabling machines to adapt to complex environments through the reasoning capabilities of large language models. Recent demonstrations showcase robots performing tasks like folding origami and organizing workspaces. Integrating AI “brains” allows robots to learn from internet data and respond to verbal instructions with increased dexterity. For aging societies with shrinking workforces, these advancements present opportunities in healthcare, manufacturing, and domestic settings.
- Research: Cognitive behaviors that enable self-improving reasoners (Stanford University/SynthLabs)
Researchers have identified four cognitive behaviors that determine why some language models self-improve through reinforcement learning while others plateau: verification, backtracking, subgoal setting, and backward chaining. These behaviors mirror expert human problem-solving. The study found that introducing them through targeted priming enhances RL-driven performance, with reasoning patterns proving more important than solution accuracy itself. By emphasizing these cognitive behaviors, researchers successfully induced the patterns necessary for efficient use of test-time compute in previously limited models.
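The priming idea summarized above — seeding a model’s context with an exemplar trace that demonstrates the four behaviors before reinforcement learning — can be sketched as follows. The exemplar text and helper names here are illustrative, not the study’s actual data or code:

```python
# Hypothetical sketch of behavioral priming: prepend an exemplar
# reasoning trace exhibiting verification, backtracking, subgoal
# setting, and backward chaining, so that sampled rollouts (and
# subsequent RL) imitate its structure.

PRIMING_EXEMPLAR = """\
Goal: make 24 from 4, 8, 2.
Subgoal: find a pair that yields a factor of 24.        # subgoal setting
Working backward, 24 = 4 * 6, so build 6 from 8 and 2.  # backward chaining
Try 8 - 2 = 6, then 4 * 6 = 24.
Check: (8 - 2) * 4 = 24. Correct.                       # verification
(An earlier attempt, 8 * 2 = 16, led nowhere and was abandoned.)  # backtracking
"""

# Markers used below to sanity-check that the context exhibits
# all four behaviors before training begins.
BEHAVIOR_MARKERS = ["subgoal", "backward", "Check", "abandoned"]

def build_primed_prompt(question: str) -> str:
    """Prepend the exemplar trace to a new problem statement."""
    return PRIMING_EXEMPLAR + "\nGoal: " + question

prompt = build_primed_prompt("make 24 from 3, 8, 8.")
print(all(marker in prompt for marker in BEHAVIOR_MARKERS))
```

The study’s finding, on this reading, is that exposure to such patterns matters more than whether the exemplar’s final answers are even correct — the structure of the reasoning, not its accuracy, is what unlocks self-improvement under RL.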
Radical Reads is edited by Ebin Tomy (Analyst, Velocity Program, Radical Ventures).