This week Radical Ventures announced our Series C lead investment in Writer. Co-founded by CEO May Habib and CTO Waseem Alshikh, the company has an incredible growth trajectory that is a testament to its unique value proposition: a full-stack generative AI platform that works across multiple business functions. With a rapidly growing customer base of the largest blue-chip companies on the planet, Writer is revolutionizing how enterprises leverage AI. Today, May Habib shares why this investment represents such a significant milestone for the company.
Four years ago, Waseem and I took a bet on transformers — the core innovation behind modern large language models (LLMs). Long before they first made headlines, we saw the transformer’s potential to become the connective tissue of the enterprise and completely rewire how we work.
So we built Palmyra — the first Writer LLM. Since then, we’ve relentlessly pushed the boundaries of what LLMs could do by furiously productizing every capability increase of the transformer for the enterprise.
We like to say that the story of Writer is the story of the transformer. And we’re here because we never let up. Today, Writer is a full-stack generative AI platform. With our platform, bankers and financial advisors have access to expert assistants at the click of a button; product teams can mine customer insights and build better products faster; medical clinicians can make more informed decisions — even when the stakes are high.
Writer empowers the people behind the work to radically change how they do their jobs. And the business results are off the charts.
With our $200 million Series C funding announced this week, led by Radical Ventures, ICONIQ Growth, Premji Invest, and Salesforce Ventures, among others, we’re expanding our team, developing new features, and deepening our partnerships to deliver even more powerful solutions.
The secret to our success has always been our relentless focus on helping our customers navigate this massive wave of innovation and deliver outcomes. The future of AI is not something we’re waiting for. It’s here, and we’re building it. Right now.
AI News This Week
- AI startup Writer nabs a $1.9 billion valuation to become a super app for enterprises (Forbes)
AI startup Writer secured $200 million in Series C funding led by Radical Ventures, Iconiq Capital, and Premji Invest. Writer is Radical Ventures’ first investment out of its venture growth fund. The company’s proprietary Palmyra language models help enterprises with tasks from content creation to application building, serving over 300 customers including L’Oreal, Vanguard, Goldman Sachs, Salesforce and Intuit. Writer differentiates itself through cost-efficient model training ($700,000 versus more than $100 million for GPT-4) and domain-specific models like Palmyra-Fin and Palmyra-Med. Radical Ventures Partner Rob Toews notes that Writer’s full-stack approach of building complete AI applications for customers has led to “screaming product market fit,” saying the strategy “wins in a lot of cases against the ‘take a frontier model, and then try to build something yourself around it’ approach.”
- AI groups race to develop new benchmarks as models outgrow testing methods (Financial Times)
Leading AI companies are redesigning how they evaluate their AI models as current benchmarks become obsolete. Traditional testing methods like Hellaswag and MMLU, which use multiple-choice questions, are becoming inadequate as models consistently achieve over 90% accuracy. New benchmarks like SWE-bench Verified are emerging along with internal company benchmarks to assess reasoning and planning abilities. External initiatives like “Humanity’s Last Exam” and FrontierMath are introducing more rigorous testing methods, with FrontierMath revealing that even the most advanced models can answer fewer than 2% of its questions.
- DeepMind releases Nobel Prize-winning AlphaFold3 as open source (Nature)
Revolutionary protein-structure prediction tool AlphaFold3 is now available as open-source software for non-commercial use. The AI tool, whose creators John Jumper and Demis Hassabis won a share of the 2024 Chemistry Nobel prize, is now accessible to academics, though access to the model’s trained weights remains restricted. While AlphaFold3 proved reproducible even before this release, the success of its predecessor AlphaFold2’s open-source approach—which led to breakthroughs in cancer research and reproductive biology—demonstrates the potential impact of unrestricted scientific collaboration.
- AI-powered robot artist's Turing portrait fetches $1.1M at Sotheby's (The New York Times)
Ai-Da’s “A.I. God” made history as the first artwork by a humanoid robot to sell at auction. The piece depicts the pioneering computer scientist Alan Turing as a deity of artificial intelligence and fetched a record $1.1 million. Ai-Da first generated multiple interpretations of Turing’s photograph, which were then digitally composed using its generative AI model before being fully realized through 3-D textured printing. While some argue that AI still cannot capture what makes art truly human, such as our memories, emotions, and deeper understanding of the world, this record-breaking sale pushes the boundaries of who, or what, can be considered an artist.
- Research: A comprehensive survey of small language models in the era of large language models (Penn State University/University of Pennsylvania/UTHealth Houston/Amazon/RPI)
Researchers have conducted the first comprehensive survey of small language models (SLMs), addressing the need for efficient alternatives to large language models (LLMs). LLMs consume significant amounts of resources, such as energy, because of their scale. For instance, training LLaMA 3.1’s 405-billion-parameter model required 16,000 H100 GPUs running continuously for 54 days. Assuming each H100 GPU draws between 600 and 700 watts, this training process would consume an estimated 12,400 MWh of energy. This is equivalent to the annual electricity consumption of over 1,100 U.S. households, where the average household uses about 10.6 MWh per year. The researchers demonstrate that SLMs can achieve comparable performance to LLMs in domain-specific tasks while requiring significantly fewer resources.
Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).