This month, Radical Ventures portfolio company Cohere introduced Command R+, a new large language model (LLM) tailored for real-world business applications and workflows. The model has climbed the Arena rankings, establishing itself as the best open model available today.
Following the launch of Command R+, Radical Ventures Partner Aaron Brindle spoke with Cohere CEO and Co-founder Aidan Gomez – also a co-author of the seminal paper that introduced the Transformer architecture underpinning generative AI – about the new model and the future of LLMs.
Aaron Brindle (AB): Congratulations on the launch of Cohere’s latest large language model, Command R+. What was the ambition behind building this model?
Aidan Gomez (AG): We believe this will be the year that enterprises move beyond proof-of-concept AI projects and into large-scale production. When we designed our latest Command R series of models, we focused on building state-of-the-art large language models that enterprises can efficiently scale into production. We took a customer-first approach and were very intentional about the capabilities we wanted our models to excel at based on real-world business needs for AI.
AB: How is Command R+ different from the latest models released by OpenAI and Google?
AG: Command R+ is the market leader in the emerging scalable model category, which balances high efficiency with strong accuracy to handle enterprise-grade workloads. Our most powerful model to date is best-in-class at business-critical capabilities. Command R+ excels at Retrieval Augmented Generation (RAG) with citations to reduce hallucinations, multilingual coverage in 10 key languages to support global business operations, and Tool Use to automate sophisticated business processes.
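Retrieval Augmented Generation with citations can be sketched in miniature: retrieve the most relevant documents, pass them to the model alongside the query, and have the answer cite document ids so claims can be traced back to sources. The toy keyword retriever and prompt/citation format below are illustrative assumptions, not Cohere's implementation.

```python
# Minimal sketch of RAG with citations. The retriever, prompt format,
# and citation scheme are illustrative assumptions, not Cohere's API.

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = [
        (len(q_terms & set(doc["text"].lower().split())), doc)
        for doc in documents
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_grounded_prompt(query, documents):
    """Prepend retrieved snippets so the model can cite them by id."""
    retrieved = retrieve(query, documents)
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in retrieved)
    prompt = (
        "Answer using ONLY the documents below and cite their ids.\n"
        f"{context}\n\nQuestion: {query}"
    )
    return prompt, [d["id"] for d in retrieved]

docs = [
    {"id": "doc_1", "text": "Command R+ supports ten key business languages."},
    {"id": "doc_2", "text": "The CHIPS Act funds semiconductor factories."},
]
prompt, cited_ids = build_grounded_prompt(
    "Which languages does Command R+ support?", docs
)
```

Grounding the model in retrieved text, and forcing it to attribute each claim to a document id, is what allows citations to reduce hallucinations: an unsupported answer has no source to point at.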
AB: What are the critical performance considerations for enterprises looking to deploy LLMs today?
AG: The key considerations for our enterprise partners like Microsoft and Oracle when deploying LLMs are accuracy, efficiency, price point, and data privacy. Enterprises are looking for models with strong out-of-the-box performance that are scalable into production environments. Cohere has been committed to building our technology with data privacy at the core from the beginning, and when designing our latest Command R+ model, we focused on addressing the right combination of these business needs so that our AI technology can unlock real productivity gains.
AB: Why did Cohere decide to make the weights for Command R available?
AG: We have strongly supported the wider academic AI research community from the start, and that’s why we set up our non-profit research lab Cohere For AI. We released the weights of our Command R model family to help researchers advance breakthroughs for the industry and to allow our models to be independently evaluated. For all enterprise and commercial use, our models require a commercial license and will be continually improved for customers.
AB: How does open source contribute to innovation in the LLM space?
AG: The work of Cohere For AI and the wider research community helps to advance what’s possible with AI technology. We will continue to support the ML ecosystem with initiatives like research compute grants and open-source research releases like Aya, which makes LLMs more accessible by covering previously underserved languages and communities.
AB: As you look to what’s next, what are the kinds of innovations we can expect from future LLMs?
AG: LLMs will continue to improve and become core reasoning engines that can automate tasks and take real-world action. We see many promising use cases for enterprises to leverage AI across sectors like financial services, retail, and technology. In particular, we see a lot of opportunity with Tool Use, which upgrades model applications from simple chatbots into powerful agents and research tools that increase productivity for businesses.
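The chatbot-to-agent shift described above rests on a simple loop: the model proposes a tool call with arguments, the application executes it, and the result informs the next step. The tool registry and the hard-coded "model" below are hypothetical stand-ins for a real LLM that emits structured tool calls; none of this reflects Cohere's actual Tool Use API.

```python
# Minimal sketch of a tool-use dispatch loop. The tool registry and the
# rule-based "model" are hypothetical stand-ins for an LLM that emits
# structured tool calls.

def lookup_exchange_rate(currency):
    """Toy tool: return a fixed exchange rate for a currency code."""
    rates = {"EUR": 1.07, "GBP": 1.25}
    return rates.get(currency)

TOOLS = {"lookup_exchange_rate": lookup_exchange_rate}

def fake_model(query):
    """Stand-in for an LLM that maps a query to a structured tool call."""
    if "euro" in query.lower():
        return {"tool": "lookup_exchange_rate", "args": {"currency": "EUR"}}
    return None  # model decides no tool is needed

def run_agent_step(query):
    """One agent turn: get a tool call from the model, then execute it."""
    call = fake_model(query)
    if call is None:
        return "No tool needed."
    result = TOOLS[call["tool"]](**call["args"])
    return f"{call['tool']} -> {result}"

answer = run_agent_step("What is the euro exchange rate?")
```

In a production agent, the loop repeats: the tool result is appended to the conversation and the model decides whether to call another tool or produce a final answer.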
AI News This Week
- AI 50 List (Forbes)
This year’s AI 50 List includes Radical Ventures portfolio companies Cohere, Hebbia, and Waabi. The list recognizes the most promising privately-held artificial intelligence companies. The sector has never been more competitive. Forbes received some 1,900 submissions this year, more than double last year’s count. For the startups on AI 50, the technology has evolved from capturing customers’ imaginations to capturing billions of dollars in collective revenue.
- Is robotics about to have its own ChatGPT moment? (MIT Technology Review)
Researchers are using generative AI and other techniques to teach robots new skills. Radical Ventures portfolio company Covariant’s recently released multimodal model called RFM-1 is an example of how new models are managing unpredictable settings and varied conditions in the real world, beyond the capabilities of even the most advanced robot prototypes. The model can accept prompts in the form of text, image, video, robot instructions, or measurements. Generative AI allows the robot to both understand instructions and generate images or videos relating to those tasks.
- US steps up chip manufacturing in Arizona (TechCrunch)
The US Commerce Department has signed an agreement to award Taiwan Semiconductor Manufacturing (TSMC) $6.6 billion in direct funding under the CHIPS and Science Act to set up semiconductor factories in Phoenix, Arizona, and provide up to $5 billion in loans. This grant, pegged for the company’s US subsidiary, TSMC Arizona, is the latest step by the US to strengthen its domestic supply of semiconductors as it seeks to reshore manufacturing of chips amid escalating geopolitical tensions between the US and China.
- Inside Big Tech's underground race to buy AI training data (Reuters)
Tech giants initially used reams of data scraped from the internet for free to train generative AI models like ChatGPT that can mimic human creativity. They have said that doing so is both legal and ethical, though they face lawsuits from copyright holders over the practice. At the same time, these tech companies are also quietly paying for content locked behind paywalls and login screens, giving rise to a hidden trade in everything from chat logs to long-forgotten personal photos from faded social media apps.
- Foundation model for cancer imaging biomarkers (Nature Machine Intelligence)
Foundation models have transformed AI by training on vast amounts of broad, unlabelled data. Researchers in the Artificial Intelligence in Medicine (AIM) Program at Harvard trained a convolutional encoder via self-supervised learning on a comprehensive dataset of 11,467 radiographic lesions. The resulting foundation model enables more accurate, efficient and robust cancer imaging biomarkers, especially in use cases with small training datasets.
Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).