Radical Reads

Building AI Infrastructure

By David Katz, Partner


In the spring, the Canadian government unveiled a $2 billion investment in Canada’s AI ecosystem, announcing plans for an AI Compute Access Fund that will provide direct support to Canadian AI developers looking to access high-performance compute. This fund was announced alongside a broader Canadian AI Sovereign Compute Strategy that will guide the country’s efforts in developing AI infrastructure over the long term. Over the summer, the Canadian government engaged in a consultation process designed to inform the implementation of these two programs.

Radical’s Velocity team, which supports our portfolio companies across functions unique to scaling AI companies, includes a compute practice group that, together with Radical’s AI infrastructure investment team, has a unique lens into the challenges these companies face.

This week, David Katz, who leads Radical’s compute practice, shares a summary of Radical’s submission to the compute consultation.

A successful national AI compute infrastructure strategy must balance the immediate needs of Canada’s AI ecosystem with long-term strategic goals that position Canada as a global leader in AI commercialization and compute resources. To achieve this, several key components should be prioritized over the next five years.

Firstly, ensuring affordable and accessible compute for AI developers is paramount. Short-term subsidies for compute, regardless of its physical location, can bridge the gap until domestic infrastructure matures. Incorporating a clear transition plan to incentivize domestic compute capacity will foster long-term sustainability and economic benefits.

Subsidies should prioritize Canadian-headquartered organizations building and commercializing AI technologies (startups, scale-ups, research institutes, etc.) rather than suppliers (chipmakers, hyperscalers, data centre operators), enabling free-market forces to further strengthen the Canadian AI ecosystem. Consideration should also be given to Canadian software suppliers capable of optimizing high-performance compute, to ensure maximum value is extracted from any grant or subsidy.

Secondly, a coordinated procurement strategy is crucial to maximize purchasing power and leverage economies of scale. By aggregating demand, AI companies are able to negotiate better deals with hardware providers and reduce costs for the entire ecosystem.

Ultimately, domestic capacity for high-performance compute is critical to Canada’s security and economic sovereignty. Owning and controlling compute infrastructure within Canada provides a critical layer of resiliency and mitigates the risks associated with geopolitical tensions, trade disputes, and supply chain disruptions. Moreover, Canadian-owned infrastructure affords greater control over data privacy and security, reducing the potential for unauthorized access or breaches. Through tax or energy incentives, Canada can foster a robust domestic compute industry that strengthens its position as a global leader in the digital economy, attracting talent and investment from around the world.

Canada’s location, abundant resources, and talent also make it a prime candidate for becoming a leading exporter of compute and AI data centre services. Proximity to the world’s largest market, a northern climate, and access to renewable energy sources offer ideal conditions for building and operating large-scale data centres. By investing in AI infrastructure and fostering a supportive ecosystem, Canada can turn these unique advantages into economic opportunity.

While creating domestic supply is important, we also recommend that strategies and incentives to establish state-of-the-art supercomputing on Canadian soil leverage experienced global leaders across the technology stack. Supercomputers are dynamic and finicky technologies – let’s make sure that compute built in Canada is the best in the world. 

By creating a subsidy program that delivers on the immediate needs of Canadian AI companies, while also expanding the long-term capacity of our domestic infrastructure, we can help create the conditions necessary for Canada’s AI industry to thrive.

AI News This Week

  • The 100 most influential people in AI 2024  (Time Magazine)

    The TIME100 AI list highlights the influential leaders shaping the future of artificial intelligence. Among the honorees are two key figures from Radical Ventures portfolio company Cohere. Patrick Lewis, Director of Machine Learning at Cohere, was recognized for his work developing retrieval-augmented generation (RAG), a technique that enhances AI models with domain-specific knowledge by drawing on external texts. Lewis continues to expand RAG by developing systems that cite external sources, aiming for AI-generated content that is fully transparent and verifiable. Sara Hooker, VP of Research and Head of Cohere for AI, leads Cohere’s nonprofit research arm. Her lab works on increasing AI efficiency and addressing data quality, with a focus on broadening the accessibility of AI research. A toy sketch of the RAG pattern appears below.
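
    For readers less familiar with the technique, the sketch below illustrates the basic RAG pattern described above: retrieve relevant passages, then prompt a model with those passages tagged so the answer can cite them. It is a toy Python outline, not Cohere’s API; the keyword-overlap retriever, the stubbed generate() call, and all names here are illustrative placeholders.

```python
# Toy sketch of retrieval-augmented generation (RAG) with citable sources.
# Illustrative only: the retriever is a simple word-overlap ranker and
# generate() is a stub standing in for any language model call.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[tuple[int, str]]:
    """Rank documents by word overlap with the query and return the top k with their indices."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set(doc.lower().split())), idx, doc)
        for idx, doc in enumerate(documents)
    ]
    scored.sort(reverse=True)
    return [(idx, doc) for _, idx, doc in scored[:k]]


def generate(prompt: str) -> str:
    """Stub for a language model call; a real system would send the prompt to an LLM."""
    return f"[model answer grounded in, and citing, the sources above] ({len(prompt)}-char prompt)"


def answer_with_citations(query: str, documents: list[str]) -> str:
    """Build a prompt from retrieved passages, each tagged so the answer can cite it by index."""
    passages = retrieve(query, documents)
    context = "\n".join(f"[{idx}] {doc}" for idx, doc in passages)
    prompt = (
        "Answer the question using only the sources below, citing them by [index].\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)


if __name__ == "__main__":
    docs = [
        "RAG augments a language model with passages retrieved from an external corpus.",
        "Supercomputers combine thousands of accelerators with high-bandwidth interconnects.",
        "Citing retrieved sources makes generated answers easier to verify.",
    ]
    print(answer_with_citations("How does RAG make answers verifiable?", docs))
```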

  • With $50M in new funding, You.com thinks its AI can beat Google on hard questions   (TechCrunch)

    Radical Ventures portfolio company You.com closed its $50 million Series B round led by Georgian, with participation from DuckDuckGo, Nvidia, and Salesforce Ventures. CEO and Founder Richard Socher emphasizes that while Google excels at simple queries, You.com is focused on more demanding tasks, like summarizing complex research or creating detailed investment plans. In these types of tasks, the query itself needs to be examined first so the agent can arm itself with the proper information and techniques to answer the question accurately. Large enterprises are using the product to handle queries within their own systems, with some companies using the tool millions of times per day.

  • AI breakthrough raises hopes for better cancer diagnosis  (Financial Times)

    Harvard Medical School researchers have developed an AI model called Chief that improves cancer diagnosis by detecting various tumour types and predicting outcomes. AI models are proving an increasingly useful ally to medical professionals in imaging because of their speed and pattern-spotting ability. Chief achieved nearly 94% accuracy in cancer detection, with up to 96% accuracy for tumours in the esophagus, stomach, colon, and prostate. Its ability to link tumour cell patterns to specific genomic aberrations could help suggest the best treatments without the need for costly and slow DNA sequencing. Chief is part of the growing field of diagnostic AI foundation models: trained on vast amounts of unlabeled data and usable in multiple clinical contexts for different purposes with minimal tweaking, these models improve the accuracy of diagnosis and prognosis in patients.

  • Here’s how ed-tech companies are pitching AI to teachers  (MIT Technology Review)

    As the back-to-school season begins, an increasing number of ed-tech companies are promoting AI tools to help teachers save time on tasks like grading, lesson planning, and providing feedback. Over the past year, educators and policymakers have also advocated for AI integration in classrooms. Education departments in South Korea, Japan, Singapore, and US states like North Carolina and Colorado have issued guidelines on the safe and effective use of AI by teachers. However, many teachers remain uncertain about delegating certain responsibilities to AI, particularly for core tasks like planning and grading, where the teacher’s role is crucial to student learning.

  • Research: OLMES: A Standard for Language Model Evaluations  (Allen Institute for Artificial Intelligence/University of Washington)

    Progress in AI is often demonstrated by new models claiming improved performance on tasks measuring model capabilities. Evaluating language models in particular is challenging, as small changes to how a model is evaluated on a task can lead to large changes in measured performance. There is no common standard setup, meaning that different models are evaluated on the same tasks in different ways, and claims about which models perform best are often not reproducible. Over the summer the Allen Institute for AI, a research-focused non-profit founded by the late Microsoft co-founder and philanthropist Paul Allen, released OLMES (Open Language Model Evaluation Standard), a standard to improve the transparency and reproducibility of language model evaluation. In particular, the researchers aim to remove ambiguity in how a final performance metric is obtained when evaluating a model on a dataset. OLMES can be applied during model development, on leaderboards, and in papers, and provides justified recommendations on data sampling, instance formatting, the choice of in-context examples, probability normalization, and task formulation. The toy example below illustrates how a single scoring choice can flip a measured result.
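
    To make the reproducibility problem concrete, here is a small illustrative example. It is not OLMES’s own code and the log-probabilities are invented; it simply shows that two common conventions for scoring a multiple-choice answer, raw log-probability versus length-normalized log-probability, can rank the same candidates differently, which is the kind of ambiguity a shared standard is meant to pin down.

```python
# Illustrative only: the scoring convention alone can change a multiple-choice
# evaluation result. The log-probabilities are invented for demonstration and
# this is not OLMES's implementation.

def pick_by_raw_logprob(options: dict[str, float]) -> str:
    """Pick the answer whose full continuation has the highest total log-probability."""
    return max(options, key=options.get)


def pick_by_length_normalized(options: dict[str, float]) -> str:
    """Pick by per-character log-probability, a common normalization that
    compensates for answers of different lengths."""
    return max(options, key=lambda ans: options[ans] / len(ans))


if __name__ == "__main__":
    # Hypothetical scores a model might assign to two candidate answers.
    answer_logprobs = {
        "Paris": -4.0,                       # short answer, higher total probability
        "The capital city of France": -9.0,  # long answer, lower total probability
    }
    print("raw log-prob pick:      ", pick_by_raw_logprob(answer_logprobs))
    print("length-normalized pick: ", pick_by_length_normalized(answer_logprobs))
    # The two picks disagree here, so a reported accuracy would depend on which
    # convention the evaluator happens to use unless the setup is standardized.
```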

Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).