Image source: Kate O’Rourke
“The one thing that was clear for me is that I did not want to go anywhere. The talent is here.” – Raquel Urtasun, CEO and Founder of Waabi, New York Times, March 21, 2022.
On March 30, 2017, almost five years ago to the day, the Vector Institute for AI was launched in Toronto. When Tomi Poutanen, Ed Clark (now partners at Radical), AI pioneer Geoff Hinton and I founded Vector, along with a group of University of Toronto AI professors, the aim was to establish Canada as a global AI powerhouse – one of the three leading ecosystems in the world over the next 30+ years.
We believed that would be achieved by creating an ecosystem that starts with attracting and retaining the best research talent in the world. At that point, many of the best people in AI globally were from Toronto but had left to lead AI at places like Google, DeepMind, Apple, Facebook, Tesla and more. When we asked why, the common answer was the lack of job opportunities in Canada outside of the rare opening for a professor. Our own experience at Layer 6 when recruiting superstars from the Bay Area or London was similar: people often told us that leaving a great, high-paying job, pulling their kids from school, having a spouse quit their job, selling the house and possibly paying exit taxes to move to Canada was a big risk, because if the one job didn’t work out, there were few other options for their AI skill sets.
So the first goal of Vector was to create a critical mass of talent. Our bet was that great talent is a magnet for more great talent. By focusing first on star academics and what is required to attract and retain them (including the then fairly novel opportunity to remain an academic while also working in a company), you also attract great graduate students and a pipeline of undergrads through the affiliated universities. This is particularly important in a field such as AI, where global demand far outstrips the supply of skilled workers. So that educational talent pipeline itself draws companies, big and small. Once you have enough companies in the ecosystem, the graduating students will stay local for work. Some will start companies right out of school, and others will work for big companies where they are well trained before some leave to found or join startups. Moreover, that density of companies big and small de-risks a move to the ecosystem for top-tier talent. Now, if that initial job doesn’t work out, there are many other companies and job opportunities.
We believed that properly funding the academics was a very cheap way to start an innovation economy flywheel – no other tax incentives or government financial incentives would be required. So fundraising for Vector focused on a combination of federal, provincial and corporate sponsorship, with the money being used to pay leading researchers. In the case of the companies, the cross-section of Canadian sponsors included banks, insurance companies, airlines and car parts manufacturers, in addition to tech companies like Shopify. Foreign-headquartered companies could sponsor if they also committed to opening an AI lab in the city (not just an office with salespeople or engineers). The result was that many large corporate AI labs in Toronto were launched within a few months, with companies relocating their existing employees and also attracting new talent. Now the flywheel was starting to spin!
We knew from the first Vector business plan that we wrote that local, smart, highly experienced and ambitious venture capital could be transformational to the startups that would inevitably spin out of Vector and the orbit of companies around it. Our vision was to create new global AI companies based in Canada. Only later did we decide to sell Layer 6 and launch the VC firm – Radical Ventures – to help make that dream a reality.
Five years later, that bet is paying off. Toronto’s ecosystem (and Canada more broadly) is spinning out an explosion of startups. Nick Frosst, Geoff Hinton’s first hire at Google’s AI lab in Toronto, would leave the search giant in 2020 to start the natural language processing platform Cohere, alongside Google Brain researcher Aidan Gomez and Ivan Zhang. Radical helped support Cohere’s creation and invested at incorporation. The company announced its $125 million Series B in February.
Similarly, Raquel Urtasun, a co-founder of Vector and the Chief Scientist at Uber ATG, left the ride-sharing company’s self-driving lab and launched Waabi last year, landing one of the largest Series A rounds ever raised in Canada. Again, Radical was among the first investors, as we have been in BenchSci, PocketHealth, Promise Robotics, UntetherAI and other startups that have benefited from Toronto’s incredible tech expansion. The city is now the third-largest tech talent market in North America.
2017 was also the year that Cade Metz, a technology reporter for the New York Times, first visited Toronto. He was in town to interview Geoff Hinton, beginning research on what would eventually become ‘Genius Makers’ – a wonderful telling of the modern history of AI. Earlier this year, we spoke to Cade, urging him to return to Toronto and explore its incredible transformation.
This week, Cade’s feature in the New York Times reflects the incredible flywheel momentum we are witnessing in Toronto – an ecosystem built by some of the best AI and tech talent on the planet, whose world-changing startups inspire us every day.
5 Noteworthy AI and Deep Tech Articles: week of March 28, 2022
1) ClimateAi researchers achieve AI breakthrough in predicting local weather events (Meteorological Technology International)
California-based ClimateAi, a climate resilience platform and Radical Ventures portfolio company, tackled a critical weather forecasting challenge by leveraging generative adversarial networks (GANs). The new model downscales global forecasts to be as accurate as local forecasts without requiring the vast computational, financial, and human resources previously needed at such a fine scale. Low-income countries could benefit from this technology: access to accurate, high-resolution forecasts may help residents and local industries prepare for climate change risks. ClimateAi researchers also expect this method to work for longer-range forecasts (weeks, months, years, decades), where the need to increase resolution is even greater. The findings will be presented at AISTATS on March 28, 2022.
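To make the GAN-downscaling idea concrete, here is a minimal toy sketch of the structure such a model takes: a generator maps a coarse global-model forecast (plus noise) to a fine-grid field, and a discriminator scores whether a fine-grid field looks like a real local observation. This is not ClimateAi’s actual model – all dimensions, layer sizes and names below are illustrative assumptions, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: "downscale" a 4-cell coarse forecast to a 16-cell fine grid.
COARSE, FINE, NOISE = 4, 16, 8

def init(shape):
    return rng.normal(0.0, 0.1, shape)

# Generator: coarse forecast + noise -> fine-grid forecast (one hidden layer).
G_W1, G_b1 = init((COARSE + NOISE, 32)), np.zeros(32)
G_W2, G_b2 = init((32, FINE)), np.zeros(FINE)

def generator(coarse, noise):
    h = np.tanh(np.concatenate([coarse, noise]) @ G_W1 + G_b1)
    return h @ G_W2 + G_b2

# Discriminator: fine-grid field -> probability it is a real local observation.
D_W1, D_b1 = init((FINE, 32)), np.zeros(32)
D_W2, D_b2 = init((32, 1)), np.zeros(1)

def discriminator(fine):
    h = np.tanh(fine @ D_W1 + D_b1)
    return 1.0 / (1.0 + np.exp(-(h @ D_W2 + D_b2)))  # sigmoid

coarse_forecast = rng.normal(size=COARSE)            # one coarse-cell block
fine_forecast = generator(coarse_forecast, rng.normal(size=NOISE))
p_real = discriminator(fine_forecast)
```

In adversarial training, the discriminator would be fit to tell generated fine grids from observed ones while the generator learns to fool it – which is what lets the model produce sharp local detail rather than the blurry averages a plain regression loss tends to give.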
2) What do psychedelic drugs do to our brains? AI could help us find out (MIT Technology Review)
A research team at McGill University in Canada is conducting one of the most extensive studies on the use of psychedelics for mental illness. The team uses NLP to study 6,850 written accounts of hallucinogenic drug use posted to the online group Erowid. Each report was written by a person who took one of 27 drugs — including ketamine, MDMA, LSD and psilocin — in a real-world setting rather than as part of a lab-based experiment. The work could shed light on how hallucinogens trigger specific mental states, whether euphoria or anxiety, and could inform drug development. The analysis complements traditional studies, but any conclusions must be validated and verified within controlled clinical trials.
3) Benchmarking Indic languages for NLG (arXiv)
Rapid progress in natural language generation (NLG) model building for English language applications is driven by the extensive datasets that exist in English. Progress in other languages has been slower. A research and industry consortium has built IndicNLG, a suite of open-source evaluation datasets for Indic languages including Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil and Telugu (plus English). India is home to 1.4 billion people (18% of the world population), but “only 10% of the population speaks English whereas a significant portion of the remaining 90% speaks one or more of the 22 official Indian languages.” IndicNLG will help move the needle for generative NLP use cases in these Indic languages.
4) AI is revolutionizing prosthetic arm control (Discover Magazine)
In myoelectric-powered prosthetic limbs, electrodes placed on the skin read muscle contractions, which in turn drive the limb’s movement. It can take time to get used to moving the limb and to figure out the right way to contract muscles to make the arm work as intended. Neuroscientists do not yet know how to accurately decode the signals the brain sends through nerves to control muscles, which makes nerve signals hard to interpret. What’s needed is a way to measure and decode those signals so they can be used to control prosthetic arm, hand and finger movement intuitively. Diu Khue Luu and Anh Tuan Nguyen of the University of Minnesota, with colleagues, have found a way to do this using an AI decoder that learns the user’s intention from the nerve signals it senses in the arm.
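A decoder of this kind can be pictured as a classifier from nerve-signal features to an intended movement. The toy sketch below is an assumption for illustration only – it is not the Minnesota team’s model, and the gesture labels, feature count and untrained weights are all made up – but it shows the shape of the mapping: signal features in, a probability over intents out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative intent classes and feature size (hypothetical, not from the paper).
GESTURES = ["rest", "pinch", "point", "grasp"]
N_FEATURES = 6  # e.g. per-channel amplitude features from the nerve interface

# Untrained softmax-classifier weights; a real decoder would learn these
# from recordings paired with the user's intended movements.
W = rng.normal(0.0, 0.1, (N_FEATURES, len(GESTURES)))
b = np.zeros(len(GESTURES))

def decode(features):
    """Map one window of nerve-signal features to (gesture, class probabilities)."""
    logits = features @ W + b
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return GESTURES[int(np.argmax(probs))], probs

gesture, probs = decode(rng.normal(size=N_FEATURES))
```

Running the decoder on each incoming window of features would yield a stream of predicted intents that the prosthesis can act on in real time.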
5) How Native Americans are trying to debug AI’s biases (New York Times – subscription may be required)
“More representative, accurate data is beneficial not only to the groups it represents, but also to AI systems at large.” Initiatives such as Intelligent Voices of Wisdom (IVOW) are using AI to preserve culture and counter biases with improved data and processes. The team at IVOW, along with a group of volunteer collaborators and advisers, is developing proofs of concept for cultural engines – smart data sets that can feed more inclusive AI tools, including chatbots and image-recognition apps. This article is part of a New York Times series on AI’s potential to solve everyday problems.
Share this edition of Radical Reads