This week, we share an excerpt from Unlearn’s Founder and CEO Charles Fisher’s writings on the company blog. Unlearn has devised a powerful, fascinating way to apply AI to improve clinical trials. Using advanced biostatistics and machine learning, Unlearn creates “digital twins” of clinical trial participants, reducing the number of actual humans needed for control arms. This makes clinical trials cheaper and faster to execute, enabling life-changing therapeutics to reach patients sooner.
The Artistic Expression of Science
Science is often perceived as a purely rational and logical endeavor. We say that a subject is more of an art than a science to express that practitioners are guided by intuition rather than knowledge or logic. Science, it seems, is the opposite of art. And scientists are perceived as hyper-rational truth seekers driven by data and logic above all else. There is probably no sentiment I disagree with more strongly than the one expressed above. In my view, science is art. Being a scientist is to dedicate yourself to a life of creativity above all else. Well, to a life of practical creativity, let’s say.
Science as Artistic Expression
I view science as a form of art in which one uses mathematics to paint reality. This perspective highlights the creative process inherent in both scientific and artistic endeavors. Just as an artist uses brushes and colors to capture the essence of their subject, a scientist uses mathematical models and equations to describe the complexities of the natural world.
Artificial Intelligence is evolving like species do. I don’t mean that AI systems are a new species. Rather, different computational environments (CPU, GPU, small memory, large memory, etc.), diverse data resources (images, text, tabular, time series), and various selection pressures (loss functions, etc.) create a vast array of AI types, much like biological species in different niches. The researcher’s role is not to discover an ultimate truth or create the perfect AI – such an entity doesn’t exist. Instead, the goal is to develop and express personal intuition and tastes through AI research, and to train models that can be useful.
The primary objective for AI researchers is to cultivate a universal and predictive intuition. This means understanding the principles underlying AI behavior in a way that can be broadly applied across different contexts. The secondary goal is to express this intuition by producing tangible artifacts such as formulas, programs, papers, and presentations. These outputs are the researcher’s way of communicating their unique insights and understanding.
Inspiration is key to AI research. It often stems from identifying something that feels off about an existing method, combining two existing ideas in a novel way, or noticing unique characteristics in a dataset. The initial phase of research should focus solely on thinking about the problem. No papers, whiteboards, LaTeX, or code – just pure imagination and visualization. Researchers should try to visualize the algorithm with geometric clarity, simplifying the problem to its core essence.
Discussions with colleagues are vital. It doesn’t matter if they don’t get it as long as you get it and, in the end, it works. The development process should be iterative, repeating the initial steps until the researcher feels ready to move on, whether that takes hours or months. Creating a toy problem that retains all essential elements but nothing else is the next step. This simplified problem provides a concrete and manageable testing ground for the initial model.
Coding should begin with a quick, rough draft, accompanied by extensive comments to describe the intuition behind the code. Experimentation should be incremental, with changes made one at a time to observe their effects. Only after fully understanding and feeling confident in the simpler cases should scale or complexity be added.
To add structure to this creative and iterative process, consider the following principles from practical machine learning research:
Build the baseline first: Start with a simple method to establish a benchmark. This provides a comparison point for new methods and helps develop an initial understanding of the problem.
Do things that don’t scale: Initially, focus on rapid, iterative experimentation without worrying about scalability. This helps develop intuition before committing to more complex implementations.
Hone your physical intuition: Engage deeply with the problem, thinking through results and iterating until something clicks. Imagine it, visualize it, rather than reason through it. This intuitive understanding guides effective experimentation.
Get rid of things that don’t work: Regularly clean up and remove non-working methods to avoid biases and streamline future experiments. Focus on refining what works.
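The baseline-first principle above can be sketched with a small toy example. Everything here – the function names, the toy dataset, and the trivial majority-class predictor – is illustrative and not from the original text; it simply shows the shape of the practice: score a deliberately simple method first, so every later idea has a benchmark to beat.

```python
# A minimal sketch of "build the baseline first": before trying anything
# clever, measure how well a trivial predictor does on a toy problem.

def majority_baseline(train_labels):
    """Return a predictor that always outputs the most common training label."""
    most_common = max(set(train_labels), key=train_labels.count)
    return lambda _x: most_common

def accuracy(predict, xs, ys):
    """Fraction of examples where the predictor matches the true label."""
    return sum(predict(x) == y for x, y in zip(xs, ys)) / len(ys)

# Toy dataset retaining only the essentials: classify integers as
# "small" (below 6) or "big" (6 and above).
xs = list(range(10))
ys = ["small" if x < 6 else "big" for x in xs]

baseline = majority_baseline(ys)
benchmark = accuracy(baseline, xs, ys)  # the score any new method must beat
print(benchmark)
```

Only once a proposed method clearly outperforms this kind of benchmark on the toy problem is it worth the cost of scaling up.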
Optimization of the model’s performance should be approached as post-production, fine-tuning details after establishing a deep understanding. If intuition fails at any point, revisiting the start is necessary. The final step is writing a comprehensive paper to share the developed intuition, focusing on personal expression rather than conforming to specific journal requirements. Sharing the work, both internally and publicly, is important, but the primary goal is to express and communicate the researcher’s unique insights.
By blending these structured principles with the artistic and intuitive approach to AI research, researchers can navigate the challenges of both fundamental and applied work, ultimately contributing meaningful and innovative solutions to the field.
Intuition. Simplicity. Creativity. Experimentation. Iteration. Imagination. Craft. Aesthetics. Taste.
In my opinion, these are the ingredients to great science. The keys to making new discoveries. Therefore, the path of the scientist is more like that of an artist than one ruled by logic, data, and analysis. That’s what I believe. And as usual, someone already said it better than I can.
“I am enough of an artist to draw freely upon my imagination. Imagination is more important than knowledge. For knowledge is limited, whereas imagination encircles the world.” – Albert Einstein
AI News This Week
- How AI is revolutionizing drug development (New York Times)
AI algorithms can now design drugs more effectively and quickly after being trained on datasets of molecular information and protein structures. This shift promises to make drug discovery faster and more cost-efficient, shortening a typical discovery cycle that can take 10 to 15 years and cost $1 billion. Radical portfolio companies Nabla Bio, Intrepid Labs, and Genesis Therapeutics are harnessing AI to address complex drug targets, optimize formulations, and accelerate the path from discovery and design to patient treatment.
- Waabi’s genAI promises to do so much more than power self-driving trucks (TechCrunch)
Radical portfolio company Waabi has raised $200 million in Series B funding led by Uber and Khosla Ventures, with participation from Nvidia and others. Radical, one of Waabi’s original investors, also participated in this round. Founded in 2021, Waabi deploys self-driving trucks using its advanced simulator, Waabi World, which trains AI without extensive real-world data. Founder and CEO Raquel Urtasun envisions broader applications for Waabi’s technology beyond trucking, including robotaxis and warehouse robotics, leveraging the ability to generalize and learn from limited data.
- Toronto’s Untether straps in for growth selling AI chips (The Globe and Mail)
Untether AI, a Radical portfolio company, announced the launch of its second-generation chips, which promise vastly better performance. Untether’s novel architecture reduces the distance data has to travel, boosting efficiency by up to nine times versus traditional chip architectures used by the biggest chip companies. The company is targeting vision-guided systems in robotics, farming, and autonomous vehicles. Former Intel senior executive Chris Walker recently joined the company as CEO, alongside Google’s former Head of Silicon, Amir Salek, who is now Untether AI’s senior technical adviser.
- Responsible AI playbook for investors (World Economic Forum)
This white paper delves into the crucial role investors play in promoting responsible AI technologies. Drawing on comprehensive research and insights from over 100 stakeholder interviews – including contributions from Leah Morris, Senior Director of Velocity at Radical Ventures – the paper presents a suite of tools and resources. Notably, it includes Radical Ventures’ “Responsible AI for Startups (RAIS)” framework. The resources provide strategies for investors to collaborate effectively with corporate boards, investment partners, and the wider ecosystem to foster responsible development. The playbook highlights the importance of mitigating risks and complying with regulatory demands while simultaneously spurring innovation and growth.
- Empathic AI can’t get under the skin (Nature)
In the 1960s, Joseph Weizenbaum observed humans attributing lifelike qualities to ELIZA, one of the earliest natural language processing programs. Today’s LLMs are far more sophisticated and capable of convincingly mimicking human-like conversations, including expressions of empathy. These models may significantly benefit the field of cognitive therapy by providing support and understanding. Researchers like Garriy Shteynberg and Hannah Kirk are guiding the development of these technologies by emphasizing the importance of ethical considerations and the responsible integration of AI into our lives. This approach promises not only to safeguard user well-being and privacy but also to enhance the utility and emotional responsiveness of AI systems, ultimately making them more effective and trusted companions in our daily lives.
Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).