Curated deep tech and AI content that humans at Radical are reading and thinking about. Sign up here to have Radical Reads delivered directly to your inbox every week.
1) AI Simulation Tools: Watch artificial intelligence learn to simulate sloppy mixtures of water, sand, and ‘goop’ (Science)
“This week at the International Conference on Machine Learning, the AI company DeepMind presented a new type of model called a Graph Network-based Simulator (GNS). The program can realistically recreate the interactions between tens of thousands of particles of different materials, lasting thousands of animation frames.
…In experiments, the new system was more accurate and better at generalizing various phenomena than competing AI approaches, despite its simplicity. Beyond cool visuals, the researchers hope the method can help machines reason about the world around them.”
Radical Commentary: Scientists and special effects professionals have long used physics simulation software, also known as physics engines, to design video games and scientific experiments. Whereas traditional simulation programs require extensive time and expertise to develop, this AI approach learns to simulate complex physical phenomena through observation. The applications go beyond compelling visual effects to include natural disaster planning and improved machine reasoning about the physical world.
The resulting visualization is also a compelling means for understanding machine prediction. The approach uses graph networks to represent a scene as interacting particles in a network passing ‘messages’ to each other about their positions, velocities, and material properties. These messages and the particle responses are learned through trial and error by comparing forecasts with those of traditional physics engines. Once trained, the system can generalize to never-before-seen situations — predicting the behaviour of many times more particles, or what would happen if you added more obstacles like ramps, or shook up the box.
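To make the message-passing idea concrete, here is a minimal, illustrative sketch in which a hand-written repulsion rule stands in for the learned message and update functions. In the actual GNS, both of those functions are neural networks trained by comparing rollouts against a physics engine; everything below is an assumption for illustration only:

```python
import numpy as np

def step(positions, velocities, radius=0.5, dt=0.1):
    """One message-passing step: each particle aggregates 'messages'
    from neighbours within `radius`, then integrates its velocity.
    Here the message is a hand-written repulsive displacement; in the
    real GNS it is a learned neural network function."""
    n = len(positions)
    messages = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = positions[i] - positions[j]
            dist = np.linalg.norm(diff)
            if 0 < dist < radius:
                # repulsion stands in for a learned edge message
                messages[i] += diff / dist * (radius - dist)
    velocities = velocities + messages * dt
    positions = positions + velocities * dt
    return positions, velocities
```

Iterating `step` rolls the particle system forward one animation frame at a time. Because the message function depends only on local, relative quantities, a learned version of it can generalize to scenes with far more particles or obstacles than it saw in training.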
2) AI Fraud Detection: How Visa harnessed data and AI to prevent $25 billion in fraud (Venture Beat)
“The current system represents an evolution of a fraud detection service originally deployed in 1993. Today, the system uses recurrent neural networks along with gradient boosted trees. McSherry said having a defined use case — fraud detection — has allowed Visa to remain focused on how AI and ML can help improve services…
…As a rule, Visa continues to look for use cases where AI and ML could deliver at least a 20% to 30% efficiency increase. In some cases, Visa has seen 100% increases in processes when it’s applied advanced AI techniques such as deep learning neural networks.”
Radical Commentary: While it’s no surprise that massive payment processors like Visa are investing in AI to improve fraud detection, the sheer magnitude of the savings is impressive. Visa is not alone in this regard — on Wall Street, firms like JPMorgan report that AI is already helping them save $150M per year on fraud prevention.
Much is written about the dangers of AI, but we are already seeing many scenarios where it serves as a helpful preventative measure, from blocking malicious attacks to preventing fraud and strengthening cybersecurity.
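Gradient boosted trees, one of the model families the article mentions, fit a sequence of small trees in which each tree corrects the residual errors of the ensemble so far. The toy sketch below uses single-feature decision stumps and made-up transaction amounts; it is purely illustrative and bears no relation to Visa's actual production system, which uses many features and far larger models:

```python
def fit_stump(xs, residuals):
    """Find the threshold split on one feature that minimises
    squared error against the current residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lm if x <= t else rm)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

def boost(xs, ys, rounds=20, lr=0.3):
    """Gradient boosting for squared loss: each stump fits the
    residuals left by the ensemble so far, and its (scaled)
    predictions are added to the running total."""
    stumps, preds = [], [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        s = fit_stump(xs, residuals)
        stumps.append(s)
        preds = [p + lr * s(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# made-up toy data: large transaction amounts are fraudulent (label 1)
xs = [20, 35, 50, 60, 80, 900, 950, 1200]
ys = [0, 0, 0, 0, 0, 1, 1, 1]
score = boost(xs, ys)
```

A real fraud model would score each transaction over many features (amount, merchant, velocity, device, and so on) rather than a single amount, but the residual-fitting loop is the core of the technique.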
3) Responsible AI: An update on our work on AI and responsible innovation (Google Blog)
“AI is a powerful tool that will have a significant impact on society for many years to come, from improving sustainability around the globe to advancing the accuracy of disease screenings. As a leader in AI, we’ve always prioritized the importance of understanding its societal implications and developing it in a way that gets it right for everyone.”
Radical Commentary: Google provided an update to its AI Principles, originally released in 2018 to set out the company’s commitment to developing AI technology responsibly and to establish specific application areas it will not pursue, such as those that violate human rights or enable surveillance that contravenes international norms.
Operational initiatives Google says it has undertaken over the past twelve months include:
- Training employees to spot issues that run counter to the company’s AI Principles.
- Researching tools and techniques to improve fairness and human-readable interpretation of machine learning systems.
- Developing better means to communicate the intent, use and performance of AI models to non-AI experts.
- An iterative process (sometimes involving external bodies) to review proposals for AI research and applications and their alignment with the AI Principles.
- External engagement to increase the number and variety of outside perspectives, especially groups who are currently underrepresented in the technology industry, and who are most likely to be impacted by the consequences of AI and advanced technology.
At Radical, we believe AI is a transformative tool that has the potential to solve some of our world’s greatest societal, business and environmental challenges. However, guidelines establishing the safe and responsible application of this technology are just as important as the applications themselves. We will continue to revisit this topic.
4) AI, Robotics and Research: Robotic lab assistant is 1,000 times faster at conducting research (The Verge)
“Researchers have developed what they say is a breakthrough robotic lab assistant, able to move around a laboratory and conduct scientific experiments just like a human.
In a trial reported in Nature today, the robot’s creators, led by PhD student Benjamin Burger, say it was able to perform experiments 1,000 times faster than a human lab assistant, with that speed-up mostly due to the robot’s ability to work around the clock without breaks.
But Professor Andy Cooper, whose lab developed the robot, tells The Verge that speed is not necessarily the point. The main benefit of a tool like this, he says, is that it allows scientists to explore avenues of research they wouldn’t waste a human’s time on.”
Radical Commentary: This article and accompanying video, showcasing a robot designed by chemists at the University of Liverpool in the UK, underscore a paradigm shift toward automating “the researcher, rather than the instrument.” The robot carried out 688 experiments over an eight-day period, working 22 hours a day, with the aim of finding more efficient chemical reactions. It did so using an optimization algorithm that gave it decision-making abilities, and LIDAR to map the environment in which it worked. In the short term, this solution has already generated returns by allowing critical lab work to continue in the midst of the current pandemic.
Our view is that technologies like this will play an even more important role over the longer term. They will enable new and faster discoveries by augmenting human researchers, who will gain more time to devote to higher-value work. They will also serve as a key lever in transitioning the life sciences industry from a relatively manual craft into an industrialized, automated one.
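The robot’s closed loop can be caricatured as: propose reaction conditions, run the experiment, record the outcome, repeat. In the sketch below, simple random search stands in for the batched Bayesian optimizer described in the paper, and `run_experiment` is a made-up yield function rather than a physical measurement; the names and parameter ranges are illustrative assumptions, not the actual experimental setup:

```python
import random

def run_experiment(conditions):
    """Stand-in for the robot physically running a reaction and
    measuring a hypothetical yield; peaks at temp=60, catalyst_mg=5."""
    temp, catalyst_mg = conditions
    return -(temp - 60) ** 2 - (catalyst_mg - 5) ** 2

def optimise(n_trials=200, seed=0):
    """Closed-loop search: propose conditions, 'run' the experiment,
    keep the best result seen so far. Random search stands in for
    the paper's batched Bayesian optimiser."""
    rng = random.Random(seed)
    best_conditions, best_yield = None, float("-inf")
    for _ in range(n_trials):
        conditions = (rng.uniform(20, 100), rng.uniform(0, 10))
        y = run_experiment(conditions)
        if y > best_yield:
            best_conditions, best_yield = conditions, y
    return best_conditions, best_yield
```

The point of automating the loop is throughput: a robot can evaluate hundreds of proposed conditions around the clock, which is exactly what made 688 experiments in eight days feasible.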
5) Software Engineering: Full Stack Deep Learning
“Since 2012, deep learning has led to remarkable progress across a variety of challenging computing tasks, from image recognition to speech recognition, robotics, and audio synthesis. Deep learning has the potential to enable a new set of previously infeasible technologies like autonomous vehicles, real-time translation, and voice assistants and help reinvent existing software categories.
There are many great courses to learn how to train deep neural networks. However, training the model is just one part of shipping a deep learning project. This course teaches full-stack production deep learning:
- Formulating the problem and estimating project cost
- Finding, cleaning, labelling, and augmenting data
- Picking the right framework and compute infrastructure
- Troubleshooting training and ensuring reproducibility
- Deploying the model at scale”
Radical Commentary: One of the things we look for in the companies we partner with is the founding team’s ability to deploy deep learning products at scale — this was one of our key differentiators at Layer 6. Full Stack Deep Learning looks like a good resource for companies looking to bridge the gap from training machine learning models to deploying AI systems in the real world. The course was created by Pieter Abbeel, co-founder of Covariant (a Radical portfolio company), along with Josh Tobin and Sergey Karayev, and sets out to cover the various elements required to ship a machine learning software product.
Editor’s Note: We will continue to use this platform to share, without commentary, articles that use data to illustrate and illuminate racial injustice. Because you cannot fix problems you cannot see or understand.
6) African Americans got left out of the urban economic boom (Quartz)
“Over the past couple of decades, American cities replaced their abandoned downtowns with gleaming new residential developments, fancy restaurants, and office buildings with high-paying jobs.
On average, the economies of the biggest metro areas in the US doubled in size between 2001 and 2018, according to data from the Bureau of Economic Analysis.
But Black Americans were largely left out of the urban economic boom. From homeownership to income, they have consistently lagged behind their white counterparts. These disparities are adding fuel to the protests sparked by the death of George Floyd and other cases of police brutality.”
7) The Fullest Look Yet at the Racial Inequity of Coronavirus (New York Times)
“Early numbers had shown that Black and Latino people were being harmed by the virus at higher rates. But the new federal data — made available after The New York Times sued the Centers for Disease Control and Prevention — reveals a clearer and more complete picture: Black and Latino people have been disproportionately affected by the coronavirus in a widespread manner that spans the country, throughout hundreds of counties in urban, suburban and rural areas, and across all age groups.”
— R —