Radical Reads

By Radical Editorial

Curated deep tech and AI content that humans at Radical are reading and thinking about. Sign up here to have Radical Reads delivered directly to your inbox every week.

AI News This Week

  • Radical Partner Salim Teja on Canada’s Brain Gain: Growing High-Tech Talent in Canada  (Ivey Podcast)

    In this Ivey Business School podcast, Radical Partner and Head of Impact Salim Teja explains why world-class tech talent is fueling a network of innovative startups in Canada. This brain gain is helping a new generation of technology startups and entrepreneurs establish roots in the country. Salim draws on Radical portfolio companies as examples throughout the discussion.

  • Enterprise Adoption of AI: AI and machine learning: Powering the next-gen enterprise  (CIO)

    “A July CIO Pandemic Business Impact Survey asked a more provocative question: “How likely is your company to increase consideration of AI/ML as a way to flatten or reduce human capital costs?” Nearly half, 48 percent, were either very or somewhat likely to do so. The implication is that, as the economic downturn deepens, the demand for AI/ML solutions may well intensify.”

    Radical Commentary: At the macro level, we expect many industries across the global economy to become more resilient by adopting AI in the wake of the economic crisis caused by the COVID-19 pandemic.

    The trend towards more companies considering the use of AI/ML is interesting when set against the larger picture of current technology adoption. A US Census Bureau survey of 583,000 US businesses about their use of AI and other advanced technologies, published in July, showed only 2.8% of firms using AI/ML. Unlike most research, which looks closely at large enterprises, this is one of the few recent studies to cover the long tail of companies.

    While there are interesting takeaways for early adopters and laggards, these two surveys highlight the vast gulf between expected adoption of AI and actual adoption to date. That gap means we are in the very early stages of AI technology adoption, and that there are countless opportunities for AI entrepreneurs.

  • AI in the Wild: Why Do Solar Farms Kill Birds? Call in the AI Bird Watcher  (Wired)

    “Counting dead birds is just the type of repetitive, unsavory task that AI was made for. But actually implementing the system at a solar facility is fraught with technical challenges. Arguably the most difficult task is simply teaching the machine-learning algorithm to reliably recognize birds in a complex environment. Birds come in many sizes, shapes, and colors, which means that the algorithm must have a sufficiently good grasp on the abstract concept of “birdness” that it can pick them out whether they’re flying overhead or perched on a solar panel…

    …“The machine-learning research we’re doing is a little unique, because we don’t just want to classify an object in a single image,” says Szymanski. “It has to classify a small, fast-moving object over time. So if the bird is flying, in some frames you’ll see a dot and in others you’ll see its wings out, and we need to track that object as it moves across the camera.”

    Radical Commentary: It is estimated that the hundreds of utility-scale solar farms around the US may kill nearly 140,000 birds annually. As the solar industry grows, it is under increasing pressure to measure and fix this environmental impact. Counting and tracking birds is an example of machine intelligence helping to solve a critical problem that would otherwise not be cost-effective to fix. Building this solution requires repurposed commercial hardware, cloud connectivity, labelled data, and advanced techniques such as tracking a changing object across time and space; a simplified sketch of that tracking step follows below.
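
    To make the tracking step concrete, here is a minimal sketch of one common approach: a centroid tracker that matches each frame's detections to the nearest track from the previous frame. This is not the system described in the article; the class name, distance threshold, and coordinates are illustrative assumptions.

    ```python
    import math
    from itertools import count

    class CentroidTracker:
        """Assign stable IDs to detections by matching each new centroid
        to the nearest track from the previous frame (illustrative only)."""

        def __init__(self, max_distance=50.0):
            self.max_distance = max_distance  # pixels; anything farther starts a new track
            self.tracks = {}                  # track_id -> last known (x, y)
            self._ids = count()

        def update(self, centroids):
            assigned = {}
            unmatched = dict(self.tracks)
            for cx, cy in centroids:
                # Greedily match this detection to the closest surviving track.
                best_id, best_dist = None, self.max_distance
                for tid, (tx, ty) in unmatched.items():
                    d = math.hypot(cx - tx, cy - ty)
                    if d < best_dist:
                        best_id, best_dist = tid, d
                if best_id is None:
                    best_id = next(self._ids)  # new object entered the frame
                else:
                    del unmatched[best_id]     # each track matches at most once
                assigned[best_id] = (cx, cy)
            self.tracks = assigned
            return assigned

    # Feed per-frame detections (e.g. from a bird classifier):
    tracker = CentroidTracker()
    print(tracker.update([(10, 12), (200, 40)]))  # {0: (10, 12), 1: (200, 40)}
    print(tracker.update([(14, 15), (205, 44)]))  # the same IDs follow the moving birds
    ```

    Production systems layer motion models and appearance features on top of this kind of matching, which is what helps a dot in one frame and outstretched wings in the next resolve to the same bird.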

  • AI and Privacy: Cloak your photos with this AI privacy tool to fool facial recognition  (The Verge)

    “Fawkes uses artificial intelligence to subtly and almost imperceptibly alter your photos in order to trick facial recognition systems.

    The way the software works is a little complex. Running your photos through Fawkes doesn’t make you invisible to facial recognition exactly. Instead, the software makes subtle changes to your photos so that any algorithm scanning those images in future sees you as a different person altogether. Essentially, running Fawkes on your photos is like adding an invisible mask to your selfies.

    Scientists call this process “cloaking” and it’s intended to corrupt the resource facial recognition systems need to function: databases of faces scraped from social media.”

    Radical Commentary: As governments look to adopt systems that leverage facial recognition software to identify individuals based on their social media posts, new technologies are emerging that aim to thwart these efforts. This cat-and-mouse game gets to the heart of two significant areas of concern regarding the deployment of computer vision technology.

    Facial recognition systems are widely used in commercial applications that allow users to search for photos of friends or family on their phones. Increasingly, this same technology is used by governments to support law enforcement efforts. While the regulation of facial recognition software remains the subject of debate in Western governments, some countries are already deploying this same technology to profile and suppress minorities.

    The tool highlighted in this article exploits one of the critical vulnerabilities in vision systems as a means of avoiding detection: slight alterations to images effectively “poison” the data without noticeably altering the final image. While it addresses a privacy issue, this kind of adversarial attack on an AI system exposes a significant concern for vision software that, in entirely different contexts, offers essential benefits. For example, self-driving cars rely on vision systems that may be vulnerable to similar deceptions when classifying road signs, traffic lanes, and people. A toy sketch of this kind of image perturbation appears below.
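
    The underlying mechanics can be illustrated with a toy example. The sketch below is not Fawkes itself, which attacks deep feature extractors; here an assumed linear map stands in for the embedding model, and the sizes, step count, and perturbation bound are arbitrary. It nudges an image's embedding toward a different identity while clipping the pixel changes so they stay small.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a face-embedding model: a fixed linear map.
    # (Real cloaking tools attack deep feature extractors; everything
    # here is illustrative.)
    W = rng.normal(size=(8, 64))       # maps a 64-pixel "image" to an 8-d embedding

    def embed(x):
        return W @ x

    original = rng.normal(size=64)     # the photo to cloak
    target = rng.normal(size=8)        # embedding of a different identity

    eps, lr = 0.05, 0.01               # perturbation bound and step size
    cloaked = original.copy()
    for _ in range(200):
        # Gradient of ||embed(x) - target||^2 with respect to the image.
        grad = W.T @ (embed(cloaked) - target)
        cloaked -= lr * grad
        # Project back into a small box around the original photo so the
        # change stays nearly imperceptible.
        cloaked = original + np.clip(cloaked - original, -eps, eps)

    print("max pixel change:", np.abs(cloaked - original).max())   # <= eps
    print("distance to target identity:",
          np.linalg.norm(embed(original) - target), "->",
          np.linalg.norm(embed(cloaked) - target))
    ```

    The same projected-gradient recipe, pointed at a road-sign classifier rather than a face embedder, is essentially the style of attack that concerns self-driving researchers.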

  • AI and Agriculture: How a 30-Ton Robot Could Help Crops Withstand Climate Change  (Wall Street Journal)

    “The 70-foot-tall colossus, called a “Field Scanalyzer,” is the world’s biggest agricultural robot… it lumbers daily over 2 acres of crops including sorghum, lettuce and wheat, its cluster of electronic eyes assessing their temperature, shape and hue, the angle of each leaf.

    The Scanalyzer beams this data — up to 10 terabytes a day, roughly equivalent to about 2.6 million copies of Tolstoy’s “War and Peace” — to computers in Illinois and Missouri. Analyzing the range and depth of data generated is possible only with machine-learning algorithms, according to data scientists at George Washington University and St. Louis University, where researchers are teaching the computers to identify connections between specific genes and plant traits the Scanalyzer observes.”

    Radical Commentary: Climate change is a growing risk for the global agricultural sector as weather patterns become more unpredictable, leading to more severe shocks to supply chains. Technologies like the Scanalyzer aim to help plant breeders identify characteristics and genetic markers pointing to the most efficient and resilient crop varieties. In this case, the research team is also developing deep learning tools to help pinpoint plant varieties with subtle traits that may confer unexpected resilience to a changing environment.

    Agriculture companies are increasingly incorporating AI tools into their research and development. A few examples are given in the article, including Bayer AG, the world’s largest crop seed supplier, which has developed self-teaching algorithms to predict crop seed performance. Maintaining an enormous robot in the desert poses challenges, and low-cost drones are increasingly able to undertake work similar to this US$26 million project. The opportunity, however, lies in the quality of the data retrieved from continuous imaging across a consistent plane using a wide array of cameras and sensors.

    ***

    Editor’s Note: We will continue to use this platform to share, without commentary, articles focused on data and its use to illustrate and illuminate racial injustice, because you cannot fix problems you cannot see or understand.

  • Racism and Digital Design: How Online Platforms Can Thwart Discrimination  (Harvard Business School)

    A manager’s toolkit for platform design

    The first step to building inclusive online platforms, says Luca, is for designers and decision-makers to recognize the potential for discrimination to occur.

    Luca and Svirsky outline a framework for making inclusive design choices in a forthcoming article in the journal Marketing Intelligence Review. What follows is a condensed version:

    Build awareness. Digital platform builders must recognize how their design choices and algorithms can lead to discrimination in a marketplace. Managers can be proactive about investigating and tackling the problem. For example, Uber created a cross-functional Fairness Working Group made up of economists, data scientists, lawyers, and product managers to explore discrimination issues.

    Measure discrimination. Many platforms do not know the racial, ethnic, or gender composition of their transaction participants. A regular report on problems and successes among users who are at risk of being discriminated against can help companies reveal and confront issues.

    Withhold sensitive data. In many cases, a simple but effective change involves withholding potentially sensitive user information, such as race and gender, in the early stages of engagement with the platform.

    Automate with awareness of algorithmic bias. Automation and algorithms can help reduce bias, as in the case of Airbnb’s instant booking feature. However, discrimination can also occur through algorithms. Algorithms can be debiased by altering their inputs, but doing so requires managers to think about their goals around diversity and fairness. For example, LinkedIn redesigned its recruitment search tool to ensure that the gender breakdown of search results matches the gender breakdown for that occupation as a whole. If 30 percent of data scientists are women, then a recruiter searching for data scientists would see 30 percent female candidates in search results (a simplified re-ranking sketch follows this list).

    Think like a choice-architect. The principles of choice architecture can help reduce discrimination. For example, people tend to use whatever option is set up as the default, so resetting default options with inclusivity in mind can be a useful strategy. Companies can also consider increasing the prominence of their anti-discrimination policies to help raise awareness.

    Experiment to measure effects. Platforms can incorporate efforts to measure discrimination into their experimental testing to understand the impact of different design choices.

    Be transparent. Platforms should make their work on issues of discrimination transparent and open up lines of communication with managers and designers. It is also essential to evaluate methods for measuring discrimination and associated design changes over time.
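
    To make the re-ranking idea in the "Automate" item above concrete, below is a minimal sketch of proportional re-ranking. This is not LinkedIn's actual implementation; the function name, candidate data, and 30 percent target are illustrative. It greedily interleaves candidates so that every prefix of the results tracks the target share as closely as the remaining pool allows.

    ```python
    from collections import deque

    def rerank_proportional(candidates, target_share, in_group):
        """Greedy interleave: keep the flagged group's share in every
        prefix of the output near target_share (illustrative only)."""
        flagged = deque(c for c in candidates if in_group(c))
        others = deque(c for c in candidates if not in_group(c))
        result, taken = [], 0   # taken = flagged candidates placed so far
        while flagged or others:
            k = len(result) + 1
            # Take from the flagged group whenever the running share would
            # otherwise fall below the target (and candidates remain).
            if flagged and (not others or taken < target_share * k):
                result.append(flagged.popleft())
                taken += 1
            else:
                result.append(others.popleft())
        return result

    # Hypothetical example: if ~30% of data scientists are women, re-rank
    # a relevance-ordered result list so each page reflects that share.
    candidates = [("A", "w"), ("B", "m"), ("C", "m"), ("D", "w"), ("E", "m"),
                  ("F", "m"), ("G", "m"), ("H", "w"), ("I", "m"), ("J", "m")]
    ranked = rerank_proportional(candidates, 0.30, lambda c: c[1] == "w")
    print([name for name, _ in ranked])
    ```

    Because candidates keep their original order within each group, relevance ranking is preserved subject to the share constraint.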

Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).