
Channel: Technology and society

Nechama Brodie
Author, fact-checker and academic
Saturday, 11 November 2017

The Neural Network That Mistook A Turtle For A Rifle (With Apologies To Oliver Sacks)

In one of the courses I moderate, a trainer who covers data uses a simple visual slide to demonstrate some of the limitations of machine cognition versus human cognition: he shows one of those meme image mosaics that go something like 'labradoodle or fried chicken?' and challenges you to spot the difference. The point of this show-and-tell is that humans can tell the difference, and machines often can't. Yet.

If we step away, for a moment, from the troubling visual algorithms that promise to tell anarchist from compliant citizen, gay from 'straight', what about programmes that simply allow a machine to identify something as a human face ... never mind tell the difference between a chihuahua and a blueberry muffin!

Because there are so many obvious potential uses for an automated visual service that can apply human-like analysis and interpretation to visual data, this is an important focus area for machine learning programmes, including what is described as deep learning or neural networks.

These networks process and weigh information using layers of connected nodes loosely modeled on human neural/brain structures. These structures are, of course, not the same as a human brain, and while the nodal pattern allows for interesting and rather innovative constructions of relevance, it also allows for modes of disruption.
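
To make the 'layers of connected nodes' idea concrete, here is a minimal sketch in Python/NumPy of a generic two-layer forward pass. It illustrates the general architecture only, not the specific classifiers discussed here; the dimensions and weights are invented for the example.

```python
import numpy as np

# A minimal sketch of the "layers of connected nodes" idea: each layer
# multiplies its inputs by learned weights, adds a bias, and passes the
# result through a nonlinearity.

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One layer of nodes: weighted sum of inputs followed by a ReLU."""
    return np.maximum(0.0, x @ weights + bias)

# Toy dimensions: a 4-value input, a hidden layer of 8 nodes, 3 output scores.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = rng.normal(size=4)          # a stand-in for pixel features
hidden = layer(x, w1, b1)       # first layer of connected nodes
scores = hidden @ w2 + b2       # output scores, one per class
print(scores)
```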

An MIT AI research group recently studied and created deliberate, targeted disruptions of neural network-based classifiers, which they called 'adversarial' examples. These included 2D images (getting a machine to read a 'cat' as an avocado dish), whose effect the classifier could shake off once the image was viewed from a slightly different angle or scale. More worrying, or more interesting, depending on your view, were 3D engineered examples: a turtle that scanned as a rifle from nearly every angle, and a baseball that registered as an espresso. It seems 'funny' until you realise a person could 3D print a gun and make it appear as almost anything, including a harmless carry-on item.
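
For readers who want to see the mechanics, the sketch below illustrates the general adversarial-perturbation idea (the fast-gradient-sign trick) on a toy two-dimensional logistic-regression classifier in Python/NumPy. It is not the MIT group's method; the data, step sizes and perturbation budget are invented for the example, and with high-dimensional images a far smaller, visually imperceptible nudge is enough to flip the label.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two Gaussian clusters standing in for two image classes.
n = 200
X = np.vstack([rng.normal(loc=-1.0, size=(n, 2)),
               rng.normal(loc=+1.0, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Fit a logistic-regression "classifier" with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(v):
    """Probability that input v belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(v @ w + b)))

# Take a confidently classified class-0 input and nudge it in the direction
# that increases the loss for its true label: the fast-gradient-sign idea.
x = np.array([-1.5, -1.5])
grad_x = (predict(x) - 0.0) * w   # gradient of the loss w.r.t. the input (true label 0)
epsilon = 2.0                     # exaggerated budget: two dimensions need a big step,
                                  # high-dimensional images need only a tiny one per pixel
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:", predict(x))            # near 0: correctly class 0
print("adversarial prediction:", predict(x_adv))  # pushed past 0.5: now reads as class 1
```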
