Nechama Brodie is a South African journalist and researcher. She is the author of six books, including two critically acclaimed urban histories of Johannesburg and Cape Town. She works as the head of training and research at TRI Facts, part of independent fact-checking organisation Africa Check, and is completing a PhD in data methodology and media studies at the University of the Witwatersrand.
In one of the courses I moderate, a data trainer uses a simple visual slide to demonstrate some of the limitations of machine-based versus human cognition: he shows one of those meme image mosaics along the lines of 'labradoodle or fried chicken?', and challenges you to spot the difference. The idea behind this show-and-tell is that humans can tell the difference, and machines often can't. Yet.
If we step away, for a moment, from the troubling visual algorithms that promise to tell anarchist from compliant citizen, gay from 'straight', what about programmes that simply allow a machine to identify something as a human face ... never mind tell the difference between a chihuahua and a blueberry muffin!
Because there are so many obvious potential uses for an automated visual service that can apply human-like analysis and appropriate interpretation to visual data, this is a very important focus area for machine-learning programmes, including what is described as deep learning or neural networks.
These networks process and weigh information using layers of connected nodes loosely modelled on human neural/brain structures. These structures, of course, are not the same as a human brain, and while the nodal pattern allows for interesting and rather innovative construction of relevance, it also allows for modes of disruption.
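The "layers of connected nodes" idea can be made concrete with a toy sketch. The sizes, weights, and input below are purely illustrative, not taken from any real model: each node simply weighs the outputs of the layer beneath it and passes the result upward.

```python
import numpy as np

# A minimal sketch of a feed-forward network: layers of nodes, each node
# weighing the outputs of the layer below. All numbers here are illustrative.

def relu(x):
    return np.maximum(0, x)          # a common node "activation"

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()               # turn scores into probabilities

rng = np.random.default_rng(0)

W1 = rng.normal(size=(16, 4))        # input layer (4 features) -> 16 hidden nodes
W2 = rng.normal(size=(3, 16))        # hidden nodes -> 3 output classes

def classify(features):
    hidden = relu(W1 @ features)     # each hidden node weighs every input
    scores = softmax(W2 @ hidden)    # each output node weighs every hidden node
    return int(scores.argmax()), scores

label, probs = classify(np.array([0.2, -0.1, 0.5, 0.3]))
```

In a trained network the weight matrices are learned from labelled examples rather than drawn at random; the layered structure is the same.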
An MIT AI research group recently studied and created deliberate, targeted disruptions of neural network-based classifiers, which they called 'adversarial' examples. These included 2D images (getting a machine to read a 'cat' as an avocado dish), which were easy enough for the machine to correct. More worrying, or interesting, depending on your view, were 3D engineered examples: a turtle that scanned as a rifle from all angles, and a baseball that registered as an espresso. It seems 'funny' until you realise a person could 3D-print a gun and make it appear as almost anything, including harmless carry-on items.
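The core trick behind such examples can be sketched in a few lines. What follows is a generic gradient-sign style perturbation against a toy linear classifier — not the MIT group's actual method, which used far more sophisticated optimisation to survive 3D printing and arbitrary viewing angles. The point is only that a small, deliberately chosen nudge to the input can flip the label a model assigns.

```python
import numpy as np

# Illustrative only: nudge an input in the direction that most increases the
# score of a *wrong* class, until a toy linear classifier changes its mind.
# The classifier and input are random stand-ins, not a real vision model.

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 8))           # 2 classes, 8 "pixel" features

def predict(x):
    return int((W @ x).argmax())

x = rng.normal(size=8)                # the honest input ("turtle")
original = predict(x)
target = 1 - original                 # the label we want to force ("rifle")

# For a linear model, the gradient of (target score - original score)
# with respect to the input is just the difference of the weight rows.
grad = W[target] - W[original]

# Take the smallest uniform step in the adversarial direction that flips
# the prediction; each step changes every feature by only +/- 0.1.
eps = 0.1
x_adv = x + eps * np.sign(grad)
while predict(x_adv) != target:
    eps += 0.1
    x_adv = x + eps * np.sign(grad)
```

Against deep networks the same idea is applied via backpropagated gradients, and the perturbation can be small enough to be invisible to a human looking at the image.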