Nechama Brodie is a South African journalist and researcher. She is the author of six books, including two critically acclaimed urban histories of Johannesburg and Cape Town. She works as the head of training and research at TRI Facts, part of independent fact-checking organisation Africa Check, and is completing a PhD in data methodology and media studies at the University of the Witwatersrand.
AI has been fodder for dystopian literary narratives for decades, but it would appear that too many of today's AI researchers and programmers bypassed books for binary and screen time in their formative years. Case in point: yet another funded academic facial recognition project with vast potential for misuse and abuse (see previous Piqds on the 'Gaydar' programme). This time it's a programme that could allow despotic regimes to identify protestors who have partially hidden their faces.
Let's unpack that thought for a moment: why would protestors be hiding their faces in the first place? Well, yes, it could be because they were doing something illegal. But this programme isn't going after crooks in balaclavas. It's looking at how to identify people against urban backgrounds. The implication is: what if freedom were the thing that was illegal? What if simply expressing an opinion, wanting the franchise, wanting free and fair elections, wanting to end violence against women, wanting to be allowed to marry whomever you choose ... what if those were the things that compelled otherwise peaceful protestors to obscure their features? And this is something we see on an almost daily basis, in countries where even a few months ago it might have been unthinkable: the United States, Spain, not to mention Ukraine and so on.
And still, with the world the way it is, and with hundreds of sci-fi classics that have spent generations warning scientists away from actively creating programmes that can be used to oppress individuals and societies ... this is what (some) scientists are choosing to do.
One PhD candidate involved in the research belatedly told a reporter at Vice's Motherboard that their work "... should only be used for people who want to use it for good stuff", but could not offer any suggestions as to how that might be enforced or regulated.
AI researchers need regular reminders that just because you can do something doesn't mean you should do it.