Prague-based media development worker from Poland with a journalistic background. Previously worked on digital issues in Brussels. Piqs about digital issues, digital rights, data protection, new trends in journalism and anything else that grabs my attention.
If you have ever tried to create an online account, you have probably encountered a CAPTCHA prompt asking you to decipher hard-to-read text, puzzle over small images, or click a checkbox. Short for "Completely Automated Public Turing test to tell Computers and Humans Apart", the test aims to distinguish real people from bots that want to spam and abuse a website. As annoying as it may be, there are valid reasons for this security feature. In fact, according to Tim Wu, a professor at Columbia Law School, we should step up our game when it comes to robot detection.
In a short but informative opinion piece, the author of "The Attention Merchants: The Epic Struggle to Get Inside Our Heads" describes how "robots are getting better, every day, at impersonating humans" — from buying up tickets online and writing automated product reviews to spreading propaganda and swaying votes. As bots affect more and more areas of life, Wu calls for the introduction of a "Blade Runner" law: a measure that would make it illegal for a robot to pose as a human.
“Today’s impersonation-bots are different from the robots imagined in science fiction: They aren’t sentient, don’t carry weapons and don’t have physical bodies. Instead, fake humans just have whatever is necessary to make them seem human enough to 'pass',” writes Wu.
However absurd a "Blade Runner" law may appear at first sight, Wu brings up a couple of important points to consider. First, he accurately calls out the industry's lack of incentive (and, so far, of action) to tackle the issue, thus dismissing the possibility of relying solely on self-regulation. Second, he may well be right that a mixed approach, employing both technological and legal tools, would be necessary to successfully deter impersonation bots. Most importantly, though, Wu calls a spade a spade and acknowledges the relevance and severity of the issue: bots posing as humans seriously threaten our democracy.