Co-host of the Episode Party podcast, author of Storm Static Sleep: A Pathway Through Post-rock, editor at ATTN:Magazine.
Initially it seems strange that a podcast about earthly existential risks should begin by gazing out into space. But don't worry – Josh Clark is on hand to clarify, wielding the same powers of explanatory distillation he has honed over ten years of hosting the Stuff You Should Know podcast. The End Of The World explores the ways in which the human race might meet an early end, from the runaway intelligence of AI to the intrepid physics experiments conducted at the Large Hadron Collider. It may sound like scientific fearmongering, but this is essential listening. Now that global threats such as climate change are finally being recognised as demanding immediate, worldwide attention, this podcast heralds the inevitable next stage: countering the perception of existential risks as mere playthings of science fiction and rousing humanity into taking them seriously.
It starts with the Fermi paradox: the contradiction between the high probability that we should have encountered evidence of alien civilisations by now and the fact that intelligent extra-terrestrial life is still nowhere to be seen. Does the absence of life elsewhere foreshadow our own imminent demise? If aliens are nowhere to be found, might the universe only be able to sustain fleeting bursts of life, making our own tenure destined to be short-lived?
Clark is merciful in getting the most fatalistic possibility out of the way quickly, and the podcast promptly shifts to topics that permit a degree of human agency. For example, we can fortify the safety procedures surrounding the study of deadly pathogens, lowering the risk that virus samples will be accidentally let loose. Or we can start exploring ways to instil benevolence in AI, so that we don't find ourselves hunted down by machines further down the line. Ultimately, Clark's message is positive. If we start recognising existential risks and acting on them now, we may still be able to secure a bright future for ourselves.