
Physical Attraction


We are the podcast that explains ideas in physics, one chat-up line at a time.

You can read about us here, contact us here and if you like what we do and want to help us keep doing it, you can donate here. You can subscribe to the Physical Attraction: Extra! Feed over at Patreon: www.patreon.com/PhysicalAttraction - where for $3 per bonus episode, you can help to support the show, and get some juicy bonus content too. If you donate $3+ via the Paypal link, I can also send you a direct download link to a bonus episode of your choice. Just leave your email, and the episode you want. Bonus episodes released so far: Alien Attack, Part II (45m). 

Dec 1, 2017

This is the much-anticipated second part of the Phil Torres Tapes! We have a guest on the show today – Phil Torres. Phil Torres is an author, Affiliate Scholar at the Institute for Ethics and Emerging Technologies, former contributor at the Future of Life Institute, and founding Director of the X-Risks Institute. He has published in Bulletin of the Atomic Scientists, Skeptic, Free Inquiry, The Humanist, Journal of Future Studies, Bioethics, Journal of Evolution and Technology, Foresight, Erkenntnis, and Metaphilosophy, as well as popular media like Time, Motherboard, Salon, Common Dreams, Counterpunch, Alternet, The Progressive, and Truthout.

I was absolutely delighted that he agreed to be interviewed for a show like ours, and so I urge you to seek out his website – risksandreligion.org – and buy one of his books. There’s “The End – What Science and Religion Have to Tell Us about the Apocalypse”, which is on my shelf already, and, more recently, Morality, Foresight, and Human Flourishing, which is an introduction to the whole field of existential risks. So I would urge you all, if you’re interested in this topic – that of risks to the entire human species, which I think we can agree affects us all – to buy one of those books.

This is the second part of our conversation, which focuses on AI, superintelligence, and the control problem. How should we deal with AI, how will it impact our lives, and do we have any hope of controlling a superintelligent AI? There's plenty more general discussion about existential risks, too.

Follow Phil @xriskology and the show @physicspod.