
Physical Attraction


Maybe you've seen a thousand science documentaries and you're tired of hearing about the same subjects; or maybe you don't know the first thing about physics, but would love to learn. My aim with this show is to explore the vast range of topics in physics, from quantum mechanics and relativity to the physics of stars, galaxies and black holes. We will explore brand-new topics in science and technology as I learn about them. Whether you know the story already or are learning it all for the first time, my aim is to "educate, inform, and entertain!" 

We are a physics podcast, but not just a physics podcast: interviews with scientists, scholars, and authors, along with reflections on the history and future of science and technology, are all in the wheelhouse.

You can read about us here, contact us here, and if you like what we do and want to help us keep doing it, you can donate here. You can subscribe to the Physical Attraction: Extra! feed over at Patreon: www.patreon.com/PhysicalAttraction - where, for $2 per bonus episode, you can help support the show and get some juicy bonus content too. If you donate $2+ via the PayPal link, I can also send you a direct download link to a bonus episode of your choice. Just leave your email address and the episode you want. Bonus episodes released so far: Alien Attack, Part II (45m), Failed TEOTWAWKI Predictions, Part II (1hr). 

We have a sister podcast, Autocracy Now, which deals with the lives of famous historical dictators. (Why host one podcast when you can host two?) You can find some of its episodes on our feed, or the show itself at www.autocracynow.libsyn.com 

Dec 5, 2017

Stephen Hawking, Elon Musk, Martin Rees... all have warned about the risk from artificial intelligence. Centres like the Future of Humanity Institute at Oxford and the Centre for the Study of Existential Risk at Cambridge rate the risk from artificial intelligence as way up there on the scale of potential human apocalypses. But it won't look like killer Terminator robots. We are standing on the precipice of a future that's almost impossible to comprehend. Will it be possible to survive?


In this episode, we'll talk about why people are afraid that this explosion in technology could lead to the end of the world. Pictures of killer Terminators need not apply; instead, we're discussing the philosophical and ethical problems that come with artificial intelligence that could exceed human capabilities. How can we be sure that it shares our values? How can we be sure that it even understands the value of human life at all? Why are people so afraid of such a miraculous technology? And is there anything we can do to ensure that it will be used as a tool, rather than taking over entirely?

Hit us up on Twitter @physicspod if your brain hasn't yet been uploaded to a computer and you still need to use a pesky keyboard interface in meatspace to interact. There, you can donate to the show (in fiat currency, not Bitcoin, although I'll take some if you have any going). Rate and review us on iTunes, which will help their 'artificially intelligent' algorithms serve up this show to more consumer human robo-droids. Until next time, stay safe.