
Physical Attraction


New? Head to the episode guide or drop us a line with the contact form.

We are a physics podcast. But not just a physics podcast: interviews with scientists, scholars, and authors, along with reflections on the history and future of science and technology, are all in the wheelhouse.

You can read about us here (including an episode guide for new listeners), contact us here, and, if you like what we do and want to help us keep doing it, donate here. You can also subscribe to the Physical Attraction: Extra! feed over at Patreon: www.patreon.com/PhysicalAttraction - where, for $2 per bonus episode, you can help support the show and get some juicy bonus content too.

We had a sister podcast, Autocracy Now, which dealt with the lives of famous historical dictators. You can find some of its episodes on our feed, or the show itself at www.autocracynow.libsyn.com

Dec 26, 2017

We have a guest on the show today! His name is Stuart Armstrong, and he works at the Future of Humanity Institute, which we've mentioned several times over the course of the TEOTWAWKI specials and which studies big-picture existential risks. Stuart Armstrong's research at the Future of Humanity Institute centres on the safety and possibilities of Artificial Intelligence (AI): how to define the potential goals of AI, how to map humanity's partially defined values into it, and the long-term potential for intelligent life across the reachable universe. He has been working with people at FHI and other organizations, such as DeepMind, to formalize AI desiderata in general models so that AI designers can include these safety methods in their designs.