
Physical Attraction


We are the podcast that explains ideas in physics, one chat-up line at a time.

You can read about us here, contact us here, and if you like what we do and want to help us keep doing it, you can donate here. You can subscribe to the Physical Attraction: Extra! feed over at Patreon: www.patreon.com/PhysicalAttraction, where for $3 per bonus episode you can help to support the show and get some juicy bonus content too. If you donate $3+ via the PayPal link, I can also send you a direct download link to a bonus episode of your choice; just leave your email and the episode you want. Bonus episodes released so far: Alien Attack, Part II (45m).

Dec 26, 2017

We have a guest on the show today! His name is Stuart Armstrong, and he works at the Future of Humanity Institute, which we've mentioned several times over the course of the TEOTWAWKI specials and which studies big-picture existential risks. Stuart Armstrong's research at the Future of Humanity Institute centers on the safety and possibilities of Artificial Intelligence (AI), how to define the potential goals of AI and map humanity's partially defined values into it, and the long-term potential for intelligent life across the reachable universe. He has been working with people at FHI and other organizations, such as DeepMind, to formalize AI desiderata in general models so that AI designers can include these safety methods in their designs.