Nov 24, 2017
Stephen Hawking, Elon Musk, Martin Rees... all have warned about
the risks of artificial intelligence. Centres like the Future of
Humanity Institute at Oxford and the Centre for the Study of
Existential Risk at Cambridge rate artificial intelligence as way
up there on the scale of potential human apocalypses. But it won't
look like killer Terminator robots. We are standing on the
precipice of a future that's almost impossible to comprehend. Will
it be possible to survive?
In this episode, we'll lay the groundwork for talking about the singularity by introducing some of the fields involved: robotics, artificial intelligence, and the effort to create an artificial general intelligence. And we'll try to understand some of the incredible possibilities that might arise as we slowly blur the line between the artificial and the 'real'. We have a choice: tech utopia or tech dystopia. But in the world of the Singularity, there is no room for Luddites.
Hit us up on Twitter @physicspod if your brain hasn't yet been uploaded to a computer and you still need a pesky keyboard interface in meatspace to interact. There, you can donate to the show (in fiat currency, not bitcoin, although I'll take some if you have any going). Like and review us on iTunes, which will help their 'artificially intelligent' algorithms serve up this show to more consumer human robo-droids. Until next time, stay safe.