
AI: “We’re Like Children Playing With A Bomb”
June 13, 2016
You’ll find the Future of Humanity Institute down a medieval backstreet in the centre of Oxford. It is beside St Ebbe’s church, which has stood on this site since 1005, and above a Pure Gym, which opened in April. The institute, a research faculty of Oxford University, was established a decade ago to ask the very biggest questions on our behalf. Notably: what exactly are the “existential risks” that threaten the future of our species; how do we measure them; and what can we do to prevent them? Or to put it another way: in a world of multiple fears, what precisely should we be most terrified of?
When I arrive to meet the director of the institute, Professor Nick Bostrom, a bed is being delivered to the second-floor office. Existential risk is a round-the-clock kind of operation; it sleeps fitfully, if at all.
Image: AcidZero via Flickr CC BY-NC-SA 2.0