Oren Etzioni, a well-known AI researcher, complains about news coverage of potential long-term risks arising from future success in AI research (see “No, Experts Don’t Think Superintelligent AI is a Threat to Humanity”). After pointing the finger squarely at Oxford philosopher Nick Bostrom and his recent book, Superintelligence, Etzioni complains that Bostrom’s “main source of data on the advent of human-level intelligence” consists of surveys of the opinions of AI researchers. He then surveys the opinions of AI researchers, arguing that his results refute Bostrom’s.

It’s important to understand that Etzioni is not even addressing the reason Superintelligence has had the impact he decries: its clear explanation of why superintelligent AI may have arbitrarily negative consequences and why it’s important to begin addressing the issue well in advance. Bostrom does not base his case on predictions that superhuman AI systems are imminent. He writes, “It is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur.”

Thus, in our view, Etzioni’s article distracts the reader from the core argument of the book and directs an ad hominem attack against Bostrom under the pretext of disputing his survey results. We feel it is necessary to correct the record. One of us (Russell) even contributed to Etzioni’s survey, only to see his response completely misconstrued. In fact, as our detailed analysis shows, Etzioni’s survey results are entirely consistent with the ones Bostrom cites.

Read more at MIT Technology Review.

Image: some rights reserved by AcidZero (CC BY-NC-SA).
