Artificial intelligence is one of those tech terms that seems to inevitably conjure up images (and jokes) of computer overlords running sci-fi dystopias — or, more recently, robots taking over human jobs.
But AI is already here: It’s powering your voice-activated digital personal assistants and Web searches, guiding automated features on your car and translating foreign texts, detecting your friends in photos you post on social media and filtering your spam.
But as practical uses of AI have exploded in recent years, one critical element remains missing: an industrywide set of ethics standards or best practices to guide the growing field.
Algorithms help us to choose which films to watch, which music to stream and which literature to read. But what if algorithms went beyond their jobs as mediators of human culture and started to create culture themselves?
In 1950 English mathematician and computer scientist Alan Turing published a paper, “Computing Machinery and Intelligence,” which starts off by proposing a thought experiment that he called the “Imitation Game.” In one room is a human “interrogator” and in another room a man and a woman. The goal of the game is for the interrogator to figure out which of the unknown hidden interlocutors is the man and which is the woman. This is to be accomplished by asking a sequence of questions with responses communicated either by a third party or typed out and sent back. “Winning” the Imitation Game means getting the identification right on the first shot.
Turing then modifies the game by replacing one interlocutor with a computer, and asks whether a computer will be able to converse sufficiently well that the interrogator cannot tell the difference between it and the human. This version of the Imitation Game has come to be known as the “Turing Test.”
Turing’s simple, but powerful, thought experiment gives a very general framework for testing many different aspects of the human-machine boundary, of which conversation is but a single example.
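To make the structure of the test concrete, here is a minimal sketch in Python of a Turing-test-style evaluation loop: a judge questions two hidden interlocutors, one human and one machine, and then guesses which is which. Everything here is illustrative; the responder and judge functions are hypothetical toy stand-ins, not a real evaluation protocol.

```python
import random

def run_imitation_game(ask_human, ask_machine, judge, num_questions=3):
    """One round of a Turing-test-style evaluation (illustrative only).

    ask_human / ask_machine: map a question string to a reply string.
    judge: given {"A": [(question, answer), ...], "B": [...]},
           returns the label it believes belongs to the machine.
    Returns True if the judge identifies the machine correctly.
    """
    # Hide the two interlocutors behind anonymous labels, in random order.
    players = [("A", ask_human), ("B", ask_machine)]
    random.shuffle(players)

    questions = ["What is your favourite season?",
                 "What did you have for breakfast?",
                 "Can you describe the smell of rain?"][:num_questions]

    # Each hidden player answers the same questions; the judge sees only text.
    transcripts = {label: [(q, respond(q)) for q in questions]
                   for label, respond in players}

    guess = judge(transcripts)
    machine_label = next(label for label, fn in players if fn is ask_machine)
    return guess == machine_label

if __name__ == "__main__":
    # Toy stand-ins: a scripted "machine" with telltale phrasing and a
    # keyword-counting judge. Real participants would be far less obvious.
    human = lambda q: "Hard to say, it depends on my mood."
    machine = lambda q: "I do not have preferences."
    judge = lambda ts: max(ts, key=lambda k: sum("do not have" in a
                                                 for _, a in ts[k]))
    wins = sum(run_imitation_game(human, machine, judge) for _ in range(100))
    print(f"Judge identified the machine in {wins}/100 rounds")
```

The same skeleton carries over to Turing's broader framework: swap the question-and-answer exchange for sonnets, short stories or dance tracks, and the judge's task stays the same.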
On May 18 at Dartmouth, we will explore a different area of intelligence, taking up the question of distinguishing machine-generated art. Specifically, in our “Turing Tests in the Creative Arts,” we ask if machines are capable of generating sonnets, short stories, or dance music that is indistinguishable from human-generated works, though perhaps not yet so advanced as Shakespeare, O. Henry or Daft Punk.
A data-sharing agreement obtained by New Scientist shows that Google DeepMind’s collaboration with the NHS goes far beyond what it has publicly announced.
It’s no secret that Google has broad ambitions in healthcare. But a document obtained by New Scientist reveals that the tech giant’s collaboration with the UK’s National Health Service goes far beyond what has been publicly announced.
The agreement gives DeepMind access to a wide range of healthcare data on the 1.6 million patients who pass through three London hospitals run by the Royal Free NHS Trust – Barnet, Chase Farm and the Royal Free – each year. This will include information about people who are HIV-positive, for instance, as well as details of drug overdoses and abortions. The agreement also includes access to patient data from the last five years.
Image: Compliance and Safety (http://complianceandsafety.com/blog/hipaa-compliance-paper-shredding-illustration2/), CC BY-SA 3.0, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=21168112
Some people think A.I. will kill us off. In his 2014 book Superintelligence, Oxford philosopher Nick Bostrom offers several doomsday scenarios. One is that an A.I. might “tile all of the Earth’s surface with solar panels, nuclear reactors, supercomputing facilities with protruding cooling towers, space rocket launchers, or other installations whereby the AI intends to maximize the long-term cumulative realization of its values.”
This sort of redecoration project would leave no room for us, or for a biosphere for that matter. Bostrom warns darkly, “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb.”
Many counterarguments have been made against unexpected intelligence explosions, focused largely on technical limitations and logic. For example, sci-fi writer Ramez Naam pointed out in an essay for H+ magazine that even a superintelligent mind would need time and resources to invent humanity-destroying technologies; it would have to participate in the human economy to obtain what it needed (for example, building faster chips requires not just new designs but complicated and expensive chip fabrication foundries to build them).
Experts discussing autonomous weapons argued that people should still be able to control weapons systems as they advance to levels where they can act independently.
“Mandating meaningful human control of weapons would help protect human dignity in war, ensure compliance with international humanitarian and human rights law, and avoid creating an accountability gap for the unlawful acts of a weapon,” a report from Human Rights Watch and the Harvard Law School International Human Rights Clinic said.
The two groups went on to call for a “prohibition on the development, production, and use of fully autonomous weapons”. Their report – Killer Robots and the Concept of Meaningful Human Control – was published as a meeting of the Convention on Certain Conventional Weapons began in Geneva, Switzerland.
Image: Campaign to Stop Killer Robots (https://www.flickr.com/photos/stopkillerrobots/8673351202/), CC BY 2.0, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=33487057
At first glance, you might think robots should always obey human commands, simply because they are machines and that’s what they are designed to do. But then think of all the times you would not mindlessly carry out others’ instructions – and put robots into those situations.
Just consider:
An elder-care robot tasked by a forgetful owner to wash the “dirty clothes,” even though the clothes had just come out of the washer.
A preschooler who orders the daycare robot to throw a ball out the window.
A student commanding her robot tutor to do all the homework instead of doing it herself.
A household robot instructed by its busy and distracted owner to run the garbage disposal even though spoons and knives are stuck in it.
There are plenty of benign cases where robots receive commands that ideally should not be carried out because they lead to unwanted outcomes. But not all cases will be that innocuous, even if their commands initially appear to be.
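One way to make the idea concrete is a pre-execution check: before acting, the robot vets a command against simple rules about who issued it, whether it is safe, and whether it would accomplish anything useful. The sketch below is purely illustrative; the rule names, actions and context fields are hypothetical, and a real command-rejection system would need far richer models of authority, harm and intent.

```python
from dataclasses import dataclass, field

@dataclass
class Command:
    action: str                      # hypothetical action name, e.g. "run_garbage_disposal"
    issued_by: str                   # who gave the order, e.g. "child" or "owner"
    context: dict = field(default_factory=dict)   # what the robot knows about the situation

# Illustrative rejection rules: each returns a reason string if the
# command should be refused, or None if the rule does not apply.
def check_authority(cmd):
    if cmd.issued_by == "child" and cmd.action == "throw_object_outside":
        return "issuer lacks authority for this action"

def check_safety(cmd):
    if cmd.action == "run_garbage_disposal" and cmd.context.get("utensils_present"):
        return "action would damage property or cause harm"

def check_usefulness(cmd):
    if cmd.action == "wash_clothes" and cmd.context.get("clothes_already_clean"):
        return "action would have no useful effect"

RULES = [check_authority, check_safety, check_usefulness]

def vet(cmd):
    """Return (approved, reasons). The robot proceeds only if approved."""
    reasons = [r for rule in RULES if (r := rule(cmd)) is not None]
    return (not reasons, reasons)

if __name__ == "__main__":
    cmd = Command("run_garbage_disposal", "owner", {"utensils_present": True})
    approved, reasons = vet(cmd)
    print("approved" if approved else f"refused: {'; '.join(reasons)}")
```

Even this toy version shows why the problem is hard: the rules only work when the robot’s model of the situation (the context) is accurate, and deciding what counts as harmful or useless is itself an ethical judgment.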
Last week, Microsoft inadvertently revealed the difficulty of creating moral robots. Chatbot Tay, designed to speak like a teenage girl, sounded like a Nazi-loving racist after less than 24 hours on Twitter. Of course, Tay wasn’t designed to be explicitly moral. But plenty of other machines are involved in work that has clear ethical implications.
Wendell Wallach, a scholar at Yale’s Interdisciplinary Center for Bioethics and author of “A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control,” points out that in hospitals, APACHE medical systems help determine the best treatments for patients in intensive care units—often those who are at the edge of death. Though the doctor may seem to have autonomy, Wallach notes, it could be very difficult in certain situations to go against the machine—particularly in a litigious society. “Is the doctor really free to make an independent decision?” he says. “You might have a situation where the machine is the de facto decision-maker.”
As robots become more advanced, their ethical decision-making will only become more sophisticated. But this raises the question of how to program ethics into robots, and whether we can trust machines with moral decisions.
It may not strike everyone as the loftiest ambition: creating machines that are smarter than people. Not setting the bar terribly high, is it? So the more cynical might say. All the same, an array of scientists and futurists are convinced that the advent of devices with superhuman intelligence looms in the not-distant future. The prospect fills some of our planet’s brainiest specimens with dread.
They include certified smart men like Bill Gates of Microsoft, the physicist Stephen Hawking and Elon Musk, head of SpaceX. Messrs. Hawking and Musk have been especially grim. “The development of full artificial intelligence could spell the end of the human race,” Mr. Hawking told the BBC in 2014. At about the same time, Mr. Musk worried that “with artificial intelligence, we are summoning the demon,” a fiend that he feared would become “our biggest existential threat.”
When people of their caliber speak, it seems reasonable to listen. And so, alarms about a computer-spawned apocalypse are a backdrop to the latest installment in the Retro Report series, video documentaries that explore major news events of the past and their continuing effects.
Men of science are not alone in the hand-wringing over the possibility of machines running wild. Asked what they feared most, Americans interviewed by researchers at Chapman University in Southern California ranked the consequences of modern technology near the top. Even death did not rattle them as much; it was way down on their list of worries, at No. 43.