Google can see a future where robots help us unload the dishwasher and sweep the floor. The challenge is making sure they dont inadvertently knock over a vase — or worse — while doing so.
Researchers at Alphabet Inc. unit Google, along with collaborators at Stanford University, the University of California at Berkeley, and OpenAI — an artificial intelligence developing company backed by Elon Musk — have some ideas about how to design robot minds that wont lead to undesirable outcomes for the person or persons they serve. They published a technical newspaper Tuesday outlining their thinking.
The motivation for the research is the immense popularity of artificial intelligence, software that can learn about the world and act within it. Todays AI systems let vehicles drive themselves, interpret speech spoken into phones, and devise trading strategies for the stock market. In the future, companies plan to use AI as personal assistants, first as software-based services like Apple Inc.s Siri and the Google Assistant, and later as smart robots that can take actions for themselves.
But before giving smart machines the ability to make decisions, people need to make sure the goals of the robots are aligned with those of their human owners.
While possible AI safety dangers have received a lot of public attention, most previous discussion has been very hypothetical and speculative, Google researcher Chris Olah wrote in a blog post accompanying the paper. We believe its essential to ground concerns in real machine learning the investigations and to start developing practical approaches for engineering AI systems that operate safely and reliably.
The report describes some of their own problems robot designers may face in the future, and lists some techniques for building software that the smart machines cant subvert. The challenge is the open-ended nature of intelligence, and the puzzle is akin to one faced by regulators in other areas, like the financial system; how do you design regulations to let entities achieve their goals in a system you regulate, without being able to subvert your rules, or be unnecessarily constricted by them?
For example, if you have a clean robot( and OpenAI aims to build such a machine ), how do you make sure that your rewards dont give it positive incentives to cheat, the researchers wonder. Reward it for cleaning up a room and it might answer by sweeping dirt under the rug so its out of sight, or it might learn to turn off its cameras, preventing it from find any mess, and thereby giving it a reward. Counter these tactics by giving it an additional reward for using cleaning products and it might evolve into a system that uses bleach far too liberally because its rewarded for doing so. Correct that by making its reward for using cleaning products tied to the apparent cleanliness of its environment and the robot may eventually subverts that as well, hacking its own system to make itself think it deserves a reward regardless.
While cheating with housecleaning may not seem to be a critical problem, the researchers are extrapolating to potential future utilizes where stakes are higher. With this paper, Google and its collaborators are trying to solve problems they can only vaguely understand before they manifest in real-world systems. The mindset is roughly: Better to be somewhat prepared than not prepared at all.
With the realistic possibility of machine learning-based systems controlling industrial process, health-related systems, and other mission-critical technology, small-scale collisions seem like a very concrete threat, and are critical to prevent both intrinsically and because such accidents could cause a justifiable loss of trust in automated systems, the researchers write in the paper.
Some answers the researches propose include restriction how much control the AI system has over the human environment, so as to contain the damage, and pairing a robot with human buddy. Other ideas include programming trip wires into the AI machine to give humans a warn if it abruptly steps out of its intended routine.
The idea of smart machines running haywire is scarcely new: Goethe wrote a poem in the late 18 th century where a sorcerers apprentice makes a living broom to fetch water from a river for a basin in his home. The broom is so good at its chore that it almost deluges the house and so the sorcerer chops it up with an ax. New brooms emerge out of the fragments and continue with their tasks. Designing machines that avoid this kind of unintentionally harmful outcome is the core notion behind Googles research.
The research is part of an ongoing line of investigation that goes back more than 50 years, told Stuart Russell, a prof of computer science at the University of California at Berkeley and an writer, with Googles Peter Norvig, of the definitive volume on artificial intelligence. The fact that Google and other companies are getting involved in AI safety research is a further demo of the varied applications AI is seeing in industry, he told. And the problems theyre trying to deal with are not hypothetical: Russell had a human cleaner in Paris who conceal rubbish away in the apartment, which was only discovered by the landlord where reference is moved out, so a robot might do the same.
Anyone that believes for five seconds about whether its a good idea to build something thats more intelligent than you, theyll is understood that yes of course its a number of problems, he said.
Read more: www.bloomberg.com