In a thought-provoking new piece published in the Communications of the ACM, UW CSE professor and Allen Institute for Artificial Intelligence CEO Oren Etzioni and sociologist Amitai Etzioni of George Washington University make the case for the development of “AI guardians” to provide oversight of increasingly autonomous AI systems. (Aside: How cool is it to publish papers with your parent?!?!) These guardians, they argue, would ensure that operational AI adheres to our laws and ethical norms. They write:
“All societies throughout history have had oversight systems. Workers have supervisors; businesses have accountants; schoolteachers have principals. That is, all these systems have hierarchies in the sense that the first line operators are subject to oversight by a second layer and are expected to respond to corrective signals from the overseers….
“AI systems not only need some kind of oversight, but this oversight must be provided—at least in part—not by mortals, but by a new kind of AI system, the oversight ones. AI needs to be guided by AI.”
The duo offer three reasons why such oversight is needed: AI systems are learning systems, and therefore have the potential to stray from the initial guidelines given to them by their programmers; they are becoming more opaque to humans, whether intentionally, through public incomprehension, or through the sheer scale of the application; and they increasingly function autonomously, empowered by complex algorithms to make decisions independently of human input. Likening their proposed guardians to a home’s electrical circuit breaker (a system considerably less sophisticated than the electrical system it monitors, yet able to intervene when something goes awry), they suggest that the guardians need not be more intelligent than the systems they oversee, only intelligent enough to avoid being outwitted or short-circuited by them. The authors go on to examine the various forms such oversight might take when it comes to AI systems, from auditors and monitors to enforcers and ethics bots.
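The circuit-breaker analogy can be sketched in a few lines of code. This is a minimal illustration of the idea, not the authors’ design: a simple overseer checks the decisions of a more complex system against a fixed constraint and “trips” when the constraint is violated. All names here (`OperationalAI`, `Guardian`, `risk_limit`) are hypothetical.

```python
class OperationalAI:
    """Stand-in for a complex, autonomous learning system."""
    def decide(self, situation):
        # Imagine an opaque learned policy here; we just echo a risk score.
        return {"action": "proceed", "risk": situation.get("risk", 0.0)}

class Guardian:
    """A far simpler overseer: audits each decision and trips a breaker."""
    def __init__(self, system, risk_limit=0.8):
        self.system = system
        self.risk_limit = risk_limit
        self.tripped = False

    def decide(self, situation):
        if self.tripped:
            # Like a tripped circuit breaker, stay off until a human resets it.
            return {"action": "halt", "reason": "breaker tripped"}
        decision = self.system.decide(situation)
        if decision["risk"] > self.risk_limit:
            self.tripped = True  # corrective signal from the overseer
            return {"action": "halt", "reason": "risk limit exceeded"}
        return decision

guardian = Guardian(OperationalAI())
print(guardian.decide({"risk": 0.2})["action"])   # proceed
print(guardian.decide({"risk": 0.95})["action"])  # halt
print(guardian.decide({"risk": 0.1})["action"])   # halt (breaker stays tripped)
```

The point of the sketch is that the guardian never needs to understand the operational system’s internals; it only needs a reliable way to observe outputs and cut power, which is why it can be much simpler than what it oversees.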
In the case of both the operational and the oversight systems, they conclude, “humans should have the ultimate say.”
Read the full article here.
For more on the topic of AI and society from UW CSE researchers, see professor Dan Weld’s column, The Real Threat of Artificial Intelligence, published in GeekWire earlier this year, and professor Pedro Domingos’ book, The Master Algorithm, exploring how machine learning will remake our world.