Scientists Are Using Books to Teach Robots About Morality


In order to create robots that can understand ethical dilemmas and interact safely with humans, scientists are turning to one of the oldest methods of teaching morality: stories. Researchers at the Georgia Institute of Technology are developing an artificial intelligence system called “Quixote” that can read and comprehend the plots of written stories and then learn to act like socially appropriate protagonists instead of unlawful or psychotic antagonists.

“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” researcher Mark Riedl says. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”

The idea, according to Futurity, is to train AI systems to imitate the moral actions of the protagonists in stories. Quixote learns to identify moral behaviors in stories through a reward system that reinforces good actions and punishes bad ones. It builds on Riedl’s earlier AI system, called “Scheherazade,” which analyzes story plots from the Internet. Quixote goes one step further, not just identifying plot elements but evaluating characters’ actions.
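To get a feel for how a reward signal like this could steer an agent toward protagonist-like behavior, here is a minimal, hypothetical sketch. The scenario, action names, and reward values are invented for illustration and are not taken from the actual Quixote system; the point is only that penalizing antagonist-like steps changes which action the agent prefers.

```python
# Hypothetical illustration of reward-shaped action selection.
# Rewards are assumed to have been learned from stories: socially
# acceptable steps earn positive reward, antagonist-like shortcuts
# earn a penalty. (All names and values here are made up.)
STORY_REWARDS = {
    "wait_in_line": 1.0,
    "pay_at_counter": 1.0,
    "grab_and_run": -10.0,  # reaches the goal faster, but penalized
}

def choose_action(candidates, rewards):
    """Pick the candidate action with the highest learned reward."""
    return max(candidates, key=lambda a: rewards.get(a, 0.0))

# An agent optimizing only for speed might "grab_and_run"; with
# story-derived rewards it prefers the socially acceptable step.
best = choose_action(["grab_and_run", "wait_in_line"], STORY_REWARDS)
```

In a real system the reward table would be learned from many example plots rather than written by hand, but the selection step works the same way.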

Riedl and his team presented their new system at this year’s Association for the Advancement of Artificial Intelligence meeting. Though Quixote is a work in progress, Riedl claims it could one day help robots make real-world decisions (for example, choosing to follow the law instead of committing a crime).

“We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior,” Riedl says. “Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”