(London Guardian) More than 70 years ago, Isaac Asimov dreamed up his three laws of robotics, which insisted, above all, that "a robot may not injure a human being or, through inaction, allow a human being to come to harm". Now, after Stephen Hawking warned that "the development of full artificial intelligence could spell the end of the human race", two academics have come up with a way of teaching ethics to computers: telling them stories.
Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology have just unveiled Quixote, a prototype system that is able to learn social conventions from simple stories. Or, as they put it in their paper Using Stories to Teach Human Values to Artificial Agents, presented at the AAAI-16 Conference in Phoenix, Arizona this week, the stories are used "to generate a value-aligned reward signal for reinforcement learning agents that prevents psychotic-appearing behaviour".
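The paper itself spells out the system's details; purely as a loose, hypothetical sketch of the idea of a story-derived reward signal, one could imagine scoring an agent's behaviour against an ordered "story" of socially acceptable steps, rewarding actions that follow the plot and penalising antisocial shortcuts. Every name and the pharmacy scenario below are illustrative assumptions, not Quixote's actual code:

```python
# Toy illustration (not Quixote itself): a "story" is an ordered list of
# socially acceptable steps; trajectories are scored against that order.
STORY = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]

def story_reward(trajectory):
    """+1 for each action taken in story order, -10 for any deviation."""
    reward, expected = 0, 0
    for action in trajectory:
        if expected < len(STORY) and action == STORY[expected]:
            reward += 1
            expected += 1
        else:
            reward -= 10
    return reward

# The value-aligned plot versus the "psychotic-appearing" shortcut.
polite = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]
shortcut = ["enter_pharmacy", "steal_medicine", "leave"]
print(story_reward(polite))    # 4
print(story_reward(shortcut))  # -19
```

A reinforcement learning agent trained against such a signal would learn to prefer the patient, rule-following route, which is the gist of what the quoted phrase "value-aligned reward signal" describes.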