Scientists have developed a new machine-learning algorithm to help robots display appropriate social behavior in interactions with humans.
Advances in artificial intelligence (AI) are making virtual and robotic assistants increasingly capable of performing complex tasks, researchers said.
However, for these smart machines to be considered safe and trustworthy collaborators with human partners, robots must be able to quickly assess a given situation and apply human social norms.
Now, researchers at Brown University and Tufts University in the US have created a cognitive-computational model of human norms in a representation that can be coded into machines. They have also developed a machine-learning algorithm that allows machines to learn norms in unfamiliar situations by drawing on human data.
Development of AI Systems with Human Norms
The project, funded by the US Defence Advanced Research Projects Agency (DARPA), represents important progress towards the development of AI systems that can intuit how to behave in certain situations in much the way people do. As an example in which humans intuitively apply social norms of behaviour, consider a situation in which a cell phone rings in a quiet library, researchers said.
A person receiving that call would quickly try to silence the distracting phone and whisper into it before going outside to continue the call in a normal voice. Ultimately, for a robot to become social or perhaps even ethical, it will need the capacity to learn, represent, activate, and apply a large number of norms that people in a given society expect one another to obey.
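The library example above hinges on context-dependent norm activation: which norms apply depends on where the robot is. A minimal sketch of that idea is below; the contexts, norms, and lookup-table structure are all illustrative assumptions for this article, not the actual learned model the Brown and Tufts researchers built.

```python
# Toy sketch of context-dependent norm activation. A robot queries which
# behavioural norms are active in its current context. The contexts and
# norms here are invented for illustration; the researchers' model is a
# learned cognitive-computational representation, not a hand-coded table.

NORMS = {
    "library": ["stay quiet", "silence phone", "whisper if speaking"],
    "street": ["speak at normal volume", "keep phone ringer on if desired"],
}

def active_norms(context: str) -> list[str]:
    """Return the norms a robot should apply in the given context."""
    return NORMS.get(context, [])

# The phone-in-a-library scenario: moving from the library to the street
# changes which norms are active, so the same action (talking at normal
# volume) is acceptable in one context and a violation in the other.
print(active_norms("library"))
print(active_norms("street"))
```

The hard part, as the article notes, is not applying norms once they are known but learning a large and open-ended set of them from human data, including for unfamiliar contexts that no table could enumerate in advance.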
Moreover, the task will prove far more complicated than teaching AI systems rules for simpler tasks such as tagging pictures, detecting spam, or guiding people through their tax returns.
However, by providing a framework for developing and testing such complex algorithms, the new research could bring machines that emulate the best of human behaviour closer, researchers said.