Integrating smart robots into society

Should we have special laws to govern robots? The question isn’t being debated in parliaments and newspaper columns yet, but robots currently under development are becoming so adept at learning to interact like humans that it’s only a matter of time.
The Nao robot shown here is being developed by Aldebaran Robotics. © Ed Alcock 2013

Current research into robotics and artificial intelligence is developing new machines that do not simply perform repetitive or mundane tasks, but learn by imitation and interaction with people, working together as partners. 

These systems are designed to learn and adapt to changing circumstances, interacting with people in socially appropriate ways by interpreting human needs and intentions.

This means that society is increasingly in need of ethical rules governing robots, such as those proposed by science fiction writer Isaac Asimov in 1942 in Runaround, a short story that was part of the inspiration for the 2004 film I, Robot, starring Will Smith.

Asimov proposed that a robot must not harm humans, must obey human commands unless they conflict with that rule, and must preserve itself so long as doing so causes no harm to humans.

In practice, however, the relationship between robots and humans is emerging as a much more subtle one, with robots often designed to take the role of care-givers in society, keeping an eye on isolated elderly people, or encouraging sick children to take medicine on time.

These robots are a far cry from the production line machines typically associated with robotics, and are being designed to interact with humans and operate safely among them. Some of the robots now under development can learn by watching people and continuously improving their skills; others are mastering human interaction over extended periods. As they develop, such systems will play an increasingly important role in society.

One example is the EU-funded HUMANOBS project, which looked at how robots can learn to interact in a complex environment by observing human behaviour. The aim was to develop systems that can replicate useful, functional aspects of human intelligence and effectively understand the complexities of the real world.

‘What we want to achieve is some of the very powerful features of human intelligence, but we are not interested in replicating all of the nuances of human psychology,’ said Professor Kristinn Thorisson, HUMANOBS coordinator and associate professor of computer science at Reykjavik University in Iceland.

By watching humans in a television interview, the system, represented by a computer-generated character or avatar, learnt to be interviewer or interviewee – asking questions or giving answers with coordinated head movements, speech and hand manipulation of objects.

‘Our artificial intelligence can learn highly complex tasks,’ Prof. Thorisson said. ‘And there are no dedicated training sessions – it is always learning, so the quality of what it is learning improves continuously.’

Such research provides a basis for developing highly adaptable systems, which means that robot makers can develop generic devices, bringing down the cost.

‘The possibility is to make a robot that at design time does not know if it is going to land on Mars or go into the Amazon. That is where this process is headed, the kind of extreme adaptation that may be required in some tasks,’ Prof. Thorisson said.

Robot friend

Several other EU-supported projects are aimed at integrating robots into daily life by improving human-robot interaction.

‘We want a much more natural interaction between robots and people,’ said Dr Radu Horaud, coordinator of HUMAVIPS, a project to help robots develop audiovisual abilities for effective social interactions.

‘In robotics, people have generally concentrated on physical interaction, but for a “social robot” it is about cognitive interaction. The robot must figure out what is going on and how it should respond,’ said Dr Horaud, a research director at France’s INRIA institute of computational sciences.

The HUMAVIPS project is working to develop robots that can interact socially. Image courtesy of HUMAVIPS

HUMAVIPS used the opening of an art exhibition as a test, challenging a robot made by consortium member Aldebaran Robotics to make sense of the scene and demonstrate that it could grasp some of the subtleties of complex human interactions.

The robot had to identify the artwork a person was admiring and provide relevant information. To do that, it had to sift the relevant data from a mix of conversation and noise, movement and shadows. Part of the challenge lay in using consumer robot technology that is likely to be part of daily life in the future.

The ALIZ-E project also uses Aldebaran humanoid robots and cloud computing to study interactions, but over longer periods – hours, days or weeks. The project is working in seven European hospitals to get the robots to help diabetic children understand their condition, to teach them to monitor their symptoms, and to adopt habits that keep them healthy.

‘We have found that robots really can achieve good outcomes,’ said Professor Tony Belpaeme, ALIZ-E coordinator and professor of cognitive systems and robotics at the University of Plymouth, UK.

‘We’ve gone some way to achieving the artificial intelligence that will allow the robots to work autonomously. But we are not there yet,’ he added.

The robot develops a rapport with patients over time by learning from their previous interactions. And a humanoid robot is particularly effective with children, Prof. Belpaeme said, adding that autistic children have also responded well to the robot.

‘With this robot, they see something lifelike, and they pay attention,’ he added.

The FP7-supported ALIZ-E project, which runs till next year, points to the bigger role robots are likely to play in healthcare and in looking after the frail or sick.

Other EU-supported projects exploring human-robot interactions include JAMES, coordinated by Edinburgh University in Scotland, UK, which is also developing capabilities that enable robots to function as home care companions or service robots. Its system aims to recognise, understand and interact with several people in dynamic, socially appropriate ways. In its test scenario, the robot plays bartender, combining tasks such as taking drinks orders and payments with social behaviour such as handling simultaneous interactions and politely managing queue-jumpers.

Service robots

Service robot systems require safe methods of cooperation between human and robot. The CHRIS project, coordinated by the Bristol Robotics Laboratory, UK, is looking at how robots can communicate their intentions and develop the mental abilities required to interact with people.

The CORBYS project, coordinated by Germany’s University of Bremen, has also been exploring symbiotic human-robot interaction such as in robotic gait rehabilitation systems that could be used for stroke patients. It aims to develop control systems for robots based on situational awareness and anticipation of the often unpredictable behaviour of humans.

The five principles of robotics

While researchers grapple with the nature of human intelligence and the technical challenges in robotics systems, wider society must consider the social, ethical and legal implications.

Discussions around some of these issues led one EU-funded project, euRobotics, to propose legal and regulatory guidelines in order to stimulate debate on how the field should develop.

Among the proposals made by the project are ‘roboethical’ rules for designers, constructors and users of robots. Some rules are relatively uncontroversial, such as that robots should be designed to assure their safety and security. But others have generated more heat, particularly the principle that ‘robots should not be designed solely or primarily to kill or harm humans’. Some argue that to take account of military robotics, this must include the caveat ‘except in the interests of national security’.

The proposed five principles of robotics are:

1. Robots should not be designed solely or primarily to kill or harm humans.

2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.

3. Robots should be designed in ways that assure their safety and security.

4. Robots are artefacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.

5. It should always be possible to find out who is legally responsible for a robot.

Such principles trace their roots to problems pondered by science fiction writers more than 70 years ago. Despite the rapid approach of robots, those problems remain unresolved.

More info

HUMANOBS

HUMAVIPS

ALIZ-E

CHRIS

CORBYS

JAMES
