A German researcher is trying to get to the bottom of what it would take for us to really trust a robot – with our lives
Picture yourself infirm and elderly, and in need of some reliable, hands-on help with the basics of daily life. Now picture yourself with a robot helper. Would you trust it?
An ERC-funded research project led by Sandra Hirche, professor of control engineering at the Technical University of Munich, could help build that trust. Hirche and her team are using artificial intelligence to develop advanced robotic systems that can work alongside humans in a safe and intuitive manner. If she is successful, robots could act as care givers to the incapacitated, support physical rehabilitation, provide mobility and manipulation aids for the elderly, and – in the workplace – collaborate with humans in manufacturing processes.
Hirche is seeking to apply mathematics to this challenge. In conventional robotics, a machine designed to grasp a moving object uses sensors and cameras to continually establish where the object is, and then computes how much its motors need to move to make contact with it. This feedback loop, which is the essence of what experts call control engineering, is underpinned by a prediction model that estimates how the object will behave while being grasped. And that is how far you can trust the robot to do the task safely: it is no more and no less reliable than the mathematics behind it.
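The sense-predict-act loop described above can be sketched in a few lines. This is a minimal illustration of proportional feedback control, not the team's actual algorithms; the scenario (a robot closing the gap to a target position each cycle) and all numbers are invented for the example.

```python
def track(object_positions, gain=0.5):
    """Proportional feedback: each cycle, move `gain` of the way to the target.

    `object_positions` plays the role of the sensor readings; the returned
    trace is the sequence of positions the robot's motors are driven to.
    """
    robot = 0.0
    trace = []
    for target in object_positions:
        error = target - robot   # sensor stage: how far off are we?
        robot += gain * error    # actuator stage: close part of the gap
        trace.append(robot)
    return trace

# An object sitting at position 1.0; the robot converges toward it cycle by cycle.
path = track([1.0] * 20)
```

Real controllers add a prediction model on top of this loop, estimating where the object will be at the *next* cycle rather than reacting to where it was at the last one.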
“Safety implies that they give you guarantees, which could be formal mathematical guarantees, about how the robot will move,” she says. But if a human is involved, the system must also predict his or her behaviour. And ideally, the robot would adapt to the actual person it is working with: as it observes how the person moves, it would continually update its statistical model. But people aren’t easy to predict.
So Hirche and her team are taking advantage of recent developments in machine learning, applying models derived from a 250-year-old probability theorem developed by the English Reverend Thomas Bayes. “You can develop human models using observations from the past,” Hirche explains. “If you observe how you handed me a cup on three occasions, for example, then you can start to create a data-driven statistical model for that movement using machine learning.” That means assigning, based on the data, a probability to each potential outcome in the human–machine interaction.
“We don’t only give you a prediction, but also give you a level of uncertainty of the prediction. It needs to be transparent; we need to be able to explain why we got to an outcome,” she says.
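The idea in the quote, a prediction that comes packaged with its own uncertainty, can be illustrated with a textbook Bayesian update. This is a hedged sketch, not the team's model: the scenario (estimating how long a person takes to hand over a cup, from three observations) and all numbers are invented, while the conjugate-Gaussian update itself is standard probability.

```python
def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian update: revise a belief after one noisy observation."""
    k = prior_var / (prior_var + obs_var)        # how much to trust the new data
    post_mean = prior_mean + k * (obs - prior_mean)
    post_var = (1 - k) * prior_var               # uncertainty shrinks with each datum
    return post_mean, post_var

mean, var = 1.0, 1.0          # vague prior: a hand-over takes roughly 1 second
for t in [1.4, 1.3, 1.5]:     # three observed hand-overs (seconds, invented data)
    mean, var = bayes_update(mean, var, t, obs_var=0.04)

# `mean` is the prediction; `var` says how unsure the model still is.
```

After the three observations the predicted hand-over time has moved toward the data and the variance has dropped sharply, which is exactly what lets a robot report not just *what* it expects a person to do but *how confident* it is in that expectation.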
And that’s what Hirche and her team have accomplished in the lab in, for instance, scenarios in which a robot helps a human move an object from one place to another. Monitored by psychologists, the studies demonstrated that the new control algorithms work; humans perceive the robots as helpful. The team has also performed experiments where humans and robots have just touched each other, or have moved an object through a virtual maze.
This has applications beyond robotics. It could, for example, help manage degenerative conditions such as Parkinson’s disease, in which symptoms fluctuate unpredictably over time. “We are working with clinicians on a project in which the patient would wear a smart watch and we can use the motion data it captures to estimate the severity of the symptoms,” says Hirche. “It is able to judge quite reliably the human motor state, which can be used to change the medication, which in the future could be even administered by a pump, in a very automatic way. The results are so much better than existing techniques.”