“Robots can go all the way to Mars, but they can’t pick up the groceries”
In the popular imagination, robots have been portrayed alternately as friendly companions or existential threats. But while robots are becoming commonplace in many industries, they are neither C-3PO nor the Terminator. Cambridge researchers are studying the interaction between robots and humans – and teaching robots how to do the very difficult things that we find easy.
Stacks of vertical shelves weave around each other in what looks like an intricately choreographed – if admittedly inelegant – ballet that has been performed since 2014 in Amazon’s cavernous warehouses. The shelves, each weighing more than 1,000 kg, are carried on the backs of robots that resemble giant versions of robotic vacuum cleaners. The robots cut down on time and human error, but they still have things to learn.
Once an order is received, a robot goes to the shelf where the ordered item is stored. It picks up the shelf and takes it to an area where the item is removed and placed in a plastic bin, ready for packing and sending to the customer. It may sound counter-intuitive, but the most difficult part of this sequence is taking the item from the shelf and putting it in the plastic bin.
For Dr Fumiya Iida, this is a typical example of what he and other roboticists call a ‘last metre’ problem. “An Amazon order could be anything from a pillow, to a book, to a hat, to a bicycle,” he says. “For a human, it’s generally easy to pick up an item without dropping or crushing it – we instinctively know how much force to use. But this is really difficult for a robot.”
In the 1980s, a group of scientists gave this kind of problem another name – Moravec’s paradox – which essentially states that things that are easy for humans are difficult for robots, and vice versa. “Robots can go all the way to Mars, but they can’t pick up the groceries,” says Iida.
One of the goals of Iida’s lab in Cambridge’s Department of Engineering is to find effective solutions to various kinds of last metre problems, from putting a book in a plastic bin or harvesting vegetables, to building LEGO structures or detecting cancerous tumours.
Iida’s team is also working with British Airways, who have a last metre problem with baggage handling: a process that is almost entirely automated, except for the point when suitcases of many different shapes, sizes and weights need to be loaded onto an aircraft.
And for the past two summers, they’ve been working with local fruit and vegetable group G’s Growers to design robots that can harvest lettuces without crushing them.
“That last metre is a really interesting problem,” Iida says. “It’s the front line in robotics, because so many things we do in our lives, from cooking to care to picking things up, are last metre problems, and that last metre is the barrier to robots really being able to help humanity.”
Although the thought of having a robot to cook dinner or perform other basic daily tasks may sound nice, these types of domestic applications are still a way off becoming reality. “Robots are becoming part of our society in the areas where they’re needed most – areas like agriculture, medicine, security and logistics. There are many needs for robots – but they can’t go everywhere instantly,” explains Iida.
If, as Iida says, the robot revolution is already happening, how will we as humans interact with them when they become a more visible part of our everyday lives? And how will they interact with us? Dr Hatice Gunes of Cambridge’s Department of Computer Science and Technology, with funding from the Engineering and Physical Sciences Research Council, has just completed a three-year project into human–robot interaction, bringing together aspects of computer vision, machine learning, public engagement, performance and psychology.
“Robots are not sensitive to emotions or personality, but personality is the glue in terms of how we behave and interact with each other,” she says. “So how do we improve the way in which robots and humans understand one another in a social setting?”
This is another example of Moravec’s paradox: for most people, being able to read and respond to the physical cues of others, and adapt accordingly, is second nature. For robots, however, it’s a challenge.
Gunes’ project focused on artificial emotional intelligence: robots that not only express emotions, but also read cues and respond appropriately. Her team developed computer vision techniques to help robots recognise different emotional expressions, micro-expressions and human personalities; and programmed a robot that could come across as either introverted or extroverted.
“We found that human–robot interaction is personality-dependent on both sides,” says Gunes. “A robot that can adapt to a human’s personality is more engaging, but the way humans interact with robots is also highly influenced by the situation, the physicality of the robot and the task at hand. When people interact with each other, it’s often in a task-based manner, and different tasks bring out different aspects of our personalities, whether they’re completing that task with another person or with a robot.”
It wasn’t just the robots who found some of the interactions difficult, however: many of Gunes’ human subjects found that the novelty of talking with a robot in public affected their ability to listen and follow directions.
“For me, it was more interesting to observe the people rather than to showcase what we’re doing, mostly because people don’t really understand the abilities of these robots,” she says. “But as robots become more available, hopefully they’ll become demystified.” Gunes now aims to focus on the potential of robots and virtual reality technology for well-being applications, such as coaching, cognitive training and elderly care.
As robots become more commonplace in our lives, ethical considerations become more important. In his lab, Iida has a robot ‘inventor’, but if the robot invents something of value, who owns the intellectual property? “At the moment, the law says that it belongs to the human who programmed the robot, but that’s an answer to a legislative question,” says Iida. “The ethical questions are a little murkier.”
However, Professor Huw Price, Academic Director of both Cambridge’s Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, thinks it will be a long time before we need to think about giving robots rights.
“Think of a dog-lover’s version of the difference between dogs and cats,” he says. “Dogs feel pleasure and pain, as well as affection, shame and other emotions. Cats are good at faking these things, but inside they’re just mindless killers. On this spectrum, robots are going to be way out on the cat end (except for the killing bit, hopefully), for the foreseeable future. They might be good at faking emotions, but they’ll have the same inner life as a teddy bear or a toaster.
“Eventually we might build robots, teddy bears and even toasters that do have an inner life, and then it will be a different matter. But for the moment, the ethical challenges involve machines that will be good at behaving in ways that we humans interpret as signs of emotions, and good at reading our emotions. These machines raise important ethical issues – like whether we should use them as carers for people who can’t tell that they are just machines, such as infants and dementia patients – but we don’t need to worry about their rights.”
“Another interesting question is whether a robot can learn to be ethical,” says Iida. “That’s very interesting scientifically, because it leads to the nature of consciousness. Robots are going to be a bigger and bigger part of our lives, so we all need to be thinking about these questions.”