Let’s talk robots: Can man and machine live side by side?

Dr Guy Hoffman is Assistant Professor at the School of Communication at IDC Herzliya, and Co-Director of its Media Innovation Lab.

His research focuses on human-robot interaction and non-verbal communication in robots. Hoffman, who created the first real-time improvising human-robot jazz duet, suggests robots with human attributes will soon be an integral part of our lives.

Here, he offers his thoughts on the subject:

Q: When you reference robots with human attributes, what exactly do you mean?

A: It’s about the emotion, or the effect they have on you: the ways in which robots tap into human emotion and the psychological world. Robots seem to have this special position in technology that resonates with basic human emotions such as fear and hope.

Q: In terms of robots tapping into our psychological world, how would that play out?

A: Humans are inherently wired for social interaction. We are very sensitive to social cues, and we know that people, at a very young age, and throughout their lives, will attribute intentions, intelligence, beliefs and desires to inanimate things.

For example, in our experiments we will have a robot direct its head away from a person who is disclosing a very intimate life event, telling the robot a story that’s very difficult for them to tell. In the middle of the story, the robot will turn away – it’s not looking at anything, it’s just moving its head to the side. And what’s interesting is that people interpret this as the robot not respecting them, or belittling what they have to say.

Robots can easily tap into the way that humans experience the world, which is social and emotional. We find that people have strong reactions to robots in all directions, be it fear, hope, or even emotional and social connection.

There’s repeated evidence that people who have spent some time with a robot, for example in a nursing home or in experiments, are reluctant to give it back.

Q: What benefits can this bring? Robots are there to serve us, so how does the human connection enhance their capabilities?

A: There are two ways in which this can be beneficial. One is on a very functional level, for instance a factory in which humans and robots work together. Usually, humans and robots are separated. But the right kind of social communication and behaviour can make a team more fluent. You can make employees more willing to work with the robot, to trust it to do the right thing at the right time, and to keep working with it for longer periods.

For example, in the future factory, the worker may have a robotic teammate that increasingly adapts to that person’s behaviour, learning the preferences, timing and body language of their human co-worker. It will become an increasingly successful teammate, making work more satisfying and perhaps less hazardous for the human, and making them want to work with the robot more and more.

So robots could be like the apprentice to their human teammate.

I also see robots in lay environments such as schools, hospitals and nursing homes. They might be sharing the workload, whether it’s teaching, giving medicine or taking blood pressure. We have to think of robots for everyday people: robots that behave in ways that are clever and intuitive, rather than being designed for specialist robotics operators.

Q: Is that an area we haven’t properly engaged with, in terms of the types of machinery and robots that are available?

A: There are few robots employed in environments that are not specifically designed for them. We are trying to solve important problems such as “navigation”, “grasping” and performing the right action. But there’s another layer, to do with doing the right thing socially.

For example, being able to behave in a way that shows consideration for human social behaviour, such as respecting personal space. Or knowing how to approach a person in different situations, such as when they’re sitting or standing.

Professor Rachid Alami’s group has studied this question. We are social creatures, and having a robot behave in a way that is socially acceptable will make the use of robotics successful.

Q: We live in a digital age, and many of our communications are virtual, and yet the physical and face-to-face attributes you describe still seem to be important to us as human beings.

A: The true power of the human experience is in the real world, in movement, behaviour and face-to-face interaction. I often show a video clip of a robot programmed to keep a certain distance from a person. When the person comes close, the robot takes a step back. People feel sorry for the robot: they feel like the robot is shy or a little bit afraid of the person.

The experience of interacting with something face-to-face will always be more intense, and, even in this age of digital communication, there are a lot of benefits to be had in recognising that and enhancing our interactions in the physical world.

Shaking hands still has social value. The pendulum is swinging back from everything becoming so digital and virtual towards contact, with communication technology becoming physical again.

Q: You were recently quoted as saying: “Every one of you will be living with a robot in your life very soon.” Do we need robots we can feel comfortable with?

A: It’s dangerous to make predictions, but robotic technology is at a point where it’s becoming economically and technologically feasible, and we’re going to start seeing robots in more environments.

It seems like we’re on the cusp of this becoming consumer-level technology. We will see robots in everybody’s lives within several years. I think robots will be serving in kindergartens during our lifetimes.

Q: How could you see that working?

A: Class sizes, especially in places that are not economically successful, are a big problem, and they put children at a disadvantage compared with neighbourhoods that can afford one teacher per 10 kids.

I think there’s a place for technology to make a real difference. I’m not saying that robots should replace teachers, but a teacher with a robot can be a more successful teacher than a teacher who’s facing 40 kids on their own.

And the same will go for hospitals. Night staff on a hospital ward are, in many cases, overworked. So perhaps you will have two nurses with six robots assisting them.

And just as the personal computer had to get much more user-friendly to be acceptable to everybody, from architects to car designers, robots will need to become more sociable to be accepted by kindergarten teachers or nursing home carers.

Q: Can you imagine a time when human attributes have been sufficiently developed for a robot to be a pet or a companion?

A: Pets are a good example of something we relate to on a social level but don’t believe is human. We still relate to them using the tools evolution has given us to relate to each other.

We might interpret the wagging tail as the dog being happy to see us. It doesn’t matter whether the dog really is happy to see us: the dog fulfils a certain social role in the way that it behaves.

Q: How does developing more “human” robots enhance our understanding of ourselves? And why is that important?

A: Robots can serve as a model for human behaviour. For example, we can evaluate how people behave in teams by having the robots do the same thing over and over again in a consistent way, and study the ways in which people respond to them.

Robots can create a laboratory environment for the testing of human behaviour.

By very carefully controlling the robot’s behaviour, we are able to see and explore how it affects human behaviour. It’s basically the perfect testing equipment because it enables us to induce a social situation without the many different variables that humans introduce through their behaviour.

Q: Should we be anxious about robots? Will they ever be smarter than us or take on a life of their own? Or is that just sci-fi speculation?

A: Robots could in some sense be smarter than us, but we create technology to enhance the abilities that biology has given us. For example, I’ve been working with Professor Gil Weinberg [professor of musical technology] at Georgia Tech on a robotic prosthesis for a drummer who lost his hand, and we made that even more than just a way to control a drumstick.

We created another drumstick with a mind of its own that could improvise with him. Suddenly, he had more capability than able-bodied drummers, and could drum with two sticks in one hand, and the other stick was faster than the one he could directly control.

Through biotechnology, we get people to exceed their biological limitations. As long as we accept that we manufacture technologies to give us capabilities that we don’t have, I don’t think the question of them being smarter than us is a worry. Look at it this way: it’s not any different from a car being faster than us.

Q: Will robots ever have free will?

A: It’s not a black-and-white question of having free will. Machines make decisions already. Some are small, such as beeping when the car behind you gets too close.

Decisions are part of any digitally controlled system. There’s no borderline between mechanics and free will. People have to decide how much of the decision-making process they’re willing to forgo. Think about GPS: we are giving our mapping software the power to decide where we drive. This is a call we could make, but choose not to.

However, I don’t think people will give the decision of whether to end life support for a loved one to a machine. It’s a gradient, and we have to decide, on a case-by-case basis, what we want to retain control of, and what we are comfortable giving up.

Q: Can you envisage a robot being held legally accountable for its actions?

A: That’s one of the most important questions being debated right now in the robotics community. What would it mean that a robot is accountable? If we destroy the robot, does this make anything better?

This takes us into profound philosophical questions about how society functions, and it almost has nothing to do with robotics – it’s more to do with how we define responsibility and how we want to reward or punish certain actions.

People are already thinking about this deeply. We have started an initiative in Israel called “Robots in Human Society”, together with Professor Dan Halperin and Lior Zalmanson from Tel Aviv University.

It’s an annual forum where we discuss these questions with experts from fields ranging from literature and ethics to law and the social sciences. This is a question in which every facet of society needs to engage. It is not a technology question, and it shouldn’t be left to the robotics departments to make these calls.

This article was originally published in BT’s Connected Worlds book, which celebrates the spirit of human endeavour and how connections build innovation and community.