Thomas Arnold

Biography:
Research Associate, Human-Robot Interaction Laboratory, Tufts University, USA

What is robot ethics?

Recently, there have been many discussions around the ethical issues that a self-driving car would raise. If a toddler ran out onto the road, would the car know, and how would it react? Would it keep going to preserve the safety of the passenger? Or would it swerve, and perhaps risk the life of the person inside the car, in order to save the child on the road? These questions are among the many problems Thomas Arnold works on every day.

Read the transcript of Thomas Arnold's video below.

Thomas: I’m Thomas Arnold, a research associate in the Human-Robot Interaction Laboratory at Tufts University, in the USA.

Nerina: What are you working on right now?

Thomas: Right now I’m working on ethics and social robots. My Master’s and doctoral work was in philosophy of religion. I still consider myself a scholar working in those fields; I just have a very specific context for it now in which to test and apply it.

Nerina: What is robot ethics, actually?

Thomas: Part of robot ethics is asking about the specific contexts where robots are being designed and coming onto the market, and trying to think through carefully what the robots should be doing and how they’re going to be able to do it. Whether that’s elder care, tutoring a child, public safety, or repairing something: anywhere interaction with human beings is going to happen, that’s what I’m interested in trying to think through.

Nerina: Why do we have to think about this?

Thomas: Recently, there have been many discussions around the ethical issues that a self-driving car raises. If a toddler runs out onto the road, how will the car react? Will it keep going to preserve the safety of the passenger? Will it swerve, and perhaps risk the life of the person inside, in order to save the child on the road?

That’s a limited case, but nonetheless it’s a case where we have to think about what we want the system to do and why. And the reason we need to think about it is that people have different views about that.

Nerina: What is your approach?

Thomas: We are interested in taking the problem from two directions. From one direction, we are interested in how people will react to robots and what they will expect from them; we’re interested in how we as human beings respond and what our expectations naturally are. Then from the other direction, we try to use a combination of approaches, one of them being a rule-based approach. But it’s not simply a matter of rules, because we also try to build in context. We try to represent different contexts in the code, in our computational system, so that the system is able to recognize that a knife in a kitchen means something different: there is one set of rules for handing over a knife in a kitchen, and another in a subway, where someone holding a knife would lead you to draw a very different conclusion.
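To make the idea of context-dependent rules concrete, here is a minimal sketch in Python. It is not the lab's actual system; every name in it (Action, CONTEXT_RULES, evaluate) is hypothetical, invented for illustration.

```python
# Hypothetical sketch of context-dependent permissibility rules;
# not code from the Tufts HRI lab.

from dataclasses import dataclass


@dataclass(frozen=True)  # frozen -> hashable, so Action can be a dict key
class Action:
    verb: str  # e.g. "hand_over"
    obj: str   # e.g. "knife"


# Each context carries its own rule set for the very same action.
CONTEXT_RULES = {
    "kitchen": {Action("hand_over", "knife"): "permitted"},  # a cooking tool
    "subway": {Action("hand_over", "knife"): "forbidden"},   # a likely threat
}


def evaluate(action: Action, context: str) -> str:
    """Look up the verdict for an action in a given context.

    When no rule applies, the robot defers to a human rather
    than acting, keeping people accountable for the outcome."""
    return CONTEXT_RULES.get(context, {}).get(action, "defer_to_human")


if __name__ == "__main__":
    handover = Action("hand_over", "knife")
    print(evaluate(handover, "kitchen"))  # permitted
    print(evaluate(handover, "subway"))   # forbidden
    print(evaluate(handover, "library"))  # defer_to_human (no rule known)
```

The default of deferring to a human when no rule matches mirrors the point Arnold makes later in the interview: responsibility for a robot's actions should always trace back to people.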

Nerina: Is this a way to try to make robots more human?

Thomas: That’s a really good question. That is, I think, one of the struggles in our field, because I feel the answer is always yes and no. Yes, in that in order to interact with a person well you need to understand when they are responding, when they are in pain, or when something wrong has happened. On the other hand, you have to be careful not to encourage a relationship or expectations that are unrealistic and end up being, perhaps, manipulative.

If a robot intentionally looks or sounds a certain way, such that a person might think it is actually able to reciprocate or return some type of affection, or that it is hurt or feels pain in the same way that a person does, then that starts to be a problem. We know that robots being used in different settings are already creating bonds with people. In the US military, for soldiers serving in Afghanistan and Iraq, there are robots that detect IEDs, these mines on the ground; they are shaped a little bit like dogs, and the soldiers call them dogs. The soldiers bond with those robots. They give them funerals when they are damaged beyond functioning, award them medals, and consider them colleagues in some way.

You could say, well, maybe that’s a very difficult environment, so maybe the soldiers are in a state of mind where they are vulnerable enough to do that. But I really think that’s something that will extend beyond that context. I think people will be drawn to that, and I think it’s a serious ethical issue how we prevent those relationships from being bad for human beings, from being hurtful and deceptive. And maybe this will be the struggle, but I really want robotic action to always come back to human beings as the ones who are responsible. So that when a robot performs something well and is responsive, I would like us not to say, “That robot is human, or that robot is moral”; I would rather say the robot performed a moral action that human beings are responsible for, that our design allowed that moral action to occur.

So ultimately it comes back to us. It comes back to our ethics and our relationships with one another, because I think it’s our responsibility to keep ourselves accountable for what the robots do. It will be very tricky and hard. Lawyers and law professors are already arguing about what happens when a robot does something wrong. Is that just a malfunction? Is the designer responsible? Is the owner responsible? Those questions will be difficult, but I really want to keep as a principle that we are still always accountable, in some respect, for the systems we design.

Nerina: In the end is it a question about what is a human being? Is it a question of who we are?

Thomas: I think part of what makes us human is imagining how we recreate ourselves; not just biologically, but through craft and technology. That’s part of what it means to be human. The downside is all the shortcomings we are prone to when we try to do that. That’s the other side of what makes us human: we are also flawed in how we imagine. Sometimes our imagination lets us do wonderful things, and sometimes our imagination deceives us as to how things will actually unfold.

I think robots are just another chapter in that story. In the press you have people who want to write headlines like, “Will the robots take over?” Will they take over our civilization? I think those are not very helpful, because they are another way to avoid responsibility here and now, and the fact that we need to pay attention to how we are treating one another. Robots and artificial intelligence are a reflection of what we think about one another and how we treat each other as human beings. I think it will be an ongoing debate; your question is still on the table as we figure out what we are doing.

Nerina: Do you have a wish for the future?

Thomas: My wish would be that we find a way to live sustainably on our planet.

Nerina: What is life about?

Thomas: I think life is about exploring, learning from one another, struggling, and continuously asking that question of what life’s about.

Nerina: Thank you very much, Thomas.

Thomas: Thank you Nerina.

#follow-up with Thomas Arnold | AI, Robots and Humans

Thomas is a Research Associate in the Human-Robot Interaction Laboratory at Tufts University in the USA and tells us about the latest ideas and trends from his lab. Have a watch!

