RWhitcomb-editor

Jennifer Ware: 'The Trolley Problem' and robotics


From the New England Board of Higher Education (nebhe.org)


NEBHE's Commission on Higher Education & Employability has thought hard over the past year about the increasing role of artificial intelligence and robotics in the future of life and work.

Many others are also waking up to this landscape, which not so long ago seemed like science fiction. Waltham, Mass.-based MindEdge Learning, for example, plans to devote regular blog posts to ethical questions in a world in which humans coexist with sophisticated—even humanoid—robots.

The first blog post by Jennifer Ware, a MindEdge editor who teaches philosophy at CUNY, begins by asking: What happens when robots do immoral things? Whom do we hold responsible? How do we navigate the fact that our own biases and prejudices will inevitably make their way into programs for the machines we build? Should we construct sophisticated, humanoid machines simply because we can? Is it wrong to treat robots in certain ways, or fail to treat them in other ways? How might our relationships with robots color our relationships with other humans? What happens if robots take over parts of the workforce? How might robots give us insight into what it means to be human? She writes:

Programming Machines to Make Moral Decisions: The Trolley Problem

"Machines have changed our lives in many ways. But the technological tools we use on a day-to-day basis are still largely dependent on our direction. I can set the alarm on my phone to remind me to pick up my dry cleaning tomorrow, but as of now, I don’t have a robot that will keep track of my dry cleaning schedule and decide, on its own, when to run the errand for me.

As technology evolves, we can expect that robots will become increasingly independent in their operations. And with their independence will come concerns about their decision-making. When robots are making decisions for themselves, we can expect that they’ll eventually have to make decisions that have moral ramifications–the sort of decisions that, if a person had made them, we would consider blameworthy or praiseworthy.

Perhaps the most talked-about scenario illustrating this type of moral decision-making involves self-driving cars and the “Trolley Problem.” The Trolley Problem, introduced by Philippa Foot in 1967, is a thought experiment intended to clarify the kinds of considerations that factor into our moral evaluations. Here’s the gist:

Imagine you’re driving a trolley, and ahead you see three people standing on the tracks. It’s too late to stop, and these people don’t see you coming and won’t have time to move. If you hit them, the impact will certainly kill them. But you do have the chance to save their lives! You can divert the trolley onto another track, but there’s one person in that path who will be killed if you choose to avoid the other three. What should you do?

Intuitions about what is right to do in this case tend to bring to light different moral considerations. For instance: Is doing something that causes harm (diverting the trolley) worse than doing nothing and allowing harm to happen (staying the course)? Folks who think you should divert the trolley, killing one person but saving three, tend to care more about minimizing bad consequences. By contrast, folks who say you shouldn’t divert the trolley tend to argue that you, as the trolley driver, shouldn’t get to decide who lives and dies.

The reality is that people usually don’t have time to deliberate when confronted with these kinds of decisions. But automated vehicles don’t panic, and they’ll do what we’ve told them to do. We get to decide, before the car ever faces such a situation, how it ought to respond. We can, for example, program the car to run onto the sidewalk if three people are standing in the crosswalk who would otherwise be hit–even if someone on the sidewalk is killed as a result.

This, it seems, is an advantage of automation. If we can articulate idealized moral rules and program them into a robot, then maybe we’ll all be better off. The machine, after all, will be more consistent than most people could ever hope to be.

But articulating a set of shared moral guidelines is not so easy. While there’s good reason to think most people are consequentialists–responding to these situations by minimizing pain and suffering–feelings about what should happen in Trolley cases are not unanimous. And additional factors can change or complicate people’s responses: What if the person who must be sacrificed is the driver? Or what if a child is involved? Making decisions about how to weigh people’s lives should make any ethically minded person feel uncomfortable. And programming those values into a machine that can act on them may itself be unethical, according to some moral theories.

Given the wide range of considerations that everyday people take into account when reaching moral judgments, how can a machine be programmed to act in ways that the average person would always see as moral? In cases where moral intuitions diverge, what would it mean to program a robot to be ethical? Which ethical code should it follow?

Finally, using the Trolley Problem to think about artificial intelligence assumes that the robots in question will recognize all the right factors in critical situations. After all, asking what an automated car should do in a Trolley Problem-like scenario is only meaningful if the automated car actually “sees” the pedestrians in the crosswalk. But at this early stage in the evolution of AI, these machines don’t always behave as expected. New technologies are being integrated into our lives before we can be sure that they’re foolproof, and that fact raises important moral questions about responsibility and risk.

As we push forward and discover all that we can do with technology, we must also include in our conversations questions about what we should do. Although those questions are undoubtedly complicated, they deserve careful consideration–because the stakes are so high."
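
The “idealized moral rules” Ware describes can, at least in toy form, be written down as decision procedures. The sketch below is a minimal, hypothetical illustration in Python; the names (Outcome, consequentialist_choice, non_interventionist_choice) and the scenario data are invented for this example and do not come from any real autonomous-vehicle system. It contrasts a consequentialist rule, which picks whichever action endangers the fewest people, with a non-interventionist rule, which refuses to actively redirect harm.

from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    # One action the vehicle could take and the number of people it would endanger.
    action: str
    people_at_risk: int

def consequentialist_choice(outcomes: List[Outcome]) -> Outcome:
    # Minimize harm: pick whichever action endangers the fewest people.
    return min(outcomes, key=lambda o: o.people_at_risk)

def non_interventionist_choice(outcomes: List[Outcome], current_course: str) -> Outcome:
    # Refuse to actively redirect harm: keep the current course regardless of the counts.
    return next(o for o in outcomes if o.action == current_course)

# The trolley-style scenario from the post: stay the course (three pedestrians at risk)
# or swerve onto the sidewalk (one bystander at risk).
scenario = [
    Outcome("stay_on_course", people_at_risk=3),
    Outcome("swerve_to_sidewalk", people_at_risk=1),
]

print(consequentialist_choice(scenario).action)                       # swerve_to_sidewalk
print(non_interventionist_choice(scenario, "stay_on_course").action)  # stay_on_course

Ware’s question “Which ethical code should it follow?” shows up immediately: the two rules return different actions for identical inputs, and a real system would also have to decide how factors such as passenger risk or a child’s presence should affect the comparison, a weighting this toy sketch deliberately leaves out.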
