
Jennifer Ware: When bad stuff comes from technology


Via The New England Board of Higher Education (nebhe.org)

It’s an unpleasant reality, but also an inevitable one: Technology will cause harm.

And when it does, whom should we hold responsible? The person operating it at the time? The person who wrote the program or assembled the machine? The manager, board or CEO that decided to manufacture the machine? The marketer who presented the technology as safe and reliable? The politician who helped pass legislation making the technology available to consumers?

These questions reveal something important about what we do when bad things happen: we look for an individual—a particular person—to blame. It’s easier to make sense of how one person’s recklessness or conniving could result in disaster than to ascribe blame to an array of unseen forces and individuals. At the same time, identifying a “bad guy” preserves the idea that bad things happen because of rogue or reckless agents.

Sometimes—as when hackers create programs to steal data or drone operators send their unmanned aircraft into disaster zones and make conditions unsafe for emergency aircraft—it is clear who is responsible for the adverse outcome.

But other times, bad things are the result of decisions made by groups of people or circumstances that come about incrementally over time. Political scientist Dennis Thompson has called this “the problem of many hands.” When systems or groups are at fault for causing harm, looking for a single person to blame may obfuscate serious issues and unfairly scapegoat the individual who is singled out.

Advanced technology is complex and collaborative in nature. That is why, in many cases, the right way to think about the harms caused by technology may be to appeal to collective responsibility. Collective responsibility is the idea that groups, as distinct from their individual members, are responsible for collective actions. For example, legislation is a collective action; only Congress as a whole can pass laws, while no individual member of Congress can exercise that power.

When we assign collective responsibility and recognize that a group or system is morally flawed, we can either try to alter it for the better or disband it if we believe it is beyond redemption.

But critics of collective responsibility worry that directing our blame at groups will cause individuals to feel that their personal choices do not matter all that much. Individuals involved in the development and proliferation of technology, for instance, might feel disconnected from the negative consequences of those contributions, seeing themselves as mere cogs in a much larger machine. Or they may adopt a fatalistic perspective, and come to see the trajectory of technological development as inevitable, regardless of the harm it may cause, and think to themselves, “Why not? If I don’t, someone else will.”

Philosopher Bernard Williams argued against this sort of thinking in his “Critique of Utilitarianism” (1973). Williams presents a thought experiment in which “Jim” is told that if he doesn’t kill someone, then 20 people will be killed—but if he chooses to take one life, the other 19 will be saved. Williams argues that, morally speaking, it is beside the point whether someone else will kill or not kill people because of Jim’s choice. To maintain personal integrity, Jim must not do something that is wrong—killing one person—despite the threat.

In the case of an individual who might help develop “bad” technology, Williams’ argument would suggest that it does not matter whether someone else would do the job in her place; to maintain her personal integrity, she must not contribute to something that is wrong.

Furthermore, at least some of the time our sense of what is inevitable may be overly pessimistic. Far from being excused for participating in the production of harmful technologies that seem unavoidable, we may have a further obligation to resist their development.

For many involved in the creation and proliferation of new technologies, there is a strong sense of personal and shared responsibility. For example, employees at powerful companies such as Microsoft and Google have made efforts to prevent their employers from developing technology for militarized agencies, such as the Department of Defense and Immigration and Customs Enforcement. Innovators who have expressed some remorse for the harmful applications of their inventions include Albert Einstein (who encouraged research that led to the atomic bomb), Kamran Loghman (the inventor of weapons-grade pepper spray), and Ethan Zuckerman (inventor of the pop-up ad), all of whom later engaged in activities intended to offset the damage caused by their innovations.

Others try to make a clear distinction between how they intended their innovations to be used and how those technologies have actually come to be used down the line—asserting that they are not responsible for those unintended downstream applications. Marc Raibert, the CEO of Boston Dynamics, tried to make that distinction after a video of the company’s agile robots went viral and inspired dystopian fears in many viewers. He stated in an interview: “Every technology you can imagine has multiple ways of using it. If there’s a scary part, it’s just that people are scary. I don’t think the robots by themselves are scary.”

Raibert’s purist approach suggests that the creation of technology, in and of itself, is morally neutral and that only its applications can be deemed good or bad. But when dangerous or unethical applications are so easy to foresee, this position seems naive or willfully ignorant.

Ultimately, our evaluations of responsibility must take into consideration a wide range of factors, including collective action, the relative power and knowledge of the individuals involved, and whether any efforts were made to prevent or redress the harms caused.

The core ethical puzzles here are not new; these questions emerge in virtually all arenas of human action and interaction. But the expanding frontiers of innovation can make it harder to see how we should apply our existing moral frameworks in a new and complicated world.

Jennifer Ware is an editor at Waltham, Mass.-based MindEdge Learning who teaches philosophy at the City University of New York. 
