Philosophy of Computer Science

Computer Ethics

Last Update: 30 March 2013



Entire courses have been devoted to this topic. For more information, do a Google search on "computer ethics".


Websites

  1. AAAI's AI Topics website on Ethical and Social Implications of AI

  2. The Research Center on Computing and Society


Readings

There are numerous books on computer ethics. For those at UB, type "computer ethics" as a Keyword into Bison.



  1. Lem, Stanislaw (1971), "Non Serviam", in S. Lem, A Perfect Vacuum, trans. by Michael Kandel (New York: Harcourt Brace Jovanovich, 1979).


  2. Moor, James H. (1979), "Are There Decisions Computers Should Never Make?", Nature and System 1: 217-229.

      Good articles to read in contrast to Moor's paper:

    1. On whether computers can make better decisions than humans, see:
      Heingartner, Douglas (2006), "Maybe We Should Leave That Up to the Computer", New York Times (18 July).

    2. Friedman, Batya; & Kahn, Peter H., Jr. (1992), "People Are Responsible, Computers Are Not", excerpt from their "Human Agency and Responsible Computing: Implications for Computer System Design", Journal of Systems and Software (1992): 7-14; excerpt reprinted in M. David Ermann, Mary B. Williams, & Michele S. Shauf (eds.) (1997), Computers, Ethics, and Society, Second Edition (New York: Oxford University Press): 303-314.

    3. Johnson, George (2002), "To Err Is Human", New York Times (14 July).

      • Provides an interesting real-life case study of Moor's problem.

          But an interesting real-life case study offering a contrast is the now-famous landing of a US Airways jet on the Hudson River in NYC in January 2009.
          A book by William Langewiesche argues that the plane, with its computerized "fly-by-wire" system, was the real hero.
          The following two book reviews offer contrasting opinions:

        • Haberman, Clyde (2009, November 27), "The Story of a Landing", New York Times Book Review.

        • Salter, James (2010, January 14), "The Art of the Ditch", New York Review of Books 57(1).

    4. On the other hand, here is a cautionary counterexample:
      Neumann, Peter G. (1993), "Modeling and Simulation", Communications of the ACM 36(6) (June): 124.

    5. Brachman, Ronald J. (2002), "Systems that Know What They're Doing", IEEE Intelligent Systems (November/December): 67-71.

      • Suggests how and why decision-making computers should be able to explain their decisions.

    6. Aref, Hassan (2004), "Recipe for an Affordable Supercomputer: Take 1,100 Apples...", Chronicle of Higher Education (5 March): B14.

      • Suggests (but does not discuss) that supercomputers might make decisions that we could not understand:

        "As we construct machines that rival the mental capability of humans, will our analytical skills atrophy? Will we come to rely too much on the ability to do brute-force simulations in a very short time, rather than subject problems to careful analysis? Will we run to the computer before thinking a problem through?...A major challenge for the future of humanity is whether we can also learn to master machines that outperform us mentally."

      • On the notion of our analytical skills atrophying, you might enjoy the following science-fiction story about a human who rediscovers how to do arithmetic after all arithmetical problems are handled by computers:

        Asimov, Isaac (1957), "The Feeling of Power", reprinted in Clifton Fadiman (ed.), The Mathematical Magpie (New York: Simon and Schuster, 1962): 3-14.

    7. Kolata, Gina (2004), "New Studies Question Value of Opening Arteries", The New York Times (21 March): A1, A21.

      • A paragraph buried deep in this article suggests that people find it difficult to accept rational recommendations even from other people. The article reports evidence that a common and popular surgical procedure had just been shown to be of no benefit:

          "Dr. Hillis said he tried to explain the evidence to patients, to little avail. ‘You end up reaching a level of frustration,’ he said. ‘I think they have talked to someone along the line who convinced them that this procedure will save their life.’"

    8. On humans' difficulty in reasoning about probability and statistics:

      1. "Reasoning"
        • Bibliography on reasoning, for CSE 575 (Cognitive Science)

      2. Wainer, Howard (2007), "The Most Dangerous Equation", American Scientist 95(3) (May-June): 249ff.
        • "Ignorance of how sample size affects statistical variation has created havoc for nearly a millennium."

    9. Greengard, Samuel (2009), "Making Automation Work", Communications of the ACM 52(12) (December): 18–19.

      • "Today's automated systems provide enormous safety and convenience. However, when glitches, problems, or breakdowns occur, the results can be catastrophic."

    10. For a fictional approach, see:
      Asimov, Isaac (1950), "The Evitable Conflict", Astounding Science Fiction; reprinted in Isaac Asimov, I, Robot (Garden City, NY: Doubleday), Ch. 9, pp. 195–218.
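
      An aside on item 8 above: Wainer's "most dangerous equation" is De Moivre's
      formula for the standard error of the mean, sigma/sqrt(n), which says that
      averages of small samples fluctuate far more than averages of large ones.
      Here is a minimal Python sketch of that effect; the simulation and its
      parameter values are mine, for illustration only, and are not from Wainer's
      article:

          import random
          import statistics

          # De Moivre's equation: the standard deviation of a sample mean is
          # sigma / sqrt(n), so means of small samples fluctuate more.
          random.seed(0)
          population_mean, population_sd = 100, 15   # an IQ-like score scale
          trials = 10_000

          for n in (5, 50, 500):                     # three sample sizes
              means = [
                  statistics.fmean(random.gauss(population_mean, population_sd)
                                   for _ in range(n))
                  for _ in range(trials)
              ]
              print(f"n={n:3d}: sd of sample means = {statistics.stdev(means):5.2f}"
                    f" (theory: {population_sd / n ** 0.5:5.2f})")

      Running it shows the spread of sample means shrinking from about 6.7 at n=5
      to about 0.7 at n=500, which is Wainer's point: the smallest schools (or
      hospitals, or counties) will dominate both ends of any ranking by raw
      averages, through chance alone.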


  3. Moor, James H. (1985), "What Is Computer Ethics?", Metaphilosophy 16(4) (October): 266-275.


  4. LaChat, Michael R. (1986), "Artificial Intelligence and Ethics: An Exercise in the Moral Imagination", AI Magazine 7(2): 70-79.

    1. Here are two good follow-up essays; Dietrich argues that we should build robots that will be more moral than we are (and then we should "exit stage left"):

      1. Dietrich, Eric (2001), "Homo sapiens 2.0: why we should build the better robots of our nature", Journal of Experimental and Theoretical Artificial Intelligence 13(4) (October): 323-328.

      2. Dietrich, Eric (2007), "After the Humans Are Gone", Journal of Experimental and Theoretical Artificial Intelligence 19(1): 55-67.

    2. Fletcher, Joseph (1972), "Indicators of Humanhood: A Tentative Profile of Man", Hastings Center Report 2(5): 1-4.

    3. Frankenstein vs. Wiener
      • With links to Shelley's Frankenstein and to Wiener's God and Golem, Inc.

    4. Links to Karel Capek and R.U.R.:

      1. Joseph and Karel Capek
      2. Karel Capek
      3. R.U.R.
        1. Online version of the play
        2. Another online version

    5. A follow-up essay by LaChat, in which he argues that a "moral" robot "will have to possess sentient properties, chiefly pain perception and emotion, in order to develop an empathetic superego which human persons would find necessary and permissible in a morally autonomous AI":

      La Chat, Michael Ray (2003), "Moral Stages in the Evolution of the Artificial Superego: A Cost-Benefits Trajectory", in Iva Smit, Wendell Wallach, & George E. Lasker (eds.), Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. II (Windsor, ON, Canada: International Institute for Advanced Studies in Systems Research and Cybernetics): 18-24.

    6. LaChat discusses Asimov's 3 Laws of Robotics;
      here is a two-part discussion by Roger Clarke
      (for a toy coding of the Laws' priority ordering, see the sketch after this list):

      1. Clarke, Roger (1993), "Asimov's Laws of Robotics: Implications for Information Technology, Part 1", [IEEE] Computer 26(12) (December): 53-61.
      2. Clarke, Roger (1994), "Asimov's Laws of Robotics: Implications for Information Technology, Part 2", [IEEE] Computer 27(1) (January): 57-66.

    7. LaChat also discusses the role of personality and emotion in AI;
      here are some links to AI research on these topics:

      1. UB alumnus Reid Simmons's research on "social robots"

      2. MIT's "COG" project on humanoid robotics

      3. webpage on the cognitive science of emotion
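
      An aside on item 6 above: Asimov's Laws form a strict priority ordering (the
      First Law overrides the Second, which overrides the Third), so a choice among
      candidate actions can be caricatured as a lexicographic comparison. The toy
      Python below is purely illustrative; every name in it is invented here, and
      Clarke's articles discuss the complications any real implementation of the
      Laws would face:

          from dataclasses import dataclass

          # A toy caricature of Asimov's Three Laws as a lexicographic
          # preference: compare actions first on harm to humans (First Law),
          # then on disobedience (Second), then on damage to the robot (Third).

          @dataclass
          class Action:
              name: str
              harms_human: bool     # First Law concern
              disobeys_order: bool  # Second Law concern
              harms_robot: bool     # Third Law concern

          def best_action(candidates):
              # False sorts before True, so this tuple key enforces the
              # Laws' priority order exactly.
              return min(candidates, key=lambda a: (a.harms_human,
                                                    a.disobeys_order,
                                                    a.harms_robot))

          options = [
              Action("follow the order", harms_human=True,
                     disobeys_order=False, harms_robot=False),
              Action("shield the human", harms_human=False,
                     disobeys_order=True, harms_robot=True),
          ]
          print(best_action(options).name)   # -> shield the human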


  5. Turkle, Sherry (2004), "How Computers Change the Way We Think", Chronicle of Higher Education (January 30): B26-B28.

  6. Petersen, Stephen (2007), "The Ethics of Robot Servitude", Journal of Experimental and Theoretical Artificial Intelligence 19(1) (March): 43-54.

  7. Sparrow, Robert (2007), "Killer Robots", Journal of Applied Philosophy 24(1): 62-77.

  8. Anderson, Michael; & Anderson, Susan Leigh (2007), "Machine Ethics: Creating an Ethical Intelligent Agent", AI Magazine 28(4) (Winter): 15-26.

      See also:
    • Anderson, Michael; & Anderson, Susan Leigh (2010), "Robot Be Good", Scientific American 303(4) (October): 72–77.

  9. Choi, Charles Q. (2008), "Not Tonight, Dear, I Have to Reboot", Scientific American (March): 94-97.

    • "Is love and marriage with robots an institute you can disparage? Computing pioneer David Levy doesn't think so—he expects people to wed droids by midcentury. Is that a good thing?"

  10. Tanaka, Fumihide; Cicourel, Aaron; & Movellan, Javier R. (2007), "Socialization between Toddlers and Robots at an Early Childhood Education Center", Proceedings of the National Academy of Sciences 104(46) (13 November): 17954-17958.

  11. Wallach, Wendell; & Allen, Colin (2009), Moral Machines: Teaching Robots Right from Wrong (New York: Oxford University Press).

  12. Markoff, John (2009), "Ay Robot! Scientists Worry Machines May Outsmart Man", New York Times (26 July): 1, 4.

  13. Wagner, Alan R.; & Arkin, Ronald C. (2010), "Acting Deceptively: Providing Robots with the Capacity for Deception", International Journal of Social Robotics, to appear.



Text copyright © 2004–2013 by William J. Rapaport (rapaport@buffalo.edu)

http://www.cse.buffalo.edu/~rapaport/584/compethics.html-20130330