From: "William J. Rapaport"
Sender: "Philosophy of Computer Science, Spring 2007"
To: CSE584-SP07-LIST@LISTSERV.BUFFALO.EDU
Date: Mon, 26 Feb 2007 11:35:27 -0500
Subject: MORALITY AND ARTIFICIAL LIFE

------------------------------------------------------------------------
Subject: MORALITY AND ARTIFICIAL LIFE
------------------------------------------------------------------------

| Date: Sat, 24 Feb 2007 17:30:24 -0500
| From: "Mike Prentice"
| Subject: Morality and artificial life
| ...
| I just finished your article "How Minds Can Be Computational Systems"
| and was intrigued by the notion of Cassie believing herself to be in
| pain vs. being in pain vs. believing that she believes she is in
| pain. I was wondering what, exactly, the difference is. If we tell
| her she is in pain, i.e.,
| assert that she is in pain, and she believes it, and is programmed
| to take certain actions to avoid it, then:
| 1) Is she in pain?
| 2) Are we responsible? I.e., ethically speaking, are we obligated to
|    ease her pain?
| 3) Is she a research animal? If she gains what we think of as
|    consciousness, will she be entitled to rights?
|
| It seems to me that, long before we have viable conscious robots, we
| already have artificial life right now.
|
| I would like to use the game Elebits as an example (my girlfriend's
| favorite game on the Nintendo Wii). Elebits are "cute" little agents
| that run around and make noises in their virtual environment. You
| can explore their environment and wake them up, put them to sleep,
| capture them, and even terrify them. So, are the Elebits actually
| afraid, or merely programmed to mimic fear? What is the difference?
| They react in much the same way a mouse does when frightened, trying
| to run away or cowering in a corner.
|
| If you could harm an Elebit (it's a kids' Nintendo game, so of
| course you can't, but let's say you could) in the context of its
| virtual environment, would it be wrong?
|
| I haven't fully developed my own beliefs on this subject yet, but it
| seems to me that if a virtual agent is programmed to believe that it
| is in pain, or afraid, and reacts to stimuli in ways that mimic a
| human's or an animal's fear, then there is no difference. I would
| posit that the agent's "fear" and "pain" are, in fact, real fear and
| real pain, and should evoke the same reactions in us as when an
| animal feels fear and pain. Judging from my girlfriend's reactions,
| if an agent is "cute" enough, maybe they already do.
|
| Sorry for the length. It seems to be a complicated subject.
|
| -- Mike Prentice
|
| P.S. There is a computer game, Darwinia, in which the object is to
| save artificial lifeforms called Darwinians from rampaging viruses,
| mainly by killing the viruses. Since the Darwinians are cute and the
| viruses are ugly, does that make it okay to eradicate the viruses to
| save the Darwinians? In many cases, the viruses "act" more
| intelligently than the Darwinians! But we are told by "God" (the
| game's author) that Darwinians have digital souls and the viruses do
| not. (Darwinians can be reincarnated!)

We will be discussing precisely this sort of issue towards the end of
the semester (What is computer ethics?). To get a head start on
thinking about some of these issues, take a look at:

Lem, Stanislaw (1971), "Non Serviam", in S. Lem, A Perfect Vacuum,
   trans. by Michael Kandel (New York: Harcourt Brace Jovanovich, 1979).
   http://www.cse.buffalo.edu/~rapaport/Papers/Papers.by.Others/lem.pdf

LaChat, Michael R. (1986), "Artificial Intelligence and Ethics: An
   Exercise in the Moral Imagination", AI Magazine 7(2): 70-79.
   http://www.cse.buffalo.edu/~rapaport/Papers/Papers.by.Others/lachat86.pdf

Petersen, Steve (2007), "The Ethics of Robot Servitude", Journal of
   Experimental and Theoretical Artificial Intelligence.
   http://stevepetersen.net/professional/petersen-robot-servitude.pdf