Computer Ethics II:
Should We Build Artificial Intelligences?
Last Update: Friday, 22 November 2024
Note 1: Many of these items are online; links are given where they are known. Other items may also be online; an internet search should help you find them.
Note 2: In general, works are listed in chronological order.
(This makes it easier to follow the historical development of ideas.)
§19.1: Introduction
(*) On the Golem, see the Philosophical, Historical, and Literary
Digression in §3.14 (p.67) of this book.
§19.2: Is AI Possible in Principle?
§19.3: What Is a Person?
which surveys several theories of consciousness and explores whether any
current or future AI systems could be considered "conscious"
according to any of them. (Spoiler: Their answer for current systems is
"no"; their answer for future systems is "yes".)
§19.4: Rights:
§19.6: Personal AIs and Morality:
Later, he added a "zeroth" law, specifying that the other three laws
held only if they did not conflict with it:

0. A robot may not injure humanity, or, through inaction, allow humanity to
come to harm.
Although the authors do not mention Asimov's laws,
Hadfield-Menell et al. (2017),
"The Off-Switch Game", discusses the conditions under which a robot has
an incentive to switch itself off. Their notion of the robot's
information about the human's "utility" seems akin to Asimov's second law.
See also:
which notes, "The irony is that the Three Laws of Robotics are flawed.
Asimov essentially used them as plot device[s] to drive stories about how the
ambiguities in the laws can still result in conflict and conspiracy."
§19.7: Are We Personal AIs?
§19.8: Questions for the Reader:
Kan, Michael (2024, 4 January),
"Google Taps Asimov's Three Laws of Robotics for Real Robot
Safety", PCMag
(Translation from
Rapaport, W.J. (1987). God, the demon, and the cogito)
For a philosophical analysis, see:
"[I]t seems probable
that once the machine thinking method had started, it would not take long
to outstrip our feeble powers. There would be no question of the
machines dying, and they would be able to converse with each other to
sharpen their wits. At some stage therefore we should have to expect
the machines to take control, in the way that is mentioned in Samuel
Butler's Erewhon."
Copyright © 2023–2024 by
William J. Rapaport
(rapaport@buffalo.edu)
http://www.cse.buffalo.edu/~rapaport/OR/A0fr19.html-20241122