Last Update: 10 April 2007
Note: New or updated material is highlighted.
Entire courses have been devoted to this topic. For more information, do a Google search by clicking on the title above. I also have a large file of articles and newspaper clippings; stop by my office if you want to browse through it. (If I can find it ;-)
Boldface entries are of particular interest or importance.
Powers, Richard (1995), Galatea 2.2 (New York: Farrar, Straus, Giroux); LOCKWOOD Book Collection PS3566 .O92 G35 1995.
Miller, Christopher A. (guest ed.) (2004), "Human-Computer Etiquette: Managing Expectations with Intentional Agents", Communications of the ACM 47(4) (April): 30-61.
"As we construct machines that rival the mental capability of humans, will our analytical skills atrophy? Will we come to rely too much on the ability to do brute-force simulations in a very short time, rather than subject problems to careful analysis? Will we run to the computer before thinking a problem through?...A major challenge for the future of humanity is whether we can also learn to master machines that outperform us mentally."
Asimov, Isaac (1957), "The Feeling of Power", reprinted in Clifton Fadiman (ed.), The Mathematical Magpie (New York: Simon and Schuster, 1962): 3-14.
Dietrich, Eric (2001), "Homo sapiens 2.0: why we should build the better robots of our nature" [PDF], Journal of Experimental and Theoretical Artificial Intelligence 13(4) (October): 323-328.
La Chat, Michael Ray (2003), "Moral Stages in the Evolution of the Artificial Superego: A Cost-Benefits Trajectory", in Iva Smit, Wendell Wallach, & George E. Lasker (eds.), Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. II (Windsor, ON, Canada: International Institute for Advanced Studies in Systems Research and Cybernetics): 18-24.