Brooks: Intelligence without Representation
Last Update: 28 April 2003
Questions:
Do we need to represent at all?
If so, what should be represented?
And how? E.g., logic? no logic?
Highlights of Brooks's theory:
Bottom-up approach to AI:
instead of decomposing "human-level intelligence" into parts (vision, language, etc.) and interfaces ...
... start with simpler systems that are "complete",
i.e., that can operate in the real world with appropriate sensory & acting abilities
e.g., robotic insects
i.e., follow the evolutionary path:
build mobility, vision, and survival tasks as the foundation for "true intelligence"
"these are the hard tasks" that "constrain" solutions to other problems
"Use the world as its own model"
by continually monitoring & sensing the world
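This design principle can be illustrated with a minimal sketch (hypothetical code, not Brooks's actual implementation): a reactive agent that re-reads its sensors every control cycle instead of consulting a stored internal model of the world.

```python
# Illustrative sketch of a reactive, model-free control loop in the
# spirit of "use the world as its own model". All names here
# (reactive_step, make_world) are invented for this example.

def reactive_step(sense, act):
    """One control cycle: sense the world right now, act on what was sensed."""
    reading = sense()           # always re-sense; never cache a world model
    if reading["obstacle"]:     # simple behavior layering:
        act("turn")             #   obstacle avoidance overrides moving forward
    else:
        act("forward")

def make_world():
    """A tiny simulated world standing in for real sensors and motors."""
    state = {"pos": 0, "obstacle_at": 3}
    def sense():
        return {"obstacle": state["pos"] == state["obstacle_at"]}
    def act(command):
        if command == "forward":
            state["pos"] += 1
        elif command == "turn":
            state["obstacle_at"] = None  # turned away; the path is now clear
    return state, sense, act

state, sense, act = make_world()
for _ in range(5):
    reactive_step(sense, act)
print(state["pos"])  # the agent advanced, turning once when it met the obstacle
```

Note that the agent keeps no map and no memory of the obstacle: the decision at each step depends only on the current sensor reading, which is the contrast Brooks draws with representation-heavy architectures.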
"Representation is the wrong unit of abstraction in building the bulkiest parts of intelligent systems"
so, possibly representation is OK for some things:
language
problem solving
reasoning
memory
"Representations...appear only in the eye or mind of the observer"
= Dennett's "intentional stance"
possibly, representations are ways for a researcher to describe what a robot does & how it works
Cog: Brooks's humanoid-robot project at MIT
Real question: What is the role of thought in action?
memory, reasoning, advice taking (learning by being told), etc., all require concepts & therefore representations
References:
Brooks, Rodney A. (1991), "Intelligence without Representation", Artificial Intelligence 47: 139-159.
Kirsh, David (1991), "Today the Earwig, Tomorrow Man?", Artificial Intelligence 47: 161-184.
Brooks, Rodney A. (1991), "Intelligence without Reason", IJCAI-91 (San Mateo, CA: Morgan Kaufmann): 569-595.
Brooks, Rodney A. (1996), "From Earwigs to Humans", Proceedings IIAS: The Third Brain and Mind International Symposium Concept Formation, Thinking and Their Development, Kyoto, Japan: 59-66.
Copyright © 2003 by William J. Rapaport (rapaport@cse.buffalo.edu)