KRR is a branch of AI; therefore, there is a prior question:
What is AI?
2 contrasting definitions: Minsky, Boden
3 goals of AI research:
AI as advanced CS, as engineering
AI as computational psychology
AI as computational philosophy
KRR = computational theory of how a cognitive agent (including a
computer) does, can, or should represent and use information, where
"use" = reason with and reason about.
See 4 quotations (Hirst, Edelman, Waltz, McCarthy)
An interesting goal: read a chapter of a book, and answer
the questions at the end of the chapter (= general reading
comprehension); see Reddy; see Friedland et al.
Why "knowledge" representation and reasoning (as opposed, say, to
"information" R&R or just plain "data structures" or "data bases")?
in philosophy, knowledge = justified, true belief
Plato, Theaetetus, ~2400 years ago
cf. Gettier: JTB not a sufficient condition for K
i.e., possible to have JTB without K
better names: belief R&R, information R&R
in fact, "K" is really a synonym for these.
Note: one branch of KRR is representing and reasoning
about "knowledge" (= epistemic logic)
4 questions:
What is to be represented? (what is the domain/ontology?)
How is it to be represented? (what is the representation language?)
How much of the domain needs to be represented?
What can/should be done with it? (how should it be
used/processed? what is the inference mechanism?)
Basic formula: R represents W
R: representation; syntax
W: what is to be represented; ontology
"represents": relation between R and W; semantics
2 senses of the basic formula:
R is a representation of a world
i.e., KR is the study of how to computationally
represent "the world"
but where is the R? (in the world? in a mind?)
So: new formula:
cognitive agent/computer C uses R to represent W.
e.g., C knows/believes that P,
where P is a proposition (= meaning of
a sentence, object of belief, bearer
of truth value); contrast this with
knowing a thing (noun phrase) or knowing
how to do an action (verb phrase).
know/believe are propositional
attitudes; others are: wish, judge,
hope, etc.
See SIGART 70 (Feb 1980).
R: the KR could be any of the following kinds (syntactic
issues):
linguistic
declarative (knowing that)
e.g., logic/math (2+2=4),
empirical/historical (Bush is President),
definitional (GCD =def largest positive
integer that divides two numbers),
algorithmic (to compute GCD, do...)
procedural (knowing how)
e.g., knowing how to ride a bike,
knowing how to compute GCD; knowing
an algorithm and how to use it.
knowing who (acquaintance)
e.g., knowing who Bush is, knowing
Bush personally; cf. Boer & Lycan
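The declarative/procedural contrast above can be illustrated with GCD: one function transcribes the definition ("largest positive integer that divides both"), the other encodes a procedure (Euclid's algorithm). A minimal sketch; the function names are mine:

```python
def gcd_declarative(a, b):
    # "Knowing that": GCD =def the largest positive integer
    # that divides both a and b (direct transcription of the definition).
    return max(d for d in range(1, min(a, b) + 1)
               if a % d == 0 and b % d == 0)

def gcd_procedural(a, b):
    # "Knowing how": Euclid's algorithm, a procedure for computing
    # the same thing without consulting the definition.
    while b:
        a, b = b, a % b
    return a
```

Both compute the same relation; the first "knows that", the second "knows how".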
diagrammatic/pictorial
analog vs. digital
sensory/"perceptual"; cf. Barsalou
distributed vs. local
permanent vs. re-created as needed
other syntactic issues:
must representations exist? (Brooks:
intelligence without representation,
intelligence without reason)
are the above alternatives mutually
exclusive?
maybe there are representations
of all of these kinds, serving
different purposes; if so, how should
they interact?
maybe some things that can
be represented don't have to be.
W (ontological issues):
what's represented?
facts (truths) about world? (the way the
world is?)
what about falsehoods?
surely we have to represent
them in order to assert that they're
false!
= problem of negative existentials:
how to represent "there are
no elephants in my refrigerator",
"there are no unicorns"
by negation?
by a non-existing
elephant?
by not saying anything?
cf. Hirst
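One of the options above, "by not saying anything", can be sketched with a closed-world assumption over a toy KB of positive facts (the fact tuples are illustrative, not from any particular KR system):

```python
# Toy KB of positive (kind, location) facts only; the closed-world
# assumption handles negative existentials: whatever is not
# represented is taken to be false.
kb = {("milk", "refrigerator"), ("cheese", "refrigerator")}

def exists_in(kind, location):
    return (kind, location) in kb

def no_such_thing(kind, location):
    # "There are no elephants in my refrigerator" comes out true
    # because no elephant fact is stored, not because a negated
    # fact or a non-existing elephant is represented.
    return not exists_in(kind, location)
```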
what world?
real?
possible? (way the world might be)
fictional?
mental representation of any of these?
which could be wrong!
do we need 2nd-order representation?
R represents B[elief]
B represents W
see Smith,
"Correspondence Continuum"; Minsky on models
different "guises", ways of thinking about W
e.g., Bush, President of the US,
Commander-in-Chief
e.g., morning star/evening star
intensional representation
(see SNePS webpage, Hirst paper cited
above)
denotation/reference vs. "sense" (Frege)
should everything be represented?
or can we get away with representing only
some things, and inferring the
rest?
if so, what should be represented?
(cf. axioms vs. theorems)
need to represent both specific info
and general info
specific: Fido is a dog, Fido barked at 3:00am
general: Dogs are animals; dogs bark
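The axioms-vs.-theorems point above can be sketched as forward chaining: store a specific fact and a general rule, and derive the rest rather than representing it explicitly. A minimal sketch under my own (hypothetical) fact and rule encoding:

```python
# Specific info: stored facts, as (predicate, subject) pairs.
facts = {("dog", "Fido")}

# General info: rules of the form "everything that is a PREMISE
# is also a CONCLUSION" (e.g., dogs are animals).
rules = [("dog", "animal")]

def infer(facts, rules):
    # Forward-chain until no new facts are derivable; derived facts
    # play the role of theorems, stored facts the role of axioms.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subj in list(derived):
                if pred == premise and (conclusion, subj) not in derived:
                    derived.add((conclusion, subj))
                    changed = True
    return derived
```

"Fido is an animal" is never stored, yet it is in the inferred closure.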
the representation relation (R represents W)
(semantic issues):
soundness: anything that's represented is
in W (i.e., there's nothing false or
fictional, with respect to the chosen W)
completeness: all of W is represented (somehow,
even if only by inference)
more later...
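Soundness and completeness of the representation relation can be sketched by idealizing both R and W as sets of facts (an oversimplification: inference is ignored, so "complete" here means everything in W appears in R explicitly):

```python
def sound(R, W):
    # Anything represented is in W: nothing false or fictional
    # with respect to the chosen world.
    return R <= W

def complete(R, W):
    # All of W is represented.
    return W <= R
```

A partial model is typically sound but incomplete: `R` is a proper subset of `W`.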
The cognitive agent (human or computer):
BDI cycle:
cognitive agent has knowledge (Beliefs) about the actual
world or about some (possibly fictional) domain,
including general background knowledge, "situated"
knowledge about current environment (perhaps as a
result of perception)
cognitive agent has goals (Desires)
cognitive agent forms Intentions to act, based on B&D
cognitive agent acts, and thereby changes the world
go back to beginning of cycle
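The cycle above can be sketched as a loop. This is a schematic sketch, not a full BDI architecture; the `perceive` and `act` functions are hypothetical hooks supplied by the environment:

```python
def bdi_cycle(beliefs, desires, perceive, act, steps=3):
    # Beliefs and desires are sets of facts/goals; each pass is one
    # turn of the Belief-Desire-Intention cycle described above.
    for _ in range(steps):
        beliefs |= perceive()               # update situated knowledge
        intentions = [d for d in desires
                      if d not in beliefs]  # commit to unmet desires
        for goal in intentions:
            act(goal)                       # acting changes the world
            beliefs.add(goal)               # (optimistically) record success
    return beliefs
```

Note that the recorded beliefs can be wrong or stale, which is exactly the incompleteness/unsoundness point made below.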
agent's internal representation of external world
is always incomplete and usually contains errors,
i.e., is unsound, due to:
changes in world (world isn't static)
agent's inability to learn everything needed
in reasonable time
limits of agent's KR system
possibly noisy
possibly incapable of representing some
things (cf. insects, dogs)
"Horatio's Principle" (from Hamlet): "There
are more things in heaven and earth than are..."
represented/representable in your KB
"dreamt of in your philosophy"
cf. Gödel's Incompleteness Theorem:
There are more truths than theorems.
possible advantage of partial model:
simpler; can infer "missing"
aspects