AI tries to answer: "How much of cognition is computable?"
So, KRR tries to develop a computational theory of how a cognitive agent does/can/should represent and use (i.e., reason about and with) information...
... about the world
... about an agent's beliefs about the world
Cognitive agent A knows/believes that α.
Representation R represents world-fragment W for A.
The representation R has a syntax & a semantics:
Syntax:
relations among the symbols ("markers") of the representation language
grammar:
"shape", symbol manipulation; "term", "wff"
which strings of symbols are well-formed, i.e., wffs (see the first sketch below)
Ontology(1): the non-logical details of the KR language
proof-theory:
axioms, rules of inference
which strings of wffs are proofs, and so which wffs are theorems (see the second sketch below)
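To make both halves of the syntax story concrete, here is a minimal sketch of a wff checker, assuming a toy propositional language: atoms P, Q, R; connectives ~ (not) and -> (if); every conditional fully parenthesized. The language is an illustrative assumption, not any standard KR formalism:

    def is_wff(s: str) -> bool:
        """True iff string s is a well-formed formula of the toy language."""
        s = s.strip()
        if s in ("P", "Q", "R"):                   # an atom is a wff
            return True
        if s.startswith("~"):                      # ~A is a wff if A is
            return is_wff(s[1:])
        if s.startswith("(") and s.endswith(")"):  # (A -> B) is a wff if A, B are
            inner, depth = s[1:-1], 0
            for i in range(len(inner) - 1):
                if inner[i] == "(":
                    depth += 1
                elif inner[i] == ")":
                    depth -= 1
                elif depth == 0 and inner[i:i + 2] == "->":
                    return is_wff(inner[:i]) and is_wff(inner[i + 2:])
        return False                               # any other string is ill-formed

    assert is_wff("(P -> ~Q)")
    assert not is_wff("P ->")    # a string of symbols, but not a wff

And a sketch of proof theory as pure symbol manipulation: it closes a hypothetical axiom set under a single rule of inference, modus ponens, so that "theorem" is defined without any appeal to what the symbols mean:

    def theorems(axioms: set[str]) -> set[str]:
        """Close an axiom set under modus ponens: from A and (A -> B), infer B."""
        thms = set(axioms)
        changed = True
        while changed:
            changed = False
            for a in list(thms):
                for cond in list(thms):
                    # detachment by shape alone: cond looks like (a -> b)
                    if cond.startswith("(" + a + " -> ") and cond.endswith(")"):
                        b = cond[len(a) + 5:-1]    # strip "(a -> " and ")"
                        if b not in thms:
                            thms.add(b)
                            changed = True
        return thms

    print(theorems({"P", "(P -> Q)", "(Q -> R)"}))   # Q and R become theorems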
Semantics:
relations between symbols & what they represent (see the sketch below)
meaning, truth
Ontology(2): theory of the represented world
what there is
what kinds of things there are (categories)
properties, relations
"syntax" of the world
Brian Cantwell Smith's paradoxical observation:
Semantic theory requires its own KR language!
Language of FOL
(Grammatical) Syntax:
punctuation
connectives
variables
function symbols (including 0-place ones, i.e., constants; see the sketch below)
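A minimal sketch of FOL terms as a recursive data type, assuming (as above) that a constant is just a 0-place function symbol; the names are hypothetical:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Var:
        name: str               # a variable, e.g. x

    @dataclass(frozen=True)
    class Fn:
        symbol: str             # a function symbol ...
        args: tuple = ()        # ... applied to 0 or more terms;
                                # 0-place application = a constant

    john = Fn("john")                                  # the constant john
    term = Fn("fatherOf", (Fn("fatherOf", (john,)),))  # fatherOf(fatherOf(john))
    print(term)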