Summary:
Over the last ten years, the Defense Department has spent many millions
of dollars on a new computer technology called `program
verification'--a branch of computer science whose business, in its own
terms, is to `prove programs correct'. ... What ... does this new
technology mean? How good are we at it? ... If it were possible to
prove that the programs being written to control automatic
launch-on-warning systems were correct, would that mean there could not
be a catastrophic accident? ... Do the techniques of program
verification hold enough promise so that, if these new systems could
all be proven correct, we could all sleep more easily at night? These
are the questions I ... look at in this paper. And my answer ... is
no. ... [T]here are fundamental limitations to what can be proven about
computers and computer programs. ... Just because a program is `proven
correct', ... you cannot be sure that it will do what you intend.
Abstract:
It is argued that current semantical techniques for analysing knowledge
representation systems (clear use/mention distinctions, strict
metalanguage hierarchies, distinct "syntactic" and "semantic" accounts,
even model-theory itself) are too rigid to account for the complexities
of representational practice, and ill-suited to explain intricate
relations among representation, specification, implementation,
communication, and computation. By way of alternative, the paper
advocates the prior development of a general theory of correspondence,
able to support an indefinite continuum of circumstantially dependent
representation relations ranging from fine-grained syntactic
distinctions at the level of language and implementation, through
functional data types, abstract models, and indirect classification,
all the way to the represented situation in the real world. The
overall structure and some general properties of such a correspondence
theory are described, and its consequences for semantic analysis
surveyed.
You might also be interested in a couple of my own papers
that discuss Smith's views:
Abstract:
John Searle once said: "The Chinese room shows what we knew all
along: syntax by itself is not sufficient for semantics. (Does anyone
actually deny this point, I mean straight out? Is anyone actually
willing to say, straight out, that they think that syntax, in the sense
of formal symbols, is really the same as semantic content, in the sense
of meanings, thought contents, understanding, etc.?)." I say: "Yes".
Stuart C. Shapiro has said: "Does that make any sense? Yes: Everything
makes sense. The question is: What sense does it make?"
This essay explores what sense it makes to say that syntax by
itself is sufficient for semantics.
Abstract:
A theory of "syntactic semantics" is advocated as a way of
understanding how computers can think (and how the Chinese-Room-Argument
objection to the Turing Test can be overcome): (1) Semantics, as the
study of relations between symbols and meanings, can be turned into
syntax--a study of relations among symbols (including meanings)--and
hence syntax can suffice for the semantical enterprise. (2) Semantics,
as the process of understanding one domain modeled in terms of another,
can be viewed recursively: The base case of semantic
understanding--understanding a domain in terms of itself--is
syntactic understanding. (3) An internal (or "narrow"), first-person
point of view makes an external (or "wide"), third-person point of
view otiose for purposes of understanding cognition.