Computer Programs as Mathematical Objects
Last Update: Friday, 8 November 2024
Note 1: Many of these items are online; links are given where they are known. Other items may also be online; an internet search should help you find them.
Note 2: In general, works are listed in chronological order.
(This makes it easier to follow the historical development of ideas.)
§15.1.1: Bugs and Intended Behavior:
§15.1.2: Proofs and Programs:
§15.3: Program Verification:
§15.4: The Fetzer Controversy:
"thinking does not guarantee that you will not make
mistakes. But not thinking guarantees that you will."
The computers used to execute the programs, by contrast, are physical, in
space/time, and are only amenable to empirical confirmation. It's a
category mistake (a logical blunder) to think we can have the kind of
certainty about the performance of a causal system in the world that we can
have about an abstract object.
Simple example: A physical machine in a physical room runs on electricity
and will not complete its tasks without it. Even a verified program,
uploaded into the machine, cannot guarantee what will happen when the
machine executes that program because, for example, there might be a power
failure (as you also observe).
The blunder therefore is to confound abstract objects with causal systems.
We can possess deductively certain knowledge about algorithms relating
inputs to outputs as abstract entities. But we can only have inductive
confirmation for the performance of machines when they execute those
programs.
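The contrast can be sketched in code (the function and the test values are my illustration, not from the correspondence): the correctness of the algorithm, taken as an abstract object, admits a deductive proof from the program text alone, whereas running the program on a physical machine yields only inductive evidence of correct behavior.

```python
# A minimal sketch; the names here are illustrative, not from the text.
# Abstractly, we can prove that double(n) == 2 * n for every integer n
# by reasoning about the program text alone: n + n = 2 * n is a theorem
# of arithmetic. That is deductive verification of the algorithm.

def double(n: int) -> int:
    """Return twice n; provably correct as an abstract algorithm."""
    return n + n

# Empirically, executing the program on hardware gives only inductive
# confirmation: each successful run raises our confidence, but no finite
# set of runs rules out a power failure, a bit flip, or a faulty chip
# on the next run.
for n in [0, 1, -7, 10**6]:
    assert double(n) == 2 * n  # evidence about this machine, not proof
```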
Hence the offending sentence: "for Fetzer (and Cleland), computing is
about the world; it is external (or "wide"). Thus, computer programs can't
be ("absolutely" or "externally") verified, because the world may not be
conducive to "correct" behavior: …"; the rest of which is fine.
It would be more appropriate to say, "There are abstract computers (which
are not in space/time) that are amenable to deductive proofs (that
specified inputs yield specified outputs, on the basis of their programs).
But they are not the same as physical computers (which are in space/time),
which are only open to inductive reasoning. This means that the causal
performance of physical computers — even when executing deductively
verified programs — cannot be predicted with deductive certainty but only
with inductive fallibility."
And to which Fetzer replied:
§15.6.2: Models: Putting the World into Computers:
"Maybe I made a mistake. But [John] Cage says: A 'mistake' is
beside the point. Once anything happens, it authentically is."
—Richard Powers, Orfeo: A Novel (New York:
W.W. Norton, 2014), p. 11.
Fresco, Nir; & Primiero, Giuseppe (2013),
"Miscomputation", Philosophy & Technology
26:253–272.
Kripke, Saul A. (2023),
"The Question of Logic", Mind:
"Surely, if a machine set up to do [the logic in Bertand Russell's and
Alfred North Whitehead's] Principia
Mathematica concludes, let's say, that the distributive law fails in
Principia Mathematica's logic, one would think that something has gone
wrong with either the programming or with (say) the fuse, not that it
doesn't hold in Principia Mathematica. Why? Because if we have this
concrete object in front of us, our belief that it produces exactly the
same results that would hold in Principia Mathematica depends on
(a) a mathematical proof
that any abstract Turing machine so programmed
will do so (one that uses both logic and mathematics); (b) a physical
argument that this particular object, built up in this physical way, will
behave like this Turing machine program (that depends on our beliefs
in the laws of physics); and (c) an engineering hope
that, in fact, the
machine is well enough set-up that it won't blow a fuse at this point."
(p. 26)
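Kripke's example can be made concrete (this sketch is mine, not Kripke's): the distributive law itself is the kind of claim we can settle deductively, by a finite, surveyable truth-table argument about the abstract logic; whether any particular physical machine faithfully carries out that check is the further, empirical question his points (b) and (c) raise.

```python
from itertools import product

# The distributive law:  A and (B or C)  ==  (A and B) or (A and C).
# Exhausting all 8 truth assignments is a deductive verification of the
# law for the abstract (classical, two-valued) logic.
def distributive_law_holds() -> bool:
    return all(
        (a and (b or c)) == ((a and b) or (a and c))
        for a, b, c in product([True, False], repeat=3)
    )

# If a physical machine running this reported False, Kripke's point is
# that we would blame the hardware or the programming, not the law.
print(distributive_law_holds())
```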
containing philosophical remarks on the value of program verification.
The sentence at the top of page 359 is not quite right. We must distinguish
between computer programs (as implementations of algorithms), which are
deductively verifiable, and the physical computers that execute those
programs (once the programs are implemented in them). The programs as such
are abstract, not in space/time, and are amenable to deductive verification.
To which I responded:
Would it have been better for me to have said something like this?:
"…physical computing is about the world…
thus, the physically realized
execution of a computer program can't be ("absolutely" or "externally")
verified…"
Sure. Something like that. Not a big deal. I think you get the point
across, even if I might have explained it a bit differently. The emphasis
should be on abstract objects vs. causal systems, where the former are
amenable to deductive proof but the latter only to inductive confirmation.
Hence, we can have certain knowledge about the former but only uncertain
knowledge about the latter.
Rapaport, W.J. (2012), "IntenSionality vs. intenTionality".
"… how [can] we tell whether a given piece of live mathematical
reasoning corresponds to a given actual or envisioned formal proof …
How does one guarantee that the stated axioms or premises of the formal
proof are in fact necessary for the intuitive, pre-theoretic notions
invoked in the informal text? That is, how do we assure ourselves that
the formalization is faithful? This question cannot be settled by a
formal derivation. That would start a regress, at least potentially.
We would push our problem to the axioms or premises of that
derivation. Moreover, any formalization of a real-life argument reveals
missing steps, or gaps, and plugging those would sometimes require theses
much like Church's thesis"
(Stewart Shapiro 2013,
pp. 158–159).
Copyright © 2023‐2024 by
William J. Rapaport
(rapaport@buffalo.edu)
http://www.cse.buffalo.edu/~rapaport/OR/A0fr15.html-20241108