- Knuth (1973: 4–6)
characterizes the informal, intuitive notion of "algorithm" as follows:
- "Finiteness. An algorithm must always terminate after
a finite number of steps…"
- "Definiteness. Each step of an algorithm must be
precisely defined; the actions to be carried out must be
rigorously and unambiguously specified for each case…"
- "Input. An algorithm has
zero or more inputs…"
- "Output. An algorithm has
one or more outputs…"
- "Effectiveness. [A]ll of the operations to be
performed in the algorithm must be sufficiently basic that they
can in principle be done exactly and in a finite length of time
by a [hu]man using pencil and paper…"
Note: To say that A is an algorithm for
computing a function f is to say that A is an algorithm
as characterized above and that, for any input i,
A's output for i = f's output for i;
i.e., for any i,
A(i) = f(i).
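As an illustration (mine, not from the text): a sketch in Python of an algorithm A for computing the factorial function f, annotated with Knuth's five criteria; the name `factorial_algorithm` is hypothetical.

```python
import math

def factorial_algorithm(n):
    """A candidate algorithm A for computing f(n) = n!.
    Finiteness: the loop runs exactly n - 1 times, then halts.
    Definiteness: each step (multiply, advance) is precisely defined.
    Input: one input, a non-negative integer n.
    Output: one output, the value n!.
    Effectiveness: each step is basic arithmetic, doable by hand."""
    result = 1
    for k in range(2, n + 1):
        result = result * k   # one precisely defined, basic step
    return result

# A(i) = f(i): the algorithm's output matches the function's value.
assert all(factorial_algorithm(i) == math.factorial(i) for i in range(10))
```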
- Computer programming languages (like Java, Lisp, Fortran, etc.) are
formal languages for expressing (or "implementing") algorithms.
- Every computer programming language is equivalent in expressive power
  to a Turing-machine programming language.
- I.e., every program in any programming language can be translated
into the language for programming Turing machines, and vice versa.
- I.e.,
any function that is computable by any programming language
is computable by a Turing machine, and vice versa.
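To make the equivalence claim concrete, here is a minimal sketch (mine, not from the text) of a Turing-machine program that computes the successor function on unary numerals, alongside the same function written directly in the host language:

```python
# A minimal Turing machine computing successor in unary notation:
# input "111" (three) becomes "1111" (four).
# Transition table: (state, symbol) -> (new_state, symbol_to_write, head_move)
TM = {
    ("scan", "1"): ("scan", "1", +1),   # move right across the numeral
    ("scan", "_"): ("halt", "1", 0),    # at the first blank, append a 1 and halt
}

def run_tm(tape):
    tape = list(tape) + ["_"]           # blank cell at the end of the tape
    state, head = "scan", 0
    while state != "halt":
        state, write, move = TM[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

# The same function, expressed directly in the host language:
def successor_unary(s):
    return s + "1"

assert run_tm("111") == successor_unary("111") == "1111"
```

The point of the sketch: the high-level one-liner and the Turing-machine transition table compute the same function, which is the "vice versa" direction of the equivalence in miniature.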
- Some real computer programs violate parts of definition 1:
- E.g., airline-reservation systems, ATMs, operating systems, etc.,
never terminate.
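A sketch (mine) of the reactive, non-terminating shape of such systems; in a real server the loop below has no exit, so Knuth's "finiteness" fails by design. The `None` sentinel is added here only so the demo halts:

```python
from collections import deque

def reservation_system(requests):
    """An event loop in the style of an ATM or reservation system:
    it is *meant* to run forever, handling one request after another.
    The None sentinel exists only for this demo; the real loop has no exit."""
    handled = []
    queue = deque(requests)
    while True:                      # no terminating condition in principle
        request = queue.popleft()
        if request is None:          # demo-only sentinel
            break
        handled.append(f"booked {request}")
    return handled

assert reservation_system(["seat 12A", "seat 12B", None]) == \
    ["booked seat 12A", "booked seat 12B"]
```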
- E.g., heuristic AI programs don't always compute exactly
  the function that they were written to compute, but only come very close.
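As a toy illustration (mine, and not an AI program): a greedy heuristic for making change that usually does well but does not always compute the exact (optimal) answer it was written to approximate:

```python
def greedy_coins(amount, denominations=(4, 3, 1)):
    """Heuristic: always take the largest coin that fits.
    Fast and usually close, but not guaranteed optimal."""
    count = 0
    for coin in denominations:       # denominations in descending order
        count += amount // coin
        amount %= coin
    return count

def optimal_coins(amount, denominations=(4, 3, 1)):
    """Exact minimum number of coins, by dynamic programming."""
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        best[a] = 1 + min(best[a - c] for c in denominations if c <= a)
    return best[amount]

# For amount 6 the heuristic takes 4 + 1 + 1 (3 coins),
# while the exact answer is 3 + 3 (2 coins).
assert greedy_coins(6) == 3 and optimal_coins(6) == 2
```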
- E.g., the "effectiveness" of "mundane" or everyday procedures
(like recipes) depends on
the environment in which they are executed.
- E.g., using a different brand of some ingredient can
ruin a recipe, or one chef's "pinch" of salt might be another's
1/8th teaspoon. And can you really execute a recipe "using
pencil and paper"?
- E.g., algorithms can apparently be written that can
perform an infinite computation in a finite amount of time
(by continually accelerating).
- E.g., we can sum the terms of an infinite sequence in
a finite amount of time if we take 1/2^n second to
add the n-th term: the total time is 1/2 + 1/4 + 1/8 + … = 1 second.
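The arithmetic behind this Zeno-style acceleration can be checked directly; a sketch (the function name is mine), using exact rational arithmetic:

```python
from fractions import Fraction

def total_time(terms):
    """Exact time to add the first n terms if the n-th addition
    takes 1/2**n second: 1/2 + 1/4 + ... + 1/2**n = 1 - 1/2**n."""
    return sum(Fraction(1, 2**n) for n in range(1, terms + 1))

# However many terms we add, the total time stays strictly under 1 second,
# yet it gets arbitrarily close: after n terms the shortfall is 1/2**n.
assert total_time(10) == 1 - Fraction(1, 2**10)
assert all(total_time(n) < 1 for n in (1, 10, 100))
```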
- Therefore, these programs
[i.e., the "real programs" referred to in premise 4]
do not implement Turing machines
(contrary to premise 3).
- Therefore, they
[i.e., the "real programs" referred to in premise 4]
are not computable. (But how can a real computer
program not be computable?!)