In article <5E300D89-09B9-4788-9E43-C2517C0E0A3C / games-with-brains.com>,
Eleanor McHugh  <eleanor / games-with-brains.com> wrote:
>On 16 Apr 2008, at 14:42, Phillip Gawlowski wrote:
>> I doubt, however, that there is a single undefined state in the Space
>> Shuttle's software. No uncaught exception, no reliance on language
>> features to do the right things, but well understood and diligent
>> implementation of those, together with rigorous QA.
>
>It's a lovely idea, but ponder the impact of Gödel's Incompleteness
>Theorems or Turing's proof of the Halting Problem. In practice there
>are program states which can occur which cannot be identified in
>advance because they are dependent on interactions with the
>environment, or are artefacts of the underlying problem space.

I'm not sure how the Halting Problem relates to the presence or
absence of undefined states. What does it mean to call a state
"undefined" anyway, in this context? Presumably there's always
a finite number of states, which may be extremely large for most
programs that perform useful calculations. If we start with very
small, simple (and not useful) programs, we can enumerate all
the states. Are any of these undefined? As we increase the size
of the program, the number of states increases, presumably at
an alarming rate. At what point do we become unable to identify
program states?
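
To make "enumerate all the states" concrete, here is a toy
illustration in C (my own construction, purely for the sake of
argument): a program whose complete state is a single 8-bit value,
so its state space has exactly 256 members and we can list every
one of them.

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      /* The whole state of this (useless) program is one 8-bit
         value, so the state space has exactly 256 members. */
      for (unsigned s = 0; s <= UINT8_MAX; s++)
          printf("state %u\n", s);
      return 0;
  }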

An example of a simple program that does a barely useful task
is one that reads an input level from a 16-bit A/D, say, and
writes half that value to a 16-bit D/A. Can we be confident we
can write a 100% reliable and correct program to perform this
task? If not, why not? If so, let us increase the complexity
of the task progressively in small increments. At what point
are we forced to admit that we cannot be sure our program does
what it is meant to do?
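
For concreteness, here's a minimal sketch in C of that starting
program; the memory-mapped register addresses for the converters
are invented for illustration:

  #include <stdint.h>

  /* Hypothetical memory-mapped 16-bit converter registers;
     the addresses are made up for this sketch. */
  #define ADC_IN  (*(volatile uint16_t *)0x40000000u)
  #define DAC_OUT (*(volatile uint16_t *)0x40000004u)

  int main(void)
  {
      for (;;) {
          uint16_t level = ADC_IN;  /* read the 16-bit input level */
          DAC_OUT = level >> 1;     /* write half that value out   */
      }
  }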

I'm not trying to prove you wrong; I just want to get a better
handle on the problem.

Francis