On Thu, Apr 17, 2008 at 10:29 AM, Eivind Eklund <eeklund / gmail.com> wrote:
>  In the absence of hardware failure (something we of course can't prove
>  and is a problem in the real world), we can trivially prove halting
>  for a lot of

(trivial :)

> cases.  E.g, the simple program "halt" halts.

If 'halt' is defined as meaning that the program halts, then you're
not really proving anything here. It's like saying that true implies
true: you have to take that as an axiom.

> halt if
>  true halts.  halt if 1 == 1 would generally halt, assuming no
>  redefinition of ==.  And so on.

Once you invoke conditions, you are invoking some formal system of
axioms (like your assumption above), which you have to define. I grant
that there are very simple formal systems which are 'complete', but
they are generally not 'interesting'. If you want even simple integer
arithmetic[1], you're subject to Goedel's incompleteness theorem. And
this is just in the abstract. As soon as you put even the 'halt'
program on real hardware, all bets are off. You simply cannot prove
anything about it.
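
To make that "assuming no redefinition of ==" caveat concrete, here's
a quick Ruby sketch (reopening Integer; on the Ruby 1.8 of the day
you'd reopen Fixnum instead). I'm using 'exit' as a stand-in for the
'halt' program:

  # Reopen Integer and break the axiom the 'trivial' proof relies on.
  class Integer
    def ==(other)
      false            # every integer comparison now "fails"
    end
  end

  puts(1 == 1)         # prints "false", not "true"
  exit if 1 == 1       # the "obviously halting" guard never fires
  loop { sleep 1 }     # ...so this never halts

Silly, of course, but it shows how much the 'trivial' proof leans on
assumptions about the environment the program runs in.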

I think I'd rather fly on a plane whose avionics software had been
written by someone who assumed that they had made mistakes somewhere,
and that both their software and the hardware it was running on were
going to fail in some unforeseeable way (and had implemented the
appropriate safeguards, both in process and in error recovery), than
by someone who assumed that proving their software correct was
sufficient (though I realise that is a bit of a straw man :)
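
By 'appropriate safeguards' I mean the boring stuff. Here's a toy
Ruby sketch (emphatically not real avionics code; read_sensor, the
0.1 second budget and the fallback value are all invented for
illustration): bound the work with a timeout, rescue whatever goes
wrong, and degrade to a known-safe value rather than crash:

  require 'timeout'

  SAFE_DEFAULT = 0.0

  def read_sensor
    # stand-in for hardware/IO that may hang, raise or return garbage
    raise "sensor glitch" if rand < 0.5
    42.0
  end

  def safe_reading
    Timeout.timeout(0.1) { read_sensor }  # don't trust it to finish
  rescue Timeout::Error, StandardError => e
    warn "sensor read failed (#{e.message}), using safe default"
    SAFE_DEFAULT                          # degrade gracefully
  end

  puts safe_reading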

On a separate note, I'm continually surprised that computer 'science'
is so heavily biased toward the mathematical and theoretical side at
the expense of the empirical and pragmatic. Almost every single code
example in textbooks and papers has 'error handling omitted for
clarity'! It's no wonder we have all these planes falling out of the
sky :)
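
To pick the most mundane example: the textbook version of reading a
config file versus what the program actually needs (the file name is
invented for illustration):

  # The textbook version, 'error handling omitted for clarity':
  #   config = File.read("app.conf")
  #
  # The version that survives contact with a real filesystem:
  begin
    config = File.read("app.conf")
  rescue Errno::ENOENT, Errno::EACCES, IOError => e
    warn "couldn't read app.conf: #{e.message}"
    config = ""   # or abort, retry, fall back to defaults...
  end
  puts "read #{config.bytesize} bytes"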

Best regards,
Sean

[1] The number theorists are now shrieking in disbelief: Simple?
Integers? Arithmetic?! :)