On 16 Apr 2008, at 19:40, Francis Burton wrote:
> I'm not sure how the Halting Problem relates to presence or
> absence of undefined states. What does it mean to call a state
> "undefined" anyway, in this context? Presumably there's always
> a finite number of states, which may be extremely large for most
> programs that perform useful calculations. If we start with very
> small, simple (and not useful) programs, we can enumerate all
> the states. Are any of these undefined? As we increase the size
> of the program, the number of states increases, presumably at
> an alarming rate. At what point do we become unable to identify
> program states?

You're making the mistake of viewing program states as a discrete set,
all of which can be logically enumerated. If that were the case then,
whilst complexity would make the creation of large software difficult
to manage, it would still be theoretically possible to create
'correct' programs. However, Gödel's incompleteness theorems tell us
that any consistent mathematical system rich enough to express
arithmetic will contain propositions which can neither be proved nor
disproved by application of the system (i.e. they are unprovable,
which is what I meant by the casual short-hand 'undefined').

Both Turing machines and register machines are axiomatic mathematical
systems, and can therefore enter states which, in terms of their
axioms, are unknowable. A program is essentially a meta-state composed
of numerous transient states, and is thus a set of meta-propositions
leading to state propositions which need to be proved, any of which
may be unknowable. This applies equally to the runtime behaviour of
the program _and_ to the application of any formal methods used to
create it.
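
To make this concrete, consider a loop whose termination hinges on an
open mathematical question. The Ruby sketch below (purely
illustrative) terminates for every n anyone has tried, but proving
that it terminates for _all_ positive n is the Collatz conjecture,
which remains unproven:

def collatz_steps(n)
  steps = 0
  until n == 1
    n = n.odd? ? 3 * n + 1 : n / 2   # the Collatz state transition
    steps += 1
  end
  steps
end

puts collatz_steps(27)   # => 111, yet nobody can prove it halts for all n

Strictly speaking Collatz isn't known to be undecidable in Gödel's
sense, but it shows how innocuous state transitions can outrun our
ability to prove propositions about them.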

For most day-to-day programming Gödel incompleteness is irrelevant, in
the same way that quantum indeterminacy can be ignored when playing
tennis, but when you build large software systems which need to be
highly reliable, unknowable states do have the potential to wreak
havoc. This is why, beyond a certain level of complexity, it's helpful
to use statistical methods to gain additional insight beyond the
normal boundaries of a development methodology.
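
By statistical methods I mean things like randomised testing: throw
large numbers of random inputs at an implementation and check that an
invariant holds, giving probabilistic confidence rather than proof. A
minimal Ruby sketch, where halve() is a stand-in for whatever routine
is actually under test:

def halve(sample)
  sample >> 1              # right shift is integer division by two
end

10_000.times do
  input = rand(0x10000)    # random 16-bit value, 0..65535
  output = halve(input)
  raise "halve(#{input}) gave #{output}" unless output == input / 2
end
puts "10,000 random cases passed"

No number of passing cases proves correctness, but the failure rate
gives a measurable level of confidence.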

Now the Halting Problem is fascinating because it's very simple in
conception: given a program and a series of inputs (which in Gödel's
terms comprise a mathematical system and a set of axioms), determine
whether or not the program will complete. Turing proved that in the
general case this problem is insoluble, and not only does this place
an interesting theoretical limitation on all software systems, it
also applies to sub-programs right the way down to the finest grain of
detail. Basically, any time a program contains a loop condition the
Halting Problem will apply to that loop.
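
Turing's proof is a lovely piece of self-reference. Suppose we had a
genuine, always-correct decider halts?(program, input); the Ruby below
is necessarily hypothetical, because its point is that no such decider
can be written:

# Assume, for contradiction, a perfect oracle halts?(program, input).
def paradox(program)
  loop { } if halts?(program, program)   # run forever iff told "it halts"
end

# Now consider paradox(method(:paradox)). If halts? answers true, the
# call loops forever; if it answers false, the call returns at once.
# Either way the oracle is wrong, so halts? cannot exist.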

So in essence we're left with a view of software development in which  
we can never truly know if a program is correct or even if it will halt.

> An example of a simple program that does a barely useful task
> is one that reads an input level from a 16 bit A/D, say, and
> writes half that value to a 16 bit D/A. Can we be confident we
> can write a 100% reliable and correct program to perform this
> task? If not, why not? If so, let us increase the complexity
> of the task progressively in small increments. At what point
> are we forced to admit that we cannot be sure our program does
> what it is meant to do?

That's very difficult to say for sure. Conceptually I'd have  
confidence in the old BASIC standby of:

10 PRINT "HELLO WORLD"
20 GOTO 10

as this will run infinitely _and_ is intended to. But of course under  
the hood the PRINT statement needs to be implemented in machine code  
and that implementation could itself be incorrect, so even with a very  
trivial program (from a coder's perspective) we see the possibility of  
incorrect behaviour.
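
Francis's A/D example looks equally innocent at source level. A
minimal Ruby sketch, with stand-ins for the hardware access so it runs
anywhere (read_adc and write_dac are hypothetical helpers, not real
APIs):

def read_adc
  rand(0x10000)            # pretend register read: a 16-bit sample
end

def write_dac(value)
  # pretend register write; real code would poke a device register
end

loop do
  sample = read_adc
  write_dac(sample >> 1)   # halve the sample with a right shift
end

As with PRINT, the simplicity is borrowed: the real read_adc and
write_dac sit on top of device registers, bus timing and interrupt
behaviour, any layer of which can misbehave.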

Balancing this, however, is the fact that a program exists across a
finite period of time and is subject to modification and improvement.
This means that the set of axioms can be adjusted to match the set of
propositions more closely, in principle increasing our confidence
that the program is correct for the problem domain in question.

> I'm not trying to prove you wrong; I just want to get a better
> handle on the problem.

Douglas Hofstadter has written extensively on Gödel and computability
if you want to delve deeper, but none of this is easy stuff to get
into, as it runs counter to our common-sense view of mathematics.


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
----
raise ArgumentError unless @reality.respond_to? :reason