Eleanor McHugh wrote:
> On 16 Apr 2008, at 19:44, Robert Dober wrote:
>> On Wed, Apr 16, 2008 at 6:23 PM, Eleanor McHugh
>> <eleanor / games-with-brains.com> wrote:
>>> On 16 Apr 2008, at 14:42, Phillip Gawlowski wrote:
>>>
>>>> I doubt, however, that there is a single undefined state in the Space
>>>> Shuttle's software. No uncaught exception, no reliance on language
>>>> features to do the right things, but well understood and diligent
>>>> implementation of those, together with rigorous QA.
>>>>
>>>
>>> It's a lovely idea, but ponder the impact of Gödel's Incompleteness
>>> Theorems or Turing's proof of the Halting Problem. In practice there
>>> are program states which can occur which cannot be identified in
>>> advance because they are dependent on interactions with the
>>> environment, or are artefacts of the underlying problem space.
>>>
>> I am not sure but on a first approach I believe that neither Gödel nor
>> Turing apply because they are talking about systems describing
>> themselves. IIRC it is a theorem in TNT(1) making an assumption about
>> TNT in the first case and a turing machine reading the description of
>> a turing machine on its tape in the second case.
>> I do not believe that Aircraft Control Systems have this degree of
>> self awareness, but I can stand corrected if I am wrong, because
>> although I have been taught a lot about TM and TNT I do not know a lot
>> about Aircraft Control.
>
> Any process that is algorithmic is necessarily implementable as a 
> Turing machine so I'd argue that the very act of coding the system 
> with a process defines TNT(1) whilst the target system itself becomes 
> TNT. Therefore until the system runs in situ and responds to its 
> environment one cannot make any firm statements regarding when the 
> system will halt. And if you can't tell when an autopilot will halt, 
> you have the potential for all kinds of mayhem...
>
> Of course this is a terrible abuse of Church-Turing, but it seems to 
> fit the real world pretty well.
>
>>> That's why run-time error handling and fail-safe behaviour are so
>>> important regardless of the rigour of QA processes.
>> That however I agree with!
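[To make the error-handling point concrete, a minimal Ruby sketch, assuming a hypothetical sensor read that may fail at run time in ways no amount of QA could enumerate in advance: the rescue clause degrades to a fail-safe value instead of letting an unhandled exception propagate. `read_altitude` and `:fail_safe` are illustrative names, not from the thread.]

```ruby
# Read a value from a sensor (passed in as a callable); any run-time
# failure or implausible reading degrades to a known-safe state rather
# than crashing the system.
def read_altitude(sensor)
  value = sensor.call
  raise ArgumentError, "implausible reading" unless value.is_a?(Numeric) && value >= 0
  value
rescue StandardError
  :fail_safe
end

read_altitude(-> { 3000 })           # => 3000
read_altitude(-> { raise IOError })  # => :fail_safe
read_altitude(-> { -5 })             # => :fail_safe
```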
>
> :)
>
>
> Ellie
> Who wonders what the hell this thread will look like in Google searches.
>
> Eleanor McHugh
> Games With Brains
> http://slides.games-with-brains.net
> ----
> raise ArgumentError unless @reality.respond_to? :reason
>
My thanks to the core contributors to this fascinating thread. It's 
stretched me well past my boundaries on several points, but also 
clarified some key lessons I've picked up from my own field (psychology 
& psychotherapy).

For me, the take-away here (and this is not at all news to me) is that 
valid formal approaches to reliability are efficient (at least at times) 
and powerful, and should definitely be used - WHEN WE HAVE THEM. The 
problem is that they all stop short of the full spectrum of reality in 
which our processes must survive. Thus we must ultimately leave the 
comfort of deduction and dive into the dragon realm of inferential 
processes. Ultimately, there simply is no substitute for, or adequate 
simulation of, reality. Sigh.

Thanks, again. I've much enjoyed plowing through these posts.

t.

-- 

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tom Cloyd, MS MA, LMHC
Private practice Psychotherapist
Bellingham, Washington, U.S.A: (360) 920-1226
<< tc / tomcloyd.com >> (email)
<< TomCloyd.com >> (website & psychotherapy weblog) 
<< sleightmind.wordpress.com >> (mental health issues weblog)
<< directpathdesign.com >> (web site design & consultation)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~