Simen Edvardsen wrote:
> On 8/10/06, Francis Cianfrocca <garbagecat10 / gmail.com> wrote:
>> This thread has probably gone on long enough ;-)
>
> Probably.
Well ... I've sort of stood back for the past day, except for pointing 
out some semi-hemi-demi-quasi-relevant web links courtesy of the ACM 
and del.icio.us. :) But the fundamental discussion here is highly 
interesting, and certainly relevant if Ruby is to be a "significantly 
better language" than its scripting cousins Perl, Python, and PHP, or 
than programming languages in general. Among the *popular* languages, 
Ruby seems to be more "computer-science-aware" than any of the others.

As I've noted, I got interested in proving programs correct in the early 
1970s, when I was in graduate school but not as a computer science 
student. I continued that interest, collecting books on the subject and 
attempting to introduce the concepts at my workplace when I became a 
professional programmer again upon my escape from graduate school. I 
still have most of the books, although I haven't looked at them recently.

I still look at the literature on this subject occasionally, although 
these days I'm much more interested in the Markov/queueing/Petri-net/ 
performance-modeling arena of computer science than in the 
"correctness" arena.


>> The thing that really distinguishes functional programs
>> is that you can reason mathematically (and one hopes, automatically)
>> about their correctness. Rather than being stuck with the whole sorry
>> round of unit-test/debug/regress/repeat that we use with Ruby and every
>> other non-functional language. Apart from the reference to Ada/Diana (a
>> sub-language with a rigorous formal semantics), I haven't heard anyone
>> on this thread try to relate this benefit of functional languages to
>> non-functional ones.
I don't think it's necessarily either functional syntax or semantics 
that makes this possible, but rather restrictions on the size of 
programs, strong and static typing, restrictions on non-reversible 
operations like assignment, etc. Yes, Dijkstra's maxim, "Testing can 
only show the presence of defects, never their absence," is true. But I 
have seen (tiny) FORTRAN programs proved correct and (tiny) Lisp 
programs proved correct, and as I noted a post or two ago, I think the 
original effort was applied to a program expressed as a (small) 
flowchart.
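
To show what I mean, here's a tiny sketch of the Floyd/Hoare style of 
reasoning those early efforts used, written in Ruby for this list's 
benefit (my own illustration; the originals were FORTRAN, Lisp and 
flowcharts). You annotate the loop with an invariant, and the invariant 
plus the exit condition yields the postcondition:

  # Precondition:  n >= 0
  # Postcondition: returns n * (n + 1) / 2, i.e. 0 + 1 + ... + n
  def sum_to(n)
    sum = 0
    i   = 0
    # Invariant: sum == i * (i - 1) / 2  and  0 <= i <= n + 1
    while i <= n
      sum += i
      i   += 1
    end
    # At exit i == n + 1, so the invariant gives sum == n * (n + 1) / 2.
    sum
  end

  puts sum_to(10)   # => 55

Even a proof that small takes a page of careful algebra by hand, which 
is part of why program size was such a constraint.
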
>> To answer someone else, the ability to automatically process programs is
>> why I'm interested in mini-languages (DSLs) written in Ruby that are
>> non-Turing-complete, because they may be decidable, if designed
>> properly.
>
> Any internal DSL has access to the whole of the parent language, and
> so it cannot be non-Turing-complete as long as the parent language
> is. Restricting access to features of the parent language to get this
> automatic correctness tester or whatever to work seems plain silly.
> Besides, there's no mathematical notation for a programmer's intent,
> and probably never will be, so a processor will probably never be able
> to prove that a program does what its author intended it to do.
Well ...

1. No programmer I've ever known has stood idly by while someone took 
Turing completeness away from him or her, either deliberately or 
accidentally. :) (Though a voluntarily restricted DSL is another 
matter; see the sketch after this list.)
2. Part of the whole "science" of proving programs correct is designing 
ways for the customer to formally specify what a program must do to be 
correct. A processor not only has to know what the programmer intended 
but also what the customer wanted. :)
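
For what it's worth, here is the kind of deliberately weakened internal 
DSL I take Francis to mean. This is a sketch of my own invention (the 
Recipe class and its vocabulary are hypothetical, not any real 
library). A document that sticks to the published vocabulary merely 
builds a finite list of tuples, so any analysis over it is decidable 
and trivially terminates; of course, nothing physically stops an author 
from reaching back into full Ruby, which is Simen's point.

  # A declarative deployment mini-language. The entire vocabulary is
  # two methods that append records to an array, so a conforming
  # document is finite data rather than a program.
  class Recipe
    attr_reader :tasks

    def initialize(&block)
      @tasks = []
      instance_eval(&block)
    end

    def copy(src, dest)
      @tasks << [:copy, src, dest]
    end

    def restart(service)
      @tasks << [:restart, service]
    end
  end

  r = Recipe.new do
    copy    "app.conf", "/etc/app.conf"
    restart "appd"
  end

  p r.tasks  # => [[:copy, "app.conf", "/etc/app.conf"], [:restart, "appd"]]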

In any event, the academic thread that appears to have the strongest 
legs these days in the whole realm of proving the reliability of our 
software is the Pi-calculus. Most of what I've seen (which is the 
abstract of one paper!) is in reference to Erlang. See 
http://www.erlang.se/workshop/2006/. I'm not at all familiar with Erlang 
the language, although I'm quite familiar with Erlang the queueing 
theorist. :) I downloaded and installed the language, though.
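
In case anyone wants the one-line version of what the Pi-calculus is 
about: everything reduces to processes handing names to one another 
over named channels. The standard communication rule, in textbook 
LaTeX notation, is

  \overline{x}\langle y \rangle.P \mid x(z).Q \longrightarrow P \mid Q\{y/z\}

i.e., a process sending the name y on channel x, in parallel with one 
receiving on x, hands y across, and both sides proceed with y 
substituted for z in Q. Since y may itself be a channel, the 
connectivity of the system can change as it runs, which is what makes 
the calculus a natural model for Erlang-style message passing.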