ToRA wrote:

> On the typing issues, it seems that the third axis of declarative
> vs inferred typing has been missed.

That's because I don't understand it.

> For example, in Java/C# etc. you have to declare the type of any
> and every variable you create.  In languages like ML / Haskell,
> the compiler is able to infer the types of your variables (you may
> optionally explicitly declare the type of them yourself if you want).

Woah, that is _exactly_ like the Eclipse editor for Java. If you don't 
declare the type correctly, Eclipse will infer what it is (I suspect by 
running a Java parser inside Java), and at a keystroke will push the correct 
type in.

Conclusion: Java + Eclipse == Haskell ;-)
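
Sketching the two styles side by side, with Ruby as the running
example (the Java lines in the comments are from memory, for
contrast only):

  count = 42        # Java:   int count = 42;
  name  = "Matz"    # Java:   String name = "Matz";

  # Haskell sits in between: you write  count = 42  with no
  # annotation, and the compiler still infers a static type at
  # compile time. Ruby skips the static type entirely.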

> I'll admit that actually writing the algorithms to do the inference for
> something as dynamic as ruby would be hard (and probably provably
> "impossible") to cover every case, but if you're coding in the sane
> 90%ish of the language that doesn't use eval or equivalent to redefine
> every method call to something else, then you should be able to have
> the computer check that you are doing something right, or at least not
> something that is impossibly wrong.

We need look no further than the efforts to optimize Smalltalk to see how 
far this concept can (and can't) be pushed.
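
Concretely, here's the sort of runtime rewiring that a sane-90%
subset would have to rule out before inference could work. A made-up
sketch, not code from any real library:

  class Duck
    def quack
      "quack"            # an inferencer would call this a String
    end
  end

  # ...but at runtime anyone can swap the method out from under it:
  Duck.class_eval do
    def quack
      3.14159            # now it returns a Float
    end
  end

  puts Duck.new.quack    # => 3.14159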

> My .02

That's why I just added this:

http://www.c2.com/cgi/wiki?RubyVsJava

It compares Java's Log4j to its clone, Ruby's Log4r.

"So they both apparently solve exactly same problem in exactly the same way.

"log4j's src folder has 31,764 lines of code.

"log4r's src folder has 2,071 lines of code."

Some languages are just more fun to belt out lots of lines with, I guess...

-- 
  Phlip
  http://www.greencheese.org/ZeekLand  <-- NOT a blog!!!