Dave Thomas  <Dave / PragmaticProgrammer.com> wrote:
>jmichel / schur.institut.math.jussieu.fr (Jean Michel) writes:
>
>> The  individual memory/cpu  numbers are  quite interesting.  Contrary to
>> what you  say, they show ruby  behind perl/python most of  the time (the
>> reason ruby fares quite well is that *all* tests were easily implemented
>> in  ruby while  other languages  were maybe  not quite  so fun  thus are
>> missing points on  unimplemented tests).

The tests in this shootout don't seem entirely serious.
There are often substantial differences between the algorithms
implemented for different languages. Also, as noted above,
many languages lose points because nobody bothered to
implement all of the test cases.

>And the interesting thing about that is that Ruby was added to the
>tests late, which speaks well for its adaptability.

Well, it's not as if these test cases are hard programming
challenges.

>> Most interesting for me is the ackerman and heapsort which purely
>> test the speed of the interpreter and where ruby is about twice as
>> slow as most other interpreted languages (which score close to each
>> other, with the exception of lua which seems impressively fast).
>> This seems to imply that if we look hard at it, there should be a
>> way to speedup ruby by a factor of 2!
>
>Possibly, but I'm not totally convinced. I think that a true benchmark 
>would measure a language doing what it does in the real world. The
>ackerman results are interesting, but not really indicative of any
>performance problems I've personally ever experienced with Ruby.

The fibonacci and heapsort results are similar: both are
dominated by function calls and stack usage.
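For reference, the call-heavy shape of these tests can be sketched in a
few lines of Ruby (the exact arguments the shootout uses are assumptions
here; the point is just that nearly all the work is method dispatch):

```ruby
# Naive recursive definitions -- almost all of the running time is
# method-call overhead and stack frames, so these measure the
# interpreter's dispatch speed rather than any real algorithm.
def ack(m, n)
  return n + 1 if m.zero?
  return ack(m - 1, 1) if n.zero?
  ack(m - 1, ack(m, n - 1))
end

def fib(n)
  n < 2 ? 1 : fib(n - 1) + fib(n - 2)
end

puts ack(2, 3)   # => 9
puts fib(10)     # => 89
```

Bumping the arguments up (ack(3, N), larger fib) is how these tests blow
the call count into the millions without touching I/O or libraries.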

>Now, Ruby's speed isn't perfect. In particular, it is severely hampered 
>by the current garbage collection overhead once you start getting
>1,000's of objects on the heap, and that can slow down the "parse a file, 
>process the objects" kind of application. The new generational GC
>should help a lot there, and it will be interesting to see what effect 
>that has on the string-based tests.
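The kind of overhead Dave describes is easy to provoke: allocate a pile
of short-lived strings (the "parse a file" pattern) and watch how often
the collector runs. Note GC.stat is a later addition to Ruby, so this is
an illustration of the pattern rather than something available in the
interpreter under discussion:

```ruby
require 'benchmark'

# Count collector runs across a string-heavy workload.  Every
# iteration allocates fresh String and Array objects, so the heap
# fills quickly and the mark-and-sweep collector fires repeatedly.
before = GC.stat(:count)
time = Benchmark.realtime do
  lines = Array.new(100_000) { |i| "record-#{i}" }
  lines.each { |l| l.split('-') }
end
after = GC.stat(:count)

puts "GC runs during workload: #{after - before}"
puts "elapsed: #{'%.3f' % time}s"
```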

Has there been any public discussion of a new GC strategy?  I'm
not any kind of expert, but my understanding is that 'generational'
generically refers to methods for amortizing GC cost over time 
rather than to any algorithmic speedup in the overall time spent 
doing GC.  The amortizing would help, say, the responsiveness of
a user interface, but not a benchmark.

One strategy that I suspect would help Ruby's GC would be to
treat class definitions, method definitions, and global
objects differently from other garbage, since they are 
expected to be long-lived.  Perhaps they should go on
a separate, compacted heap.

-= Josh