>From: "David Douthitt" <DDouthitt / cuna.com>
>| Clemens Hintze <c.hintze / gmx.net> wrote:
>| [...]
>| And the speed issue ... I think there is no such issue, if you can
>| code the time-critical parts in C. I strongly believe that my app,
>| using scripting and C together, would be nearly as performant as
>| his, coded in C/C++ only. And perhaps more reliable, thanks to
>| Ruby's true garbage collector, for instance.
>
>This is something I came up against already - and just in the first
>two weeks :-)  I have a set of applications that were written
>in Perl 4, which scan the UNIX system logs and generate color-coded
>HTML pages for them.  Whether using Perl or Ruby, they take a LONG
>time - especially for an application which runs every five minutes.
>They also suck an incredible amount of CPU time, slowing everything
>down noticeably (including terminal response time).
>
>All they do is scan the log (41,000 lines plus) and generate HTML
>files based on them.  At one time I had them (Ruby version, Perl
>version) generating separate files for each system in the log;
>when I switched to using ksh and grep, the speed increase was incredible.
>I'm still stuck though, since scanning for one particular host
>(with 41,000 lines!) can take over 3 minutes.
>
>The application is quite simple really (two pages in Ruby) but
>the speed is in the tank.  Time for GNU Smalltalk?  Scheme?
>Eiffel?  Don't know.... still looking (and wanting to learn
>something new!)

Is that 41,000 *new* lines every 5 minutes?  Or is it e.g. 40,900 old
lines and 100 new ones?  What I'm driving at is: have you considered
ways of saving info from the nth run of the program that could
drastically simplify the work of the n+1th run?
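For instance (just a rough, untested sketch; the log path and the
state-file name below are placeholders, not anything from your setup),
a Ruby script could record how far into the file it got on each run,
then seek straight past everything it has already seen on the next run:

    # Minimal incremental-scan sketch.  LOG and STATE are placeholders.
    LOG   = "/var/log/messages"
    STATE = "/var/tmp/logscan.pos"

    offset = File.exist?(STATE) ? File.read(STATE).to_i : 0

    File.open(LOG) do |log|
      # If the log was rotated (now shorter than where we left off),
      # start over from the beginning.
      offset = 0 if log.stat.size < offset
      log.seek(offset)

      log.each_line do |line|
        # ... color-code this line and append it to the HTML output ...
      end

      # Remember how far we got for next time.
      File.open(STATE, "w") { |f| f.puts(log.pos) }
    end

That way the 40,900 old lines only ever get scanned once, and each
five-minute run pays only for the ~100 new ones (you'd also want the
HTML generation to append rather than rebuild everything from scratch).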

Hope this helps, Lew
---
Lew Perin | perin / acm.org | www.panix.com/~perin/