My experience has been that Ruby programs run just fine as long as their
working sets are relatively small. (In the most important measure of
performance, the subjective one, they are "fast enough.") As they have
larger and larger sets of data to work on, they get slower and slower, at an
increasing rate. With programs that have large working sets (notice, I
didn't say "large programs" - this has nothing to do with line count), a
point comes at which Ruby isn't just too slow - it's too slow by a vast
amount. I don't think this has anything to do with GC, but it may have to do
with all the runtime hashing that Ruby has to do in order to be as dynamic
as it is.
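
To make that concrete, here is a rough benchmark sketch (my own, nothing
rigorous - the sizes and the per-record work are arbitrary stand-ins) that
times the same pass over working sets of increasing size. If the
per-element time climbs rather than staying flat, the program is slowing
down at an increasing rate, as described above:

require 'benchmark'

# Time the same per-record pass over working sets of increasing size.
# Linear scaling keeps the per-element cost flat; a climbing figure
# means the slowdown accelerates with working-set size.
[100_000, 400_000, 1_600_000].each do |n|
  working_set = Array.new(n) { |i| { :id => i, :value => i * 2 } }
  elapsed = Benchmark.realtime do
    working_set.each { |rec| rec[:value] + rec[:id] }
  end
  printf("n=%9d  total=%.3fs  per-element=%.1fns\n",
         n, elapsed, elapsed / n * 1e9)
end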

If I'm right about this, then you might be able to address it in some cases
by partitioning working sets across multiple processes or even multiple
process spaces.
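
For example (a minimal sketch of the idea, Unix-only since it relies on
fork; process_slice is just a stand-in for the real per-partition work),
you can fork one child per partition so that each process works on its
own slice and reports back over a pipe:

def process_slice(slice)
  slice.inject(0) { |sum, n| sum + n }
end

# Partition the working set across child processes. In a real split each
# child would load only its own slice rather than inherit the whole set
# from the parent, as this toy version does.
working_set = (1..1_000_000).to_a
partitions  = 4

readers = working_set.each_slice(working_set.size / partitions).map do |slice|
  reader, writer = IO.pipe
  fork do
    # Child: close the unused end, do the work, report the result.
    reader.close
    writer.puts process_slice(slice)
    writer.close
  end
  writer.close
  reader
end

results = readers.map { |r| r.gets.to_i }
Process.waitall
puts "total: #{results.inject { |a, b| a + b }}"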

On 5/23/06, Peter Hickman <peter / semantico.com> wrote:
> As to converting things to C: I've had to convert my graphics tools from
> Ruby to C to get the performance I required. But coding them in Ruby
> allowed me to play about with various algorithms to find the best one
> before committing my time to crafting the C code. I never expected Ruby
> (or anything but C) to be fast enough to process more than 400,000
> images on the kit that I had. Same with the database tools. I do not
> see this as a problem with Ruby. It would have been really nice, in
> terms of writing code, to be able to do everything in Ruby, but there
> is a trade-off. As you move away from assembler you start to trade
> finely honed code for faster development. Even C is a trade-off.
