From: "Rick Nooner" <rick / nooner.net>
>
> Yesterday at work we took an analysis program written in Ruby that we had been
> running on a Solaris box (Sun Blade 1500, 1 GB RAM, 1.5 GHz SPARC) and moved
> it to a Windows box (HP D530, 1 GB RAM, 2.8 GHz Pentium) to do performance
> comparisons.
> 
> The analysis builds a profile in memory from over 3.6 GB of data on disk.  On
> the Solaris box, it takes about 35 minutes and uses about 700 MB of RAM.  It
> would not complete on the Windows box using the full data set, bombing with
> "failed to allocate memory (NoMemoryError)".  There was nearly 800 MB of
> RAM free on the Windows box, as well as 4 GB of swap available.
> 
> Is Windows that inefficient with memory allocation, or is this a Ruby
> implementation issue on Windows?

Any chance the program does any large block allocations, say more
than 250 MB in one chunk?  I've noticed Windows performs poorly
when a program churns through lots of small memory allocations and
then occasionally wants to grab a big block.  I have a test program
(two, actually: one in Ruby, the other in C using malloc) that
doesn't fail on Linux or Darwin but does fail on Windows, even
though the Linux and Darwin systems had less physical RAM and less
swap than the Windows system with its 2 GB of RAM and 4 GB of swap.
When it fails on Windows, there's plenty of system memory still
available, but the Windows heap management has allowed the
process's virtual address space to become so fragmented that
there's no contiguous region left in which to map the large block.
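For illustration, here's a small-scale Ruby sketch of the allocation
pattern I mean (this is not my original test program, just the shape
of it): churn many small allocations, free some to leave holes, then
ask for one large contiguous block.  The sizes here are deliberately
tiny so it runs anywhere; the failure only shows up when the big
block approaches the free contiguous space left in a fragmented
2 GB process address space.

```ruby
SMALL = 64 * 1024        # 64 KB chunks; stand-in for the small churn
BIG   = 8 * 1024 * 1024  # one 8 MB block; scale this up to see failures

# Phase 1: churn lots of small allocations.
small_blocks = []
1_000.times { small_blocks << ("x" * SMALL) }

# Phase 2: free every other block, leaving holes in the heap.
small_blocks.each_index { |i| small_blocks[i] = nil if i.even? }
GC.start

# Phase 3: try to grab one big contiguous block.  On a fragmented
# address space this can raise NoMemoryError even though the total
# free memory is far larger than BIG.
begin
  big = "y" * BIG
  puts "large allocation succeeded (#{big.bytesize} bytes)"
rescue NoMemoryError
  puts "large allocation failed despite free system memory"
end
```

At these sizes it succeeds everywhere; the point is the pattern, not
the numbers.  Bump BIG toward a few hundred MB (and SMALL's count up
accordingly) and the Windows run starts failing long before Linux
or Darwin does.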


Regards,

Bill