Bob Hutchison wrote:

[snip]
> Well I tried your test on OS X. The Sync had no problem, the mutex
> showed the memory growth (though it eventually (fifth iteration I think)
> cleaned itself up). I modified your test to create exactly 1000 threads
> and call GC three times at the end, things were better, i.e. it released
> its memory more quickly than without, but still not good. I ended up with:
> 
>       GC.start
>       `sync; sync; sync`
>       sleep 1
>       GC.start
>       `sync; sync; sync`
>       sleep 1
>       GC.start
>       `sync; sync; sync`
>       sleep 1
>       GC.start
>       `sync; sync; sync`
>       sleep 1
> 
> and this made a bigger difference. The memory usage was much more
> tightly bound.
> 
> (And yes, the three calls to sync are also on purpose... in the late 70s
> through the 80s, calling sync once didn't guarantee anything, you had to
> call it a few times, three generally worked... I don't know the current
> situation because it is easy enough to type sync;sync;sync (well, in
> truth, I usually alias sync to the three calls))
> 
> But of course, the point is that despite appearances there is likely no
> memory leak at all on OS X, just some kind of long term cycle of process
> resource utilisation -- this is a complex situation, Ruby GC, process
> resource utilisation/optimisation, and system optimisation all
> interacting. Who knows what's actually going on.
Finally someone with some platform details!! <vbg>

OK ... here's my take

1. The OS "three-sync" thing is, as you pointed out, a throwback to the
days when you needed to do that sort of thing. It's superstition now.

2. IIRC OS X uses a BSD-type kernel rather than a Linux one, so at
least we know a different memory manager still needs "help" to deal with
this kind of application.

3. Typing a single "sync" on a *Linux* system when there's a lot of
"stuff" built up in RAM is a very bad idea. It will force the system
into an I/O-bound mode and lock everybody out until the kernel has
cleaned up after itself. Either OS X has a better memory and I/O manager
than Linux, or you didn't have a lot of "stuff" built up from this
simple test. The second and third syncs are both unnecessary and
harmless. :)

4. Deleting references to no-longer-needed objects and then explicitly
calling the garbage collector has a longer history and tradition than
UNIX. It is "standard software engineering practice" in any environment
that has garbage collection. Just last week, I had to stick such a call
into an R program to keep it from crashing (on a Windows machine with 2
GB of RAM!).
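In Ruby the pattern is just: nil out the reference, then call GC.start. A
minimal sketch (the array-of-strings workload here is made up for
illustration; GC.start and GC.stat are standard Ruby):

```ruby
# Sketch of "delete the reference, then collect" in Ruby.
# The workload is hypothetical; only GC.start / GC.stat are the point.

before = GC.stat[:count]                # number of GC runs so far

big = Array.new(100_000) { "x" * 100 }  # allocate a pile of objects
big = nil                               # drop the reference so they become collectable
GC.start                                # explicitly run the garbage collector

puts GC.stat[:count] > before           # → true (the collector ran)
```

Of course, whether the *process* hands that memory back to the OS is a
separate question -- which is exactly the platform-dependent behaviour
being chased in this thread.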

For the software engineering philosophers on the list, what's the
difference between a language that forces the engineer to explicitly
manage dynamic memory allocation and de-allocation, and one that
supposedly relieves the engineer of that need -- until you crash in a
production system that worked on smaller test cases a couple of months
ago? :)

5. Can somebody run these Ruby leak tests/demos on a Windows XP or 2003
Server with multiple processors? I'm really curious what happens.