Zed Shaw wrote:
> On Sun, 2006-08-27 at 12:04 +0900, M. Edward (Ed) Borasky wrote:
> 
>> For the software engineering philosophers on the list, what's the
>> difference between a language that forces the engineer to explicitly
>> manage dynamic memory allocation and de-allocation, and one that
>> supposedly relieves the engineer from that need -- until you crash in a
>> production system that worked on smaller test cases a couple of months
>> ago? :)
>>
> 
> This is exactly the problem I'm complaining about.  It's not that Ruby's
> GC isn't collecting the RAM fast enough or anything like that.  If Ruby
> runs out, or I call GC.start, then dammit, clear the dead objects.  It's
> not a suggestion, it's a command.  Collect the RAM.  Don't do it lazily
> until my OS or process crashes.
And when a lazy garbage collector meets a lazy OS memory manager, all
heck breaks loose. Three syncs, a garbage collect, repeat thrice? This
*is* the 21st century, right? Do the &$%^# I/O, dagnabbit!! :)
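
For the record, the shape of the complaint looks something like this
(a minimal sketch; the allocation size is made up, and the exact
behavior varies by Ruby version and malloc):

  big = Array.new(500_000) { "x" * 100 }  # grab ~50 MB of short strings
  big = nil                               # drop the only reference
  GC.start                                # "it's a command" -- full mark & sweep
  # The object slots get reclaimed for reuse, but the resident set
  # usually stays high: Ruby rarely hands freed pages back to the OS.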

> So, for me, despite everyone talking about timeouts and stuff (and I
> know people are just trying to find out why Ruby does this), it's all
> useless: I still have a program crashing, and I'm now looking at a
> total redesign to work around a GC.
> 
> And yet, all I really need is a single option I can pass to ruby that
> says "never go above X" instead of this pseudo-leak-until-crash crap.
If I may use the J word ... the Java run-time has knobs like that (-Xms
and -Xmx bound the heap). Set the heap too small and the garbage
collector is about all that runs. Set it too large, and the JRE and its
workload get paged or swapped out.

So you buy more RAM. Guess what? Some 32-bit Linux kernels perform
*worse* the more RAM they have to manage! The tables that track high
memory (> ~1 GB) are kept in low memory (< ~1 GB) -- figure roughly 32
bytes of struct page per 4 KB page, and it doesn't take much RAM before
those tables crowd out low memory. So you buy 64-bit processors. :)

But Windows has similar problems -- it's not just Linux. I don't know
about BSD or Mac OS X. As the marine biologist said, "Bah Humpback!"
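
Coming back to Zed's "never go above X": until Ruby grows such a
switch, about the best you can do from user land is a crude watchdog
thread. A sketch -- Linux-only since it polls /proc, and the 256 MB
cap is a number I made up:

  LIMIT_KB = 256 * 1024   # hypothetical cap: 256 MB

  def rss_kb
    # e.g. "VmRSS:  123456 kB" from /proc/self/status
    File.read("/proc/self/status")[/VmRSS:\s+(\d+)/, 1].to_i
  end

  Thread.new do
    loop do
      if rss_kb > LIMIT_KB
        GC.start                              # ask nicely first
        if rss_kb > LIMIT_KB                  # still over? die on our own terms
          Process.kill("TERM", Process.pid)
        end
      end
      sleep 5
    end
  end

Not pretty, but a supervised restart beats a slow leak into the swap
abyss.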

> 
>> 5. Can somebody run these Ruby leak tests/demos on a Windows XP or 2003
>> Server with multiple processors? I'm really curious what happens.
> 
> Luis Lavena ran our tests on Win32 and found that Mutex works but Sync
> doesn't, so I'm back to square one.  I'm probably going to do a fast
> patch so that people can just pick either and see if the leak goes away.
Yeah, I figured Windows was gonna fail one way or another. And it
probably varies depending on whether you use MinGW or Visual Stupido to
compile Ruby <weg>
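
For anyone playing along at home, the swap he's describing is roughly
this (a sketch; Sync comes from sync.rb in the stdlib, Mutex from
thread.rb, and both answer #synchronize, so they drop in for each
other):

  require 'thread'   # Mutex
  require 'sync'     # Sync

  # guard = Sync.new    # leaked in the tests above
  guard = Mutex.new     # held up on Win32 in Luis's runs

  guard.synchronize do
    # ... critical section ...
  end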

Well, enough trolling for one night ... I'm going to see if I can write
a web server in GForth in a 16-bit address space.

<ducking>