Now we can start talking about the real numbers.  In my Ruby 1.6.7 on
Linux, gc.c defines:

#ifndef GC_MALLOC_LIMIT
#if defined(MSDOS) || defined(__human68k__)
#define GC_MALLOC_LIMIT 200000
#else
#define GC_MALLOC_LIMIT 8000000
#endif
#endif

I don't know the units for certain, but they appear to be bytes: xmalloc
in the same file adds the size of each request to a running counter and
triggers a collection once that counter exceeds GC_MALLOC_LIMIT.
Therefore, if we are not on MS-DOS or Human68k, up to about 8 megabytes
may be malloc'd between collections, so that is roughly the maximum
amount of garbage that we may retain.  For me, memory is relatively
cheap these days, and as I will likely have only one such process on a
single computer, 8 megabytes (more likely less than this, as some of
that space will be real Ruby objects that I need to keep) is a small
price to pay for the increase in execution efficiency.  Furthermore, we
can always redefine GC_MALLOC_LIMIT to reduce the maximum possible
wasted memory, as shown below.
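For example, because the definition is wrapped in #ifndef, the limit can
be overridden without patching gc.c; the 2 MB figure below is purely
illustrative:

/* Define the macro before gc.c sees its default, e.g. by adding
 * -DGC_MALLOC_LIMIT=2000000 to CFLAGS when building the interpreter
 * (assuming your build passes CFLAGS through to gc.c).  With this
 * hypothetical value, allocation triggers a collection after roughly
 * 2 MB instead of 8 MB. */
#define GC_MALLOC_LIMIT 2000000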

In fact, in my application I hope that once the Ruby code is parsed, my
C code takes over until the process finishes, so that I get only "C
speed" and never "Ruby speed" at all.  Ruby is used only for system
specification and configuration.  Based on your response alone, it seems
that I am on the right track for my (specific) application.

Regards,

Bill
===========================================================================
Joel VanderWerf <vjoel / path.berkeley.edu> wrote:
> William Djaja Tjokroaminata wrote:

>> 1) When Ruby memory is dynamic, we don't need to use ALLOC to free memory
>> periodically, because the garbage collector is already being called
>> often by Ruby internals.  If we use ALLOC, then the garbage collector is
>> called even more often, which may result in execution performance
>> degradation.

> Suppose your ruby code produces a large amount of garbage, but not 
> enough to trigger GC, and then your C code takes over for a while and 
> starts allocating. Unless you call back into ruby code or manually 
> invoke ALLOC or GC, that amount of garbage is, in effect, deducted from 
> the space your C code has to play with.
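A minimal sketch of the point Joel raises, using the standard Ruby 1.6 C
API (the struct and function names here are invented for illustration):

#include <stdlib.h>
#include "ruby.h"

/* Invented for illustration: any sizable allocation will do. */
typedef struct {
    double payload[1024];   /* about 8 KB */
} big_record;

static void c_phase(void)
{
    /* ALLOC goes through xmalloc, so these bytes count toward
     * GC_MALLOC_LIMIT and can trigger a collection by themselves. */
    big_record *tracked = ALLOC(big_record);

    /* Plain malloc is invisible to Ruby's byte counter, so garbage
     * left over from the preceding Ruby phase stays uncollected
     * while this memory competes with it for space. */
    big_record *untracked = malloc(sizeof(big_record));

    /* During a long C-only phase, a manual collection reclaims the
     * garbage the Ruby code produced before handing control to C. */
    rb_gc();

    free(untracked);
    free(tracked);          /* xmalloc'd memory in 1.6 pairs with free() */
}

Calling rb_gc() once at the start of the C phase, as in the sketch, is
one way to get back the space that the earlier Ruby garbage would
otherwise deduct from the C code's budget.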