Nat Pryce wrote:

> From: "Sean O'Dell" <sean / BUHBYESPAMcelsoft.com>
> 
>>In C++, I use the stack to create/destroy objects and if the objects
>>themselves need to dynamically allocate large amounts of memory, I use
>>the destructors to free up that memory.  Garbage collection and the lack
>>of destructor functions are a nightmare to me...it's those two features
>>precisely that will keep me from using Ruby in any large projects.
>>Don't get me wrong, I love Ruby to death, it's an amazing language...but
>>garbage collection is terrorizing me...I can't abide memory usage
>>building up to a critical point and then having this giant collection
>>process kicking in, dominating my application.
>>
> [snip]
> 
>>I wish we could ditch it for stack-based memory management, with real
>>object destructors to allow clean-up mechanisms for the dynamic memory
>>allocations.  But, I assume there's some fundamental design issues that
>>would make that impossible.
>>
> 
> Managing memory by allocating objects on the stack is fine, as long as
> object lifetimes can be directly related to lifetimes of lexical scopes.  In
> my experience this is not often true in large applications, especially in
> applications that have an object-oriented design.  That's why C++ has the
> new operator after all.  However, if you want to do the same thing in Ruby,
> use the "class allocates instance, passes instance to block, cleans up
> instance" idiom to enforce an object lifetime to be the same as a lexical
> scope.  Of course, you have to be careful not to keep a reference to that
> object outside the lexical scope, otherwise you end up with a dangling
> pointer that references an invalid object.  You can also explicitly call the
> GC at the end of those scopes -- this will give you pretty much the same
> behaviour as your C++ program.
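
(For reference, the idiom looks roughly like this -- ScopedBuffer and
its methods are made up for illustration:)

  class ScopedBuffer
    def self.open(size)
      buf = new(size)
      begin
        yield buf            # caller uses the object only inside the block
      ensure
        buf.release          # deterministic cleanup when the block exits
      end
    end

    def initialize(size)
      @data = "\0" * size    # stand-in for a large allocation
    end

    def release
      @data = nil            # drop the reference so the memory can be reclaimed
    end
  end

  ScopedBuffer.open(10_000_000) do |buf|
    # ... work with buf ...
  end
  GC.start                   # optionally force a collection right after the scope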


Collection does too much at once...it takes longer the more objects 
there are...that's not good scaling.  It really, really bugs me.

Object lifetime doesn't HAVE to be tied to a lexical scope to stay 
clean and tight, IMO.  I like to have parent objects that "contain" 
other objects, and when the parent's destructor is invoked, it 
destroys all of its children.  It's not as automatic as the stack 
(where an object's life is tied to its scope), but it's still very 
tight, and it's well worth the risk of the occasional "floating" 
object.
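
In Ruby you have to spell that containment out by hand; a rough 
sketch of what I mean (class names made up):

  class Child
    def initialize(path)
      @io = File.open(path, "w")   # some resource the child owns
    end

    def release
      @io.close unless @io.closed?
    end
  end

  class Parent
    def initialize
      @children = []
    end

    def adopt(child)
      @children << child
      child
    end

    # Explicit "destructor": releasing the parent releases every child it owns.
    def release
      @children.each { |c| c.release }
      @children.clear
    end
  end

  parent = Parent.new
  parent.adopt(Child.new("scratch.tmp"))
  parent.release   # has to be called explicitly; nothing on the stack does it for you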

The block idiom is cool, but it gets messy.  If you have 10 objects 
you want scoped to a block, you end up nested 20 spaces deep in the 
code...it gets ridiculous; after enough nesting I can't even tell 
what code belongs to which block.  It's a neat feature, but it only 
barely addresses the lexical scope issue.
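
Concretely, with the ScopedBuffer sketch above, scoping just three 
objects already looks like this:

  ScopedBuffer.open(1_000) do |a|
    ScopedBuffer.open(1_000) do |b|
      ScopedBuffer.open(1_000) do |c|
        # three scoped objects and the real work is already three levels deep;
        # with ten of them it disappears into the indentation
      end
    end
  end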

> Also, you can use finalisers instead of destructors.  Compared to C++
> destructors they are more flexible -- other objects can register a finaliser
> on an object so that they can clean up their internal state when that object
> is collected -- and safer --  you cannot get dangling pointers that
> reference objects whose destructors have been called.
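
(In Ruby that registration looks roughly like this -- the cache 
example is made up; note the proc is built in a separate method so it 
doesn't accidentally close over the tracked object and keep it alive:)

  class Cache
    def initialize
      @entries = {}
    end

    # The cache -- a different object -- registers a finaliser on obj so it
    # can drop its own bookkeeping once obj is eventually collected.
    def track(obj, value)
      @entries[obj.object_id] = value
      ObjectSpace.define_finalizer(obj, cleanup_proc)
    end

    private

    # Built here rather than inside track so the proc's binding doesn't
    # capture obj itself, which would keep obj alive forever.
    def cleanup_proc
      entries = @entries
      proc { |id| entries.delete(id) }
    end
  end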


Finalizers don't cut it for me.  Proper object destruction requires 
special tasks much of the time (freeing memory, closing files, etc.).  
A proper destructor helps keep the object's functionality 
encapsulated; it's an exit routine.  Last I heard, finalizers are 
called when the object is already gone...so you can't do much in the 
way of closing or freeing anything the object was maintaining...it's 
just not the same thing.
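
That's exactly it: the finalizer proc runs after the object is 
unreachable, so it can't call methods on the object anymore; anything 
it needs to close has to be captured up front when the finalizer is 
registered, something like this (TempLog is made up):

  class TempLog
    def initialize(path)
      @io = File.open(path, "w")
      # The handle is handed to a class method; the proc must not close
      # over self, or the object can never be collected at all.
      ObjectSpace.define_finalizer(self, self.class.finalizer(@io))
    end

    # Returns a proc that only knows about the IO handle, not the object.
    def self.finalizer(io)
      proc { |_id| io.close unless io.closed? }
    end
  end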

I realize it's probably a trade-off for some other cool features, but to 
me it's just not OOP if you don't have destructors, and finalizers are 
not destructors.

> Finally, you shouldn't be terrorized by GC suddenly slowing your program
> down.  Empirical studies from around 10 years ago showed that conservative
> garbage collectors had comparable performance to manual memory management --
> for some applications GC was faster, for some slower, but on average the
> same -- and garbage collectors have improved a lot since then.  Manual
> memory management can also take over your program at unexpected times; have
> you ever looked at the amount of work malloc and free have to do to avoid
> heap fragmentation, or how reference counting causes poor locality of
> reference and thereby lots of cache misses?


I'm not actually terrorized by the thought of slow-downs...just by the 
thought of embedding Ruby in any long-lived portions of my 
applications, or anywhere iterations create objects, potentially 
building up huge heaps of unused ones and then glitching while the 
collector cleans up.  I'm just very cautious about where I put Ruby 
code.  I'd be much more comfortable and less suspicious if objects had 
scopes, so I could guarantee they died at a certain point without 
having to invoke collection (and still risk ending up with objects 
alive that I want dead).

I guess I just want Ruby to succeed and perhaps become a widely accepted 
platform for developing applications.  Look at the poor fellow with that 
graphics application.  I bet he had to sweat a little to get someone to 
accept Ruby on that project, and a fundamental design issue with Ruby's 
memory management is causing him to have to insert hack code to keep it 
running smoothly.  We shouldn't have to do that.  Objects should go away 
when told to, memory levels should be where we design them to be, and 
nothing should be running in the background unless we tell it to.  It's 
really that simple.  All the studies in the world aren't going to make 
me willing to abide that in a real application, no matter how cool the 
language is.

	Sean