"Jim Weirich" <jweirich / one.net> wrote in message
news:m2wux3bys2.fsf / skaro.access.one.net...
> >>>>> "Sean" == Sean O'Dell <sean / celsoft.com> writes:
>
> Other responses addressed your explicit destruction question ... I'll
> give a response to your second question.
>
>     Sean> Also, a related issue...why isn't there a finalize call?  I
>     Sean> don't mean the finalizer where you can set a method to get
>     Sean> called after an object is gone, I mean, why isn't there a
>     Sean> call to an object's "def finalize...end" right *before* an
>     Sean> object goes away?  Is that in the works or is that just not
>     Sean> the Ruby way?
>
> Ruby avoids this weird situation by running the finalizer code *after*
> the object is already collected.  You arrange for a closure to handle
> the resources that need handling during finalization, but the closure
> has no reference to the original object, therefore it can't create a
> new reference to it.  In addition, Matz made the finalization code
> just a little clunky on purpose, to discourage using it gratuitously.

Well...I come mainly from C++, which doesn't do reference counting, so
perhaps reference counting isn't such a great idea.  I don't know...I just
know that being able to call a finalize method *before* the object goes
away is one of the great things about OOP.  You can encapsulate activities
inside an object without the outside world knowing everything going on
inside, and callers can still use it safely.
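For anyone following along, the pattern Jim describes looks roughly like
this.  A minimal sketch -- TempResource and CLEANUP_LOG are just
illustrations standing in for a real resource and real teardown work:

```ruby
# Sketch of Ruby's after-collection finalizer idiom.  The proc is built
# in a class method so it closes over the resource name only; capturing
# `self` inside the proc would keep the object alive forever.
class TempResource
  CLEANUP_LOG = []   # stands in for real teardown (closing handles, etc.)

  def initialize(name)
    @name = name
    ObjectSpace.define_finalizer(self, self.class.finalizer(name))
  end

  # Returns a proc that knows how to release the resource but holds
  # no reference back to the object itself.
  def self.finalizer(name)
    proc { CLEANUP_LOG << name }
  end
end

TempResource.new("db-handle")
# Once the object is collected (or at interpreter exit), the proc runs
# and appends "db-handle" to CLEANUP_LOG.
```

Note that the finalizer proc never sees the object, only what it closed
over -- which is exactly why it can't resurrect it.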

>     Sean> I don't fully understand why explicit object destruction
>     Sean> doesn't exist and why there is no object finalize call.  Is
>     Sean> it against Ruby philosophy and will never be there, or is it
>     Sean> just that Ruby is still young and it hasn't been implemented
>     Sean> yet?
>
> There are several reasons for using explicit object destruction...
>
>  1) To recover memory
>  2) To release non-memory resources
>  3) To use the "resource acquisition is initialization" idiom (RAII)
>
> With GC, (1) isn't much of a motivator.  A good garbage collector will
> generally outperform most manual allocation/destruction schemes.  With
> the inherent dangers involved with explicit destruction (dangling
> pointers, etc), it just doesn't seem worth it.

I don't know enough about garbage collection, but that's what everyone is
saying so I'm accepting that on faith right now.

> If (2) is your concern, there is nothing stopping you from doing an
> explicit release of your non-memory resources.  For example, if you
> want to make sure a file is closed when you are done with it, then
> close it.  Don't depend on object destruction to close the file.
>
> As for (3), a lot of people from the C++ world use explicit object
> destruction combined with the stack based allocation available in C++
> to manage resources.  For example, the following code will make sure
> the ifstream "f" is closed when the function exits and "f" is
> automatically destructed.

I mainly use destructors to perform object-destruction tasks.  Resource
allocations, etc. are only part of what might happen.  For example, suppose
I have a class that manages the configuration of an application.  On
construction, it opens a config file, reads it all in and then closes it.
During the life of the object, it provides information about how the
application is to perform its job.  During its life, the application itself
might change some of the settings based on user input.  On destruction, I
need the object to write out its current configuration information so all
the changes are saved to disk.
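Right now the closest I can get in Ruby is something like this -- a rough
sketch, where AppConfig, the key=value file format, and the block-form
constructor are all just my illustration.  The "destructor" work becomes
an explicit save, guaranteed by an ensure clause:

```ruby
# Sketch of the configuration class described above, with the
# write-on-destruction step made explicit.
class AppConfig
  def initialize(path)
    @path = path
    @settings = {}
    if File.exist?(@path)
      # Construction: read the whole config file in, then close it.
      File.open(@path) do |f|
        f.each_line do |line|
          key, value = line.chomp.split('=', 2)
          @settings[key] = value
        end
      end
    end
  end

  def [](key)
    @settings[key]
  end

  def []=(key, value)
    @settings[key] = value
  end

  # The "destructor" work, done explicitly: write the current settings
  # back out so changes made during the object's life are saved.
  def save
    File.open(@path, 'w') do |f|
      @settings.each { |k, v| f.puts("#{k}=#{v}") }
    end
  end

  # Block form: save runs even if the block raises.
  def self.open(path)
    config = new(path)
    begin
      yield config
    ensure
      config.save
    end
  end
end
```

It works, but the caller has to go through AppConfig.open (or remember to
call save), which is exactly the leak of internals I'm complaining about.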

I considered using "yield" to wrap things up (using block_given? to enforce
using blocks, throwing an exception when a block wasn't provided) to ensure
this happens, but that got really messy fast in one of my apps.  I had about
6 objects to create, all of which required special destruction tasks, and
that meant a 6-level-deep nested block, complete with 6 calls to yield.
The code looks horrible.  It's so much cleaner when you can create 6 objects
in succession, and they destruct in reverse order.  I could wrap all 6
objects inside a block with an ensure clause and then call their destructors
explicitly, but that requires that the outside world know about the
destructors, and that ruins the whole idea of encapsulation.
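To illustrate the shape I mean (trimmed to three levels, with a stub
Resource class standing in for my real objects):

```ruby
# Stub standing in for the real classes; it logs opens and closes so
# the ordering is visible.
ORDER = []

class Resource
  def self.open(name)
    r = new(name)
    begin
      yield r
    ensure
      r.close      # cleanup runs even if the block raises
    end
  end

  def initialize(name)
    @name = name
    ORDER << [:open, name]
  end

  def close
    ORDER << [:close, @name]
  end
end

# The nested-block style: every resource adds another level of
# indentation and another yield.
Resource.open(:db) do |db|
  Resource.open(:log) do |log|
    Resource.open(:cache) do |cache|
      # ... application work ...
    end
  end
end
# ORDER now records the opens in order and the closes in reverse:
# [[:open, :db], [:open, :log], [:open, :cache],
#  [:close, :cache], [:close, :log], [:close, :db]]
```

The reverse-order teardown does fall out of this style, but with real
destructors it would fall out of plain scope exit, with no nesting at all.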

The problem becomes even worse when you have objects that depend on other
objects, and the order in which they destruct becomes an issue.  Even if the
garbage collection scheme were changed to allow reference counting, that
wouldn't provide for destruction order.

I do come from a primarily C++ background.  I'm not afraid of Ruby, but I
need my stuff to be tight and operate well with the other processes on the
system.  Garbage collection and the lack of object destructors are really
freaking me out.  I'm very much used to controlling very closely when memory
is used and freed, and I depend on OOP to automate a lot of that.

Under what circumstances is garbage collection invoked?  I think I can live
with using yield and blocks to make sure cleanup calls are made on my
objects, but the state of memory is still an issue.  How often does
collection run, and what are all of its triggers?

    Sean