Stephen White <spwhite / chariot.net.au> wrote:
>
>On Mon, 12 Feb 2001, YANAGAWA Kazuhisa wrote:
>
> > Perhaps you would like to read some surveys on concurrent object-oriented
> > programming languages when revising your proposal, such as
> >
> >   ftp://ftp.cee.hw.ac.uk/pub/funcprog/nrs.coop96.ps.Z
>
>That was an interesting paper.
>
>My suggestion was along the lines of "Future RPC", and would be very
>difficult to implement on multiple CPUs with OS threads.
>
>The problem with OS threads is that every process can be doing so many
>things or waiting on so many things that the OS cannot keep track of it
>all. This makes threading into basically waking up every waiting thread
>and saying "did you want to do anything?".

I believe that with a good scheduling algorithm it should
be possible to largely avoid this.  I am out of my depth
here, though, and could easily be wrong.

>As Ruby implements Threads internally to the interpreter, and Ruby has
>a very intimate knowledge of itself, it would be able to wake and kill
>threads without actually assigning a physical thread to every Object.
>
>Here's a possible implementation:
>
>   Every object has a unique Object ID and a "Thread:" field.
>
>   During method calls, the return value and parameters have their Thread
>   field assigned to the called Object. This is to prevent access to
>   variables which may change. When the method returns, Thread reverts.
>
>   On Object access, the Thread field is checked to make sure that there is
>   no "Future RPC" result waiting (i.e., the value may still change). When
>   Thread reverts, access may go ahead.
>
>   When a method is called, the same thread is used to go into the call
>   (like now). If it pops in and out before it's pre-empted, then no
>   penalty is incurred.
>
>   If the called method is slow, the interpreter spawns a new thread to
>   continue the calling object in the next round of tasking.
>
>This would incur the cost of one C-style variable check per object
>reference, and slow method calls would incur an additional context
>switch.
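The scheme above behaves much like a future: a slow call returns
immediately, and only touching the result blocks.  Here is a minimal
sketch in plain Ruby using Thread and Queue from the standard
library; the Future class and its #value method are my own
illustrative names, not part of the proposal:

```ruby
class Future
  def initialize(&block)
    @queue = Queue.new                     # holds the eventual result
    Thread.new { @queue.push(block.call) } # run the slow call concurrently
  end

  # Touching the result blocks until the computation finishes,
  # mirroring the "wait until Thread reverts" check described above.
  def value
    @result = @queue.pop unless defined?(@result)
    @result
  end
end

f = Future.new { sleep 0.1; 6 * 7 }  # a slow "method call"
# ... the caller is free to do other work here ...
f.value                              # blocks until ready => 42
```

Note that #value memoizes the popped result, so repeated reads after
the first do not touch the queue again.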

The various attributes of an object may themselves be
objects, which contain objects, ad nauseam.  There is
no fast way to notify every object up and down that
chain that one of them is locked.  And given the
possibility (very real in the parsing case I sent you)
that an attribute of an object is the object itself,
naive code doing that notification could fall into
endless recursion.

(Of course the version I sent you is being modified in a
way that will be thread-safe.  But it is an example of
why implicit parallelism may cause problems.)
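To make the recursion hazard concrete, here is a toy sketch (Node and
its lock methods are invented for illustration) of naive lock
propagation versus a version guarded by a visited set:

```ruby
class Node
  attr_accessor :child, :locked

  # Naive propagation: recurses forever on a self-referential node.
  def lock_naive
    @locked = true
    @child.lock_naive if @child
  end

  # Tracking already-visited object IDs breaks the cycle.
  def lock_safe(seen = {})
    return if seen[object_id]
    seen[object_id] = true
    @locked = true
    @child.lock_safe(seen) if @child
  end
end

n = Node.new
n.child = n      # the attribute *is* the object itself
n.lock_safe      # terminates
# n.lock_naive   # would recurse until SystemStackError
```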

> > 3. Most important one: Is that model really so useful?
>
>I don't know. It would depend on the situation. This scheme doesn't help
>with OS threads or multiple CPUs but it does have the advantage of being
>automagic. :)
>
>Can you think of any scenarios where this automagicality would fail?

In addition to the above, consider an object for sending
calls to a database handle.  Frequently with database
handles you may create a temporary #table, manipulate it,
and query it.  This involves multiple slow operations but
no race conditions because this takes place on a table
that is local to your database handle.

If you internally multithread this, then you now get race
conditions where the database handle is switching between
serving thread A and thread B, both of which are doing
operations involving a table #tmp that each was expecting
to create, manipulate, then tear down...
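A rough sketch of that failure mode, with an invented FakeDBHandle
standing in for a real database connection (the temp-table semantics
are simplified):

```ruby
# FakeDBHandle is a stand-in: each handle owns its temp tables, and
# creating a table that already exists is an error, much as with
# SQL's CREATE TABLE.
class FakeDBHandle
  def initialize
    @tables = {}
  end

  def create(name)
    raise "#{name} already exists" if @tables.key?(name)
    @tables[name] = []
  end

  def drop(name)
    @tables.delete(name)
  end
end

db = FakeDBHandle.new

# Single-threaded use is safe: create, manipulate, tear down.
db.create("#tmp")
db.drop("#tmp")

# But interleave two logical threads on the same handle and the
# second setup step collides with the first:
db.create("#tmp")        # thread A's setup
begin
  db.create("#tmp")      # thread B's setup, interleaved => error
rescue RuntimeError => e
  e.message              # "#tmp already exists"
end
```

The interleaving is written out sequentially here to keep the
collision deterministic; with real threads the same steps simply
race.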

Cheers,
Ben