Hi,

I've had this idea to use threaded code
(http://en.wikipedia.org/wiki/Threaded_code) for the GC mark phase for
some time, as it's very much like a VM core, "dispatching" AST nodes
and objects respectively. Keeping branch misprediction overhead to a
minimum is very important for a fast mark phase.
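
In case the term is unfamiliar, here's a minimal, self-contained C
sketch of what threaded-code (computed goto) dispatch looks like in a
mark routine. The object layout is invented for illustration and is
not MRI's RVALUE; it only shows the dispatch pattern, and needs GCC or
clang for the &&label extension:

/* Minimal sketch of a direct-threaded mark loop. The object layout
 * below is invented for illustration; it is NOT MRI's RVALUE.
 * Requires GCC or clang for the computed goto (&&label) extension. */
#include <stddef.h>
#include <stdio.h>

enum otype { T_NIL = 0, T_PAIR, T_LAST };

struct obj {
    enum otype type;
    int marked;
    struct obj *a, *b;          /* children, used by T_PAIR */
};

static void mark(struct obj *o)
{
    /* One label address per object type; indexing this table and
     * jumping replaces the switch's compare-and-branch. */
    static const void *dispatch[T_LAST] = {
        [T_NIL]  = &&do_nil,
        [T_PAIR] = &&do_pair,
    };

#define NEXT(obj) do { o = (obj);                    \
                       if (!o || o->marked) return;  \
                       o->marked = 1;                \
                       goto *dispatch[o->type]; } while (0)
    NEXT(o);

do_nil:
    return;
do_pair:
    mark(o->a);                 /* recurse on one child ...           */
    NEXT(o->b);                 /* ... and tail-dispatch on the other */
#undef NEXT
}

int main(void)
{
    struct obj leaf = { T_NIL, 0, NULL, NULL };
    struct obj pair = { T_PAIR, 0, &leaf, &leaf };
    mark(&pair);
    printf("pair marked=%d, leaf marked=%d\n", pair.marked, leaf.marked);
    return 0;
}

Because NEXT is expanded at the end of each handler rather than
funnelling back through a single switch, each object type gets its own
indirect jump, which is what keeps misprediction down.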

Here's a link to a patch against REE
(http://github.com/FooBarWidget/rubyenterpriseedition187-248) with some
more context:

http://code.google.com/p/rubyenterpriseedition/issues/detail?id=28&colspec=ID%20Type%20Status%20Priority%20Milestone%20Summary

With the existing macros that ko1 extracted to vm_exec.h on 1.9
(perhaps with some minor refactoring for use outside the opcode loop as
well), this could be a minor patch on that MRI version.
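
To make the reuse idea concrete, here's a rough sketch of the shape
such an abstraction could take. The macro names below are mine and
deliberately don't match vm_exec.h (whose macros have different names
and arguments); the point is only that the mark handlers are written
once and compile to either computed goto or a plain switch:

/* Invented macro names, sketching a vm_exec.h-style dispatch
 * abstraction reused for marking. Not the actual 1.9 macros. */
#include <stddef.h>
#include <stdio.h>

enum otype { T_NIL = 0, T_PAIR, T_LAST };
struct obj { enum otype type; int marked; struct obj *a, *b; };

#ifdef __GNUC__                        /* direct-threaded variant */
# define DISPATCH_TABLE() static const void *tbl[T_LAST] = \
      { [T_NIL] = &&L_T_NIL, [T_PAIR] = &&L_T_PAIR }
# define HANDLER(t)        L_##t
# define DISPATCH(o)       goto *tbl[(o)->type]
# define DISPATCH_BEGIN(o)
#else                                  /* portable switch fallback */
# define DISPATCH_TABLE()
# define HANDLER(t)        case t
# define DISPATCH(o)       goto dispatch
# define DISPATCH_BEGIN(o) dispatch: switch ((o)->type)
#endif

static void mark(struct obj *o)
{
    DISPATCH_TABLE();
    if (!o || o->marked) return;
    o->marked = 1;
    DISPATCH(o);
    DISPATCH_BEGIN(o)
    {
    HANDLER(T_NIL):
        return;
    HANDLER(T_PAIR):
        mark(o->a);                        /* one child recursively  */
        if (!o->b || o->b->marked) return; /* the other via dispatch */
        o = o->b;
        o->marked = 1;
        DISPATCH(o);
    }
}

int main(void)
{
    struct obj leaf = { T_NIL, 0, NULL, NULL };
    struct obj pair = { T_PAIR, 0, &leaf, &leaf };
    mark(&pair);
    printf("pair marked=%d, leaf marked=%d\n", pair.marked, leaf.marked);
    return 0;
}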

Thoughts?

- Lourens

On 2010/01/14, at 20:35, Brian Mitchell wrote:

> On Thu, Jan 14, 2010 at 15:06, Roger Pack <rogerdpack2 / gmail.com> wrote:
>> that might be a more sane way of trying it out.
>> That being said, typically GC only uses 10% of a ruby's time (I
>> think), so it's not a super high performance hit currently.
>
> I'm not sure what kinds of app code you've been running, but I see
> somewhere between 20% and 60% spent in memory management. This
> includes well-tuned code which avoids garbage when possible. Keep in
> mind that you can't simply run a benchmark with no real heap. You'll
> have to artificially add 60MB to 200MB of objects depending on what
> your target runtime footprint will be.
>=20
>> I've also been working on some "native type" wrappers [1] whose goal
>> is basically to reduce the amount of memory Ruby has to traverse in
>> order to do its mark and sweep.
>>
>> ex:
>> a = GoogleHashSparseIntToInt.new
>>
>> a[3] = 4 # it saves these away as native C ints, so ruby's GC
>> basically ignores all members.
>>
>> I'd be happy to add more functionality [like native {saved away}
>> strings] or a wrapper for sets/priority queues/std::vector if anybody
>> would find it useful; just let me know.
>>
>> -r
>> [1] http://github.com/rdp/google_hash
>
> These sorts of libraries would be excellent to start publicizing. I'd
> also be interested in doing some smarter implementations for hashes
> along the lines of Lua's tables. Lua's ropes are also worth looking at
> since there is a lot of string concat heavy code out there.
>
> Io (iolanguage.com), for example, makes heavy use of unboxed type
> collections to allow fast vector operations.
>
> Brian.
>