>can anyone sum up the overall conclusions of this lengthy discussion?
>that's what i'd like to hear.
>i.e. does any encoding scheme out there do the job, the whole job, and
>nothing but the job? or are they all flawed and somebody someday needs
>to sit down and figure the problem out and fix it for good?

Why yes, I would be glad to offer my opinions and thus
restart the whole thing from the beginning.

Conclusions:

1--There is no one Unicode encoding scheme that fits
all purposes perfectly; that is why there are so many
schemes.  I like UCS-2, but it's not very
ideologically pure (merely fast and convenient).
Since I don't see Ruby as a high-performance language
but rather as a high-versatility one, UTF-* might be
a better fit.  It really doesn't seem all that
important to me.
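To make the trade-off concrete, here is a small sketch using modern
Ruby's Encoding API (which postdates this thread); the same characters
cost different numbers of bytes under different schemes, and UTF-16LE
stands in for UCS-2 for BMP-only text:

```ruby
s = "héllo"                    # five characters

utf8  = s.encode("UTF-8")      # variable-width: ASCII is 1 byte, "é" is 2
utf16 = s.encode("UTF-16LE")   # fixed two bytes per BMP character, like UCS-2

puts utf8.bytesize   # => 6  -- compact for mostly-ASCII text
puts utf16.bytesize  # => 10 -- uniform width, fast indexing
```

UCS-2's appeal is that fixed width: character N is always at byte 2N,
which is what makes it "fast and convenient" at the cost of doubling
the size of ASCII-heavy text.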


2--All that I, and most people who use languages to
handle text, want is to think in terms of strings of
characters, and to be able to do I/O on those strings
as byte streams in appropriate encodings.  Although
everything from MFC (yuck) to Java to C# to Angband
LUA handles this basic functionality more or less
invisibly, I don't think the will to bring Ruby up to
date is there.
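The character-view-in, byte-stream-out model described above can be
sketched in modern Ruby (again, an API that arrived well after this
discussion):

```ruby
text = "résumé"                 # six characters, however they are stored

# Serialize to raw bytes in a chosen encoding for I/O...
bytes = text.encode("UTF-8").b  # .b gives a binary (ASCII-8BIT) copy
puts bytes.bytesize             # => 8 -- each "é" costs two bytes in UTF-8

# ...and on the way back in, reinterpret the bytes as characters.
restored = bytes.force_encoding("UTF-8")
puts restored.length            # => 6 -- back to thinking in characters
```

The point is that the programmer only ever names the encoding at the
I/O boundary; everywhere else, a string is just a sequence of
characters.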

3--I think the best bet for an international Ruby
(which coincidentally would also be a threaded Ruby)
is a .NET version of Ruby (i.e. a Ruby interpreter
running in .NET, not a compiler that compiles Ruby to
.NET code).  Arton's NETRuby would seem to fit the
bill pretty well, but I can't contact the author.  The
project is sparsely documented and definitely
experimental, but the code seems to work fine... has
anyone tried to take it further?

Benjamin Peterson

