At 01:05 08/09/21, Yukihiro Matsumoto wrote:
>Hi,
>
>In message "Re: [ruby-core:18751] Re: Character encodings - a radical  
>suggestion"
>    on Sat, 20 Sep 2008 10:00:24 +0900, "Michael Selig" 
><michael.selig / fs.com.au> writes:
>
>|Perhaps we need to go back to basics with this discussion. As a mere  
>|English speaker, I do not fully understand the issues that are faced by  
>|Japanese and other encodings. What I have gathered from this discussion is  
>|(please tell me if I am wrong):
>|
>|- There are characters that Ruby needs to support which cannot be uniquely  
>|mapped to Unicode
>
>Yes, even though they are minor.
>
>|- In fact there are entire character sets that we want to support in Ruby  
>|that are not supported in Unicode
>
>Yes, I know two of them: Mojikyo, which refuses character
>unification. The character set contains 170,000 characters.

Just for general information, this doesn't specifically refer to
CJK unification (i.e. unification of the same ideograph from
China, Japan, Korea, and so on) but is more about general glyph
(dis)unification. This means that minor differences in how exactly
to write a character are given separate codepoints. This may help
in historical research (some variants were used more by certain writers
or in certain centuries than others, ...), but in general it isn't helpful;
on the contrary, it makes data processing more difficult.

However, even in daily life there is sometimes a need to distinguish
certain (ideographic) glyph variants. For this,
Unicode contains variation selectors (U+FE00-FE0F and U+E0100-E01EF).
These are used after a base character, based on a registration in the
Ideographic Variation Database (http://www.unicode.org/ivd/).
There is currently only the Adobe-Japan1 collection registered, see
http://www.unicode.org/ivd/data/2007-12-14/IVD_Charts.pdf.
For glyph variants, it would be no problem (although quite a bit of work,
of course) for Mojikyo to register them as ideographic variation sequences
in this database. This would make all these variations usable
in Unicode.
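
As a tiny illustration (just a sketch; I'm using <U+8FBB, U+E0100> as an
example and assuming it is among the sequences registered for the
Adobe-Japan1 collection), a variation sequence in Ruby is nothing more
than a base character followed by a selector codepoint:

  # Base ideograph followed by an ideographic variation selector (VS17).
  # The selector only requests a particular glyph; the character identity
  # stays the same, and plain U+8FBB remains a valid fallback.
  tsuji = "\u{8FBB}\u{E0100}"
  tsuji.codepoints.map { |cp| "U+%04X" % cp }   # => ["U+8FBB", "U+E0100"]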

From http://www.mojikyo.com/info/konjaku/index.html, we can also
see the following:

                         Mojikyo        Unicode
Kanji                    150,366        A bit more than double of what
                                        Unicode has. My guess is mostly
                                        glyph variants, but there surely are
                                        a few not-yet-encoded characters, too.

Non-kanji                  2,256        Kana variants could be encoded
                                        with variation selectors.

Bonji                      1,875        Don't know, but because these are
                                        of Indic origin, my guess is that
                                        Unicode would use a different encoding
                                        model with far fewer characters.

Oracle bone characters     3,364        Space tentatively allocated (U+32000-327FF),
(http://www.internationalscientific.org/CharacterASP/why_study.aspx#oracle)
                                        see http://unicode.org/roadmaps/tip/

Tangut                     6,000        Under consideration for encoding.

(other script)               145        Did not find any info, but I'm
                                        quite sure a well-written proposal
                                        would be accepted.

Seal characters           10,969        Very old style, but most of them
(http://www.internationalscientific.org/CharacterASP/why_study.aspx#seal)
                                        have clear equivalents to modern
                                        ideographs. Still used on seals.
                                        To unify or not to unify is the
                                        big question.

It seems that Mojikyo is currently handled from two sides: www.mojikyo.org
for the non-commercial side, and www.mojikyo.com for the commercial side
(with various products published by Kinokuniya, a big Japanese publisher).
That leads to somewhat complicated usage conditions (you can use some
fonts for free for yourself, but have to pay if you use them in a paper
you publish,...), not only for the fonts (would be quite understandable)
but also for some of the data.

>At the
>time I first heard it, that number seemed huge, but Unicode is approaching
>pretty close (it now has more than 100,000 characters).

Conclusion: If the Mojikyo people wanted, they could get most if
not all of their stuff into Unicode in one way or another. But,
as with all other serious character encoding work, it would take
a lot of effort.


>GB18030, defined by the Chinese government.  I don't know the details, but
>I've heard it officially contains Unicode as a subset.  But the encoding
>scheme for GB18030 is up to 4 bytes per codepoint, so I am not sure how
>it can hold 21-bit Unicode codepoints in it.

Four raw bytes would be 32 bits, which is more than enough to hold 21 bits.
Because the byte ranges are restricted (so that one-, two-, and four-byte
sequences can be distinguished), the overall code space is smaller, about
1,600,000 codepoints. This is still larger than Unicode (around 1,100,000
codepoints), but the difference is currently not used at all.
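
A quick back-of-the-envelope check (byte ranges as defined by GB 18030;
the numbers above are rounded):

  # Four-byte GB 18030 sequences: 1st/3rd byte 0x81..0xFE (126 values),
  # 2nd/4th byte 0x30..0x39 (10 values).
  four_byte = 126 * 10 * 126 * 10   # => 1587600
  # Two-byte sequences: lead 0x81..0xFE, trail 0x40..0xFE minus 0x7F.
  two_byte  = 126 * 190             # => 23940
  # Unicode codepoint space: 17 planes of 65,536 codepoints each.
  unicode   = 17 * 65536            # => 1114112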

For more details, please see
http://www.icu-project.org/docs/papers/unicode-gb18030-faq.html
and http://unicode.org/faq/han_cjk.html#23.
(I was under the impression that GB 18030 contains a few characters
similar to the Japanese せ‖ and friends in JIS X 0213, but I can no
longer find any such information, so it may not be true.)

So I don't think there is any real problem for GB 18030 and Unicode.


>|- There are ambiguous characters in some character sets - same code for  
>|different characters
>
>Yes.
>
>|I think it would be a benefit if we all got to understand a bit more:
>|
>|- How the character ambiguity (eg: Yen/ backslash) issue is handled at the  
>|moment - generally, not just with Ruby. ie: how do you know that a printer  
>|or screen is going to show the right character?
>
>Either avoiding conversion (operation based on bytes), or selecting
>proper encoding scheme (out of many very similar encodings, such as
>Shift_JIS, CP932, Windows-31J for example).  Conversion table from
>unicode.org is carefully designed to ensure roundtrip, although that
>is the very reason we have so many similar encoding.  If we can choose
>(or negotiate) to use same conversion table at both ends, it is
>unlikely to have mojibake problems.

Yes, roundtripping is easy if you use the same conversion tables, but
unfortunately the major vendors (Microsoft, Apple, IBM, ...) each messed
things up with their own minor variations (usually just a few codepoints
out of several thousand).
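
Ruby's transcoding tables happen to show a well-known instance of this
(a sketch; the exact behavior depends on the tables shipped with your Ruby):

  # The Shift_JIS byte pair 0x81 0x60 maps to U+301C WAVE DASH, while
  # Windows-31J maps the same bytes to U+FF5E FULLWIDTH TILDE, so each
  # codepoint only roundtrips through "its" encoding.
  "\u{301C}".encode("Shift_JIS")     # works
  "\u{301C}".encode("Windows-31J")   # Encoding::UndefinedConversionError
  "\u{FF5E}".encode("Windows-31J")   # works
  "\u{FF5E}".encode("Shift_JIS")     # Encoding::UndefinedConversionError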

As for how you know that a printer or screen is going to show the
right character: you simply don't, especially on the Web.
0x5C will show as a yen sign on Japanese systems with fonts tweaked
for Japanese, but as a backslash otherwise. Japanese
IT professionals simply have to learn about this.
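
If you want to know which codepoint a particular conversion table assigns
to byte 0x5C (as opposed to how a font will draw it), something like the
following will tell you (a sketch; the answer depends on your Ruby's
transcoding tables, and the on-screen shape still depends on the font):

  # The byte is 0x5C either way; whether it is drawn as a backslash or
  # as a yen sign is decided by the font, not by the encoding label.
  byte = "\x5C".force_encoding("Shift_JIS")
  puts byte.encode("UTF-8").codepoints.map { |cp| "U+%04X" % cp }.join(" ")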


>|- How the various "non-ascii compatible" encodings are used in practice.  
>|eg: it is my understanding that UTF-7 is really only used in email, and  
>|that it would be straightforward to immediately transcode it to/from UTF-8  
>|in an POP/IMAP library, so UTF-7 could be avoided completely as an  
>|"internal" encoding in Ruby. It's as if were were treating UTF-7 like  
>|base64 - just a transformation of a "real" encoding. (In fact UTF-16 & 32  
>|could be considered the same sort of thing, except they may be used more  
>|widely.)
>
>UTF-{16,32}{BE,LE} are non-ascii compatible, but they are safe to
>convert into UTF-8 since their difference only lies in encoding
>scheme.  They represent the same character set anyway.  ISO-2022 is used
>often in mail and on the web. 

That would be ISO-2022-JP. ISO 2022 is a standard that defines a set
of tools for creating encodings, not an encoding in and of itself.
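
In Ruby terms (a sketch of how Ruby 1.9's Encoding API treats these):
ISO-2022-JP and UTF-7 are "dummy" encodings, i.e. strings can be tagged
with them but have to be transcoded before real processing, while
UTF-16LE/BE are fully supported, just not ASCII-compatible:

  Encoding::ISO_2022_JP.dummy?          # => true
  Encoding::UTF_7.dummy?                # => true
  Encoding::UTF_16LE.dummy?             # => false
  Encoding::UTF_16LE.ascii_compatible?  # => false

  # Typical boundary handling: transcode at the edges, process as UTF-8.
  jis = "日本語".encode("ISO-2022-JP")   # "Japanese", all in JIS X 0208
  jis.encode("UTF-8")                   # => "日本語"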

Regards,    Martin.

>The situation is a little bit more complicated,
>but basically it can be converted into Unicode as well (with a slight
>risk of the yen sign problem).  You can ignore UTF-7.
>
>|- How a Japanese programmer would handle the situation of dealing with a  
>|combination of a Japanese non-Unicode compatible character set, and say a  
>|UTF-8 encoding which included non-ascii characters, and non-Japanese ones.  
>|ie: Is there a reasonable alternative to encoding both to Unicode &  
>|somehow dealing with the "difficult characters" as special cases?
>
>Unicode is getting better each day.  So it now covers almost all
>day-to-day problems.  Some cellphone problems are covered by using
>the Private Use Area.
>
>                                                       matz.


#-#-#  Martin J. Du"rst, Assoc. Professor, Aoyama Gakuin University
#-#-#  http://www.sw.it.aoyama.ac.jp       mailto:duerst / it.aoyama.ac.jp