Felipe Contreras wrote:
>
> I didn't find anything new. It's just explaining character sets in a
> rather non-specific way. ASCII uses 8 bits, so it can store 256
> characters, so it can't store all the characters in the world, so
> other character sets are needed (really? I would have never guessed
> that). UTF-16 basically stores characters in 2 bytes (that means more
> characters in the world). UTF-8 also allows more characters, but it
> doesn't necessarily need 2 bytes: it uses 1, and if the character is
> beyond 127 then it will use 2 bytes. This whole thing can be extended
> up to 6 bytes.
>
> So what exactly am I looking for here?

ASCII is a 7-bit encoding with 128 characters in the set.
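
For what it's worth, here's a quick sketch in Python (my own
illustration, not something from the thread) showing how many bytes
UTF-8 actually spends per code point:

    # Print the UTF-8 byte count for a few sample code points.
    for cp in (0x41, 0x7F, 0x80, 0x7FF, 0x800, 0x1F600):
        encoded = chr(cp).encode("utf-8")
        print(f"U+{cp:04X} -> {len(encoded)} byte(s): {encoded.hex(' ')}")

Everything up to U+007F (the ASCII range) fits in one byte, U+0080
through U+07FF takes two, and it grows from there.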

Most PCs these days use an 8-bit byte. I'm no rocket scientist when it comes
to CPU architectures or character encodings, but I would think the machine's
byte or word size would be the most logical choice....

Most of my files are in UTF-8 or ISO 8859-1 (and probably some Windows-1252).
As far as I know, UTF-8 and Latin-1 are compatible in the first 128
characters because both build on ASCII, which is so widespread.
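
A tiny Python check of that compatibility (again, just my own
illustration):

    # Latin-1 and UTF-8 produce identical bytes for code points 0-127...
    assert all(chr(cp).encode("latin-1") == chr(cp).encode("utf-8")
               for cp in range(128))
    # ...but they diverge above that, e.g. U+00E9 (e with acute accent):
    print(chr(0xE9).encode("latin-1").hex())  # e9    (1 byte)
    print(chr(0xE9).encode("utf-8").hex())    # c3a9  (2 bytes)

So a pure-ASCII file is valid UTF-8, Latin-1, and Windows-1252 all at
once; the differences only start at byte value 128.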


Since I may have missed the original message.... What is the problem again?

TerryP.

