On 2012/04/30 1:50, Joshua Ballanco wrote:

> I know it seems like this class is just wrapping String and always defaulting to byte-wise operations, but it's more fundamental than that. Because there is no encoding on the bytes, there will never be an encoding error when working with them. This could be extremely useful for applications that combine bytes from multiple sources (e.g. Socket data + a file on disk + immediate strings in code) that could potentially have different encodings. By operating on bytes, you can defer the encoding checks until later, if at all.

I'm not saying I'm totally against this, but "extremely useful" could 
also mean "too useful". There are clearly cases where one needs to put 
things together at the byte level. But there are also quite a few cases 
that seem to "just work" with byte-wise operations, at least as long as 
nothing but US-ASCII is involved. Things then blow up terribly once 
some other characters get into the mix.
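For example (a minimal illustration of the 1.9 behaviour; the variable 
names are just made up for this message):

    ascii  = "abc"
    binary = "\xFF\x00".force_encoding("ASCII-8BIT")
    utf8   = "caf\u00E9"              # contains a non-ASCII character

    ascii + binary   # works: the left side is pure US-ASCII
    utf8  + binary   # raises Encoding::CompatibilityError, because
                     # both sides contain non-ASCII data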

Actually, the binary/ASCII-8BIT encoding is very close to a Blob. It 
was mostly Akira Tanaka who didn't want to distinguish between "true" 
binary and ASCII-8BIT, because that would have made using regular 
expressions on binary data impossible or convoluted.
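That decision still pays off in practice: an ASCII-only pattern matches 
happily against raw bytes tagged as ASCII-8BIT (again just a small 
made-up illustration):

    data = "\xFFHTTP/1.1 200 OK\r\n\xFE".force_encoding("ASCII-8BIT")

    data =~ /HTTP\/1\.1 (\d{3})/   # => 1
    $1                             # => "200"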

Despite the title of this issue, I didn't see any *bit*wise operations 
(e.g. bitwise and/or/xor/not) proposed. Were you just taking them for 
granted? What about adding these to String, maybe limiting them to 
binary/ASCII-8BIT?
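Just to make the question concrete, I'm thinking of something along 
these lines (a rough sketch only; the method name, and what to do about 
operands of unequal length, would of course be up for discussion):

    class String
      # Byte-wise XOR; both operands are treated as raw bytes.
      # Here the result is simply truncated to the shorter operand.
      def byte_xor(other)
        pairs = unpack("C*").zip(other.unpack("C*"))
        pairs.take_while { |_, y| y }.map { |x, y| x ^ y }.pack("C*")
      end
    end

    key  = "\x0F\x0F\x0F\x0F".force_encoding("ASCII-8BIT")
    data = "\x00\xFF\x55\xAA".force_encoding("ASCII-8BIT")
    data.byte_xor(key)   # => "\x0F\xF0\x5A\xA5", encoding ASCII-8BIT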

Regards,    Martin.