Oleg,

I can't say I completely agree with your point (or understand it), but so be it.
Feel free to ask for clarification.

Basically I was trying (in my wordy way) to say that toUsingCharset seems to do two things:

- Convert the Unicode string to an array of bytes using the converter for "fromCharset".
- Convert the bytes back to Unicode using the converter for "toCharset".

This makes no sense to me. When you're doing character-set-aware programming and have an array of bytes, you always need to keep a (byte[], charset name) pair, so you know what the bytes *mean*. The bytes by themselves are just a bit stream; the character set name tells you how to interpret the bits into "abstract" characters that mean something to a human. toUsingCharset is converting the Unicode string to a bit stream using one mechanism, then converting back to Unicode using another mechanism. I don't know how this could ever do anything useful.
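To make the problem concrete, here is a minimal sketch of the behavior described above (the method name and signature are assumptions for illustration, not the actual library code): encoding with one charset and decoding the same bytes with a different one reinterprets the bit stream and garbles any character the two charsets encode differently.

```java
import java.io.UnsupportedEncodingException;

public class CharsetRoundTrip {

    // Hypothetical re-implementation of what toUsingCharset appears to do:
    // serialize with one converter, deserialize with another.
    static String toUsingCharset(String s, String fromCharset, String toCharset)
            throws UnsupportedEncodingException {
        byte[] raw = s.getBytes(fromCharset);  // bytes now "mean" fromCharset
        return new String(raw, toCharset);     // ...but get reinterpreted as toCharset
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        String original = "café";
        // UTF-8 encodes 'é' as the two bytes 0xC3 0xA9; ISO-8859-1 reads
        // those bytes back as the two characters 'Ã' and '©'.
        String garbled = toUsingCharset(original, "UTF-8", "ISO-8859-1");
        System.out.println(original + " -> " + garbled);
    }
}
```

Whenever fromCharset and toCharset differ on any character in the input, the round trip is lossy or garbled, which is why keeping the (byte[], charset name) pair together is the only safe way to handle encoded text.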

Had Sung-Su not refused to provide a simple unit test case for this method, this discussion would have been put to an end a few months ago. But apparently writing test cases is for losers.

How about if we just deprecate the @#% thing and the two URIUtil methods that call it?

-- Laura
