18.11.2008 19:57, Timo Sirainen wrote:
OK, I see how this makes things problematic. Couldn't one just encode everything to UTF-8 anyway and do the comparison after that (provided there was an option to enable it)?

You can encode everything to UTF-8, but the result will be different
depending on what the source character set is. If by "option" you mean
that you'd have a single setting specifying what the "non-UTF-8
charset" is that (hopefully) all your users are using, then sure, that
would be choice a) from my previous reply.
Yes, you're right. I suppose I didn't think that through.
So basically a password containing any non-7-bit-ASCII characters is only "correct" when provided by a client using the same charset the password is stored in... If the RFC states that the password should be provided as 7-bit ASCII, then I think I'll google for a reason why some clients send the password as something else.
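
As a quick sketch (Python here just for illustration, and the password is a made-up example): the same text encodes to different bytes depending on the source charset, so a byte-for-byte or hashed comparison only succeeds when the client and the stored password agree on the charset.

password = "pässwörd"   # made-up example with non-ASCII letters

as_latin1 = password.encode("iso-8859-1")   # b'p\xe4ssw\xf6rd'
as_utf8   = password.encode("utf-8")        # b'p\xc3\xa4ssw\xc3\xb6rd'

print(as_latin1 == as_utf8)   # False: same text, different bytes

# Converting everything to UTF-8 only helps if the source charset is known:
print(as_latin1.decode("iso-8859-1").encode("utf-8") == as_utf8)   # True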

Most client programmers haven't even thought about the whole issue. The
password is typically 7-bit, so they just send it using whatever charset
the OS happens to use by default.

In your case you're most likely not really seeing the ISO-8859-1 charset,
but rather Windows-1252. Although Windows-1252 is a superset of
ISO-8859-1, things like the euro character are present in 1252 but not in
8859-1 (and the euro is in a different position in 8859-15).
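
A quick Python check of where the euro sign ends up in each of those charsets (just an illustration, nothing Dovecot-specific):

euro = "\u20ac"   # the euro sign

print(euro.encode("windows-1252"))   # b'\x80'
print(euro.encode("iso-8859-15"))    # b'\xa4'  (different position in 8859-15)
print(euro.encode("utf-8"))          # b'\xe2\x82\xac'

try:
    euro.encode("iso-8859-1")        # ISO-8859-1 has no euro at all
except UnicodeEncodeError as err:
    print(err)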
Yes, I see. So in light of this and the conversation on the imap-protocol list

http://mailman2.u.washington.edu/pipermail/imap-protocol/2008-February/000822.html

our current options seem to boil down to storing the passwords ISO-8859-1 encoded (given the demographics of our users). Those using operating systems with native UTF-8 clients will have to use passwords containing only 7-bit characters.
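
That restriction should be safe because pure 7-bit ASCII encodes to identical bytes in every charset involved; a small Python sketch (with a made-up password) to illustrate:

ascii_password = "secret42"   # made-up, 7-bit-only password
charsets = ["us-ascii", "iso-8859-1", "windows-1252", "utf-8"]

byte_forms = {cs: ascii_password.encode(cs) for cs in charsets}
print(byte_forms)
print(len(set(byte_forms.values())) == 1)   # True: all charsets agree on plain ASCII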

I didn't realise the specifications were so flexible on this password issue.

Cheers, Fredrik
