Erik van der Poel wrote:
> Erik van der Poel wrote:
>
>> Besides, in networking, it's better to be conservative. You don't
>> start with a short blacklist and then grow it when you find others.
>> No, you start with a whitelist, and grow that.
>
> One could even make an argument along these lines for nameprep.
> Perhaps nameprep should not have started with the *huge* Unicode
> character set, subsequently making a feeble attempt to reduce that set
> to a safe one.
>
> One could make the argument that the nameprep RFC does not adhere to
> the old rule:
>
> "Be liberal in what you accept, and conservative in what you send"

I like this "maxim" by the late Jon Postel, too.
Nameprep versioning was debated three years ago, but AFAIK it was dismissed. Assume nameprep1 and nameprep2 each have their own supported Unicode character set, with nameprep1's set a subset of nameprep2's, and let IDN2 be a name containing new Unicode characters introduced in nameprep2. Then, between two parties running different nameprep versions:

  Decode&Normalize&Verify2(Normalize&Verify&Encode1(IDN2))

would fail in the inner expression, on the encoding party who uses nameprep1;

  Decode&Normalize&Verify1(Normalize&Verify&Encode2(IDN2))

would fail in the outer expression, on the decoding party who uses nameprep1; while

  Decode&Normalize&Verify2(Normalize&Verify&Encode1(IDN1))

would succeed for both parties.

Nameprep versioning and application-level filtering both fundamentally change how DNS resolution works, by adding new *exception handlings* that did not exist with ASCII-only DNS. That is when these DNS stability issues came out. We would have to produce a perfect nameprep/stringprep at once and use it forever without versioning, but Unicode is not designed or prepared for identifier use. I think that is the source of all these homograph problems.

Please correct me if my memory about this topic is wrong, senior members.

Soobok
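P.S. The version-skew cases above can be sketched in a few lines of Python. This is a toy model, not real nameprep or Punycode: the character sets, the "xn--" stand-in encoding, and the function names are all my own placeholders, chosen only to mirror the Decode&Normalize&Verify composition in the argument.

```python
# Toy model of two stringprep-like profiles, where version 1's
# repertoire is a strict subset of version 2's (as assumed above).
NAMEPREP1_SET = set("abc")     # hypothetical: chars the old profile allows
NAMEPREP2_SET = set("abcxyz")  # hypothetical: superset for the new profile

class VerifyError(Exception):
    """Raised when a label contains characters outside a profile's set."""

def normalize_verify_encode(label, allowed):
    # Sending side: verify against this version's repertoire, then encode.
    if not set(label) <= allowed:
        raise VerifyError("unsupported character in %r" % label)
    return "xn--" + label  # stand-in for the real Punycode step

def decode_normalize_verify(encoded, allowed):
    # Receiving side: decode, then verify against this version's repertoire.
    label = encoded[len("xn--"):]
    if not set(label) <= allowed:
        raise VerifyError("unsupported character in %r" % label)
    return label

if __name__ == "__main__":
    idn1 = "abc"  # representable under nameprep1
    idn2 = "axz"  # needs characters only nameprep2 allows

    # Case 1: encoder runs nameprep1 -> fails in the *inner* expression.
    try:
        normalize_verify_encode(idn2, NAMEPREP1_SET)
    except VerifyError:
        print("IDN2 rejected on the encoding side (v1)")

    # Case 2: encoder runs nameprep2, decoder runs nameprep1 ->
    # the failure moves to the *outer* expression.
    wire = normalize_verify_encode(idn2, NAMEPREP2_SET)
    try:
        decode_normalize_verify(wire, NAMEPREP1_SET)
    except VerifyError:
        print("IDN2 rejected on the decoding side (v1)")

    # Case 3: an IDN1 name round-trips across the version skew.
    assert decode_normalize_verify(
        normalize_verify_encode(idn1, NAMEPREP1_SET), NAMEPREP2_SET) == idn1
```

The point the model makes concrete is that with versioning, *where* a name fails depends on which party upgraded first, which is exactly the kind of exception handling that ASCII-only DNS never had.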
