Me: >>> I've just been looking through fl_utf8.cxx and came across:
>>> ...
>>> int fl_utf_strcasecmp(const char *s1, const char *s2)
matt: >> Um, that is very wrong in many respects:

Phew! That's a relief! I thought I was just too tired to see it.

greg: > At the very least, I'd say add a \todo saying it needs future
> optimization to avoid the strlen's.

I haven't checked, but it could be that this code is a stub and that it's not actually used anywhere yet.

> We should probably have regression tests for this as well, to
> test for common problems and boundary cases.

Markus Kuhn's web page http://www.cl.cam.ac.uk/~mgk25/unicode.html is actually one of the least dry ones about Unicode/UTF-8 that I've come across so far. Apart from a link to the O'ksi'D FLTK, he also has one to a document containing test characters. We could investigate further...

Maybe it's obvious to everyone else, but it strikes me that this UTF-8 capability really boils down to four separate things:

a. being able to handle non-ASCII characters and simply count the correct number of characters in a char* stream of bytes

b. being able to load the relevant fonts and display the glyphs on the screen, in widget labels, etc.

c. being able to collate such strings and therefore provide ascending/descending sort orders within list widgets, etc.

d. being able to handle right-to-left text as well as left-to-right.

I haven't looked at the UTF-8 code, so maybe it already does all of this, but my feeling is that we must ensure we have 100% coverage of the first point before we can move on.

Cheers
D.

_______________________________________________
fltk-dev mailing list
fltk-dev@easysw.com
http://lists.easysw.com/mailman/listinfo/fltk-dev