On Wed, Apr 11, 2007 at 09:00:49PM +0200, Jan Engelhardt wrote:

> >+struct interval {
> >+	int first;
> >+	int last;
> >+};
>
> CodingStyle? uint16_t instead of int?
> >+	{ 0x1D173, 0x1D182 }, { 0x1D185, 0x1D18B }, { 0x1D1AA, 0x1D1AD },
> >+	{ 0xE0001, 0xE0001 }, { 0xE0020, 0xE007F }, { 0xE0100, 0xE01EF }
> >+	};
>
> Since Unicode above 0xFFFF is unsupported, could not these entries
> be killed?

The UTF-8 decoder part already supports full 31-bit Unicode (including
5 and 6 byte long UTF-8 sequences); it is only the font handling part
that doesn't support Unicode beyond the BMP. If an application prints
a non-BMP character that is double-width or zero-width, the expected
behavior is still to move the cursor by two or zero positions, so
width information is needed even beyond the BMP. It is a completely
different story that no real glyph would be displayed, just e.g. a
replacement symbol followed by a space to pretend a real double-width
character was printed.

> 	unsigned int rescan:1;
> 	unsigned int inverse:1;
> 	unsigned int width;
>
> or even uint8_t. I would not mind unsigned.

Okay.

-- 
Egmont
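For context, the width lookup over such an interval table is a binary
search, in the style of Markus Kuhn's public-domain wcwidth()
reference code (whose combining-character table the entries quoted
above appear to come from). A minimal sketch follows; the bisearch()
name and signature here are illustrative, not necessarily the patch's
exact API:

	/*
	 * Binary search in a table of sorted, non-overlapping
	 * intervals.  Entries such as { 0xE0100, 0xE01EF } exceed
	 * 16 bits, which is why "first" and "last" need int (or
	 * uint32_t) rather than uint16_t.
	 */
	struct interval {
		int first;
		int last;
	};

	static int bisearch(int ucs, const struct interval *table,
			    int max)
	{
		int min = 0, mid;

		if (ucs < table[0].first || ucs > table[max].last)
			return 0;
		while (max >= min) {
			mid = (min + max) / 2;
			if (ucs > table[mid].last)
				min = mid + 1;
			else if (ucs < table[mid].first)
				max = mid - 1;
			else
				return 1; /* ucs lies in table[mid] */
		}
		return 0;
	}

A code point for which bisearch() hits the combining table gets width
zero, so the console can still advance the cursor correctly even when
the font has no real glyph for the character.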