On Monday 12 May 2008, Jon Harrop wrote:
> On Monday 12 May 2008 13:54:45 Kuba Ober wrote:
> > > 5. Strings: pushing unicode throughout a general purpose language is a
> > > mistake, IMHO. This is why languages like Java and C# are so slow.
> >
> > Unicode by itself, when wider-than-byte encodings are used, adds "zero"
> > runtime overhead; the only overhead is storage (2 or 4 bytes per
> > character).
>
> You cannot degrade memory consumption without also degrading performance.
> Moreover, there are hidden costs such as the added complexity in a lexer
> which potentially has 256x larger dispatch tables or an extra indirection
> for every byte read.

In a typical programming language that accepts only ASCII characters outside
of string constants, your dispatch table will be short anyway (it covers the
ASCII subset only), and the price is an extra comparison or two, active only
while lexing strings. So no biggie.

> > Given that storage is cheap, I'd much rather have Unicode support than
> > lack of it.
>
> Sure. I don't mind unicode being available. I just don't want to have to
> use it myself because it is of no benefit to me (or many other people) but
> is a significant cost.

Let's look at a relatively widely deployed example: the Qt toolkit.
Qt uses a 16-bit Unicode representation, and I really doubt that there are any
runtime-measurable costs associated with it. By "runtime-measurable" I mean
that, say, application startup would take longer. A typical Qt application
does quite a bit of string manipulation on startup (even file names
are stored in Unicode and converted to/from the OS's code page), yet the Qt
developers slashed startup time in half on "major" applications between Qt 3
and Qt 4 by doing algorithmic optimizations unrelated to strings (reducing
the number of malloc calls, for one). So, unless you can show that one of
your applications actually runs faster with non-Unicode strings than with
well-implemented Unicode ones, I will not really consider Unicode a problem.

I do agree that many tools, like lexer generators, may not be Unicode-aware,
or may have poorly implemented Unicode awareness. But the 256x lexer-table
blowup shouldn't happen even if you were implementing APL with a fully
Unicode-aware lexer. The first-level lexer table should be split into two
pieces (one for the ASCII range, one for the APL range), and everything else
is either an error or goes opaquely into string constants.
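
To illustrate, here is a sketch of that two-piece first-level table in OCaml
(the APL range used here is the U+2300-U+23FF "Miscellaneous Technical" block
that holds the APL symbols; the class names are made up), assuming input
already decoded to Unicode code points:

  type cls = Letter | Digit | Quote | Apl_op | Other | Bad

  (* Piece one: the ASCII range, a plain 128-entry table. *)
  let ascii_table : cls array =
    Array.init 0x80 (fun i ->
      match Char.chr i with
      | 'a' .. 'z' | 'A' .. 'Z' -> Letter
      | '0' .. '9' -> Digit
      | '"' -> Quote
      | _ -> Other)

  (* Piece two: the APL range, 256 entries, all treated as operators. *)
  let apl_table : cls array = Array.make 0x100 Apl_op

  let classify (cp : int) : cls =
    if cp < 0x80 then ascii_table.(cp)
    else if cp >= 0x2300 && cp < 0x2400 then apl_table.(cp - 0x2300)
    else Bad  (* an error, or opaque inside a string constant *)

Two range checks and a few hundred table entries, instead of a table sized
for the whole code space.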

A lexer jump table only makes sense when it actually saves time compared to
a bunch of compare-and-jumps. On modern architectures, some jump lookup tables
may actually be slower than compare-and-jumps, because hardware optimizations
done by the CPU (branch prediction, say) may simply ignore indirect jumps
through lookup tables, or may only handle the table shapes commonly generated
by compilers...
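
For what it's worth, the two shapes being compared look roughly like this in
OCaml (a toy character classifier; both functions are hypothetical). The
table entries are handlers, so the second dispatch really is an indirect
branch of the kind a predictor may handle poorly:

  (* Shape one: a chain of compare-and-jumps. *)
  let with_compares c =
    if c = '(' then 1
    else if c = ')' then 2
    else if c = '+' then 3
    else 0

  (* Shape two: an indirect jump through a 256-entry handler table. *)
  let handlers : (char -> int) array =
    let t = Array.make 256 (fun _ -> 0) in
    t.(Char.code '(') <- (fun _ -> 1);
    t.(Char.code ')') <- (fun _ -> 2);
    t.(Char.code '+') <- (fun _ -> 3);
    t

  let with_table c = handlers.(Char.code c) c

Which one wins is something to measure on the target CPU, not to assume.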

Cheers, Kuba
