Marco van de Voort wrote:
In our previous episode, Hans-Peter Diettrich said:
memory management and the occasional code page conversion (and since this may reduce the number of code page conversions when working with "non-native" strings, it can also be a performance win).
IMO a single encoding, i.e. UTF-8, can cover all cases.

Well, for starters, it doesn't cover the existing Delphi/unicode codebase.

Because it's bound to UTF-16? That's not a problem, because WideString will continue to exist, and the corresponding conversions are still inserted by the compiler.
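A minimal sketch of what is meant, in FPC syntax (TakesWide is a hypothetical routine standing in for any Delphi/Windows-style API that expects UTF-16):

  program ConvDemo;
  {$mode objfpc}{$H+}

  // Hypothetical routine standing in for any API that takes UTF-16 text.
  procedure TakesWide(const S: WideString);
  begin
    WriteLn(Length(S));   // length in UTF-16 code units
  end;

  var
    A: AnsiString;
  begin
    A := 'hello';
    // The compiler silently inserts an Ansi -> UTF-16 conversion call here,
    // interpreting A according to its (system) code page.
    TakesWide(A);
  end.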

While some hard-core Ansi coders may whine about such a convention, the
absence of implicit string conversions (except in external library calls)
will make such applications perform better than mixed-encoding versions.

I don't see why this is the case. An application using the current system
encoding does not do any conversion (except for GUI output, and that can be
considered negligible compared to the actual GUI overhead).

When the system encoding changes with the target platform, indexed access to such strings can yield different results. Unless the compiler can read the coder's mind...
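For illustration, a small FPC program (assuming {$mode objfpc}{$H+}; the bytes are spelled out explicitly so the result does not depend on the source file's code page):

  program IndexDemo;
  {$mode objfpc}{$H+}

  var
    A: AnsiString;
    W: WideString;
  begin
    A := 'caf' + #$C3 + #$A9;   // the UTF-8 bytes of 'cafe' with an accented e
    W := UTF8Decode(A);         // the same text as UTF-16
    WriteLn(Length(A));         // 5 - bytes
    WriteLn(Length(W));         // 4 - UTF-16 code units
    WriteLn(Ord(A[4]));         // 195 - first byte of the two-byte UTF-8 sequence
    WriteLn(Ord(W[4]));         // 233 - the code point U+00E9 itself
  end.

The same index (4) names a different thing depending on the encoding behind the string type.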

Why spend time designing multiple RTL/LCL versions, when a single version will be perfectly sufficient?

Why spend 13 years being compatible when you can throw it away in a second?

It's sufficient to throw away what's no longer needed :-)

DoDi
