So if someone has a cross-platform (Win/Mac) app designed to process text
files of various and unknown formats and provenance, is there some incantation
or process which would ensure it is best prepared for lots of finding and
filtering?
Best wishes
David Glasgow
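[Editorial aside: in LiveCode terms the usual "incantation" is to read each file as binary data and textDecode() it into one internal Unicode representation before doing any finding or filtering. A minimal sketch of that idea in Python (illustrative only; `read_normalised` and its fallback list are assumptions, not a universal recipe):

```python
# Sketch (Python, illustrative): decode every file to one internal Unicode
# representation on read, so later find/filter passes never touch raw bytes.
# The fallback order below is an assumption, not a universal recipe.

def read_normalised(path):
    """Return the file's contents as a Unicode string, guessing the encoding."""
    with open(path, "rb") as f:
        raw = f.read()
    for encoding in ("utf-8", "cp1252"):    # try the strictest first
        try:
            return raw.decode(encoding)
        except UnicodeDecodeError:
            continue
    return raw.decode("latin-1")            # never fails, but may mislabel
```

Once everything passes through a step like this, all downstream searching works on decoded text and behaves the same on Windows and Mac.]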
___
I think that as the code changes since v7 also included some substantial
optimisations, I'm no longer certain that there is *in general* a performance
hit from v7 onwards... except on Windows, where Mark W has hinted he may soon
fix this.
But I'm not absolutely sure. Because the only place
It sure helped me to understand it! Thanks. As I understand the performance
issue, though, between 6.7 and later versions of LC, it revolves around having
to process all the Unicode strings that are native now. Or so the discussion
has gone in the past. If not, then the performance hit since v7 h
On 07/09/2021 17:22, Bob Sneidar via use-livecode wrote:
This makes sense to me (I think) because if I am not mistaken, UTF16 is
Unicode, and UTF8 is simple ASCII. The slowdown from 6.7 to 7.0 was precisely
the support for Unicode text. Someone will correct me if I am wrong about this.
As a hobbyist, I try and stay away from localization issues. But I
I went back and re-did the tests, checking on the results.
The file *is* UTF8, so I need to textDecode() it; if I don't, the results
are simply wrong, and so the times are irrelevant.
1. Once it has been textDecoded(), i.e. is in the internal format, and I run
my algorithm, it gets the correct results
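[Editorial aside: the point about needing textDecode() can be sketched outside LiveCode too: run string operations on decoded text, not on the raw UTF-8 bytes, or the answers are wrong for any non-ASCII input. An illustrative Python analogue of the textDecode() step:

```python
# Python analogue of LiveCode's textDecode(): operate on decoded text,
# not raw bytes, or results for non-ASCII input are simply wrong.

raw = "naïve résumé".encode("utf-8")    # what a binary file read hands back

# Wrong: treating the UTF-8 bytes as if they were characters.
assert len(raw) == 15                   # byte count, inflated by 'ï' and 'é'

# Right: decode first (the textDecode() step), then run the algorithm.
text = raw.decode("utf-8")
assert len(text) == 12                  # true character count
assert text.count("é") == 2
```

The decode has a one-off cost, but every count, offset, and search after it is correct.]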