On Wed, 22 Aug 2012 22:30:52 +0500 Ivanko B <ivankob4m...@gmail.com> wrote:
> Even if you were to implement something like the Unix "find" or "ls"
> programs, they would be more likely to be limited by I/O and all sorts
> of file/directory attribute lookups than by code page conversions of
> file names.
> ============
> 1) I/O is heavily cached on modern a-lot-of-RAM machines &
> 2) conversion eats CPU & L2/L3 caches &
> 3) conversion penalties grow rapidly on servers because of exhausting
> the resources mentioned in 2).

Although you are right that modern OSes have highly optimized file
systems, Jonas is right too. File functions are so slow, even on Linux,
that both the compiler and Lazarus use their own cache for file
functions. These caches require every file name to be normalized and
perform binary searches with case-insensitive comparisons - a lot of
string operations. The speed gain from such a cache is noticeable on
Linux, while on Windows it is dramatic. That's why I doubt that a few
extra string conversions are measurable.

Can we now return to the real problems? Marco gave some examples.

Mattias

_______________________________________________
fpc-devel maillist  -  fpc-devel@lists.freepascal.org
http://lists.freepascal.org/mailman/listinfo/fpc-devel
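[Editor's illustration] The normalize-then-binary-search scheme Mattias describes can be sketched as a toy model. This is a hypothetical illustration in Python, not the actual FPC or Lazarus code: file names are lower-cased once when the cache is built, and lookups then cost O(log n) string comparisons via binary search.

```python
import bisect

class FileNameCache:
    """Toy model of a sorted, case-insensitive file-name cache,
    loosely in the spirit of the caches the compiler and Lazarus keep.
    All names and methods here are illustrative, not FPC APIs."""

    def __init__(self, names):
        # Normalize each name once on insertion (lower-casing stands in
        # for whatever normalization the real cache performs).
        self._entries = sorted((n.lower(), n) for n in names)
        self._keys = [key for key, _ in self._entries]

    def lookup(self, name):
        # Binary search over the normalized keys: O(log n)
        # case-insensitive comparisons instead of a directory scan.
        key = name.lower()
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._entries[i][1]  # original spelling on disk
        return None

cache = FileNameCache(["SysUtils.pas", "Classes.pas", "unix.pp"])
print(cache.lookup("SYSUTILS.PAS"))  # → SysUtils.pas
print(cache.lookup("missing.pas"))   # → None
```

The point of the sketch is that the cache already does heavy per-name string work (normalization plus repeated comparisons), which is why an extra code page conversion on top of it is unlikely to dominate.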