Jonas Maebe wrote:

Apart from specific scenarios, memory mapping can easily be slower than direct reads. The main reason is that you get a round trip to the OS, via a trap into the kernel, every time you trigger a page fault, instead of doing one or a few system calls that are relatively cheap by comparison. The potential saving of a few memory copies, especially for files in the range of 2-500 kB, is very unlikely to compensate for this.

When the file resides in the OS file cache, the page faults can be served from memory, without disk access, unless the file has been evicted from the cache in the meantime. If it does not reside in the cache, every read has to do disk I/O anyway, with the associated process switches etc., so the page-fault overhead is negligible in comparison.
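
For concreteness, a minimal sketch (my own illustration, not code from the compiler) of the memory-mapped variant under discussion, assuming a POSIX target and the BaseUnix/UnixType RTL units. Every first touch of a mapped page traps into the kernel, which is the cost Jonas describes, and is served from the file cache in the situation described above:

program mapsketch;
{$mode objfpc}{$H+}

uses
  BaseUnix, UnixType;   { Fpmmap, FpOpen, stat, PROT_*/MAP_* constants }

var
  fd   : cint;
  info : Stat;
  p    : pointer;
begin
  fd := FpOpen(ParamStr(1), O_RDONLY);
  if fd < 0 then
    Halt(1);
  if FpFStat(fd, info) <> 0 then
    Halt(1);

  { Map the whole file read-only; pages are only brought in when touched,
    and each first touch of a page traps into the kernel. }
  p := Fpmmap(nil, info.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  if p = pointer(-1) then   { MAP_FAILED }
    Halt(1);

  { ... the scanner would walk the buffer at p here ... }

  Fpmunmap(p, info.st_size);
  FpClose(fd);
end.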


I see the biggest benefit in the many possible optimizations in the scanner and parser that can be implemented *only if* the entire file resides in memory.

Then just read it into a buffer in one shot.

That's just what I suggested, for a first test :-)
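
As a first test, something along these lines would do; just a sketch using TFileStream from the Classes unit, not meant as the final implementation (the name ReadWholeFile is only illustrative):

program oneshot;
{$mode objfpc}{$H+}

uses
  Classes, SysUtils;

{ Read the complete file into one string with a single ReadBuffer call,
  so the scanner can work on a contiguous in-memory buffer. }
function ReadWholeFile(const FileName: string): AnsiString;
var
  fs: TFileStream;
begin
  fs := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  try
    SetLength(Result, fs.Size);
    if Length(Result) > 0 then
      fs.ReadBuffer(Result[1], Length(Result));
  finally
    fs.Free;
  end;
end;

var
  src: AnsiString;
begin
  src := ReadWholeFile(ParamStr(1));
  Writeln(Length(src), ' bytes read in one shot');
end.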

If memory management and (string) copies really are as expensive as some people say, then these *additional* optimizations should deliver the speed gain that is actually achievable.

a) The memory management overhead primarily comes from allocating and freeing machine instruction (and, to a lesser extent, node tree) instances.
b) The string copy cost I mentioned primarily comes from getting symbol names for the purpose of generating RTTI and assembler symbol names.

Maybe, we'll see...

DoDi

