On 06 Jan 2010, at 13:04, Florian Klaempfl wrote:

> Jonas Maebe wrote:
>
>> Another reason is probably to speed up compilation:
>> * (re)compiling huge source files can be slow and/or require lots of
>> memory, depending on the compiler used (and on debug information or
>> optimization settings)
>
> For single-class C++ files, imo most of the time is spent reading the
> huge headers, which are often not even needed and are a complete mess,
> because nobody has an overview of which classes are used and which are
> not.

It depends. Since these compilers only see whatever is in the current source file (and its headers), putting more code into the same source file gives interprocedural optimizations more to work on, and as soon as one of those algorithms has quadratic complexity, compilation slows down significantly. Inlining can also greatly increase the complexity of individual routines, which makes passes such as register allocation much slower :)
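
As a minimal sketch of the inlining point (all names invented for illustration, not taken from any real code base): when small helpers are visible in the same translation unit, the compiler is free to inline them at every call site, so the caller's body, and with it the work for later passes like register allocation, grows with each call:

    // Hypothetical example: same-file visibility enables inlining
    // that enlarges the calling routine.
    static inline int weight(int x) { return x * x + 1; }
    static inline int bias(int x)   { return 3 * x - 7; }

    int score(const int *data, int n) {
        int sum = 0;
        for (int i = 0; i < n; ++i) {
            // Both calls can be inlined because the bodies are in this
            // translation unit; the loop body (and the number of live
            // ranges the register allocator must handle) grows accordingly.
            sum += weight(data[i]) + bias(data[i]);
        }
        return sum;
    }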

Compiling Apple's Mac OS X linker, for example, is fairly slow even though it's only about 1 MB of code. The reason is that virtually all of the classes are implemented in the header files, which are then included together in the main .cpp file. See http://www.opensource.apple.com/source/ld64/ld64-95.2.12/src/ld/ . Pretty much the entire linker is implemented in MachOReaderRelocatable.hpp and MachOWriterExecutable.hpp. (Note that I'm not claiming this is how typical C++ programs are structured; it only illustrates that compiling one huge file can be quite slow.)
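
A hedged sketch of that layout (file and class names are made up for illustration, not taken from ld64): the header carries the full method bodies, so any translation unit that includes it re-parses and re-optimizes all of them on every rebuild:

    // Reader.hpp -- hypothetical; the whole implementation lives in the
    // header, declarations and method bodies alike.
    #ifndef READER_HPP
    #define READER_HPP
    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    class Reader {
    public:
        explicit Reader(std::vector<uint8_t> bytes) : fBytes(std::move(bytes)) {}
        // Method bodies are defined right here in the header...
        std::size_t size() const { return fBytes.size(); }
    private:
        std::vector<uint8_t> fBytes;
    };
    #endif

    // main.cpp -- including such headers pulls every method body into
    // this single translation unit, so each rebuild of main.cpp
    // recompiles essentially the whole program.
    #include "Reader.hpp"

    int main() {
        Reader r({0x4d, 0x5a});
        return static_cast<int>(r.size());
    }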


Jonas