Jonas Maebe wrote:

There are more things that prevent that, not least that almost any source code error will result in lots of memory leaks from the compiler.

IMO precisely such problems can be reduced by moving global variables into classes. When e.g. a single Compiler object holds the references to all other heap objects, a complete cleanup can be performed in the destructors.

This is where the Pascal "singleton" hack (units standing in for classes) differs from true OOP, because units cannot be "destroyed". The many constructor- and destructor-like procedures of such compiler singletons (InitCompiler, DoneCompiler...) should be callable (and actually be called) after every compilation.
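To illustrate the idea, here is a minimal sketch (all type and field names are hypothetical, not the real FPC ones) of how a per-compilation Compiler object could take over what unit-level InitCompiler/DoneCompiler pairs do today, so that cleanup happens in one destructor even after an aborted compilation:

```pascal
type
  TCompiler = class
  private
    FSymtable: TObject;   // placeholder for the real symbol table
    FModules: TObject;    // placeholder for the loaded-modules list
  public
    constructor Create;
    destructor Destroy; override;
  end;

constructor TCompiler.Create;
begin
  inherited Create;
  FSymtable := TObject.Create;
  FModules := TObject.Create;
end;

destructor TCompiler.Destroy;
begin
  { Runs even when compilation bailed out with an error,
    so no state leaks into the next compilation. }
  FModules.Free;
  FSymtable.Free;
  inherited Destroy;
end;
```

Each compilation run would then be wrapped in Create/try/finally/Free, instead of relying on unit initialization that can only run once per process.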


For compiler development and debugging purposes it would be very nice if all targets were covered in one compile.

It increases the compiler build time, which is not nice.

Building a compiler for all targets could be done e.g. in "make fullcycle", or in a corresponding .lpi, while the default compiler can still be built for only the specified target. As before, every target would have to be included explicitly, by referencing its "root" unit.


And I personally, as a compiler developer and debugger, think it's very nice that everything is cleanly separated and that it's not possible to have accidental cross-target dependencies, where changes to the constants/types for one target could suddenly impact a completely different target.

The target-specific types etc. have to be encapsulated, so that they cannot be used by accident. This is not hard to manage (still hard to implement ;-), because the conflicting target-specific units and types have to be renamed for "parallel" use. Since this disallows using target-specific types in general compiler code, proper virtualization is enforced by that renaming, so that many of your problems would disappear as a consequence :-)
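A sketch of what that enforced virtualization could look like (unit and class names are purely illustrative): general compiler code sees only an abstract base class, while each target's concrete types stay encapsulated in that target's own unit.

```pascal
type
  { visible to general compiler code }
  TTargetInfo = class
  public
    function PointerSize: Integer; virtual; abstract;
    function TargetName: string; virtual; abstract;
  end;

  { would live in a target unit, e.g. tgt_i386 }
  TTargetI386 = class(TTargetInfo)
  public
    function PointerSize: Integer; override;
    function TargetName: string; override;
  end;

function TTargetI386.PointerSize: Integer;
begin
  Result := 4;
end;

function TTargetI386.TargetName: string;
begin
  Result := 'i386';
end;
```

General code then works with a TTargetInfo reference and can never reach an i386-specific constant or type by accident, even when several targets are linked into the same binary.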


Until FPC is developed by perfect programmers that never make any errors, I think that minimising the way in which things can get entangled is the best way.

A good OOP language and development system should not only *allow* OO programming, it should also *encourage* the use of OOP. When compiler developers state that their OOP compiler is constructed in a non-OO way, for performance or other reasons, this IMO is only an excuse for not understanding the benefits of OOP, or for poor use or implementation of the support for objects and related types (dynamic strings...).

It's possible that the use of OO features, together with spaghetti code, can result in performance degradation. But OTOH performance can improve as well, when *all* (related) parts of a project cooperate by using OO features.

In detail, I still doubt that the use of dynamic strings *everywhere* will really result in a much slower compiler - the opposite may be true as well. It's known that string handling in an OO way can result in poor performance (see .NET strings), due to clumsy overloading of the string operators - but such overhead can be reduced by using more compiler magic instead of general overloading.

Also, arguments like "FillChar takes a long time", used against classes (InitInstance), are IMO only a misinterpretation of the profiling information: the time may well come from the page faults that occur on the first access to newly allocated heap objects. It only becomes *visible* when attributed to a common procedure (FillChar...), whereas otherwise the page faults are charged to whatever arbitrary code happens to access a heap object for the first time.

DoDi

_______________________________________________
fpc-devel maillist  -  fpc-devel@lists.freepascal.org
http://lists.freepascal.org/mailman/listinfo/fpc-devel
