----- Original Message -----
From: "Eugen Leitl" <[EMAIL PROTECTED]>
> Can anyone shed some light on this?

Modern processors are sophisticated enough that there are too many variables
to optimize easily, and doing so exhaustively can be extremely costly. Because
of this, many compilers use semi-random exploration of the optimization space,
which means the compiler will typically compile the same code into a different
executable each time. With a small program it is likely to converge on the
same end-point, simply because there is so little to explore. The larger the
program, the more points for optimization, so for something as large as, say,
PGP you are unlikely to hit the same point twice; however, the performance is
likely to be eerily similar.
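
For what it's worth, the easiest way to see whether your own tool chain
behaves this way is to build the same file twice and compare the outputs.
A rough sketch in Python (the source file name and the gcc flags are just
placeholders; swap in whichever compiler you are curious about):

    # Compile the same source twice and compare hashes of the binaries
    # to see whether the build is bit-for-bit reproducible.
    import hashlib
    import subprocess

    SOURCE = "hello.c"          # placeholder: any C file you have handy
    COMPILER = ["gcc", "-O2"]   # placeholder: the compiler under test

    def build(output):
        subprocess.run(COMPILER + [SOURCE, "-o", output], check=True)
        with open(output, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    print("identical" if build("a1.out") == build("a2.out") else "different")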

There are bound to be exceptions, and sometimes the randomness in the
exploration appears non-existent, but I've been told that some versions of the
DEC GEM compiler used semi-randomness a surprising amount, because it was a
very fast way to narrow in on an approximate best (hence the extremely fast
compilation and execution). It is likely that MS VC uses similar techniques.
Oddly, very high-level languages don't have as many issues: each command spans
so many instructions that a pretuned instruction sequence per command will
often give very close to optimal performance.

I've been told that gcc apparently does not use randomness to any significant
degree, but I admit I have not examined the source code to confirm or deny
this.
                    Joe

