>On Thursday, May 8, 2003, at 07:04 AM, Beman Dawes wrote:
>
>> A 2-3% timing difference probably isn't reliably repeatable in real
>> code.
>>
>> How code and data happen to land in hardware caches can easily swamp
>> out such a small difference. The version-to-version or step-to-step
>> differences in CPUs, memory, compilers, or operating systems can
>> cause that much difference in a given program. Differences need to get
>> up into the 20-30% range before they are likely to be reliably
>> repeatable across different systems.
>>
>> At least that's been my experience.
>
>That has not been my recent experience. While working on my current
>project (the Safari web browser), we have routinely made 1% speedups
>that are measurable and have an effect across multiple machines and
>compilers (same basic CPU type and operating system), and we have also
>detected 1% slowdowns when we inadvertently introduced them.
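Seeing a 1% change at all takes careful measurement, typically many
repeated runs compared by median rather than mean. A minimal sketch of
the kind of timing harness I mean (the workload and run count below are
hypothetical stand-ins, not anything from this thread):

    #include <algorithm>
    #include <ctime>
    #include <iostream>
    #include <vector>

    // Hypothetical workload; stands in for whatever is being timed.
    // The volatile sink keeps the loop from being optimized away.
    volatile long sink;
    void workload()
    {
        long total = 0;
        for (long i = 0; i < 10000000; ++i)
            total += i;
        sink = total;
    }

    int main()
    {
        const int runs = 30;  // enough samples to see past run-to-run noise
        std::vector<double> times;
        for (int i = 0; i < runs; ++i)
        {
            std::clock_t start = std::clock();
            workload();
            times.push_back(double(std::clock() - start) / CLOCKS_PER_SEC);
        }
        // The median is less distorted by cache and scheduler outliers
        // than the mean, which matters when hunting for a 1% change.
        std::sort(times.begin(), times.end());
        std::cout << "median: " << times[runs / 2] << " s\n";
    }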
I notice the examples you give are in JavaScript. Greg's example of a virtual machine is written mostly in C, IIRC. I wonder whether C++ is more sensitive to compiler differences than C is.
For example, some C++ compilers are much more aggressive about inlining than others. For some of the code I've timed, the same change ran slower under a compiler that failed to inline it, but faster under a compiler that inlined aggressively.
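To make that concrete, here is a rough sketch (invented code, not anything from Greg's VM) of the kind of function where the inlining decision can dominate the timing:

    #include <cstddef>

    // A tiny accessor called from a hot loop. Whether get() is inlined
    // decides whether sum() is a straight memory scan or pays a function
    // call per element, which is easily a few percent either way.
    class accumulator {
    public:
        accumulator(const int* data, std::size_t n) : data_(data), n_(n) {}

        int get(std::size_t i) const { return data_[i]; }

        long sum() const
        {
            long total = 0;
            for (std::size_t i = 0; i < n_; ++i)
                total += get(i);  // cost hinges on whether get() inlines
            return total;
        }

    private:
        const int* data_;
        std::size_t n_;
    };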
>I'm not sure, though, if this negates your point, Beman. Something that
>gives a 2-3% speedup for one Boost user might not be worth any level of
>obfuscation unless we can prove it provides a similar speedup for other
>Boost users.
Yes, that's a key point. And of course the 70% gain from a better sort algorithm is the kind of win Boost needs to be alert to.
--Beman