On Sat, 23 Jul 2005, Arjan van de Ven wrote:
>
> For the kernel the runtime measurement is obviously tricky
> (kgprof anyone?)

Ehh, doesn't "opgprof" do that already?

Anyway, judging by real life, people just don't _do_ profile runs. Any
build automation that depends on the user profiling the result is a
total wet dream by compiler developers - it just doesn't happen in real
life.

So:

> but the static analysis method really shouldn't be too hard.

I really think that the static analysis is the only one that actually
would matter in real life.

It's also the only one that is repeatable, which is a big big bonus,
since at least that way different people are running basically the same
code (heisenbugs, anyone?) and benchmarks are actually meaningful,
since they don't depend on whatever load was picked to generate the
layout.

So dynamic analysis with dynamic profile data is just one big
theoretical compiler-writer masturbation session, in my not-so-humble
opinion. I bet static analysis (with possibly some hints from the
programmer, the same way we use likely/unlikely) gets you 90% of the
way, without all the downsides.

		Linus
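[For reference: the likely/unlikely hints mentioned above are thin
wrappers around GCC's __builtin_expect. The sketch below is a
simplified stand-alone version for illustration, not the kernel's
actual <linux/compiler.h> definitions, and parse_value is a made-up
helper used only for the example.]

#include <stdio.h>
#include <stdlib.h>

/* In the kernel these macros live in <linux/compiler.h>; the point is
 * that the programmer states which branch outcome is expected, giving
 * the compiler static layout information without any profile run.
 */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Hypothetical helper: parse a number, with the error path marked as
 * the cold case so the successful path can be laid out as the
 * straight-line, fall-through code.
 */
static int parse_value(const char *s, long *out)
{
	char *end;
	long v = strtol(s, &end, 0);

	if (unlikely(*s == '\0' || *end != '\0'))
		return -1;

	*out = v;
	return 0;
}

int main(int argc, char **argv)
{
	long v;

	if (likely(argc > 1 && parse_value(argv[1], &v) == 0))
		printf("parsed %ld\n", v);
	else
		fprintf(stderr, "usage: %s <number>\n", argv[0]);

	return 0;
}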