On Feb 18, 2011, at 11:15 AM, Bakul Shah <bakul+pl...@bitblocks.com> wrote:

> On Fri, 18 Feb 2011 10:46:51 PST Rob Pike <robp...@gmail.com> wrote:
>> The more you optimize, the better the odds you slow your program down.
>> Optimization adds instructions and often data, in one of the
>> paradoxes of engineering. In time, then, what you gain by
>> "optimizing" increases cache pressure and slows the whole thing down.
>
> You need a feedback loop. Uncontrolled anything is a recipe
> for disaster. Optimizations need to be `judicious', but that
> requires experience, profiling, and understanding -- and the
> trend seems to be away from that...
>
> On a slightly different tangent, 9p is simple but it doesn't
> handle latency very well. To make efficient use of long fat
> pipes you need more complex mechanisms -- there is no getting
> around that fact. rsync & hg, in spite of their complexity,
> beat the pants off replica. Their cache behavior is not very
> relevant here. Similarly, file readahead is usually a win.
>
>> C++ inlines a lot because microbenchmarks improve, but inline every
>> modest function in a big program and you make the binary much bigger
>> and blow the i-cache.
>
> That's a compiler fault. Surely modern compilers need to be
> cache aware? Ideally a smart compiler treats `inline' as a hint
> at most, just like `register'.

Well, how does template expansion affect all of this? I've heard in conversations that C++ is pretty register hungry, which makes me think lots of inlining happens behind the scenes. Then again, that's an implementation detail -- except maybe for templates.