andrewcoppin:
> Don Stewart wrote:
> > I've written an extended post on how to understand and reliably optimise
> > code like this, looking at it all the way down to the assembly.
> >
> > The result is some simple rules to follow for generated code as good
> > as gcc -O2.
> >
> > Enjoy,
> >
> >   http://cgi.cse.unsw.edu.au/~dons/blog/2008/05/16#fast
>
> A well-written piece, as always.
>
> My feelings are ambivalent. On the one hand, it's reassuring that such
> good performance can be obtained without resorting to calling C,
> explicit unboxed types, GHC-specific hacks, strictness annotations,
> manual seq calls, strange case expressions, or really anything remotely
> odd. It's fairly plain Haskell '98 that most beginners would be able to
> read through and eventually understand. And yet it's fast.
>
> On the other hand, this is the antithesis of Haskell. We start with a
> high-level, declarative program, which performs horribly, and end up
> with a manually hand-optimised blob that's much harder to read but goes
> way faster. Obviously most people would prefer to write declarative code
> and feel secure that the compiler is going to produce something efficient.
>
> If the muse takes me, maybe I'll see if I can't find a less ugly way to
> do this...
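[For contrast, the declarative starting point being described is presumably something like the classic two-traversal mean below; the `mean` wrapper and the list bound are a reconstruction for illustration, not necessarily the exact code from the post. It performs poorly because `sum` and `length` each walk the list, so the whole list is retained between the two traversals.]

    -- Idiomatic but slow: two traversals of the same list, so the
    -- entire list stays live between the sum and the length.
    mean :: [Double] -> Double
    mean xs = sum xs / fromIntegral (length xs)

    main :: IO ()
    main = print (mean [1 .. 1e9])
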
I don't understand what's ugly about:

    go s l x
        | x > m     = s / fromIntegral l
        | otherwise = go (s+x) (l+1) (x+1)

And the point is that it is *reliable*. If you make your money day in,
day out writing Haskell, and you don't want to rely on radical
transformations for correctness, this is a sensible idiom to follow.
Nothing beats understanding what you're writing at all levels of
abstraction.

-- Don
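
[For completeness, a minimal self-contained sketch of how a worker loop like the one above is typically driven; the `mean` wrapper, the bounds n and m, and `main` are illustrative assumptions rather than the exact code from Don's post.]

    -- Mean of the doubles from n to m, written as a tail-recursive
    -- worker loop: s is the running sum, l is the running count.
    -- The mean/main wrappers are illustrative; compile with ghc -O2.
    mean :: Double -> Double -> Double
    mean n m = go 0 0 n
      where
        go :: Double -> Int -> Double -> Double
        go s l x
            | x > m     = s / fromIntegral l
            | otherwise = go (s+x) (l+1) (x+1)

    main :: IO ()
    main = print (mean 1 1e9)

[Compiled with ghc -O2, strictness analysis sees that all three loop arguments are demanded, so the accumulators are unboxed and the loop compiles to a tight counting loop, which is what makes the idiom reliably fast.]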