Hi,

I have to dispute Bulat's characterisation here. We can solve lots of
nice problems and get high performance *right now*, particularly
concurrency problems and ones involving streams of bytestrings.
There's no need to leave the safety of GHC either, nor to resort to
low-level evil code.
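
To make that a little more concrete, here is a minimal sketch of the
kind of code I have in mind: a file is read as a lazy stream of
bytestring chunks and traversed in a forked lightweight thread, in
constant space. (The countLines name and the "input.txt" path are just
placeholders for illustration.)

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import qualified Data.ByteString.Lazy.Char8 as L
import Data.Int (Int64)

-- Count the newlines in a file, streaming it lazily in a forked thread.
countLines :: FilePath -> IO Int64
countLines path = do
    done <- newEmptyMVar
    _ <- forkIO $ do
        s <- L.readFile path            -- lazy, chunk-at-a-time I/O
        putMVar done $! L.count '\n' s  -- constant-space traversal
    takeMVar done

main :: IO ()
main = countLines "input.txt" >>= print

Lazy ByteStrings give you near-C throughput on the streaming part, and
forkIO threads are cheap enough that this style scales to many
concurrent streams.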

Let's go further in this long-term discussion. I've read through the
Shootout problems and concluded that there are only two tasks whose
speed depends on the code-generation abilities of the compiler; all the
other tasks depend on the speed of the libraries used. Just as an
example: in one test TCL was the fastest language. Why? Because that
test consisted of almost nothing but 1000 calls to the regex engine
with very large strings, and TCL's regex engine was the fastest.

Maybe it would not be a bad idea to check the number of cache misses, branch mispredictions, etc. per instruction executed for the shootout apps in different languages, and of course in Haskell, on the platforms GHC targets. Do you think such an overview might be interesting to the GHC development community? It could show where the bottlenecks are.
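
For what it's worth, a first cut at those numbers could probably be had
from cachegrind rather than real hardware counters, e.g. something like

    valgrind --tool=cachegrind --branch-sim=yes ./shootout-program

(the program name is a placeholder, and --branch-sim needs a valgrind
recent enough to simulate branch prediction). That reports instruction
counts, cache misses and simulated mispredictions, from which the
per-instruction ratios fall out.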

-- Andy


