Hi Bulat,

> ghc low-level code optimization cannot be compared with best modern C
> compilers that's result of 20 years of optimization. ghc generates
> machine code in rather simple idiomatic way, so it should be compared
> to non-optimizing C compiler
Sure. But I was curious to see whether there was some optimization I had
missed, seeing as other similarly low-level programs, such as the nsieve
benchmark of the language shootout or the word counting program, manage to
run within a few percentage points of C, if not faster. Since this program
doesn't use any features specific to functional programming, such as higher
order functions, and mostly just calls out to imperative primitives of GHC
that are not implemented in Haskell (such as unsafeRead and unsafeWrite), I
would have thought that the gap in runtimes might have been smaller.

Fortunately for me, using the FFI is really quite easy, so for the actual
problem I am trying to solve I just call out to a C implementation of the
Floyd-Warshall algorithm (a rough sketch of the binding is in the P.S.
below).

I forgot to paste the source code for the C program. FWIW, here it is!

#include <stdlib.h>
#include <stdio.h>

#define SIZE 1500
#define MIN(x, y) (x < y ? x : y)

void fw(int size, int *matrix) {
#define M(x,y) matrix[x*size+y]
    for (int k = 0; k < size; k++) {
        for (int i = 0; i < size; i++) {
            for (int j = 0; j < size; j++) {
                M(i, j) = MIN(M(i, j), M(i, k) + M(k, j));
            }
        }
    }
}

static int *matrix;

int main(int argc, char **argv) {
    matrix = malloc(sizeof(int)*SIZE*SIZE);
    if(!matrix) exit(1);
    fw(SIZE, matrix);
    return 0;
}

-- Mathieu
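
P.S. In case the comparison is useful to anyone, here is roughly what the
binding looks like on the Haskell side. This is only a sketch under some
assumptions of mine: it supposes the fw function above lives in a file
called fw.c that gets compiled and linked in alongside the Haskell code
(e.g. ghc Main.hs fw.c), and the zero-filled matrix is purely illustrative.

{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

import Foreign.C.Types (CInt)
import Foreign.Marshal.Array (mallocArray, pokeArray)
import Foreign.Ptr (Ptr)

-- Matches the C prototype: void fw(int size, int *matrix)
foreign import ccall unsafe "fw"
    c_fw :: CInt -> Ptr CInt -> IO ()

size :: Int
size = 1500

main :: IO ()
main = do
    -- Allocate the flattened size x size matrix, mirroring the malloc in
    -- the C program; filling it with zeros is only for illustration.
    matrix <- mallocArray (size * size) :: IO (Ptr CInt)
    pokeArray matrix (replicate (size * size) 0)
    c_fw (fromIntegral size) matrix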