Hi Haskellers,

So... the type says it all:
    p_map_reduce :: ([a] -> b) -> (b -> b -> b) -> [a] -> b

The idea is to write your computation-intensive function as a map/reduce pair. The simplest example is sum, which can be seen as "map id, then reduce with (+)". So instead of calling

    sum xs

you just call

    p_map_reduce sum (+) xs

Behind the scenes, the function splits xs into 16 parts and uses all your cores to process the computation. Unless the mapped computation is too trivial or the list is too small, it should push all your cores to 100% usage and give a roughly linear speedup. You can customize the number of parts by calling the helper function p_map_reduce_to.

I hope this can be useful to someone besides me :)

Here is the code. A warning: I'm using a very annoying coding style, redefining the (.) operator to be reverse application. Please forgive me and mentally reverse the order, or just pretend you are reading Java / Python / Ruby or whatever :)

module Main where

import Data.List
import Control.Parallel
import Prelude hiding ((.))

-- for my poor OO mind: reverse application, so "x.f" means "f x"
(.) :: a -> (a -> b) -> b
a . f = f a
infixl 9 .

-- ordinary function composition, since the real (.) is hidden
(...) :: (b -> c) -> (a -> b) -> a -> c
(...) f g x = f (g x)

-- helpers
reduce = foldl1

-- chop a list into chunks of n elements
in_group_of n [] = []
in_group_of n xs = xs.take(n) : xs.drop(n).in_group_of(n)

-- split a list into roughly n chunks
split_to n xs = xs.in_group_of(size)
  where size = if xs.length < n then n else xs.length `div` n

-- parallel processing: spark the elements so they evaluate in parallel, then reduce
p_eval' xs = xs.pseq(xs.reduce(par))
p_reduce' op xs = xs.p_eval'.reduce(op)

p_map_reduce_to n m r xs = xs.split_to(n).map(m).p_reduce'(r)
p_map_reduce m r xs = p_map_reduce_to 16 m r xs

-- test
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

fibs xs = xs.map(fib).sum

test_list = replicate 50 30

s_fibs = test_list.fibs
p_fibs = test_list.p_map_reduce fibs (+)

main = p_fibs.show.putStrLn
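For anyone who would rather not mentally reverse the order, here is a sketch of the same idea written with ordinary Prelude-style application. It follows the splitting and sparking strategy above; the camelCase names and the example values are mine, not part of the original code.

module ParMapReduce where

import Control.Parallel (par, pseq)

-- chop a list into chunks of n elements
inGroupsOf :: Int -> [a] -> [[a]]
inGroupsOf _ [] = []
inGroupsOf n xs = take n xs : inGroupsOf n (drop n xs)

-- split a list into roughly n chunks
splitTo :: Int -> [a] -> [[a]]
splitTo n xs = inGroupsOf size xs
  where size = if length xs < n then n else length xs `div` n

-- spark the list elements so they can be evaluated in parallel
pEval' :: [a] -> [a]
pEval' xs = pseq (foldl1 par xs) xs

-- spark, then fold the results together
pReduce' :: (a -> a -> a) -> [a] -> a
pReduce' op xs = foldl1 op (pEval' xs)

pMapReduceTo :: Int -> ([a] -> b) -> (b -> b -> b) -> [a] -> b
pMapReduceTo n m r xs = pReduce' r (map m (splitTo n xs))

pMapReduce :: ([a] -> b) -> (b -> b -> b) -> [a] -> b
pMapReduce = pMapReduceTo 16

-- example: a parallel sum, and the same with a custom chunk count
exampleSum :: Int
exampleSum = pMapReduce sum (+) [1 .. 1000000]

exampleSum8 :: Int
exampleSum8 = pMapReduceTo 8 sum (+) [1 .. 1000000]

One practical note: for the sparks to actually run on multiple cores you need GHC's threaded runtime, i.e. compile with -threaded and run with +RTS -N<number of cores>; otherwise everything runs on a single core.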