On 10/23/06, Sebastian Sylvan <[EMAIL PROTECTED]> wrote:
> I'm not so sure that a newArray is faster than N copies of
> newEmptyMVar, at any rate the [MVar] approach has *no* congestion
> points

They are congestion points: each worker thread could conceivably
attempt its putMVar at the same moment the main thread tries to
takeMVar that same MVar.
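
For concreteness, the per-element-MVar version I'm picturing is
roughly this (just a sketch, and parMapMVar is my name for it, not
anything from the original benchmark):

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- One MVar per element: each worker putMVars its result and the main
-- thread takeMVars them in list order. Each MVar is shared by exactly
-- two threads -- one worker and the collector -- and that is where a
-- putMVar and a takeMVar can collide.
parMapMVar :: (a -> b) -> [a] -> IO [b]
parMapMVar f xs = do
  vars <- mapM (\x -> do v <- newEmptyMVar
                         _ <- forkIO (putMVar v $! f x)
                         return v)
               xs
  mapM takeMVar vars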

You could try an array of MVars to get rid of the extra space that
the list takes, and use unsafeWrite etc. to get faster writes back
into the result array. Then do takeMVar on each MVar in the array to
get "block for N resources" behaviour without having a single
resource semaphore that could (profile!) suffer from congestion.
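
A rough, untested sketch of what I mean (the names and the
monomorphic Int types are mine, chosen just to keep the example
short):

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM)
import Data.Array (elems, listArray, (!))
import Data.Array.Base (unsafeWrite)
import Data.Array.IO (IOUArray, getElems, newArray_)

-- Each worker writes its result straight into a shared unboxed array
-- with unsafeWrite (no bounds check, no list to rebuild) and then
-- signals its own per-slot MVar; the main thread takes all N MVars,
-- i.e. "block for N resources" without a single shared semaphore.
parMapArray :: (Int -> Int) -> [Int] -> IO [Int]
parMapArray f xs = do
  let n = length xs
  results <- newArray_ (0, n - 1) :: IO (IOUArray Int Int)
  dones   <- fmap (listArray (0, n - 1)) (replicateM n newEmptyMVar)
  forM_ (zip [0 ..] xs) $ \(i, x) ->
    forkIO $ do
      unsafeWrite results i $! f x   -- unchecked write into slot i
      putMVar (dones ! i) ()         -- signal that slot i is done
  mapM_ takeMVar (elems dones)       -- wait for all N workers
  getElems results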

Meh, it seems like a lot of work for little gain.

How many elements were in the benchmark list? I notice that the
original benchmark ran for 5 seconds or so; what happens if the work
per element is increased? If the list length is increased? It's
possible that the array approach wins when you try to process 100,000 elements.

--
Taral <[EMAIL PROTECTED]>
"You can't prove anything."
   -- Gödel's Incompetence Theorem