Oh, sorry to email twice so soon, but I have an idea for making par-map
usable in more cases: add a keyword argument called "block-size". Its value
should be a positive integer, and the meaning is to have each thread do
block-size iterations. That should make it easier to use par-map for cases
like this, where the cost of each function call is very small.
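
In case it's helpful, here's a rough sketch of what I have in mind. The
name "par-map/blocked" and the futures-based chunking are just mine for
illustration; the real implementation would presumably live alongside
par-map in (ice-9 threads):

```scheme
(use-modules (ice-9 futures)
             (ice-9 receive)
             (srfi srfi-1))

;; Hypothetical sketch: like par-map, but each future handles
;; BLOCK-SIZE consecutive elements instead of just one, so the
;; per-thread overhead is amortized over BLOCK-SIZE calls to F.
(define* (par-map/blocked f lst #:key (block-size 1))
  ;; Split LST into successive chunks of at most BLOCK-SIZE elements.
  (define (chunks lst)
    (if (null? lst)
        '()
        (receive (head tail)
            (split-at lst (min block-size (length lst)))
          (cons head (chunks tail)))))
  ;; Map F over each chunk in its own future, then reassemble the
  ;; partial results in the original order.
  (let ((fs (map (lambda (c) (future (map f c)))
                 (chunks lst))))
    (concatenate (map touch fs))))
```

With block-size 100, the (expt x 5) example from the earlier message
would create 100 futures instead of 10000, which should cut the
per-element overhead by roughly that factor. (The chunking above calls
length on every step, so it's quadratic in the list length; a real
version would chunk in one pass.)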

I realize it's not a great solution because you still have to iterate
through the list to get to the later elements. A hypothetical
"vector-par-map" would solve that.

Noah


On Fri, Mar 29, 2013 at 4:24 PM, Noah Lavine <noah.b.lav...@gmail.com> wrote:

> I agree. Do you have any idea what's causing the overhead?
>
> I tried to benchmark it, but got a segmentation fault. I think we have
> plenty of work to do here. :-)
>
> Noah
>
>
> On Fri, Mar 29, 2013 at 2:00 AM, Mark H Weaver <m...@netris.org> wrote:
>
>> I wrote:
>>
>> > Nala Ginrut <nalagin...@gmail.com> writes:
>> >> --------------------cut-------------------
>> >> scheme@(guile-user)> ,time (define a (map (lambda (x) (expt x 5))
>> (iota
>> >> 10000)))
>> >> ;; 0.008019s real time, 0.007979s run time.  0.000000s spent in GC.
>> >> scheme@(guile-user)> ,time (define a (par-map (lambda (x) (expt x 5))
>> >> (iota 10000)))
>> >> ;; 6.596471s real time, 6.579375s run time.  1.513880s spent in GC.
>> >> --------------------end-------------------
>> > [...]
>> >> Well, is there any example?
>> >
>> > The timings above suggest that, on your machine, the overhead of
>> > 'par-map' is in the neighborhood of 660 microseconds per thread (that's
>> > the total run time divided by 10000 iterations).
>>
>> I must say that 'par-map' has shockingly poor performance.
>> We really ought to try to improve this.
>>
>>      Mark
>>
>>
>