On Mon, Mar 23, 2009 at 12:34 PM, Igor Stasenko <siguc...@gmail.com> wrote:

>
> Now consider the overhead of creating a fork vs. the actual useful code
> which runs within the block.
> I presume this code will run 10x slower on a single-core processor
> compared to one without forks.
> So you would need 10 cores to match the computation time of a
> single-core processor.
>
> I think it's not wise to introduce parallelism at such low levels (in
> Concurrent.Collections.Array>>withIndexDo:). It's like hammering nails
> with a microscope :)
>
> That's why I'm saying it's too good to be true.
> Introducing parallelism at such low levels would be a waste. I'm leaning
> toward the island model. It's a middle point between sharing nothing,
> like in Hydra, and sharing everything, like what you're proposing.
>
>

10 times slower? Sounds like a made-up number to me...

" Using 101 threads: "
c := ConcurrentArray new: 1000001.
Time millisecondsToRun: [c withIndexDo: [ :each :i | c at: i put: i
asString. ]].
5711
5626
6074

" Using 11 threads: "
c := ConcurrentArray new: 1000001.
Time millisecondsToRun: [c withIndexDo: [ :each :i | c at: i put: i
asString. ]].
3086
3406
3256

" Using 1 thread: "
d := Array new: 1000001.
Time millisecondsToRun: [d withIndexDo: [ :each :i | d at: i put: i
asString]].
2426
2610
2599

Taking the means, the 11-thread version takes about 1.3x as long as the
single-threaded Array (~3250 ms vs. ~2550 ms), and the 101-thread version
about 2.3x (~5800 ms). That's nowhere near 10x. And this block does almost no
work per element; if the blocks did more work, the fixed fork and scheduling
overhead would be amortised and the extra cores could show a real benefit.
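
For example, something like this hypothetical (and untimed) variant, where
isPrime stands in for heavier per-element work, should spend most of its time
inside the block rather than in the forking machinery:

" Hypothetical heavier workload (untimed); isPrime is just a stand-in for real work: "
c := ConcurrentArray new: 1000001.
Time millisecondsToRun: [c withIndexDo: [ :each :i | c at: i put: i isPrime ]].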

I don't have a good idea of where the overhead is going. Maybe it's lost in
the block copying that's needed to work around Squeak's closure deficiencies,
or maybe the forked processes spend their time waiting on the scheduler.
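
For those reading the archive without the attachment: the general shape is
fork-and-join, i.e. chop the index range into chunks, fork a Process per
chunk, and join on a Semaphore. The sketch below shows only that shape, not
the attached code; the chunk count and the chunking arithmetic are
illustrative, and it assumes working closures (or the block-copying
workaround mentioned above):

ConcurrentArray >> withIndexDo: aBlock
	"Sketch only; see the attachment for the real implementation."
	| nChunks chunkSize done |
	nChunks := 10.  "illustrative chunk count"
	chunkSize := self size + nChunks - 1 // nChunks.  "left-to-right binary ops: ceiling division"
	done := Semaphore new.
	1 to: nChunks do: [ :chunk |
		| start stop |
		start := ((chunk - 1) * chunkSize) + 1.
		stop := (chunk * chunkSize) min: self size.
		[ start to: stop do: [ :i | aBlock value: (self at: i) value: i ].
		  done signal ] fork ].
	nChunks timesRepeat: [ done wait ]

Each forked block captures its own start/stop range, and the sender blocks on
the semaphore until every chunk has signalled.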

Implementation attached; MIT license, if you're picky.

Gulik.

-- 
http://gulik.pbwiki.com/

Attachment: ConcurrentArray.st
