On Sat, Jan 22, 2011 at 8:52 AM, Ernest N. Prabhakar, Ph.D.
<[email protected]> wrote:
> My assumption was that doing a parallel assign:
>
> result[i] = i*i
>
> would be safe, since it always accessed a unique portion of memory, but doing 
> a serial append:
>
> result << i*i
>
> would not.  But that may have been a mistake on my part, since the size (at 
> least) needs to be updated. Anyone know better?

I believe you're correct that the size update is the problem. Unless
the result array has all the space it needs up front, the appends can
easily force it to resize, and with concurrent threads all writing at
the same time it's very likely some results will get lost.
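
To make the race concrete, here's a minimal, untested sketch of the
append version with plain Ruby threads (MRI's GIL tends to hide this,
but MacRuby threads run truly concurrently):

result = []
threads = (0...5).map do |i|
  Thread.new { result << i * i }  # racy: << also has to bump the size
end
threads.each(&:join)
# result.size can come up short when the size updates interleave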

It could be improved slightly if you initialize @result with a
guaranteed-sized backing store, like this:

@result = Array.new(5)

This ensures the array already has a 5-element backing store prepared,
so the concurrent writes would each hit a different memory location.
But it still seems like a bad precedent, since the implementation
details of concurrent writes to Array are hidden from you. The
implementation could just as well maintain an immutable linked-list
structure instead of a flat array, in which case every random-access
update would need to modify all the nodes on one side of it, and
you're back to concurrent mutation of shared state.
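
If you do need appends, the safe fallback is to make the mutation
explicit yourself, for example with a Mutex. This is just an untested
sketch of that shape, not a claim about what MacRuby's Array does
internally:

require 'thread'

result = []
lock = Mutex.new
threads = (0...5).map do |i|
  Thread.new { lock.synchronize { result << i * i } }  # one writer at a time
end
threads.each(&:join)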

- Charlie