Yes, but only one thread at each client is allowed to use MPI. There is also
a semaphore guarding all MPI usage.
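
To make the setup concrete, here is a minimal sketch of what I mean (the
semaphore and wrapper names are illustrative, not the actual code): MPI is
initialized with MPI_THREAD_SERIALIZED, and every MPI call goes through one
binary semaphore so only one thread touches MPI at a time.

    /* Illustrative sketch only -- names are not from the real application. */
    #include <mpi.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mpi_sem;   /* binary semaphore guarding every MPI call */

    void guarded_send(void *buf, int count, MPI_Datatype type,
                      int dest, int tag)
    {
        sem_wait(&mpi_sem);   /* enter the MPI critical section */
        MPI_Send(buf, count, type, dest, tag, MPI_COMM_WORLD);
        sem_post(&mpi_sem);   /* leave it */
    }

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
        if (provided < MPI_THREAD_SERIALIZED)
            fprintf(stderr, "warning: requested threading level not provided\n");

        sem_init(&mpi_sem, 0, 1);   /* initial value 1 -> mutual exclusion */

        /* ... worker threads funnel all MPI traffic through guarded_send()
           and a similarly guarded receive wrapper ... */

        sem_destroy(&mpi_sem);
        MPI_Finalize();
        return 0;
    }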



On Fri, Feb 26, 2010 at 1:09 AM, Brian Budge <brian.bu...@gmail.com> wrote:

> Is your code multithreaded?
>
> On Feb 25, 2010 12:56 AM, "Amr Hassan" <amr.abdela...@gmail.com> wrote:
>
> Thanks a lot for your reply,
>
> I'm using blocking Send and Receive. All the clients send data, and the
> server receives the messages from the clients with MPI_ANY_SOURCE as the
> sender. Do you think there could be a race condition in this pattern?
>
> I searched a lot and used TotalView, but I couldn't detect such a case. I
> would really appreciate it if you could send me a link or give an example
> of a possible race condition in that scenario.
>
> Also, when I partition the message into smaller parts (sent in sequence -
> all the other clients wait until the send finishes), it works fine. Does
> that rule out a race condition?
>
>
> Regards,
> Amr
>
>
>
>
> >>We've seen similar things in our code. In our case it is probably due to
> >>a race condition....
>
>
> >>On Feb 24, 2010 9:36 PM, "Amr Hassan" <amr.abdelaziz_at_[hidden]> wrote:
>
> >>Hi All,
>
> >>I'm ...
>
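
P.S. In case it helps, here is a stripped-down version of the pattern
described in the quoted message: each client does a blocking MPI_Send and
the server loops on a blocking MPI_Recv with MPI_ANY_SOURCE. The tag,
message size, and rank layout are made up for illustration; they are not
taken from the actual application.

    /* Illustrative sketch only -- rank 0 is the server, all other ranks
       are clients.  Tag and buffer size are arbitrary. */
    #include <mpi.h>
    #include <stdio.h>

    #define DATA_TAG 1
    #define BUF_SIZE 1024

    int main(int argc, char **argv)
    {
        int rank, size;
        double buf[BUF_SIZE] = {0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* server: blocking receive from any client, in any order */
            MPI_Status status;
            int i;
            for (i = 1; i < size; i++) {
                MPI_Recv(buf, BUF_SIZE, MPI_DOUBLE, MPI_ANY_SOURCE,
                         DATA_TAG, MPI_COMM_WORLD, &status);
                printf("server got a message from rank %d\n",
                       status.MPI_SOURCE);
            }
        } else {
            /* client: one blocking send to the server */
            MPI_Send(buf, BUF_SIZE, MPI_DOUBLE, 0, DATA_TAG, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }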
