Simon, it is a lot more difficult than it appears. You're right,
select/poll can do it for any file descriptor, and shared mutexes/
conditions (despite the performance impact) can do it for shared
memory. However, in the case where you have to support both
simultaneously, what is the right a
tsi...@coas.oregonstate.edu wrote:
Thanks for the explanation. I am using GigEth + Open MPI and the
buffered MPI_Bsend. I had already noticed that top behaved differently
on another cluster with InfiniBand + MPICH.
So the only option to find out how much time each process is waiting
around seems to be to profile the code.
Jeff Squyres wrote:
We get this question so much that I really need to add it to the
FAQ. :-\
Open MPI currently always spins for completion for exactly the reason
that Scott cites: lower latency.
Arguably, when using TCP, we could probably get a bit better
performance by blocking and allowing the kernel to
On Jun 3, 2009, at 6:05 AM, tsi...@coas.oregonstate.edu wrote:
Top always shows all the parallel processes at 100% in the %CPU field,
although some of the time these must be waiting for a communication to
complete. How can I see actual processing as opposed to waiting at a
barrier?
Thanks,
Tiago