I have a code that seems to run about 40% faster when I bond two
1 Gig eth interfaces together. The question, of course, is: is it really
producing enough traffic to keep both 1 Gig eth interfaces busy? I
don't really believe that, but I need a way to check.

What are good tools to monitor the MPI performance of a running job,
i.e. what throughput load it is imposing on the eth interfaces?
Any suggestions?
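
As a quick sanity check, independent of any MPI-aware tool, something like
the sketch below could be run on a compute node during the job. It samples
the RX/TX byte counters in /proc/net/dev and prints per-interface throughput;
the 5-second interval is arbitrary, and the interface names it reports are
whatever the node exposes (eth0, eth1, bond0, ...).

#!/usr/bin/env python3
"""Rough per-interface throughput sampler based on /proc/net/dev (Linux)."""
import time

def read_counters():
    """Return {iface: (rx_bytes, tx_bytes)} parsed from /proc/net/dev."""
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:        # skip the two header lines
            iface, data = line.split(":", 1)
            fields = data.split()
            counters[iface.strip()] = (int(fields[0]), int(fields[8]))
    return counters

INTERVAL = 5.0                                # seconds between samples

if __name__ == "__main__":
    before = read_counters()
    while True:
        time.sleep(INTERVAL)
        after = read_counters()
        for iface, (rx1, tx1) in sorted(after.items()):
            rx0, tx0 = before.get(iface, (rx1, tx1))
            rx_mbps = (rx1 - rx0) * 8 / INTERVAL / 1e6
            tx_mbps = (tx1 - tx0) * 8 / INTERVAL / 1e6
            if rx_mbps > 0.01 or tx_mbps > 0.01:
                print(f"{iface:>8}: rx {rx_mbps:8.1f} Mbit/s  "
                      f"tx {tx_mbps:8.1f} Mbit/s")
        print("-" * 48)
        before = after

If the combined rx+tx rate on the bonded interfaces stays well below
~1 Gbit/s, the speedup probably isn't coming from raw bandwidth.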

The code does not seem to produce much disk I/O, as profiled via
strace (so NFS I/O is unlikely to be the bottleneck).

-- 
Rahul
