Ashley Pittman wrote:
On Sat, 2008-08-16 at 08:03 -0400, Jeff Squyres wrote:
> [snip]
> - per the "sm" thread, you might want to try with just IB (and not
> shared memory), just to see if that helps (I don't expect that it
> will, but every situation is different). Try running "mpirun --mca
> btl openib ..." (vs. "--mca btl ^tcp").

Unfortunately you were right - it did not help. Sm[...]
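(For anyone reproducing the comparison above: the two BTL selections
would be invoked roughly as follows. This is only a sketch - the
process count and the IMB binary path are placeholders, and note that
when BTLs are listed explicitly, Open MPI also needs the "self"
component so a process can send to itself.)

  # IB verbs only, shared memory disabled ("self" is needed for loopback)
  mpirun --mca btl openib,self -np 16 ./IMB-MPI1 Alltoall

  # everything except TCP, i.e. IB and shared memory both stay enabled
  mpirun --mca btl ^tcp -np 16 ./IMB-MPI1 Alltoall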
There are likely many issues going on here:

- large all to all operations are very stressful on the network, even
if you have very low latency / high bandwidth networking such as DDR IB

- if you only have 1 IB HCA in a machine with 8 cores, the problem
becomes even more difficult because all [...]

- per the "sm" thread, you might want to try with just IB (and not
shared memory), just to see if that helps (I don't expect that it
will, but every situation is different). Try running "mpirun --mca
btl openib ..." (vs. "--mca btl ^tcp").

On Fri, 15 Aug 2008, Kozin, I (Igor) wrote:
> [snip - quoted in full below]
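(Since the painful case here is the large all to all, it can save time
to run just that benchmark instead of the whole IMB suite - a sketch,
assuming a 16-process job and the standard IMB-MPI1 binary:)

  # run only Alltoall; -npmin 16 skips IMB's default habit of also
  # timing the smaller 2-, 4-, and 8-process subsets first
  mpirun -np 16 ./IMB-MPI1 -npmin 16 Alltoall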
Hello,

I would really appreciate any advice on troubleshooting/tuning Open MPI over
ConnectX. More details about our setup can be found here:
http://www.cse.scitech.ac.uk/disco/database/search-machine.php?MID=52
Single process per node (ppn=1) seems to be fine (the results for IMB can be
found here [...])
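(For reference, the ppn=1 versus ppn=8 cases can be reproduced with
Open MPI by giving each node the matching slot count in the hostfile -
a sketch with made-up node names on a two-node, 8-core-per-node setup:)

  # one process per node (ppn=1): each node gets a single slot
  printf 'node01 slots=1\nnode02 slots=1\n' > hosts_ppn1
  mpirun --hostfile hosts_ppn1 -np 2 ./IMB-MPI1 Alltoall

  # all eight cores per node (ppn=8) on the same two nodes
  printf 'node01 slots=8\nnode02 slots=8\n' > hosts_ppn8
  mpirun --hostfile hosts_ppn8 -np 16 ./IMB-MPI1 Alltoall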