On 2010-12-03, at 8:46AM, Jeff Squyres (jsquyres) wrote:

> Another option to try is to install the Open-MX drivers on your system and
> run Open MPI with MX support. This should be much better perf than TCP.
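(For reference, using MX with Open MPI normally just means selecting the MX component at run time. A minimal sketch, assuming Open-MX is installed and Open MPI was built with MX support; the hostfile and benchmark names below are placeholders, not anything from this thread:)

    # Select the MX BTL, plus shared memory and self for on-node ranks
    mpirun --mca btl mx,sm,self --hostfile hosts -np 64 ./pingpong_test

    # Or use the MX MTL via the "cm" PML instead of the BTL path
    mpirun --mca pml cm --mca mtl mx --hostfile hosts -np 64 ./pingpong_test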
We've tried this on a big GigE cluster (in fact, Brice Goglin was experimenting with it on our system), and it's not really an answer: it didn't work past a small number of nodes, and the performance gains were fairly small. Intel MPI's Direct Ethernet Transport did work at larger node counts, but again the effect was pretty modest (a few percent decrease in ping-pong latencies, no discernible bandwidth improvement).

   - Jonathan

--
Jonathan Dursi <ljdu...@scinet.utoronto.ca>
SciNet, Compute/Calcul Canada