Allan --
We have been unable to reproduce this bad TCP performance behavior.
Indeed, in our runs, TEG TCP is performing slower than OB1 TCP.
Sidenote: is there any reason you're supplying the pls_rsh_orted MCA
parameter on the command line? It shouldn't really be necessary if
OMPI is in [...]
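For reference, that parameter can also be set persistently instead of
on each command line; a minimal sketch, assuming a 1.0-era install
(the orted path below is illustrative, not from this thread):

    # On the mpirun command line (path is illustrative):
    mpirun -mca pls_rsh_orted /opt/openmpi/bin/orted -np 2 ./a.out

    # Or once, in $HOME/.openmpi/mca-params.conf:
    pls_rsh_orted = /opt/openmpi/bin/orted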
On Oct 19, 2005, at 12:04 AM, Allan Menezes wrote:
We've done linpack runs recently w/ Infiniband, which result in
performance comparable to mvapich, but not w/ the tcp port. Can you
try running w/ an earlier version, specifying on the command line:

-mca pml teg
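Spelled out as a full command line, as a sketch only (the process
count, host file, and HPL binary name below are assumptions, not
given in the thread; only the -mca pml teg part is):

    # Run HPL over TCP with the older TEG pml selected instead of OB1
    # (-np 8, myhosts, and ./xhpl are illustrative values)
    mpirun -np 8 -hostfile myhosts -mca pml teg ./xhpl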
Hi Tim,
I tried the same cluster [...]
On Tue, 18 Oct 2005 08:48:45 -0600, "Tim S. Woodall" wrote in
"Re: [O-MPI users] Hpl Bench mark and Openmpi rc3":
Hi Jeff,
I installed t [...]