Hi Jeff,

Given that 16 of the nodes have eth0, eth1 and eth2 interfaces, I tried
running the data transfer among them using mpirun, but without
specifying "btl_tcp_if_include". Using all three links I got only a 15%
increase in the uni-directional transfer rate. But if I run two such
jobs in parallel, one using eth0+eth1 and the other using eth2 only, I
get the expected 50% increase in transfer rate. Any clue?
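
For reference, the two concurrent runs look roughly like this (the
hostfile name, process count and program name are placeholders for my
actual setup):

  # Job 1: TCP BTL restricted to eth0 and eth1
  mpirun --mca btl_tcp_if_include eth0,eth1 \
      -hostfile hosts16 -np 16 ./transfer_test

  # Job 2: started at the same time, restricted to eth2 only
  mpirun --mca btl_tcp_if_include eth2 \
      -hostfile hosts16 -np 16 ./transfer_test

In the first test I simply dropped the --mca btl_tcp_if_include
argument and let Open MPI pick up all three interfaces by itself.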

Regards,
Jayanta

On Wed, Sep 2, 2009 at 1:41 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> If you don't use btl_tcp_if_include, Open MPI should use all available
> ethernet devices, and *should* (although I haven't tested this recently)
> only use devices that are routable to specific peers.  Specifically, if
> you're on a node with eth0-3, it should use all of them to connect to
> another peer that has eth0-3, but only use eth0-1 to connect to a peer that
> only has those 2 devices.  (all of the above assume that all your eth0's are
> on one subnet, all your eth1's are on another subnet, ...etc.)
>
> Does that work for you?
>
>
>
> On Aug 25, 2009, at 7:14 PM, Jayanta Roy wrote:
>
>> Hi,
>>
>> I am using Open MPI (version 1.2.2) for data transfer with
>> non-blocking MPI calls such as MPI_Isend and MPI_Irecv. I use "--mca
>> btl_tcp_if_include eth0,eth1" so that both eth links are used for the
>> data transfer among the 48 nodes. I have now added eth2 and eth3 links
>> on the 32 compute nodes. My aim is to move the high-speed data among
>> the 32 compute nodes through eth2 and eth3, but I cannot add these
>> interfaces to the "mca" option because the remaining 16 nodes do not
>> have them. In MPI/Open MPI, can one specify an explicit routing table
>> within a set of nodes, such that I could edit /etc/hosts to add
>> hostnames for these new interfaces and list those hosts in the MPI
>> hostfile?
>>
>> Regards,
>> Jayanta
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
