The SGI Altix ICE cluster system supports two InfiniBand fabrics.
http://www.sgi.com/products/servers/altix/ice/
Each compute node has two HCAs, each connected to a separate fabric. We recommend using one fabric for storage traffic and the other for MPI, but there is no reason both fabrics could not be used for MPI. To use both fabrics for MPI, Open MPI requires a separate subnet prefix on each fabric, and OpenSM supports setting the subnet prefix. Other MPIs do not require this.
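As a sketch, running one OpenSM instance per fabric with a distinct subnet prefix might look like the following. The GUIDs, prefix values, and file paths are placeholders, and the option names should be checked against your OpenSM version:

```shell
# Hypothetical sketch: one opensm.conf per fabric, each bound to the
# local HCA port on that fabric and advertising a distinct subnet prefix.
# All GUIDs, prefixes, and paths below are illustrative placeholders.

# /etc/opensm/opensm-fabric1.conf would contain, e.g.:
#   guid 0x0002c90300000001
#   subnet_prefix 0xfe80000000000001

# /etc/opensm/opensm-fabric2.conf would contain, e.g.:
#   guid 0x0002c90300000002
#   subnet_prefix 0xfe80000000000002

# Start one subnet manager per fabric, each with its own config file,
# in daemon mode:
opensm -F /etc/opensm/opensm-fabric1.conf -B
opensm -F /etc/opensm/opensm-fabric2.conf -B
```

With distinct prefixes in place, an MPI that matches ports by subnet prefix can tell the two fabrics apart without administrator hints on each node.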

Edward


on 04/04/2008 08:08 AM Tang, Changqing said the following:
By "claim to support" I mean having more people test with this configuration.

--CQ

-----Original Message-----
From: Or Gerlitz [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 03, 2008 11:18 PM
To: Tang, Changqing
Cc: [EMAIL PROTECTED]; ewg@lists.openfabrics.org
Subject: Re: [ofa-general] Re: [ewg] OFED March 24 meeting
summary on OFED 1.4 plans

On Thu, Apr 3, 2008 at 5:40 PM, Tang, Changqing
<[EMAIL PROTECTED]> wrote:

 The problem is that, from the MPI side (and by default), we don't
know which port is on which fabric, since the subnet prefix is the
same. We rely on the system admin to configure two different subnet
prefixes for HP-MPI to work.
 No vendor has claimed to support this.
CQ, not supporting a different subnet prefix per IB subnet is
against the nature of IB. I don't think there should be any problem
configuring a different prefix on each OpenSM instance, and the
Linux host stack should work perfectly under this configuration.
If you are aware of any problem in OpenSM and/or the host stack,
please let the community know and the maintainers will fix it.

Or.

_______________________________________________
ewg mailing list
ewg@lists.openfabrics.org
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg
