We have a small Lustre installation consisting of an MDS and five OSS
servers.  Historically the MDS and OSS servers have each had both a 1Gbit
Ethernet interface (tcp0) to the workstations and a QDR InfiniBand
interface (ib0) to our cluster.

   We're planning to add a 10Gbit Ethernet interface (tcp1, MTU 9000)
to the MDS and OSS nodes and to the workstations for faster access.  Our
software has a pretty high I/O-to-CPU ratio.
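
   For reference, bringing up jumbo frames on the new interface should
just be something like this (eth1 as the 10GbE port name is a
placeholder; every port and switch on tcp1 has to agree on MTU 9000 or
large frames get dropped):

    # Set and verify MTU 9000 on the 10GbE interface (eth1 is a placeholder)
    ip link set dev eth1 mtu 9000
    ip link show dev eth1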

   I just discovered that our MDS can't in fact take another PCIe x8
card, but it does have a spare GigE port.  The 10Gbit Ethernet switch
can support both 1Gbit and 10Gbit interfaces.

   We'd then have three networks (an LNet configuration sketch follows
below):
tcp0 at 1Gbit to slow clients
tcp1 at 10Gbit to faster clients
ib0 to the cluster
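
   For what it's worth, I'd expect the servers to declare the three
networks to LNet along these lines; the interface names eth0, eth1 and
ib0 are just placeholders, and note that LNet names the InfiniBand
network o2ib even though the interface itself is ib0:

    # /etc/modprobe.d/lustre.conf on the MDS/OSS nodes
    # (eth0 = 1GbE, eth1 = 10GbE -- interface names are placeholders)
    options lnet networks="tcp0(eth0),tcp1(eth1),o2ib0(ib0)"

The workstations would only list the network(s) they can actually
reach, e.g. networks="tcp1(eth1)" on a 10Gbit client.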

   My question is:

   Is there a risk of congestion on, or of overrunning, that second
GigE MDS interface if our workstations and OSS servers communicate over
tcp1 at 10Gbit while the MDS's tcp1 is connected at only 1Gbit?  The
bulk of our traffic will continue to be between the cluster and Lustre
over IB, but the workstations can trivially overrun Gigabit Ethernet,
hence the desire for 10Gbit between them and the OSSes.

   My gut feeling is that it should be fine, particularly with the
larger MTU, since there's not that much traffic to the MDS, but I'd
easily believe it if somebody said it's a risky thing to do.
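
   If we go ahead, I figure we can verify that by watching the 1Gbit
MDS port for drops and overruns under load, roughly like this (eth2 as
the spare GigE port name is a placeholder; exact counter names vary by
driver):

    # Kernel per-interface counters, including RX dropped/overrun
    ip -s link show dev eth2

    # NIC/driver-level statistics
    ethtool -S eth2 | grep -iE 'drop|over|miss'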

   The alternative is to buy a new MDS and swap the disks into it.

James Robnett
National Radio Astronomy Observatory
Array Operations Center
