At 09:08 AM 10/1/2008, Ramiro Alba Queipo wrote:
Hi all,

We have an InfiniBand cluster of 22 nodes with 20 Gbps Mellanox
MHGS18-XTC cards, and I ran some network performance tests both to check
the hardware and to clarify concepts.

Starting from the theoretical peak of the InfiniBand card (in my
case 4X DDR => 20 Gbit/s => 2.5 Gbytes/s), we have some limits:

1) Bus type: PCIe 8x => 250 Mbytes/s per lane => 250 * 8 = 2 Gbytes/s

2) According to a thread on the OpenMPI users mailing list (???):

  The 16 Gbit/s number is the theoretical peak; IB is coded 8b/10b, so
  out of the 20 Gbit/s, 16 is what you get. On SDR this number is
  (of course) 8 Gbit/s achievable (which is ~1000 MB/s), and I've
  seen well above 900 MB/s on MPI (this on 8x PCIe, 2x margin).
 
  Is this true?

IB uses 8b/10b encoding.  This results in a 20% overhead on every frame.  Further, the IB protocol - headers, CRC, flow-control credits, etc. - will consume additional bandwidth; the amount will vary with workload and traffic patterns.  Also, any fabric can experience congestion, which may reduce throughput for any given data flow.

PCIe uses 8b/10b encoding for both 2.5 GT/s and 5.0 GT/s signaling (the next-generation signaling is scrambling-based, so it provides 2x the data bandwidth with significantly less encoding overhead).  It also has protocol overheads conceptually similar to IB which will consume additional bandwidth (keep in mind that many volume chipsets only support a 256-byte transaction size, so a single IB frame may require 8-16 PCIe transactions to process).  There will also be application / device-driver control messages between the host and the I/O device which will consume additional bandwidth.

Also keep in mind that the actual application bandwidth may be further gated by the memory subsystem, the I/O-to-memory latency, etc. so while the theoretical bandwidths may be quite high, they will be constrained by the interactions and the limitations within the overall hardware and software stacks. 
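To make the encoding arithmetic above concrete, here is a minimal sketch. It models only the 8b/10b line coding; the protocol headers, CRC, and flow-control credits described above would reduce these numbers further:

```python
# Back-of-the-envelope effective-bandwidth arithmetic for 8b/10b links.
# Assumption: only line-coding overhead is modeled; IB protocol
# overheads and congestion are ignored.

def effective_gbits(signal_rate_gbits, encoding_efficiency=8 / 10):
    """Data rate remaining after line coding (8b/10b => 80% efficiency)."""
    return signal_rate_gbits * encoding_efficiency

# 4X DDR InfiniBand: 20 Gbit/s signaling -> 16 Gbit/s of data
ddr_data = effective_gbits(20)
# 4X SDR InfiniBand: 10 Gbit/s signaling -> 8 Gbit/s (~1000 MB/s)
sdr_data = effective_gbits(10)

print(ddr_data, "Gbit/s DDR data rate")   # 16.0
print(sdr_data, "Gbit/s SDR data rate")   # 8.0
print(ddr_data / 8 * 1000, "MB/s ceiling before protocol overhead")
```

So even before any protocol overhead, a 4X DDR link tops out at roughly 2000 MB/s of data per direction, not the advertised 2.5 GB/s.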


3) According to another comment in the same thread:

  The data throughput limit for 8x PCIe is ~12 Gb/s. The theoretical
  limit is 16 Gb/s, but each PCIe packet has a whopping 20 byte
  overhead. If the adapter uses 64 byte packets, then you see 1/3 of
  the throughput go to overhead.

  Could someone explain that to me?

DMA Read completions are often returned one cache line at a time while DMA Writes are often transmitted at the Max_Payload_Size of 256B (some chipsets do coalesce completions allowing up to the Max_Payload_Size to be returned).  Depending upon the mix of transactions required to move an IB frame, the overheads may seem excessive.

PCIe overheads vary with the transaction type, the flow control credit exchanges, CRC, etc.   It is important to keep these in mind when evaluating the solution. 
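A rough sketch of the packet-efficiency math from the quoted comment may help. The ~20 bytes of per-packet overhead is an assumption taken from the quote (TLP header, sequence number, LCRC, etc.); real overhead varies with transaction type and credit traffic:

```python
# PCIe payload efficiency for various packet sizes. Assumption: ~20
# bytes of framing overhead per packet, as in the quoted comment.

def pcie_efficiency(payload_bytes, overhead_bytes=20):
    """Fraction of link data bits that carry payload."""
    return payload_bytes / (payload_bytes + overhead_bytes)

link_data_rate_gbits = 16  # PCIe 8x at 2.5 GT/s, after 8b/10b coding

for payload in (64, 128, 256):
    eff = pcie_efficiency(payload)
    print(f"{payload}B payload: {eff:.0%} efficient, "
          f"~{link_data_rate_gbits * eff:.1f} Gb/s of data")
```

With 64-byte packets this lands at roughly 12 Gb/s of usable data, which matches the "~12 Gb/s" figure in the quote; larger payloads (e.g. the 256-byte Max_Payload_Size) amortize the overhead much better.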

Then I got another comment on the matter:

The best uni-directional performance I have heard of for PCIe 8x IB
DDR is ~1,400 MB/s (11.2 Gb/s) with Lustre, which is about 55% of the
theoretical 20 Gb/s advertised speed.


---------------------------------------------------------------------


Now, I did some tests (mpi used is OpenMPI) with the following results:

a) Using "Performance tests" from OFED 1.3.1
     
   ib_write_bw -a server ->  1347 MB/s

b) Using hpcc (2 cores at different nodes) -> 1157 MB/s (--mca
mpi_leave_pinned 1)

c) Using "OSU Micro-Benchmarks" in "MPItests" from OFED 1.3.1

   1) 2 cores from different nodes

    - mpirun -np 2 --hostfile pool osu_bibw -> 2001.29 MB/s
(bidirectional)
    - mpirun -np 2 --hostfile pool osu_bw -> 1311.31 MB/s

   2) 2 cores from the same node

    - mpirun -np 2  osu_bibw -> 2232 MB/s (bidirectional)
    - mpirun -np 2  osu_bw -> 2058 MB/s
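To put those measurements in perspective, a quick sketch comparing the unidirectional inter-node results against the ~2000 MB/s post-8b/10b ceiling of a 4X DDR link. The ceiling is an assumption: it ignores IB and PCIe protocol overheads, which is why even a healthy setup lands well below 100%:

```python
# Measured unidirectional results vs. the post-encoding DDR ceiling.
# Assumption: 2000 MB/s ceiling (16 Gbit/s of data / 8 bits per byte),
# before IB/PCIe protocol overheads.

CEILING_MB_S = 2000

results = {
    "ib_write_bw": 1347,
    "hpcc": 1157,
    "osu_bw (inter-node)": 1311.31,
}

for name, mb_s in results.items():
    print(f"{name}: {mb_s} MB/s = {mb_s / CEILING_MB_S:.0%} of ceiling")
```

That puts the raw-verbs and OSU numbers in the mid-60% range of the post-encoding ceiling, roughly consistent with the ~12 Gb/s PCIe limit discussed above.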

The questions are:

- Are those results consistent with what they should be?
- Why are the tests with the two cores on the same node better?
- Should the bidirectional test not be a bit higher?
- Why is hpcc so low?

You would need to provide more information about the system hardware, the fabrics, etc. to make any rational response.  There are many variables here, and as I noted above, one cannot just derate the hardware by a fixed percentage and conclude there is a real problem in the solution stack.  It is more complex than that.  The question you should ask is whether the micro-benchmarks you are executing are a realistic reflection of the real workload.  If not, then do any of these numbers matter at the end of the day, especially if the total time spent within the interconnect stacks is relatively small or bursty?

Mike
_______________________________________________
general mailing list
[email protected]
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general

