Re: [openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread Parks Fields

At 12:36 PM 6/5/2006, Talpey, Thomas wrote:

> Thanks Parks, this is a very interesting perspective.
> I will avoid going into my rant about edge devices for
> now, however. :-)

Cool, you can send it direct if you want.



> I am not sure what you mean about using SDP "end to end".
> I assume you would perhaps use SDP to these edge nodes,
> but this would require terminating the SDP connection and
> re-issuing the stream over TCP to the Panasas box, wouldn't it?

Yes, it would probably have to work that way. Another problem is that 
SDP is not routable.





> Would this bridging be done in-kernel, like your IPoIB/Ethernet
> solution today, or would you implement a daemon? It will be
> a difficult challenge, I predict.

We are just starting to think about things like this, and trying to 
keep an open mind to all possibilities. We have no solutions for this 
yet; there might be better ways.
So you are correct: we haven't thought it all the way through and 
have no alternative plan other than IPoIB at the moment.
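
As a rough illustration of the user-space daemon approach Tom mentions, here is a
minimal sketch of a relay an I/O node could run: it accepts a byte stream on the
IB-facing side and re-issues it over TCP toward the Panasas side. The addresses
and port are placeholders, and plain TCP stands in for an SDP front end (which
would need an SDP-capable socket layer or preload shim), so this is only a sketch
of the shape of such a daemon, not a proposal.

# Hypothetical sketch only: a user-space relay daemon on an I/O node.
# It accepts connections on the IB-facing side and re-issues each byte
# stream over TCP toward the Ethernet/Panasas side.  Addresses and port
# are made up; a real SDP front end would differ.
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 9000)            # IB-facing side (assumed port)
TARGET_ADDR = ("panasas.example", 9000)    # Ethernet-facing side (assumed)

def pump(src, dst):
    # Copy bytes in one direction until the source closes.
    try:
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)
    finally:
        dst.close()

def handle(client):
    upstream = socket.create_connection(TARGET_ADDR)
    # One thread per direction makes the relay full duplex.
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LISTEN_ADDR)
    srv.listen(128)
    while True:
        conn, _ = srv.accept()
        handle(conn)

if __name__ == "__main__":
    main()

An in-kernel bridge would avoid the extra user-space copies this sketch implies,
which is part of why it looks like a difficult trade-off.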


My next step will be testing 4x-ddr IPoIB before doing anything else.
parks






Re: [openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread Talpey, Thomas
Thanks Parks, this is a very interesting perspective.
I will avoid going into my rant about edge devices for
now, however. :-)

I am not sure what you mean about using SDP "end to end".
I assume you would perhaps use SDP to these edge nodes,
but this would require terminating the SDP connection and
re-issuing the stream over TCP to the Panasas box, wouldn't it?

Would this bridging be done in-kernel, like your IPoIB/Ethernet
solution today, or would you implement a daemon? It will be
a difficult challenge, I predict.

Tom.

At 02:16 PM 6/5/2006, Parks Fields wrote:
>
>>
>>I consider IPoIB to be Ethernet emulation.
>>
>>As for apples and oranges, my point exactly.
>
>
>It is not really about comparisons. Here at LANL we have an 
>environment where all our new clusters have to mount our global 
>parallel file system, Panasas. It is Ethernet-attached and will be for a while.
>
>The cluster interconnect is IB and the compute nodes do NOT have 
>Ethernet, so we created I/O nodes to "bridge" IB to Ethernet.
>
>Compute node ---IB--- i/o node ---10GigE--- ethernet switch --- Panasas
>
>We like to match/balance the network bandwidth to the storage 
>bandwidth, and we try to achieve 1 GB/sec per TF of the machine. Example: 
>a 50 TF machine needs 50 GB/sec of storage bandwidth.
>
>So if IPoIB would give us ~700 MB/sec on the IB side and it came out the other 
>side over 10GigE at ~800 MB/sec, that would be nice.
>Hope this helps. We are now trying to find out if SDP will work end-to-end.
>
>thanks
>parks



Re: [openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread Parks Fields




> I consider IPoIB to be Ethernet emulation.
>
> As for apples and oranges, my point exactly.



It is not really about comparisons. Here at LANL we have an 
environment where all our new clusters have to mount our global 
parallel file system, Panasas. It is Ethernet-attached and will be for a while.


The cluster interconnect is IB and the compute nodes do NOT have 
Ethernet, so we created I/O nodes to "bridge" IB to Ethernet.


Compute node ---IB--- i/o node ---10GigE--- ethernet switch --- Panasas

We like to match/balance the network bandwidth to the storage 
bandwidth, and we try to achieve 1 GB/sec per TF of the machine. Example: 
a 50 TF machine needs 50 GB/sec of storage bandwidth.


So if IPoIB would give us ~700 MB/sec on the IB side and it came out the other 
side over 10GigE at ~800 MB/sec, that would be nice.
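
As a back-of-the-envelope check on the sizing rule above (assuming the ~700 MB/sec
IPoIB and ~800 MB/sec 10GigE figures quoted here, and 1 GB/sec per TF), a short
sketch of how many I/O nodes that implies:

import math

def io_nodes_needed(machine_tf, ipoib_mbs=700, tengige_mbs=800, gbs_per_tf=1.0):
    # Rule of thumb from above: ~1 GB/sec of storage bandwidth per TF.
    # Each I/O node is limited by the slower of its two sides
    # (IPoIB in, 10GigE out).  All figures here are assumptions.
    required_mbs = machine_tf * gbs_per_tf * 1024      # MB/sec needed
    per_node_mbs = min(ipoib_mbs, tengige_mbs)         # bottleneck side
    return math.ceil(required_mbs / per_node_mbs)

print(io_nodes_needed(50))    # the 50 TF example: ~74 I/O nodes at 700 MB/sec each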

Hope this helps. We are now trying to find out if SDP will work end-to-end.

thanks
parks







Re: [openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread hycsw
Tom,

We are in the process of measuring the CPU utilization in our NFS/RDMA
experiments, in contrast with regular NFS. We also intend to include 
netperf numbers and will keep you posted with our results as soon as 
possible.

Helen
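
For what it's worth, one way to collect throughput together with local and remote
CPU utilization is netperf's -c/-C options; a rough sketch (the hostname is a
placeholder, and the exact output columns vary by netperf version):

# Sketch only: drive netperf and ask it to report local/remote CPU
# utilization (-c / -C).  Assumes netperf and netserver are installed;
# "ionode1" below is a placeholder hostname.
import subprocess

def tcp_stream_with_cpu(host, seconds=30):
    cmd = ["netperf",
           "-H", host,           # remote netserver
           "-t", "TCP_STREAM",   # bulk-throughput test
           "-l", str(seconds),   # test length in seconds
           "-c", "-C"]           # local and remote CPU utilization
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(tcp_stream_with_cpu("ionode1"))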

- original Message -
>From [EMAIL PROTECTED] Mon Jun  5 09:03:56 2006


Helen, have you measured the CPU utilizations during these runs?
Perhaps you are out of CPU.

Outrageous opinion follows.

Frankly, an IB HCA running Ethernet emulation is approximately the
world's worst 10GbE adapter (not to put too fine of a point on it :-) )
There is no hardware checksumming, nor large-send offloading, both
of which force overhead onto software. And, as you just discovered
it isn't even 10Gb!

In general, network emulation layers are always going to perform more
poorly than native implementations. But this is only a generality learned
from years of experience with them.

Tom.  




Re: [openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread Bernard King-Smith
> Thomas Talpey said:
> At 11:38 AM 6/5/2006, hbchen wrote:
> >Even with this IB-4X = 8Gb/sec = 1024 MB/sec the IPoIB bandwidth
> >utilization is still very low.
> >>> IPoIB=420MB/sec
> >>> bandwidth utilization= 420/1024 = 41.01%
>
>
> Helen, have you measured the CPU utilizations during these runs?
> Perhaps you are out of CPU.
>
> Outrageous opinion follows.
>
> Frankly, an IB HCA running Ethernet emulation is approximately the
> world's worst 10GbE adapter (not to put too fine of a point on it :-) )
> There is no hardware checksumming, nor large-send offloading, both
> of which force overhead onto software. And, as you just discovered
> it isn't even 10Gb!
>
> In general, network emulation layers are always going to perform more
> poorly than native implementations. But this is only a generality learned
> from years of experience with them
>
> Tom.

Hold on here...

Who said anything about Ethernet emulation? Hal said he is running
straight netperf over IB, not Ethernet emulation. I don't think that any IB
HCAs today support offloaded checksum and large send. You are comparing
apples and oranges. The only appropriate comparison is to use the IBM HCA
compared to the mthca adapters. I think Hal's point is actually comparing
"any" IB adapter against GigE and Myrinet. Both the mthca and IBM HCAs
should get similar IPoIB performance using identical OpenIB stacks.


Bernie King-Smith
IBM Corporation
Server Group
Cluster System Performance
[EMAIL PROTECTED](845)433-8483
Tie. 293-8483 or wombat2 on NOTES

"We are not responsible for the world we are born into, only for the world
we leave when we die.
So we have to accept what has gone before us and work to change the only
thing we can,
-- The Future." William Shatner


   

RE: [openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread Felix Marti

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of hbchen
Sent: Monday, June 05, 2006 9:12 AM
To: Talpey, Thomas
Cc: openib-general@openib.org
Subject: Re: [openib-general] Question about the IPoIB bandwidth performance ?



 

Talpey, Thomas wrote:

> At 11:38 AM 6/5/2006, hbchen wrote:
> > Even with this IB-4X = 8Gb/sec = 1024 MB/sec the IPoIB bandwidth
> > utilization is still very low.
> > IPoIB=420MB/sec
> > bandwidth utilization= 420/1024 = 41.01%
>
> Helen, have you measured the CPU utilizations during these runs?
> Perhaps you are out of CPU.

Tom,
I am HB Chen from LANL, not Helen Chen from SNL.
I didn't run out of CPU. It is at about 70-80% CPU utilization.

> Outrageous opinion follows.
>
> Frankly, an IB HCA running Ethernet emulation is approximately the
> world's worst 10GbE adapter (not to put too fine of a point on it :-) )

The IP over Myrinet (Ethernet emulation) can reach
up to 96%-98% bandwidth utilization; why not IPoIB?



[Felix:] As pointed out earlier, it is the message rate. If
you change the MTU to 1500B (instead of the non-standard 9000B jumbo frames),
performance will drop into the same range as what you see with IPoIB (limited
by the receiver).
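
A back-of-the-envelope sketch of that message-rate argument, assuming (purely for
illustration) a receiver that tops out around 300,000 frames per second:

LINE_RATE_MBS = 1280.0        # 10 Gb/sec as quoted in this thread
RX_FRAME_LIMIT = 300_000      # assumed receiver limit, frames per second

def achievable_mbs(mtu_bytes):
    # Throughput when the receiver is packet-rate limited, capped at line rate.
    return min(RX_FRAME_LIMIT * mtu_bytes / 1e6, LINE_RATE_MBS)

for mtu in (1500, 9000):
    print(f"MTU {mtu:5d}B -> ~{achievable_mbs(mtu):5.0f} MB/s")
# MTU  1500B -> ~  450 MB/s   (frame-rate bound, roughly IPoIB territory)
# MTU  9000B -> ~ 1280 MB/s   (line-rate bound)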


HB Chen 
[EMAIL PROTECTED]



> There is no hardware checksumming, nor large-send offloading, both
> of which force overhead onto software. And, as you just discovered,
> it isn't even 10Gb!
>
> In general, network emulation layers are always going to perform more
> poorly than native implementations. But this is only a generality learned
> from years of experience with them.
>
> Tom.

 









Re: [openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread Talpey, Thomas
At 12:11 PM 6/5/2006, hbchen wrote:
>>Perhaps you are out of CPU.
>>
>>  
>Tom,
>I am HB Chen from LANL not the Helen Chen from SNL.

Oops, sorry! I have too many email messages going by. :-)
HB, then.


>I didn't run out of CPU.  It is about 70-80 % of CPU utilization.

But, is one CPU at 100%? Interrupt processing, for example.
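
One quick way to answer that on Linux is to watch per-CPU busy time from
/proc/stat during a run; a small sketch (standard /proc/stat accounting, nothing
netperf-specific):

# Sketch: sample /proc/stat twice and report per-CPU busy percentage, so a
# single saturated core (e.g. the one taking interrupts) stands out even
# when the machine-wide average is only 70-80%.  Linux-only.
import time

def cpu_snapshot():
    snap = {}
    with open("/proc/stat") as f:
        for line in f:
            parts = line.split()
            if parts and parts[0].startswith("cpu") and parts[0] != "cpu":
                fields = [int(x) for x in parts[1:]]
                idle = fields[3] + (fields[4] if len(fields) > 4 else 0)
                snap[parts[0]] = (sum(fields) - idle, sum(fields))
    return snap

def per_cpu_busy(interval=5.0):
    a = cpu_snapshot()
    time.sleep(interval)
    b = cpu_snapshot()
    for cpu in sorted(a):
        busy = b[cpu][0] - a[cpu][0]
        total = b[cpu][1] - a[cpu][1]
        print(f"{cpu}: {100.0 * busy / max(total, 1):5.1f}% busy")

if __name__ == "__main__":
    per_cpu_busy()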

>>Outrageous opinion follows.
>>
>>Frankly, an IB HCA running Ethernet emulation is approximately the
>>world's worst 10GbE adapter (not to put too fine of a point on it :-) )
>
>The IP over Myrinet (Ethernet emulation) can reach up to 96%-98% bandwidth 
>utilization; why not IPoIB?

I am not familiar with the implementation Myrinet uses. In any
case, I am not saying that an emulation can't reach certain goals,
just that they will pretty much always be inferior to native approaches.
Sometimes far inferior.

Tom. 




Re: [openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread hbchen




Talpey, Thomas wrote:

> At 11:38 AM 6/5/2006, hbchen wrote:
> > Even with this IB-4X = 8Gb/sec = 1024 MB/sec the IPoIB bandwidth
> > utilization is still very low.
> > IPoIB=420MB/sec
> > bandwidth utilization= 420/1024 = 41.01%
>
> Helen, have you measured the CPU utilizations during these runs?
> Perhaps you are out of CPU.

Tom,
I am HB Chen from LANL, not Helen Chen from SNL.
I didn't run out of CPU. It is at about 70-80% CPU utilization.

> Outrageous opinion follows.
>
> Frankly, an IB HCA running Ethernet emulation is approximately the
> world's worst 10GbE adapter (not to put too fine of a point on it :-) )

The IP over Myrinet (Ethernet emulation) can reach up to 96%-98% 
bandwidth utilization; why not IPoIB?

HB Chen 
[EMAIL PROTECTED]

> There is no hardware checksumming, nor large-send offloading, both
> of which force overhead onto software. And, as you just discovered,
> it isn't even 10Gb!
>
> In general, network emulation layers are always going to perform more
> poorly than native implementations. But this is only a generality learned
> from years of experience with them.
>
> Tom.

  





Re: [openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread Talpey, Thomas
At 11:38 AM 6/5/2006, hbchen wrote:
>Even with this IB-4X = 8Gb/sec = 1024 MB/sec the IPoIB bandwidth utilization 
>is still very low.
>>> IPoIB=420MB/sec  
>>> bandwidth utilization= 420/1024 = 41.01%


Helen, have you measured the CPU utilizations during these runs?
Perhaps you are out of CPU.

Outrageous opinion follows.

Frankly, an IB HCA running Ethernet emulation is approximately the
world's worst 10GbE adapter (not to put too fine of a point on it :-) )
There is no hardware checksumming, nor large-send offloading, both
of which force overhead onto software. And, as you just discovered
it isn't even 10Gb!

In general, network emulation layers are always going to perform more
poorly than native implementations. But this is only a generality learned
from years of experience with them.

Tom.  




Re: [openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread Bernard King-Smith
Hal Rosenstock wrote:

> On Mon, 2006-06-05 at 11:12, hbchen wrote:
> > Hi,
> > I have a question about the IPoIB bandwidth performance.
> > I did netperf testing using a single GigE NIC, a Myrinet D card, a Myrinet 10G
> > Ethernet card,
> > and a Voltaire InfiniBand 4X HCA400Ex (PCI-Express interface).
> >
> >
> > NIC (Jumbo enabled)           Line bandwidth (LB)      IP-over-NIC bandwidth, utilization (IPoNIC/LB)
> > ----------------------------  -----------------------  ----------------------------------------------
> > Single Gigabit NIC (PCI-X)    1 Gb/sec  = 125 MB/sec   120 MB/sec, 96%
> > Myrinet D card (PCI-X)        250 MB/sec               240-245 MB/sec, 96% - 98%
> > Myrinet 10G Ethernet (PCI-E)  10 Gb/sec = 1280 MB/sec  980 MB/sec, 76.6% (my testing, Linux 2.6.14.6)
> >                                                        1225 MB/sec, 95.7% (data from Myrinet website)
> > IB HCA 4X (PCI-Express)       10 Gb/sec = 1280 MB/sec  420 MB/sec, 32.8% (my testing, Linux 2.6.14.6)
> >                                                        474 MB/sec, 37% (best from the OpenIB list, 2.6.12-rc5 patch 1)
> >
> > Why is the bandwidth utilization of IPoIB so low compared to the other
> > NICs?
>
> One thing to note is that the max utilization of 10G IB (4x) is 8G due
> to the signalling being included in this rate (unlike ethernet whose
> rate represents the data rate and does not include the signalling
> overhead).
>
> -- Hal
>

You also have larger IP packets when you use GigE (especially with large
send/offload) and Myrinet. I think Myrinet uses a 60K MTU, and for GigE
without large send you get a 9000-byte MTU. With large send you get a 64K
buffer to the adapter, so fragmentation into 1500/9000-byte IP packets is
offloaded in the adapter.

Currently, with IPoIB using UD mode, you have to generate lots of 2K
packets. With serialized IPoIB drivers you end up bottlenecking on a single
CPU. There is an IPoIB-CM IETF spec out which should significantly improve
IPoIB performance if implemented.
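
To put rough numbers on that packet-rate cost (the 2K figure is the IPoIB UD MTU
mentioned above; the 800 MB/sec target is just an example), a short sketch:

TARGET_MBS = 800.0    # example target, roughly the 10GigE figure in this thread

def packets_per_second(mtu_bytes, target_mbs=TARGET_MBS):
    # Packets/sec a driver must handle to sustain target_mbs at this MTU.
    return target_mbs * 1e6 / mtu_bytes

for label, mtu in [("IPoIB UD (~2K)", 2048),
                   ("jumbo frame (9000)", 9000),
                   ("Myrinet (~60K)", 60 * 1024),
                   ("large send (64K)", 64 * 1024)]:
    print(f"{label:18s}: {packets_per_second(mtu):10,.0f} packets/sec")
# IPoIB UD (~2K)    :    390,625 packets/sec  -- all on one CPU if serialized
# jumbo frame (9000):     88,889 packets/sec
# Myrinet (~60K)    :     13,021 packets/sec
# large send (64K)  :     12,207 packets/sec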

> > There must be a lot of room to improve the IPoIB software to reach 75%+
> > bandwidth utilization.
> >
> >
> > HB Chen
> > Los Alamos National Lab
> > [EMAIL PROTECTED]
> >


Bernie King-Smith
IBM Corporation
Server Group
Cluster System Performance
[EMAIL PROTECTED](845)433-8483
Tie. 293-8483 or wombat2 on NOTES

"We are not responsible for the world we are born into, only for the world
we leave when we die.
So we have to accept what has gone before us and work to change the only
thing we can,
-- The Future." William Shatner





Re: [openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread hbchen




Hal Rosenstock wrote:

> On Mon, 2006-06-05 at 11:12, hbchen wrote:
> > Hi,
> > I have a question about the IPoIB bandwidth performance.
> > I did netperf testing using a single GigE NIC, a Myrinet D card, a Myrinet 10G
> > Ethernet card,
> > and a Voltaire InfiniBand 4X HCA400Ex (PCI-Express interface).
> >
> > NIC (Jumbo enabled)           Line bandwidth (LB)      IP-over-NIC bandwidth, utilization (IPoNIC/LB)
> > ----------------------------  -----------------------  ----------------------------------------------
> > Single Gigabit NIC (PCI-X)    1 Gb/sec  = 125 MB/sec   120 MB/sec, 96%
> > Myrinet D card (PCI-X)        250 MB/sec               240-245 MB/sec, 96% - 98%
> > Myrinet 10G Ethernet (PCI-E)  10 Gb/sec = 1280 MB/sec  980 MB/sec, 76.6% (my testing, Linux 2.6.14.6)
> >                                                        1225 MB/sec, 95.7% (data from Myrinet website)
> > IB HCA 4X (PCI-Express)       10 Gb/sec = 1280 MB/sec  420 MB/sec, 32.8% (my testing, Linux 2.6.14.6)
> >                                                        474 MB/sec, 37% (best from the OpenIB list, 2.6.12-rc5 patch 1)
> >
> > Why is the bandwidth utilization of IPoIB so low compared to the other
> > NICs?
>
> One thing to note is that the max utilization of 10G IB (4x) is 8G due
> to the signalling being included in this rate (unlike ethernet, whose
> rate represents the data rate and does not include the signalling
> overhead).

Hal,
Even with this IB-4X = 8Gb/sec = 1024 MB/sec, the IPoIB bandwidth
utilization is still very low:
>> IPoIB = 420MB/sec
>> bandwidth utilization = 420/1024 = 41.01%

HB

> -- Hal

> > There must be a lot of room to improve the IPoIB software to reach 75%+
> > bandwidth utilization.
> >
> > HB Chen
> > Los Alamos National Lab
> > [EMAIL PROTECTED]
  





Re: [openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread Hal Rosenstock
On Mon, 2006-06-05 at 11:12, hbchen wrote:
> Hi,
> I have a question about the IPoIB bandwidth performance.
> I did netperf testing using a single GigE NIC, a Myrinet D card, a Myrinet 10G
> Ethernet card,
> and a Voltaire InfiniBand 4X HCA400Ex (PCI-Express interface).
>
> NIC (Jumbo enabled)           Line bandwidth (LB)      IP-over-NIC bandwidth, utilization (IPoNIC/LB)
> ----------------------------  -----------------------  ----------------------------------------------
> Single Gigabit NIC (PCI-X)    1 Gb/sec  = 125 MB/sec   120 MB/sec, 96%
> Myrinet D card (PCI-X)        250 MB/sec               240-245 MB/sec, 96% - 98%
> Myrinet 10G Ethernet (PCI-E)  10 Gb/sec = 1280 MB/sec  980 MB/sec, 76.6% (my testing, Linux 2.6.14.6)
>                                                        1225 MB/sec, 95.7% (data from Myrinet website)
> IB HCA 4X (PCI-Express)       10 Gb/sec = 1280 MB/sec  420 MB/sec, 32.8% (my testing, Linux 2.6.14.6)
>                                                        474 MB/sec, 37% (best from the OpenIB list, 2.6.12-rc5 patch 1)
>
> Why is the bandwidth utilization of IPoIB so low compared to the other
> NICs?

One thing to note is that the max utilization of 10G IB (4x) is 8G due
to the signalling being included in this rate (unlike ethernet whose
rate represents the data rate and does not include the signalling
overhead).

-- Hal
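
A quick check of that arithmetic (8b/10b encoding means only 8 of every 10
signalled bits carry data; the thread's convention of 1 GB/sec = 1024 MB/sec is
kept):

SIGNAL_RATE_GBPS = 10.0                        # 4X IB signalling rate
DATA_RATE_GBPS = SIGNAL_RATE_GBPS * 8 / 10     # 8b/10b leaves 8 Gb/sec of data

data_rate_mbs = DATA_RATE_GBPS * 1024 / 8      # 1024 MB/sec, the figure used in this thread
ipoib_mbs = 420.0                              # measured IPoIB throughput from the table
print(f"usable data rate:  {data_rate_mbs:.0f} MB/sec")
print(f"IPoIB utilization: {ipoib_mbs / data_rate_mbs:.1%}")   # ~41%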

> There must be a lot of room to improve the IPoIB software to reach 75%+
> bandwidth utilization.
> 
> 
> HB Chen
> Los Alamos National Lab
> [EMAIL PROTECTED]
> 




[openib-general] Question about the IPoIB bandwidth performance ?

2006-06-05 Thread hbchen
Hi,
I have a question about the IPoIB bandwidth performance.
I did netperf testing using a single GigE NIC, a Myrinet D card, a Myrinet 10G
Ethernet card,
and a Voltaire InfiniBand 4X HCA400Ex (PCI-Express interface).


NIC (Jumbo enabled)           Line bandwidth (LB)      IP-over-NIC bandwidth, utilization (IPoNIC/LB)
----------------------------  -----------------------  ----------------------------------------------
Single Gigabit NIC (PCI-X)    1 Gb/sec  = 125 MB/sec   120 MB/sec, 96%
Myrinet D card (PCI-X)        250 MB/sec               240-245 MB/sec, 96% - 98%
Myrinet 10G Ethernet (PCI-E)  10 Gb/sec = 1280 MB/sec  980 MB/sec, 76.6% (my testing, Linux 2.6.14.6)
                                                       1225 MB/sec, 95.7% (data from Myrinet website)
IB HCA 4X (PCI-Express)       10 Gb/sec = 1280 MB/sec  420 MB/sec, 32.8% (my testing, Linux 2.6.14.6)
                                                       474 MB/sec, 37% (best from the OpenIB list, 2.6.12-rc5 patch 1)

Why is the bandwidth utilization of IPoIB so low compared to the other
NICs?
There must be a lot of room to improve the IPoIB software to reach 75%+
bandwidth utilization.
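
For reference, a small sketch that reproduces the utilization column above from
the measured throughputs and the line rates in the table:

# Reproduce the utilization column: measured IP-over-NIC throughput
# divided by line bandwidth (numbers copied from the table above).
measurements = [
    ("Single Gigabit NIC (PCI-X)",       125.0,  120.0),
    ("Myrinet D card (PCI-X)",           250.0,  245.0),
    ("Myrinet 10G Ethernet (my test)",  1280.0,  980.0),
    ("Myrinet 10G Ethernet (vendor)",   1280.0, 1225.0),
    ("IB HCA 4X IPoIB (my test)",       1280.0,  420.0),
    ("IB HCA 4X IPoIB (OpenIB best)",   1280.0,  474.0),
]

for name, line_mbs, ip_mbs in measurements:
    print(f"{name:33s} {ip_mbs:6.0f}/{line_mbs:6.0f} MB/sec = {ip_mbs / line_mbs:5.1%}")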


HB Chen
Los Alamos National Lab
[EMAIL PROTECTED]
