> - Message from "Or Gerlitz" <[EMAIL PROTECTED]> on Thu, 01 Feb 2007 11:17:53 +0200 -
>
> Dotan Barak wrote:
> > I think that now, when an implementation of IPoIB CM is available and
> > SRQ is being used, one may need to use an SRQ with more than 16K WRs.
>
> IPoIB UD uses SRQ by nat
To: "EWG" <[EMAIL PROTECTED]>
cc: "Roland Dreier" <[EMAIL PROTECTED]>, "OPENIB"
Subject: [openib-general] Suggestion to remove NAPI with IPoIB from OFED 1.2 release
> I suggest that we not include NAPI support in the OFED 1.2 release.
> The reasons are:
>
>* IBM interrupt handler change to support
- Message from "Roland Dreier" <[EMAIL PROTECTED]> on Wed, 10 Jan 2007 07:15:12 -0800 -
>
> To: "Michael S. Tsirkin" <[EMAIL PROTECTED]>
> cc: openib-general@openib.org
> Subject: Re: [openib-general] [PATCHv4] IPoIB CM Experimental support
>
> > - Using path MTU
- Message from "Michael S. Tsirkin" <[EMAIL PROTECTED]> on Mon, 8 Jan 2007 18:57:14 +0200 -
>
> To: openib-general@openib.org, "Roland Dreier" <[EMAIL PROTECTED]>
> Subject: [openib-general] [PATCHv4] IPoIB CM Experimental support
>
> The following patch adds experimental s
Roland Dreier <[EMAIL PROTECTED]> wrote on 10/05/2006 10:18:49 AM:
> Bernard> I don't think it is the PCI-e bus because it can handle
> Bernard> much more than 20 Gb/s.
>
> This isn't true. Mellanox cards have PCI-e x8 interfaces, which have a
> theoretical limit of 16 Gb/sec in each direction
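Roland's 16 Gb/s figure follows directly from first-generation PCIe signaling; here is a quick sanity check of the arithmetic (assuming PCIe 1.x lanes at 2.5 GT/s with 8b/10b encoding, which is what HCAs of that era used):

```python
# PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding (8 data bits per 10 line bits).
lanes = 8                      # x8 interface
raw_gbps_per_lane = 2.5
encoding = 8 / 10
data_gbps = lanes * raw_gbps_per_lane * encoding
print(data_gbps)               # 16.0 Gb/s per direction, before protocol overhead
```

Note this is the raw data rate per direction; DLLP/TLP protocol overhead reduces the achievable payload bandwidth further.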
"john t" <[EMAIL PROTECTED]>
wrote on 10/05/2006 08:18:31 AM:
> Hi Bernard,
>
> I had a configuration issue. I fixed it and now
I get same BW (i.e.
> around 10 Gb/sec) on each port provided I use ports on different HCA
> cards. If I use two ports of the same HCA card then BW gets divided
> be
John,
Whose adapter (manufacturer) are you using? It is usually an adapter
implementation or driver issue that occurs when you cannot scale across
multiple links. The fact that you don't scale up from one link, but
instead appear to share a fixed bandwidth across N links, means that
there is a driv
Eli and Roland,
Has anyone run the RR test in Netperf to look at latency? What 1-byte RR
rates did you see before and after applying the patch?
Bernie King-Smith
IBM Corporation
Server Group
Cluster System Performance
[EMAIL PROTECTED] (845)433-8483
Tie. 293-8483 or wombat2 on NOTES
"We
Hi Eitan,
On Mon, 2006-06-05 at 08:59, Eitan Zahavi wrote:
> Hi Hal
>
> While cleaning up the last of my compilation warnings I found a missing
> cast in osmt
Hal Rosenstock wrote:
> On Mon, 2006-06-05 at 11:12, hbchen wrote:
> > Hi,
> > I have a question about IPoIB bandwidth performance.
> > I did netperf testing using a single GigE card, a Myrinet D card, a
> > Myrinet 10G Ethernet card,
> > and a Voltaire InfiniBand 4X HCA400Ex (PCI-Express interface).
> >
>
Michael S. Tsirkin wrote:
Michael> Quoting r. Shirley Ma <[EMAIL PROTECTED]>:
Michael> > Different drivers have different implementations of the CQ
completion handler.
Michael> Maybe these drivers should be changed then? It's a bit hard for me
to imagine a
Michael> driver that doesn't get hardware
Leonid Arsh wrote:
Leonid> Shirley,
Leonid> Some additional information you may be interested in:
Leonid> According to our experience with the Voltaire IPoIB driver,
Leonid> splitting the CQ harmed the throughput (we checked with the iperf
Leonid> application in UDP mode.) Splitting the CQ caus
Subject: Re: Speeding up IPoIB.
Grant Grundler wrote:
> Currently we only get 40% of the link bandwidth, compared to
> 85% for 10 GigE. (Yes, I know the cost differences favor IB.)
Grant> Is 10 GigE getting 85% without TOE?
Grant> Or are they distributing event handling across several CPUs?
On 10 GigE they are using large
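To put those percentages in absolute terms (assuming the link in question is 4X SDR InfiniBand, i.e. 10 Gb/s signaling with 8b/10b encoding leaving 8 Gb/s of data bandwidth):

```python
# 4X SDR InfiniBand: 10 Gb/s signal rate, 8b/10b encoding -> 8 Gb/s data rate.
ib_data_gbps = 10 * 8 / 10
ipoib_gbps = 0.40 * ib_data_gbps      # the ~40% figure quoted above
tengige_gbps = 0.85 * 10              # 85% of 10 GigE line rate
print(ipoib_gbps, tengige_gbps)       # roughly 3.2 vs 8.5 Gb/s
```

So the gap being discussed is roughly 3.2 Gb/s of IPoIB throughput against 8.5 Gb/s for 10 GigE.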
[sorry if this forum is the wrong place to take this up]
Grant Grundler <[EMAIL PROTECTED]> wrote:
Grant> [ I've probably posted some of these results before... here's another
Grant> take on this problem. ]
Hopefully I'm not rehashing too much old information.
Grant> I expect splitting the RX/TX
Richard Frank <[EMAIL PROTECTED]> wrote:
Richard> Are there any mechanisms available to the client process to manage
Richard> the QoS level for the various supported ULPs
Richard> (SDP, TCP, UDP, RDS, SRP, iSER, etc.)
Richard> either at the ULP level or some combination of process and ULP - or
Richard> perhaps ev
Shirley> After the completion handler receives the notification, don't
Shirley> poll the CQ right away; wait for more WCs to accumulate in the
Shirley> CQ. That way we can reduce the CQ lock overhead.
Roland> That's interesting... it makes sense, and it argues in
Roland> favor of deferring CQ polling
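The batching Shirley describes can be illustrated with a toy model (plain Python, not the kernel code; the class and the counts are invented for illustration): polling once per completion notification takes the CQ lock once per completion, while letting completions accumulate takes it once per batch.

```python
import threading

class FakeCQ:
    """Toy completion queue: counts how often the poll path takes its lock."""
    def __init__(self):
        self.lock = threading.Lock()
        self.pending = []
        self.lock_acquisitions = 0

    def post_completion(self, wc):
        self.pending.append(wc)

    def poll(self):
        with self.lock:
            self.lock_acquisitions += 1
            drained, self.pending = self.pending, []
            return drained

def immediate_polling(cq, completions):
    # Poll once per completion notification.
    for wc in completions:
        cq.post_completion(wc)
        cq.poll()

def deferred_polling(cq, completions, batch=8):
    # Let completions accumulate; poll once per batch (Shirley's suggestion).
    for i, wc in enumerate(completions, 1):
        cq.post_completion(wc)
        if i % batch == 0:
            cq.poll()
    cq.poll()  # drain any tail
```

With 64 completions, the immediate strategy acquires the lock 64 times; with a batch of 8 it acquires it 9 times (8 batch drains plus the final tail drain). The trade-off, as the thread notes, is added latency while waiting for the batch to fill.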
Shirley> Some tests have been done over mthca and
Shirley> ehca. In a unidirectional stream test, we see gains of up to 15%
Shirley> throughput with this patch on systems with over 4 CPUs.
Shirley> Bidirectional could gain more. People might get different
Shirley> performance improvement numbers und