Hi all,
Today's linux-next merge of the net-next tree got a conflict in
drivers/infiniband/hw/cxgb4/qp.c between commit 5b0c275926b8
("RDMA/cxgb4: Fix SQ allocation when on-chip SQ is disabled") from the
infiniband tree and commit 9919d5bd01b9 ("RDMA/cxgb4: Fix onchip queue
support for T5") from t
alloc_page(GFP_ATOMIC) is safe. It doesn't give you a highmem page.
Not sure about header splitting. If done, it probably should be smart
as you said (allowing small frames to be in the first part)
On Wed, Apr 17, 2013 at 2:49 PM, Roland Dreier wrote:
> On Wed, Apr 17, 2013 at 12:10 PM, Eric Dumazet wrote:
On Wed, Apr 17, 2013 at 12:10 PM, Eric Dumazet wrote:
> + prefetch(page_address(frag->page.p));
I guess we would have to change our allocation to __get_free_page()
(instead of alloc_page()), so that we make sure never to get a highmem
page?
Other than that, seems fine.
By the way,
On 4/17/13, Jay Fenlason wrote:
> On my Fedora Rawhide boxes, I noticed I was getting warnings saying
> that Infiniband modules were not checking for DMA mapping errors. I
> wrote this patch to silence the warnings.
>
> I tested it on a pair of x86_64 machines running
> 3.9.0-0.rc7.git2.1.fc20 wi
Please try :
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
index 2cfa76f..9bdff21 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -112,6 +112,8 @@ static void ipoib_ud_skb_put_frags(struct ipoib_
On Wed, 2013-04-17 at 21:15 +0300, Or Gerlitz wrote:
> On Wed, Apr 17, 2013 at 9:06 PM, Randy Dunlap wrote:
> > On 04/17/13 00:04, Stephen Rothwell wrote:
> >> Changes since 20130416:
>
> > on x86_64:
> > drivers/built-in.o: In function `isert_free_np':
> > ib_isert.c:(.text+0x6e8a77): undefined
>
> That's probably because of a cache line miss.
>
> The thing I don't really understand is that normally, the first cache
> line (64 bytes) contains both the Ethernet header and IPv4 header.
>
> So what does this adapter do in this respect?
>
> I guess you should try to use IPOIB_UD_HEAD_SIZE=64 to
On my Fedora Rawhide boxes, I noticed I was getting warnings saying
that Infiniband modules were not checking for DMA mapping errors. I
wrote this patch to silence the warnings.
I tested it on a pair of x86_64 machines running
3.9.0-0.rc7.git2.1.fc20 with
InfiniBand: Mellanox Technologies MT25418
From: Patrick McHardy
Date: Wed, 17 Apr 2013 18:18:24 +0200
> The following patchset adds support for running TIPC over InfiniBand.
> The patchset consists of three parts (+ a minor fix for the ethernet media
> type):
>
> - Preparation: removal of the unused str2addr callback and move of the
On Wed, Apr 17, 2013 at 9:06 PM, Randy Dunlap wrote:
> On 04/17/13 00:04, Stephen Rothwell wrote:
>> Changes since 20130416:
> on x86_64:
> drivers/built-in.o: In function `isert_free_np':
> ib_isert.c:(.text+0x6e8a77): undefined reference to `rdma_destroy_id'
[...]
Nic,
Yep, you need to add
d
On Wed, Apr 17, 2013 at 11:06 AM, Randy Dunlap wrote:
> on x86_64:
>
> drivers/built-in.o: In function `isert_free_np':
> ib_isert.c:(.text+0x6e8a77): undefined reference to `rdma_destroy_id'
> drivers/built-in.o: In function `isert_conn_setup_qp':
> ib_isert.c:(.text+0x6e9038): undefined referenc
On Wed, Apr 17, 2013 at 10:32 AM, Atchley, Scott wrote:
> On Apr 17, 2013, at 1:15 PM, Wendy Cheng wrote:
>
>> On Wed, Apr 17, 2013 at 7:36 AM, Yan Burman wrote:
>>> Hi.
>>>
>>> I've been trying to do some benchmarks for NFS over RDMA and I seem to only
>>> get about half of the bandwidth that
On Apr 17, 2013, at 1:15 PM, Wendy Cheng wrote:
> On Wed, Apr 17, 2013 at 7:36 AM, Yan Burman wrote:
>> Hi.
>>
>> I've been trying to do some benchmarks for NFS over RDMA and I seem to only
>> get about half of the bandwidth that the HW can give me.
>> My setup consists of 2 servers each with
On Wed, Apr 17, 2013 at 7:36 AM, Yan Burman wrote:
> Hi.
>
> I've been trying to do some benchmarks for NFS over RDMA and I seem to only
> get about half of the bandwidth that the HW can give me.
> My setup consists of 2 servers each with 16 cores, 32Gb of memory, and
> Mellanox ConnectX3 QDR ca
Support TIPC in the IPoIB driver. Since IPoIB now keeps track of its own
neighbour entries and doesn't require the packet to have a dst_entry
anymore, the only necessary changes are to:
- not drop multicast TIPC packets because of the unknown ethernet type
- handle unicast TIPC packets similar to
The skb->protocol field is used by packet classifiers and for AF_PACKET
cooked format, TIPC needs to set it properly.
Fixes packet classification and ethertype of 0x in cooked captures:
Out 20:c9:d0:43:12:d9 ethertype Unknown (0x), length 56:
0x: 5b50 0028 30d4 0100 1000
Add InfiniBand media type based on the ethernet media type.
The only real difference is that in case of InfiniBand, we need the entire
20 bytes of space reserved for media addresses, so the TIPC media type ID is
not explicitly stored in the packet payload.
Sample output of tipc-config:
# tipc-co
Some network protocols, like InfiniBand, don't have a fixed broadcast
address but one that depends on the configuration. Move the bcast_addr
to struct tipc_bearer and initialize it with the broadcast address of
the network device when the bearer is enabled.
Signed-off-by: Patrick McHardy
---
net
Signed-off-by: Patrick McHardy
---
net/tipc/bearer.h    |  2 --
net/tipc/eth_media.c | 20 --------------------
2 files changed, 22 deletions(-)
diff --git a/net/tipc/bearer.h b/net/tipc/bearer.h
index 39f1192..cc2d74e 100644
--- a/net/tipc/bearer.h
+++ b/net/tipc/bearer.h
@@ -77,7 +77,6 @@ str
The following patchset adds support for running TIPC over InfiniBand.
The patchset consists of three parts (+ a minor fix for the ethernet media
type):
- Preparation: removal of the unused str2addr callback and move of the
bcast_addr from struct tipc_media to struct tipc_bearer. This is neces
Hi.
I've been trying to do some benchmarks for NFS over RDMA and I seem to only get
about half of the bandwidth that the HW can give me.
My setup consists of 2 servers each with 16 cores, 32Gb of memory, and Mellanox
ConnectX3 QDR card over PCI-e gen3.
These servers are connected to a QDR IB swi