2013/9/3 Gandalf Corvotempesta :
> $ sudo qperf -ub 172.17.0.2 rc_bi_bw rc_lat rc_bw rc_rdma_read_lat
> rc_rdma_read_bw rc_rdma_write_lat rc_rdma_write_bw tcp_lat tcp_bw
> rc_bi_bw:
> bw = 20.5 Gb/sec
> rc_lat:
> latency = 15.4 us
> rc_bw:
> bw = 13.7 Gb/sec
> rc_rdma_read_lat:
>
2013/9/3 Hal Rosenstock :
> With mthca, due to quirk, optimal performance is achieved at 1K MTU.
> OpenSM can reduce the MTU in returned PathRecords to 1K when one end of
> the path is mthca and actual path MTU is > 1K. This is controlled by
> enable_quirks config parameter which defaults to FALSE
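[Editor's note: the quirk handling Hal describes is toggled in OpenSM's config file. A minimal sketch, assuming the common default path /etc/opensm/opensm.conf (location varies by distribution):]

```
# /etc/opensm/opensm.conf (path varies by distro)
# When TRUE, OpenSM applies vendor quirks, such as capping the
# PathRecord MTU at 1K when one end of the path is an mthca HCA.
enable_quirks TRUE
```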
On 8/31/2013 3:51 PM, Gandalf Corvotempesta wrote:
> By the way, increasing MTU to 4096 will give me more performance?
With mthca, due to quirk, optimal performance is achieved at 1K MTU.
OpenSM can reduce the MTU in returned PathRecords to 1K when one end of
the path is mthca and actual path MTU is > 1K.
2013/9/1 Gandalf Corvotempesta :
> What is strange to me is that rsocket is slower than IPoIB and limited
> to 10Gbit more or less. With IPoIB i'm able to reach 12.5 Gbit
qperf is giving the same strange speed:
FROM NODE1 to NODE2:
$ sudo qperf -ub 77.95.175.106 ud_lat ud_bw
ud_lat:
latency
2013/9/1 Rupert Dance :
> My guess is that it will not make a huge difference and that the solution
> lies elsewhere.
What is strange to me is that rsocket is slower than IPoIB and limited
to 10Gbit more or less. With IPoIB i'm able to reach 12.5 Gbit
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo@vger.kernel.org
Subject: Re: Slow performance with librspreload.so
2013/8/31 Rupert Dance :
> The Vendor ID indicates that this is a Voltaire card which probably means it
> is an older card. Some of the early Mellanox based cards did not support
> anything bigger than 2048.
Yes, it's an older card used just for this test.
By the way, increasing MTU to 4096 will give me more performance?
, 2013 5:21 AM
To: Rupert Dance
Cc: Hefty, Sean; linux-rdma@vger.kernel.org
Subject: Re: Slow performance with librspreload.so
2013/8/30 Rupert Dance :
> One way to set or check mtu is with the ibportstate utility:
>
> Usage: ibportstate [options] <dest dr_path|lid|guid> <portnum> [<op>]
> Supported ops: enable, disable, reset, speed, width, query, down, arm,
> active, vls, mtu, lid, smlid, lmc
I've tried but max MTU is 2048 on one device:
$ sudo ibv_devinfo
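[Editor's note: in verbs output, max_mtu/active_mtu are sometimes reported as the IBTA MTU enumeration (1-5) rather than bytes. A small decoding sketch; the helper name is mine, not a standard tool:]

```shell
#!/bin/sh
# Map the IBTA MTU enumeration used by verbs (1..5) to bytes.
# ibv_devinfo usually prints the byte value directly; this helper
# is only for decoding the raw enum when you see it.
ib_mtu_bytes() {
    case "$1" in
        1) echo 256 ;;
        2) echo 512 ;;
        3) echo 1024 ;;
        4) echo 2048 ;;
        5) echo 4096 ;;
        *) echo "unknown" ;;
    esac
}

ib_mtu_bytes 4   # -> 2048, the cap seen on the older Voltaire card
ib_mtu_bytes 5   # -> 4096
```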
...@vger.kernel.org] On Behalf Of Gandalf Corvotempesta
Sent: Friday, August 30, 2013 12:27 PM
To: Hefty, Sean
Cc: linux-rdma@vger.kernel.org
Subject: Re: Slow performance with librspreload.so
2013/8/30 Hefty, Sean :
> Not directly. The ipoib mtu is usually set based on the mtu of the IB
> link. The latter does affect rsocket performance.
> Another strange issue:
>
> $ sudo LD_PRELOAD=/usr/local/lib/rsocket/librspreload.so iperf -c
> 172.17.0.2
>
> Client connecting to 172.17.0.2, TCP port 5001
> TCP window size: 128 KByte (default)
Increasing the window size may improve throughput.
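[Editor's note: a rough sanity check on whether the 128 KByte default window is the bottleneck is the bandwidth-delay product. The numbers below are illustrative, using the 20 Gb/s link rate from the thread and an assumed 100 us round trip:]

```shell
# Bandwidth-delay product: bytes in flight needed to keep the pipe full.
# 20 Gb/s link, assumed 100 us RTT (illustrative values, not measured).
awk 'BEGIN { bw_bps = 20e9; rtt_s = 100e-6; printf "%d\n", bw_bps/8 * rtt_s }'
# -> 250000
```

Since ~250 KB exceeds iperf's 128 KByte default window, raising it with -w (or running parallel streams with -P) is a plausible first step.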
On Aug 30, 2013, at 1:38 PM, "Hefty, Sean" wrote:
>> Another strange issue:
>>
>> $ sudo LD_PRELOAD=/usr/local/lib/rsocket/librspreload.so iperf -c
>> 172.17.0.2
>>
>> Client connecting to 172.17.0.2, TCP port 5001
>> TCP window size: 128 KByte (default)
2013/8/30 Hefty, Sean :
> Not directly. The ipoib mtu is usually set based on the mtu of the IB link.
> The latter does affect rsocket performance. However if the ipoib mtu is
> changed separately from the IB link mtu, it will not affect rsockets.
Actually i'm going faster with IPoIB than rsockets.
> with 2 parallel connection i'm able to reach "rate" speed with iperf,
> the same speed achieved with rstream.
> Is iperf affected by IPoIB MTU size when used with librspreload.so ?
Not directly. The ipoib mtu is usually set based on the mtu of the IB link.
The latter does affect rsocket performance. However if the ipoib mtu is
changed separately from the IB link mtu, it will not affect rsockets.
2013/8/30 Gandalf Corvotempesta :
> Is iperf affected by IPoIB MTU size when used with librspreload.so ?
Another strange issue:
$ sudo LD_PRELOAD=/usr/local/lib/rsocket/librspreload.so iperf -c 172.17.0.2
Client connecting to 172.17.0.2, TCP port 5001
2013/8/30 Gandalf Corvotempesta :
> By the way, moving the HBA on the second slot, brought me to 12Gbps on
> both hosts.
This is great:
$ sudo LD_PRELOAD=/usr/local/lib/rsocket/librspreload.so iperf -c 172.17.0.2
Client connecting to 172.17.0.2, TCP port 5001
2013/8/29 Hefty, Sean :
> 12 Gbps on a 20 Gb link actually seems reasonable to me. I only see around
> 25 Gbps on a 40 Gb link, with raw perftest performance coming in at about 26
> Gbps.
Is this an rstream limit or an IB limit? I've read somewhere that DDR
should transfer at 16Gbps
By the way
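[Editor's note: the 16 Gb/s figure follows from the 8b/10b line encoding on SDR/DDR links. A 4X DDR port signals at 4 x 5 Gb/s, but only 8 of every 10 line bits carry data:]

```shell
# 4X DDR: 4 lanes x 5 Gb/s signaling x 8/10 (8b/10b encoding overhead)
awk 'BEGIN { printf "%g\n", 4 * 5 * 8/10 }'   # usable data rate in Gb/s
# -> 16
```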
2013/8/29 Hefty, Sean :
> 12 Gbps on a 20 Gb link actually seems reasonable to me. I only see around
> 25 Gbps on a 40 Gb link, with raw perftest performance coming in at about 26
> Gbps.
Ok.
I think that i've connected the HBA to the wrong PCI-Express slot.
I have a DELL R200 that has 3 PCI-Ex
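[Editor's note: the slot can indeed matter. A first-generation PCIe x8 slot tops out in the same range, so the negotiated link is worth checking; the lspci invocation below uses a placeholder bus address, and the arithmetic is the standard Gen1 math:]

```shell
# Inspect the negotiated PCIe width/speed of the HCA.
# 03:00.0 is a placeholder -- find yours with `lspci | grep -i mell`.
# sudo lspci -vv -s 03:00.0 | grep -i LnkSta

# PCIe 1.x x8: 8 lanes x 2.5 GT/s x 8/10 encoding = 16 Gb/s raw,
# before packet/protocol overhead -- consistent with ~12 Gb/s observed.
awk 'BEGIN { printf "%g\n", 8 * 2.5 * 8/10 }'
# -> 16
```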
2013/8/29 Gandalf Corvotempesta :
> node1 (172.17.0.1 is ip configured on ib0):
>
> $ sudo ./rstream -s 172.17.0.1
> name     bytes  xfers  iters  total  time   Gb/sec  usec/xfer
> 64_lat   64     1      100k   12m    0.26s  0.40    1.28
> 4k_lat   4k     1      10k
-- Forwarded message --
From: Gandalf Corvotempesta
Date: 2013/8/29
Subject: Re: Slow performance with librspreload.so
To: "Hefty, Sean"
2013/8/28 Hefty, Sean :
> If you can provide your PCIe information and the results from running the
> perftest tools (rdma_bw), that could help as well.
> Ubuntu 13.04 Server on both nodes.
>
> node1:
>
> $ cat /proc/cpuinfo | grep 'model name'
> model name : Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz
If you can provide your PCIe information and the results from running the
perftest tools (rdma_bw), that could help as well.
> 2013/8/28 Hefty, Sean :
> > Can you explain your environment more? The performance seems low.
>
> Ubuntu 13.04 Server on both nodes.
>
> node1:
>
> $ cat /proc/cpuinfo | grep 'model name'
> model name : Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz
> $ cat /proc/cpuinfo | grep 'model name'
> mod
2013/8/28 Hefty, Sean :
> Can you explain your environment more? The performance seems low.
Ubuntu 13.04 Server on both nodes.
node1:
$ cat /proc/cpuinfo | grep 'model name'
model name : Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz
model name : Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz
model name :
> > Can you run the rstream test program to verify that you can get faster
> than 5 Gbps?
> >
> > rstream without any options will use rsockets directly. If you use the -
> T s option, it will use standard TCP sockets. You can use LD_PRELOAD with
> -T s to verify that the preload brings your per
2013/8/28 Hefty, Sean :
> Can you run the rstream test program to verify that you can get faster than 5
> Gbps?
>
> rstream without any options will use rsockets directly. If you use the -T s
> option, it will use standard TCP sockets. You can use LD_PRELOAD with -T s
> to verify that the prel
> i've connected just one port between two hosts.
> Ports is detected properly as 20Gb/s (4x DDR) but i'm unable to reach
> speed over 5Gbit/s:
It's possible that this is falling back to using normal TCP sockets.
Can you run the rstream test program to verify that you can get faster than 5
Gbps?
Hi
i'm trying the preloader librspreload.so on two directly connected hosts:
host1:$ sudo ibstatus
Infiniband device 'mlx4_0' port 1 status:
default gid: fe80:0000:0000:0000:0002:c903:004d:dd45
base lid: 0x1
sm lid: 0x1
state: 4: ACTIVE
phys state: 5: LinkUp
rate: 20 Gb/sec (4X DDR)
link_layer: InfiniBand