Roland Dreier wrote:
Getting exactly the right value for max_qp_wr is kind of tricky because
of complicated allocation rules. I guess this is just an mlx4 bug in
reporting not quite the right value from ibv_query_device().
On Sun, May 22, 2011 at 09:46:21AM +0300, Or Gerlitz wrote:
Maybe the correct way to go for mlx4 is a min/max scheme, that is,
report the --minimum-- of the max(recv, send) values, so that the
reported number works whether an app uses it for its send or its recv
WR count.
I see that OFED already contains a
Hi,
On Thursday, May 19, 2011 at 09:07 +0300, Eli Cohen wrote:
Hi Yan,
it appears that you're using quite an old firmware. Could you upgrade
the firmware to the latest version and check again the failure to
create a QP with the max depth. FW and burning tools can be downloaded
from
On Thursday, May 19, 2011 at 12:34 +0300, Eli Cohen wrote:
On Thu, May 19, 2011 at 11:17:16AM +0200, Yann Droneaud wrote:
Have you some test code for me to test ?
I used ibv_rc_pingpong, which is part of libibverbs. The '-r' option
allows you to define the queue depth. Please try it and
Hi,
On Thursday, May 19, 2011 at 16:45 +0200, Yann Droneaud wrote:
So I'm a bit puzzled: why does it work in ibv_rc_pingpong but not in
rdma_bw?
Because ibv_rc_pingpong -r modifies the max_recv_wr attribute,
while rdma_bw -t modifies max_send_wr instead.
After modifying ibv_rc_pingpong to
On Thu, May 19, 2011 at 05:03:45PM +0200, Yann Droneaud wrote:
To sum up:
- ibv_qp_init_attr.max_recv_wr can be set to ibv_device_attr.max_qp_wr,
16384 in my case,
- ibv_qp_init_attr.max_send_wr *cannot* be set to
ibv_device_attr.max_qp_wr, but it can be set to 16351.
Thanks for
On Thursday, May 19, 2011 at 18:46 +0300, Eli Cohen wrote:
On Thu, May 19, 2011 at 05:03:45PM +0200, Yann Droneaud wrote:
To sum up:
- ibv_qp_init_attr.max_recv_wr can be set to ibv_device_attr.max_qp_wr,
16384 in my case,
- ibv_qp_init_attr.max_send_wr *cannot* be set to
Hi,
On Thursday, May 19, 2011 at 18:59 +0200, Yann Droneaud wrote:
On Thursday, May 19, 2011 at 18:46 +0300, Eli Cohen wrote:
On Thu, May 19, 2011 at 05:03:45PM +0200, Yann Droneaud wrote:
To sum up:
- ibv_qp_init_attr.max_recv_wr can be set to ibv_device_attr.max_qp_wr,
16384 in my
If spare WQEs are taken into account here, they should be taken into
account in the data reported by ibv_query_device().
max_qp_wr does not distinguish between max send or receive, or indicate if those
values should be the same. IMO, setting
max_qp_wr = max(send wr, recv wr)
makes more sense than
On Thu, May 19, 2011 at 06:06:03PM +, Hefty, Sean wrote:
max_qp_wr does not distinguish between max send or receive, or indicate if
those values should be the same. IMO, setting
max_qp_wr = max(send wr, recv wr)
makes more sense than
max_qp_wr = min(send wr, recv wr)
The
Hi,
On Thursday, April 21, 2011 at 11:53 -0700, c...@asomi.com wrote:
An ENOMEM return does not mean that the subsystem *just* failed to
allocate system memory.
The memory that could not be allocated could be device memory.
I'm also having some difficulties with system memory
And I forgot to mention:
On Friday, April 22, 2011 at 12:20 +0200, Yann Droneaud wrote:
I'm also having some difficulties with system memory allocation.
In this case of failure, strace shows the last write() syscall returning
ENOMEM.
Regards.
--
Yann Droneaud
OPTEYA
On Thu, Apr 21, 2011 at 9:44 AM, Yann Droneaud ydrone...@opteya.com wrote:
I have a problem with rdma_create_qp() when I set
qp_init_attr.cap.max_send_wr to something higher than 16351:
it returns -1 and errno is set to ENOMEM (Cannot allocate memory).
strace doesn't show anything related to