Re: Write Packets to InfiniBand HCA

2012-01-04 Thread Greg I Kerr
Thanks for the help everyone.

 Greg, raw Ethernet QPs (soon to be re-submitted upstream), which are to be
 implemented within the ConnectX / mlx4 driver as MLX transport QPs,
 would allow you to do that - over Ethernet.

 If you're asking about IB, I would suggest using UD QPs, which are
 available today from user space. Do you have any issue with them?


Yes, I want to do this over IB. I wasn't aware that a UD QP would let me
write my own packets to the wire - is that what you're suggesting I use
a UD QP for?
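
For reference, is this roughly the kind of setup you mean? A minimal
sketch of creating a UD QP from user space (assuming pd and cq already
exist from ibv_alloc_pd() / ibv_create_cq(); the names and numbers here
are just placeholders):

/* Sketch only: create a UD QP and move it to INIT. */
#include <infiniband/verbs.h>
#include <stdint.h>

static struct ibv_qp *create_ud_qp(struct ibv_pd *pd, struct ibv_cq *cq,
                                   uint8_t port, uint32_t qkey)
{
    struct ibv_qp_init_attr init = {
        .send_cq = cq,
        .recv_cq = cq,
        .cap     = { .max_send_wr = 16, .max_recv_wr = 16,
                     .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_UD,
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &init);
    if (!qp)
        return NULL;

    /* UD QPs take a Q_Key on the INIT transition. */
    struct ibv_qp_attr attr = {
        .qp_state   = IBV_QPS_INIT,
        .pkey_index = 0,
        .port_num   = port,
        .qkey       = qkey,
    };
    if (ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_PKEY_INDEX |
                                 IBV_QP_PORT | IBV_QP_QKEY)) {
        ibv_destroy_qp(qp);
        return NULL;
    }
    return qp;
}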

Thanks,

Greg

On Wed, Jan 4, 2012 at 4:03 PM, Or Gerlitz or.gerl...@gmail.com wrote:
 Roland Dreier rol...@purestorage.com wrote:
 It is possible with ConnectX (cf MLX QPs in the kernel driver). However I
 don't know what documentation is available, and some hacking would be
 needed to use this for something more general than sending MADs on
 special QPs.

 Or.


Re: Write Packets to InfiniBand HCA

2012-01-03 Thread Greg I Kerr
Yes, I should have mentioned that I am using a Mellanox ConnectX
adapter. Do you know where I can find documentation for QLogic's
iPath? A quick Google search didn't seem to turn anything up.

Thanks for the information.

- Greg Kerr

On Tue, Jan 3, 2012 at 9:53 AM, Mike Heinz michael.he...@qlogic.com wrote:
 That would depend on which HCA you are using. I know that you can use 
 QLogic's iPath interface to do what you want, but I don't think it is 
 possible through the verbs interface available through stock OFED.

 -Original Message-
 From: linux-rdma-ow...@vger.kernel.org 
 [mailto:linux-rdma-ow...@vger.kernel.org] On Behalf Of Greg I Kerr
 Sent: Monday, January 02, 2012 9:50 PM
 To: linux-rdma@vger.kernel.org
 Subject: Write Packets to InfiniBand HCA

 Hi,

 Does anyone know if it is or isn't possible to put the HCA in some kind of
 raw mode where I can compose a packet in software and write it to the card?
 This is in contrast to calling ibv_post_send and having a work request
 converted into a packet.

 Thanks,

 Greg Kerr


Write Packets to InfiniBand HCA

2012-01-02 Thread Greg I Kerr
Hi,

Does anyone know if it is or isn't possible to put the HCA in some
kind of raw mode where I can compose a packet in software and write
it to the card? This is in contrast to calling ibv_post_send and
having a work request converted into a packet.
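
To make the contrast concrete, this is the path I mean by "having a work
request converted into a packet" - a minimal sketch of the normal verbs
send path (assuming an RC QP already in RTS and an MR covering buf; the
names are just placeholders):

#include <infiniband/verbs.h>
#include <stdint.h>

static int post_send(struct ibv_qp *qp, struct ibv_mr *mr,
                     void *buf, uint32_t len)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t) buf,
        .length = len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_SEND,
        .send_flags = IBV_SEND_SIGNALED,
    };
    struct ibv_send_wr *bad_wr;

    /* The HCA, not software, builds the wire packet(s) from this
     * descriptor; that is the step I'd like to bypass. */
    return ibv_post_send(qp, &wr, &bad_wr);
}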

Thanks,

Greg Kerr


rdma_bw fails

2011-05-17 Thread Greg I Kerr
After finally fully comprehending libibverbs, I am now trying to
expand my understanding to librdmacm, but it would seem I am having
some problems getting connected.

If I run rdma_bw on two nodes with the -c option (use rdma_cm), it
fails with the error: 4390:pp_client_connect: unexpected CM event 1.
Event 1 is RDMA_CM_EVENT_ADDR_ERROR. I was under the impression that
this should work as long as ib0 is configured.
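
For what it's worth, this is the step I understand to be failing - a
minimal sketch of the rdma_cm address-resolution phase (server_ip is a
placeholder; error handling trimmed). RDMA_CM_EVENT_ADDR_ERROR at this
point would mean the destination IP could not be mapped to an RDMA
device, if I read the librdmacm docs right:

#include <rdma/rdma_cma.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>

static int resolve(const char *server_ip)
{
    struct rdma_event_channel *ch = rdma_create_event_channel();
    struct rdma_cm_id *id;
    struct rdma_cm_event *ev;
    struct sockaddr_in dst;

    if (!ch || rdma_create_id(ch, &id, NULL, RDMA_PS_TCP))
        return -1;

    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    inet_pton(AF_INET, server_ip, &dst.sin_addr);

    /* Map the IP address (e.g. the ib0 address) to an RDMA device. */
    if (rdma_resolve_addr(id, NULL, (struct sockaddr *) &dst, 2000))
        return -1;
    if (rdma_get_cm_event(ch, &ev))
        return -1;

    /* rdma_bw reports "unexpected CM event 1" when this is not
     * RDMA_CM_EVENT_ADDR_RESOLVED. */
    int resolved = (ev->event == RDMA_CM_EVENT_ADDR_RESOLVED);
    rdma_ack_cm_event(ev);
    return resolved ? 0 : -1;
}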

Thanks in advance for any help,

Greg Kerr

[kerrg@compute-0-3 rdma]$ rdma_bw -c
4292: | port=18515 | ib_port=1 | size=65536 | tx_depth=100 | sl=0 |
iters=1000 | duplex=0 | cma=1 |

[kerrg@compute-0-2 rdma]$ rdma_bw -c 10.1.1.30
4390: | port=18515 | ib_port=1 | size=65536 | tx_depth=100 | sl=0 |
iters=1000 | duplex=0 | cma=1 |
4390:pp_client_connect: unexpected CM event 1

Here is the output of /sbin/ifconfig:

[kerrg@compute-0-3 rdma]$ /sbin/ifconfig
eth0  Link encap:Ethernet  HWaddr 00:30:48:BE:D9:84
  inet addr:10.1.255.251  Bcast:10.1.255.255  Mask:255.255.0.0
  inet6 addr: fe80::230:48ff:febe:d984/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:55999 errors:0 dropped:0 overruns:0 frame:0
  TX packets:12608 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:6023763 (5.7 MiB)  TX bytes:2459277 (2.3 MiB)
  Memory:febe-fec0

ib0   Link encap:InfiniBand  HWaddr
80:00:00:48:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00
  inet addr:10.1.1.30  Bcast:10.255.255.255  Mask:255.0.0.0
  inet6 addr: fe80::230:48be:d984:1/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:65520  Metric:1
  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
  TX packets:42 errors:0 dropped:5 overruns:0 carrier:0
  collisions:0 txqueuelen:256
  RX bytes:560 (560.0 b)  TX bytes:3696 (3.6 KiB)

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:1253 errors:0 dropped:0 overruns:0 frame:0
  TX packets:1253 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:2093229 (1.9 MiB)  TX bytes:2093229 (1.9 MiB)

[kerrg@compute-0-2 rdma]$ /sbin/ifconfig
eth0  Link encap:Ethernet  HWaddr 00:30:48:BE:DA:D4
  inet addr:10.1.255.252  Bcast:10.1.255.255  Mask:255.255.0.0
  inet6 addr: fe80::230:48ff:febe:dad4/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:66833 errors:0 dropped:0 overruns:0 frame:0
  TX packets:17905 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:6323062 (6.0 MiB)  TX bytes:2166485 (2.0 MiB)
  Memory:febe-fec0

ib0   Link encap:InfiniBand  HWaddr
80:00:00:48:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00
  inet addr:10.1.1.31  Bcast:10.255.255.255  Mask:255.0.0.0
  inet6 addr: fe80::230:48be:dad4:1/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:65520  Metric:1
  RX packets:52 errors:0 dropped:0 overruns:0 frame:0
  TX packets:4 errors:0 dropped:5 overruns:0 carrier:0
  collisions:0 txqueuelen:256
  RX bytes:4088 (3.9 KiB)  TX bytes:352 (352.0 b)

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:1230 errors:0 dropped:0 overruns:0 frame:0
  TX packets:1230 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:2055348 (1.9 MiB)  TX bytes:2055348 (1.9 MiB)


Call to ibv_post_recv fails

2011-05-06 Thread Greg I Kerr
I was hoping some of you might be able to tell me why the call to
ibv_post_recv fails in the following code snippet, which is based on
rc_pingpong.

I registered both buffers with ibv_reg_mr and am simply trying to post
a work request whose sge list points to the two buffers. Is it not
allowed for this list to point to separately registered memory
regions?

Thanks,

Greg Kerr

static int pp_post_recv(struct pingpong_context *ctx, int n)
{
  struct ibv_sge list[] = {
    {
      .addr   = (uintptr_t) ctx->buf,
      .length = ctx->size,
      .lkey   = ctx->mr->lkey
    },
    {
      .addr   = (uintptr_t) ctx->buf2,
      .length = ctx->size,
      .lkey   = ctx->mr2->lkey
    }
  };

  struct ibv_recv_wr wr = {
    .wr_id   = PINGPONG_RECV_WRID,
    .sg_list = list,
    .num_sge = 2,
  };
  struct ibv_recv_wr *bad_wr;
  int i;

  for (i = 0; i < n; ++i)
    if (ibv_post_recv(ctx->qp, &wr, &bad_wr))
      break;

  return i;
}
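
One thing I should probably double-check first (just a guess, not a
confirmed diagnosis): whether the QP was created with room for two
scatter/gather entries per receive WR, since if I remember right the
stock rc_pingpong only asks for one. Something like the following at QP
creation time (roughly mirroring the ctx / rx_depth names from the
example):

/* Sketch: ask for 2 receive SGEs so a 2-entry sg_list is accepted. */
struct ibv_qp_init_attr init_attr = {
  .send_cq = ctx->cq,
  .recv_cq = ctx->cq,
  .cap     = {
    .max_send_wr  = 1,
    .max_recv_wr  = rx_depth,
    .max_send_sge = 1,
    .max_recv_sge = 2,
  },
  .qp_type = IBV_QPT_RC,
};
ctx->qp = ibv_create_qp(ctx->pd, &init_attr);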


How is Q_KEY Set

2011-03-15 Thread Greg I Kerr
I was wondering how the Q_KEY is set by default if the user doesn't
specify it in the ibv_qp_attr struct.

I couldn't find a suitable answer in the spec. The reason I'm asking
is that I have a program where I create a connection, in the sense
that I perform all steps from opening the device to creating queue
pairs and modifying them to point at the remote queue pairs. Then I
run all the destructor functions, until I close the device.

I then re-open a new connection. Right now I'm having issues with the
poll_cq function either never finding anything on the queue or
returning a transport retry counter error when I try to send data
over the new connection.

When investigating the issue I noticed that the new QP
had the same Q_KEY as the old QP, which surprised me.
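
In case it matters, this is how I would set it explicitly rather than
relying on whatever default is in effect - a sketch of the INIT
transition for a UD QP (the 0x11111111 value is arbitrary; RC QPs take
access flags here instead of a Q_Key):

/* Sketch: set the Q_Key explicitly on the UD INIT transition. */
struct ibv_qp_attr attr = {
  .qp_state   = IBV_QPS_INIT,
  .pkey_index = 0,
  .port_num   = 1,
  .qkey       = 0x11111111,   /* arbitrary example value */
};
if (ibv_modify_qp(qp, &attr,
                  IBV_QP_STATE | IBV_QP_PKEY_INDEX |
                  IBV_QP_PORT | IBV_QP_QKEY)) {
  /* handle error */
}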

Thanks

-- Greg Kerr


Infiniband Shared Memory Segment

2011-01-27 Thread Greg I Kerr
I sent this e-mail to the OFED ewg list, but I figured someone on the
Linux mailing list might know about this as well, since it concerns
fairly low-level details.

I was hoping someone could answer a few questions for me, or point to
where the answers might be. I dug through the formal IBA specification
V1.2 but could not find the relevant information.

If I run an InfiniBand program, say the ibv_rc_pingpong test
program, and look in /proc/pid/fd, the following file descriptors
appear:

lrwx------ 1 kerrg kerrg 64 Jan 26 13:02 3 -> /dev/infiniband/uverbs0
lr-x------ 1 kerrg kerrg 64 Jan 26 13:02 4 -> infinibandevent:

And if I look in /proc/pid/maps:
2b6f3379f000-2b6f337a -w-s  00:11 7797
 /dev/infiniband/uverbs0

I'm assuming those are file descriptors which are mapped to the HCA
and used to send commands to it. I noticed that the cmd_fd field of
the ibv_context data structure pointed to fd 3.

What kind of data might be located within the memory pointed to by
the FD? That is, could there be configuration information for the
connection and/or HCA? Could there be data waiting to be sent
(although I don't believe the HCA buffers data)? Would it just be the
command sent to the HCA?
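
For what it's worth, this is how I've been poking at it - a small
sketch that just opens the first device and prints the two descriptors
so they can be matched against /proc/pid/fd (the field names come from
struct ibv_context in libibverbs; I believe async_fd is the
asynchronous event channel, which would explain fd 4):

#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0)
        return 1;

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx)
        return 1;

    /* cmd_fd should be the open /dev/infiniband/uverbsN command
     * channel; async_fd the asynchronous event channel. */
    printf("cmd_fd=%d async_fd=%d\n", ctx->cmd_fd, ctx->async_fd);

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}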

Thanks,

Greg Kerr
Northeastern University
High Performance Computing Lab