Ben,

I verified it on CentOS 7.6 and it works fine. Here is my gist for Robert and Fred with the steps I used:

https://gist.github.com/tfherbert/64cef832f03935c6a2a58729718a36ff

On CentOS 7.6, for the RDMA drivers to work, we need to install the rdma rpm, as I did in my 19.02 patch, to get the libibverbs.so library.
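
A minimal sketch of that install on a stock CentOS 7.6 box; the package names (rdma-core, libibverbs, libibverbs-utils) are my assumption from the base repos, and the gist above has the exact steps I used:

~# sudo yum install -y rdma-core libibverbs libibverbs-utils
~# ls /usr/lib64/libibverbs.so*   # confirm the library is present
~# ibv_devices                    # should list the mlx5 ports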

This is documented in my patch to 19.02:

https://gerrit.fd.io/r/#/c/18521/

--Tom


On 05/10/2019 12:06 PM, Benoit Ganne (bganne) wrote:
Hi Robert,

That could be an issue with /dev/infiniband/* access rights.
Could you share the output of:
~# echo "create int rdma host-if enp94s0f1 name mlx5" > rdma.vpp
~# sudo timeout 10 strace /usr/bin/vpp "unix { nodaemon exec $PWD/rdma.vpp } plugins { plugin dpdk_plugin.so { disable } }"
~# dmesg
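
(A direct look at the device nodes and the mlx5 modules could also help; a sketch, the module names below are assumed for ConnectX-4:)
~# ls -l /dev/infiniband/    # uverbs device nodes and their permissions
~# lsmod | grep mlx5         # mlx5_core and mlx5_ib should both be loaded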

Thanks!
ben

-----Original Message-----
From: Robert Starmer <rob...@kumul.us>
Sent: vendredi 10 mai 2019 17:29
To: Thomas F Herbert <therb...@redhat.com>
Cc: Benoit Ganne (bganne) <bga...@cisco.com>; vpp-dev <vpp-d...@lists.fd.io>; Damjan Marion (damarion) <damar...@cisco.com>; Dave Barach (dbarach) <dbar...@cisco.com>; Fred Sharp <f...@kumul.us>
Subject: Re: [vpp-dev] Question about rdma drivers in vpp 19.04

I just retried on a Packet.net m2.xlarge.x86 instance running Ubuntu 18.04 and I get:
vpp# create int rdma host-if enp94s0f1 name mlx5
create interface rdma: Device Open Failed: Bad file descriptor

On Packet, I've removed the enp94s0f1 interface from the default bonded config, and the interface is in a DOWN state:


3: enp94s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
     link/ether 98:03:9b:30:1e:f7 brd ff:ff:ff:ff:ff:ff
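
(Roughly the runtime equivalent of that change, as a sketch only; the persistent edit would go through the instance's network config:)

~# sudo ip link set enp94s0f1 nomaster   # detach the port from the bond
~# ip link show enp94s0f1                # confirm it is no longer enslaved; it stays DOWN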

The devices are ConnectX-4:

lspci | grep -i mell
5e:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
5e:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]

Robert

On Fri, May 10, 2019 at 7:46 AM Thomas F Herbert <therb...@redhat.com> wrote:


        Ben,

        I had this working in 19.02 with my patch, using the DPDK driver with RDMA.

        https://gerrit.fd.io/r/#/c/18521/

        I used that patch in the rpms for CentOS.

        The challenge I had was convincing DPDK to "see" the PCI device.

        That was because the server boots with the mlx interfaces in a bond; after removing the interface from the bond, I had to explicitly whitelist it in vpp.conf.
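
        (Something like this dpdk stanza in the VPP config, with the PCI address of the Mellanox port filled in; the address shown here is just an example:)

        dpdk {
          dev 0000:5e:00.1
        }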

        I don't know how to do this with the new rdma driver in 19.04, because it doesn't make sense to whitelist it for DPDK.

        On 05/10/2019 04:08 AM, Benoit Ganne (bganne) wrote:


                Hi Thomas,

                I just pushed a small doc:
https://gerrit.fd.io/r/c/19364/3/src/plugins/rdma/rdma_doc.md


                        I also tried:
                        vpp# create interface rdma name mlx5
                        create interface rdma: invalid interface (only mlx5 supported for now)

                You are missing the 'host-if <netdev>' stanza, eg:
                vpp# create int rdma host-if enp94s0f0 name mlx5
                Here enp94s0f0 is the netdev of the physical port you want to use.

        Here are my commands for 19.04:
        https://gist.github.com/tfherbert/624d8465c42b0aafb4859f144ba8a4e4
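
        (In short, after creating the interface, the follow-on on the VPP side is something like the below; the IP address is just an example, and I assume the interface shows up under the name given:)

        vpp# create int rdma host-if enp94s0f0 name mlx5
        vpp# set interface state mlx5 up
        vpp# set interface ip address mlx5 192.168.1.1/24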

                Let me know if you need more info.

                ben


        --
        Thomas F Herbert
        NFV and Fast Data Planes
        Networking Group Office of the CTO
        Red Hat

--
*Thomas F Herbert*
NFV and Fast Data Planes
Networking Group Office of the CTO
*Red Hat*