Ben,

I had this working with 19.02 with my patch, using the DPDK driver with RDMA:

https://gerrit.fd.io/r/#/c/18521/

I used that patch in the RPMs for CentOS.

The challenge I had was convincing DPDK to "see" the PCI device. That was because the server boots with the Mellanox interfaces in a bond; after removing the interface from the bond, I had to explicitly whitelist it in vpp.conf.
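
Roughly, the sequence was something like this (the interface name and PCI address below are just examples, substitute your own):

    # take the port out of the bond so VPP/DPDK can claim it
    ip link set enp94s0f0 nomaster

    # then whitelist the device in the dpdk section of the VPP config
    dpdk {
      dev 0000:5e:00.0
    }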

I don't know how to do that with the new rdma driver in 19.04, because it doesn't make sense to whitelist the device for DPDK.



On 05/10/2019 04:08 AM, Benoit Ganne (bganne) wrote:
Hi Thomas,

I just pushed a small doc: 
https://gerrit.fd.io/r/c/19364/3/src/plugins/rdma/rdma_doc.md

I also tried:
vpp# create interface rdma name mlx5
create interface rdma: invalid interface (only mlx5 supported for now)
You are missing the 'host-if <netdev>' stanza, e.g.:
vpp# create int rdma host-if enp94s0f0 name mlx5
Here enp94s0f0 is the netdev of the physical port you want to use.
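After that the usual bring-up applies, for example (interface name and address are purely illustrative, assuming the interface is created under the name you pass):

    vpp# create int rdma host-if enp94s0f0 name mlx5
    vpp# set interface state mlx5 up
    vpp# set interface ip address mlx5 192.168.10.1/24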
Here are my commands for 19.04: https://gist.github.com/tfherbert/624d8465c42b0aafb4859f144ba8a4e4

Let me know if you need more info.

ben

--
*Thomas F Herbert*
NFV and Fast Data Planes
Networking Group Office of the CTO
*Red Hat*