Hello,

I currently have VPP working nicely in Azure via the netvsc PMD; however,
after reading https://docs.microsoft.com/en-us/azure/virtual-network/setup-dpdk
and https://fd.io/docs/vpp/v2101/usecases/vppinazure.html it sounds like I
should be using the failsafe PMD instead. I gave this a try but ran into some
issues, some of which I've seen discussed on this list, though none of the
suggested solutions have worked for me so far. I was able to make the failsafe
PMD work via dpdk-testpmd using the standalone DPDK packaged in Debian
bullseye (DPDK 20.11).
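
For reference, the standalone check followed the pattern from the Microsoft
guide above; roughly the following, where the core list and options are
illustrative rather than the exact values I used:

    sudo dpdk-testpmd -l 0-1 -n 2 \
        --vdev="net_vdev_netvsc0,iface=eth1" -- -i
    testpmd> show port info all
    testpmd> start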

I'm running VPP 22.06 with an external DPDK at version 21.11, though I see
the same behaviour after downgrading to 20.11. We have three interfaces: the
first (eth0) is a non-accelerated-networking interface which is not to be
controlled by VPP; the other two (eth1, eth2) are accelerated data interfaces
owned by VPP. The dpdk section of my VPP startup config looks like this:

dpdk {
    dev 0ed6:00:02.0
    vdev net_vdev_netvsc0,iface=eth1

    dev 6fa1:00:02.0
    vdev net_vdev_netvsc1,iface=eth2

    base-virtaddr 0x200000000
}
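
For reference, the PCI addresses above can be cross-checked against the
kernel's view of the VFs (the VF names here are taken from the dmesg output
further down):

    ethtool -i eth1                           # driver: hv_netvsc
    ethtool -i enP3798s2                      # driver: mlx5_core, bus-info: 0ed6:00:02.0
    readlink /sys/class/net/enP3798s2/device  # resolves to .../0ed6:00:02.0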

When VPP starts up, the two interfaces are shown:

me@azure:~$ sudo vppctl sh hard
              Name                Idx   Link  Hardware
FailsafeEthernet2                  1     up   FailsafeEthernet2
  Link speed: 50 Gbps
  RX Queues:
    queue thread         mode
    0     vpp_wk_0 (1)   polling
  Ethernet address 00:22:48:4c:c0:e5
  FailsafeEthernet
    carrier up full duplex max-frame-size 1518
    flags: maybe-multiseg tx-offload rx-ip4-cksum
    Devargs: fd(17),dev(net_tap_vsc0,remote=eth1)
    rx: queues 1 (max 16), desc 1024 (min 0 max 65535 align 1)
    tx: queues 8 (max 16), desc 1024 (min 0 max 65535 align 1)
    max rx packet len: 1522
    promiscuous: unicast off all-multicast off
    vlan offload: strip off filter off qinq off
    rx offload avail:  ipv4-cksum udp-cksum tcp-cksum scatter
    rx offload active: ipv4-cksum scatter
    tx offload avail:  ipv4-cksum udp-cksum tcp-cksum tcp-tso multi-segs
    tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
    rss avail:         ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
                       ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
                       ipv6-ex ipv6
    rss active:        none
    tx burst function: (not available)
    rx burst function: (not available)

FailsafeEthernet4                  2     up   FailsafeEthernet4
  Link speed: 50 Gbps
  RX Queues:
    queue thread         mode
    0     vpp_wk_1 (2)   polling
  Ethernet address 00:22:48:4c:c6:4a
  FailsafeEthernet
    carrier up full duplex max-frame-size 1518
    flags: maybe-multiseg tx-offload rx-ip4-cksum
    Devargs: fd(33),dev(net_tap_vsc1,remote=eth2)
    rx: queues 1 (max 16), desc 1024 (min 0 max 65535 align 1)
    tx: queues 8 (max 16), desc 1024 (min 0 max 65535 align 1)
    max rx packet len: 1522
    promiscuous: unicast off all-multicast off
    vlan offload: strip off filter off qinq off
    rx offload avail:  ipv4-cksum udp-cksum tcp-cksum scatter
    rx offload active: ipv4-cksum scatter
    tx offload avail:  ipv4-cksum udp-cksum tcp-cksum tcp-tso multi-segs
    tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
    rss avail:         ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
                       ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
                       ipv6-ex ipv6
    rss active:        none
    tx burst function: (not available)
    rx burst function: (not available)

local0                             0    down  local0
  Link speed: unknown
  local
me@azure:~$

However, on enabling our application we see the following messages in the
VPP log, and the VPP interfaces are unusable:

2022/04/09 09:11:16:424 notice     dpdk           net_failsafe: Link update failed for sub_device 0 with error 1
2022/04/09 09:11:16:424 notice     dpdk           net_failsafe: Link update failed for sub_device 0 with error 1
2022/04/09 09:12:32:380 notice     dpdk           common_mlx5: Unable to find virtually contiguous chunk for address (0x1000000000). rte_memseg_contig_walk() failed.

2022/04/09 09:12:36:144 notice     ip6/link       enable: FailsafeEthernet2
2022/04/09 09:12:36:144 error      interface      hw_add_del_mac_address: dpdk_add_del_mac_address: mac address add/del failed: -28
2022/04/09 09:12:36:145 error      interface      hw_add_del_mac_address: dpdk_add_del_mac_address: mac address add/del failed: -28
2022/04/09 09:12:36:145 error      interface      hw_add_del_mac_address: dpdk_add_del_mac_address: mac address add/del failed: -28
2022/04/09 09:12:36:145 error      interface      hw_add_del_mac_address: dpdk_add_del_mac_address: mac address add/del failed: -28
2022/04/09 09:12:36:146 notice     dpdk           Port 2: MAC address array full
2022/04/09 09:12:36:146 notice     dpdk           Port 2: MAC address array full
2022/04/09 09:12:36:146 notice     dpdk           Port 2: MAC address array full
2022/04/09 09:12:36:146 notice     dpdk           Port 2: MAC address array full
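
The -28 appears to be -ENOSPC, consistent with the "MAC address array full"
notices (rte_ethdev returns -ENOSPC when a port's MAC address table has no
free slot). A quick check of the errno value:

    python3 -c 'import errno, os; print(errno.errorcode[28], os.strerror(28))'
    # ENOSPC No space left on device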

Earlier snippets from dmesg:

[    6.189856] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[    8.644980] hv_vmbus: registering driver uio_hv_generic
[   12.586115] mlx5_core 0ed6:00:02.0 enP3798s2: Link up
[   12.586914] hv_netvsc 0022484c-c0e5-0022-484c-c0e50022484c eth1: Data path switched to VF: enP3798s2
[   12.846211] mlx5_core 6fa1:00:02.0 enP28577s3: Link up
[   12.847014] hv_netvsc 0022484c-c64a-0022-484c-c64a0022484c eth2: Data path switched to VF: enP28577s3
[   13.032549] tun: Universal TUN/TAP device driver, 1.6
[   13.149802] Mirror/redirect action on
[   13.414199] hv_netvsc 0022484c-c0e5-0022-484c-c0e50022484c eth1: VF slot 2 added
[   13.549360] hv_netvsc 0022484c-c0e5-0022-484c-c0e50022484c eth1: VF slot 2 added
[   13.716361] hv_netvsc 0022484c-c64a-0022-484c-c64a0022484c eth2: VF slot 3 added
[   13.848514] hv_netvsc 0022484c-c64a-0022-484c-c64a0022484c eth2: VF slot 3 added
[   15.123859] tc mirred to Houston: device dtap1 is down
[   16.986207] tc mirred to Houston: device dtap0 is down
[   20.129536] tc mirred to Houston: device dtap1 is down
[   21.996293] tc mirred to Houston: device dtap0 is down
[   25.129538] tc mirred to Houston: device dtap1 is down
[   27.003053] tc mirred to Houston: device dtap0 is down
[   30.143527] tc mirred to Houston: device dtap1 is down
[   32.006497] tc mirred to Houston: device dtap0 is down
[   35.149622] tc mirred to Houston: device dtap1 is down
[   37.014221] tc mirred to Houston: device dtap0 is down
[   40.158724] tc mirred to Houston: device dtap1 is down
[   42.024193] tc mirred to Houston: device dtap0 is down
[   45.166728] tc mirred to Houston: device dtap1 is down
[   45.747723] tc mirred to Houston: device dtap0 is down
[   47.030589] tc mirred to Houston: device dtap0 is down
[   50.172306] tc mirred to Houston: device dtap1 is down
[   52.039827] tc mirred to Houston: device dtap0 is down
[   52.137147] hv_balloon: Max. dynamic memory size: 32768 MB

The "device dtap0/dtap1 is down" messages continue to repeat.
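
If it helps anyone looking at this: dtap0/dtap1 are the kernel tap devices
behind the tap sub-devices that vdev_netvsc creates, and "tc mirred to
Houston" is the kernel's act_mirred warning that the mirror target is down.
Their state and the mirror rules can be inspected with, e.g.:

    ip -d link show dtap0
    tc qdisc show dev eth1
    tc filter show dev eth1 parent ffff: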

Can anyone spot a problem with the configuration I've tried in Azure?

Thanks,
Peter.