The issue might be fixed if you upgrade to a version newer than 19.01 - see
https://gerrit.fd.io/r/#/c/vpp/+/19383/
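Incidentally, a rough sanity check from the counters quoted below suggests the rings themselves are not undersized. Taking the average frame size from the rx byte/frame ratio on 0/b/0, together with the 28 Gbps downlink load, 32 RX queues, and 4096 descriptors per queue from your configuration, each ring absorbs on the order of 40 ms of traffic, so rx-miss at a Vectors/Call of only 3~4 points at workers being stalled for long stretches (e.g. in the crypto path) rather than at ring sizing. A sketch of the arithmetic (approximate figures only):

```python
# Back-of-envelope check using the counters quoted in the mail below.
rx_bytes = 11_194_675_133_398        # rx bytes ok on FortyGigabitEthernet0/b/0
rx_frames = 9_959_949_511            # rx frames ok on the same port
avg_frame = rx_bytes / rx_frames     # average frame size, ~1124 bytes

downlink_bps = 28e9                  # offered downlink load, 28 Gbps
pps_total = downlink_bps / (avg_frame * 8)   # total packets/sec, ~3.1 Mpps
pps_per_queue = pps_total / 32               # spread across 32 RX queues
ring_cover_ms = 4096 / pps_per_queue * 1000  # time 4096 descriptors can absorb

print(f"avg frame {avg_frame:.0f} B, {pps_total / 1e6:.2f} Mpps total, "
      f"{pps_per_queue / 1e3:.1f} kpps/queue, ring covers ~{ring_cover_ms:.0f} ms")
```

(Order-of-magnitude only; this ignores L1 overhead and assumes an even spread across queues.)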

-Matt


On Mon, Sep 9, 2019 at 7:14 AM shi dave <dave....@outlook.com> wrote:

> Hi,
>
> Using VPP+DPDK for an IPsec security gateway application, I want to handle
> traffic (*7Gbps uplink & decrypt + 28Gbps downlink & encrypt*) with the
> configuration below, but there are many rx-miss errors on the downlink
> interface, *even though the Vectors/Call for the ipsec & dpdk crypto nodes
> is very low (only 3~4)*, and the traffic is balanced across all threads.
>
> *If I run only 35Gbps of downlink traffic, there are no rx-miss errors on
> the downlink interface.*
>
> My initial test used 16 worker cores; when I saw the downlink rx-miss, I
> increased the worker count to 32 cores, but the rx-misses are still there.
>
> Does anyone know how to resolve the rx-miss issue?
> Also, it seems the crypto scheduler doesn't work in this version.
>
>
> *configuration:*
>
> 40G NIC * 2
> 32 worker cores
>
> # ./vppctl show hard detail
>
>               Name                Idx   Link  Hardware
> FortyGigabitEthernet0/a/0          1     up   FortyGigabitEthernet0/a/0
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:af:d6:f8
>   Intel X710/XL710 Family
>     carrier up full duplex mtu 9206
>     flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
> rx-ip4-cksum
>
>     rx: queues 32 (max 320), desc 4096 (min 64 max 4096 align 32)
>     tx: queues 32 (max 320), desc 4096 (min 64 max 4096 align 32)
>     tx burst function: i40e_xmit_pkts
>     rx burst function: i40e_recv_scattered_pkts_vec
>
>     tx frames ok                                  9958725523
>     tx bytes ok                               11808362771488
>     rx frames ok                                  4714961077
>     rx bytes ok                                2234400127379
>
>
> FortyGigabitEthernet0/b/0          2     up   FortyGigabitEthernet0/b/0
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:af:e2:a0
>   Intel X710/XL710 Family
>     carrier up full duplex mtu 9206
>     flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
> rx-ip4-cksum
>
>     rx: queues 32 (max 320), desc 4096 (min 64 max 4096 align 32)
>     tx: queues 32 (max 320), desc 4096 (min 64 max 4096 align 32)
>     tx burst function: i40e_xmit_pkts
>     rx burst function: i40e_recv_scattered_pkts_vec
>
>     tx frames ok                                  4714839685
>     tx bytes ok                                1939591401104
>     rx frames ok                                  9959949511
>     rx bytes ok                               11194675133398
>     rx missed                                         1126523
>
>
> *startup.conf*
>
> cpu {
>     main-core 0
>     corelist-workers 1-32
> }
>
> dpdk {
>     socket-mem 16384
>     uio-driver igb_uio
>     num-mbufs 8388608
>     dev 0000:00:0a.0 {
>         num-rx-queues 32
>         num-tx-queues 32
>         num-rx-desc 4096
>         num-tx-desc 4096
>     }
>     dev 0000:00:0b.0 {
>         num-rx-queues 32
>         num-tx-queues 32
>         num-rx-desc 4096
>         num-tx-desc 4096
>     }
>     vdev cryptodev_aesni_mb_pmd0,socket_id=0
>     vdev cryptodev_aesni_mb_pmd1,socket_id=0
>     vdev cryptodev_aesni_mb_pmd2,socket_id=0
>     vdev cryptodev_aesni_mb_pmd3,socket_id=0
>     vdev cryptodev_aesni_mb_pmd4,socket_id=0
>     vdev cryptodev_aesni_mb_pmd5,socket_id=0
>     vdev cryptodev_aesni_mb_pmd6,socket_id=0
>     vdev cryptodev_aesni_mb_pmd7,socket_id=0
> }
>
>
> *vpp# show dpdk crypto placement*
>
> Thread 1 (vpp_wk_0):
>   cryptodev_aesni_mb_p dev-id  0 inbound-queue  0 outbound-queue  1
>
> Thread 2 (vpp_wk_1):
>   cryptodev_aesni_mb_p dev-id  0 inbound-queue  2 outbound-queue  3
>
> Thread 3 (vpp_wk_2):
>   cryptodev_aesni_mb_p dev-id  0 inbound-queue  4 outbound-queue  5
>
> Thread 4 (vpp_wk_3):
>   cryptodev_aesni_mb_p dev-id  0 inbound-queue  6 outbound-queue  7
>
> Thread 5 (vpp_wk_4):
>   cryptodev_aesni_mb_p dev-id  1 inbound-queue  0 outbound-queue  1
>
> Thread 6 (vpp_wk_5):
>   cryptodev_aesni_mb_p dev-id  1 inbound-queue  2 outbound-queue  3
>
> ......
>
> Thread 31 (vpp_wk_30):
>   cryptodev_aesni_mb_p dev-id  7 inbound-queue  4 outbound-queue  5
>
> Thread 32 (vpp_wk_31):
>   cryptodev_aesni_mb_p dev-id  7 inbound-queue  6 outbound-queue  7
>
>
> *vpp# show interface rx-placement*
> Thread 1 (vpp_wk_0):
>   node dpdk-input:
>     FortyGigabitEthernet0/a/0 queue 0 (polling)
>     FortyGigabitEthernet0/b/0 queue 0 (polling)
> Thread 2 (vpp_wk_1):
>   node dpdk-input:
>     FortyGigabitEthernet0/a/0 queue 1 (polling)
>     FortyGigabitEthernet0/b/0 queue 1 (polling)
> Thread 3 (vpp_wk_2):
>   node dpdk-input:
>     FortyGigabitEthernet0/a/0 queue 2 (polling)
>     FortyGigabitEthernet0/b/0 queue 2 (polling)
> Thread 4 (vpp_wk_3):
>   node dpdk-input:
>     FortyGigabitEthernet0/a/0 queue 3 (polling)
>     FortyGigabitEthernet0/b/0 queue 3 (polling)
>
> ......
>
> Thread 31 (vpp_wk_30):
>   node dpdk-input:
>     FortyGigabitEthernet0/a/0 queue 30 (polling)
>     FortyGigabitEthernet0/b/0 queue 30 (polling)
>
> Thread 32 (vpp_wk_31):
>   node dpdk-input:
>     FortyGigabitEthernet0/a/0 queue 31 (polling)
>     FortyGigabitEthernet0/b/0 queue 31 (polling)
>
>
> Best Regards
>
> Dave
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
>
> View/Reply Online (#13927): https://lists.fd.io/g/vpp-dev/message/13927
> Mute This Topic: https://lists.fd.io/mt/34079036/675725
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [mgsm...@netgate.com]
> -=-=-=-=-=-=-=-=-=-=-=-
>