Hi all,
I am using an Intel 82599 (10G) NIC running VPP v20.01-release. At 10G line rate
with 128-byte packets, I am observing Rx misses on the interface.
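(For context: assuming the 128 bytes is the full Ethernet frame size, 10 Gbps line rate
works out to roughly 10,000,000,000 / ((128 + 20) * 8) ≈ 8.45 Mpps per port, the extra
20 bytes being preamble plus inter-frame gap.)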

The VPP-related configuration is as follows:
vpp# show hardware-interfaces
Name                Idx   Link  Hardware
TenGigabitEthernet3/0/1            1     up   TenGigabitEthernet3/0/1
Link speed: 10 Gbps
Ethernet address 6c:92:bf:4d:e2:fb
Intel 82599
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg subif tx-offload intel-phdr-cksum rx-ip4-cksum
rx: queues 1 (max 128), desc 4096 (min 32 max 4096 align 8)
tx: queues 1 (max 64), desc 4096 (min 32 max 4096 align 8)
pci: device 8086:15ab subsystem 8086:0000 address 0000:03:00.01 numa 0
max rx packet len: 15872
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame scatter keep-crc
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum tcp-tso outer-ipv4-cksum multi-segs
tx offload active: udp-cksum tcp-cksum tcp-tso multi-segs
rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp ipv6-udp ipv6-ex ipv6
rss active:        none
tx burst function: ixgbe_xmit_pkts
rx burst function: ixgbe_recv_scattered_pkts_vec

tx frames ok                                    61218472
tx bytes ok                                   7591090528
rx frames ok                                    61218472
rx bytes ok                                   7591090528
*rx missed                                          59536*
extended stats:
rx good packets                               61218472
tx good packets                               61218472
rx good bytes                               7591090528
tx good bytes                               7591090528
rx missed errors                                 59536
rx q0packets                                  61218472
rx q0bytes                                  7591090528
tx q0packets                                  61218472
tx q0bytes                                  7591090528
rx size 128 to 255 packets                    66097941
rx total packets                              66097927
rx total bytes                              8196143588
tx total packets                              61218472
tx size 128 to 255 packets                    61218472
rx l3 l4 xsum error                          101351297
out pkts untagged                             61218472
*rx priority0 dropped                             59536*
local0                             0    down  local0
Link speed: unknown
local

cpu {
## In VPP there is one main thread and optionally the user can create worker(s)
## The main thread and worker thread(s) can be pinned to CPU core(s) manually or automatically

## Manual pinning of thread(s) to CPU core(s)

## Set logical CPU core where main thread runs, if main core is not set
## VPP will use core 1 if available
#main-core 1

## Set logical CPU core(s) where worker threads are running
#corelist-workers 2-3,18-19
#corelist-workers 4-3,5-7

## Automatic pinning of thread(s) to CPU core(s)

## Sets number of CPU core(s) to be skipped (1 ... N-1)
## Skipped CPU core(s) are not used for pinning the main thread and worker thread(s).
## The main thread is automatically pinned to the first available CPU core and
## worker(s) are pinned to the next free CPU core(s) after the core assigned to the main thread
#skip-cores 4

## Specify a number of workers to be created
## Workers are pinned to N consecutive CPU cores while skipping "skip-cores" CPU core(s)
## and main thread's CPU core
# workers 4

## Set scheduling policy and priority of main and worker threads

## Scheduling policy options are: other (SCHED_OTHER), batch (SCHED_BATCH)
## idle (SCHED_IDLE), fifo (SCHED_FIFO), rr (SCHED_RR)
scheduler-policy fifo

## Scheduling priority is used only for "real-time" policies (fifo and rr),
## and has to be in the range of priorities supported for a particular policy
scheduler-priority 50
}
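
For reference, a minimal sketch of what an uncommented worker setup matching the
comments above could look like (core numbers are placeholders and would have to
match the isolated cores on this host):

cpu {
    main-core 1
    corelist-workers 2-5
}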

buffers {
## Increase number of buffers allocated, needed only in scenarios with
## large number of interfaces and worker threads. Value is per numa node.
## Default is 16384 (8192 if running unprivileged)
buffers-per-numa 30000

## Size of buffer data area
## Default is 2048
default data-size 2048
}
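
(Rough arithmetic, for reference: 30000 buffers per NUMA node with a 2048-byte data
area is about 30000 * 2048 ≈ 61 MB of buffer data per node, plus per-buffer metadata
and headroom.)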

dpdk {
## Change default settings for all interfaces
dev default {
## Number of receive queues, enables RSS
## Default is 1
#num-rx-queues 4

## Number of transmit queues. Default is equal to the
## number of worker threads, or 1 if there are no worker threads
#num-tx-queues 4

## Number of descriptors in transmit and receive rings
## increasing or reducing the number can impact performance
## Default is 1024 for both rx and tx
num-rx-desc 4096
num-tx-desc 4096

## VLAN strip offload mode for interface
## Default is off
#vlan-strip-offload on

## TCP Segment Offload
## Default is off
## To enable TSO, 'enable-tcp-udp-checksum' must be set
tso on

## Devargs
## device specific init args
## Default is NULL
# devargs safe-mode-support=1,pipeline-mode-support=1
}

## Whitelist specific interface by specifying PCI address
# dev 0000:02:00.0
# dev 0000:03:00.1

## Blacklist specific device type by specifying PCI vendor:device
## Whitelist entries take precedence
# blacklist 8086:10fb

## Set interface name
#dev 0000:03:00.1 {
# name ztj
#}

## Whitelist specific interface by specifying PCI address and in
## addition specify custom parameters for this interface
# dev 0000:02:00.1 {
# num-rx-queues 2
# }

## Change UIO driver used by VPP, Options are: igb_uio, vfio-pci,
## uio_pci_generic or auto (default)
uio-driver vfio-pci
#uio-driver uio_pci_generic

## Disable multi-segment buffers, improves performance but
## disables Jumbo MTU support
# no-multi-seg

## Change hugepages allocation per-socket, needed only if there is need for
## larger number of mbufs. Default is 256M on each detected CPU socket
#socket-mem 2048,2048
socket-mem 8192,0

## Disables UDP / TCP TX checksum offload. Typically needed to use
## faster vector PMDs (together with no-multi-seg)
# no-tx-checksum-offload

## Enable UDP / TCP TX checksum offload
## This is the reverse of 'no-tx-checksum-offload'
enable-tcp-udp-checksum
}
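
For comparison, a sketch of what the dev default section looks like with RSS enabled
(the queue counts are placeholders and would normally match the number of worker threads):

dpdk {
    dev default {
        num-rx-queues 4
        num-tx-queues 4
    }
}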