Hello,

I am trying out the AF_XDP feature of 19.05-rc2 and encountered two problems.
One is about "iova-mode"; the other is about attaching a bpf prog to an interface
in a specified namespace (probably a new feature?).

I created a VM using uvt-tool (release=eoan, cpu=8, mem=8G, host-passthrough
enabled) and set vm.nr_hugepages = 2500. I then created 2 docker containers and
ran testpmd with the parameters "--vdev net_af_xdp_a,iface=veth055cc57,queue=0
--vdev net_af_xdp_b,iface=veth438434b,queue=0", where the veths are the
host-side interface names of the two containers. I got the following error:
xdp_umem_configure(): Failed to reserve memzone for af_xdp umem.
eth_rx_queue_setup(): Failed to configure xdp socket
Fail to configure port 0 rx queues
After some digging, I fixed it by adding "--iova-mode=va". Could anyone please
let me know whether it is safe to use "va" instead of "pa" for the veth use
case, and whether there is any performance drop?
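
For reference, the command that works for me now looks roughly like this (the
vdev arguments are the ones above; the binary path and the trailing "-- -i"
interactive-mode option are just how I happen to run it):

    ./testpmd --iova-mode=va \
        --vdev net_af_xdp_a,iface=veth055cc57,queue=0 \
        --vdev net_af_xdp_b,iface=veth438434b,queue=0 \
        -- -i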

Secondly, I'd like to attach a bpf prog to an interface inside a container,
e.g. the "lo" interface, so that my code can make verdicts on the traffic
between containers inside a pod. The testpmd arguments would then look like
"--vdev net_af_xdp_a,namespace=<NS>,iface=NAME,queue=0". Do you think this
feature is doable and meaningful? A rough sketch of the intended usage is below.
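
Today the closest I can get is to start the whole testpmd process inside the
container's network namespace, which ties the process to a single namespace
(and assumes the container's netns is visible to "ip netns", which docker does
not set up by default). The lines below are only a sketch to illustrate the
semantics I have in mind; <NS>, the interface name and the other options are
placeholders, and I have not benchmarked either form:

    # current workaround (sketch): run the whole testpmd process inside
    # the container's network namespace
    ip netns exec <NS> ./testpmd --iova-mode=va \
        --vdev net_af_xdp_a,iface=lo,queue=0 -- -i

    # proposed: stay on the host and let the vdev argument select the
    # namespace per interface
    ./testpmd --iova-mode=va \
        --vdev net_af_xdp_a,namespace=<NS>,iface=lo,queue=0 -- -i

The advantage of a namespace= argument would be that a single DPDK process
could open AF_XDP sockets on interfaces living in different namespaces, which
the "ip netns exec" approach cannot do.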


Thanks,
Robert Nie




