Ravi,

VPP's native vhost-user supports only device mode. In your example, the host in
device mode and the container also in device mode do not make a happy couple.
You need one of them, either the host or the container, running in driver mode
using a dpdk "vdev virtio_user" option in startup.conf. So you need something
like this:

(host) VPP native vhost-user ----- (container) VPP DPDK vdev virtio_user
                          -- or --
(host) VPP DPDK vdev virtio_user ---- (container) VPP native vhost-user
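
For the first option, a minimal sketch of the container side (untested, and it
assumes the host socket is bind-mounted into the container at /var/run/usvhost1
as in your docker run command) would be a startup.conf dpdk stanza like:

    dpdk {
        no-pci
        vdev virtio_user0,path=/var/run/usvhost1
    }

The host side stays on VPP native vhost-user, e.g. "create vhost socket
/var/run/vpp/sock3.sock server". Check "show interface" in the container for
the name dpdk gives the virtio_user interface before you bring it up.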

Steven

On 6/4/18, 3:27 PM, "Ravi Kerur" <rke...@gmail.com> wrote:

    Hi Steven,

    Though the crash is not happening anymore, there is still an issue with Rx
    and Tx. To rule out whether testpmd or VPP is at fault, I decided to run

    (1) VPP vhost-user server on host-x
    (2) VPP in a container on host-x, with a vhost-user client port
    connecting to the vhost-user server.

    Still doesn't work. Details below. Please let me know if something is
    wrong with what I am doing.
    
    
    (1) VPP vhost-user as a server
    (2) VPP in a container as a virtio-user or vhost-user client
    
    (1) Create a vhost-user server socket in VPP running on the host.
    
    vpp# create vhost socket /var/run/vpp/sock3.sock server
    vpp# set interface state VirtualEthernet0/0/0 up
    vpp# show vhost-user VirtualEthernet0/0/0 descriptors
    Virtio vhost-user interfaces
    Global:
    coalesce frames 32 time 1e-3
    number of rx virtqueues in interrupt mode: 0
    Interface: VirtualEthernet0/0/0 (ifindex 3)
    virtio_net_hdr_sz 0
    features mask (0xffffffffffffffff):
    features (0x0):
    protocol features (0x0)
    
    socket filename /var/run/vpp/sock3.sock type server errno "Success"
    
    rx placement:
    tx placement: spin-lock
    thread 0 on vring 0
    
    Memory regions (total 0)
    
    vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.1/24
    vpp#
    
    (2) Instantiate a docker container to run VPP, connecting to the
    sock3.sock server socket.
    
    docker run -it --privileged -v /var/run/vpp/sock3.sock:/var/run/usvhost1 \
      -v /dev/hugepages:/dev/hugepages dpdk-app-vpp:latest
    root@4b1bd06a3225:~/dpdk#
    root@4b1bd06a3225:~/dpdk# ps -ef
    UID PID PPID C STIME TTY TIME CMD
    root 1 0 0 21:39 ? 00:00:00 /bin/bash
    root 17 1 0 21:39 ? 00:00:00 ps -ef
    root@4b1bd06a3225:~/dpdk#
    
    root@8efda6701ace:~/dpdk# ps -ef | grep vpp
    root 19 1 39 21:41 ? 00:00:03 /usr/bin/vpp -c /etc/vpp/startup.conf
    root 25 1 0 21:41 ? 00:00:00 grep --color=auto vpp
    root@8efda6701ace:~/dpdk#
    
    vpp# create vhost socket /var/run/usvhost1
    vpp# set interface state VirtualEthernet0/0/0 up
    vpp# show vhost-user VirtualEthernet0/0/0 descriptors
    Virtio vhost-user interfaces
    Global:
    coalesce frames 32 time 1e-3
    number of rx virtqueues in interrupt mode: 0
    Interface: VirtualEthernet0/0/0 (ifindex 1)
    virtio_net_hdr_sz 0
    features mask (0xffffffffffffffff):
    features (0x0):
    protocol features (0x0)
    
    socket filename /var/run/usvhost1 type client errno "Success"
    
    rx placement:
    tx placement: spin-lock
    thread 0 on vring 0
    
    Memory regions (total 0)
    
    vpp#
    
    vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.2/24
    vpp#
    
    vpp# ping 192.168.1.1
    
    Statistics: 5 sent, 0 received, 100% packet loss
    vpp#
    
    On Thu, May 31, 2018 at 2:30 PM, Steven Luong (sluong) <slu...@cisco.com> wrote:
    > show interface and look for the counter and count columns for the
    > corresponding interface.
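    >
    > For example (illustrative output only; your counter values will differ):
    >
    > vpp# show interface VirtualEthernet0/0/0
    >               Name               Idx       State          Counter          Count
    > VirtualEthernet0/0/0              3         up       rx packets                     5
    >                                                      rx bytes                     490
    >                                                      tx packets                     5
    >                                                      tx bytes                     490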
    >
    > Steven
    >
    > On 5/31/18, 1:28 PM, "Ravi Kerur" <rke...@gmail.com> wrote:
    >
    >     Hi Steven,
    >
    >     You made my day, thank you. I didn't realize different dpdk versions
    >     (vpp -- 18.02.1 and testpmd -- from the latest git repo, probably 18.05)
    >     could be the cause of the problem. I still don't understand why they
    >     should be, as the virtio/vhost messages are meant to set up the tx/rx
    >     rings correctly?
    >
    >     I downloaded the dpdk 18.02.1 stable release and at least vpp doesn't
    >     crash now (for both vpp-native and dpdk vhost interfaces). I have one
    >     question: is there a way to read the vhost-user statistics counters
    >     (Rx/Tx) in vpp? I only know
    >
    >     'show vhost-user <intf>' and 'show vhost-user <intf> descriptors',
    >     which don't show any counters.
    >
    >     Thanks.
    >
    >     On Thu, May 31, 2018 at 11:51 AM, Steven Luong (sluong)
    >     <slu...@cisco.com> wrote:
    >     > Ravi,
    >     >
    >     > For (1) which works, what dpdk version are you using in the host? Are
    >     > you using the same dpdk version as VPP is using? Since you are using
    >     > VPP latest, I think it is 18.02. Type "show dpdk version" at the VPP
    >     > prompt to find out for sure.
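    >     >
    >     > For example (illustrative output; the field layout may differ by release):
    >     >
    >     > vpp# show dpdk version
    >     > DPDK Version:       DPDK 18.02.1
    >     > DPDK EAL init args: -c 1 -n 4 --no-pci --huge-dir /dev/hugepages
    >     >                     --master-lcore 0 --socket-mem 256,0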
    >     >
    >     > Steven
    >     >
    >     > On 5/31/18, 11:44 AM, "Ravi Kerur" <rke...@gmail.com> wrote:
    >     >
    >     >     Hi Steven,
    >     >
    >     >     I have tested the following scenarios and it is not clear to me why
    >     >     you think DPDK is the problem. Is it possible VPP and DPDK use
    >     >     different virtio versions?
    >     >
    >     >     Following are the scenarios I have tested
    >     >
    >     >     (1) testpmd/DPDK vhost-user (running on host) and testpmd/DPDK
    >     >     virtio-user (in a container) -- can send and receive packets
    >     >     (2) VPP-native vhost-user (running on host) and testpmd/DPDK
    >     >     virtio-user (in a container) -- VPP crashes, and the crash is in VPP code
    >     >     (3) VPP-DPDK vhost-user (running on host) and testpmd/DPDK virtio-user
    >     >     (in a container) -- VPP crashes, and the crash is in DPDK code
    >     >
    >     >     Thanks.
    >     >
    >     >     On Thu, May 31, 2018 at 10:12 AM, Steven Luong (sluong)
    >     >     <slu...@cisco.com> wrote:
    >     >     > Ravi,
    >     >     >
    >     >     > I've proved my point -- there is a problem in the way that you invoke
    >     >     > testpmd. The shared memory region that it passes to the device is not
    >     >     > accessible from the device. I don't know what the correct options are
    >     >     > that you need to use. This is really a question for dpdk.
    >     >     >
    >     >     > As a further exercise, you could remove VPP in the host and instead
    >     >     > run testpmd in device mode using the "--vdev
    >     >     > net_vhost0,iface=/var/run/vpp/sock1.sock" option. I bet testpmd in the
    >     >     > host will crash in the same place. I hope you can find out the answer
    >     >     > from dpdk and tell us about it.
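    >     >     >
    >     >     > A rough sketch of that invocation (untested; core list, memory size
    >     >     > and other EAL options to taste, modeled on your earlier command line):
    >     >     >
    >     >     > ./bin/testpmd -l 16-17 -n 4 -m 64 --no-pci \
    >     >     >   --vdev net_vhost0,iface=/var/run/vpp/sock1.sock -- -i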
    >     >     >
    >     >     > Steven
    >     >     >
    >     >     > On 5/31/18, 9:31 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur" <vpp-dev@lists.fd.io on behalf of rke...@gmail.com> wrote:
    >     >     >
    >     >     >     Hi Steven,
    >     >     >
    >     >     >     Thank you for your help. I removed sock1.sock and sock2.sock and
    >     >     >     restarted vpp; at least the interfaces get created now. However, when
    >     >     >     I start dpdk/testpmd inside the container it crashes as well. Below
    >     >     >     are some details. I am using vpp code from the latest repo.
    >     >     >
    >     >     >     (1) On host
    >     >     >     show interface
    >     >     >                   Name               Idx       State          Counter          Count
    >     >     >     VhostEthernet2                    3        down
    >     >     >     VhostEthernet3                    4        down
    >     >     >     VirtualFunctionEthernet4/10/4     1        down
    >     >     >     VirtualFunctionEthernet4/10/6     2        down
    >     >     >     local0                            0        down
    >     >     >     vpp#
    >     >     >     vpp# set interface state VhostEthernet2 up
    >     >     >     vpp# set interface state VhostEthernet3 up
    >     >     >     vpp#
    >     >     >     vpp# set interface l2 bridge VhostEthernet2 1
    >     >     >     vpp# set interface l2 bridge VhostEthernet3 1
    >     >     >     vpp#
    >     >     >
    >     >     >     (2) Run testpmd inside the container
    >     >     >     docker run -it --privileged -v /var/run/vpp/sock1.sock:/var/run/usvhost1 \
    >     >     >       -v /var/run/vpp/sock2.sock:/var/run/usvhost2 \
    >     >     >       -v /dev/hugepages:/dev/hugepages dpdk-app-testpmd \
    >     >     >       ./bin/testpmd -l 16-19 -n 4 --log-level=8 -m 64 --no-pci \
    >     >     >       --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01 \
    >     >     >       --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 \
    >     >     >       -- -i
    >     >     >     EAL: Detected 28 lcore(s)
    >     >     >     EAL: Detected 2 NUMA nodes
    >     >     >     EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >     >     >     EAL: 8192 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
    >     >     >     EAL: Probing VFIO support...
    >     >     >     EAL: VFIO support initialized
    >     >     >     EAL: Setting up physically contiguous memory...
    >     >     >     EAL: locking hot plug lock memory...
    >     >     >     EAL: primary init32...
    >     >     >     Interactive-mode selected
    >     >     >     Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
    >     >     >     testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
    >     >     >     testpmd: preferred mempool ops selected: ring_mp_mc
    >     >     >     testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456, size=2176, socket=1
    >     >     >     testpmd: preferred mempool ops selected: ring_mp_mc
    >     >     >     Port 0 is now not stopped
    >     >     >     Port 1 is now not stopped
    >     >     >     Please stop the ports first
    >     >     >     Done
    >     >     >     testpmd>
    >     >     >
    >     >     >     (3) VPP crashes with the same issue but inside dpdk code
    >     >     >
    >     >     >     (gdb) cont
    >     >     >     Continuing.
    >     >     >
    >     >     >     Program received signal SIGSEGV, Segmentation fault.
    >     >     >     [Switching to Thread 0x7ffd0d08e700 (LWP 41257)]
    >     >     >     rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized out>, mbuf_pool=0x7fe17fc883c0, pkts=pkts@entry=0x7fffb671ebc0, count=count@entry=32)
    >     >     >         at /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
    >     >     >     1504        free_entries = *((volatile uint16_t *)&vq->avail->idx) -
    >     >     >     (gdb) bt
    >     >     >     #0  rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized out>, mbuf_pool=0x7fe17fc883c0, pkts=pkts@entry=0x7fffb671ebc0, count=count@entry=32)
    >     >     >         at /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
    >     >     >     #1  0x00007fffb4718e6f in eth_vhost_rx (q=0x7fe17fbbdd80, bufs=0x7fffb671ebc0, nb_bufs=<optimized out>)
    >     >     >         at /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/net/vhost/rte_eth_vhost.c:410
    >     >     >     #2  0x00007fffb441cb7c in rte_eth_rx_burst (nb_pkts=256, rx_pkts=0x7fffb671ebc0, queue_id=0, port_id=3)
    >     >     >         at /var/venom/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_ethdev.h:3635
    >     >     >     #3  dpdk_device_input (queue_id=0, thread_index=<optimized out>, node=0x7fffb732c700, xd=0x7fffb7337240, dm=<optimized out>, vm=0x7fffb6703340)
    >     >     >         at /var/venom/vpp/build-data/../src/plugins/dpdk/device/node.c:477
    >     >     >     #4  dpdk_input_node_fn_avx2 (vm=<optimized out>, node=<optimized out>, f=<optimized out>)
    >     >     >         at /var/venom/vpp/build-data/../src/plugins/dpdk/device/node.c:658
    >     >     >     #5  0x00007ffff7954d35 in dispatch_node (last_time_stamp=12531752723928016, frame=0x0, dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INPUT, node=0x7fffb732c700, vm=0x7fffb6703340)
    >     >     >         at /var/venom/vpp/build-data/../src/vlib/main.c:988
    >     >     >     #6  vlib_main_or_worker_loop (is_main=0, vm=0x7fffb6703340)
    >     >     >         at /var/venom/vpp/build-data/../src/vlib/main.c:1507
    >     >     >     #7  vlib_worker_loop (vm=0x7fffb6703340) at /var/venom/vpp/build-data/../src/vlib/main.c:1641
    >     >     >     #8  0x00007ffff6ad25d8 in clib_calljmp () at /var/venom/vpp/build-data/../src/vppinfra/longjmp.S:110
    >     >     >     #9  0x00007ffd0d08ddb0 in ?? ()
    >     >     >     #10 0x00007fffb4436edd in eal_thread_loop (arg=<optimized out>)
    >     >     >         at /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_eal/linuxapp/eal/eal_thread.c:153
    >     >     >     #11 0x0000000000000000 in ?? ()
    >     >     >     (gdb) frame 0
    >     >     >     #0  rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized out>, mbuf_pool=0x7fe17fc883c0, pkts=pkts@entry=0x7fffb671ebc0, count=count@entry=32)
    >     >     >         at /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
    >     >     >     1504        free_entries = *((volatile uint16_t *)&vq->avail->idx) -
    >     >     >     (gdb) p vq
    >     >     >     $1 = (struct vhost_virtqueue *) 0x7fc3ffc84b00
    >     >     >     (gdb) p vq->avail
    >     >     >     $2 = (struct vring_avail *) 0x7ffbfff98000
    >     >     >     (gdb) p *$2
    >     >     >     Cannot access memory at address 0x7ffbfff98000
    >     >     >     (gdb)
    >     >     >
    >     >     >
    >     >     >     Thanks.
    >     >     >
    >     >     >     On Thu, May 31, 2018 at 12:09 AM, Steven Luong (sluong)
    >     >     >     <slu...@cisco.com> wrote:
    >     >     >     > Sorry, I was expecting to see two VhostEthernet interfaces like
    >     >     >     > this. Those VirtualFunctionEthernet are your physical interfaces.
    >     >     >     >
    >     >     >     > sh int
    >     >     >     >               Name               Idx       State          Counter          Count
    >     >     >     > VhostEthernet0                    1         up
    >     >     >     > VhostEthernet1                    2         up
    >     >     >     > local0                            0        down
    >     >     >     > DBGvpp#
    >     >     >     >
    >     >     >     > You have to first manually remove /var/run/vpp/sock1.sock and
    >     >     >     > /var/run/vpp/sock2.sock before you start vpp on the host; dpdk does
    >     >     >     > not like it if they already exist. If you successfully create the
    >     >     >     > VhostEthernet interfaces, try to send some traffic through them to
    >     >     >     > see if it crashes or not.
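    >     >     >     >
    >     >     >     > Something like this before each restart (a sketch; it assumes vpp
    >     >     >     > runs as the systemd vpp.service seen in your logs):
    >     >     >     >
    >     >     >     > rm -f /var/run/vpp/sock1.sock /var/run/vpp/sock2.sock
    >     >     >     > systemctl restart vpp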
    >     >     >     >
    >     >     >     > Steven
    >     >     >     >
    >     >     >     > On 5/30/18, 9:17 PM, "vpp-dev@lists.fd.io on behalf of Steven Luong (sluong)" <vpp-dev@lists.fd.io on behalf of slu...@cisco.com> wrote:
    >     >     >     >
    >     >     >     >     Ravi,
    >     >     >     >
    >     >     >     >     I don't think you can declare (2) works fine yet. Please bring up
    >     >     >     >     the dpdk vhost-user interfaces and try to send some traffic between
    >     >     >     >     them to exercise the shared memory region from dpdk virtio-user,
    >     >     >     >     which may be "questionable".
    >     >     >     >
    >     >     >     >         VirtualFunctionEthernet4/10/4     1        down
    >     >     >     >         VirtualFunctionEthernet4/10/6     2        down
    >     >     >     >
    >     >     >     >     Steven
    >     >     >     >
    >     >     >     >     On 5/30/18, 4:41 PM, "Ravi Kerur" <rke...@gmail.com> wrote:
    >     >     >     >
    >     >     >     >         Hi Steve,
    >     >     >     >
    >     >     >     >         Thank you for your inputs. I added the feature-mask to see if it
    >     >     >     >         would help in setting up the queues correctly; it didn't, so I will
    >     >     >     >         remove it. I have tried the following combinations:
    >     >     >     >
    >     >     >     >         (1) VPP->vhost-user (on host) and DPDK/testpmd->virtio-user (in a
    >     >     >     >         container) -- VPP crashes
    >     >     >     >         (2) DPDK/testpmd->vhost-user (on host) and DPDK/testpmd->virtio-user
    >     >     >     >         (in a container) -- works fine
    >     >     >     >
    >     >     >     >         To use DPDK vhost-user inside VPP, I defined the configuration in
    >     >     >     >         startup.conf as mentioned by you, and it looks as follows:
    >     >     >     >
    >     >     >     >         unix {
    >     >     >     >           nodaemon
    >     >     >     >           log /var/log/vpp/vpp.log
    >     >     >     >           full-coredump
    >     >     >     >           cli-listen /run/vpp/cli.sock
    >     >     >     >           gid vpp
    >     >     >     >         }
    >     >     >     >
    >     >     >     >         api-segment {
    >     >     >     >           gid vpp
    >     >     >     >         }
    >     >     >     >
    >     >     >     >         cpu {
    >     >     >     >                 main-core 1
    >     >     >     >                 corelist-workers 6-9
    >     >     >     >         }
    >     >     >     >
    >     >     >     >         dpdk {
    >     >     >     >                 dev 0000:04:10.4
    >     >     >     >                 dev 0000:04:10.6
    >     >     >     >                 uio-driver vfio-pci
    >     >     >     >                 vdev net_vhost0,iface=/var/run/vpp/sock1.sock
    >     >     >     >                 vdev net_vhost1,iface=/var/run/vpp/sock2.sock
    >     >     >     >                 huge-dir /dev/hugepages_1GB
    >     >     >     >                 socket-mem 2048,2048
    >     >     >     >         }
    >     >     >     >
    >     >     >     >         From the VPP logs:
    >     >     >     >         dpdk: EAL init args: -c 3c2 -n 4 --vdev
    >     >     >     >         net_vhost0,iface=/var/run/vpp/sock1.sock --vdev
    >     >     >     >         net_vhost1,iface=/var/run/vpp/sock2.sock --huge-dir /dev/hugepages_1GB
    >     >     >     >         -w 0000:04:10.4 -w 0000:04:10.6 --master-lcore 1 --socket-mem 2048,2048
    >     >     >     >
    >     >     >     >         However, VPP doesn't create the interfaces at all
    >     >     >     >
    >     >     >     >         vpp# show interface
    >     >     >     >                       Name               Idx       State          Counter          Count
    >     >     >     >         VirtualFunctionEthernet4/10/4     1        down
    >     >     >     >         VirtualFunctionEthernet4/10/6     2        down
    >     >     >     >         local0                            0        down
    >     >     >     >
    >     >     >     >         Since it is a static mapping, I am assuming they should be created, correct?
    >     >     >     >
    >     >     >     >         Thanks.
    >     >     >     >
    >     >     >     >         On Wed, May 30, 2018 at 3:43 PM, Steven Luong (sluong) <slu...@cisco.com> wrote:
    >     >     >     >         > Ravi,
    >     >     >     >         >
    >     >     >     >         > First and foremost, get rid of the feature-mask option. I don't know
    >     >     >     >         > what 0x40400000 does for you. If that does not help, try testing it
    >     >     >     >         > with dpdk based vhost-user instead of VPP native vhost-user to make
    >     >     >     >         > sure that they can work well with each other first. To use dpdk
    >     >     >     >         > vhost-user, add a vdev command in startup.conf for each vhost-user
    >     >     >     >         > device that you have.
    >     >     >     >         >
    >     >     >     >         > dpdk { vdev net_vhost0,iface=/var/run/vpp/sock1.sock }
    >     >     >     >         >
    >     >     >     >         > dpdk based vhost-user interfaces are named VhostEthernet0,
    >     >     >     >         > VhostEthernet1, etc. Make sure you use the right interface name
    >     >     >     >         > to set the state to up.
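    >     >     >     >         >
    >     >     >     >         > For example (use whatever name "show interface" actually reports):
    >     >     >     >         >
    >     >     >     >         > vpp# set interface state VhostEthernet0 up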
    >     >     >     >         >
    >     >     >     >         > If dpdk based vhost-user does not work with testpmd either, it looks
    >     >     >     >         > like some problem with the way that you invoke testpmd.
    >     >     >     >         >
    >     >     >     >         > If dpdk based vhost-user works well with the same testpmd device
    >     >     >     >         > driver and vpp native vhost-user does not, I can set up something
    >     >     >     >         > similar to yours to look into it.
    >     >     >     >         >
    >     >     >     >         > The device driver, testpmd, is supposed to pass the shared memory
    >     >     >     >         > region to VPP for the TX/RX queues. It looks like VPP vhost-user might
    >     >     >     >         > have run into a bump there with using the shared memory (txvq->avail).
    >     >     >     >         >
    >     >     >     >         > Steven
    >     >     >     >         >
    >     >     >     >         > PS. vhost-user is not an optimal interface for containers. You may
    >     >     >     >         > want to look into using memif if you don't already know about it.
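    >     >     >     >         >
    >     >     >     >         > A minimal memif sketch (the CLI syntax varies by VPP release, so
    >     >     >     >         > treat this as a pointer rather than a recipe); one side is master,
    >     >     >     >         > the other slave, over a shared socket:
    >     >     >     >         >
    >     >     >     >         > vpp1# create interface memif id 0 master
    >     >     >     >         > vpp1# set interface state memif0/0 up
    >     >     >     >         > vpp2# create interface memif id 0 slave
    >     >     >     >         > vpp2# set interface state memif0/0 up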
    >     >     >     >         >
    >     >     >     >         >
    >     >     >     >         > On 5/30/18, 2:06 PM, "Ravi Kerur" <rke...@gmail.com> wrote:
    >     >     >     >         >
    >     >     >     >         >     I am not sure whether something is wrong with the setup or it is a
    >     >     >     >         >     bug in vpp; vpp crashes with vhost<-->virtio communication.
    >     >     >     >         >
    >     >     >     >         >     (1) Vhost-user interfaces are created and attached to the bridge
    >     >     >     >         >     domain as follows
    >     >     >     >         >
    >     >     >     >         >     create vhost socket /var/run/vpp/sock1.sock server feature-mask 0x40400000
    >     >     >     >         >     create vhost socket /var/run/vpp/sock2.sock server feature-mask 0x40400000
    >     >     >     >         >     set interface state VirtualEthernet0/0/0 up
    >     >     >     >         >     set interface state VirtualEthernet0/0/1 up
    >     >     >     >         >
    >     >     >     >         >     set interface l2 bridge VirtualEthernet0/0/0 1
    >     >     >     >         >     set interface l2 bridge VirtualEthernet0/0/1 1
    >     >     >     >         >
    >     >     >     >         >
    >     >     >     >         >     (2) DPDK/testpmd is started in a container to talk to the vpp
    >     >     >     >         >     vhost-user interfaces as follows
    >     >     >     >         >
    >     >     >     >         >     docker run -it --privileged -v /var/run/vpp/sock1.sock:/var/run/usvhost1 \
    >     >     >     >         >       -v /var/run/vpp/sock2.sock:/var/run/usvhost2 \
    >     >     >     >         >       -v /dev/hugepages:/dev/hugepages dpdk-app-testpmd \
    >     >     >     >         >       ./bin/testpmd -c 0x3 -n 4 --log-level=9 -m 64 --no-pci --single-file-segments \
    >     >     >     >         >       --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01 \
    >     >     >     >         >       --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 \
    >     >     >     >         >       -- -i
    >     >     >     >         >
    >     >     >     >         >     (3) show vhost-user VirtualEthernet0/0/1
    >     >     >     >         >     Virtio vhost-user interfaces
    >     >     >     >         >     Global:
    >     >     >     >         >       coalesce frames 32 time 1e-3
    >     >     >     >         >       number of rx virtqueues in interrupt mode: 0
    >     >     >     >         >     Interface: VirtualEthernet0/0/1 (ifindex 4)
    >     >     >     >         >     virtio_net_hdr_sz 10
    >     >     >     >         >      features mask (0x40400000):
    >     >     >     >         >      features (0x0):
    >     >     >     >         >       protocol features (0x0)
    >     >     >     >         >
    >     >     >     >         >      socket filename /var/run/vpp/sock2.sock type server errno "Success"
    >     >     >     >         >
    >     >     >     >         >      rx placement:
    >     >     >     >         >      tx placement: spin-lock
    >     >     >     >         >        thread 0 on vring 0
    >     >     >     >         >        thread 1 on vring 0
    >     >     >     >         >        thread 2 on vring 0
    >     >     >     >         >        thread 3 on vring 0
    >     >     >     >         >        thread 4 on vring 0
    >     >     >     >         >
    >     >     >     >         >      Memory regions (total 1)
    >     >     >     >         >      region fd    guest_phys_addr    memory_size        userspace_addr     mmap_offset        mmap_addr
    >     >     >     >         >      ====== ===== ================== ================== ================== ================== ==================
    >     >     >     >         >       0     55    0x00007ff7c0000000 0x0000000040000000 0x00007ff7c0000000 0x0000000000000000 0x00007ffbc0000000
    >     >     >     >         >
    >     >     >     >         >     vpp# show vhost-user VirtualEthernet0/0/0
    >     >     >     >         >     Virtio vhost-user interfaces
    >     >     >     >         >     Global:
    >     >     >     >         >       coalesce frames 32 time 1e-3
    >     >     >     >         >       number of rx virtqueues in interrupt mode: 0
    >     >     >     >         >     Interface: VirtualEthernet0/0/0 (ifindex 3)
    >     >     >     >         >     virtio_net_hdr_sz 10
    >     >     >     >         >      features mask (0x40400000):
    >     >     >     >         >      features (0x0):
    >     >     >     >         >       protocol features (0x0)
    >     >     >     >         >
    >     >     >     >         >      socket filename /var/run/vpp/sock1.sock type server errno "Success"
    >     >     >     >         >
    >     >     >     >         >      rx placement:
    >     >     >     >         >      tx placement: spin-lock
    >     >     >     >         >        thread 0 on vring 0
    >     >     >     >         >        thread 1 on vring 0
    >     >     >     >         >        thread 2 on vring 0
    >     >     >     >         >        thread 3 on vring 0
    >     >     >     >         >        thread 4 on vring 0
    >     >     >     >         >
    >     >     >     >         >      Memory regions (total 1)
    >     >     >     >         >      region fd    guest_phys_addr    memory_size        userspace_addr     mmap_offset        mmap_addr
    >     >     >     >         >      ====== ===== ================== ================== ================== ================== ==================
    >     >     >     >         >       0     51    0x00007ff7c0000000 0x0000000040000000 0x00007ff7c0000000 0x0000000000000000 0x00007ffc00000000
    >     >     >     >         >
    >     >     >     >         >     (4) vpp stack trace
    >     >     >     >         >     Program received signal SIGSEGV, Segmentation fault.
    >     >     >     >         >     [Switching to Thread 0x7ffd0e090700 (LWP 46570)]
    >     >     >     >         >     0x00007ffff7414642 in vhost_user_if_input (mode=VNET_HW_INTERFACE_RX_MODE_POLLING, node=0x7fffb76bab00, qid=<optimized out>, vui=0x7fffb6739700, vum=0x7ffff78f4480 <vhost_user_main>, vm=0x7fffb672a9c0)
    >     >     >     >         >         at /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1596
    >     >     >     >         >     1596      if (PREDICT_FALSE (txvq->avail->flags & 0xFFFE))
    >     >     >     >         >     (gdb) bt
    >     >     >     >         >     #0  0x00007ffff7414642 in vhost_user_if_input (mode=VNET_HW_INTERFACE_RX_MODE_POLLING, node=0x7fffb76bab00, qid=<optimized out>, vui=0x7fffb6739700, vum=0x7ffff78f4480 <vhost_user_main>, vm=0x7fffb672a9c0)
    >     >     >     >         >         at /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1596
    >     >     >     >         >     #1  vhost_user_input (f=<optimized out>, node=<optimized out>, vm=<optimized out>)
    >     >     >     >         >         at /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1947
    >     >     >     >         >     #2  vhost_user_input_avx2 (vm=<optimized out>, node=<optimized out>, frame=<optimized out>)
    >     >     >     >         >         at /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1972
    >     >     >     >         >     #3  0x00007ffff7954d35 in dispatch_node (last_time_stamp=12391212490024174, frame=0x0, dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INPUT, node=0x7fffb76bab00, vm=0x7fffb672a9c0)
    >     >     >     >         >         at /var/venom/vpp/build-data/../src/vlib/main.c:988
    >     >     >     >         >     #4  vlib_main_or_worker_loop (is_main=0, vm=0x7fffb672a9c0)
    >     >     >     >         >         at /var/venom/vpp/build-data/../src/vlib/main.c:1507
    >     >     >     >         >     #5  vlib_worker_loop (vm=0x7fffb672a9c0) at /var/venom/vpp/build-data/../src/vlib/main.c:1641
    >     >     >     >         >     #6  0x00007ffff6ad25d8 in clib_calljmp () at /var/venom/vpp/build-data/../src/vppinfra/longjmp.S:110
    >     >     >     >         >     #7  0x00007ffd0e08fdb0 in ?? ()
    >     >     >     >         >     #8  0x00007fffb4436edd in eal_thread_loop (arg=<optimized out>)
    >     >     >     >         >         at /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_eal/linuxapp/eal/eal_thread.c:153
    >     >     >     >         >     #9  0x0000000000000000 in ?? ()
    >     >     >     >         >     (gdb) frame 0
    >     >     >     >         >     #0  0x00007ffff7414642 in vhost_user_if_input (mode=VNET_HW_INTERFACE_RX_MODE_POLLING, node=0x7fffb76bab00, qid=<optimized out>, vui=0x7fffb6739700, vum=0x7ffff78f4480 <vhost_user_main>, vm=0x7fffb672a9c0)
    >     >     >     >         >         at /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1596
    >     >     >     >         >     1596      if (PREDICT_FALSE (txvq->avail->flags & 0xFFFE))
    >     >     >     >         >     (gdb) p txvq
    >     >     >     >         >     $1 = (vhost_user_vring_t *) 0x7fffb6739ac0
    >     >     >     >         >     (gdb) p *txvq
    >     >     >     >         >     $2 = {cacheline0 = 0x7fffb6739ac0 "?", qsz_mask = 255, last_avail_idx = 0, last_used_idx = 0,
    >     >     >     >         >       n_since_last_int = 0, desc = 0x7ffbfff97000, avail = 0x7ffbfff98000, used = 0x7ffbfff99000,
    >     >     >     >         >       int_deadline = 0, started = 1 '\001', enabled = 0 '\000', log_used = 0 '\000',
    >     >     >     >         >       cacheline1 = 0x7fffb6739b00 "????\n", errfd = -1, callfd_idx = 10, kickfd_idx = 14,
    >     >     >     >         >       log_guest_addr = 0, mode = 1}
    >     >     >     >         >     (gdb) p *(txvq->avail)
    >     >     >     >         >     Cannot access memory at address 0x7ffbfff98000
    >     >     >     >         >     (gdb)
    >     >     >     >         >
    >     >     >     >         >     On Tue, May 29, 2018 at 10:47 AM, Ravi Kerur <rke...@gmail.com> wrote:
    >     >     >     >         >     > Steve,
    >     >     >     >         >     >
    >     >     >     >         >     > Thanks for the inputs on debugging and gdb. I am using gdb on my
    >     >     >     >         >     > development system to debug the issue. I would like to have reliable
    >     >     >     >         >     > core generation on the system on which I don't have access to
    >     >     >     >         >     > install gdb. I installed corekeeper and it still doesn't generate a
    >     >     >     >         >     > core. I am running vpp inside a VM (VirtualBox/vagrant); I am not
    >     >     >     >         >     > sure if I need to set something inside the vagrant config file.
    >     >     >     >         >     >
    >     >     >     >         >     > dpkg -l corekeeper
    >     >     >     >         >     > Desired=Unknown/Install/Remove/Purge/Hold
    >     >     >     >         >     > | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
    >     >     >     >         >     > |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
    >     >     >     >         >     > ||/ Name                 Version         Architecture    Description
    >     >     >     >         >     > +++-====================-===============-===============-==============================================
    >     >     >     >         >     > ii  corekeeper           1.6             amd64           enable core files and report crashes to the system
    >     >     >     >         >     >
    >     >     >     >         >     > Thanks.
    >     >     >     >         >     >
    >     >     >     >         >     > On Tue, May 29, 2018 at 9:38 AM, Steven Luong (sluong) <slu...@cisco.com> wrote:
    >     >     >     >         >     >> Ravi,
    >     >     >     >         >     >>
    >     >     >     >         >     >> I installed corekeeper and the core file is kept in /var/crash. But
    >     >     >     >         >     >> why not use gdb to attach to the VPP process? To turn on VPP
    >     >     >     >         >     >> vhost-user debugging, type "debug vhost-user on" at the VPP prompt.
    >     >     >     >         >     >>
    >     >     >     >         >     >> Steven
    >     >     >     >         >     >>
    >     >     >     >         >     >> On 5/29/18, 9:10 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur" <vpp-dev@lists.fd.io on behalf of rke...@gmail.com> wrote:
    >     >     >     >         >     >>
    >     >     >     >         >     >>     Hi Marco,
    >     >     >     >         >     >>
    >     >     >     >         >     >>
    >     >     >     >         >     >>     On Tue, May 29, 2018 at 6:30 AM, Marco Varlese <mvarl...@suse.de> wrote:
    >     >     >     >         >     >>     > Ravi,
    >     >     >     >         >     >>     >
    >     >     >     >         >     >>     > On Sun, 2018-05-27 at 12:20 -0700, Ravi Kerur wrote:
    >     >     >     >         >     >>     >> Hello,
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >> I have a VM (Ubuntu 16.04.4 x86_64) with 2 cores and 4G RAM. I have
    >     >     >     >         >     >>     >> installed VPP successfully on it. Later I created vhost-user
    >     >     >     >         >     >>     >> interfaces via
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >> create vhost socket /var/run/vpp/sock1.sock server
    >     >     >     >         >     >>     >> create vhost socket /var/run/vpp/sock2.sock server
    >     >     >     >         >     >>     >> set interface state VirtualEthernet0/0/0 up
    >     >     >     >         >     >>     >> set interface state VirtualEthernet0/0/1 up
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >> set interface l2 bridge VirtualEthernet0/0/0 1
    >     >     >     >         >     >>     >> set interface l2 bridge VirtualEthernet0/0/1 1
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >> I then run DPDK/testpmd inside a container, which uses the
    >     >     >     >         >     >>     >> virtio-user interfaces, with the following command
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >> docker run -it --privileged -v /var/run/vpp/sock1.sock:/var/run/usvhost1 \
    >     >     >     >         >     >>     >>   -v /var/run/vpp/sock2.sock:/var/run/usvhost2 \
    >     >     >     >         >     >>     >>   -v /dev/hugepages:/dev/hugepages dpdk-app-testpmd \
    >     >     >     >         >     >>     >>   ./bin/testpmd -c 0x3 -n 4 --log-level=9 -m 64 --no-pci --single-file-segments \
    >     >     >     >         >     >>     >>   --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:01:00:01:01:01 \
    >     >     >     >         >     >>     >>   --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:01:00:01:01:02 \
    >     >     >     >         >     >>     >>   -- -i
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >> VPP Vnet crashes with the following message
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >> May 27 11:44:00 localhost vnet[6818]: received signal SIGSEGV, PC 0x7fcca4620187, faulting address 0x7fcb317ac000
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >> Questions:
    >     >     >     >         >     >>     >> (1) I have 'ulimit -c unlimited' and /etc/vpp/startup.conf has
    >     >     >     >         >     >>     >> unix {
    >     >     >     >         >     >>     >>   nodaemon
    >     >     >     >         >     >>     >>   log /var/log/vpp/vpp.log
    >     >     >     >         >     >>     >>   full-coredump
    >     >     >     >         >     >>     >>   cli-listen /run/vpp/cli.sock
    >     >     >     >         >     >>     >>   gid vpp
    >     >     >     >         >     >>     >> }
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >> But I couldn't locate the core file?
    >     >     >     >         >     >>     > The location of the coredump file depends on your system configuration.
    >     >     >     >         >     >>     >
    >     >     >     >         >     >>     > Please, check "cat 
/proc/sys/kernel/core_pattern"
    >     >     >     >         >     >>     >
    >     >     >     >         >     >>     > If you have systemd-coredump in the output of the above command, then
    >     >     >     >         >     >>     > likely the location of the coredump files is "/var/lib/systemd/coredump/"
    >     >     >     >         >     >>     >
    >     >     >     >         >     >>     > You can also change where your system places the coredump files:
    >     >     >     >         >     >>     > echo '/PATH_TO_YOUR_LOCATION/core_%e.%p' | sudo tee /proc/sys/kernel/core_pattern
    >     >     >     >         >     >>     >
    >     >     >     >         >     >>     > See if that helps...
    >     >     >     >         >     >>     >
    >     >     >     >         >     >>
    >     >     >     >         >     >>     Initially '/proc/sys/kernel/core_pattern' was set to 'core'. I changed
    >     >     >     >         >     >>     it to 'systemd-coredump'. Still no core is generated. VPP crashes:
    >     >     >     >         >     >>
    >     >     >     >         >     >>     May 29 08:54:34 localhost vnet[4107]: received signal SIGSEGV, PC 0x7f0167751187, faulting address 0x7efff43ac000
    >     >     >     >         >     >>     May 29 08:54:34 localhost systemd[1]: vpp.service: Main process exited, code=killed, status=6/ABRT
    >     >     >     >         >     >>     May 29 08:54:34 localhost systemd[1]: vpp.service: Unit entered failed state.
    >     >     >     >         >     >>     May 29 08:54:34 localhost systemd[1]: vpp.service: Failed with result 'signal'.
    >     >     >     >         >     >>
    >     >     >     >         >     >>
    >     >     >     >         >     >>     cat /proc/sys/kernel/core_pattern
    >     >     >     >         >     >>     systemd-coredump
    >     >     >     >         >     >>
    >     >     >     >         >     >>
    >     >     >     >         >     >>     ulimit -a
    >     >     >     >         >     >>     core file size          (blocks, -c) unlimited
    >     >     >     >         >     >>     data seg size           (kbytes, -d) unlimited
    >     >     >     >         >     >>     scheduling priority             (-e) 0
    >     >     >     >         >     >>     file size               (blocks, -f) unlimited
    >     >     >     >         >     >>     pending signals                 (-i) 15657
    >     >     >     >         >     >>     max locked memory       (kbytes, -l) 64
    >     >     >     >         >     >>     max memory size         (kbytes, -m) unlimited
    >     >     >     >         >     >>     open files                      (-n) 1024
    >     >     >     >         >     >>     pipe size            (512 bytes, -p) 8
    >     >     >     >         >     >>     POSIX message queues     (bytes, -q) 819200
    >     >     >     >         >     >>     real-time priority              (-r) 0
    >     >     >     >         >     >>     stack size              (kbytes, -s) 8192
    >     >     >     >         >     >>     cpu time               (seconds, -t) unlimited
    >     >     >     >         >     >>     max user processes              (-u) 15657
    >     >     >     >         >     >>     virtual memory          (kbytes, -v) unlimited
    >     >     >     >         >     >>     file locks                      (-x) unlimited
    >     >     >     >         >     >>
    >     >     >     >         >     >>     cd /var/lib/systemd/coredump/
    >     >     >     >         >     >>     root@localhost:/var/lib/systemd/coredump# ls
    >     >     >     >         >     >>     root@localhost:/var/lib/systemd/coredump#
    >     >     >     >         >     >>
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >> (2) How to enable debugs? I have used 'make build' but there are no
    >     >     >     >         >     >>     >> additional logs other than those shown below.
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >> VPP logs from /var/log/syslog are shown below
    >     >     >     >         >     >>     >> cat /var/log/syslog
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: vlib_plugin_early_init:361: plugin path /usr/lib/vpp_plugins:/usr/lib64/vpp_plugins
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: abf_plugin.so (ACL based Forwarding)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: acl_plugin.so (Access Control Lists)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual Function (AVF) Device Plugin)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:191: Loaded plugin: cdp_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: gbp_plugin.so (Group Based Policy)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: gtpu_plugin.so (GTPv1-U)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: igmp_plugin.so (IGMP messaging)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: ioam_plugin.so (Inbound OAM)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: l2e_plugin.so (L2 Emulation)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: lacp_plugin.so (Link Aggregation Control Protocol)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: lb_plugin.so (Load Balancer)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: nat_plugin.so (Network Address Translation)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: pppoe_plugin.so (PPPoE)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: srv6ad_plugin.so (Dynamic SRv6 proxy)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: srv6am_plugin.so (Masquerading SRv6 proxy)
    >     >     >     >         >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: srv6as_plugin.so (Static SRv6 proxy)
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: tlsmbedtls_plugin.so (mbedtls based TLS Engine)
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: tlsopenssl_plugin.so (openssl based TLS Engine)
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: dpdk_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: dpdk_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: lb_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: flowprobe_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: stn_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: nat_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: udp_ping_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: pppoe_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: lacp_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: lb_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: acl_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_export_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_trace_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: vxlan_gpe_ioam_export_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: gtpu_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: cdp_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_vxlan_gpe_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: memif_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_pot_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: flowprobe_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: stn_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: nat_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: udp_ping_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: pppoe_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: lacp_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: acl_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_export_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_trace_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: vxlan_gpe_ioam_export_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: gtpu_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: cdp_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_vxlan_gpe_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: memif_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_pot_test_plugin.so
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: dpdk: EAL init args: -c 1 -n 4 --no-pci --huge-dir /dev/hugepages --master-lcore 0 --socket-mem 256,0
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: dpdk: EAL init args: -c 1 -n 4 --no-pci --huge-dir /dev/hugepages --master-lcore 0 --socket-mem 256,0
    >     >     >     >         >     >>     >> May 27 11:40:29 localhost vnet[6818]: dpdk_ipsec_process:1019: not enough DPDK crypto resources, default to OpenSSL
    >     >     >     >         >     >>     >> May 27 11:43:19 localhost vnet[6818]: show vhost-user: unknown input `detail
    >     >     >     >         >     >>     >> May 27 11:44:00 localhost vnet[6818]: received signal SIGSEGV, PC 0x7fcca4620187, faulting address 0x7fcb317ac000
    >     >     >     >         >     >>     >> May 27 11:44:00 localhost systemd[1]: vpp.service: Main process exited, code=killed, status=6/ABRT
    >     >     >     >         >     >>     >> May 27 11:44:00 localhost systemd[1]: vpp.service: Unit entered failed state.
    >     >     >     >         >     >>     >> May 27 11:44:00 localhost systemd[1]: vpp.service: Failed with result 'signal'.
    >     >     >     >         >     >>     >> May 27 11:44:00 localhost systemd[1]: vpp.service: Service hold-off time over, scheduling restart
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>
    >     >     >     >         >     >>     Thanks,
    >     >     >     >         >     >>     Ravi
    >     >     >     >         >     >>
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >> Thanks.
    >     >     >     >         >     >>     > Cheers,
    >     >     >     >         >     >>     > Marco
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     >>
    >     >     >     >         >     >>     > --
    >     >     >     >         >     >>     > Marco V
    >     >     >     >         >     >>     >
    >     >     >     >         >     >>     > SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
    >     >     >     >         >     >>     > HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg
    >     >     >     >         >     >>
    >     >     >     >         >     >>
    >     >     >     >         >     >>
    >     >     >     >         >     >>
    >     >     >     >         >     >>
    >     >     >     >         >     >
    >     >     >     >         >     >
    >     >     >     >         >     >
    >     >     >     >         >
    >     >     >     >         >
    >     >     >     >
    >     >     >     >
    >     >     >     >
    >     >     >     >
    >     >     >     >
    >     >     >     >
    >     >     >     >
    >     >     >
    >     >     >     
    >     >     >
    >     >     >
    >     >     >
    >     >
    >     >
    >
    >
    

