Re: [vpp-dev] Vhost-user interface not working

2023-03-02 Thread steven luong via lists.fd.io
It is likely that you are missing memAccess=’shared’

https://fdio-vpp.readthedocs.io/en/latest/usecases/vhost/xmlexample.html#:~:text=%3Ccell%20id%3D%270%27%20cpus%3D%270%27%20memory%3D%27262144%27%20unit%3D%27KiB%27%20memAccess%3D%27shared%27/%3E
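The key pieces in that example are hugepage-backed guest memory and a guest NUMA cell marked memAccess='shared' so the vhost-user backend can map the VM's memory, roughly:

  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <cpu>
    <numa>
      <cell id='0' cpus='0' memory='262144' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>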

From:  on behalf of Benjamin Vandendriessche 

Reply-To: "vpp-dev@lists.fd.io" 
Date: Thursday, March 2, 2023 at 9:24 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Vhost-user interface not working

Hello,

I followed the guide here 
https://fdio-vpp.readthedocs.io/en/latest/usecases/vhost/vhost.html and ran 
into an issue: the VM running over libvirt (6.0.0) is not receiving the 
traffic from the bare-metal machine.

I'm using Ubuntu 20.04.3 LTS and I run VPP version:

vpp# show ver
vpp v22.06.1-release built by root on bv at 2023-03-02T14:39:16

The XML of the VM:


  [The libvirt domain XML was largely stripped by the mailing-list archive. The
  surviving fragments show: name vm1, uuid 32dede65-6db7-4db4-9093-f6484ea40836,
  memory/currentMemory 1048576 KiB, 2 vCPUs, hvm, emulator
  /usr/bin/qemu-system-x86_64, on_poweroff destroy / on_reboot restart /
  on_crash destroy.]

I can see the vhost interface has been properly set up:

vpp# sh vhost-user
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  Number of rx virtqueues in interrupt mode: 0
  Number of GSO interfaces: 0
  Thread 0: Polling queue count 1
Interface: VirtualEthernet0/0/0 (ifindex 2)
  Number of qids 2
virtio_net_hdr_sz 12
 features mask (0xfffbdfffa27c):
 features (0x150208000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_RING_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
   VIRTIO_F_VERSION_1 (32)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)

 socket filename /tmp/vm00.sock type client errno "Success"

 rx placement:
   thread 0 on vring 1, polling
 tx placement
   threads 0 on vring 0: lock-free

 Memory regions (total 1)
 region  fd   guest_phys_addr  memory_size  userspace_addr  mmap_offset  mmap_addr
 ======  ===  ===============  ===========  ==============  ===========  =========
      0  192  0x               0x4000       0x7fb34be0      0x           0x7f0fb000

 Virtqueue 0 (TX)
  global TX queue index 3
  qsz 1024 last_avail_idx 0 last_used_idx 0 last_kick 0
  avail.flags 0 avail event idx 0 avail.idx 0 used.flags 1 used event idx 0 
used.idx 0
  kickfd 193 callfd 194 errfd -1

 Virtqueue 1 (RX)
  global RX queue index 3
  qsz 1024 last_avail_idx 0 last_used_idx 0 last_kick 0
  avail.flags 0 avail event idx 0 avail.idx 0 used.flags 1 used event idx 0 
used.idx 0
  kickfd 190 callfd 195 errfd -1

And all interfaces are UP:

vpp# sh int
              Name               Idx   State  MTU (L3/IP4/IP6/MPLS)   Counter        Count
TenGigabitEthernet44/0/0          1     up         9000/0/0/0         rx packets       754
                                                                      rx bytes       45240
                                                                      drops            144
VirtualEthernet0/0/0              2     up         9000/0/0/0         tx packets       610
                                                                      tx bytes       36600
                                                                      drops            610
                                                                      tx-error         144
local0                            0    down         0/0/0/0

However, I see errors on the VirtualEthernet0/0/0 TX path:

vpp# show errors
   Count                  Node                           Reason                   Severity
     784               l2-output                   L2 output packets                error
     784               l2-learn                    L2 learn packets                 error
       1               l2-learn                    L2 learn misses                  error
     784               l2-input                    L2 input packets                 error
     784               l2-flood                    L2 flood packets                 error
     640     VirtualEthernet0/0/0-tx       tx packet drops (no available descr      error
     144     VirtualEthernet0/0/0-output         interface is down                  error

I've been trying with multiple VMs and got the same result.

How can I troubleshoot this issue?

Thanks,
Best regards,
Benjamin



Re: [vpp-dev] VPP Policer API Memory Leak

2023-02-21 Thread steven luong via lists.fd.io
I bet you didn’t limit the number of API trace entries. Try limiting the number of 
API trace entries that VPP keeps with nitems and give it a reasonable number; the 
largest traceback in your output is vl_msg_api_trace, i.e. the API trace vector 
growing with every API message.

api-trace {
  on
  nitems 65535
}

Steven

From:  on behalf of "efimochki...@gmail.com" 

Reply-To: "vpp-dev@lists.fd.io" 
Date: Tuesday, February 21, 2023 at 7:14 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VPP Policer API Memory Leak

Hi Dear Developers,

I am testing creating and deleting policers and it looks like there is a 
memory leak.

VPP Version: v22.10-release


My simple script:

#!/bin/env python

from vpp_papi import VPPApiClient
from vpp_papi import VppEnum
import os
import fnmatch
import sys
from time import sleep

vpp_json_dir = '/usr/share/vpp/api/'

# construct a list of all the json api files

jsonfiles = []

for root, dirnames, filenames in os.walk(vpp_json_dir):
  for filename in fnmatch.filter(filenames, '*.api.json'):
    jsonfiles.append(os.path.join(root, filename))

vpp = VPPApiClient(apifiles=jsonfiles, server_address='/run/vpp/api.sock')
vpp.connect("test-client")

r = vpp.api.show_version()
print('VPP version is %s' % r.version)

while True:
  ### Create 10 policers
  for i in range(10):
    name = "policer_" + str(i)
    policer_add_del = vpp.api.policer_add_del(is_add=True, name=name,
      cb=2500, cir=1000, eb=3000, eir=0, rate_type=0, round_type=1, type=1)
    print(policer_add_del)
  ### Delete 10 policers
  for i in range(10):
    name = "policer_" + str(i)
    policer_add_del = vpp.api.policer_add_del(is_add=False, name=name,
      cb=2500, cir=1000, eb=3000, eir=0, rate_type=0, round_type=1, type=1)
    print(policer_add_del)

The memory usage is growing steadily and very fast. It takes less than 10 
minutes to consume ~100 MB of the main heap.

vpp# show memory main-heap
Thread 0 vpp_main
  base 0x7efb0a117000, size 8g, locked, unmap-on-destroy, traced, name 'main heap'
  page stats: page-size 4K, total 2097152, mapped 116134, not-mapped 1450398, unknown 530620
  numa 0: 115788 pages, 452.29m bytes
  numa 1: 346 pages, 1.35m bytes
total: 7.99G, used: 188.26M, free: 7.82G, trimmable: 7.82G

  Bytes      Count    Sample           Traceback
  177448814781 0x7efb15d59570 _vec_alloc_internal + 0x6b
  vl_msg_api_trace + 0x4a4
  vl_msg_api_socket_handler + 0x10f
  vl_socket_process_api_msg + 0x1d
  0x7efd0c177171
  0x7efd0a588837
  0x7efd0a48d6a8
   2912721 0x7efb15cf4190 _vec_realloc_internal + 0x89
  vl_msg_api_trace + 0x529
  vl_msg_api_socket_handler + 0x10f
  vl_socket_process_api_msg + 0x1d
  0x7efd0c177171
  0x7efd0a588837
  0x7efd0a48d6a8
   178928 7390 0x7efb15d595f0 _vec_alloc_internal + 0x6b
  va_format + 0x2318
  format + 0x83
  0x7efd0a896b91
  vl_msg_api_socket_handler + 0x226
  vl_socket_process_api_msg + 0x1d
  0x7efd0c177171
  0x7efd0a588837
  0x7efd0a48d6a8
858001 0x7efb135ca840 _vec_realloc_internal + 0x89
  vl_socket_api_send + 0x720
  vl_api_sockclnt_create_t_handler + 0x2e2
  vl_msg_api_socket_handler + 0x226
  vl_socket_process_api_msg + 0x1d
  0x7efd0c177171
  0x7efd0a588837
  0x7efd0a48d6a8
 41041 0x7efb13dcf220 _vec_alloc_internal + 0x6b
  0x7efd0a5e0965
  0x7efd0a5f05c4
  0x7efd0a584978
  0x7efd0a5845f5
  0x7efd0a5f213b
  0x7efd0a48d6a8
 1920   16 0x7efb13e62a40 _vec_realloc_internal + 0x89
  0x7efd0a482d1d
  va_format + 0xf62
  format + 0x83
  va_format + 0x1041
  format + 0x83
  va_format + 0x1041
  vlib_log + 0x2c6
  0x7efb08b033aa
  0x7efb08b031c9
  0x7efb08b0cc6d
  0x7efb08b988ee

vpp# show memory 

Re: [vpp-dev] VPP Hardware Interface Output show Carrier Down

2023-02-17 Thread steven luong via lists.fd.io
Sunil is using the dpdk vmxnet3 driver, so he doesn’t need to load the VPP native 
vmxnet3 plugin. Use gdb on the dpdk code to see why it returns -22 when VPP adds 
the NIC to dpdk.

rte_eth_dev_start[port:1, errno:-22]: Unknown error -22
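For example, with a debug build that still has DPDK symbols (a sketch, adjust to your setup):

$ gdb -p $(pidof vpp)
(gdb) break rte_eth_dev_start
(gdb) continue
# reproduce the interface bring-up, then step/finish to see where -22 (EINVAL) is returned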

Steven

From:  on behalf of Guangming 
Reply-To: "vpp-dev@lists.fd.io" 
Date: Friday, February 17, 2023 at 6:55 AM
To: vpp-dev , sunil61090 
Subject: Re: [vpp-dev] VPP Hardware Interface Output show Carrier Down


You can use "vppctl show log" to display more startup messages.
VMXNET3 needs vmxnet3_plugin.so to be loaded.

zhangguangm...@baicells.com

From: sunil kumar
Date: 2023-02-17 21:46
To: vpp-dev
Subject: [vpp-dev] VPP Hardware Interface Output show Carrier Down
Hi,

We are observing that the VPP interface state is "carrier down" in the output of 
the command "vppctl show hardware". This is observed while starting VPP:

vppctl show hardware output:
==
device_c/0/0   2down  device_c/0/0
  Link speed: 10 Gbps
  Ethernet address 00:50:56:01:5c:63
  VMware VMXNET3
carrier down
flags: admin-up pmd rx-ip4-cksum
rx: queues 2 (max 16), desc 4096 (min 128 max 4096 align 1)
tx: queues 2 (max 8), desc 4096 (min 512 max 4096 align 1)
pci: device 15ad:07b0 subsystem 15ad:07b0 address :0c:00.00 numa 0
max rx packet len: 16384
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
   vlan-filter jumbo-frame scatter
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   multi-segs
tx offload active: multi-segs
rss avail: ipv4-tcp ipv4 ipv6-tcp ipv6
rss active:none
tx burst function: vmxnet3_xmit_pkts
rx burst function: vmxnet3_recv_pkts
  Errors:
rte_eth_dev_start[port:1, errno:-22]: Unknown error -22

We suspect the following reasons:
1) Is there any issue with the vfio-pci driver while unloading and loading it again?
2) Is any corruption happening during initialization?

I am attaching the startup.conf and vppctl command output files with this mail:

Can anybody suggest a way to resolve this issue?

Thanks,
Sunil Kumar




Re: [vpp-dev] VPP logging does not logs API calls debug message

2023-02-04 Thread steven luong via lists.fd.io
Did you try
vppctl show log

Steven

From:  on behalf of "Tripathi, VinayX" 

Reply-To: "vpp-dev@lists.fd.io" 
Date: Saturday, February 4, 2023 at 4:19 AM
To: "vpp-dev@lists.fd.io" 
Cc: "Ji, Kai" 
Subject: Re: [vpp-dev] VPP logging does not logs API calls debug message

Hi Team ,
Any suggestion would be highly appreciated.

Thanks
Vinay

From: Tripathi, VinayX
Sent: Friday, February 3, 2023 6:28 PM
To: 'vpp-dev@lists.fd.io' 
Cc: Ji, Kai 
Subject: VPP logging does not logs API calls debug message


Hi Team,

I have noticed that VPP infra/plugin/node/driver related debug messages do not 
get logged into /var/log/vpp/vpp.vpp.log.
It only logs CLI commands triggered from the VPP console. Please find the 
configuration used below.
Kindly suggest if I’m missing any configuration.

Using VPP version :- vpp v23.02-

unix {
   interactive
   log /var/log/vpp/vpp.log
   full-coredump
   cli-listen /var/log/vpp/cli.sock
   cli-pager-buffer-limit 1
   # cli-listen localhost:5002
   #exec /root/vinaytrx/vpp/dpdk-pmd.bash
}

api-trace {
  on
}
logging {
   default-syslog-log-level debug
   default-log-level debug
  # class dpdk/cryptodev { rate-limit 100 level debug syslog-level error }
}

Log messages from /var/log/vpp/vpp.log
2023/01/31 08:11:53:331[0]: show interface
2023/01/31 08:12:08:100[0]: set int ip address eth0 192.168.1.0/30
2023/01/31 08:14:07:757[0]: ipsec
2023/01/31 08:14:10:946[0]: ipsec ?
2023/01/31 08:15:04:185[0]: create interface ?
2023/01/31 08:19:46:385[0]: create host-interface ?
2023/01/31 08:38:07:979[0]: set ip ?
2023/01/31 08:44:08:455[0]: set interface ip ?
2023/01/31 08:59:29:253[0]: show interface '

Thanks
Vinay

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.




Re: [vpp-dev] LACP issues w/ cdma/connectX 6

2022-12-05 Thread steven luong via lists.fd.io
Type
show lacp details
to see if the member interface that is not forming the bundle receives and 
sends LACP PDUs.
Type
show hardware
to see if both member interfaces have the same mac address.

From:  on behalf of Eyle Brinkhuis 
Reply-To: "vpp-dev@lists.fd.io" 
Date: Monday, December 5, 2022 at 5:23 AM
To: "Benoit Ganne (bganne)" 
Cc: vpp-dev 
Subject: [vpp-dev] LACP issues w/ cdma/connectX 6

Hi Ben,

We have a few new boxes that have a connectX 6 fitted (Mellanox ConnectX-6 Dx 
100GbE QSFP56 2-port PCIe 4 Ethernet Adapter) and we run into an issue with 
LACP (seems a popular topic these days.. :-)). We are not able to get LACP up, 
while running this:


create int rdma host-if ens3f0 name rdma0

create int rdma host-if ens3f1 name rdma1

set interface state rdma1 up

set interface state rdma0 up

create bond mode lacp

bond add BondEthernet0 rdma0

bond add BondEthernet0 rdma1

set int state BondEthernet0 up



On the switch side (Mellanox sn2700) everything is the same as with our 
Mellanox CX5 NICs (also on RDMA, with same lacp configuration as above). CX5 
stuff all works a charm.



For the CX6, we receive the LACP PDUs from RDMA, and they seem to be processed:



Packet 1



00:00:52:494905: rdma-input

  rdma: rdma0 (1) next-node bond-input

00:00:52:494920: bond-input

  src b8:59:9f:67:fa:ba, dst 01:80:c2:00:00:02, rdma0 -> rdma0

00:00:52:494926: ethernet-input

  SLOW_PROTOCOLS: b8:59:9f:67:fa:ba -> 01:80:c2:00:00:02

00:00:52:494930: lacp-input

  rdma0:

Length: 110

  LACPv1

  Actor Information TLV: length 20

System b8:59:9f:67:fa:80

System priority 32768

Key 13834

Port priority 32768

Port number 25

State 0x45

  LACP_STATE_LACP_ACTIVITY (0)

  LACP_STATE_AGGREGATION (2)

  LACP_STATE_DEFAULTED (6)

  Partner Information TLV: length 20

System 00:00:00:00:00:00

System priority 0

Key 0

Port priority 0

Port number 0

State 0x7c

  LACP_STATE_AGGREGATION (2)

  LACP_STATE_SYNCHRONIZATION (3)

  LACP_STATE_COLLECTIING (4)

  LACP_STATE_DISTRIBUTING (5)

  LACP_STATE_DEFAULTED (6)

  0x:  0101 0114 8000 b859 9f67 fa80 360a 8000

  0x0010:  0019 4500  0214    

  0x0020:     7c00  0310  

  0x0030:         

  0x0040:         

  0x0050:         

  0x0060:        

00:00:52:494936: error-drop

  rx:rdma0

00:00:52:494937: drop

  lacp-input: good lacp packets — consumed



vpp# sh lacp
                                                       actor state                      partner state
interface name    sw_if_index  bond interface    exp/def/dis/col/syn/agg/tim/act  exp/def/dis/col/syn/agg/tim/act
rdma0             1            BondEthernet0      0   1   0   0   1   1   1   1   0   0   0   0   0   0   0   0
  LAG ID: [(,02-fe-c6-4c-c3-62,0003,00ff,0001), (,00-00-00-00-00-00,0003,00ff,0001)]
  RX-state: DEFAULTED, TX-state: TRANSMIT, MUX-state: ATTACHED, PTX-state: PERIODIC_TX
rdma1             2            BondEthernet0      0   1   0   0   0   1   1   1   0   0   0   0   0   0   0   0
  LAG ID: [(,02-fe-c6-4c-c3-62,0003,00ff,0002), (,00-00-00-00-00-00,0003,00ff,0002)]
  RX-state: DEFAULTED, TX-state: TRANSMIT, MUX-state: DETACHED, PTX-state: PERIODIC_TX



vpp# sh bond details

BondEthernet0

  mode: lacp

  load balance: l2

  number of active members: 0

  number of members: 2

rdma0

rdma1

  device instance: 0

  interface id: 0

  sw_if_index: 3

  hw_if_index: 3





vpp# sh log

2022/12/05 12:43:14:840 notice plugin/loadLoaded plugin: abf_plugin.so 
(Access Control List (ACL) Based Forwarding)

2022/12/05 12:43:14:841 notice plugin/loadLoaded plugin: acl_plugin.so 
(Access Control Lists (ACL))

2022/12/05 12:43:14:842 notice plugin/loadLoaded plugin: adl_plugin.so 
(Allow/deny list plugin)

2022/12/05 12:43:14:842 notice plugin/loadLoaded plugin: 
af_xdp_plugin.so (AF_XDP Device Plugin)

2022/12/05 12:43:14:842 notice plugin/loadLoaded plugin: 
arping_plugin.so (Arping (arping))

2022/12/05 12:43:14:843 notice plugin/loadLoaded plugin: avf_plugin.so 
(Intel Adaptive Virtual Function (AVF) Device Driver)

2022/12/05 12:43:14:843 notice plugin/loadLoaded plugin: 
builtinurl_plugin.so (vpp built-in URL support)

2022/12/05 12:43:14:843 notice plugin/loadLoaded plugin: cdp_plugin.so 
(Cisco Discovery Protocol (CDP))

2022/12/05 12:43:14:844 notice plugin/loadLoaded plugin: cnat_plugin.so 
(CNat Translate)

2022/12/05 12:43:14:860 notice plugin/loadLoaded plugin: 
crypto_ipsecmb_plugin.so (Intel IPSEC Multi-buffer Crypto Engine)

2022/12/05 12:43:14:860 notice plugin/loadLoaded plugin: 
crypto_native_plugin.so (Intel IA32 Software Crypto Engine)

2022/12/05 12:43:14:860 

Re: [vpp-dev] LACP bonding not working with RDMA driver

2022-11-15 Thread steven luong via lists.fd.io
In addition, do

1. show hardware
The bond, eth1/0, and eth2/0 should have the same mac address.

2. show lacp details
Check these statistics for the interface that is not forming the bond
Good LACP PDUs received: 13
Bad LACP PDUs received: 0
LACP PDUs sent: 14
last LACP PDU received:.58 seconds ago
last LACP PDU sent:.31 seconds ago

On 11/15/22, 7:22 AM, "vpp-dev@lists.fd.io on behalf of Benoit Ganne (bganne) 
via lists.fd.io"  wrote:

> As suggested we tried putting the rdma interfaces in promiscuous mode, but
> still facing the same issue:-
> vpp# set interface promiscuous on eth1/0 vpp# set interface promiscuous on
> eth2/1
> What could be the possible reason for this issue?

Can you share a packet trace?
vpp# cle tr
vpp# tr add rdma-input 10
[wait for LACP packets to be sent]
vpp# sh tr

Also, error counters and logs should be of interest:
vpp# sh hard
vpp# sh err
vpp# sh log

Best,
ben





Re: [vpp-dev] #vpp-dev No packets generated from Vhost user interface

2022-10-24 Thread steven luong via lists.fd.io
Use “virsh dumpxml” to check the output to see if you have memAccess='shared' as 
below.
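For reference, the cell element from the fd.io vhost XML example (adjust id, cpus, and memory to the VM):

  <cell id='0' cpus='0' memory='262144' unit='KiB' memAccess='shared'/>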

Steven

From:  on behalf of suresh vuppala 
Reply-To: "vpp-dev@lists.fd.io" 
Date: Friday, October 21, 2022 at 5:23 PM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] #vpp-dev No packets generated from Vhost user interface

Hi Steven,

   Thanks for responding to the request. I am using OpenStack to launch a VM 
here. So you mean that during VM launch I have to specify the hugepage size?

Thanks,
Suresh




Re: [vpp-dev] #vpp-dev No packets generated from Vhost user interface

2022-10-21 Thread steven luong via lists.fd.io
Your Qemu command to launch the VM is likely missing the hugepage or share 
option.




Re: [vpp-dev] VPP crashing if we configure srv6 policy with five sids in the sidlist

2022-08-05 Thread steven luong via lists.fd.io
Can you provide the topology, configurations, and steps to recreate this crash?

Steven

From:  on behalf of Chinmaya Aggarwal 

Reply-To: "vpp-dev@lists.fd.io" 
Date: Wednesday, July 13, 2022 at 4:07 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] VPP crashing if we configure srv6 policy with five sids 
in the sidlist

Hi,

We executed a few more tests on this and observed that after applying the 
above policy with 4 SIDs, we can see the packet coming out of the interface. But as 
soon as we add the 5th SID or more, we don't see the packet coming out of the 
interface and VPP crashes after some time. We got the below trace for the VPP crash 
in gdb:-
(gdb) c
Continuing.

Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
0x7fe67f1cf256 in virtio_update_packet_stats () from 
/usr/lib/vpp_plugins//dpdk_plugin.so
(gdb) bt
#0  0x7fe67f1cf256 in virtio_update_packet_stats () from 
/usr/lib/vpp_plugins//dpdk_plugin.so
#1  0x7fe67f1d6646 in virtio_xmit_pkts () from 
/usr/lib/vpp_plugins//dpdk_plugin.so
#2  0x7fe67f42d60e in rte_eth_tx_burst (nb_pkts=, 
tx_pkts=0x7fe6862afc00, queue_id=,
port_id=) at 
/opt/vpp/external/x86_64/include/rte_ethdev.h:5680
#3  tx_burst_vector_internal (n_left=1, mb=0x7fe6862afc00, xd=, 
vm=)
at 
/usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/plugins/dpdk/device/device.c:175
#4  dpdk_device_class_tx_fn_hsw (vm=, node=, 
f=)
at 
/usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/plugins/dpdk/device/device.c:435
#5  0x7fe6c66bd802 in dispatch_node (last_time_stamp=, 
frame=,
dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INTERNAL, 
node=0x7fe685e93c00, vm=0x7fe68547a680)
at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/main.c:975
#6  dispatch_pending_node (vm=vm@entry=0x7fe68547a680, 
pending_frame_index=pending_frame_index@entry=10,
last_time_stamp=) at 
/usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/main.c:1134
#7  0x7fe6c66c1ebf in vlib_main_or_worker_loop (is_main=1, vm=)
at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/main.c:1600
#8  vlib_main_loop (vm=) at 
/usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/main.c:1728
#9  vlib_main (vm=, vm@entry=0x7fe68547a680, 
input=input@entry=0x7fe675df5fa0)
at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/main.c:2017
#10 0x7fe6c670cc86 in thread0 (arg=140628055271040)
at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/unix/main.c:671
#11 0x7fe6c5c29388 in clib_calljmp () at 
/usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vppinfra/longjmp.S:123
#12 0x7ffcdf319c80 in ?? ()
#13 0x7fe6c670e210 in vlib_unix_main (argc=, argv=)
at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/unix/main.c:751
#14 0x in ?? ()
#15 0x0001a53c5137 in ?? ()

Can this be related to some buffer setting/packet size for DPDK? Any pointers 
on how we can debug the cause of this issue?


Thanks and Regards,
Chinmaya Agarwal.




Re: [vpp-dev] LACP bond interface not working

2022-08-04 Thread steven luong via lists.fd.io
Please check to make sure the interfaces can ping each other prior to adding 
them to the bond. Type “show lacp details” to verify that VPP receives LACP PDUs 
from each side and to check the state machine.

Steven

From:  on behalf of Chinmaya Aggarwal 

Reply-To: "vpp-dev@lists.fd.io" 
Date: Tuesday, June 21, 2022 at 3:01 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] LACP bond interface not working

Hi,

Any suggestions on the issue as to why the LACP bond in VPP is not working on a VM 
but is working on a physical machine?
Also, are there any log files we can refer to in order to debug the issue on the VM?


Thanks and Regards,
Chinmaya Agarwal.




Re: [vpp-dev] Memory region shows empty for vhost interface

2022-08-04 Thread steven luong via lists.fd.io
It is related to memoryBacking, missing hugepages, or missing shared option. 
What does your qemu launch command look like?
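For comparison, a qemu invocation that satisfies the vhost-user requirements carries options along these lines (sizes, ids, and paths are illustrative; exact flags depend on the qemu version):

qemu-system-x86_64 ... \
  -m 1024 \
  -object memory-backend-file,id=mem0,size=1024M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=chr0,path=/tmp/sock0.sock \
  -netdev type=vhost-user,id=net0,chardev=chr0 \
  -device virtio-net-pci,netdev=net0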

Steven

From:  on behalf of Chinmaya Aggarwal 

Reply-To: "vpp-dev@lists.fd.io" 
Date: Thursday, July 14, 2022 at 3:31 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Memory region shows empty for vhost interface

Hi,
We are running VPP v22.02. We found a link stating how to connect a VM to VPP 
(running on base machine).

https://fd.io/docs/vpp/v2101/usecases/vhost/index.html

As per the link, on VPP, we created a vhost interface:-

vpp# create vhost socket /tmp/vm00.sock
VirtualEthernet0/0/0

Also, we have a virsh domain XML file where we added an interface tag of type 
"vhostuser" and passed the socket file path in the "path" field, as sketched below.

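A minimal vhost-user interface stanza of that kind (MAC address illustrative; mode='server' assumed on the libvirt side since VPP opened /tmp/vm00.sock as a client):

  <interface type='vhostuser'>
    <mac address='52:54:00:00:00:01'/>
    <source type='unix' path='/tmp/vm00.sock' mode='server'/>
    <model type='virtio'/>
  </interface>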


Using this XML, we were able to spawn a VM having interface enp7s0 
corresponding to above tag. VPP can also detect the VM creation as we see below 
output for "show vhost-user" command: -

vpp# show vhost-user
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  Number of rx virtqueues in interrupt mode: 0
  Number of GSO interfaces: 0
  Thread 0: Polling queue count 2
Interface: VirtualEthernet0/0/0 (ifindex 1)
  Number of qids 2
virtio_net_hdr_sz 12
 features mask (0xfffbdfffa27c):
 features (0x150208000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_RING_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
   VIRTIO_F_VERSION_1 (32)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)

 socket filename /tmp/vm00.sock type client errno "Success"

 rx placement:
   thread 0 on vring 1, polling
 tx placement

 Memory regions (total 0)

But the memory regions list is still empty. As per the link, the memory regions 
should not be empty once the VM is up.
Are we missing anything here as part of the configuration?

Thanks and Regards,
Chinmaya Agarwal.




Re: [vpp-dev] VPP crashes when lcp host interface is added in network bridge

2022-08-04 Thread steven luong via lists.fd.io
Please try a debug image and provide a sane backtrace.

Steven

From:  on behalf of Chinmaya Aggarwal 

Reply-To: "vpp-dev@lists.fd.io" 
Date: Thursday, July 21, 2022 at 4:42 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VPP crashes when lcp host interface is added in network 
bridge

Hi,

As per our use case, we have created a vhost interface in VPP and further 
created an lcp host interface for it in Linux using the commands below:

vpp# create vhost socket /tmp/vm00.sock
vpp# lcp create VirtualEthernet0/0/0 host-if enp7s0 netns dataplane

On linux, we then tried adding interface "enp7s0" in the bridge.

# ip netns exec dataplane brctl addbr br1_vhost
# ip netns exec dataplane brctl addif br1_vhost enp7s0

But as soon as we add this interface to the bridge, VPP crashes with the 
dump below:

Jul 21 07:27:32 localhost vnet[15814]: received signal SIGSEGV, PC 
0x7f494dff91f4, faulting address 0x0
Jul 21 07:27:32 localhost kernel: br1_vhost: port 1(enp7s0) entered blocking 
state
Jul 21 07:27:32 localhost kernel: br1_vhost: port 1(enp7s0) entered disabled 
state
Jul 21 07:27:32 localhost kernel: device enp7s0 entered promiscuous mode
Jul 21 07:27:32 localhost vnet[15814]: #0  0x7f49989d9c3b 0x7f49989d9c3b
Jul 21 07:27:32 localhost vnet[15814]: #1  0x7f4998321ce0 0x7f4998321ce0
Jul 21 07:27:32 localhost vnet[15814]: #2  0x7f494dff91f4 
nl_addr_get_family + 0x4
Jul 21 07:27:32 localhost vnet[15814]: #3  0x7f494d9388d6 0x7f494d9388d6
Jul 21 07:27:32 localhost vnet[15814]: #4  0x7f494d93ab53 0x7f494d93ab53
Jul 21 07:27:32 localhost vnet[15814]: #5  0x7f494dfffbe2 0x7f494dfffbe2
Jul 21 07:27:32 localhost vnet[15814]: #6  0x7f494ddaa6c8 0x7f494ddaa6c8
Jul 21 07:27:32 localhost vnet[15814]: #7  0x7f494dffbd33 nl_cache_parse + 
0x63
Jul 21 07:27:32 localhost vnet[15814]: #8  0x7f494e00160f nl_msg_parse + 
0x7f
Jul 21 07:27:32 localhost vnet[15814]: #9  0x7f494d93b2b4 0x7f494d93b2b4
Jul 21 07:27:32 localhost vnet[15814]: #10 0x7f4998988d86 0x7f4998988d86
Jul 21 07:27:32 localhost vnet[15814]: #11 0x7f4997ef5388 0x7f4997ef5388
Jul 21 07:27:32 localhost kernel: device enp7s0 left promiscuous mode
Jul 21 07:27:32 localhost kernel: br1_vhost: port 1(enp7s0) entered disabled 
state


What could be the possible reason for this crash? Is there anything we are 
missing from a configuration point of view?

Thanks and Regards,
Chinmaya Agarwal.




Re: [vpp-dev] Bridge-domain function and usage.

2022-08-01 Thread steven luong via lists.fd.io
Pragya,

UU-Flood stands for Unknown Unicast Flooding. It does not flood multicast or 
broadcast packets. You need “Flooding” on to flood multicast/broadcast packets.
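If flooding was switched off on the bridge domain, it can be re-enabled from the CLI (bridge domain 100 as in this thread); a sketch, check the CLI help on your release:

vpp# set bridge-domain flood 100
vpp# show bridge-domain 100 detail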

Steven

From:  on behalf of Pragya Nand Bhagat 

Reply-To: "vpp-dev@lists.fd.io" 
Date: Monday, August 1, 2022 at 2:59 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Bridge-domain function and usage.

Hi Stanislav,

Following is the trace :

with flooding enabled:

vpp# show bridge-domain 100 det
  BD-ID   Index   BSN  Age(min)  Learning  U-Forwrd   UU-Flood   Flooding  
ARP-Term  arp-ufwd Learn-co Learn-li   BVI-Intf
   100  1   0off  onon  
floodon   offoff1   
16777216 N/A
span-l2-input l2-input-classify l2-input-feat-arc l2-policer-classify 
l2-input-acl vpath-input-l2 l2-ip-qos-record l2-input-vtr l2-learn l2-rw l2-fwd 
l2-flood l2-flood l2-output

   Interface   If-idx ISN  SHG  BVI  TxFlood
VLAN-Tag-Rewrite
port0/0  1 10-  * none
port0/1  2108   0-  * none
port0/2  3 10-  * none

Packet 1

00:11:47:356640: dpdk-input
  port0/0 rx queue 0
  buffer 0xfc9fc3: current data 0, length 60, buffer-pool 0, ref-count 1, trace 
handle 0x0
   ext-hdr-valid
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
buf_len 2176, data_len 60, ol_flags 0x180, data_off 128, phys_addr 
0x3f27f140
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_IP_CKSUM_NONE (0x0090) no IP cksum of RX pkt.
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_NONE (0x0108) no L4 cksum of RX pkt.
  ARP: a4:bf:01:89:9d:cf -> ff:ff:ff:ff:ff:ff
  request, type ethernet/IP4, address size 6/4
  a4:bf:01:89:9d:cf/30.30.30.6 -> 
01:03:05:07:09:00/30.30.30.6
00:11:47:356665: ethernet-input
  frame: flags 0x3, hw-if-index 1, sw-if-index 1
  ARP: a4:bf:01:89:9d:cf -> ff:ff:ff:ff:ff:ff
00:11:47:357793: l2-input
  l2-input: sw_if_index 1 dst ff:ff:ff:ff:ff:ff src a4:bf:01:89:9d:cf [l2-learn 
l2-flood ]
00:11:47:357796: l2-learn
  l2-learn: sw_if_index 1 dst ff:ff:ff:ff:ff:ff src a4:bf:01:89:9d:cf bd_index 1
00:11:47:357799: l2-flood
  l2-flood: sw_if_index 1 dst ff:ff:ff:ff:ff:ff src a4:bf:01:89:9d:cf bd_index 1
  l2-flood: sw_if_index 1 dst ff:ff:ff:ff:ff:ff src a4:bf:01:89:9d:cf bd_index 1
00:11:47:357804: l2-output
  l2-output: sw_if_index 3 dst ff:ff:ff:ff:ff:ff src a4:bf:01:89:9d:cf data 08 
06 00 01 08 00 06 04 00 01 a4 bf
  l2-output: sw_if_index 2 dst ff:ff:ff:ff:ff:ff src a4:bf:01:89:9d:cf data 08 
06 00 01 08 00 06 04 00 01 a4 bf
00:11:47:357807: port0/2-output
  port0/2
  ARP: a4:bf:01:89:9d:cf -> ff:ff:ff:ff:ff:ff
  request, type ethernet/IP4, address size 6/4
  a4:bf:01:89:9d:cf/30.30.30.6 -> 
01:03:05:07:09:00/30.30.30.6
00:11:47:357812: port0/1-output
  port0/1
  ARP: a4:bf:01:89:9d:cf -> ff:ff:ff:ff:ff:ff
  request, type ethernet/IP4, address size 6/4
  a4:bf:01:89:9d:cf/30.30.30.6 -> 
01:03:05:07:09:00/30.30.30.6
00:11:47:357813: port0/2-tx
  port0/2 tx queue 0
  buffer 0xfc9fc3: current data 0, length 60, buffer-pool 0, ref-count 1, trace 
handle 0x0
   ext-hdr-valid
   l2-hdr-offset 0 l3-hdr-offset 14
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
buf_len 2176, data_len 60, ol_flags 0x180, data_off 128, phys_addr 
0x3f27f140
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_IP_CKSUM_NONE (0x0090) no IP cksum of RX pkt.
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_NONE (0x0108) no L4 cksum of RX pkt.
  ARP: a4:bf:01:89:9d:cf -> ff:ff:ff:ff:ff:ff
  request, type ethernet/IP4, address size 6/4
  a4:bf:01:89:9d:cf/30.30.30.6 -> 
01:03:05:07:09:00/30.30.30.6
00:11:47:357819: error-drop
  rx:port0/0
00:11:47:357821: drop
  port0/1-output: interface is down



**
with flooding disabled :

vpp# show bridge-domain 100 det
  BD-ID   Index   BSN  Age(min)  Learning  U-Forwrd   UU-Flood   Flooding  
ARP-Term  arp-ufwd Learn-co Learn-li   BVI-Intf
   100  1 0 off   onon  
floodoff   off   off1 
16777216 N/A
span-l2-input l2-input-classify l2-input-feat-arc l2-policer-classify 
l2-input-acl 

[vpp-dev] Please include Fixes: tag for regression fix

2021-11-02 Thread steven luong via lists.fd.io
Folks,

In case you don’t already know, there is a tag called Fixes in the commit 
message which allows one to specify if the current patch fixes a regression. 
See an example usage in https://gerrit.fd.io/r/c/vpp/+/34212

When you commit a patch which fixes a known regression, please make use of the 
Fixes tag to benefit every consumer.
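A hypothetical commit message using it (component, subject, and SHA are placeholders; the Fixes: line names the commit that introduced the regression):

bonding: fix member handling regression

Type: fix
Fixes: 1234567890abcdef1234567890abcdef12345678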

Steven





Re: [vpp-dev] DPDK PMD vs native VPP bonding driver

2021-09-16 Thread steven luong via lists.fd.io
Srikanth ,

You are correct that dpdk bonding has been deprecated for a while. I don’t 
remember since when. The performance of VPP native bonding when compared to 
dpdk bonding is about the same. With VPP native bonding, you have an additional 
option to configure LACP which was not supported when using dpdk bonding.
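For reference, a native LACP bond is configured entirely from the VPP CLI (interface names are illustrative):

create bond mode lacp load-balance l23
bond add BondEthernet0 TenGigabitEthernet3/0/0
bond add BondEthernet0 TenGigabitEthernet3/0/1
set interface state BondEthernet0 up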

Steven

From:  on behalf of Srikanth Akula 
Date: Thursday, September 16, 2021 at 9:48 AM
To: "Steven Luong (sluong)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] DPDK PMD vs native VPP bonding driver

Hi Steven,

We are trying to evaluate bonding driver functionality in VPP and it seems we 
have disabled the DPDK PMD driver by default from 19.08 onwards. Could you 
share your experience on this ?

Also could you share the performance comparisons b/w these two drivers in case 
it's available?

Any help would be appreciated.

Regards,
Srikanth




Re: [vpp-dev] fail_over_mac=1 (Active) Bonding

2021-09-15 Thread steven luong via lists.fd.io
Chetan,

I have had a patch in gerrit for a long time and I just rebased it to the latest 
master:
https://gerrit.fd.io/r/c/vpp/+/30866
Please feel free to test it thoroughly and let me know whether you encounter any 
problems.

Steven

From:  on behalf of chetan bhasin 

Date: Tuesday, September 14, 2021 at 10:16 PM
To: vpp-dev 
Subject: [vpp-dev] fail_over_mac=1 (Active) Bonding

Hi,

We have a requirement to support bonding fail_over_mac=1 (active) in VPP. Are 
there any plans to implement this?

If we want to implement this ourselves, can anybody please provide a direction?

Thanks,
Chetan






Re: [vpp-dev] vnet bonding crashes - need some suggestions to narrow down

2021-05-22 Thread steven luong via lists.fd.io
I set up the same bonding with dot1q and subinterface configuration as given, 
but using tap interface to connect to Linux instead. It works just fine. I 
believe the crash was due to using a custom plugin which is cloned from VPP 
DPDK plugin to handle the Octeon-tx2 SoC. When bonding gets the buffer from the 
custom plugin, many of the necessary fields were not set/initialized, which 
resulted in a crash when calling vnet_feature_next. I believe the bug is in the 
custom plugin, not bonding.

Steven

On 5/17/21, 2:20 AM, "vpp-dev@lists.fd.io on behalf of Vikas Aggarwal via 
lists.fd.io"  wrote:

Hello vpp list/experts,
Need some suggestions to understand the following vnet bonding crash when I
try to ping outside from Linux.
Configuration: Linux <==> loopback-ports <==> vpp bridge <==> To outside
Note: No crash happens if  I  eliminate vnet bonding  from steps below.
VPP CLI steps:

Summary: Add outbound physical ports eth6 & eth7 into bonding. Create
sub-interfaces with tag 2003, 2001. Create access mode on local-linux
facing interfaces. Add all into L2 bridge.
vpp#  create bond mode active-backup
vpp#  bond add BondEthernet0 eth6
vpp#  bond add BondEthernet0 eth7
vpp#  set int state eth6  up
vpp#  set int state eth7  up
vpp#  set int state  BondEthernet0 up
vpp#  create sub-interfaces BondEthernet0 2001
vpp#  set int state BondEthernet0.2001 up
vpp#  create sub-interfaces BondEthernet0 2003
vpp#  set int state BondEthernet0.2003 up
vpp#  set int l2 bridge  BondEthernet0.2001  100
vpp#  set int state lbk0  up  #A loopback port between linux and VPP
vpp#  set int l2 bridge  lbk0  100
vpp#  set int l2 bridge  BondEthernet0.2003  100
vpp#  set int state lbk2  up  #Another loopback port between linux and VPP
vpp#  set int l2 bridge  lbk2  100
vpp#  set interface l2 tag-rewrite lbk0 push dot1q 2003
vpp#  set interface l2 tag-rewrite  lbk2 push dot1q 2001
VPP crash dump once tried to ping out from linux.

(gdb)
#0  0xa04b2184 in raise () from /lib64/libc.so.6

#1  0xa04b3228 in abort () from /lib64/libc.so.6

#2  0x00407f40 in os_panic ()
at arm64-soc-sdk-home/src/output/build/vpp/src/vpp/vnet/main.c:355

#3  0xa0903950 in debugger ()
at arm64-soc-sdk-home/src/output/build/vpp/src/vppinfra/error.c:84

#4  0xa0903cf8 in _clib_error (how_to_die=2, function_name=0x0,
line_number=0,
fmt=0xa12e6c20 "%s:%d (%s) assertion `%s' fails")
at arm64-soc-sdk-home/src/output/build/vpp/src/vppinfra/error.c:143

#5  0xa0c06c08 in vnet_get_config_data (cm=0x6278a670,
config_index=0xfffb415af614, next_index=0x638aee14,
n_data_bytes=0) at
arm64-soc-sdk-home/src/output/build/vpp/src/vnet/config.h:129

#6  0xa0c07128 in vnet_feature_next_with_data
(next0=0x638aee14, b0=0xfffb415af600, n_data_bytes=0)
at
arm64-soc-sdk-home/src/output/build/vpp/src/vnet/feature/feature.h:296

#7  0xa0c07150 in vnet_feature_next (next0=0x638aee14,
b0=0xfffb415af600)
at
arm64-soc-sdk-home/src/output/build/vpp/src/vnet/feature/feature.h:304

#8  0xa0c07c1c in bond_update_next (vm=0x628ed580,
node=0x62e46280, last_slave_sw_if_index=0x638aee0c,
slave_sw_if_index=3, bond_sw_if_index=0x638aee10,
b=0xfffb415af600, next_index=0x638aee14, error=0x638aee08)
at arm64-soc-sdk-home/src/output/build/vpp/src/vnet/bonding/node.c:180

#9  0xa0c08570 in bond_input_node_fn_arm64soc (vm=0x628ed580,
node=0x62e46280, frame=0x62f07300)
at arm64-soc-sdk-home/src/output/build/vpp/src/vnet/bonding/node.c:342

#10 0xa0a46834 in dispatch_node (vm=0x628ed580,
node=0x62e46280, type=VLIB_NODE_TYPE_INTERNAL,
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x62f07300,
last_time_stamp=104222198360858)
at arm64-soc-sdk-home/src/output/build/vpp/src/vlib/main.c:1205

#11 0xa0a46fc4 in dispatch_pending_node (vm=0x628ed580,
pending_frame_index=0, last_time_stamp=104222198360858)
at arm64-soc-sdk-home/src/output/build/vpp/src/vlib/main.c:1373

#12 0xa0a489dc in vlib_main_or_worker_loop (vm=0x628ed580,
is_main=0)
at arm64-soc-sdk-home/src/output/build/vpp/src/vlib/main.c:1835

#13 0xa0a491b8 in vlib_worker_loop (vm=0x628ed580)
at arm64-soc-sdk-home/src/output/build/vpp/src/vlib/main.c:1942

#14 0xa0a6c2cc in vlib_worker_thread_fn (arg=0x6062bc00)
at arm64-soc-sdk-home/src/output/build/vpp/src/vlib/threads.c:1751

#15 0xa0914458 in clib_calljmp () from
/lib64/libvppinfra.so.19.08.1

Backtrace stopped: not enough registers or memory available to unwind
further

Re: [vpp-dev] observing issue with LACP port selection logic

2021-05-12 Thread steven luong via lists.fd.io
Sudhir,

It is an error topology/configuration we don’t currently handle. Please try 
this and report back

https://gerrit.fd.io/r/c/vpp/+/32292

The behavior is that container-1 will form one bonding group with container-2, 
with either BondEthernet0 or BondEthernet1.

Steven

From:  on behalf of "Sudhir CR via lists.fd.io" 

Reply-To: "sud...@rtbrick.com" 
Date: Tuesday, May 11, 2021 at 7:30 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] observing issue with LACP port selection logic

Hi all,
I am configuring LACP between two containers.
VPP version used: 20.09
The topology looks like below:
[inline image: topology diagram]
In the above topology, since the memif-4/4 interface is not part of the same bond 
interface on both containers (different partner system id), memif-4/4 should not be 
marked as an active interface and attached to BondEthernet0 in container1, but it 
is attaching to BondEthernet0.

Any help in fixing the issue would be appreciated.


Please find configuration in container1 :
DBGvpp# show bond
interface name   sw_if_index  mode  load balance  active members members
BondEthernet09lacp  l23   3  3

DBGvpp# show bond details
BondEthernet0
  mode: lacp
  load balance: l23
  number of active members: 3
memif2/2
memif3/3
memif4/4
  number of members: 3
memif2/2
memif3/3
memif4/4
  device instance: 0
  interface id: 0
  sw_if_index: 9
  hw_if_index: 9

DBGvpp# show lacp
actor state 
 partner state
interface namesw_if_index  bond interface   
exp/def/dis/col/syn/agg/tim/act  exp/def/dis/col/syn/agg/tim/act
memif2/2  2BondEthernet0  0   0   1   1   1   1 
  1   10   0   1   1   1   1   1   1
  LAG ID: [(,7a-67-1e-01-0c-02,0009,00ff,0001), 
(,7a-37-f7-00-0c-02,000f,00ff,0001)]
  RX-state: CURRENT, TX-state: TRANSMIT, MUX-state: COLLECTING_DISTRIBUTING, 
PTX-state: PERIODIC_TX
memif3/3  3BondEthernet0  0   0   1   1   1   1 
  1   10   0   1   1   1   1   1   1
  LAG ID: [(,7a-67-1e-01-0c-02,0009,00ff,0002), 
(,7a-37-f7-00-0c-02,000f,00ff,0002)]
  RX-state: CURRENT, TX-state: TRANSMIT, MUX-state: COLLECTING_DISTRIBUTING, 
PTX-state: PERIODIC_TX
memif4/4  4BondEthernet0  0   0   1   1   1   1 
  1   10   0   1   1   1   1   1   1
  LAG ID: [(,7a-67-1e-01-0c-02,0009,00ff,0003), 
(,7a-37-f7-00-0c-04,0010,00ff,0001)]
  RX-state: CURRENT, TX-state: TRANSMIT, MUX-state: COLLECTING_DISTRIBUTING, 
PTX-state: PERIODIC_TX
DBGvpp#

Please find configuration in container2 :
DBGvpp# show bond
interface name   sw_if_index  mode  load balance  active members members
BondEthernet015   lacp  l23   2  2
BondEthernet116   lacp  l23   1  1
DBGvpp#
DBGvpp#

DBGvpp# show bond details
BondEthernet0
  mode: lacp
  load balance: l23
  number of active members: 2
memif2/2
memif3/3
  number of members: 2
memif2/2
memif3/3
  device instance: 0
  interface id: 0
  sw_if_index: 15
  hw_if_index: 15
BondEthernet1
  mode: lacp
  load balance: l23
  number of active members: 1
memif4/4
  number of members: 1
memif4/4
  device instance: 1
  interface id: 1
  sw_if_index: 16
  hw_if_index: 16

DBGvpp# show lacp
actor state 
 partner state
interface namesw_if_index  bond interface   
exp/def/dis/col/syn/agg/tim/act  exp/def/dis/col/syn/agg/tim/act
memif2/2  8BondEthernet0  0   0   1   1   1   1 
  1   10   0   1   1   1   1   1   1
  LAG ID: [(,7a-37-f7-00-0c-02,000f,00ff,0001), 
(,7a-67-1e-01-0c-02,0009,00ff,0001)]
  RX-state: CURRENT, TX-state: TRANSMIT, MUX-state: COLLECTING_DISTRIBUTING, 
PTX-state: PERIODIC_TX
memif3/3  9BondEthernet0  0   0   1   1   1   1 
  1   10   0   1   1   1   1   1   1
  LAG ID: [(,7a-37-f7-00-0c-02,000f,00ff,0002), 
(,7a-67-1e-01-0c-02,0009,00ff,0002)]
  RX-state: CURRENT, TX-state: TRANSMIT, MUX-state: COLLECTING_DISTRIBUTING, 
PTX-state: PERIODIC_TX
memif4/4  10   BondEthernet1  0   0   1   1   1   1 
  1   10   0   1   1   1   1   1   1
  LAG ID: [(,7a-37-f7-00-0c-04,0010,00ff,0001), 
(,7a-67-1e-01-0c-02,0009,00ff,0003)]
  RX-state: CURRENT, TX-state: TRANSMIT, MUX-state: COLLECTING_DISTRIBUTING, 
PTX-state: PERIODIC_TX
DBGvpp#


Thanks and Regards,
Sudhir




Re: [vpp-dev] lawful intercept

2021-04-27 Thread steven luong via lists.fd.io
Your commit subject line is missing a component name. The commit comment is 
missing “Type:”.
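The subject must start with a feature id listed in MAINTAINERS (I: lines) and the commit message body needs a Type: line; schematically (wording illustrative):

<feature-id>: fix crash when LI CLI is configured twice

Type: fix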

Steven

From:  on behalf of "hemant via lists.fd.io" 

Reply-To: "hem...@mnkcg.com" 
Date: Tuesday, April 27, 2021 at 12:56 PM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] lawful intercept

Newer review for LI crash:  https://gerrit.fd.io/r/c/vpp/+/32144

The build failed with the error below, which I tried to fix with the new 
review above. However, the build still fails with the same error.

I used “git commit -m "feature" -m "Fix crash in LI CLI" -s” with the newer review, 
but I still get the “Subject” line failure.

=== ERROR ===
15:16:26 git commit 'Subject:' line must contain at least one known feature id.
15:16:26 feature id(s) must be listed before ':' and space delimited
15:16:26 if more than one is listed.
15:16:26 Please refer to the MAINTAINERS file (I: lines) for known feature ids.
15:16:26 === ERROR ===

Hemant

From: hem...@mnkcg.com 
Sent: Tuesday, April 27, 2021 3:26 PM
To: hem...@mnkcg.com; vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] lawful intercept

During the morning my machine was down for maintenance. Thereafter, I used gdb, 
set a breakpoint in li_hit_node(), and set up LI using the CLI. The breakpoint 
is not hit even when I am sending traffic that matches the LI-configured src IP. I 
have actually developed LI for the Cisco CPP ASIC and know how to use it. It would 
be good if anyone who has been successful with VPP LI could reply.

Also, I could crash the LI code by configuring the LI CLI once and then invoking 
the same CLI again. On the 2nd invocation the code crashes on this line:

https://git.fd.io/vpp/tree/src/vnet/lawful-intercept/lawful_intercept.c#n62

I have issued a code review with fix:  https://gerrit.fd.io/r/c/vpp/+/32143

Also, before sending out the review, I did use “make checkstyle” and fixed the 
style issues. Then “make checkstyle” passed. But after issuing the code 
review, the VPP automated build reported a style failure – go figure…

Hemant

From: vpp-dev@lists.fd.io 
mailto:vpp-dev@lists.fd.io>> On Behalf Of hemant via 
lists.fd.io
Sent: Tuesday, April 27, 2021 1:01 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] lawful intercept

Does VPP CSIT test LI or what is Lawful Intercept?

I used the LI CLI shown at the link below.  The CLI use did not incur any 
error. However, when I send packets, the packets are not getting tapped.  
“trace add dpdk-input” does not show any “LI_HIT” trace.

https://docs.fd.io/vpp/17.04/clicmd_src_vnet_lawful-intercept.html

I am using a fairly recent (few weeks behind latest) VPP repo.

Hemant




Re: [vpp-dev] LACP Troubleshooting

2021-02-16 Thread steven luong via lists.fd.io
VPP implements both active and passive modes. The default operation mode is 
active. The current setting for the port, active/passive, can be inferred from 
the output of show lacp. In the active state column, I see act=1 for all 4 
ports.

The output of the show bond command looks like VPP is already in sync with the 
remote partner in forming the bonding, with 2 member interfaces and both are 
negotiated successfully with the remote partner.

I remember the Nexus switch sometimes tends to put the port in suspended mode 
when it does not receive LACP PDUs during bootup. I think the way to recover is 
to kick the port in the switch, i.e., shut and no shut to wake it up.
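On NX-OS that is roughly (port number illustrative):

switch# configure terminal
switch(config)# interface ethernet 1/1
switch(config-if)# shutdown
switch(config-if)# no shutdown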

Steven

From:  on behalf of Marcos - Mgiga 
Date: Tuesday, February 16, 2021 at 7:42 AM
To: 'vpp-dev' 
Subject: [vpp-dev] LACP Troubleshooting

Hi there,

I’m facing some trouble when connecting VPP to a Cisco Nexus 3k, where one of the 
switch’s ports remains in “suspended” state, indicating it is not receiving any 
LACP PDUs.

I have this same VPP server connected to a Huawei S6720 switch and this 
situation does not occur.

Here is the output of show LACP and show Bond command:

[inline image: output of “show lacp” and “show bond”]


I would like to know if VPP ports attached to bond interfaces are in active 
mode by default, or if I am supposed to set it manually.

Any suggestion ?

Best Regards

Marcos





Re: [vpp-dev] VPP Packet Generator and Packet Tracer

2021-01-06 Thread steven luong via lists.fd.io
“make build” from the top of the workspace will generate the debug image for 
you to run gdb.
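A typical sequence from the top of a fresh clone (target names per the VPP makefiles; exact directory names may vary between releases):

$ make install-dep     # once, to pull build dependencies
$ make build           # debug build, installed under build-root/install-vpp_debug-native/
$ make run             # start the freshly built debug vpp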

Steven

From:  on behalf of Yaser Azfar 
Date: Wednesday, January 6, 2021 at 1:21 PM
To: "Benoit Ganne (bganne)" 
Cc: "fdio+vpp-...@groups.io" 
Subject: Re: [vpp-dev] VPP Packet Generator and Packet Tracer

Hi,

Thank you for that!

After cloning a fresh copy of VPP and then running `./extras/vagrant/build.sh` 
to install and build VPP, I do not get the directory 
`./build-root/install-vpp-debug-native` in my build-root directory, which is 
required for running a debug version of VPP for this example.

Am I required to install VPP another way to get these files?

Thanks again.

On Wed, Jan 6, 2021 at 9:47 PM Benoit Ganne (bganne) 
mailto:bga...@cisco.com>> wrote:
Hi,

> I was trying to follow the example of using the packet generator and
> packet tracer in VPP
>  acer#Step_1._Start_a_debug_version_of_vpp>  and I have realised that it
> may be outdated as it was last updated 4 years ago.

Yes the instructions were outdated, I updated them.

Best
ben




Re: [vpp-dev] Blackholed packets after forwarding interface output

2020-12-20 Thread steven luong via lists.fd.io
Additionally, please figure out why carrier is down. It needs to be up.


Intel 82599

carrier down

Steven

From:  on behalf of Dave Barach 
Date: Sunday, December 20, 2020 at 4:58 AM
To: 'Merve' , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] Blackholed packets after forwarding interface output

“trace add dpdk-input 100” ... run a bit of traffic ... “show trace”. That will 
tell you precisely what’s happening. Check the outbound src and dst MAC 
addresses. It’s quite possible that they have NOT been swapped...

D.

From: vpp-dev@lists.fd.io  On Behalf Of Merve
Sent: Sunday, December 20, 2020 5:46 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Blackholed packets after forwarding interface output

Hi everyone. I created a plugin to process packets. After processing, I forward 
packets to interface output. For testing, I generate packets with TRex and send 
them to VPP. TRex sends packets to VPP, but after processing in my node VPP does 
not send them back to TRex. But when I show int:

TenGigabitEthernet1/0/1   2   up   9000/0/0/0   rx packets     47034936
                                                rx bytes     2822096160
                                                tx packets     47034934
                                                tx bytes     2163606978

it seems to be transferring packets, but they are not seen in TRex.

vpp# show errors

   Count                    Node                        Reason           Severity
       1     TenGigabitEthernet1/0/0-output     interface is down          error
       1     TenGigabitEthernet1/0/1-output     interface is down          error
11961361     null-node                          blackholed packets         error
       1     dpdk-input                         no error                   error
       2     arp-reply                          ARP replies sent           error
       1     TenGigabitEthernet1/0/0-output     interface is down          error
23381499     null-node                          blackholed packets         error
       1     dpdk-input                         no error                   error
       2     arp-reply                          ARP replies sent           error
       1     TenGigabitEthernet1/0/1-output     interface is down          error
11692073     null-node                          blackholed packets         error

These packets are "blackholed".

Intel 82599

carrier down

flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum rx-ip4-cksum

rx: queues 4 (max 128), desc 2048 (min 32 max 4096 align 8)

tx: queues 4 (max 64), desc 2048 (min 32 max 4096 align 8)

pci: device 8086:1528 subsystem 15d9:0734 address :01:00.01 numa 0

max rx packet len: 15872

promiscuous: unicast off all-multicast on

vlan offload: strip off filter off qinq off

rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro

   macsec-strip vlan-filter vlan-extend jumbo-frame scatter

   security keep-crc rss-hash

rx offload active: ipv4-cksum jumbo-frame scatter

tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum

   tcp-tso macsec-insert multi-segs security

tx offload active: udp-cksum tcp-cksum multi-segs

rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp

   ipv6-udp ipv6-ex ipv6

rss active:ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp

   ipv6-udp ipv6-ex ipv6

tx burst function: ixgbe_xmit_pkts

rx burst function: ixgbe_recv_scattered_pkts_vec



tx frames ok47034934

tx bytes ok   2822096040

rx frames ok47034936

rx bytes ok   2822096160

rx missed  384981066

extended stats:

  rx_good_packets   47034936

  tx_good_packets   47034934

  rx_good_bytes   2822096160

  tx_good_bytes   2822096040

  rx_missed_errors 384981066

  rx_q0_packets 47034936

  rx_q0_bytes 2822096160

  tx_q0_packets 47034934

  tx_q0_bytes 2163606978

  mac_local_errors52

  mac_remote_errors2

  rx_size_64_packets   432016002

  rx_broadcast_packets 3

  rx_total_packets   

Re: [vpp-dev] Multicast packets sent via memif when rule says to forward through another interface

2020-12-17 Thread steven luong via lists.fd.io
show interface displays the interface’s admin state.
show hardware displays the interface’s operational link state.
The link down is likely caused by a memif configuration error. Please check your 
configuration on both sides to make sure they match. Some tips to debug:
show memif
set logging class memif plugin level debug
show log

As to why traffic is forwarded to the memif interface, who knows what you got 
in the mfib table. You need to look into the mfib table and check.

Steven

From:  on behalf of "tahir.a.sangli...@gmail.com" 

Date: Wednesday, December 16, 2020 at 1:27 PM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Multicast packets sent via memif when rule says to 
forward through another interface

Thank you for your reply!
Yes, the memif interface is down in "sh hardware", but it shows as up in "sh int".
What could be the reason for this discrepancy?
It is still not clear why it is choosing memif when the mfib rule says to forward
on HundredGigabitEthernet12/0/0.501.

ip mroute add ff38:23:2001:5b0:2000::8000/128 via tuntap-0 Accept

ip mroute add ff38:23:2001:5b0:2000::8000/128 via 
HundredGigabitEthernet12/0/0.501 Forward
vpp# sh hardware
memif81/0                         21    down  memif81/0
  Link speed: unknown
  memif-ip
  MEMIF interface
     instance 15
vpp# sh int
memif81/0                         23     up   9000/0/0/0




Re: [vpp-dev] #vpp #vpp-memif #vppcom

2020-12-11 Thread steven luong via lists.fd.io
Can you check the output of show hardware? I suspect the link is down for the 
corresponding memif interface.

Steven

From:  on behalf of "tahir.a.sangli...@gmail.com" 

Date: Friday, December 11, 2020 at 1:14 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] #vpp #vpp-memif #vppcom

In our application I have to send out multicast packets, but the packet is
getting dropped in VPP.

The error message says "memif81/0-output: interface is down", but I am forwarding
the packet through another interface, "HundredGigabitEthernet12/0/0.501".
Could you please help me understand why it is trying to go out on memif and not
using HundredGigabitEthernet12/0/0.501?
Thanks in advance for the help!

These are the rules added to send the packet out:
ip mroute add ff38:23:2001:5b0:2000::8000/128 via tuntap-0 Accept
ip mroute add ff38:23:2001:5b0:2000::8000/128 via 
HundredGigabitEthernet12/0/0.501 Forward

02:39:27:201439: ip6-input
  UDP: 2001:5b0::501:b883:31f:39e:7890 -> ff38:23:2001:5b0:2000::8000
tos 0x00, flow label 0xaf8f3, hop limit 2, payload length 26
  UDP: 8000 -> 8000
length 26, checksum 0x3e97
02:39:27:201439: ip6-mfib-forward-lookup
  fib 0 entry 9
02:39:27:201439: ip6-mfib-forward-rpf
  entry 9 itf 1 flags Accept,
02:39:27:201439: ip6-replicate
  replicate: 7 via [@2]: ipv4-mcast: HundredGigabitEthernet12/0/0.501: mtu:9000 
next:11 01005e00b883039e7890810001f586dd
  replicate: 7 via [@1]: dpo-receive
02:39:27:201439: ip6-rewrite-mcast
  tx_sw_if_index 6 adj-idx 54 : ipv4-mcast: HundredGigabitEthernet12/0/0.501: 
mtu:9000 next:11 01005e00b883039e7890810001f586dd flow hash: 0x
  : 01005e008000b883039e7890810001f586dd600af8f3001a1101200105b0
  0020: 0501b883031f039e7890ff380023200105b0200080001f401f40001a
  0040: 3e97010102000204000342af5fd3b38700210001005c000307d0
  0060: 
02:39:27:201440: ip6-local
UDP: 2001:5b0::501:b883:31f:39e:7890 -> ff38:23:2001:5b0:2000::8000
  tos 0x00, flow label 0xaf8f3, hop limit 2, payload length 26
UDP: 8000 -> 8000
  length 26, checksum 0x3e97
02:39:27:201440: memif81/0-output
  HundredGigabitEthernet12/0/0.501
  : 01005e008000b883039e7890810001f586dd600af8f3001a1101200105b0
  0020: 0501b883031f039e7890ff380023200105b0200080001f401f40001a
  0040: 3e97010102000204000342af5fd3b38700210001005c000307d0
  0060: 
02:39:27:201440: ip6-drop
UDP: 2001:5b0::501:b883:31f:39e:7890 -> ff38:23:2001:5b0:2000::8000
  tos 0x00, flow label 0xaf8f3, hop limit 2, payload length 26
UDP: 8000 -> 8000
  length 26, checksum 0x3e97
02:39:27:201440: error-drop
  rx:tuntap-0
  rx:tuntap-0
02:39:27:201441: drop
  memif81/0-output: interface is down
  ip6-input: valid ip6 packets
vpp# sh int addr
HundredGigabitEthernet12/0/0 (up):
HundredGigabitEthernet12/0/0.501 (up):
  L3 2001:5b0::501:b883:31f:19e:7890/64
  L3 2001:5b0::501:b883:31f:29e:7890/64
  L3 2001:5b0::501:b883:31f:39e:7890/64
HundredGigabitEthernet12/0/0.1100 (up):
  L3 192.168.115.11/24
  L3 192.168.115.12/24
  L3 2001:5b0::1100::11/64
  L3 2001:5b0::1100::12/64
HundredGigabitEthernet12/0/0.1103 (up):
  L3 192.168.118.11/24
  L3 192.168.118.12/24
  L3 2001:5b0::1103::11/64
  L3 2001:5b0::1103::12/64
HundredGigabitEthernet12/0/1 (dn):
HundredGigabitEthernetd8/0/0 (dn):
HundredGigabitEthernetd8/0/1 (dn):
local0 (dn):
memif1/0 (up):
  L3 192.168.1.3/24
memif11/0 (up):
  L3 192.168.11.3/24
  L3 fd11::11/64
memif2/0 (up):
  L3 192.168.2.3/24
memif21/0 (up):
  L3 192.168.21.3/24
  L3 fd21::21/64
memif3/0 (up):
  L3 192.168.3.3/24
memif31/0 (up):
  L3 192.168.31.3/24
  L3 fd31::31/64
memif4/0 (up):
  L3 192.168.4.3/24
memif41/0 (up):
  L3 192.168.41.3/24
  L3 fd41::41/64
memif5/0 (up):
  L3 192.168.5.3/24
memif51/0 (up):
  L3 192.168.51.3/24
  L3 fd51::51/64
memif6/0 (up):
  L3 192.168.6.3/24
memif61/0 (up):
  L3 192.168.61.3/24
  L3 fd61::61/64
memif7/0 (up):
  L3 192.168.7.3/24
memif71/0 (up):
  L3 192.168.71.3/24
  L3 fd71::71/64
memif8/0 (up):
  L3 192.168.8.3/24
memif81/0 (up):
  L3 192.168.81.3/24
  L3 fd81::81/64
tuntap-0 (up):






Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread steven luong via lists.fd.io
Right, it should not crash. With the patch, the VM just refuses to come up 
unless we raise the queue support.

Steven

On 12/9/20, 10:24 AM, "Benoit Ganne (bganne)"  wrote:

> This argument in your qemu command line,
> queues=16,
> is over our current limit. We support up to 8. I can submit an improvement
> patch. But I think it will be master only.

Yes, but we should not crash.
I actually forgot some additional checks in my initial patch. I updated it:
https://gerrit.fd.io/r/c/vpp/+/30346
Eyle, could you check if the crash still happens with queues=16?

Best
ben





Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread steven luong via lists.fd.io
Eyle,

This argument in your qemu command line,

queues=16,

is over our current limit. We support up to 8. I can submit an improvement 
patch. But I think it will be master only.
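
In the meantime, staying at or below that limit should avoid the problem. As a
sketch only (values are illustrative, not your exact config): in the qemu command
line that would be "-netdev vhost-user,chardev=charnet0,queues=8,id=hostnet0", or
in the libvirt interface definition something like <driver queues='8'/>.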

Steven

From: Eyle Brinkhuis 
Date: Wednesday, December 9, 2020 at 9:24 AM
To: "Steven Luong (sluong)" 
Cc: "Benoit Ganne (bganne)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

Hi Steven, This is the command line:

libvirt+ 1620511   1  0 17:19 ?00:00:00 /usr/bin/qemu-system-x86_64 
-name guest=instance-02be,debug-threads=on -S -object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-96-instance-02be/master-key.aes
 -machine pc-i440fx-4.0,accel=kvm,usb=off,dump-guest-core=off -cpu host -m 8192 
-overcommit mem-lock=off -smp 16,sockets=16,cores=1,threads=1 -object 
memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/96-instance-02be,share=yes,size=8589934592,host-nodes=0,policy=bind
 -numa node,nodeid=0,cpus=0-15,memdev=ram-node0 -uuid 
e2dcaeda-1b7c-4d4d-b860-b56d58cf1e86 -smbios type=1,manufacturer=OpenStack 
Foundation,product=OpenStack 
Nova,version=20.3.0,serial=e2dcaeda-1b7c-4d4d-b860-b56d58cf1e86,uuid=e2dcaeda-1b7c-4d4d-b860-b56d58cf1e86,family=Virtual
 Machine -no-user-config -nodefaults -chardev 
socket,id=charmonitor,fd=25,server,nowait -mon 
chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global 
kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device 
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -object 
secret,id=virtio-disk0-secret0,data=6heG0DJExrHzsPjvdMMDZEgCRzMTVhEQNM1q+t/PeVI=,keyid=masterKey0,iv=q1A9BiAx0eW1MsIpYrU56A==,format=base64
 -drive 
file=rbd:cinder-ceph/volume-22c67810-cd55-4cc2-a830-1433488003eb:id=cinder-ceph:auth_supported=cephx\;none:mon_host=10.0.91.205\:6789\;10.0.91.206\:6789\;10.0.91.207\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on,serial=22c67810-cd55-4cc2-a830-1433488003eb
 -chardev 
socket,id=charnet0,path=/tmp/15873ca6-0488-4826-9f50-bab037271c93,server 
-netdev vhost-user,chardev=charnet0,queues=16,id=hostnet0 -device 
virtio-net-pci,mq=on,vectors=34,rx_queue_size=1024,tx_queue_size=1024,netdev=hostnet0,id=net0,mac=fa:16:3e:ce:e4:df,bus=pci.0,addr=0x3
 -add-fd set=1,fd=28 -chardev 
pty,id=charserial0,logfile=/dev/fdset/1,logappend=on -device 
isa-serial,chardev=charserial0,id=serial0 -device 
usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.0.92.191:1 -k en-us -device 
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -sandbox 
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg 
timestamp=on


It looks like it is only requesting 16 queues.


@Ben, I have put those in the same file share as well 
(https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb)


Regards,

Eyle


On 9 Dec 2020, at 18:00, Steven Luong (sluong) 
mailto:slu...@cisco.com>> wrote:

Eyle,

Can you also show me the qemu command line to bring up the VM? I think it is 
asking for more than 16 queues. VPP supports up to 16.

Steven

On 12/9/20, 8:22 AM, "vpp-dev@lists.fd.io on behalf 
of Benoit Ganne (bganne) via lists.fd.io" 
mailto:vpp-dev@lists.fd.io> on behalf of 
bganne=cisco@lists.fd.io> wrote:

   Hi Eyle, could you share the associated .deb files you built (esp. vpp, 
vpp-dbg, libvppinfra , vpp-plugin-core and vpp-plugin-dpdk)?
   I cannot exploit the core without those, as you rebuilt vpp.

   Best
   ben


-Original Message-
From: Eyle Brinkhuis mailto:eyle.brinkh...@surf.nl>>
Sent: mercredi 9 décembre 2020 17:02
To: Benoit Ganne (bganne) mailto:bga...@cisco.com>>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

Hi Ben,

I have built a new 20.05.1 version with this fix cherry-picked. It gets a
lot further now: VM is actually spawning and I can see the interface being
created inside VPP. However, a little while later, VPP crashes once again.
I have created a new core dump and api-post mortem, which can be found
here:

https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb

BTW, havent yet tried this with 20.09. Let me know if you want me to do
that first. Once again, thanks for your quick reply.

Regards,

Eyle


On 8 Dec 2020, at 19:14, Benoit Ganne (bganne) via lists.fd.io
  mailto:bganne=cisco@lists.fd.io> > wrote:

Hi Eyle,

Thanks for the core, I think I identified the issue.
Can you check if https://gerrit.fd.io/r/c/vpp/+/30346 fix the issue?
It should apply to 20.05 without conflicts.

Best
ben



-Original Message-
From: Eyle Brinkhuis mailto:eyle.brinkh...@surf.nl> >

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread steven luong via lists.fd.io
Eyle,

Can you also show me the qemu command line to bring up the VM? I think it is 
asking for more than 16 queues. VPP supports up to 16.

Steven

On 12/9/20, 8:22 AM, "vpp-dev@lists.fd.io on behalf of Benoit Ganne (bganne) 
via lists.fd.io"  wrote:

Hi Eyle, could you share the associated .deb files you built (esp. vpp, 
vpp-dbg, libvppinfra , vpp-plugin-core and vpp-plugin-dpdk)?
I cannot exploit the core without those, as you rebuilt vpp.

Best
ben

> -Original Message-
> From: Eyle Brinkhuis 
> Sent: mercredi 9 décembre 2020 17:02
> To: Benoit Ganne (bganne) 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Vpp crashes with core dump vhost-user interface
> 
> Hi Ben,
> 
> I have built a new 20.05.1 version with this fix cherry-picked. It gets a
> lot further now: VM is actually spawning and I can see the interface being
> created inside VPP. However, a little while later, VPP crashes once again.
> I have created a new core dump and api-post mortem, which can be found
> here:
> 
> https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb
> 
> BTW, havent yet tried this with 20.09. Let me know if you want me to do
> that first. Once again, thanks for your quick reply.
> 
> Regards,
> 
> Eyle
> 
> 
>   On 8 Dec 2020, at 19:14, Benoit Ganne (bganne) via lists.fd.io
>     > wrote:
> 
>   Hi Eyle,
> 
>   Thanks for the core, I think I identified the issue.
>   Can you check if https://gerrit.fd.io/r/c/vpp/+/30346 fix the issue?
> It should apply to 20.05 without conflicts.
> 
>   Best
>   ben
> 
> 
> 
>   -Original Message-
>   From: Eyle Brinkhuis   >
>   Sent: mercredi 2 décembre 2020 17:13
>   To: Benoit Ganne (bganne)   >
>   Cc: vpp-dev@lists.fd.io 
>   Subject: Re: Vpp crashes with core dump vhost-user interface
> 
>   Hi Ben, all,
> 
>   I’m sorry, I forgot about adding a backtrace. I have now
> posted it here:
>   https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb
> 
> 
>   I am not too familiar with the openstack integration, but now
> that
>   20.09 is out, can't you move to 20.09? At least in your lab to
> check
>   whether you still see this issue.
> 
>   The last “guaranteed to work” version is 20.05.1 against
> networking-vpp. I
>   can still try though, in my testbed, but I’d like to keep to
> the known
>   working combinations as much as possible. Ill let you know if
> anything
>   comes up!
> 
>   Thanks for the quick replies, both you and Steven.
> 
>   Regards,
> 
>   Eyle
> 
> 
>   On 2 Dec 2020, at 16:35, Benoit Ganne (bganne)
> > wrote:
> 
>   Hi Eyle,
> 
>   I am not too familiar with the openstack integration, but now
> that
>   20.09 is out, can't you move to 20.09? At least in your lab to
> check
>   whether you still see this issue.
>   Apart from that, we'd need to decipher the backtrace to be
> able to
>   help. The best should be to share a coredump as explained
> here:
> 
>   https://fd.io/docs/vpp/master/troubleshooting/reportingissues/report
> ingiss
>   ues.html#core-files
> 
>    tingis
>   sues.html#core-files>
> 
>   Best
>   ben
> 
> 
> 
>   -Original Message-
>   From: vpp-dev@lists.fd.io 
>  d...@lists.fd.io    d...@lists.fd.io> > On Behalf Of Eyle
>   Brinkhuis
>   Sent: mercredi 2 décembre 2020 14:59
>   To: vpp-dev@lists.fd.io 
> 
>   Subject: [vpp-dev] Vpp crashes with core dump vhost-user
>   interface
> 
>   Hi all,
> 
>   In our environment (vpp 20.05.1, ubuntu 18.04.5, networking-
>   vpp 20.05.1,
>   Openstack train) we are running into an issue. When we spawn a
>   VM (regular
>   ubuntu 1804.4) with 16 CPU cores and 8G memory and a VPP
>   backed interface,
>   our VPP instance dies:
> 
>   Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
>  

Re: [vpp-dev] multicast traffic getting duplicated in lacp bond mode

2020-12-06 Thread steven luong via lists.fd.io
When you create the bond interface using either lacp or xor mode, there is an 
option to specify load-balance l2, l23, or l34 which is equivalent to linux 
xmit_hash_policy.
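
For example (illustrative; pick the hash level that matches your traffic):

create bond mode lacp load-balance l34
create bond mode xor load-balance l23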

Steven

From:  on behalf of "ashish.sax...@hsc.com" 

Date: Sunday, December 6, 2020 at 3:24 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] multicast traffic getting duplicated in lacp bond mode

Hi Steven,

Thanks for replying.

In Linux bonding, there is an option for xmit_hash_policy that prevents the
duplicate packets.

 
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-using_channel_bonding

Does VPP has anything equivalent to this linux feature?
Thanks and Regards,
Ashish




Re: [vpp-dev] multicast traffic getting duplicated in lacp bond mode

2020-12-02 Thread steven luong via lists.fd.io
Bonding does not care whether the traffic is unicast or multicast. It just hashes
the packet header and selects one of the members as the outgoing interface. The
only bonding mode in which it replicates packets across all members is broadcast,
which you did not use in your config.

That said, you can debug further to see where the problem is by using packet 
trace or pcap. You can also disable bonding to see if it helps.
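
For example, a quick illustrative way to trace (node name, counts and file name
depend on your setup):

vpp# trace add dpdk-input 50
... send a few multicast packets ...
vpp# show trace
vpp# pcap trace tx max 1000 intfc BondEthernet0 file bond-tx.pcap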

Steven

From:  on behalf of "ashish.sax...@hsc.com" 

Date: Tuesday, December 1, 2020 at 11:40 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] multicast traffic getting duplicated in lacp bond mode

Hi vpp community,

We are using vpp 20.05 on our setup. We were testing out vpp bonding in lacp 
mode. With this config, we saw that multicast traffic was getting duplicated. 
However, the unicast traffic was ok without duplicate packets. Looks like maybe 
2 joins are happening, one on each interface. We could not find anything in the
documentation regarding this.



We have tested the below combinations:

   xor
   lacp
   lacp+l34+passive
   lacp+l34+no passive command
   lacp+l23+passive
   lacp+l23+no passive
   lacp+l2+passive


Config used for testing:

create bond mode xor id 0
bond add BondEthernet0 HundredGigabitEthernet12/0/1
create sub-interfaces BondEthernet0 501
set int state HundredGigabitEthernet12/0/1 up
set int state BondEthernet0 up
set int state BondEthernet0.501 up
set interface ip address BondEthernet0.501 2001:5b0::501:b883:31f:19e:68f1/64
bond add BondEthernet0 HundredGigabitEthernet12/0/0
create sub-interfaces BondEthernet0 701
set int state HundredGigabitEthernet12/0/0 up
set int state BondEthernet0 up
set int state BondEthernet0.701 up
ip6 nd address autoconfig BondEthernet0.701 default-route

Multicast route added:

ip mroute add FF38:23:2001:5B0:2000::9901/128 via BondEthernet0.501 Accept
ip mroute add FF38:23:2001:5B0:2000::9901/128 via local Forward
ip mroute add FF38:23:2001:5B0:2000::9900/128 via tuntap-0 Accept
ip mroute add FF38:23:2001:5B0:2000::9900/128 via BondEthernet0.501 Forward

Is this a known issue? How can we stop the duplicate packets?
Thanks and Regards,
Ashish




Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-02 Thread steven luong via lists.fd.io
Please use gdb to provide a meaningful backtrace.
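
For example, something along these lines against the core file (paths are
illustrative):

gdb /usr/bin/vpp /path/to/core
(gdb) bt full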

Steven

From:  on behalf of Eyle Brinkhuis 
Date: Wednesday, December 2, 2020 at 5:59 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Vpp crashes with core dump vhost-user interface

Hi all,

In our environment (vpp 20.05.1, ubuntu 18.04.5, networking-vpp 20.05.1, 
Openstack train) we are running into an issue. When we spawn a VM (regular 
ubuntu 1804.4) with 16 CPU cores and 8G memory and a VPP backed interface, our 
VPP instance dies:

Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: 
linux_epoll_file_update:120: epoll_ctl: Operation not permitted (errno 1)
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: 
linux_epoll_file_update:120: epoll_ctl: Operation not permitted (errno 1)
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: received 
signal SIGSEGV, PC 0x7fdf80653188, faulting address 0x7ffe414b8680
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: received signal 
SIGSEGV, PC 0x7fdf80653188, faulting address 0x7ffe414b8680
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #0  
0x7fdf806556d5 0x7fdf806556d5
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #0  0x7fdf806556d5 
0x7fdf806556d5
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #1  0x7fdf7feab8a0 
0x7fdf7feab8a0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #1  
0x7fdf7feab8a0 0x7fdf7feab8a0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #2  
0x7fdf80653188 0x7fdf80653188
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #2  0x7fdf80653188 
0x7fdf80653188
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #3  0x7fdf81f29e52 
0x7fdf81f29e52
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #3  
0x7fdf81f29e52 0x7fdf81f29e52
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #4  
0x7fdf80653b79 0x7fdf80653b79
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #4  0x7fdf80653b79 
0x7fdf80653b79
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #5  
0x7fdf805f1bdb 0x7fdf805f1bdb
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #5  0x7fdf805f1bdb 
0x7fdf805f1bdb
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #6  
0x7fdf805f18c0 0x7fdf805f18c0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #7  
0x7fdf80655076 0x7fdf80655076
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #6  0x7fdf805f18c0 
0x7fdf805f18c0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #8  
0x7fdf7fa3b3f4 0x7fdf7fa3b3f4
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #7  0x7fdf80655076 
0x7fdf80655076
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #8  0x7fdf7fa3b3f4 
0x7fdf7fa3b3f4
Dec 02 13:39:39 compute03-asd002a systemd[1]: vpp.service: Main process exited, 
code=dumped, status=6/ABRT
Dec 02 13:39:39 compute03-asd002a systemd[1]: vpp.service: Failed with result 
'core-dump'.


While we are able to run 8-core VMs, we’d like to be able to create beefier ones.
VPP restarts, but never makes it to create the vhost-user interface. Has anyone
run into the same issue?

Regards,

Eyle




Re: [vpp-dev] unformat fails processing > 3 variables

2020-11-27 Thread steven luong via lists.fd.io
You have 17 format tags, but you pass 18 arguments to the unformat function. Is 
that intentional?
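
As an illustrative sketch only (not your cli.c; the names are made up), each
%-tag in the format string must pair with exactly one pointer argument, in order:

  u32 id, count;
  u8 *name = 0;

  /* 3 %-tags ("%u", "%s", "%u") must pair with exactly 3 pointer arguments */
  if (unformat (line_input, "id %u name %s count %u", &id, &name, &count))
    {
      /* ... use id, name, count ... */
    }
  vec_free (name);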

Steven

From:  on behalf of "hemant via lists.fd.io" 

Reply-To: "hem...@mnkcg.com" 
Date: Friday, November 27, 2020 at 3:52 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] unformat fails processing > 3 variables

I am writing VPP CLI for the first time. Please see this new CLI I developed 
for my VPP plugin at the link below.

https://github.com/hesingh/misc/blob/master/vpp-issues/cli.c

If I use more than three variables with unformat on line 
, I run into 
the error on this line of code in the same file 


Any idea how to get around this issue?

Thanks,

Hemant









Re: [vpp-dev] How to do Bond interface configuration as fail_over_mac=active in VPP

2020-07-20 Thread steven luong via lists.fd.io
It is not supported.

From:  on behalf of Venkatarao M 

Date: Monday, July 20, 2020 at 8:35 AM
To: "vpp-dev@lists.fd.io" 
Cc: praveenkumar A S , Lokesh Chimbili 
, Mahesh Sivapuram 
Subject: [vpp-dev] How to do Bond interface configuration as 
fail_over_mac=active in VPP

Hi all,
We are trying out bond interface configuration with VPP and are looking for a
configuration equivalent to fail_over_mac=active, as mentioned in the snippet below.
We observed that in VPP the default bond interface configuration is
fail_over_mac=none, and we could not see any CLI in VPP to configure a bond
interface with fail_over_mac set to active.

Could you please let us know configuration to achieve the same

Snip from the below link
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-using_channel_bonding

VPP Configuration
==
create bond mode active-backup id 100
bond add BondEthernet100 vpp_itf_1
bond add BondEthernet100 vpp_itf_2
ip6 table add 100
set interface ip6 table BondEthernet100 100
set interface state vpp_itf_1 up
set interface state vpp_itf_2 up
set interface state BondEthernet100 up
set interface reassembly BondEthernet100 on

Thanks
Venkatarao Malempati


Re: [tsc] [vpp-dev] Replacing master/slave nomenclature

2020-07-14 Thread steven luong via lists.fd.io
The list has a good number of suggestions. In the 802.1AX spec, they use the terms
aggregator and member link. So I am inclined to stick to aggregator/member
unless someone finds that it is unacceptable.

Steven

From: Ed Warnicke 
Date: Tuesday, July 14, 2020 at 9:45 AM
To: "Jerome Tollet (jtollet)" 
Cc: "Steven Luong (sluong)" , "Dave Barach (dbarach)" 
, "Kinsella, Ray" , Stephen Hemminger 
, "vpp-dev@lists.fd.io" , 
"t...@lists.fd.io" , "Ed Warnicke (eaw)" 
Subject: Re: [tsc] [vpp-dev] Replacing master/slave nomenclature

This is a pretty good summary of various suggestions for replacement terms:

https://www.zdnet.com/article/linux-team-approves-new-terminology-bans-terms-like-blacklist-and-slave/

Ed

On Tue, Jul 14, 2020 at 11:36 AM Jerome Tollet via 
lists.fd.io<http://lists.fd.io> 
mailto:cisco@lists.fd.io>> wrote:
Hi Steven,
Please note that per this proposition,  https://lkml.org/lkml/2020/7/4/229, 
slave must be avoided but master can be kept.
Maybe master/member or master/secondary could be options too.
Jerome

Le 14/07/2020 18:32, « vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io> au nom 
de steven luong via lists.fd.io<http://lists.fd.io> » 
mailto:vpp-dev@lists.fd.io> au nom de 
sluong=cisco@lists.fd.io<mailto:cisco@lists.fd.io>> a écrit :

I am in the process of pushing a patch to replace master/slave with 
aggregator/member for the bonding.

Steven

On 7/13/20, 4:44 AM, "vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io> on 
behalf of Dave Barach via lists.fd.io<http://lists.fd.io>" 
mailto:vpp-dev@lists.fd.io> on behalf of 
dbarach=cisco@lists.fd.io<mailto:cisco@lists.fd.io>> wrote:

+1, especially since our next release will be supported for a year, and 
API name changes are involved...

-Original Message-
From: Kinsella, Ray mailto:m...@ashroe.eu>>
Sent: Monday, July 13, 2020 6:01 AM
To: Dave Barach (dbarach) 
mailto:dbar...@cisco.com>>; Stephen Hemminger 
mailto:step...@networkplumber.org>>; 
vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>; 
t...@lists.fd.io<mailto:t...@lists.fd.io>; Ed Warnicke (eaw) 
mailto:e...@cisco.com>>
Subject: Re: [vpp-dev] Replacing master/slave nomenclature

Hi Stephen,

I agree, I don't think we should ignore this.
Ed - I suggest we table a discussion at the next FD.io TSC?

Ray K

On 09/07/2020 17:05, Dave Barach via lists.fd.io<http://lists.fd.io> 
wrote:
> Looping in the technical steering committee...
>
> -Original Message-
> From: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io> 
mailto:vpp-dev@lists.fd.io>> On Behalf Of Stephen Hemminger
> Sent: Thursday, July 2, 2020 7:02 PM
> To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
> Subject: [vpp-dev] Replacing master/slave nomenclature
>
> Is the VPP project addressing the use of master/slave nomenclature in 
the code base, documentation and CLI?  We are doing this for DPDK and it would 
be good if the replacement wording used in DPDK matched the wording used in 
FD.io projects.
>
> Particularly problematic is the use of master/slave in bonding.
> This seems to be a leftover from Linux, since none of the commercial 
products use that terminology and it is not present in 802.1AX standard.
>
> The IEEE and IETF are doing an across the board look at these terms 
in standards.
>
>
>
>





Re: [vpp-dev] Replacing master/slave nomenclature

2020-07-14 Thread steven luong via lists.fd.io
I am in the process of pushing a patch to replace master/slave with 
aggregator/member for the bonding.

Steven

On 7/13/20, 4:44 AM, "vpp-dev@lists.fd.io on behalf of Dave Barach via 
lists.fd.io"  
wrote:

+1, especially since our next release will be supported for a year, and API 
name changes are involved... 

-Original Message-
From: Kinsella, Ray  
Sent: Monday, July 13, 2020 6:01 AM
To: Dave Barach (dbarach) ; Stephen Hemminger 
; vpp-dev@lists.fd.io; t...@lists.fd.io; Ed 
Warnicke (eaw) 
Subject: Re: [vpp-dev] Replacing master/slave nomenclature

Hi Stephen,

I agree, I don't think we should ignore this.
Ed - I suggest we table a discussion at the next FD.io TSC?

Ray K

On 09/07/2020 17:05, Dave Barach via lists.fd.io wrote:
> Looping in the technical steering committee...
> 
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Stephen 
Hemminger
> Sent: Thursday, July 2, 2020 7:02 PM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Replacing master/slave nomenclature
> 
> Is the VPP project addressing the use of master/slave nomenclature in the 
code base, documentation and CLI?  We are doing this for DPDK and it would be 
good if the replacement wording used in DPDK matched the wording used in FD.io 
projects.
> 
> Particularly problematic is the use of master/slave in bonding.
> This seems to be a leftover from Linux, since none of the commercial 
products use that terminology and it is not present in 802.1AX standard.
> 
> The IEEE and IETF are doing an across the board look at these terms in 
standards.
> 
> 
> 
> 



Re: [vpp-dev] Userspace tcp between two vms using vhost user interface?

2020-07-02 Thread steven luong via lists.fd.io
Inline.

From:  on behalf of "sadhanakesa...@gmail.com" 

Date: Thursday, July 2, 2020 at 9:55 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Userspace tcp between two vms using vhost user interface?

Hi,
There seem to be a lot of ways to set up userspace TCP with the VPP host stack,
with and without mtcp mode.
In the following link,
1. https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vhost-User_Interface -
is this using userspace TCP?

 No. 

Are the IPs configured here local?


Yes, local to the VMs.

VM1 (192.168.0.1/24)  VPP --- (192.168.0.2/24) VM2

Steven


I understood from a previous discussion that the IPs created in the built-in echo
test client/server are local - hence they may not be accessible outside the VM
session?


Re: [vpp-dev] Need help with setup.. cannot ping a VPP interface.

2020-06-15 Thread steven luong via lists.fd.io
rface from System A ==
$ ping -c1 10.1.1.11
PING 10.1.1.11 (10.1.1.11) 56(84) bytes of data.
64 bytes from 10.1.1.11: icmp_seq=1 ttl=64 time=0.033 ms

--- 10.1.1.11 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms

== Ping System B interface from System B ==
$ ping -c1 10.1.1.10
PING 10.1.1.10 (10.1.1.10) 56(84) bytes of data.

--- 10.1.1.10 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

Thanks
Manoj Iyer

From: Dave Barach (dbarach) 
Sent: Friday, June 12, 2020 3:21 PM
To: Steven Luong (sluong) ; Manoj Iyer ; 
vpp-dev@lists.fd.io 
Subject: RE: [vpp-dev] Need help with setup.. cannot ping a VPP interface.


+check hardware addresses with “show hardware”, to make sure you’ve configured 
the interface which is actually connected to the peer system / switch...



HTH... Dave



From: vpp-dev@lists.fd.io  On Behalf Of steven luong via 
lists.fd.io
Sent: Friday, June 12, 2020 4:18 PM
To: Manoj Iyer ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Need help with setup.. cannot ping a VPP interface.



Please correct the subnet mask first.

  L3 10.1.1.10/24. <-- system A

   inet 10.1.1.11  netmask 255.0.0.0  broadcast 10.255.255.255  <--- system B



Steven



From: mailto:vpp-dev@lists.fd.io>> on behalf of Manoj Iyer 
mailto:manoj.i...@arm.com>>
Date: Friday, June 12, 2020 at 12:28 PM
To: "vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>" 
mailto:vpp-dev@lists.fd.io>>
Subject: [vpp-dev] Need help with setup.. cannot ping a VPP interface.



Hello,



I am very new to VPP and I am having trouble pinging the vpp interface. I have 
a system with ip address 10.1.1.10 set up using a VPP interface, and I have 
another system with ip address 10.1.1.11 setup with no VPP. Both systems are 
connected though a switch.



If I do not use VPP I am able to ping each other, but when I use VPP to 
configure one of the IPs I am unable to ping.



I know this might be a very basic setup issue. Could someone please point me in 
the right direction. I have read through

https://wiki.fd.io/view/VPP/How_To_Connect_A_PCI_Interface_To_VPP



== on system A ==

$ sudo vppctl show interface address

bnxt0 (dn):

bnxt1 (dn):

bnxt2 (dn):

bnxt3 (up):

  L3 10.1.1.10/24

local0 (dn):



$ sudo vppctl show interface

  Name   IdxState  MTU (L3/IP4/IP6/MPLS) 
Counter  Count

bnxt0 1 down 9000/0/0/0

bnxt1 2 down 9000/0/0/0

bnxt2 3 down 9000/0/0/0

bnxt3 4  up  9000/0/0/0 rx packets  
 498

rx bytes
   74839

drops   
 498

ip4 
 112

ip6 
 188

local00 down  0/0/0/0





== on system B ==

$ ifconfig enp2s0f2np0

enp2s0f2np0: flags=4163  mtu 1500

inet 10.1.1.11  netmask 255.0.0.0  broadcast 10.255.255.255

ether b0:26:28:82:09:ce  txqueuelen 1000  (Ethernet)

RX packets 250  bytes 59677 (59.6 KB)

RX errors 0  dropped 0  overruns 0  frame 0

TX packets 103  bytes 15583 (15.5 KB)

TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0







Re: [vpp-dev] Need help with setup.. cannot ping a VPP interface.

2020-06-12 Thread steven luong via lists.fd.io
Please correct the subnet mask first.
  L3 10.1.1.10/24. <-- system A
   inet 10.1.1.11  netmask 255.0.0.0  broadcast 10.255.255.255  <--- system B
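
For example (illustrative; use your actual interface name), on system B you could
set a matching /24 with standard iproute2 commands:

sudo ip addr flush dev enp2s0f2np0
sudo ip addr add 10.1.1.11/24 dev enp2s0f2np0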

Steven

From:  on behalf of Manoj Iyer 
Date: Friday, June 12, 2020 at 12:28 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Need help with setup.. cannot ping a VPP interface.

Hello,

I am very new to VPP and I am having trouble pinging the vpp interface. I have 
a system with ip address 10.1.1.10 set up using a VPP interface, and I have 
another system with IP address 10.1.1.11 set up with no VPP. Both systems are
connected through a switch.

If I do not use VPP I am able to ping each other, but when I use VPP to 
configure one of the IPs I am unable to ping.

I know this might be a very basic setup issue. Could someone please point me in 
the right direction. I have read through
https://wiki.fd.io/view/VPP/How_To_Connect_A_PCI_Interface_To_VPP

== on system A ==
$ sudo vppctl show interface address
bnxt0 (dn):
bnxt1 (dn):
bnxt2 (dn):
bnxt3 (up):
  L3 10.1.1.10/24
local0 (dn):

$ sudo vppctl show interface
  Name   IdxState  MTU (L3/IP4/IP6/MPLS) 
Counter  Count
bnxt0 1 down 9000/0/0/0
bnxt1 2 down 9000/0/0/0
bnxt2 3 down 9000/0/0/0
bnxt3 4  up  9000/0/0/0 rx packets  
 498
rx bytes
   74839
drops   
 498
ip4 
 112
ip6 
 188
local00 down  0/0/0/0


== on system B ==
$ ifconfig enp2s0f2np0
enp2s0f2np0: flags=4163  mtu 1500
inet 10.1.1.11  netmask 255.0.0.0  broadcast 10.255.255.255
ether b0:26:28:82:09:ce  txqueuelen 1000  (Ethernet)
RX packets 250  bytes 59677 (59.6 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 103  bytes 15583 (15.5 KB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0




Re: [vpp-dev] Unable to ping vpp interface from outside after configuring vrrp on vpp interface and making it as Master

2020-06-08 Thread steven luong via lists.fd.io
Vmxnet3 is a paravirtualized device. I could be wrong, but it does not appear to
support adding a virtual MAC address. This error returned from DPDK indicates
just that.

Jun  8 12:32:43 bfs-dl360g10-14-vm17 vnet[29645]: vrrp_vr_transition_vmac:120: 
Adding virtual MAC address 00:00:5e:00:01:01 on hardware interface 1
Jun  8 12:32:43 bfs-dl360g10-14-vm17 vnet[29645]: dpdk_add_del_mac_address: mac 
address add/del failed: -95

#define EOPNOTSUPP  95  /* Operation not supported on transport 
endpoint */

I suggest you try it on the physical NIC to see if that works.

Steven

From:  on behalf of Amit Mehra 
Date: Monday, June 8, 2020 at 5:57 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Unable to ping vpp interface from outside after configuring 
vrrp on vpp interface and making it as Master

Hi,

I am trying to test VRRP functionality using the VRRP plugin available in VPP 20.05.
After running VRRP on one of the VPP nodes and configuring it as the master, I am
not able to ping the VPP interface from outside. I am using the following
configuration:

modprobe -r vfio_pci
modprobe -r vfio

./bin/dpdk-devbind.py -s  // bind to vfio-pci driver

Network devices using DPDK-compatible driver

:13:00.0 'VMXNET3 Ethernet Controller' drv=vfio-pci unused=vmxnet3
:1b:00.0 'VMXNET3 Ethernet Controller' drv=vfio-pci unused=vmxnet3

./bin/vppctl create interface vmxnet3 :13:00.0
./bin/vppctl set interface ip address vmxnet3-0/13/0/0 10.20.53.143/24
./bin/vppctl set int state vmxnet3-0/13/0/0 up

./bin/vppctl vrrp vr add GigabitEthernet13/0/0 vr_id 1 priority 255 interval 1 
accept_mode 10.20.53.143
./bin/vppctl vrrp proto start GigabitEthernet13/0/0 vr_id 1

Also, when I try to ping this VPP interface IP (10.20.53.143) from an outside
machine (on the same 10.20.53.xx subnet), I can see that the ARP request is
received by VPP and answered with the MAC address 00:00:5E:00:01:01 in the ARP
reply.
However, the ICMP packets do not enter VPP. I verified this using the "trace"
functionality in VPP, i.e. trace add vmxnet3-input 200.

Moreover, I can see that VRRP packets (announcements) are continuously being
transmitted by the VPP interface.

Can someone confirm whether I am following the correct steps, and what could be
the reason why the ICMP packets are not being received by VPP? I tried enabling
promiscuous mode on the VPP interface using "set interface promiscuous on
vmxnet3-0/13/0/0", but the ICMP packets still did not enter VPP.

Also, is there any CLI by which I can see that the virtual MAC has been assigned
on the VPP interface?

I also tried testing with dpdk plugin, but observing the following log in 
vpp.log while executing ./bin/vppctl vrrp proto start GigabitEthernet13/0/0 
vr_id 1 CLI

Jun  8 12:32:43 bfs-dl360g10-14-vm17 vnet[29645]: vrrp_vr_transition_vmac:120: 
Adding virtual MAC address 00:00:5e:00:01:01 on hardware interface 1
Jun  8 12:32:43 bfs-dl360g10-14-vm17 vnet[29645]: dpdk_add_del_mac_address: mac 
address add/del failed: -95


Re: [vpp-dev] worker thread deadlock for current master branch, started with commit "bonding: adjust link state based on active slaves"

2020-05-29 Thread steven luong via lists.fd.io
The problem is the aforementioned commit added a call to invoke 
vnet_hw_interface_set_flags() in the worker thread. That is no can do. We are 
in the process of reverting the commit.

Steven

On 5/29/20, 10:02 AM, "vpp-dev@lists.fd.io on behalf of Elias Rudberg" 
 wrote:

Hello,

We now get this kind of error for the current master branch (5bb3e81e):

vlib_worker_thread_barrier_sync_int: worker thread deadlock

Testing previous commits indicates the problem started with the recent
commit 9121c415 "bonding: adjust link state based on active slaves"
(AuthorDate May 18, CommitDate May 27).

We can reproduce the problem using the following config:

unix {
  nodaemon
  exec /etc/vpp/commands.txt
}
cpu {
  workers 10
}

where commands.txt looks like this:

create bond mode lacp load-balance l23
create int rdma host-if enp101s0f1 name Interface101
create int rdma host-if enp179s0f1 name Interface179
bond add BondEthernet0 Interface101
bond add BondEthernet0 Interface179
create sub-interfaces BondEthernet0 1012
create sub-interfaces BondEthernet0 1013
set int ip address BondEthernet0.1012 10.1.1.1/30
set int ip address BondEthernet0.1013 10.1.2.1/30
set int state BondEthernet0 up
set int state Interface101 up
set int state Interface179 up
set int state BondEthernet0.1012 up
set int state BondEthernet0.1013 up

Then we get the "worker thread deadlock" every time at startup, after
just a few seconds.

We get the following gdb backtrace (for a release build):

vlib_worker_thread_barrier_sync_int: worker thread deadlock
Thread 3 "vpp_wk_0" received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffe027fe700 (LWP 12171)]
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at
../sysdeps/unix/sysv/linux/raise.c:51
#1  0x742ff801 in __GI_abort () at abort.c:79
#2  0xc700 in os_panic () at vpp/src/vpp/vnet/main.c:371
#3  0x75dd03ab in vlib_worker_thread_barrier_sync_int
(vm=0x7fffb87c0300, func_name=) at
vpp/src/vlib/threads.c:1517
#4  0x777bfa9c in dpo_get_next_node (child_type=, child_proto=, parent_dpo=0x7fffb9cebda0) at
vpp/src/vnet/dpo/dpo.c:430
#5  dpo_stack (child_type=, child_proto=,
dpo=, parent=0x7fffb9cebda0) at
vpp/src/vnet/dpo/dpo.c:521
#6  0x777c50ac in load_balance_set_bucket_i (lb=0x7fffb8e784c0,
bucket=, buckets=0x7fffb8e784e0, next=)
at vpp/src/vnet/dpo/load_balance.c:252
#7  load_balance_fill_buckets_norm (lb=0x7fffb8e784c0,
nhs=0x7fffb9cebda0, buckets=0x7fffb8e784e0, n_buckets=)
at vpp/src/vnet/dpo/load_balance.c:525
#8  load_balance_fill_buckets (lb=0x7fffb8e784c0, nhs=0x7fffb9cebda0,
buckets=0x7fffb8e784e0, n_buckets=, flags=)
at vpp/src/vnet/dpo/load_balance.c:589
#9  0x777c4d5f in load_balance_multipath_update (dpo=, raw_nhs=, flags=) at
vpp/src/vnet/dpo/load_balance.c:88
#10 0x7778e0fc in fib_entry_src_mk_lb
(fib_entry=0x7fffb90dd770, esrc=0x7fffb8c60150,
fct=FIB_FORW_CHAIN_TYPE_UNICAST_IP4, dpo_lb=0x7fffb90dd798)
at vpp/src/vnet/fib/fib_entry_src.c:645
#11 0x7778e4b7 in fib_entry_src_action_install
(fib_entry=0x7fffb90dd770, source=FIB_SOURCE_INTERFACE) at
vpp/src/vnet/fib/fib_entry_src.c:705
#12 0x7778f0b0 in fib_entry_src_action_reactivate
(fib_entry=0x7fffb90dd770, source=FIB_SOURCE_INTERFACE) at
vpp/src/vnet/fib/fib_entry_src.c:1221
#13 0x7778d873 in fib_entry_back_walk_notify
(node=0x7fffb90dd770, ctx=0x7fffb89c21d0) at
vpp/src/vnet/fib/fib_entry.c:316
#14 0x7778343b in fib_walk_advance (fwi=) at
vpp/src/vnet/fib/fib_walk.c:368
#15 0x77784107 in fib_walk_sync (parent_type=,
parent_index=, ctx=0x7fffb89c22a0) at
vpp/src/vnet/fib/fib_walk.c:792
#16 0x7779a43b in fib_path_back_walk_notify (node=, ctx=0x7fffb89c22a0) at vpp/src/vnet/fib/fib_path.c:1226
#17 0x7778343b in fib_walk_advance (fwi=) at
vpp/src/vnet/fib/fib_walk.c:368
#18 0x77784107 in fib_walk_sync (parent_type=,
parent_index=, ctx=0x7fffb89c2330) at
vpp/src/vnet/fib/fib_walk.c:792
#19 0x777a6dec in adj_glean_interface_state_change
(vnm=, sw_if_index=5, flags=) at
vpp/src/vnet/adj/adj_glean.c:166
#20 adj_nbr_hw_sw_interface_state_change (vnm=,
sw_if_index=5, arg=) at vpp/src/vnet/adj/adj_glean.c:183
#21 0x770e06cc in vnet_hw_interface_walk_sw (vnm=0x77b570f0
, hw_if_index=, fn=0x777a6da0
, ctx=0x1)
at vpp/src/vnet/interface.c:1062
#22 0x777a6b72 in adj_glean_hw_interface_state_change (vnm=0x2,
hw_if_index=3097238656, flags=) at

Re: [vpp-dev] Query regarding bonding in Vpp 19.08

2020-04-20 Thread steven luong via lists.fd.io
First, your question has nothing to do with bonding. Whatever you are seeing is
true regardless of whether bonding is configured or not.

Show interfaces displays the admin state of the interface. Whenever you set the
admin state to up, it is displayed as up regardless of whether the physical
carrier is up or down. While the admin state may be up, the physical carrier may
be down.

Show hardware displays the physical state of the interface, carrier up or down.
The admin state must be set to up before the hardware carrier state can show as up.
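
For example (interface name taken from your output, purely illustrative):

vpp# show interface device_5d/0/0              <- admin state
vpp# show hardware-interfaces device_5d/0/0    <- carrier/link state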

Steven

From:  on behalf of chetan bhasin 

Date: Sunday, April 19, 2020 at 11:40 PM
To: vpp-dev 
Subject: [vpp-dev] Query regarding bonding in Vpp 19.08

Hi,

I am using VPP 19.08. When I use a bonding configuration, I see the output below
from the "show int" CLI.
Query: is it OK for a slave interface to show as up in "show interface" while,
per "show hardware-interfaces", it is down?

vpp# show int
              Name             Idx   State   MTU (L3/IP4/IP6/MPLS)   Counter       Count
BondEthernet0                    3    up       9000/0/0/0            rx packets       12
BondEthernet0.811                4    up       0/0/0/0               rx packets        6
BondEthernet0.812                5    up       0/0/0/0               rx packets        6
device_5d/0/0                    1    up       9000/0/0/0            rx packets       12
device_5d/0/1                    2    up       9000/0/0/0            rx packets       17
                                                                     rx bytes       1100
                                                                     drops            14
local0                           0    down     0/0/0/0

Thanks,
Chetan


Re: [vpp-dev] Unknown input `tap connect` #vpp

2020-04-14 Thread steven luong via lists.fd.io
tapcli was deprecated a few releases ago. It has been replaced by virtio over
tap. The new CLI is

create tap …
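
For example (the id, names and address here are illustrative only):

create tap id 0 host-if-name tap0 host-ip4-addr 10.10.1.1/24
set interface state tap0 up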

Steven

From:  on behalf of "mauricio.solisjr via lists.fd.io" 

Reply-To: "mauricio.soli...@tno.nl" 
Date: Tuesday, April 14, 2020 at 3:55 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Unknown input `tap connect` #vpp

Hi,
I am trying to connect a tap interface to VPP using the "tap connect tap0"
command. I receive an "Unknown input" error whenever I attempt it.

Am I missing a plugin? I'm on CentOS7 running vpp v20.01 and the following are 
my plugins.

abf_plugin.so gbp_plugin.so  mactime_plugin.so  
sctp_plugin.so

acl_plugin.so gtpu_plugin.so map_plugin.so  
srv6ad_plugin.so

avf_plugin.so hs_apps_plugin.so  mdata_plugin.so
srv6am_plugin.so

builtinurl_plugin.so  http_static_plugin.so  memif_plugin.so
srv6as_plugin.so

cdp_plugin.so igmp_plugin.so nat_plugin.so  
srv6mobile_plugin.so

crypto_ia32_plugin.so ikev2_plugin.sonsh_plugin.so  
stn_plugin.so

crypto_ipsecmb_plugin.so  ila_plugin.so  nsim_plugin.so 
svs_plugin.so

crypto_openssl_plugin.so  ioam_plugin.so oddbuf_plugin.so   
tlsmbedtls_plugin.so

ct6_plugin.so ixge_plugin.so perfmon_plugin.so  
tlsopenssl_plugin.so

dhcp_plugin.sol2e_plugin.so  ping_plugin.so 
tlspicotls_plugin.so

dns_plugin.so l3xc_plugin.so pppoe_plugin.so
unittest_plugin.so

dpdk_plugin.solacp_plugin.so quic_plugin.so 
vmxnet3_plugin.so

flowprobe_plugin.so   lb_plugin.so   rdma_plugin.so

Thanks


Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
Dear Andrew,

I confirm that master has been rescued and reverted from “lockdown” back to
“normal”. Please proceed with the “disinfection process” on 19.08 and 20.01 if
you will.

Steven

From: Andrew  Yourtchenko 
Date: Monday, April 6, 2020 at 8:09 AM
To: "Steven Luong (sluong)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Jobs are failing due to inspect.py

Sphinx upgraded itself last night under the hood to a (crashing) version 3.0.0 
from 2.4.4.

I made a pin on master, so the master should be ok now - rebase and recheck 
please, and let me know if it works!

Will do the same on the other two branches later today if we are all happy 
about the fix on master...
--a


On 6 Apr 2020, at 17:03, steven luong via lists.fd.io 
 wrote:
Folks,

It looks like jobs for all branches, 19.08, 20.01, and master, are failing due 
to this inspect.py error. Could somebody who is familiar with the issue please 
take a look at it?


18:59:12 Exception occurred:

18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap

18:59:12 raise ValueError('wrapper loop when unwrapping {!r}'.format(f))

18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField

18:59:12 The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log, if 
you want to report the issue to the developers.

18:59:12 Please also report this if it was a user error, so that a better error 
message can be provided next time.

18:59:12 A bug report can be filed in the tracker at 
<https://github.com/sphinx-doc/sphinx/issues>. Thanks!

18:59:12 Makefile:71: recipe for target 'html' failed

18:59:12 make[2]: *** [html] Error 2

18:59:12 make[2]: Leaving directory 
'/w/workspace/vpp-make-test-docs-verify-1908/test/doc'

18:59:12 Makefile:237: recipe for target 'doc' failed

18:59:12 make[1]: *** [doc] Error 2

18:59:12 make[1]: Leaving directory 
'/w/workspace/vpp-make-test-docs-verify-1908/test'

18:59:12 Makefile:449: recipe for target 'test-doc' failed

18:59:12 make: *** [test-doc] Error 2

18:59:12 Build step 'Execute shell' marked build as failure

18:59:12 $ ssh-agent -k


Steven



Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
master
https://jenkins.fd.io/job/vpp-make-test-docs-verify-master/19049/console

20.01
https://jenkins.fd.io/job/vpp-make-test-docs-verify-2001/61/console

Steven

From: Paul Vinciguerra 
Date: Monday, April 6, 2020 at 8:35 AM
To: Paul Vinciguerra 
Cc: "Steven Luong (sluong)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] Jobs are failing due to inspect.py

I have not been able to reproduce the problem from a fresh ubuntu 18.04 
container and for me, the build succeeds.

build succeeded, 240 warnings.
The HTML pages are in ../../build-root/build-test/doc/html.
make[2]: Leaving directory '/vpp/test/doc'
If someone can send me the error log:
The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log
I will gladly look into it.

Paul

On Mon, Apr 6, 2020 at 11:15 AM Paul Vinciguerra via 
lists.fd.io<http://lists.fd.io> 
mailto:vinciconsulting@lists.fd.io>>
 wrote:
Andrew submitted a changeset that backs out the updated Sphinx package.  I am 
building the target 'test-doc' to try to learn the root cause.

On Mon, Apr 6, 2020 at 11:03 AM steven luong via 
lists.fd.io<http://lists.fd.io> 
mailto:cisco@lists.fd.io>> wrote:
Folks,

It looks like jobs for all branches, 19.08, 20.01, and master, are failing due 
to this inspect.py error. Could somebody who is familiar with the issue please 
take a look at it?


18:59:12 Exception occurred:

18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap

18:59:12 raise ValueError('wrapper loop when unwrapping {!r}'.format(f))

18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField

18:59:12 The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log, if 
you want to report the issue to the developers.

18:59:12 Please also report this if it was a user error, so that a better error 
message can be provided next time.

18:59:12 A bug report can be filed in the tracker at 
<https://github.com/sphinx-doc/sphinx/issues>. Thanks!

18:59:12 Makefile:71: recipe for target 'html' failed

18:59:12 make[2]: *** [html] Error 2

18:59:12 make[2]: Leaving directory 
'/w/workspace/vpp-make-test-docs-verify-1908/test/doc'

18:59:12 Makefile:237: recipe for target 'doc' failed

18:59:12 make[1]: *** [doc] Error 2

18:59:12 make[1]: Leaving directory 
'/w/workspace/vpp-make-test-docs-verify-1908/test'

18:59:12 Makefile:449: recipe for target 'test-doc' failed

18:59:12 make: *** [test-doc] Error 2

18:59:12 Build step 'Execute shell' marked build as failure

18:59:12 $ ssh-agent -k


Steven




[vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
Folks,

It looks like jobs for all branches, 19.08, 20.01, and master, are failing due 
to this inspect.py error. Could somebody who is familiar with the issue please 
take a look at it?


18:59:12 Exception occurred:

18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap

18:59:12 raise ValueError('wrapper loop when unwrapping {!r}'.format(f))

18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField

18:59:12 The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log, if 
you want to report the issue to the developers.

18:59:12 Please also report this if it was a user error, so that a better error 
message can be provided next time.

18:59:12 A bug report can be filed in the tracker at 
. Thanks!

18:59:12 Makefile:71: recipe for target 'html' failed

18:59:12 make[2]: *** [html] Error 2

18:59:12 make[2]: Leaving directory 
'/w/workspace/vpp-make-test-docs-verify-1908/test/doc'

18:59:12 Makefile:237: recipe for target 'doc' failed

18:59:12 make[1]: *** [doc] Error 2

18:59:12 make[1]: Leaving directory 
'/w/workspace/vpp-make-test-docs-verify-1908/test'

18:59:12 Makefile:449: recipe for target 'test-doc' failed

18:59:12 make: *** [test-doc] Error 2

18:59:12 Build step 'Execute shell' marked build as failure

18:59:12 $ ssh-agent -k


Steven


Re: [vpp-dev] Unknown input `ping' #vpp

2020-03-26 Thread steven luong via Lists.Fd.Io
The ping command has been moved to a separate plugin. You probably didn’t have
the ping plugin enabled in your startup.conf. Please add the ping plugin to
your startup.conf. Something like this will do the trick.

plugins {
…
plugin ping_plugin.so { enable }
}
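
After restarting VPP with the plugin listed, a quick sanity check from the shell
could look like this (the address below is just a placeholder for illustration):

$ vppctl show plugins | grep ping
$ vppctl ping 192.168.1.1 repeat 3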

From:  on behalf of "mauricio.solisjr via Lists.Fd.Io" 

Reply-To: "mauricio.soli...@tno.nl" 
Date: Thursday, March 26, 2020 at 7:16 AM
To: "vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Unknown input `ping' #vpp

Hi,
I have followed the simple tutorial to Create an Interface and I'm able to ping 
from the host, but when I try from the VPP I get "Unknown input `ping' ". This 
is also the case for arp.  What could be the issue here?

I'm running:
CentOS 7
vpp v20.01-release built by root on 8e5d994b3d26


Re: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 1" #vmxnet3 #ipsec

2020-02-25 Thread steven luong via Lists.Fd.Io


From:  on behalf of "ravinder.ya...@hughes.com" 

Date: Tuesday, February 25, 2020 at 7:27 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 
1" #vmxnet3 #ipsec


[Edited Message Follows]
VPP IPsec responder on ESXI VM RHEL 7.6

Is there a limitation on the number of num-tx-queues and num-rx-queues we can 
associate with vmxnet3? I have 12 worker threads running but when i try to 
associate more than 4 num-rx-queues i get error saying "vmxnet3 failed to 
activate dev error 1" 

Setup Details:
I have vpp running on 16 vCPUs and set the worker thread to use vCPU (2-15) i.e 
14 worker threads.

Error:
I run into vmxnet3 error when i try to create vmxnet interface with:

  *   num-tx-queue set to greater than 8 (Error #1 below)
 The limit is 8 for TX. 

  *   num-rx-queues set to greater than 4 (Error #2 below)

The above statement is not entirely true. I am able to set num-rx-queues 
greater than 4 in two different cases as below.

DBGvpp# create interface vmxnet3 :13:00.0 rx-queue-size 2048 tx-queue-size 
2048 num-tx-queues 8 num-rx-queues 8
create interface vmxnet3 :13:00.0 rx-queue-size 2048 tx-queue-size 2048 
num-tx-queues 8 num-rx-queues 8
DBGvpp#
DBGvpp# create interface vmxnet3 :13:00.0  num-tx-queues 8 num-rx-queues 8
create interface vmxnet3 :13:00.0  num-tx-queues 8 num-rx-queues 8
DBGvpp#

However, some odd combinations like (tx=4, rx=5) and (tx=4, rx=6) are rejected
by the ESXi driver. I don’t know why yet.



ERROR #1: vCPUs = 16 and Worker Threads = 14

vpp# create interface vmxnet3 :13:00.0 rx-queue-size 2048 tx-queue-size 
2048 num-tx-queues 12 num-rx-queues 4 bind

ERROR: create interface vmxnet3: number of tx queues must be <= 8 and <= number 
of CPU's assigned to VPP

Works fine when num-tx-queues 8

 Isn’t the error obvious to you? The limit for tx queues is 8.

Steven



ERROR #2:

vpp# create interface vmxnet3 :13:00.0 rx-queue-size 2048 tx-queue-size 
2048 num-tx-queues 8 num-rx-queues 5 bind

ERROR: create interface vmxnet3: error on activating device rc (1)

Works fine when num-rx-queues 4

vpp# create interface vmxnet3 :13:00.0 rx-queue-size 2048 tx-queue-size 
2048 num-tx-queues 8 num-rx-queues 4 bind

vpp# sh int rx-placement

Thread 1 (vpp_wk_0):

  node vmxnet3-input:

vmxnet3-0/13/0/0 queue 0 (polling)

Thread 2 (vpp_wk_1):

  node vmxnet3-input:

vmxnet3-0/13/0/0 queue 1 (polling)

Thread 3 (vpp_wk_2):

  node vmxnet3-input:

vmxnet3-0/13/0/0 queue 2 (polling)

Thread 4 (vpp_wk_3):

  node vmxnet3-input:

vmxnet3-0/13/0/0 queue 3 (polling)

Reference:


https://vpp.flirble.org/master/db/df1/clicmd_src_plugins_vmxnet3.html
create interface vmxnet3 <pci-address> [rx-queue-size <size>] [tx-queue-size
<size>] [num-tx-queues <number>] [num-rx-queues <number>] [bind] [gso].


Thank you,
Ravin


Re: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 1" #vmxnet3 #ipsec

2020-02-24 Thread steven luong via Lists.Fd.Io
It works for me, although I am on an Ubuntu 18.04 VM. Your statement is unclear
to me as to whether your problem is strictly related to more than 4 rx-queues
when you say
“but when i try to associate more than 4 num-rx-queues i get error”

Does it work fine when you reduce the number of rx-queues to fewer than 4?

vpp# create interface vmxnet3 :13:00.0 rx-queue-size 2048 tx-queue-size 
2048 num-tx-queues 2 num-rx-queues 4
create interface vmxnet3 :13:00.0 rx-queue-size 2048 tx-queue-size 2048 
num-tx-queues 2 num-rx-queues 4
vpp# sh int
sh int
  Name   IdxState  MTU (L3/IP4/IP6/MPLS) 
Counter  Count
local00 down  0/0/0/0
vmxnet3-0/13/0/0  1 down 9000/0/0/0
vpp# set interface state vmxnet3-0/13/0/0 up
set interface state vmxnet3-0/13/0/0 up
vpp# sh int
sh int
  Name   IdxState  MTU (L3/IP4/IP6/MPLS) 
Counter  Count
local00 down  0/0/0/0
vmxnet3-0/13/0/0  1  up  9000/0/0/0 rx packets  
   2
rx bytes
 120
drops   
   2
ip4 
   2
vpp#

$ $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 18.04.1 LTS
Release:   18.04
Codename:  bionic
$

From:  on behalf of "ravinder.ya...@hughes.com" 

Date: Monday, February 24, 2020 at 8:47 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 
1" #vmxnet3 #ipsec

VPP IPsec responder on ESXI VM RHEL 7.6

Is there a limitation on the number of num-tx-queues and num-rx-queues we can 
associate with vmxnet3? I have 12 worker threads running but when i try to 
associate more than 4 num-rx-queues i get error saying "vmxnet3 failed to 
activate dev error 1" 

vppctl create interface vmxnet3 :13:00.0 rx-queue-size 2048 tx-queue-size 
2048 num-tx-queues 2 num-rx-queues 4 bind

Reference:


https://vpp.flirble.org/master/db/df1/clicmd_src_plugins_vmxnet3.html
create interface vmxnet3 <pci-address> [rx-queue-size <size>] [tx-queue-size
<size>] [num-tx-queues <number>] [num-rx-queues <number>] [bind] [gso].

Thank you,
Ravin


Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

2020-01-07 Thread steven luong via Lists.Fd.Io
So you now know which command in the dpdk section DPDK doesn’t like.
Try adding “log-level debug” to the dpdk section of startup.conf to see if you
can find more helpful messages from DPDK in “vppctl show log” about why it
fails to probe the NIC.

Steven

From:  on behalf of Gencli Liu <18600640...@163.com>
Date: Tuesday, January 7, 2020 at 7:42 PM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

Hi steven:
Thank you for your reply!
I followed your advice (3) and made some attempts.
I created three startup config files for vpp:
The first one is named "startup.conf.smp", the second one is named
"startup.conf" (my config file).
The third one is named "startup.conf.ok"; it just removes "uio-driver
vfio-pci" from "startup.conf".
- startup.conf.smp 
unix { interactive }
-- startup.conf ---
unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen /run/vpp/cli.sock
gid vpp
}
api-trace { on }
api-segment { gid vpp }
socksvr { default }
cpu {
main-core 30
corelist-workers 26,28
workers 2
}
dpdk {
dev default {
num-rx-queues 1
num-tx-queues 2
}
dev :3b:00.0
dev :3b:00.1
#dev :3b:00.2
#dev :3b:00.3
uio-driver vfio-pci
}
-- startup.conf.ok ---
unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen /run/vpp/cli.sock
gid vpp
}
api-trace { on }
api-segment { gid vpp }
socksvr { default }
cpu {
main-core 30
corelist-workers 26,28
workers 2
}
dpdk {
dev default {
num-rx-queues 1
num-tx-queues 2
}
dev :3b:00.0
dev :3b:00.1
#dev :3b:00.2
#dev :3b:00.3
#uio-driver vfio-pci (just modify here on the basis of startup.conf--my config)
# @steven, do you know why this option makes the difference?@
}
-
I'm not familiar with testpmd, but I will take some time to find out how it 
works.

(1) When I use “startup.conf.smp” and follow the operation sequence below
after CentOS startup, it seems OK:
[root@localhost ~]# modprobe vfio-pci
[root@localhost ~]# lsmod | grep vfio
vfio_pci   41412  2
vfio_iommu_type1   22440  0
vfio   32657  8 vfio_iommu_type1,vfio_pci
irqbypass  13503  4 kvm,vfio_pci
[root@localhost ~]#/usr/bin/numactl --cpubind=0 --membind=0 /usr/bin/vpp -c 
/etc/vpp/startup.conf.smp
...
vpp# show pci
Address  Sock VID:PID Link Speed   Driver  Product Name 
   Vital Product Data
...
:3b:00.0   0  8086:1572   8.0 GT/s x8  vfio-pciXL710 40GbE 
Controller  RV: 0x 86
:3b:00.1   0  8086:1572   8.0 GT/s x8  vfio-pciXL710 40GbE 
Controller  RV: 0x 86
:3b:00.2   0  8086:1572   8.0 GT/s x8  vfio-pciXL710 40GbE 
Controller  RV: 0x 86
:3b:00.3   0  8086:1572   8.0 GT/s x8  vfio-pciXL710 40GbE 
Controller  RV: 0x 86
vpp# show interface
  Name   IdxState  MTU (L3/IP4/IP6/MPLS) 
Counter  Count
TenGigabitEthernet3b/0/0  1 down 9000/0/0/0
TenGigabitEthernet3b/0/1  2 down 9000/0/0/0
TenGigabitEthernet3b/0/2  3 down 9000/0/0/0
TenGigabitEthernet3b/0/3  4 down 9000/0/0/0
local00 down  0/0/0/0
vpp# show log
2020/01/08 10:35:38:001 warn   dpdk   Unsupported PCI device 
0x14e4:0x165f found at PCI address :18:00.0
2020/01/08 10:35:38:017 warn   dpdk   Unsupported PCI device 
0x14e4:0x165f found at PCI address :18:00.1
2020/01/08 10:35:38:032 warn   dpdk   Unsupported PCI device 
0x14e4:0x165f found at PCI address :19:00.0
2020/01/08 10:35:38:076 warn   dpdk   Unsupported PCI device 
0x14e4:0x165f found at PCI address :19:00.1
2020/01/08 10:35:39:447 warn   dpdk   EAL init args: -c 2 -n 4 
--in-memory --file-prefix vpp --master-lcore 1
2020/01/08 10:35:40:682 notice dpdk   EAL: Detected 32 lcore(s)
2020/01/08 10:35:40:682 notice dpdk   EAL: Detected 2 NUMA nodes
2020/01/08 10:35:40:682 notice dpdk   EAL: Some devices want iova as va 
but pa will be used because.. EAL: vfio-noiommu mode configured
2020/01/08 10:35:40:682 notice dpdk   EAL: No available hugepages 
reported in hugepages-1048576kB
2020/01/08 10:35:40:682 notice dpdk   EAL: No free hugepages reported 
in hugepages-1048576kB
2020/01/08 10:35:40:682 notice dpdk   EAL: No free hugepages reported 
in hugepages-1048576kB
2020/01/08 10:35:40:682 notice dpdk   EAL: No available hugepages 
reported in hugepages-1048576kB
2020/01/08 10:35:40:682 notice dpdk   EAL: Probing VFIO support...
2020/01/08 10:35:40:682 notice dpdk   EAL: VFIO support initialized
2020/01/08 10:35:40:682 notice dpdk   EAL: WARNING! Base virtual 
address hint (0xa80001000 != 0x7f4f4000) not 

Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

2020-01-06 Thread steven luong via Lists.Fd.Io
It is likely a resource problem – when VPP requests more descriptors and/or
TX/RX queues for the NIC than the firmware has, DPDK fails to initialize the
interface. There are a few ways to figure out what the problem is.

  1.  Bypass VPP and run testpmd with debug options turned on, something like
this

--log-level=lib.eal,debug --log-level=pmd,debug

  2.  Reduce your RX/TX queues and descriptors to the minimum for the
interface. What do you have in the dpdk section for the NIC, anyway?
  3.  Run VPP with bare minimum config.

unix { interactive }

I would start with (3) since it is the easiest. I hope DPDK will discover the
NIC in show hardware if the interface is already bound to DPDK. If that is the
case, you can proceed to check and see if your startup.conf oversubscribes the
descriptors and/or TX/RX queues. If (3) still fails, try (1). It is a bit more
work. I am sure you’ll figure out how to compile the testpmd app and run it.
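
For (2), a minimal sketch of a dialed-down dpdk section could look like the
following (the PCI address is a placeholder; substitute your NIC and raise the
numbers once the interface probes successfully):

dpdk {
  dev 0000:3b:00.0 {
    num-rx-queues 1
    num-tx-queues 1
    num-rx-desc 512
    num-tx-desc 512
  }
}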

Steven

From:  on behalf of Gencli Liu <18600640...@163.com>
Date: Monday, January 6, 2020 at 7:22 PM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

Hi Ezpeer :
Thank you for your advice.
I did a test to update X710's driver(i40e) and X710's Firmware:
i40e's new version : 2.10.19.30
Firmware's new version : 6.80  (Intel deleted the NVM 7.0 and 7.1 version
files because they introduced some serious errors).
(I will try again when Intel republishes NVM 7.1).
The UIO driver used is vfio-pci.
Even so, the NIC still has no driver when using "vppctl show pci".
I also switched the UIO driver to uio_pci_generic by modifying vpp.service and
startup.conf; the result is a little different but still not OK.

This is my environment:
[root@localhost i40e]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

[root@localhost i40e]# uname -a
Linux localhost.localdomain 3.10.0-1062.4.1.el7.x86_64 #1 SMP Fri Oct 18 
17:15:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost i40e]# uname -r
3.10.0-1062.4.1.el7.x86_64

[root@localhost i40e]# modinfo i40e
filename:   
/lib/modules/3.10.0-1062.4.1.el7.x86_64/updates/drivers/net/ethernet/intel/i40e/i40e.ko
version:2.10.19.30
license:GPL
description:Intel(R) 40-10 Gigabit Ethernet Connection Network Driver
author: Intel Corporation, 
retpoline:  Y
rhelversion:7.7
srcversion: 9EB781BDF574D047F098566
alias:  pci:v8086d158Bsv*sd*bc*sc*i*
alias:  pci:v8086d158Asv*sd*bc*sc*i*
alias:  pci:v8086d37D3sv*sd*bc*sc*i*
alias:  pci:v8086d37D2sv*sd*bc*sc*i*
alias:  pci:v8086d37D1sv*sd*bc*sc*i*
alias:  pci:v8086d37D0sv*sd*bc*sc*i*
alias:  pci:v8086d37CFsv*sd*bc*sc*i*
alias:  pci:v8086d37CEsv*sd*bc*sc*i*
alias:  pci:v8086d0D58sv*sd*bc*sc*i*
alias:  pci:v8086d0CF8sv*sd*bc*sc*i*
alias:  pci:v8086d1588sv*sd*bc*sc*i*
alias:  pci:v8086d1587sv*sd*bc*sc*i*
alias:  pci:v8086d104Fsv*sd*bc*sc*i*
alias:  pci:v8086d104Esv*sd*bc*sc*i*
alias:  pci:v8086d15FFsv*sd*bc*sc*i*
alias:  pci:v8086d1589sv*sd*bc*sc*i*
alias:  pci:v8086d1586sv*sd*bc*sc*i*
alias:  pci:v8086d1585sv*sd*bc*sc*i*
alias:  pci:v8086d1584sv*sd*bc*sc*i*
alias:  pci:v8086d1583sv*sd*bc*sc*i*
alias:  pci:v8086d1581sv*sd*bc*sc*i*
alias:  pci:v8086d1580sv*sd*bc*sc*i*
alias:  pci:v8086d1574sv*sd*bc*sc*i*
alias:  pci:v8086d1572sv*sd*bc*sc*i*
depends:ptp
vermagic:   3.10.0-1062.4.1.el7.x86_64 SMP mod_unload modversions
parm:   debug:Debug level (0=none,...,16=all) (int)

[root@localhost i40e]# ethtool -i p1p3
driver: i40e
version: 2.10.19.30
firmware-version: 6.80 0x80003c64 1.2007.0
expansion-rom-version:
bus-info: :3b:00.2
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

This is vfio-pci error:
[root@localhost ~]# lsmod | grep vfio
vfio_pci   41412  0
vfio_iommu_type1   22440  0
vfio   32657  3 vfio_iommu_type1,vfio_pci
irqbypass  13503  2 kvm,vfio_pci
[root@localhost ~]#
[root@localhost ~]# dmesg
[   41.670075] VFIO - User Level meta-driver version: 0.3
[   43.380387] i40e :3b:00.0: removed PHC from p1p1
[   43.583958] vfio-pci: probe of :3b:00.0 failed with error -22
[   43.595876] i40e :3b:00.1: removed PHC from p1p2
[   43.811364] vfio-pci: probe of :3b:00.1 failed with error -22

[root@localhost ~]# cat /usr/lib/systemd/system/vpp.service
[Unit]
Description=Vector Packet Processing Process
After=syslog.target network.target auditd.service

[Service]
ExecStartPre=-/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/vpe-api
#ExecStartPre=-/sbin/modprobe uio_pci_generic

Re: [vpp-dev] #vpp #bond How to config bond mode in vpp?

2020-01-03 Thread steven luong via Lists.Fd.Io
DPDK bonding is no longer supported in 19.08. However, you can use VPP native 
bonding to accomplish the same thing.

create bond mode active-backup load-balance l34
set interface state BondEthernet0 up
bond add BondEthernet0 GigabitEthernet1/0/0
bond add BondEthernet0 GigabitEthernet1/0/1

Steven

From:  on behalf of "wei_sky2...@163.com" 

Date: Thursday, January 2, 2020 at 10:14 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] #vpp #bond How to config bond mode in vpp?


[Edited Message Follows]

we use vpp version 19.08
I config bond in VPP startup.conf

dpdk{
...
vdev eth_bond0,mode=1,slave=:01:00.0,slave=:01:00.1,xmit_policy=l34
..
}
But when vpp starts, I run show int:

DBGvpp# show int

  Name   IdxState  MTU (L3/IP4/IP6/MPLS) 
Counter  Count

GigabitEthernet1/0/0  1 down 9000/0/0/0

GigabitEthernet1/0/1  2 down 9000/0/0/0

UnknownEthernet2  3 down 9000/0/0/0

local00 down  0/0/0/0



When I set the state of UnknownEthernet2 up, vpp crashes.
Another question: when I compile VPP, do I need to turn the
RTE_LIBRTE_PMD_BOND (build/external/packages/dpdk.mk) option on?
Thanks!




Re: [vpp-dev] VPP with DPDK vhost ---- VPP with DPDK virtio

2019-08-12 Thread steven luong via Lists.Fd.Io
Using VPP+DPDK virtio to connect with VPP + vhost-user is not actively 
maintained. I got it working a couple of years ago by committing some changes to the
DPDK virtio code. Since then, I’ve not been playing with it anymore. Breakage 
is possible. I could spend a whole week on it to get it working again (maybe). 
However, there is no telling when it will break again. Here is my 
recommendation:
If you are using containers, use memif interface.
If you are not using containers, use VPP native virtio to connect to VPP 
vhost-user.
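
A rough sketch of that second pairing, assuming the guest's virtio-net device
shows up at a placeholder PCI address and is bound to vfio-pci (the socket path
is also just an example):

On the host VPP:
create vhost-user socket /tmp/sock0.sock server
set interface state VirtualEthernet0/0/0 up

In the guest VPP (check the resulting interface name with show interface):
create interface virtio 0000:00:06.0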

Steven

From:  on behalf of Sharon Enoch 
Date: Monday, August 12, 2019 at 11:32 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] VPP with DPDK vhost  VPP with DPDK virtio

Tried the same with the latest VPP version master which has the DPDK 19.05 
version and still faced the same issue..

VPP +  DPDK vhost mode
DBGvpp# show version
vpp v20.01-rc0~20-g6b53fd516 built by root on kickseed at Tue Aug 13 02:12:38 
JST 2019

DBGvpp# show int
  Name   IdxState  MTU (L3/IP4/IP6/MPLS) 
Counter  Count
VhostEthernet01  up  1500/0/0/0
local00 down  0/0/0/0
DBGvpp# show hardware-interfaces
  NameIdx   Link  Hardware
VhostEthernet0 1 up   VhostEthernet0
  Link speed: 10 Gbps
  Ethernet address 56:48:4f:53:54:00
  VhostEthernet
carrier up full duplex mtu 1500
flags: admin-up pmd maybe-multiseg
rx: queues 1 (max 1), desc 1024 (min 0 max 65535 align 1)
tx: queues 1 (max 1), desc 1024 (min 0 max 65535 align 1)
max rx packet len: -1
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip
rx offload active: none
tx offload avail:  vlan-insert multi-segs
tx offload active: multi-segs
rss avail: none
rss active:none
tx burst function: (nil)
rx burst function: (nil)

local0 0down  local0
  Link speed: unknown
  local




VPP + DPDK VIRTIO
DBGvpp# show int
  Name   IdxState  MTU (L3/IP4/IP6/MPLS) 
Counter  Count
VirtioUser0   1  up  1500/0/0/0 tx packets  
   5
tx bytes
 550
local00 down  0/0/0/0
DBGvpp# show hardware-interfaces
  NameIdx   Link  Hardware
VirtioUser01 up   VirtioUser0
  Link speed: 10 Gbps
  Ethernet address e2:84:b1:74:4d:f2
  Virtio User
carrier up full duplex mtu 1500
flags: admin-up pmd maybe-multiseg
rx: queues 1 (max 1), desc 1024 (min 0 max 65535 align 1)
tx: queues 1 (max 1), desc 1024 (min 0 max 65535 align 1)
max rx packet len: 9728
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip udp-cksum tcp-cksum tcp-lro jumbo-frame
rx offload active: jumbo-frame
tx offload avail:  vlan-insert udp-cksum tcp-cksum tcp-tso multi-segs
tx offload active: multi-segs
rss avail: none
rss active:none
tx burst function: virtio_xmit_pkts_inorder
rx burst function: virtio_recv_pkts_inorder

tx frames ok   5
tx bytes ok  550
local0 0down  local0
  Link speed: unknown
  local


I noticed the following new logs though in the new version in the show log 
output

The below on the virtio side
DBGvpp# show log
2019/08/13 03:10:43:956 errperfmonNo table for cpuid 306e4
2019/08/13 03:10:43:956 errperfmon  model 3e, stepping 4
2019/08/13 03:10:43:984 warn   dpdk   EAL init args: -c 2 -n 4 
--in-memory --no-pci --log-level 8 --huge-dir /dev/hugepages --vdev 
virtio_user0,path=/opt/sock/sock2.sock --file-prefix vpp --master-lcore 1
2019/08/13 03:10:46:666 notice dpdk   DPDK drivers found 1 ports...
2019/08/13 03:10:46:666 warn   dpdk   unsupported rx offloads requested 
on port 0: scatter
2019/08/13 03:10:46:669 notice dpdk   EAL: Detected 24 lcore(s)
2019/08/13 03:10:46:669 notice dpdk   EAL: Detected 2 NUMA nodes
2019/08/13 03:10:46:669 notice dpdk   EAL: Probing VFIO support...
2019/08/13 03:10:46:669 notice dpdk   EAL: WARNING! Base virtual 
address hint (0xa80001000 != 0x7f7c) not respected!
2019/08/13 03:10:46:669 notice dpdk   EAL:This may cause issues 
with mapping memory into secondary processes
2019/08/13 03:10:46:669 notice dpdk   EAL: WARNING! Base virtual 
address hint (0xc2000 != 0x7f73c000) not respected!
2019/08/13 03:10:46:669 notice dpdk   EAL:This may cause issues 
with mapping memory into secondary 

Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside host using vhost-virtio-user interfaces

2019-07-29 Thread steven luong via Lists.Fd.Io
create interface virtio 

Or just use memif interface. That is what it is built for.
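
A minimal memif sketch, assuming both VPP instances can reach a shared socket
path (paths and addresses below are placeholders):

On the host VPP:
create memif socket id 1 filename /run/vpp/memif-demo.sock
create interface memif socket-id 1 id 0 master
set interface state memif1/0 up
set interface ip address memif1/0 192.168.50.1/24

In the container VPP (same socket, slave role):
create memif socket id 1 filename /run/vpp/memif-demo.sock
create interface memif socket-id 1 id 0 slave
set interface state memif1/0 up
set interface ip address memif1/0 192.168.50.2/24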

Steven

From:  on behalf of "mojtaba.eshghi" 

Date: Monday, July 29, 2019 at 5:50 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside 
host using vhost-virtio-user interfaces


Thanks Steven

Yes, I have 1G hugepages on the container. Can you tell me how to use native
vpp virtio? How can I use it to connect the container to the host? I don't
want to use a NIC.

Thanks




Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside host using vhost-virtio-user interfaces

2019-07-28 Thread steven luong via Lists.Fd.Io
The debug CLI was replaced by
set logging class vhost-user level debug
Use show log to view the messages.

Did you configure 1GB huge pages on the container? It used to be that dpdk
virtio required 1GB huge pages. Not sure if that is still the case nowadays.
If you use VPP 19.04 or later, you could try VPP native virtio instead.

Steven

From:  on behalf of "mojtaba.eshghi" 

Date: Sunday, July 28, 2019 at 1:56 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside 
host using vhost-virtio-user interfaces


Hi Guys,

I'm trying to connect a vpp inside a host to a vpp which is running inside a 
lxc container on a linux machine. I am going to do that via vhost-virtio-user.
The problem is that after I create the vhost and virtio-user, the output of the 
"show vhost-user interfaces" command on the host vpp is as below:

[inline image attachment: output of "show vhost-user interfaces" on the host VPP]

(it seems that a handshake is not done in the right way, the "memory regions 
(total 0)" part...).
* VPP 19.x does not support "debug vhost on" cli command.
I will put my configuration files one by one here.

Here is my startup.conf for host vpp:

[inline image attachment: startup.conf of the host VPP]


This one is startup.conf of the container VPP:
[inline image attachment: startup.conf of the container VPP]

These are the commands issued in the host vpp:
create vhost-user socket /etc/vpp/sock3.sock server
set int state VirtualEthernet0/0/0 up

Both virtio-user0 interface in container vpp and the virtualethernet0/0/0 
inside the host are created successfully. When I check the "htop" utility, it 
seems that after creation of these interfaces both of vpps begin to poll (CPU 
usage 100% on two cores).
ANY HELP WOULD BE APPRECIATED

Mojtaba,
Thanks,


Re: [vpp-dev] Many "tx packet drops (no available descriptors)" #vpp

2019-07-11 Thread steven luong via Lists.Fd.Io
Packet drops due to “no available descriptors” for vhost-user interface is 
extremely likely when doing performance test with qemu’s default vring queue 
size. You need to specify the vring queue size of 1024, default is 256, when 
you bring up the VM. The queue size can be specified either via XML file if 
using virsh or via the qemu command line if launching qemu directly. You’ll
need a more recent qemu version for the queue size option to work. No additional
option is needed in VPP.
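
As a sketch (attribute support depends on your libvirt and qemu versions; the
socket path and netdev id are placeholders), the vhost-user interface in the
domain XML can carry the queue size on its driver element:

  <interface type='vhostuser'>
    <source type='unix' path='/tmp/sock0.sock' mode='client'/>
    <model type='virtio'/>
    <driver rx_queue_size='1024' tx_queue_size='1024'/>
  </interface>

or, on a raw qemu command line, something like:

  -device virtio-net-pci,netdev=net0,rx_queue_size=1024,tx_queue_size=1024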

Steven

From:  on behalf of "amir...@rad.com" 
Date: Monday, July 1, 2019 at 8:15 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Many "tx packet drops (no available descriptors)" #vpp

Hi All,

I'm using VPP 19.02.1 with DPDK 18.11.0.
When running traffic through VM I can see many drop packets due to "tx packet 
drops (no available descriptors)" on the VirtualEthernet ports.
I've tried to change some of the DPDK parameters with no success.

Which parameters might cause such packets drop ?
vpp# show errors
   CountNode  Reason
  39937373l2-output   L2 output packets
  39937476l2-learnL2 learn packets
 2l2-learnL2 learn misses
 7l2-learnL2 learn hit updates
  39937476l2-inputL2 input packets
   105l2-floodL2 flood packets
   103l2-floodL2 replication complete
 25689 VirtualEthernet0/0/1-txtx packet drops (no available 
descriptors)
 36406 VirtualEthernet0/0/2-txtx packet drops (no available 
descriptors)

In addition I've tried to figure out how much memory the VPP+DPDK is using but 
didn't see any change on the Huge pages usage.

Thanks in advance,
Amir.


Re: [vpp-dev] some questions about LACP(link bonding mode 4)

2019-06-13 Thread steven luong via Lists.Fd.Io
Yes on both counts.

From:  on behalf of Zhiyong Yang 
Date: Wednesday, June 12, 2019 at 10:33 PM
To: "Yang, Zhiyong" , "Steven Luong (sluong)" 
, "vpp-dev@lists.fd.io" , "Carter, 
Thomas N" 
Cc: "Kinsella, Ray" 
Subject: Re: [vpp-dev] some questions about LACP(link bonding mode 4)

I mean,  Is there no limit on the number of active-slaves as well?

From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Zhiyong Yang
Sent: Thursday, June 13, 2019 1:30 PM
To: Steven Luong (sluong) ; vpp-dev@lists.fd.io; Carter, 
Thomas N 
Cc: Kinsella, Ray 
Subject: Re: [vpp-dev] some questions about LACP(link bonding mode 4)

Thanks a lot, Steven.

Does it mean that all linkings(active-slaves in the same bonding group) of 
negotiating successfully can join loading balance for TX in VPP , right?

Thanks
Zhiyong
From: Steven Luong (sluong) [mailto:slu...@cisco.com]
Sent: Thursday, June 13, 2019 12:10 PM
To: Yang, Zhiyong mailto:zhiyong.y...@intel.com>>; 
vpp-dev@lists.fd.io; Carter, Thomas N 
mailto:thomas.car...@charter.com>>
Cc: Kinsella, Ray mailto:ray.kinse...@intel.com>>
Subject: Re: some questions about LACP(link bonding mode 4)

There is no limit on the number of slaves in a bonding group in VPP’s 
implementation. I don’t know/remember how to select one port over another from 
the spec without reading it carefully again.

Steven

From: "Yang, Zhiyong" mailto:zhiyong.y...@intel.com>>
Date: Tuesday, June 11, 2019 at 11:09 PM
To: "vpp-dev@lists.fd.io" 
mailto:vpp-dev@lists.fd.io>>, "Steven Luong (sluong)" 
mailto:slu...@cisco.com>>, "Carter, Thomas N" 
mailto:thomas.car...@charter.com>>
Cc: "Kinsella, Ray" mailto:ray.kinse...@intel.com>>
Subject: some questions about LACP(link bonding mode 4)

Hi Steven and VPP guys,

I’m studying the lacp implementation. and want to know if it is 
possible that Numa is considered in LACP active port selection. As we all know, 
if  slave with local numa can be preferred to help improve throughput.
One question is that current code seems no linking number limit for 
active-slave, right ?
Does it mean if we can add any number of linkings to link aggregation group? If 
two sides (actor and partner) are negotiated  well for linking?  I also don’t 
see that how to selection policy in group. What do I miss?
Port_priority is set to 0xff by default and don’t change in any case.
If numa is considered, do we use it when negotiation happens, and make slave 
with local numa selected in priority?

Thanks
Zhiyong





Re: [vpp-dev] some questions about LACP(link bonding mode 4)

2019-06-12 Thread steven luong via Lists.Fd.Io
There is no limit on the number of slaves in a bonding group in VPP’s 
implementation. I don’t know/remember how to select one port over another from 
the spec without reading it carefully again.

Steven

From: "Yang, Zhiyong" 
Date: Tuesday, June 11, 2019 at 11:09 PM
To: "vpp-dev@lists.fd.io" , "Steven Luong (sluong)" 
, "Carter, Thomas N" 
Cc: "Kinsella, Ray" 
Subject: some questions about LACP(link bonding mode 4)

Hi Steven and VPP guys,

I’m studying the lacp implementation. and want to know if it is 
possible that Numa is considered in LACP active port selection. As we all know, 
if  slave with local numa can be preferred to help improve throughput.
One question is that current code seems no linking number limit for 
active-slave, right ?
Does it mean if we can add any number of linkings to link aggregation group? If 
two sides (actor and partner) are negotiated  well for linking?  I also don’t 
see that how to selection policy in group. What do I miss?
Port_priority is set to 0xff by default and don’t change in any case.
If numa is considered, do we use it when negotiation happens, and make slave 
with local numa selected in priority?

Thanks
Zhiyong





Re: [vpp-dev] vpp received signal SIGSEGV, PC 0x7fe54936bc40, faulting address 0x0

2019-05-29 Thread steven luong via Lists.Fd.Io
Clueless with useless tracebacks. Please hook up gdb and get the complete 
human-readable backtrace.
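
Something along these lines, assuming a default package install (paths may
differ on your system):

$ gdb /usr/bin/vpp
(gdb) run -c /etc/vpp/startup.conf
... reproduce the crash ...
(gdb) bt full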

Steven

From:  on behalf of Mostafa Salari 
Date: Wednesday, May 29, 2019 at 10:24 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] vpp received signal SIGSEGV, PC 0x7fe54936bc40, faulting 
address 0x0

When i install a vpp stable/1810 on an ubuntu server 1604 successfully, but, 
vpp service does not start. I executed `/usr/bin/vpp -c /etc/vpp/startup.conf` 
command to see the error:

...
load_one_plugin:189: Loaded plugin: tlsopenssl_plugin.so (openssl based TLS 
Engine)
load_one_plugin:117: Plugin disabled (default): unittest_plugin.so
load_one_plugin:189: Loaded plugin: vmxnet3_plugin.so (VMWare Vmxnet3 Device 
Plugin)
/usr/bin/vpp[1779]: received signal SIGSEGV, PC 0x7fe54936bc40, faulting 
address 0x0
/usr/bin/vpp[1779]: #0  0x7fe5493910fc 0x7fe5493910fc
/usr/bin/vpp[1779]: #1  0x7fe548efa390 0x7fe548efa390
/usr/bin/vpp[1779]: #2  0x7fe54936bc40 0x7fe54936bc40
/usr/bin/vpp[1779]: #3  0x7fe54936d793 vlib_register_all_static_nodes + 0x33
/usr/bin/vpp[1779]: #4  0x7fe549368952 vlib_main + 0x182
/usr/bin/vpp[1779]: #5  0x7fe549390643 0x7fe549390643
/usr/bin/vpp[1779]: #6  0x7fe548c897cc 0x7fe548c897cc
Aborted

What is wrong with Vmxnet plugin?

Note: When I ignore the startup.conf file and run ` /usr/bin/vpp` the SIGSEGV 
line is not appeared:

root@ubuntu:~/mostafa/vpp/build-root# /usr/bin/vpp
vlib_plugin_early_init:361: plugin path /usr/lib/vpp_plugins
load_one_plugin:189: Loaded plugin: abf_plugin.so (ACL based Forwarding)
load_one_plugin:189: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:189: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual 
Function (AVF) Device Plugin)
load_one_plugin:191: Loaded plugin: cdp_plugin.so
load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))
load_one_plugin:189: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:189: Loaded plugin: gbp_plugin.so (Group Based Policy)
load_one_plugin:189: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:189: Loaded plugin: igmp_plugin.so (IGMP messaging)
load_one_plugin:189: Loaded plugin: ila_plugin.so (Identifier-locator 
addressing for IPv6)
load_one_plugin:189: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
load_one_plugin:189: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:189: Loaded plugin: lacp_plugin.so (Link Aggregation Control 
Protocol)
load_one_plugin:189: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:189: Loaded plugin: mactime_plugin.so (Time-based MAC 
source-address filter)
load_one_plugin:189: Loaded plugin: map_plugin.so (Mapping of address and port 
(MAP))
load_one_plugin:189: Loaded plugin: memif_plugin.so (Packet Memory Interface 
(experimental))
load_one_plugin:189: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:189: Loaded plugin: nsh_plugin.so (Network Service Header)
load_one_plugin:189: Loaded plugin: nsim_plugin.so (network delay simulator 
plugin)
load_one_plugin:189: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:189: Loaded plugin: router.so (router)
load_one_plugin:189: Loaded plugin: srv6ad_plugin.so (Dynamic SRv6 proxy)
load_one_plugin:189: Loaded plugin: srv6am_plugin.so (Masquerading SRv6 proxy)
load_one_plugin:189: Loaded plugin: srv6as_plugin.so (Static SRv6 proxy)
load_one_plugin:189: Loaded plugin: stn_plugin.so (VPP Steals the NIC for 
Container integration)
load_one_plugin:189: Loaded plugin: svs_plugin.so (Source VRF Select)
load_one_plugin:189: Loaded plugin: tlsmbedtls_plugin.so (mbedtls based TLS 
Engine)
load_one_plugin:189: Loaded plugin: tlsopenssl_plugin.so (openssl based TLS 
Engine)
load_one_plugin:117: Plugin disabled (default): unittest_plugin.so
load_one_plugin:189: Loaded plugin: vmxnet3_plugin.so (VMWare Vmxnet3 Device 
Plugin)


Re: [vpp-dev] Vpp 1904 does not recognize vmxnet3 interfaces

2019-05-12 Thread steven luong via Lists.Fd.Io
Mostafa,

Vmxnet3 NICs are in the blacklist by default. Please specify the vmxnet3 PCI
addresses in the dpdk section of the startup.conf.
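
Something like this in startup.conf (the PCI addresses are placeholders; take
the real ones from dpdk-devbind.py --status or lspci):

dpdk {
  dev 0000:0b:00.0
  dev 0000:13:00.0
}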

Steven

From:  on behalf of Mostafa Salari 
Date: Sunday, May 12, 2019 at 4:52 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Vpp 1904 does not recognize vmxnet3 interfaces

Hi
I installed version stable/19.04 successfully. Although binding the VM's
interfaces (drv=vmxnet3) to DPDK with dpdk-devbind.py works with no error, I
cannot see the NICs when executing the vppctl show interface command.




Re: [vpp-dev] Bond interface won't respond ping #vnet #vpp

2019-02-23 Thread steven luong via Lists.Fd.Io
Dear Anthony,

Please use show bond to check whether the bond interface has a positive
active slaves count. Since you didn’t configure LACP on VM2, I believe you’ve
not gotten any active slave in VPP. Your solution is to configure a bond
interface on VM2 using mode 4 (I believe) if it is running Linux (see the
sketch after the show bond output below).

vpp# sh bond
sh bond
interface name   sw_if_index  mode  load balance  active slaves  slaves
BondEthernet03lacp  l22  2
vpp#
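
On VM2, if it is running Linux, a minimal mode-4 (802.3ad) bond with iproute2
could be sketched like this (eth1 and the address are taken from your
description; adjust as needed):

ip link add bond0 type bond mode 802.3ad
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 10.10.10.81/24 dev bond0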

Alternately, you can replace lacp from VPP with xor as shown below when you 
create the bond interface if you don’t want to bother with lacp on VM2.

create bond mode xor

Steven

From:  on behalf of Anthony Linz 
Date: Friday, February 22, 2019 at 11:11 PM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Bond interface won't respond ping #vnet #vpp

Dear Steven
Thank you very much for your quick and nice response.
I suppose the beginning of my question might have misled you.
Let me clarify my test scenario:

I have a Virtual Machine (VM1) which runs VPP and has an interface called
GigabitEthernet0/8/0. I have another Virtual Machine (VM2) whose interface
(eth1) is in the same kernel bridge as GigabitEthernet0/8/0 (and does not run
VPP), so I am able to ping GigabitEthernet0/8/0 easily (from VM2).

After creating a Bond interface (in VM1) with LACP mode (and also setting the
state up and giving it an IP address) and binding GigabitEthernet0/8/0 to the
Bond interface, I can ping neither GigabitEthernet0/8/0 nor the Bond interface
(from VM2).

I traced packets and it seems the Bond interface is able to receive ARP
packets (and learn them), but it is not able to respond to them. So I went
ahead and set static ARP entries (for VM2) so that it started to send ICMP
packets, and again the Bond interface (in VM1) was able to receive them but
could not respond.

So my question really is:

1) Why is the Bond interface not able to respond to either ARP or ICMP
packets?
2) Is my config incomplete? (or maybe wrong)
3) Is it logical to ping a Bond Interface?

Thank you again for your response,
--
Anthony


Re: [vpp-dev] Bond interface won't respond ping #vnet #vpp

2019-02-19 Thread steven luong via Lists.Fd.Io
Anthony,

L3 address should be configured on the bond interface, not the slave interface. 
If there is a switch in between VPP’s physical NICs and the VM, the switch 
should be configured to do the bonding, not the remote VM. Use show bond to 
check the bundle is created successfully between VPP and the remote partner.

vpp# sh bond
sh bond
interface name   sw_if_index  mode  load balance  active slaves  slaves
BondEthernet03lacp  l22  2
vpp# show bond details
show bond details
BondEthernet0
  mode: lacp
  load balance: l2
  number of active slaves: 2
TenGigabitEthernet8/0/0
TenGigabitEthernet8/0/1
  number of slaves: 2
TenGigabitEthernet8/0/0
TenGigabitEthernet8/0/1
  device instance: 0
  interface id: 0
  sw_if_index: 3
  hw_if_index: 3
vpp#
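
For example, a sketch of that ordering on the VPP side, with the L3 address on
the bond rather than the slave (interface name and address follow the example
in this thread):

create bond mode lacp load-balance l2
bond add BondEthernet0 GigabitEthernet0/x/0
set interface state BondEthernet0 up
set interface ip address BondEthernet0 10.10.10.80/24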

Steven

From:  on behalf of Anthony Linz 
Date: Tuesday, February 19, 2019 at 3:11 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Bond interface won't respond ping #vnet #vpp

Dear all
I was working on some LACP testing in VPP 19.01.
I configured VPP like this:

create bond mode lacp
set interface state BondEthernet0 up
bond add BondEthernet0 GigabitEthernet0/x/0

The 'GigabitEthernet0/x/0' has an IP address like '10.10.10.80/24', and before
adding it as a slave to the Bond interface, pinging the interface from another
Virtual Machine with IP address '10.10.10.81/24' was all OK (both interfaces
are on the same switch).
After binding the GigabitEthernet to the Bond, the slave interface stopped
responding. The trace looks like this:

00:02:07:006939: dpdk-input
  GigabitEthernet0/8/0 rx queue 0
  buffer 0x494a: current data 0, length 98, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
 ext-hdr-valid
 l4-cksum-computed l4-cksum-correct l2-hdr-offset 0
  PKT MBUF: port 0, nb_segs 1, pkt_len 98
buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x22b25300
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 52:54:00:16:b8:43 -> 52:54:00:d8:95:e0
  ICMP: 10.10.10.81 -> 10.10.10.80
tos 0x00, ttl 64, length 84, checksum 0x168f
fragment id 0xfb65, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xb04d
00:02:07:010073: bond-input
  src 52:54:00:16:b8:43, dst 52:54:00:d8:95:e0, GigabitEthernet0/8/0 -> 
BondEthernet0
00:02:07:010103: ethernet-input
  IP4: 52:54:00:16:b8:43 -> 52:54:00:d8:95:e0
00:02:07:010129: ip4-input
  ICMP: 10.10.10.81 -> 10.10.10.80
tos 0x00, ttl 64, length 84, checksum 0x168f
fragment id 0xfb65, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xb04d
00:02:07:010139: ip4-not-enabled
ICMP: 10.10.10.81 -> 10.10.10.80
  tos 0x00, ttl 64, length 84, checksum 0x168f
  fragment id 0xfb65, flags DONT_FRAGMENT
ICMP echo_request checksum 0xb04d
00:02:07:010154: error-drop
  ethernet-input: no error

I thought the Bond interface needs an IP, so I tried giving it
'60.60.60.60/24', changed the Virtual Machine's IP to '60.60.60.81/24', and
tried to ping the Bond interface this time.
I wasn't able to get ICMP packets, so I tried to manually set ARPs in both VPP
and the VM.
The resulting ping trace looks like this:
00:27:21:247255: dpdk-input
  GigabitEthernet0/8/0 rx queue 0
  buffer 0x115a5: current data 0, length 98, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x1
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct l2-hdr-offset 0
  PKT MBUF: port 0, nb_segs 1, pkt_len 98
buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x22e569c0
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 52:54:00:16:b8:43 -> 52:54:00:d8:95:e0
  ICMP: 60.60.60.81 -> 60.60.60.60
tos 0x00, ttl 64, length 84, checksum 0xeee1
fragment id 0x5ac2, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x7fb4
00:27:21:247283: bond-input
  src 52:54:00:16:b8:43, dst 52:54:00:d8:95:e0, GigabitEthernet0/8/0 -> 
BondEthernet0
00:27:21:247286: ethernet-input
  IP4: 52:54:00:16:b8:43 -> 52:54:00:d8:95:e0
00:27:21:247287: ip4-input
  ICMP: 60.60.60.81 -> 60.60.60.60
tos 0x00, ttl 64, length 84, checksum 0xeee1
fragment id 0x5ac2, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x7fb4
00:27:21:247288: ip4-lookup
  fib 0 dpo-idx 4 flow hash: 0x
  ICMP: 60.60.60.81 -> 60.60.60.60
tos 0x00, ttl 64, length 84, checksum 0xeee1
fragment id 0x5ac2, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x7fb4
00:27:21:247290: ip4-local
ICMP: 60.60.60.81 -> 60.60.60.60
  tos 0x00, ttl 64, length 84, checksum 0xeee1
  fragment id 0x5ac2, flags DONT_FRAGMENT
ICMP echo_request checksum 0x7fb4
00:27:21:247290: ip4-icmp-input
  ICMP: 60.60.60.81 -> 60.60.60.60
tos 0x00, ttl 64, length 84, checksum 0xeee1
fragment id 0x5ac2, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x7fb4
00:27:21:247292: ip4-icmp-echo-request
  ICMP: 60.60.60.81 -> 

Re: [vpp-dev] Problem switching a bonded interface from L2 to L3 mode

2018-10-25 Thread steven luong via Lists.Fd.Io
members to ethernet-input, but when I switch it back to l3, all the members
don't redirect to bond-input.


saint_...@aliyun.com

From: steven luong via Lists.Fd.Io<mailto:sluong=cisco@lists.fd.io>
Date: 2018-10-25 12:06
To: saint_...@aliyun.com<mailto:saint_...@aliyun.com>; John Lo 
(loj)<mailto:l...@cisco.com>
CC: vpp-dev<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Problem switching a bonded interface from L2 to L3 mode
Are you using VPP native bonding driver or DPDK bonding driver? How do you 
configure the bonding interface? Please include the configuration and process 
to recreate the problem.

Steven

From:  on behalf of "saint_sun 孙 via Lists.Fd.Io" 

Reply-To: "saint_...@aliyun.com" 
Date: Wednesday, October 24, 2018 at 8:07 PM
To: "John Lo (loj)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Problem switching a bonded interface from L2 to L3 mode

OK, I forgot to click reply-all. Whoever is familiar with the problem I
mentioned below, please tell me. Thanks!






2018年10月25日 星期四 +0800 10:32 发件人 John Lo (loj) :

Please include vpp-dev alias on any questions about VPP, instead of unicast an 
individual only. Then whoever is familiar with the area you are asking about 
may respond.  Does anyone know about the potential problem of switching between 
L2 and L3 modes on a bonded interface described in this email (I did change the 
email subject accordingly)?   -John



From: saint_sun 孙 mailto:saint_...@aliyun.com>>
Sent: Wednesday, October 24, 2018 8:52 PM
To: John Lo (loj) mailto:l...@cisco.com>>
Subject: Re: RE: RE: [vpp-dev]vlan interface support?



I am very grateful for your help.

And when I tested the VLAN, I may have found a bug: if I switch the mode of the
bonding interface to L2 and then switch back to L3, the bonding interface
cannot work as before.

I have found the offending code in the mode-switch function of the bonding
device: when the bonding interface is set to l2 mode, all of its members are
also set to l2, but when the bonding interface is switched back, the members
do not recover to l3.



One last doubt: when I configure an IP address on an interface and then ping
that address from VPP, it fails. Why? Should I do any other settings?




2018年10月15日 星期一 +0800 22:20 发件人 John Lo (loj) 
mailto:l...@cisco.com>>:

If there is a BVI in a BD with sub-interfaces in the same BD which get packets 
with VLAN tags, it is best to configure a tag-rewrite operation on the 
sub-interfaces to pop their VLAN tags.  Then all packets are forwarded in BD 
without VLAN tags.  The CLI is “set interface l2 tag-rewrite <interface>
pop 1” if the sub-interface has one VLAN tag. –John
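
Putting those pieces together, a sketch for a VLAN-200 sub-interface in bridge
domain 200 could look like this (the parent interface name is hypothetical; the
vlan/bd numbers follow the vlan 200 / bd 200 scenario discussed in this thread):

create sub-interfaces GigabitEthernet0/8/0 200
set interface l2 bridge GigabitEthernet0/8/0.200 200
set interface l2 tag-rewrite GigabitEthernet0/8/0.200 pop 1
set interface state GigabitEthernet0/8/0.200 up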



From: saint_sun 孙 mailto:saint_...@aliyun.com>>
Sent: Monday, October 15, 2018 2:42 AM
To: John Lo (loj) mailto:l...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: RE: [vpp-dev]vlan interface support?



I am very grateful to you for your advice!

I have tested it, but there is something wrong: when I receive an ARP or ICMP
packet from an l2 subif that joins bd 200 and encapsulates vlan 200, the reply
packet sent from the subif does not have the vlan tag 200. Are there any more
configurations I should set?




可用于iOS的myMail发送


2018年10月14日 星期日 +0800 04:58 发件人 l...@cisco.com<mailto:l...@cisco.com> 
mailto:l...@cisco.com>>:

The equivalent of VLAN on a switch in VPP is a bridge domain or BD for short.  
One can put interfaces or VLAN sub-interfaces in a BD to form a L2 network 
among all interfaces in it.  One can also create a loopback interface, put it 
in a BD as its BVI (Bridge Virtual Interface) and assign IP addresses to it.  
Then packet can be IP forwarded into a BD through its BVI.



Following is the VPP CLI sequence to create a loopback (resulting in interface 
name loop0), put it in BD 13 as a BVI, and put an IP address on it:



loopback create mac 1a:2b:3c:4d:5e:6f

set interface l2 bridge loop0 13 bvi

set interface state loop0 up

set interface ip address loop0 6.0.0.250/16



Regards,

John



From: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io> 
mailto:vpp-dev@lists.fd.io>> On Behalf Of saint_sun ? via 
Lists.Fd.Io
Sent: Friday, October 12, 2018 3:52 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: [vpp-dev]vlan interface support?



I have a question:

Does vpp have a function like the configuration example:

interface f0/1

switchport access vlan 10



Interface vlan 10

ip address 10.0.0.1 255.255.255.0



If vpp has the function, where can I find the command and the source code?



Another question, does vpp support superVLAN?



Anyone who knows, please tell me; I appreciate your reply very much!




Re: [vpp-dev] Problem switching a bonded interface from L2 to L3 mode

2018-10-24 Thread steven luong via Lists.Fd.Io
Are you using VPP native bonding driver or DPDK bonding driver? How do you 
configure the bonding interface? Please include the configuration and process 
to recreate the problem.

Steven

From:  on behalf of "saint_sun 孙 via Lists.Fd.Io" 

Reply-To: "saint_...@aliyun.com" 
Date: Wednesday, October 24, 2018 at 8:07 PM
To: "John Lo (loj)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Problem switching a bonded interface from L2 to L3 mode

OK, I forgot to click reply-all. Whoever is familiar with the problem I
mentioned below, please tell me. Thanks!





2018年10月25日 星期四 +0800 10:32 发件人 John Lo (loj) :

Please include vpp-dev alias on any questions about VPP, instead of unicast an 
individual only. Then whoever is familiar with the area you are asking about 
may respond.  Does anyone know about the potential problem of switching between 
L2 and L3 modes on a bonded interface described in this email (I did change the 
email subject accordingly)?   -John



From: saint_sun 孙 mailto:saint_...@aliyun.com>>
Sent: Wednesday, October 24, 2018 8:52 PM
To: John Lo (loj) mailto:l...@cisco.com>>
Subject: Re: RE: RE: [vpp-dev]vlan interface support?



I am very grateful for your help.

And when I tested the VLAN, I may have found a bug: if I switch the mode of the
bonding interface to L2 and then switch back to L3, the bonding interface
cannot work as before.

I have found the offending code in the mode-switch function of the bonding
device: when the bonding interface is set to l2 mode, all of its members are
also set to l2, but when the bonding interface is switched back, the members
do not recover to l3.



One last doubt: when I configure an IP address on an interface and then ping
that address from VPP, it fails. Why? Should I do any other settings?




2018年10月15日 星期一 +0800 22:20 发件人 John Lo (loj) 
mailto:l...@cisco.com>>:

If there is a BVI in a BD with sub-interfaces in the same BD which get packets 
with VLAN tags, it is best to configure a tag-rewrite operation on the 
sub-interfaces to pop their VLAN tags.  Then all packets are forwarded in BD 
without VLAN tags.  The CLI is “set interface l2 tag-rewrite <interface>
pop 1” if the sub-interface has one VLAN tag. –John



From: saint_sun 孙 mailto:saint_...@aliyun.com>>
Sent: Monday, October 15, 2018 2:42 AM
To: John Lo (loj) mailto:l...@cisco.com>>
Cc: vpp-dev@lists.fd.io
Subject: Re: RE: [vpp-dev]vlan interface support?



I am very grateful to you for your advice!

I have tested it, but there is something wrong: when I receive an ARP or ICMP
packet from an l2 subif that joins bd 200 and encapsulates vlan 200, the reply
packet sent from the subif does not have the vlan tag 200. Are there any more
configurations I should set?




可用于iOS的myMail发送


2018年10月14日 星期日 +0800 04:58 发件人 l...@cisco.com 
mailto:l...@cisco.com>>:

The equivalent of VLAN on a switch in VPP is a bridge domain or BD for short.  
One can put interfaces or VLAN sub-interfaces in a BD to form a L2 network 
among all interfaces in it.  One can also create a loopback interface, put it 
in a BD as its BVI (Bridge Virtual Interface) and assign IP addresses to it.  
Then packet can be IP forwarded into a BD through its BVI.



Following is the VPP CLI sequence to create a loopback (resulting in interface 
name loop0), put it in BD 13 as a BVI, and put an IP address on it:



loopback create mac 1a:2b:3c:4d:5e:6f

set interface l2 bridge loop0 13 bvi

set interface state loop0 up

set interface ip address loop0 6.0.0.250/16



Regards,

John



From: vpp-dev@lists.fd.io 
mailto:vpp-dev@lists.fd.io>> On Behalf Of saint_sun ? via 
Lists.Fd.Io
Sent: Friday, October 12, 2018 3:52 AM
To: vpp-dev@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev]vlan interface support?



I have a question:

Does vpp have a function like the configuration example:

interface f0/1

switchport access vlan 10



Interface vlan 10

ip address 10.0.0.1 255.255.255.0



If vpp has the function, where can I find the command and the source code?



Another question, does vpp support superVLAN?



Anyone who knows, please tell me; I appreciate your reply very much!




Re: [vpp-dev] "Incompatible UPT version” error when running VPP v18.01 with DPDK v17.11 on VMWare with VMXNET3 interface ,ESXI Version 6.5/6.7

2018-10-01 Thread steven luong via Lists.Fd.Io
DPDK expects UPT version > 0, and ESXi 6.5/6.7 seems to return UPT version 0
when queried, which is not a supported version. I am using ESXi 6.0 and it is
working fine. You could try ESXi 6.0 to see if it helps.

Steven

From:  on behalf of truring truring 
Date: Monday, October 1, 2018 at 10:12 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] "Incompatible UPT version” error when running VPP v18.01 
with DPDK v17.11 on VMWare with VMXNET3 interface ,ESXI Version 6.5/6.7

Hi Everyone,

We're trying to run VPP-18.01 with DPDK plugin in a guest machine running Red 
Hat 7.5. The host is ESXi version 6.5/6.7.

The guest machine has a VMXNET3 interface; I am getting the following error
while running vpp:

PMD: eth_vmxnet3_dev_init():  >>

PMD: eth_vmxnet3_dev_init(): Hardware version : 1

PMD: eth_vmxnet3_dev_init(): Using device version 1



PMD: eth_vmxnet3_dev_init(): UPT hardware version : 0

PMD: eth_vmxnet3_dev_init(): Incompatible UPT version.



Any help to resolve above issue would be greatly appreciated. Thanks!



Regards
Puneet

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10725): https://lists.fd.io/g/vpp-dev/message/10725
Mute This Topic: https://lists.fd.io/mt/26443715/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] [BUG] vhost-user display bug

2018-09-20 Thread steven luong via Lists.Fd.Io
Stephen,

Fix for vhost
https://gerrit.fd.io/r/14920

I'll take care of vmxnet3 later.

Steven

On 9/20/18, 10:57 AM, "vpp-dev@lists.fd.io on behalf of Stephen Hemminger" 
 wrote:


Why is there not a simple link on the FD.io developer web page to report bugs?
The reporting-bugs page talks about the data BUT DOESN'T GIVE THE PROCESS.

If you are using JIRA, why not a vpp-bugs mail alias?



I tried creating a virtio user device and noticed that the device name
displayed is garbage:
DBGvpp# create vhost-user socket /var/run/vpp/sock1.sock server
VirtualEthernet0/0/0
DBGvpp# show vhost-user
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0�?x�D (ifindex 3)
   ^^

Looking at source, vmxnet3 has same bug.


Looks like a bug related to string handling.
Somewhat disgruntled that VPP had to reinvent strings in C.


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10586): https://lists.fd.io/g/vpp-dev/message/10586
Mute This Topic: https://lists.fd.io/mt/25822425/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [**EXTERNAL**] Re: [vpp-dev] tx-drops with vhost-user interface

2018-08-30 Thread steven luong via Lists.Fd.Io
Chandra,

Would you mind sharing what you found? You’ve piqued my curiosity.

Steven

From: "Chandra Mohan, Vijay Mohan" 
Date: Thursday, August 30, 2018 at 10:18 AM
To: "Yichen Wang (yicwang)" , "Steven Luong (sluong)" 
, "vpp-dev@lists.fd.io" 
Subject: Re: [**EXTERNAL**] Re: [vpp-dev] tx-drops with vhost-user interface

This was an issue with my test setup, which is now resolved; I am able to pass traffic. 
Thanks for all the inputs.

-Vijay


From: "Chandra Mohan, Vijay Mohan" 
Date: Tuesday, August 7, 2018 at 10:01 AM
To: "Yichen Wang (yicwang)" , "Steven Luong (sluong)" 
, "vpp-dev@lists.fd.io" 
Subject: Re: [**EXTERNAL**] Re: [vpp-dev] tx-drops with vhost-user interface

Hi Yichen,

It’s a good question. The first thing I checked was whether the interface is ‘up’ in 
the VM, and it is up.

Steven,
Thanks for responding quickly. I am trying to reproduce this issue without 
any changes from my side, just to isolate it. I will update the thread 
soon.

-Vijay

From: "Yichen Wang (yicwang)" 
Date: Monday, August 6, 2018 at 2:17 PM
To: "Steven Luong (sluong)" , "Chandra Mohan, Vijay Mohan" 
, "vpp-dev@lists.fd.io" 
Subject: [**EXTERNAL**] Re: [vpp-dev] tx-drops with vhost-user interface

Hi, Vijay,

Sorry to ask a dumb question, but can you make sure the interface in your VM (either 
Linux kernel or DPDK) is “UP”?

Regards,
Yichen

From:  on behalf of "steven luong via Lists.Fd.Io" 

Reply-To: "Steven Luong (sluong)" 
Date: Monday, August 6, 2018 at 12:10 PM
To: "Chandra Mohan, Vijay Mohan" , "vpp-dev@lists.fd.io" 

Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] tx-drops with vhost-user interface

Vijay,

From the show output, I can’t really tell what your problem is. If you could 
provide additional information about your environment, I could try setting it 
up and see what’s wrong. Things I need from you are exact VPP version, VPP 
configuration, qemu startup command line or the XML startup file if you use 
virsh, and the version of the VM distro.

Steven

From:  on behalf of "Chandra Mohan, Vijay Mohan" 

Date: Monday, August 6, 2018 at 10:31 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] tx-drops with vhost-user interface

Hi,

I am trying to pass traffic with a vhost-user interface and am seeing tx-drops on 
the virtual interface. Here is the setup: I created a bridge domain with a physical 
interface and a vhost-user interface. Physical interface GigabitEthernet5/0/0 
is connected to a traffic generator. As shown below, I am observing drops on 
VirtualEthernet0/0/0.

Following is the config and vhost-user commands o/p:

DBGvpp# show bridge-domain 1 detail
  BD-ID   Index   BSN  Age(min)  Learning  U-Forwrd  UU-Flood  Flooding  
ARP-Term  BVI-Intf
1   1  0 offonononon   off  
 N/A

   Interface   If-idx ISN  SHG  BVI  TxFlood
VLAN-Tag-Rewrite
 GigabitEthernet5/0/03 10-  * none
 GigabitEthernet5/0/14 10-  * none
 VirtualEthernet0/0/05 10-  * none

Virtual interface is operationally up. Connected to virtual interface server in 
VM.

DBGvpp# show hardware-interfaces VirtualEthernet0/0/0
  NameIdx   Link  Hardware
VirtualEthernet0/0/0   5 up   VirtualEthernet0/0/0
  Ethernet address 02:fe:98:19:c2:6b

DBGvpp# show interface VirtualEthernet0/0/0

  Name   Idx   State  Counter  Count

VirtualEthernet0/0/0  5 up   tx packets 
1

 tx bytes   
   60

 drops  
1





DBGvpp# show errors

   CountNode  Reason

 3l2-output   L2 output packets

 2l2-learnL2 learn packets

 2l2-learnL2 learn misses

 2l2-inputL2 input packets

 3l2-floodL2 flood packets

 1 VirtualEthernet0/0/0-txtx packet drops (no available 
descriptors)


DBGvpp# show vhost-user VirtualEthernet0/0/0
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 5)
virtio_net_hdr_sz 12
features mask (0x):
 features (0x150208000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
   VIRTIO_F_VERSION_1 (32)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_U

Re: [vpp-dev] LACP link bonding issue

2018-08-17 Thread steven luong via Lists.Fd.Io
Aleksander,

I found the CLI bug. You can easily work around it: set the physical 
interface state up first in your CLI sequence and it will work.

create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface ip address BondEthernet0 10.0.0.1/24
set interface state GigabitEtherneta/0/0 up   <-- move these two lines to the 
beginning, prior to create bond
set interface state GigabitEtherneta/0/1 up
set interface state BondEthernet0 up

Steven
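
Spelled out, the reordered sequence Steven suggests would look like this:

set interface state GigabitEtherneta/0/0 up
set interface state GigabitEtherneta/0/1 up
create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface ip address BondEthernet0 10.0.0.1/24
set interface state BondEthernet0 up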
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10207): https://lists.fd.io/g/vpp-dev/message/10207
Mute This Topic: https://lists.fd.io/mt/24525535/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] LACP link bonding issue

2018-08-16 Thread steven luong via Lists.Fd.Io
Aleksander,

This problem should be easy to figure out if you can gdb the code. When the 
very first slave interface is added to the bonding group via the command “bond 
add BondEthernet0 GigabitEtherneta/0/0”,

- The PTX machine schedules the interface with the periodic timer via 
lacp_schedule_periodic_timer().
- lacp-process is signaled with event_start to enable the periodic timer. 
lacp_process() only calls lacp_periodic() if “enabled” is set.

One of these two things is not happening in your platform/environment, and I 
cannot explain why just by looking. Stepping through the above two places with gdb 
will solve the mystery. Of course, it works in my environment all the time and I am not 
seeing the problem. What is your working environment? VM or bare metal? What 
flavor and version of Linux distro? I am running VPP on Ubuntu 16.04 on bare 
metal.

Steven
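
A minimal gdb sketch of the check Steven describes, assuming a debug build of VPP with symbols:

$ gdb -p $(pidof vpp)
(gdb) break lacp_schedule_periodic_timer
(gdb) break lacp_periodic
(gdb) continue

Then issue the "bond add" command from the VPP CLI and see whether both breakpoints are hit; whichever one is not reached points at the step that is not happening.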


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10189): https://lists.fd.io/g/vpp-dev/message/10189
Mute This Topic: https://lists.fd.io/mt/24525535/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread steven luong via Lists.Fd.Io
This configuration is not supported in VPP.

Steven

From:  on behalf of Aleksander Djuric 

Date: Wednesday, August 15, 2018 at 12:33 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] LACP link bonding issue

In addition, I have tried to configure LACP in the dpdk section of the VPP 
startup.conf, and I get the same output:

startup.conf:
unix {
   nodaemon
   log /var/log/vpp/vpp.log
   full-coredump
   cli-listen /run/vpp/cli.sock
   gid vpp
}

api-trace {
   on
}

api-segment {
   gid vpp
}

socksvr {
   default
}

dpdk {
   socket-mem 2048
   num-mbufs 131072

   dev :0a:00.0
   dev :0a:00.1
   dev :0a:00.2
   dev :0a:00.3

   vdev eth_bond0,mode=4,slave=:0a:00.0,slave=:0a:00.1,xmit_policy=l23
}

plugins {
   path /usr/lib/vpp_plugins
}

vpp# sh int
 Name   IdxState  MTU (L3/IP4/IP6/MPLS) Counter 
 Count
BondEthernet0 5 down 9000/0/0/0
GigabitEtherneta/0/0  1  bond-slave  9000/0/0/0
GigabitEtherneta/0/1  2  bond-slave  9000/0/0/0
GigabitEtherneta/0/2  3 down 9000/0/0/0
GigabitEtherneta/0/3  4 down 9000/0/0/0
local00 down  0/0/0/0
vpp# set interface ip address BondEthernet0 10.0.0.2/24
vpp# set interface state BondEthernet0 up
vpp# clear hardware
vpp# clear error
vpp# show hardware
 NameIdx   Link  Hardware
BondEthernet0  5 up   Slave-Idx: 1 2
 Ethernet address 00:0b:ab:f4:bd:84
 Ethernet Bonding
   carrier up full duplex speed 2000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/0   1slave GigabitEtherneta/0/0
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202  promisc
   flags: pmd maybe-multiseg bond-slave bond-slave-up tx-offload 
intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/1   2slave GigabitEtherneta/0/1
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202  promisc
   flags: pmd maybe-multiseg bond-slave bond-slave-up tx-offload 
intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/2   3down  GigabitEtherneta/0/2
 Ethernet address 00:0b:ab:f4:bd:86
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/3   4down  GigabitEtherneta/0/3
 Ethernet address 00:0b:ab:f4:bd:87
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

local0 0down  local0
 local
vpp# show error
  CountNode  Reason
vpp# trace add dpdk-input 50
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer
vpp# ping 10.0.0.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer

Thanks in advance for any help..

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10174): https://lists.fd.io/g/vpp-dev/message/10174
Mute This Topic: https://lists.fd.io/mt/24525535/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread steven luong via Lists.Fd.Io
Aleksander,

The problem is that the LACP periodic timer is not running, as shown in your output. I 
wonder whether lacp-process was launched properly or got stuck. Could you please run 
“show runtime” and check on the health of lacp-process?

 periodic timer: not running

Steven
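
One quick way to run that check from the shell, assuming vppctl is on the path:

$ vppctl show runtime | grep lacp-process

If lacp-process does not appear, or its counters never change, the process was not launched properly or is stuck, which would match the periodic timer never running.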

From:  on behalf of Aleksander Djuric 

Date: Wednesday, August 15, 2018 at 12:11 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] LACP link bonding issue

Hi Steven,

Thanks very much for the answer. Yes, these two boxes’ interfaces are connected back 
to back.
Both sides show the same diagnostic results; here is the output:

vpp# sh int
 Name   IdxState  MTU (L3/IP4/IP6/MPLS) Counter 
 Count
BondEthernet0 5  up  9000/0/0/0
GigabitEtherneta/0/0  1  up  9000/0/0/0 tx-error
   1
GigabitEtherneta/0/1  2  up  9000/0/0/0 tx-error
   1
GigabitEtherneta/0/2  3 down 9000/0/0/0
GigabitEtherneta/0/3  4 down 9000/0/0/0
local00 down  0/0/0/0   drops   
   2
vpp# clear hardware
vpp# clear error
vpp# clear hardware
vpp# clear error
vpp# ping 10.0.0.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show hardware
 NameIdx   Link  Hardware
BondEthernet0  5 up   BondEthernet0
 Ethernet address 00:0b:ab:f4:bd:84
GigabitEtherneta/0/0   1 up   GigabitEtherneta/0/0
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/1   2 up   GigabitEtherneta/0/1
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/2   3down  GigabitEtherneta/0/2
 Ethernet address 00:0b:ab:f4:bd:86
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/3   4down  GigabitEtherneta/0/3
 Ethernet address 00:0b:ab:f4:bd:87
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

local0 0down  local0
 local
vpp# show error
  CountNode  Reason
5ip4-glean   ARP requests sent
5BondEthernet0-txno slave
vpp# show lacp details
 GigabitEtherneta/0/0
   debug: 0
   loopback port: 0
   port moved: 0
   ready_n: 0
   ready: 0
   Actor
 system: 00:0b:ab:f4:bd:84
 system priority: 65535
 key: 5
 port priority: 255
 port number: 1
 state: 0x7
   LACP_STATE_LACP_ACTIVITY (0)
   LACP_STATE_LACP_TIMEOUT (1)
   LACP_STATE_AGGREGATION (2)
   Partner
 system: 00:00:00:00:00:00
 system priority: 65535
 key: 5
 port priority: 255
 port number: 1
 state: 0x1
   LACP_STATE_LACP_ACTIVITY (0)
 wait while timer: not running
 current while timer: not running
 periodic timer: not running
   RX-state: EXPIRED
   TX-state: TRANSMIT
   MUX-state: DETACHED
   PTX-state: PERIODIC_TX

 GigabitEtherneta/0/1
   debug: 0
   loopback port: 0
   port moved: 0
   ready_n: 0
   ready: 0
   Actor
 system: 00:0b:ab:f4:bd:84
 system priority: 65535
 key: 5
 port priority: 255
 port number: 2
 state: 0x7
   LACP_STATE_LACP_ACTIVITY (0)
   LACP_STATE_LACP_TIMEOUT (1)
   LACP_STATE_AGGREGATION (2)
   Partner
 system: 00:00:00:00:00:00
 system priority: 65535
 key: 5
 port priority: 255
 port number: 2
 state: 0x1
   LACP_STATE_LACP_ACTIVITY (0)
 wait while timer: not running
 current while timer: not running
 periodic timer: not running
   RX-state: EXPIRED
   TX-state: TRANSMIT
   MUX-state: DETACHED
   PTX-state: PERIODIC_TX

vpp# trace add dpdk-input 50
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer
vpp# ping 10.0.0.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10173): https://lists.fd.io/g/vpp-dev/message/10173
Mute This Topic: https://lists.fd.io/mt/24525535/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]

Re: [vpp-dev] LACP link bonding issue

2018-08-14 Thread steven luong via Lists.Fd.Io
I forgot to ask if these 2 boxes’ interfaces are connected back to back or 
through a switch.

Steven

From:  on behalf of "steven luong via Lists.Fd.Io" 

Reply-To: "Steven Luong (sluong)" 
Date: Tuesday, August 14, 2018 at 8:24 AM
To: Aleksander Djuric , "vpp-dev@lists.fd.io" 

Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] LACP link bonding issue

Aleksander

It looks like the LACP packets are either not going out on the interfaces as expected 
or are being dropped. Additional output and a trace are needed to determine why. 
Please collect the following from both sides.

clear hardware
clear error

wait a few seconds

show hardware
show error
show lacp details
trace add dpdk-input 50

wait a few seconds

show trace

Steven

From:  on behalf of Aleksander Djuric 

Date: Tuesday, August 14, 2018 at 7:28 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] LACP link bonding issue

Hi all,

I'm trying to set up bonding in mode 4 (LACP) between two VPP hosts and
have encountered the problem of no active slaves on the bond interface. Both hosts 
run VPP v18.10-rc0. The same config runs perfectly in other modes. Any idea?

1st VPP config:

create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface ip address BondEthernet0 10.0.0.1/24
set interface state GigabitEtherneta/0/0 up
set interface state GigabitEtherneta/0/1 up
set interface state BondEthernet0 up

2nd VPP config:

create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface ip address BondEthernet0 10.0.0.2/24
set interface state GigabitEtherneta/0/0 up
set interface state GigabitEtherneta/0/1 up
set interface state BondEthernet0 up

vpp1# ping 10.0.0.2
Statistics: 5 sent, 0 received, 100% packet loss

vpp1# sh int
 Name   IdxState  MTU (L3/IP4/IP6/MPLS) Counter 
 Count
BondEthernet0 5  up  9000/0/0/0 tx packets  
  10
   tx bytes 
420
   drops
 10
GigabitEtherneta/0/0  1  up  9000/0/0/0 tx-error
   1
GigabitEtherneta/0/1  2  up  9000/0/0/0 tx-error
   1
GigabitEtherneta/0/2  3 down 9000/0/0/0
GigabitEtherneta/0/3  4 down 9000/0/0/0
local00 down  0/0/0/0   drops   
   2

vpp1# sh bond
interface name   sw_if_index  mode load balance  active slaves  slaves
BondEthernet05lacp l23   0  2

vpp1# show lacp
actor state 
 partner state
interface namesw_if_index  bond interface   
exp/def/dis/col/syn/agg/tim/act  exp/def/dis/col/syn/agg/tim/act
GigabitEtherneta/0/0  1BondEthernet0  0   0   0   0   0   1 
  1   10   0   0   0   0   0   0   1
  LAG ID: [(,00-0b-ab-f4-f9-66,0005,00ff,0001), 
(,00-00-00-00-00-00,0005,00ff,0001)]
  RX-state: EXPIRED, TX-state: TRANSMIT, MUX-state: DETACHED, PTX-state: 
PERIODIC_TX
GigabitEtherneta/0/1  2BondEthernet0  0   0   0   0   0   1 
  1   10   0   0   0   0   0   0   1
  LAG ID: [(,00-0b-ab-f4-f9-66,0005,00ff,0002), 
(,00-00-00-00-00-00,0005,00ff,0002)]
  RX-state: EXPIRED, TX-state: TRANSMIT, MUX-state: DETACHED, PTX-state: 
PERIODIC_TX

Regards,
Aleksander

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10149): https://lists.fd.io/g/vpp-dev/message/10149
Mute This Topic: https://lists.fd.io/mt/24525535/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] LACP link bonding issue

2018-08-14 Thread steven luong via Lists.Fd.Io
Aleksander

It looks like the LACP packets are either not going out on the interfaces as expected 
or are being dropped. Additional output and a trace are needed to determine why. 
Please collect the following from both sides.

clear hardware
clear error

wait a few seconds

show hardware
show error
show lacp details
trace add dpdk-input 50

wait a few seconds

show trace

Steven

From:  on behalf of Aleksander Djuric 

Date: Tuesday, August 14, 2018 at 7:28 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] LACP link bonding issue

Hi all,

I'm trying to set up bonding in mode 4 (LACP) between two VPP hosts and
have encountered the problem of no active slaves on the bond interface. Both hosts 
run VPP v18.10-rc0. The same config runs perfectly in other modes. Any idea?

1st VPP config:

create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface ip address BondEthernet0 10.0.0.1/24
set interface state GigabitEtherneta/0/0 up
set interface state GigabitEtherneta/0/1 up
set interface state BondEthernet0 up

2nd VPP config:

create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface ip address BondEthernet0 10.0.0.2/24
set interface state GigabitEtherneta/0/0 up
set interface state GigabitEtherneta/0/1 up
set interface state BondEthernet0 up

vpp1# ping 10.0.0.2
Statistics: 5 sent, 0 received, 100% packet loss

vpp1# sh int
 Name   IdxState  MTU (L3/IP4/IP6/MPLS) Counter 
 Count
BondEthernet0 5  up  9000/0/0/0 tx packets  
  10
   tx bytes 
420
   drops
 10
GigabitEtherneta/0/0  1  up  9000/0/0/0 tx-error
   1
GigabitEtherneta/0/1  2  up  9000/0/0/0 tx-error
   1
GigabitEtherneta/0/2  3 down 9000/0/0/0
GigabitEtherneta/0/3  4 down 9000/0/0/0
local00 down  0/0/0/0   drops   
   2

vpp1# sh bond
interface name   sw_if_index  mode load balance  active slaves  slaves
BondEthernet05lacp l23   0  2

vpp1# show lacp
actor state 
 partner state
interface namesw_if_index  bond interface   
exp/def/dis/col/syn/agg/tim/act  exp/def/dis/col/syn/agg/tim/act
GigabitEtherneta/0/0  1BondEthernet0  0   0   0   0   0   1 
  1   10   0   0   0   0   0   0   1
  LAG ID: [(,00-0b-ab-f4-f9-66,0005,00ff,0001), 
(,00-00-00-00-00-00,0005,00ff,0001)]
  RX-state: EXPIRED, TX-state: TRANSMIT, MUX-state: DETACHED, PTX-state: 
PERIODIC_TX
GigabitEtherneta/0/1  2BondEthernet0  0   0   0   0   0   1 
  1   10   0   0   0   0   0   0   1
  LAG ID: [(,00-0b-ab-f4-f9-66,0005,00ff,0002), 
(,00-00-00-00-00-00,0005,00ff,0002)]
  RX-state: EXPIRED, TX-state: TRANSMIT, MUX-state: DETACHED, PTX-state: 
PERIODIC_TX

Regards,
Aleksander

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10148): https://lists.fd.io/g/vpp-dev/message/10148
Mute This Topic: https://lists.fd.io/mt/24525535/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] tx-drops with vhost-user interface

2018-08-06 Thread steven luong via Lists.Fd.Io
Vijay,

From the show output, I can’t really tell what your problem is. If you could 
provide additional information about your environment, I could try setting it 
up and see what’s wrong. Things I need from you are exact VPP version, VPP 
configuration, qemu startup command line or the XML startup file if you use 
virsh, and the version of the VM distro.

Steven

From:  on behalf of "Chandra Mohan, Vijay Mohan" 

Date: Monday, August 6, 2018 at 10:31 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] tx-drops with vhost-user interface

Hi,

I am trying to pass traffic with a vhost-user interface and am seeing tx-drops on 
the virtual interface. Here is the setup: I created a bridge domain with a physical 
interface and a vhost-user interface. Physical interface GigabitEthernet5/0/0 
is connected to a traffic generator. As shown below, I am observing drops on 
VirtualEthernet0/0/0.
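
For context, the bridge configuration described above would look roughly like the following sketch (the socket path is taken from the show vhost-user output further down; everything else is from the bridge-domain output):

create vhost-user socket /socket/vnet-0
set interface l2 bridge GigabitEthernet5/0/0 1
set interface l2 bridge GigabitEthernet5/0/1 1
set interface l2 bridge VirtualEthernet0/0/0 1
set interface state VirtualEthernet0/0/0 up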

Following is the config and vhost-user commands o/p:

DBGvpp# show bridge-domain 1 detail
  BD-ID   Index   BSN  Age(min)  Learning  U-Forwrd  UU-Flood  Flooding  
ARP-Term  BVI-Intf
1   1  0 offonononon   off  
 N/A

   Interface   If-idx ISN  SHG  BVI  TxFlood
VLAN-Tag-Rewrite
 GigabitEthernet5/0/03 10-  * none
 GigabitEthernet5/0/14 10-  * none
 VirtualEthernet0/0/05 10-  * none

Virtual interface is operationally up. Connected to virtual interface server in 
VM.

DBGvpp# show hardware-interfaces VirtualEthernet0/0/0
  NameIdx   Link  Hardware
VirtualEthernet0/0/0   5 up   VirtualEthernet0/0/0
  Ethernet address 02:fe:98:19:c2:6b

DBGvpp# show interface VirtualEthernet0/0/0

  Name   Idx   State  Counter  Count

VirtualEthernet0/0/0  5 up   tx packets 
1

 tx bytes   
   60

 drops  
1





DBGvpp# show errors

   CountNode  Reason

 3l2-output   L2 output packets

 2l2-learnL2 learn packets

 2l2-learnL2 learn misses

 2l2-inputL2 input packets

 3l2-floodL2 flood packets

 1 VirtualEthernet0/0/0-txtx packet drops (no available 
descriptors)


DBGvpp# show vhost-user VirtualEthernet0/0/0
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 5)
virtio_net_hdr_sz 12
features mask (0x):
 features (0x150208000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
   VIRTIO_F_VERSION_1 (32)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)

socket filename /socket/vnet-0 type client errno "Success"

rx placement:
   thread 0 on vring 1, polling
tx placement: lock-free
   thread 0 on vring 0

Memory regions (total 3)
region fdguest_phys_addrmemory_sizeuserspace_addr 
mmap_offsetmmap_addr
== = == == == 
== ==
  0 500x0001 0x0001c000 0x7fe39360 
0xc000 0x7fc0e4a0
  1 510x 0x000a 0x7fe2d360 
0x 0x7fc02480
  2 520x000c 0xbff4 0x7fe2d36c 
0x000c 0x7fbf648c

Virtqueue 0 (TX)
  qsz 256 last_avail_idx 0 last_used_idx 0
  avail.flags 0 avail.idx 0 used.flags 1 used.idx 0
  kickfd 53 callfd 54 errfd -1

Virtqueue 1 (RX)
  qsz 256 last_avail_idx 0 last_used_idx 0
  avail.flags 0 avail.idx 0 used.flags 1 used.idx 0
  kickfd 46 callfd 55 errfd -1

Tried dumping descriptors from Rx queue and didn’t find any entries. It’s all 
zeros.

Any idea what is going on here ?

Thanks,
Vijay

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10054): https://lists.fd.io/g/vpp-dev/message/10054
Mute This Topic: https://lists.fd.io/mt/24211191/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-06 Thread steven luong via Lists.Fd.Io
Ravi,

I suppose you already checked the obvious: that the vhost connection is 
established and the shared memory has at least one region in show vhost. For the traffic 
issue, use show error to see why packets are dropping, and use trace add 
vhost-user-input plus show trace to see whether vhost is getting the packets.

Steven
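
A compact sketch of that debugging sequence at the VPP CLI (the trace count is just an example):

vpp# show vhost-user
vpp# clear error
vpp# trace add vhost-user-input 50
  (send a few packets, e.g. ping from the peer)
vpp# show error
vpp# show trace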

On 6/6/18, 1:33 PM, "Ravi Kerur"  wrote:

Damjan, Steven,

I will get back to the system on which VPP is crashing and get more
info on it later.

For now, I got hold of another system (same 16.04 x86_64) and I tried
with the same configuration

VPP vhost-user on host
VPP virtio-user on a container

This time VPP didn't crash. Ping doesn't work though. Both vhost-user
and virtio are transmitting and receiving packets. What do I need to
enable so that ping works?

(1) on host:
show interface
  Name   Idx   State  Counter
Count
VhostEthernet01down
VhostEthernet12down
VirtualEthernet0/0/0  3 up   rx packets
 5
 rx bytes
   210
 tx packets
 5
 tx bytes
   210
 drops
10
local00down
vpp# show ip arp
vpp#


(2) On container
show interface
  Name   Idx   State  Counter
Count
VirtioUser0/0/0   1 up   rx packets
 5
 rx bytes
   210
 tx packets
 5
 tx bytes
   210
 drops
10
local00down
vpp# show ip arp
vpp#

Thanks.

On Wed, Jun 6, 2018 at 10:44 AM, Saxena, Nitin  
wrote:
> Hi Ravi,
>
> Sorry for diluting your topic. From your stack trace and show interface 
output I thought you are using OCTEONTx.
>
> Regards,
> Nitin
>
>> On 06-Jun-2018, at 22:10, Ravi Kerur  wrote:
>>
>> Steven, Damjan, Nitin,
>>
>> Let me clarify so there is no confusion, since you are assisting me to
>> get this working I will make sure we are all on same page. I believe
>> OcteonTx is related to Cavium/ARM and I am not using it.
>>
>> DPDK/testpmd (vhost-virtio) works with both 2MB and 1GB hugepages. For
>> 2MB I had to use '--single-file-segments' option.
>>
>> There used to be a way in DPDK to influence compiler to compile for
>> certain architecture f.e. 'nehalem'. I will try that option but I want
>> to make sure steps I am executing is fine first.
>>
>> (1) I compile VPP (18.04) code on x86_64 system with following
>> CPUFLAGS. My system has 'avx, avx2, sse3, see4_2' for SIMD.
>>
>> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
>> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
>> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
>> xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
>> ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1
>> sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c
>> rdrand lahf_lm abm epb invpcid_single retpoline kaiser tpr_shadow vnmi
>> flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms
>> invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts
>>
>> (2) I run VPP on the same system.
>>
>> (3) VPP on host has following startup.conf
>> unix {
>>  nodaemon
>>  log /var/log/vpp/vpp.log
>>  full-coredump
>>  cli-listen /run/vpp/cli.sock
>>  gid vpp
>> }
>>
>> api-trace {
>>  on
>> }
>>
>> api-segment {
>>  gid vpp
>> }
>>
>> dpdk {
>>  no-pci
>>
>>  vdev net_vhost0,iface=/var/run/vpp/sock1.sock
>>  vdev net_vhost1,iface=/var/run/vpp/sock2.sock
>>
>>  huge-dir /dev/hugepages_1G
>>  socket-mem 2,0
>> }
>>
>> (4) VPP vhost-user config (on host)
>> create vhost socket /var/run/vpp/sock3.sock
>> set interface state VirtualEthernet0/0/0 up
>> set interface ip address VirtualEthernet0/0/0 10.1.1.1/24
>>
>> (5) show dpdk version (Version is the same on host and container, EAL
>> params are different)
>> DPDK Version: DPDK 18.02.1
>> DPDK EAL init args:   -c 1 -n 4 --no-pci --vdev
>> 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
Ravi,

I only have an SSE machine (Ivy Bridge), and DPDK is using the ring mempool as far 
as I can tell from gdb. You are using AVX2, which I don't have a machine to try, so I 
can't check whether the Octeontx mempool is the default mempool for AVX2. What do you put 
in the dpdk section of the host startup.conf? What is the output of show dpdk version?

Steven

On 6/5/18, 1:40 PM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur" 
 wrote:

Hi Damjan,

I am not intentionally using it. I am running VPP on an x86 Ubuntu server.

uname -a
4.9.77.2-rt61 #1 SMP PREEMPT RT Tue May 15 20:36:51 UTC 2018 x86_64
x86_64 x86_64 GNU/Linux

Thanks.

On Tue, Jun 5, 2018 at 1:10 PM, Damjan Marion  wrote:
> Dear Ravi,
>
> Currently we don't support Octeon TX mempool. Are you intentionally using
> it?
>
> Regards,
>
> Damjan
>
> On 5 Jun 2018, at 21:46, Ravi Kerur  wrote:
>
> Steven,
>
> I managed to get Tx/Rx rings setup with 1GB hugepages. However, when I
> assign an IP address to both vhost-user/virtio interfaces and initiate
> a ping VPP crashes.
>
> Any other mechanism available to test Tx/Rx path between Vhost and
> Virtio? Details below.
>
>
> ***On host***
> vpp#show vhost-user VirtualEthernet0/0/0
> Virtio vhost-user interfaces
> Global:
>  coalesce frames 32 time 1e-3
>  number of rx virtqueues in interrupt mode: 0
> Interface: VirtualEthernet0/0/0 (ifindex 3)
> virtio_net_hdr_sz 12
> features mask (0x):
> features (0x110008000):
>   VIRTIO_NET_F_MRG_RXBUF (15)
>   VIRTIO_F_INDIRECT_DESC (28)
>   VIRTIO_F_VERSION_1 (32)
>  protocol features (0x0)
>
> socket filename /var/run/vpp/sock3.sock type server errno "Success"
>
> rx placement:
>   thread 0 on vring 1, polling
> tx placement: lock-free
>   thread 0 on vring 0
>
> Memory regions (total 1)
> region fdguest_phys_addrmemory_sizeuserspace_addr
> mmap_offsetmmap_addr
> == = == == ==
> == ==
>  0 260x7f54c000 0x4000 0x7f54c000
> 0x 0x7faf
>
> Virtqueue 0 (TX)
>  qsz 256 last_avail_idx 0 last_used_idx 0
>  avail.flags 1 avail.idx 256 used.flags 1 used.idx 0
>  kickfd 27 callfd 24 errfd -1
>
> Virtqueue 1 (RX)
>  qsz 256 last_avail_idx 0 last_used_idx 0
>  avail.flags 1 avail.idx 0 used.flags 1 used.idx 0
>  kickfd 28 callfd 25 errfd -1
>
>
> vpp#set interface ip address VirtualEthernet0/0/0 10.1.1.1/24
>
> On container**
> vpp# show interface VirtioUser0/0/0
>  Name   Idx   State  Counter
>Count
> VirtioUser0/0/0   1 up
> vpp#
> vpp# set interface ip address VirtioUser0/0/0 10.1.1.2/24
> vpp#
> vpp# ping 10.1.1.1
>
> Statistics: 5 sent, 0 received, 100% packet loss
> vpp#
>
>
> Host vpp crash with following backtrace**
> Continuing.
>
> Program received signal SIGSEGV, Segmentation fault.
> octeontx_fpa_bufpool_alloc (handle=0)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:57
> 57return (void *)(uintptr_t)fpavf_read64((void *)(handle +
> (gdb) bt
> #0  octeontx_fpa_bufpool_alloc (handle=0)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:57
> #1  octeontx_fpavf_dequeue (mp=0x7fae7fc9ab40, obj_table=0x7fb04d868880,
> n=528)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:98
> #2  0x7fb04b73bdef in rte_mempool_ops_dequeue_bulk (n=528,
> obj_table=,
>mp=0x7fae7fc9ab40)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:492
> #3  __mempool_generic_get (cache=, n=,
> obj_table=,
>mp=)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1271
> #4  rte_mempool_generic_get (cache=, n=,
>obj_table=, mp=)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1306
> #5  rte_mempool_get_bulk (n=528, obj_table=,
> mp=0x7fae7fc9ab40)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1339
> #6  dpdk_buffer_fill_free_list_avx2 (vm=0x7fb08ec69480
> , fl=0x7fb04cb2b100,
>min_free_buffers=)
>at 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
Ravi,

In order to use the DPDK virtio_user, you need 1GB huge pages.

Steven
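
A rough sketch of reserving and mounting 1GB huge pages on the host (the page count is just an example; the mount point matches the huge-dir used later in this thread):

# add to the kernel command line and reboot:
#   default_hugepagesz=1G hugepagesz=1G hugepages=4
mkdir -p /dev/hugepages_1G
mount -t hugetlbfs -o pagesize=1G none /dev/hugepages_1G

VPP is then pointed at that mount with huge-dir /dev/hugepages_1G in the dpdk section of startup.conf, as shown further down in this thread.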

On 6/5/18, 11:17 AM, "Ravi Kerur"  wrote:

Hi Steven,

The connection is the problem. I don't see the memory regions set up correctly.
Below are some details. Currently I am using 2MB hugepages.

(1) Create vhost-user server
debug vhost-user on
vpp# create vhost socket /var/run/vpp/sock3.sock server
VirtualEthernet0/0/0
vpp# set interface state VirtualEthernet0/0/0 up
vpp#
vpp#

(2) Instantiate a container
docker run -it --privileged -v
/var/run/vpp/sock3.sock:/var/run/usvhost1 -v
/dev/hugepages:/dev/hugepages dpdk-app-vpp:latest

(3) Inside the container run EAL/DPDK virtio with following startup conf.
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

dpdk {
no-pci
vdev virtio_user0,path=/var/run/usvhost1
}

The following errors are seen due to 2MB hugepages, and I think DPDK
requires the "--single-file-segments" option.

/usr/bin/vpp[19]: dpdk_config:1275: EAL init args: -c 1 -n 4 --no-pci
--vdev virtio_user0,path=/var/run/usvhost1 --huge-dir
/run/vpp/hugepages --file-prefix vpp --master-lcore 0 --socket-mem
64,64
/usr/bin/vpp[19]: dpdk_config:1275: EAL init args: -c 1 -n 4 --no-pci
--vdev virtio_user0,path=/var/run/usvhost1 --huge-dir
/run/vpp/hugepages --file-prefix vpp --master-lcore 0 --socket-mem
64,64
EAL: 4 hugepages of size 1073741824 reserved, but no mounted hugetlbfs
found for that size
EAL: VFIO support initialized
get_hugepage_file_info(): Exceed maximum of 8
prepare_vhost_memory_user(): Failed to prepare memory for vhost-user
DPDK physical memory layout:


Second test case:
(1) and (2) are the same as above. I run VPP inside a container with the
following startup config:

unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

dpdk {
no-pci
single-file-segments
vdev virtio_user0,path=/var/run/usvhost1
}


VPP fails to start with
plugin.so
vpp[19]: dpdk_config: unknown input `single-file-segments no-pci vd...'
vpp[19]: dpdk_config: unknown input `single-file-segments no-pci vd...'

[1]+  Done/usr/bin/vpp -c /etc/vpp/startup.conf
root@867dc128b544:~/dpdk#


show version (on both host and container).
vpp v18.04-rc2~26-gac2b736~b45 built by root on 34a554d1c194 at Wed
Apr 25 14:53:07 UTC 2018
vpp#

Thanks.

On Tue, Jun 5, 2018 at 9:23 AM, Steven Luong (sluong)  
wrote:
> Ravi,
>
> Do this
>
> 1. Run VPP native vhost-user in the host. Turn on debug "debug vhost-user 
on".
> 2. Bring up the container with the vdev virtio_user commands that you 
have as before
> 3. show vhost-user in the host and verify that it has a shared memory 
region. If not, the connection has a problem. Collect the show vhost-user and 
debug vhost-user and send them to me and stop. If yes, proceed with step 4.
> 4. type "trace vhost-user-input 100" in the host
> 5. clear error, and clear interfaces in the host and the container.
> 6. do the ping from the container.
> 7. Collect show error, show trace, show interface, and show vhost-user in 
the host. Collect show error and show interface in the container. Put output in 
github and provide a link to view. There is no need to send a large file.
>
> Steven
>
> On 6/4/18, 5:50 PM, "Ravi Kerur"  wrote:
>
> Hi Steven,
>
> Thanks for your help. I am using vhost-user client (VPP in container)
> and vhost-user server (VPP in host). I thought it should work.
>
> create vhost socket /var/run/vpp/sock3.sock server (On host)
>
> create vhost socket /var/run/usvhost1 (On container)
>
> Can you please point me to a document which shows how to create VPP
> virtio_user interfaces or static configuration in
> /etc/vpp/startup.conf?
>
> I have used following declarations in /etc/vpp/startup.conf
>
> # vdev virtio_user0,path=/var/run/vpp/sock3.sock,mac=52:54:00:00:04:01
> # vdev virtio_user1,path=/var/run/vpp/sock4.sock,mac=52:54:00:00:04:02
>
> but it doesn't work.
>
> Thanks.
>
> On Mon, Jun 4, 2018 at 3:57 PM, Steven Luong (sluong) 
 wrote:
> > Ravi,
> >
> > VPP only supports vhost-user in the device mode. In your example, 
the host, in device mode, and the container also in device mode do not make a 
happy couple. You need one of them, either the host or 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
Ravi,

Do this

1. Run VPP native vhost-user in the host. Turn on debug "debug vhost-user on". 
2. Bring up the container with the vdev virtio_user commands that you have as 
before
3. show vhost-user in the host and verify that it has a shared memory region. 
If not, the connection has a problem. Collect the show vhost-user and debug 
vhost-user and send them to me and stop. If yes, proceed with step 4.
4. type "trace vhost-user-input 100" in the host
5. clear error, and clear interfaces in the host and the container.
6. do the ping from the container.
7. Collect show error, show trace, show interface, and show vhost-user in the 
host. Collect show error and show interface in the container. Put output in 
github and provide a link to view. There is no need to send a large file.

Steven
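
Condensed into commands, the host-side part of that procedure looks roughly like this (the trace count is from step 4; the traffic comes from the ping in step 6):

vpp# debug vhost-user on
vpp# show vhost-user
vpp# trace add vhost-user-input 100
vpp# clear error
vpp# clear interfaces
  (ping from the container)
vpp# show error
vpp# show trace
vpp# show interface
vpp# show vhost-user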

On 6/4/18, 5:50 PM, "Ravi Kerur"  wrote:

Hi Steven,

Thanks for your help. I am using vhost-user client (VPP in container)
and vhost-user server (VPP in host). I thought it should work.

create vhost socket /var/run/vpp/sock3.sock server (On host)

create vhost socket /var/run/usvhost1 (On container)

Can you please point me to a document which shows how to create VPP
virtio_user interfaces or static configuration in
/etc/vpp/startup.conf?

I have used following declarations in /etc/vpp/startup.conf

# vdev virtio_user0,path=/var/run/vpp/sock3.sock,mac=52:54:00:00:04:01
# vdev virtio_user1,path=/var/run/vpp/sock4.sock,mac=52:54:00:00:04:02

but it doesn't work.

Thanks.

On Mon, Jun 4, 2018 at 3:57 PM, Steven Luong (sluong)  
wrote:
> Ravi,
>
> VPP only supports vhost-user in the device mode. In your example, the 
host, in device mode, and the container also in device mode do not make a happy 
couple. You need one of them, either the host or container, running in driver 
mode using the dpdk vdev virtio_user command in startup.conf. So you need 
something like this
>
> (host) VPP native vhost-user - (container) VPP DPDK vdev virtio_user
>   -- or --
> (host) VPP DPDK vdev virtio_user  (container) VPP native vhost-user
>
> Steven
>
> On 6/4/18, 3:27 PM, "Ravi Kerur"  wrote:
>
> Hi Steven
>
> Though crash is not happening anymore, there is still an issue with Rx
> and Tx. To eliminate whether it is testpmd or vpp, I decided to run
>
> (1) VPP vhost-user server on host-x
> (2) Run VPP in a container on host-x and vhost-user client port
> connecting to vhost-user server.
>
> Still doesn't work. Details below. Please let me know if something is
> wrong in what I am doing.
>
>
> (1) VPP vhost-user as a server
> (2) VPP in a container virtio-user or vhost-user client
>
> (1) Create vhost-user server socket on VPP running on host.
>
> vpp#create vhost socket /var/run/vpp/sock3.sock server
> vpp#set interface state VirtualEthernet0/0/0 up
> show vhost-user VirtualEthernet0/0/0 descriptors
> Virtio vhost-user interfaces
> Global:
> coalesce frames 32 time 1e-3
> number of rx virtqueues in interrupt mode: 0
> Interface: VirtualEthernet0/0/0 (ifindex 3)
> virtio_net_hdr_sz 0
> features mask (0x):
> features (0x0):
> protocol features (0x0)
>
> socket filename /var/run/vpp/sock3.sock type server errno "Success"
>
> rx placement:
> tx placement: spin-lock
> thread 0 on vring 0
>
> Memory regions (total 0)
>
> vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.1/24
> vpp#
>
> (2) Instantiate a docker container to run VPP connecting to 
sock3.server socket.
>
> docker run -it --privileged -v
> /var/run/vpp/sock3.sock:/var/run/usvhost1 -v
> /dev/hugepages:/dev/hugepages dpdk-app-vpp:latest
> root@4b1bd06a3225:~/dpdk#
> root@4b1bd06a3225:~/dpdk# ps -ef
> UID PID PPID C STIME TTY TIME CMD
> root 1 0 0 21:39 ? 00:00:00 /bin/bash
> root 17 1 0 21:39 ? 00:00:00 ps -ef
> root@4b1bd06a3225:~/dpdk#
>
> root@8efda6701ace:~/dpdk# ps -ef | grep vpp
> root 19 1 39 21:41 ? 00:00:03 /usr/bin/vpp -c /etc/vpp/startup.conf
> root 25 1 0 21:41 ? 00:00:00 grep --color=auto vpp
> root@8efda6701ace:~/dpdk#
>
> vpp#create vhost socket /var/run/usvhost1
> vpp#set interface state VirtualEthernet0/0/0 up
> vpp#show vhost-user VirtualEthernet0/0/0 descriptors
> Virtio vhost-user interfaces
> Global:
> coalesce frames 32 time 1e-3
> number of rx virtqueues in interrupt mode: 0
> Interface: VirtualEthernet0/0/0 (ifindex 1)
> virtio_net_hdr_sz 0
> features mask (0x):
>  

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-04 Thread steven luong via Lists.Fd.Io
Ravi,

VPP only supports vhost-user in device mode. In your example, the host in 
device mode and the container also in device mode do not make a happy couple. 
You need one of them, either the host or the container, running in driver mode 
using the dpdk vdev virtio_user command in startup.conf. So you need something 
like this:

(host) VPP native vhost-user - (container) VPP DPDK vdev virtio_user
  -- or --
(host) VPP DPDK vdev virtio_user  (container) VPP native vhost-user

Steven
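
A minimal startup.conf sketch for the driver-mode (dpdk vdev virtio_user) side, assuming the device-mode side created its vhost-user socket at /var/run/vpp/sock3.sock:

dpdk {
  no-pci
  vdev virtio_user0,path=/var/run/vpp/sock3.sock
}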

On 6/4/18, 3:27 PM, "Ravi Kerur"  wrote:

Hi Steven

Though crash is not happening anymore, there is still an issue with Rx
and Tx. To eliminate whether it is testpmd or vpp, I decided to run

(1) VPP vhost-user server on host-x
(2) Run VPP in a container on host-x and vhost-user client port
connecting to vhost-user server.

Still doesn't work. Details below. Please let me know if something is
wrong in what I am doing.


(1) VPP vhost-user as a server
(2) VPP in a container virtio-user or vhost-user client

(1) Create vhost-user server socket on VPP running on host.

vpp#create vhost socket /var/run/vpp/sock3.sock server
vpp#set interface state VirtualEthernet0/0/0 up
show vhost-user VirtualEthernet0/0/0 descriptors
Virtio vhost-user interfaces
Global:
coalesce frames 32 time 1e-3
number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 3)
virtio_net_hdr_sz 0
features mask (0x):
features (0x0):
protocol features (0x0)

socket filename /var/run/vpp/sock3.sock type server errno "Success"

rx placement:
tx placement: spin-lock
thread 0 on vring 0

Memory regions (total 0)

vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.1/24
vpp#

(2) Instantiate a docker container to run VPP connecting to sock3.server 
socket.

docker run -it --privileged -v
/var/run/vpp/sock3.sock:/var/run/usvhost1 -v
/dev/hugepages:/dev/hugepages dpdk-app-vpp:latest
root@4b1bd06a3225:~/dpdk#
root@4b1bd06a3225:~/dpdk# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 21:39 ? 00:00:00 /bin/bash
root 17 1 0 21:39 ? 00:00:00 ps -ef
root@4b1bd06a3225:~/dpdk#

root@8efda6701ace:~/dpdk# ps -ef | grep vpp
root 19 1 39 21:41 ? 00:00:03 /usr/bin/vpp -c /etc/vpp/startup.conf
root 25 1 0 21:41 ? 00:00:00 grep --color=auto vpp
root@8efda6701ace:~/dpdk#

vpp#create vhost socket /var/run/usvhost1
vpp#set interface state VirtualEthernet0/0/0 up
vpp#show vhost-user VirtualEthernet0/0/0 descriptors
Virtio vhost-user interfaces
Global:
coalesce frames 32 time 1e-3
number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 1)
virtio_net_hdr_sz 0
features mask (0x):
features (0x0):
protocol features (0x0)

socket filename /var/run/usvhost1 type client errno "Success"

rx placement:
tx placement: spin-lock
thread 0 on vring 0

Memory regions (total 0)

vpp#

vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.2/24
vpp#

vpp# ping 192.168.1.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp#

On Thu, May 31, 2018 at 2:30 PM, Steven Luong (sluong)  
wrote:
> show interface and look for the counter and count columns for the 
corresponding interface.
>
> Steven
>
> On 5/31/18, 1:28 PM, "Ravi Kerur"  wrote:
>
> Hi Steven,
>
> You made my day, thank you. I didn't realize that different DPDK versions
> (VPP -- 18.02.1 and testpmd -- from the latest git repo, probably 18.05)
> could be the cause of the problem. I still don't understand why it
> should be, as the virtio/vhost messages are meant to set up the tx/rx rings
> correctly.
>
> I downloaded dpdk 18.02.1 stable release and at least vpp doesn't
> crash now (for both vpp-native and dpdk vhost interfaces). I have one
> question is there a way to read vhost-user statistics counter (Rx/Tx)
> on vpp? I only know
>
> 'show vhost-user ' and 'show vhost-user  descriptors'
> which doesn't show any counters.
>
> Thanks.
>
> On Thu, May 31, 2018 at 11:51 AM, Steven Luong (sluong)
>  wrote:
> > Ravi,
> >
> > For (1) which works, what dpdk version are you using in the host? 
Are you using the same dpdk version as VPP is using? Since you are using VPP 
latest, I think it is 18.02. Type "show dpdk version" at the VPP prompt to find 
out for sure.
> >
> > Steven
> >
> > On 5/31/18, 11:44 AM, "Ravi Kerur"  wrote:
> >
> > Hi Steven,
> >
> > I have tested the following scenarios and it is basically not clear 
why