Dear Hongjun,

(PFA = Please find attached)

Code:
Branch: stable/1901
Patch: PFA: stable_1901_patch.txt

Configuration:
PFA: vpp-startup.conf

Steps for setting up:
PFA: setup_steps.bash

We attempted to make 10,000 connections, using API calls from our control-plane application to create the PPPoE sessions.
Final output from the control plane before VPP crashed:
PFA: no_of_connections.png

VPP log-file:
PFA: vpp.log

Dump from journalctl:
PFA: journal_dump.txt

As you can see from the journalctl output, the code crashes on the assertion 
‘node_index < (1 << 14)’. As the attached patch shows, we raised the original 
limit from 1 << 10 to 1 << 14, because with the original assertion VPP crashes 
long before even 7,000 connections are up (at around 400 connections).
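
For context, the assertion lives in vlib_error_set() (see the src/vlib/error.h hunk in the attached patch), which packs a graph-node index and a per-node error code into a single error value, so the ASSERT bounds how many graph nodes can own error counters. The backtrace in the attached journal dump (vnet_register_interface -> register_node -> vlib_error_set) suggests every PPPoE session brings up a vnet interface and, with it, new graph nodes. The standalone C sketch below only illustrates that arithmetic; the base node count and nodes-per-session figures are rough guesses on our side, not measured values.

/* Back-of-the-envelope sketch: how many sessions fit under the node-index
 * ASSERT in vlib_error_set(), assuming a fixed number of graph nodes at
 * startup and a fixed number of extra nodes per PPPoE session interface. */
#include <stdio.h>

int
main (void)
{
  unsigned base_nodes = 200;         /* graph nodes before any session (guess) */
  unsigned nodes_per_session = 2;    /* e.g. per-interface tx + output nodes (guess) */
  unsigned limit_orig = 1u << 10;    /* original ASSERT in stable/1901 */
  unsigned limit_patched = 1u << 14; /* our patched ASSERT */

  printf ("sessions until the 1<<10 assert trips: ~%u\n",
          (limit_orig - base_nodes) / nodes_per_session);     /* ~412  */
  printf ("sessions until the 1<<14 assert trips: ~%u\n",
          (limit_patched - base_nodes) / nodes_per_session);  /* ~8092 */
  return 0;
}

Under those guessed numbers, the original limit is exhausted at roughly the ~400 sessions we observed, and the raised limit in the high single-thousands, which is consistent with the crash occurring before the 10,000th session.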



Regards,
 
Abeeha Aqeel


From: AbduSami bin Khurram
Sent: Wednesday, April 17, 2019 2:33 PM
To: Abeeha Aqeel
Subject: RE: VPP PPPoE Plugin

Dear Abeeha,

(PFA = Please find attached)

Code:
Branch: stable/1901
Patch: PFA: stable_1901_patch.txt

Configuration:
PFA: vpp-startup.conf

Steps for setting up:
PFA: setup_steps.bash

We attempted to make 10,000 connections, using API calls from our control-plane application to create the PPPoE sessions.
Final output from the control plane before VPP crashed:
PFA: no_of_connections.png

VPP log-file:
PFA: vpp.log

Dump from journalctl:
PFA: journal_dump.txt

As you can see from the journalctl output, the code crashes on the assertion 
‘node_index < (1 << 14)’. As the attached patch shows, we raised the original 
limit from 1 << 10 to 1 << 14, because with the original assertion VPP crashes 
long before even 7,000 connections are up (at around 400 connections).

Warm Regards,

AbduSami bin Khurram
Software Design Engineer, xFlow Research Pvt. Ltd.
+92-331-5543190
abdusami.khur...@xflowresearch.com
www.xflowresearch.com

From: Ni, Hongjun
Sent: 17 April 2019 13:04
To: Abeeha Aqeel; vpp-dev@lists.fd.io
Cc: b...@xflowresearch.com
Subject: RE: VPP PPPoE Plugin

Hi Aqeel,

Could you send out the core dump log from when VPP crashes with 7,000 sessions?

Thanks,
Hongjun

From: Abeeha Aqeel [mailto:abeeha.aq...@xflowresearch.com] 
Sent: Tuesday, April 16, 2019 8:01 PM
To: Ni, Hongjun <hongjun...@intel.com>; vpp-dev@lists.fd.io
Cc: b...@xflowresearch.com
Subject: VPP PPPoE Plugin

Dear Hongjun,

I am trying to create 64,000 PPPoE sessions with the VPP PPPoE plugin. VPP is 
used as the forwarding plane, while our control plane separately handles the 
PPPoE control packets. VPP is installed on CentOS 7 on a bare-metal server. The 
current implementation of the plugin included in VPP stable 19.01 allows only 
7,000 sessions, and VPP crashes after that.

I tried to add the PPPoE plugin implemented by OpenBRAS 
(https://gerrit.fd.io/r/#/c/7407/) to VPP stable 19.01 
(https://github.com/FDio/vpp.git) with DPDK 18.11.0. VPP built and started 
successfully, but the PPPoE plugin did not show up in “vppctl show plugins”.
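
For reference, “show plugins” only lists a shared object that vlib has found in its configured plugin path and that exports a plugin registration. Below is a minimal sketch of the registration used by the in-tree 19.01 plugins (VLIB_PLUGIN_REGISTER); the file name and description are made up for illustration, and whether a missing registration, rather than the built .so simply not being installed into the plugin directory, is the cause here is only a guess.

/* Illustrative only: the registration a 19.01 plugin .so needs in order to
 * be loaded and listed by "show plugins". The file name is hypothetical. */
#include <vlib/vlib.h>
#include <vnet/plugin/plugin.h>
#include <vpp/app/version.h>

/* *INDENT-OFF* */
VLIB_PLUGIN_REGISTER () = {
  .version = VPP_BUILD_VER,
  .description = "PPPoE plugin (OpenBRAS port) - illustrative example",
};
/* *INDENT-ON* */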

Kindly suggest how to connect 64,000 PPPoE sessions with VPP.

Regards,
 
Abeeha Aqeel




Attachment: setup_steps.bash
Description: Binary data

Attachment: vpp-startup.conf
Description: Binary data

diff --git a/.gitignore b/.gitignore
index 7d46d3d..0dcf62b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -23,6 +23,7 @@
 /build-config.mk
 /build/external/*.tar.gz
 /build/external/*.tar.xz
+/build/external/vpp-*.rpm
 /build/external/vpp-*.deb
 /build/external/vpp-*.changes
 /build/external/downloads/
diff --git a/src/plugins/dpdk/CMakeLists.txt b/src/plugins/dpdk/CMakeLists.txt
index 45605ba..24962ba 100644
--- a/src/plugins/dpdk/CMakeLists.txt
+++ b/src/plugins/dpdk/CMakeLists.txt
@@ -11,6 +11,20 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+add_compile_definitions(RTE_SCHED_COLLECT_STATS)
+
+##############################################################################
+# macros
+##############################################################################
+macro(dpdk_find_library var name)
+  find_library(${var} NAMES ${name} ${ARGN})
+if (NOT ${var})
+  message(WARNING "-- ${name} library not found - dpdk_plugin disabled")
+  return()
+endif()
+    message(STATUS "DPDK plugin needs ${name} library - found at ${${var}}")
+endmacro()
+
 ##############################################################################
 # Find lib and include files
 ##############################################################################
diff --git a/src/plugins/dpdk/device/cli.c b/src/plugins/dpdk/device/cli.c
index 7e20f56..d933e7d 100644
--- a/src/plugins/dpdk/device/cli.c
+++ b/src/plugins/dpdk/device/cli.c
@@ -30,6 +30,8 @@
 
 #include <dpdk/device/dpdk_priv.h>
 
+#define RTE_SCHED_COLLECT_STATS
+
 /**
  * @file
  * @brief CLI for DPDK Abstraction Layer and pcap Tx Trace.
@@ -1848,6 +1850,7 @@ show_dpdk_hqos_queue_stats (vlib_main_t * vm, unformat_input_t * input,
   dpdk_device_t *xd;
   uword *p = 0;
   struct rte_eth_dev_info dev_info;
+  struct rte_pci_device *pci_dev;
   dpdk_device_config_t *devconf = 0;
   u32 qindex;
   struct rte_sched_queue_stats stats;
@@ -1893,14 +1896,16 @@ show_dpdk_hqos_queue_stats (vlib_main_t * vm, unformat_input_t * input,
   xd = vec_elt_at_index (dm->devices, hw->dev_instance);
 
   rte_eth_dev_info_get (xd->port_id, &dev_info);
-  if (dev_info.pci_dev)
+  pci_dev = dpdk_get_pci_device (&dev_info);
+
+  if (pci_dev)
     {                          /* bonded interface has no pci info */
       vlib_pci_addr_t pci_addr;
 
-      pci_addr.domain = dev_info.pci_dev->addr.domain;
-      pci_addr.bus = dev_info.pci_dev->addr.bus;
-      pci_addr.slot = dev_info.pci_dev->addr.devid;
-      pci_addr.function = dev_info.pci_dev->addr.function;
+      pci_addr.domain = pci_dev->addr.domain;
+      pci_addr.bus = pci_dev->addr.bus;
+      pci_addr.slot = pci_dev->addr.devid;
+      pci_addr.function = pci_dev->addr.function;
 
       p =
        hash_get (dm->conf->device_config_index_by_pci_addr, pci_addr.as_u32);
diff --git a/src/plugins/dpdk/hqos/hqos.c b/src/plugins/dpdk/hqos/hqos.c
index 1a8dd6d..f3ff0fa 100644
--- a/src/plugins/dpdk/hqos/hqos.c
+++ b/src/plugins/dpdk/hqos/hqos.c
@@ -100,7 +100,7 @@ static dpdk_device_config_hqos_t hqos_params_default = {
           .n_pipes_per_subport = 4096,
           .qsize = {64, 64, 64, 64},
           .pipe_profiles = NULL,       /* Set at config */
-          .n_pipe_profiles = 1,
+          .n_pipe_profiles = 2,
 
 #ifdef RTE_SCHED_RED
           .red_params = {
@@ -158,6 +158,17 @@ static struct rte_sched_pipe_params hqos_pipe_params_default = {
   .wrr_weights = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
 };
 
+static struct rte_sched_pipe_params hqos_pipe_params_5MB = {
+  .tb_rate = 5000000,          /* 5 MB/s token-bucket rate per pipe */
+  .tb_size = 1000000,
+  .tc_rate = {5000000,5000000,5000000,5000000},
+  .tc_period = 150 ,
+#ifdef RTE_SCHED_SUBPORT_TC_OV
+  .tc_ov_weight = 1,
+#endif
+  .wrr_weights = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
+};
+
 /***
  *
  * HQoS configuration
@@ -214,9 +225,11 @@ dpdk_device_config_hqos_default (dpdk_device_config_hqos_t * hqos)
   /* pipe */
   vec_add2 (hqos->pipe, pipe_params, hqos->port.n_pipe_profiles);
 
-  for (i = 0; i < vec_len (hqos->pipe); i++)
+  /*for (i = 0; i < vec_len (hqos->pipe); i++)
     memcpy (&pipe_params[i],
-           &hqos_pipe_params_default, sizeof (hqos_pipe_params_default));
+           &hqos_pipe_params_default, sizeof (hqos_pipe_params_default));*/
+  memcpy (&pipe_params[0], &hqos_pipe_params_default, sizeof (hqos_pipe_params_default));
+  memcpy (&pipe_params[1], &hqos_pipe_params_5MB, sizeof(hqos_pipe_params_5MB));
 
   hqos->port.pipe_profiles = hqos->pipe;
 
diff --git a/src/vlib/error.h b/src/vlib/error.h
index 5835251..6f4ccf4 100644
--- a/src/vlib/error.h
+++ b/src/vlib/error.h
@@ -58,7 +58,7 @@ vlib_error_get_code (vlib_error_t e)
 always_inline vlib_error_t
 vlib_error_set (u32 node_index, u32 code)
 {
-  ASSERT (node_index < (1 << 10));
+  ASSERT (node_index < (1 << 14));
   ASSERT (code < (1 << 6));
   return (node_index << 6) | code;
 }
diff --git a/src/vnet/ethernet/node.c b/src/vnet/ethernet/node.c
index 268b171..27b22b3 100755
--- a/src/vnet/ethernet/node.c
+++ b/src/vnet/ethernet/node.c
@@ -1124,11 +1124,11 @@ ethernet_input_inline (vlib_main_t * vm,
                  if (!ethernet_address_cast (e0->dst_address) &&
                      (hi->hw_address != 0) &&
                      !eth_mac_equal ((u8 *) e0, hi->hw_address))
-                   error0 = ETHERNET_ERROR_L3_MAC_MISMATCH;
+                   {}  //error0 = ETHERNET_ERROR_L3_MAC_MISMATCH;
                  if (!ethernet_address_cast (e1->dst_address) &&
                      (hi->hw_address != 0) &&
                      !eth_mac_equal ((u8 *) e1, hi->hw_address))
-                   error1 = ETHERNET_ERROR_L3_MAC_MISMATCH;
+                   {}  //error1 = ETHERNET_ERROR_L3_MAC_MISMATCH;
                  vlib_buffer_advance (b0, sizeof (ethernet_header_t));
                  determine_next_node (em, variant, 0, type0, b0,
                                       &error0, &next0);
@@ -1348,7 +1348,7 @@ ethernet_input_inline (vlib_main_t * vm,
                  if (!ethernet_address_cast (e0->dst_address) &&
                      (hi->hw_address != 0) &&
                      !eth_mac_equal ((u8 *) e0, hi->hw_address))
-                   error0 = ETHERNET_ERROR_L3_MAC_MISMATCH;
+                   {}  //error0 = ETHERNET_ERROR_L3_MAC_MISMATCH;
                  vlib_buffer_advance (b0, sizeof (ethernet_header_t));
                  determine_next_node (em, variant, 0, type0, b0,
                                       &error0, &next0);

Attachment: vpp.log
Description: Binary data

Apr 17 13:41:22 localhost.localdomain ./vpp[25179]: load_one_vat_plugin:67: Loaded plugin: mactime_test_plugin.so
Apr 17 13:41:22 localhost.localdomain ./vpp[25179]: load_one_vat_plugin:67: Loaded plugin: memif_test_plugin.so
Apr 17 13:41:22 localhost.localdomain ./vpp[25179]: load_one_vat_plugin:67: Loaded plugin: nat_test_plugin.so
Apr 17 13:41:22 localhost.localdomain ./vpp[25179]: load_one_vat_plugin:67: Loaded plugin: nsh_test_plugin.so
Apr 17 13:41:22 localhost.localdomain ./vpp[25179]: load_one_vat_plugin:67: Loaded plugin: nsim_test_plugin.so
Apr 17 13:41:22 localhost.localdomain ./vpp[25179]: load_one_vat_plugin:67: Loaded plugin: pppoe_test_plugin.so
Apr 17 13:41:22 localhost.localdomain ./vpp[25179]: load_one_vat_plugin:67: Loaded plugin: stn_test_plugin.so
Apr 17 13:41:22 localhost.localdomain ./vpp[25179]: load_one_vat_plugin:67: Loaded plugin: vmxnet3_test_plugin.so
Apr 17 13:41:23 localhost.localdomain ./vpp[25179]: dpdk: EAL init args: -c fe -n 4 --in-memory --file-prefix vpp -w 0000:05:00.0 -w 0000:05:
Apr 17 13:41:23 localhost.localdomain kernel: igb_uio 0000:05:00.0: irq 61 for MSI/MSI-X
Apr 17 13:41:23 localhost.localdomain kernel: igb_uio 0000:05:00.0: uio device registered with irq 61
Apr 17 13:41:23 localhost.localdomain kernel: igb_uio 0000:05:00.1: irq 62 for MSI/MSI-X
Apr 17 13:41:23 localhost.localdomain kernel: igb_uio 0000:05:00.1: uio device registered with irq 62
Apr 17 13:41:24 localhost.localdomain vnet[25179]: dpdk_ipsec_process:1010: not enough DPDK crypto resources, default to OpenSSL
Apr 17 13:41:31 localhost.localdomain NetworkManager[11210]: <info>  [1555490491.6239] manager: (tap0): new Tun device (/org/freedesktop/Netw
Apr 17 13:41:33 localhost.localdomain avahi-daemon[11155]: Registering new address record for fe80::c8f4:b9ff:fe32:40bd on tap0.*.
Apr 17 13:50:01 localhost.localdomain systemd[1]: Started Session 2709 of user root.
-- Subject: Unit session-2709.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-2709.scope has finished starting up.
--
-- The start-up result is done.
Apr 17 13:50:01 localhost.localdomain CROND[25689]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Apr 17 13:54:49 localhost.localdomain vnet[25179]: /root/vpp/src/vlib/error.h:61 (vlib_error_set) assertion `node_index < (1 << 14)' fails
Apr 17 13:54:49 localhost.localdomain vnet[25179]: received signal SIGABRT, PC 0x7f9c5639e207
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #0  0x00007f9c57d554b3 unix_signal_handler + 0x24c
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #1  0x00007f9c5764c5d0 0x7f9c5764c5d0
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #2  0x00007f9c5639e207 gsignal + 0x37
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #3  0x00007f9c5639f8f8 abort + 0x148
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #4  0x0000000000407811 vhost_user_unmap_all + 0x0
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #5  0x00007f9c5714923f debugger + 0x1c
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #6  0x00007f9c5714967a _clib_error + 0x2d2
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #7  0x00007f9c57d02d1e vlib_error_set + 0x63
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #8  0x00007f9c57d067db register_node + 0x1552
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #9  0x00007f9c57d069cb vlib_register_node + 0x32
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #10 0x00007f9c582fa3b3 vnet_register_interface + 0x118d
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #11 0x00007f9c10825ca6 vnet_pppoe_add_del_session + 0x7ec
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #12 0x00007f9c10819ec8 vl_api_pppoe_add_del_session_t_handler + 0x140
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #13 0x00007f9c59250da7 vl_msg_api_handler_with_vm_node + 0x1b8
Apr 17 13:54:49 localhost.localdomain vnet[25179]: #14 0x00007f9c5921d15e void_mem_api_handle_msg_i + 0x59
Apr 17 13:54:50 localhost.localdomain abrt-hook-ccpp[25743]: Process 25179 (vpp) of user 0 killed by SIGABRT - dumping core
