Hi,

Thanks for the patch. Could you also help me understand the use of
ip6_input and the vnet_feature_arc_start function?

In the ip6_input function, "next0" and "next1" are set to
IP6_INPUT_NEXT_LOOKUP_MULTICAST/IP6_INPUT_NEXT_LOOKUP before
vnet_feature_arc_start is invoked. When the interface features are not
initialized appropriately, as in the case of this bug, we end up with
vnet_have_features returning 0, so next0 and next1 do NOT get updated:
they remain LOOKUP_MULTICAST/LOOKUP and the packets are processed. Is
that correct? Should next0 and next1 instead be set to
IP6_INPUT_NEXT_DROP first, so that arc0 and arc1 lead to the
appropriate update of next0 and next1 within the vnet_feature_arc_start
call when the interface has the feature, and otherwise the packet is
error-dropped? That way buffers would not be handled by code with
uninitialized data structures and tables, leading to a crash.

Please see the relevant lines of the ip6_input function below.

          sw_if_index0 = vnet_buffer (p0)->sw_if_index[VLIB_RX];
          sw_if_index1 = vnet_buffer (p1)->sw_if_index[VLIB_RX];

          if (PREDICT_FALSE (ip6_address_is_multicast (&ip0->dst_address)))
            {
              arc0 = lm->mcast_feature_arc_index;
              next0 = IP6_INPUT_NEXT_LOOKUP_MULTICAST;
            }
          else
            {
              arc0 = lm->ucast_feature_arc_index;
              next0 = IP6_INPUT_NEXT_LOOKUP;
            }

          if (PREDICT_FALSE (ip6_address_is_multicast (&ip1->dst_address)))
            {
              arc1 = lm->mcast_feature_arc_index;
              next1 = IP6_INPUT_NEXT_LOOKUP_MULTICAST;
            }
          else
            {
              arc1 = lm->ucast_feature_arc_index;
              next1 = IP6_INPUT_NEXT_LOOKUP;
            }

          vnet_buffer (p0)->ip.adj_index[VLIB_RX] = ~0;
          vnet_buffer (p1)->ip.adj_index[VLIB_RX] = ~0;

          vnet_feature_arc_start (arc0, sw_if_index0, &next0, p0);
          vnet_feature_arc_start (arc1, sw_if_index1, &next1, p1);
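
To make the proposal concrete, the change I am asking about would look
roughly like this (an illustrative sketch only, not a tested patch; note
it would also drop packets on interfaces that legitimately have no input
features configured, which is part of my question):

          /* Sketch: default both next indices to DROP so a packet is
             forwarded only when vnet_feature_arc_start redirects it to
             a configured feature. */
          next0 = IP6_INPUT_NEXT_DROP;
          next1 = IP6_INPUT_NEXT_DROP;

          vnet_feature_arc_start (arc0, sw_if_index0, &next0, p0);
          vnet_feature_arc_start (arc1, sw_if_index1, &next1, p1);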


Thanks
Rupesh

On Fri, Feb 22, 2019 at 6:13 PM Neale Ranns (nranns) <nra...@cisco.com>
wrote:

>
>
> Hi Rupesh,
>
>
>
> Thank you for spending the time to investigate. I have pushed a patch to
> fix the issue, and another found, after removing the cast that was masking
> the problem.
>
>   https://gerrit.fd.io/r/#/c/17777/
>
>
>
> Regards,
>
> neale
>
>
>
> *From: *Rupesh Raghuvaran <rupesh.raghuva...@gmail.com>
> *Date: *Friday, February 22, 2019 at 13:13
> *To: *"Neale Ranns (nranns)" <nra...@cisco.com>
> *Cc: *"vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
> *Subject: *Re: [vpp-dev] VPP coredump on handling IPv6 mDNS packets on
> interface with ipv6 not enabled explicitly
>
>
>
> Hi Neale,
>
>
>
> Thanks for  all the help.
>
>
>
> I was able to root-cause the issue occurring with my private build.
> There was a problem with one of the sw_interface_add_del functions
> invoked via call_elf_section_interface_callbacks during interface
> creation: vnet_arp_delete_sw_interface was defined with a void return
> type, but call_elf_section_interface_callbacks expects the callbacks to
> return a clib_error_t *. For some reason the execution failed with a
> non-zero error value in the build using gcc 7.3 built from the Yocto
> framework. After fixing this code I was able to get the features
> enabled as required. Even though most of the callbacks have no error
> paths and return 0, it would be good to have some logging within
> call_elf_section_interface_callbacks to take note of any failure due to
> an error.
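>
> For reference, the fix in my tree was along these lines; this is a
> sketch of the shape, not the exact diff. The callback gets the
> clib_error_t * return type that call_elf_section_interface_callbacks
> expects, and returns 0 (no error) on success:
>
>   static clib_error_t *
>   vnet_arp_delete_sw_interface (vnet_main_t * vnm, u32 sw_if_index,
>                                 u32 is_add)
>   {
>     if (!is_add)
>       {
>         /* remove the ARP entries learned on this interface */
>       }
>     return /* no error */ 0;
>   }
>
>   VNET_SW_INTERFACE_ADD_DEL_FUNCTION (vnet_arp_delete_sw_interface);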
>
>
>
> Thanks
>
> Rupesh
>
>
>
> On Thu, Feb 21, 2019 at 9:45 PM Neale Ranns (nranns) <nra...@cisco.com>
> wrote:
>
>
>
> Hi Rupesh,
>
>
>
> Those features are enabled by default on interfaces that are newly created.
>
>
>
> DBGvpp# loop cre
>
> loop0
>
> DBGvpp# set int state loop0 up
>
> DBGvpp# sh int feat loop0
>
> Feature paths configured on loop0...
>
> …
>
> ip6-multicast:
>
>   ip6-not-enabled
>
>
>
> ip6-unicast:
>
>   ip6-not-enabled
>
>
>
> DBGvpp# create bridge 1
>
> DBGvpp# set int l2 bridge loop0 1 bvi
>
> DBGvpp# sh int feat loop0
>
> Feature paths configured on loop0...
>
>
>
> ip6-multicast:
>
>   ip6-not-enabled
>
>
>
> ip6-unicast:
>
>   ip6-not-enabled
>
>
>
>
>
> Add and remove an address:
>
>
>
> DBGvpp# set int ip address loop0 2001::1/64
>
> DBGvpp# sh int feat loop0
>
> Feature paths configured on loop0...
>
> …
>
> ip6-multicast:
>
>
>
> ip6-unicast:
>
>
>
> DBGvpp# set int ip address del loop0 2001::1/64
>
> DBGvpp# sh int feat loop0
>
> Feature paths configured on loop0...
>
> …
>
> ip6-multicast:
>
>   ip6-not-enabled
>
>
>
> ip6-unicast:
>
>   ip6-not-enabled
>
>
>
>
>
> regards,
>
> neale
>
>
>
>
>
> *From: *Rupesh Raghuvaran <rupesh.raghuva...@gmail.com>
> *Date: *Thursday, February 21, 2019 at 16:23
> *To: *"Neale Ranns (nranns)" <nra...@cisco.com>
> *Cc: *"vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
> *Subject: *Re: [vpp-dev] VPP coredump on handling IPv6 mDNS packets on
> interface with ipv6 not enabled explicitly
>
>
>
> Hi Neale,
>
> I do not see the ip6-not-enabled feature enabled on the interface at
> creation; show interface features shows the ip4/ip6 unicast/multicast
> arcs as "none configured" when the interface is created.
> Is this the default state in which an interface is expected to come up?
> How does one bring a newly created loopback into the desired
> ip6-not-enabled feature state on the ip6-unicast/ip6-multicast arcs?
> Using the CLI command "set interface feature <intfc> <feature-name>
> arc <arc-name>" I was able to bring the interface into that specific
> state, but I could not find the API that does the same.
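>
> What I was hoping for is something of this shape (a sketch based on
> the feature arc code; I have not verified that this is the intended
> public API):
>
>   #include <vnet/feature/feature.h>
>
>   /* Programmatic equivalent (sketch) of:
>      "set interface feature <intfc> ip6-not-enabled arc ip6-unicast" */
>   static int
>   disable_ip6_unicast_on_interface (u32 sw_if_index)
>   {
>     return vnet_feature_enable_disable ("ip6-unicast", "ip6-not-enabled",
>                                         sw_if_index, 1 /* enable */,
>                                         0 /* feature_config */,
>                                         0 /* n_feature_config_bytes */);
>   }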
>
> Thanks
> Rupesh
>
>
>
> On Wed, Feb 20, 2019 at 12:02 AM Neale Ranns (nranns) <nra...@cisco.com>
> wrote:
>
>
>
> Hi Rupesh,
>
>
>
> Interfaces that are not ip6 enabled show these features enabled:
>
>
>
> ip6-multicast:
>
>   ip6-not-enabled
>
>
>
> ip6-unicast:
>
>   ip6-not-enabled
>
>
>
> it’s the ip6-not-enabled node/feature that is enabled on the interface as
> an input feature that drops the packets.
>
>
>
> /neale
>
>
>
> *From: *<vpp-dev@lists.fd.io> on behalf of Rupesh Raghuvaran <
> rupesh.raghuva...@gmail.com>
> *Date: *Tuesday, February 19, 2019 at 18:06
> *To: *"vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
> *Subject: *[vpp-dev] VPP coredump on handling IPv6 mDNS packets on
> interface with ipv6 not enabled explicitly
>
>
>
> I missed doing a reply-all; adding vpp-dev back.
>
>
>
> Thanks
>
> Rupesh
>
> ---------- Forwarded message ---------
> From: *Rupesh Raghuvaran* <rupesh.raghuva...@gmail.com>
> Date: Tue, Feb 19, 2019 at 10:02 PM
> Subject: Re: [vpp-dev] VPP coredump on handling IPv6 mDNS packets on
> interface with ipv6 not enabled explicitly
> To: Neale Ranns (nranns) <nra...@cisco.com>
>
>
>
> Hi Neale,
>
>
>
> I could not spot the specific code in ip6-input that drops the packet
> if the rx interface is not IPv6 enabled. Could you please point me to it?
>
>
>
>
>
> Please find the requested information below
>
>
>
> DBGvpp# show interface loop1
>
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
> loop1                             5      up          9000/0/0/0     rx packets                  2617
>                                                                     rx bytes                  130415
>                                                                     tx packets                  2068
>                                                                     tx bytes                  111686
>                                                                     drops                       3245
>                                                                     punt                         170
>                                                                     ip4                          240
>
> DBGvpp# show interface address
>
> GigabitEthernet0/14/0 (up):
>
>   L2 bridge bd-id 1 idx 1 shg 0
>
> GigabitEthernet0/14/1 (dn):
>
> GigabitEthernet0/14/2 (up):
>
>   L2 bridge bd-id 1 idx 1 shg 0
>
> local0 (dn):
>
> loop1 (up):
>
>   L2 bridge bd-id 1 idx 1 shg 0 bvi
>
>   L3 192.168.10.128/24
>
> tuntap-0 (up):
>
> DBGvpp#
>
> DBGvpp# sh ip6 interface
>
> DBGvpp# sh ip6 interface loop1
>
> show ip6 interface: IPv6 not enabled on interface
>
> DBGvpp# show int feat loop1
>
> Feature paths configured on loop1...
>
>
>
> nsh-output:
>
>   none configured
>
>
>
> mpls-output:
>
>   none configured
>
>
>
> mpls-input:
>
>   mpls-not-enabled
>
>
>
> ip6-drop:
>
>   none configured
>
>
>
> ip6-punt:
>
>   none configured
>
>
>
> ip6-local:
>
>   none configured
>
>
>
> ip6-output:
>
>   none configured
>
>
>
> ip6-multicast:
>
>   none configured
>
> ip6-unicast:
>
>   none configured
>
>
>
> ip4-drop:
>
>   none configured
>
>
>
> ip4-punt:
>
>   none configured
>
>
>
> ip4-local:
>
>   none configured
>
>
>
> ip4-output:
>
>   none configured
>
>
>
> ip4-multicast:
>
>   none configured
>
>
>
> ip4-unicast:
>
>
>
> l2-output-nonip:
>
>   none configured
>
>
>
> l2-input-nonip:
>
>   none configured
>
>
>
> l2-output-ip6:
>
>   none configured
>
>
>
> l2-input-ip6:
>
>   none configured
>
>
>
> l2-output-ip4:
>
>   none configured
>
>
>
> l2-input-ip4:
>
>   none configured
>
>
>
> ethernet-output:
>
>   none configured
>
>
>
> interface-output:
>
>   none configured
>
>
>
> device-input:
>
>   none configured
>
>
>
> l2-input:
>
>               FWD (l2-fwd)
>
>          UU_FLOOD (l2-flood)
>
>             FLOOD (l2-flood)
>
>
>
> l2-output:
>
>            OUTPUT (interface-output)
>
> DBGvpp#
>
>
>
>
>
>
>
> Thanks
>
> Rupesh
>
>
>
> On Tue, Feb 19, 2019 at 9:43 PM Neale Ranns (nranns) <nra...@cisco.com>
> wrote:
>
> Hi Rupesh,
>
>
>
> An IPv6 packet arriving on an interface that is not IPv6 enabled should be
> dropped in ip6-input.
>
>
>
> Can you please show me:
>
>   sh int feat loop0
>
>   sh ip6 interface loop0
>
>
>
> local0 is a special case. Think of it as a means for VPP to consume
> the ID 0, so that we can be sure no other interface can use that ID.
> No packets should be transmitted or received on local0.
>
>
>
> /neale
>
>
>
>
>
>
>
> *From: *<vpp-dev@lists.fd.io> on behalf of Rupesh Raghuvaran <
> rupesh.raghuva...@gmail.com>
> *Date: *Tuesday, February 19, 2019 at 16:06
> *To: *"vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
> *Subject: *[vpp-dev] VPP coredump on handling IPv6 mDNS packets on
> interface with ipv6 not enabled explicitly
>
>
>
> Hi,
>
>
>
> The VPP core dump was observed with the following trace; it occurs when
> an mDNS IPv6 packet is handled in mfib_forward_lookup. In the lookup,
> the access to ip6_main.mfib_index_by_sw_if_index[sw_if_index] results
> in an assertion failure because mfib_index_by_sw_if_index is not yet
> initialized. Note that the sw_if_index is that of a loopback interface
> created and added to the bridge domain, and that IPv6 is not explicitly
> enabled on this loopback interface. The loopback interface VRF is set
> to the default 0 using the set_table API with is_ipv6 set to 0.
> Earlier, without the interface set_table setting, enabling the DHCP
> client on the interface used to result in an assertion on
> ip4_main.fib_index_by_sw_if_index in the ip4_lookup_inline function.
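>
> For context, the failing access appears to be of this shape
> (paraphrased from mfib_forward.c per the backtrace below, not an exact
> quote of the source):
>
>   /* vec_elt() ASSERTs that the index lies inside the vector; with
>      ip6_main.mfib_index_by_sw_if_index still NULL (vec_len == 0, see
>      the gdb output below) the assertion fires for any sw_if_index. */
>   fib_index0 = vec_elt (ip6_main.mfib_index_by_sw_if_index,
>                         vnet_buffer (p0)->sw_if_index[VLIB_RX]);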
>
>
>
> My understanding so far is that ip6 is enabled unconditionally for the
> "local0" interface, using ip6_sw_interface_enable_disable from
> vnet_main_init. This leads to ip6-input feature processing being
> enabled. But note that this path does not invoke ip6_add_del_address
> for local0, which would set up the FIB and mFIB and invoke vec_validate
> on mfib_index_by_sw_if_index to initialize the vector.
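>
> In other words, what seems to be missing for local0 is the vector
> sizing that ip6_add_del_address would otherwise perform; conceptually
> something like this (a sketch of the initialization I mean, not a
> proposed patch):
>
>   ip6_main_t *im = &ip6_main;
>   /* size the per-sw_if_index tables so that later lookups do not
>      index an unallocated (NULL) vector */
>   vec_validate (im->fib_index_by_sw_if_index, sw_if_index);
>   vec_validate (im->mfib_index_by_sw_if_index, sw_if_index);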
>
>
>
> I am not able to find any code in ip6-input that would look up
> ip6_main.ip_enabled_by_sw_if_index[sw_if_index] before passing the
> frame on for lookup.
>
>
>
> Is there any mechanism currently to process the buffers only if the
> interface has IPv6 enabled, and otherwise drop them stating that IPv6
> is not enabled on the interface?
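>
> To illustrate, I was looking for a guard of roughly this shape (a
> purely hypothetical sketch; the helper name is mine, the field name is
> taken from the gdb session below):
>
>   static_always_inline int
>   ip6_rx_interface_enabled (ip6_main_t * im, u32 sw_if_index)
>   {
>     /* the vec_len check also protects against an unsized vector */
>     if (sw_if_index >= vec_len (im->ip_enabled_by_sw_if_index))
>       return 0;
>     return im->ip_enabled_by_sw_if_index[sw_if_index] != 0;
>   }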
>
>
>
> Should the local0 interface have ip4/ip6 enabled implicitly, or should
> this be driven by some configuration?
>
>
>
> The device has two Ethernet interfaces, bound to VPP by specifying
> their PCI IDs in startup.conf, which are added to bridge-domain 1; an
> additional loopback interface is added for L3. Please see the show
> interface and show bridge-domain output below.
>
>
>
> Please see the stack trace and other details below. Looking for valuable
> inputs regarding this. Thanks in advance.
>
>
>
> Thanks
>
> Rupesh
>
>
>
>
>
> (gdb) core-file vpp_main_core_6_20691
>
> [New LWP 20691]
>
> [New LWP 20693]
>
> Core was generated by `/usr/bin/vpp -c /etc/vpp/startup.conf'.
>
> Program terminated with signal SIGABRT, Aborted.
>
> #0  0x0000003b00833bff in ?? ()
>
> [Current thread is 1 (LWP 20691)]
>
> (gdb) set solib-search-path lib:usr/lib
>
> warning: Unable to find libthread_db matching inferior's thread library,
> thread debugging will not be available.
>
> (gdb) bt
>
> #0  __GI_raise (sig=sig@entry=6) at /usr/src/debug/glibc/2.26-r0/git/sysdeps/unix/sysv/linux/raise.c:51
> #1  0x0000003b00834fe7 in __GI_abort () at /usr/src/debug/glibc/2.26-r0/git/stdlib/abort.c:90
> #2  0x00000000004077cb in os_exit (code=1) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vpp/vnet/main.c:359
> #3  0x0000003b032acbd9 in unix_signal_handler (signum=6, si=0x7f57ef5ff470, uc=0x7f57ef5ff340) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/unix/main.c:156
> #4  <signal handler called>
> #5  __GI_raise (sig=sig@entry=6) at /usr/src/debug/glibc/2.26-r0/git/sysdeps/unix/sysv/linux/raise.c:51
> #6  0x0000003b00834fe7 in __GI_abort () at /usr/src/debug/glibc/2.26-r0/git/stdlib/abort.c:90
> #7  0x0000000000407786 in os_panic () at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vpp/vnet/main.c:335
> #8  0x0000003b01c3a33d in debugger () at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vppinfra/error.c:84
> #9  0x0000003b01c3a70c in _clib_error (how_to_die=2, function_name=0x0, line_number=0, fmt=0x3b046d8bb0 "%s:%d (%s) assertion `%s' fails") at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vppinfra/error.c:143
> #10 0x0000003b0442315e in mfib_forward_lookup (vm=0x3b034dc300 <vlib_global_main>, node=0x7f57ef1fef40, frame=0x7f57ef21a780, is_v4=0) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vnet/mfib/mfib_forward.c:165
> #11 0x0000003b04423405 in ip6_mfib_forward_lookup (vm=0x3b034dc300 <vlib_global_main>, node=0x7f57ef1fef40, frame=0x7f57ef21a780) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vnet/mfib/mfib_forward.c:216
> #12 0x0000003b03257314 in dispatch_node (vm=0x3b034dc300 <vlib_global_main>, node=0x7f57ef1fef40, type=VLIB_NODE_TYPE_INTERNAL, dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7f57ef21a780, last_time_stamp=826527506884080) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/main.c:1201
> #13 0x0000003b03257aac in dispatch_pending_node (vm=0x3b034dc300 <vlib_global_main>, pending_frame_index=7, last_time_stamp=826527506884080) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/main.c:1368
> #14 0x0000003b0325944c in vlib_main_or_worker_loop (vm=0x3b034dc300 <vlib_global_main>, is_main=1) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/main.c:1793
> #15 0x0000003b03259cfa in vlib_main_loop (vm=0x3b034dc300 <vlib_global_main>) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/main.c:1895
> #16 0x0000003b0325a8b9 in vlib_main (vm=0x3b034dc300 <vlib_global_main>, input=0x7f57ef5fffa0) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/main.c:2084
> #17 0x0000003b032ae125 in thread0 (arg=253458498304) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/unix/main.c:606
> #18 0x0000003b01c5a274 in clib_calljmp () at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vppinfra/longjmp.S:123
> #19 0x00007ffc9ad708e0 in ?? ()
> #20 0x0000003b032ae5a0 in vlib_unix_main (argc=37, argv=0x90a4d0) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/unix/main.c:675
> #21 0x00000000004072c6 in main (argc=37, argv=0x90a4d0) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vpp/vnet/main.c:274
>
> (gdb) frame 10
>
> #10 0x0000003b0442315e in mfib_forward_lookup (vm=0x3b034dc300 <vlib_global_main>, node=0x7f57ef1fef40, frame=0x7f57ef21a780, is_v4=0) at /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vnet/mfib/mfib_forward.c:165
>
> 165       /home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vnet/mfib/mfib_forward.c: No such file or directory.
>
> (gdb) info locals
>
> ip0 = 0x3b03254dd8 <vlib_node_runtime_update_stats+357>
>
> mfei0 = 32599
>
> p0 = 0x7f53eed90680
>
> fib_index0 = 0
>
> pi0 = 549914
>
> n_left_from = 0
>
> n_left_to_next = 255
>
> from = 0x7f57ef21a794
>
> to_next = 0x7f57ef21ac14
>
> __FUNCTION__ = "mfib_forward_lookup"
>
> (gdb) p/x (ethernet_header_t *)p0->data
>
> $3 = 0x7f53eed90780
>
> (gdb) p/x *(ethernet_header_t *)p0->data
>
> $4 = {dst_address = {0x33, 0x33, 0x0, 0x0, 0x0, 0xfb}, src_address = {0xb4, 0x96, 0x91, 0x6, 0x35, 0x3}, type = 0xdd86}
>
> (gdb)
>
> $5 = {dst_address = {0x33, 0x33, 0x0, 0x0, 0x0, 0xfb}, src_address = {0xb4, 0x96, 0x91, 0x6, 0x35, 0x3}, type = 0xdd86}
>
> (gdb) p/x *(ip6_header_t *)(p0->data+p0->current_data)
>
> $6 = {ip_version_traffic_class_and_flow_label = 0x56070160, payload_length = 0x3500, protocol = 0x11, hop_limit = 0xff,
>   src_address = {as_u8 = {0xfe, 0x80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xb6, 0x96, 0x91, 0xff, 0xfe, 0x6, 0x35, 0x3},
>     as_u16 = {0x80fe, 0x0, 0x0, 0x0, 0x96b6, 0xff91, 0x6fe, 0x335}, as_u32 = {0x80fe, 0x0, 0xff9196b6, 0x33506fe},
>     as_u64 = {0x80fe, 0x33506feff9196b6}, as_uword = {0x80fe, 0x33506feff9196b6}},
>   dst_address = {as_u8 = {0xff, 0x2, 0x0 <repeats 13 times>, 0xfb}, as_u16 = {0x2ff, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xfb00},
>     as_u32 = {0x2ff, 0x0, 0x0, 0xfb000000}, as_u64 = {0x2ff, 0xfb00000000000000}, as_uword = {0x2ff, 0xfb00000000000000}}}
>
> (gdb) p ip6_main->mfib_index_by_sw_if_index
>
> $7 = (u32 *) 0x0
>
> (gdb) p ip6_main->fib_index_by_sw_if_index
>
> $8 = (u32 *) 0x7f57ef20f39c
>
> (gdb) p ip6_main->fib_index_by_sw_if_index[0]
>
> $9 = 0
>
> (gdb) p ip6_main->ip_enabled_by_sw_if_index[0]
>
> $10 = 1 '\001'
>
> (gdb) p ip6_main->ip_enabled_by_sw_if_index[1]
>
> $11 = 0 '\000'
>
> (gdb) p ip6_main->ip_enabled_by_sw_if_index[5]
>
> $12 = 186 '\272'
>
> (gdb) p ip6_main
>
> $13 = {ip6_table = {{ip6_hash = {buckets = 0x7f53aa7f0000, alloc_lock =
> 0x7f53aa870000, working_copies = 0x0, working_copy_lengths = 0x0,
> saved_bucket = {{{
>
>               offset = 0, lock = 0, linear_search = 0, log2_pages = 0,
> refcnt = 0}, as_u64 = 0}}, nbuckets = 65536, log2_nbuckets = 16,
>
>         name = 0x3b0461648f "ip6 FIB fwding table", freelists =
> 0x7f57eefeb02c, sh = {alloc_arena_next = 524608, alloc_arena_size =
> 33554432,
>
>           alloc_lock_as_u64 = 0, buckets_as_u64 = 0, freelists_as_u64 = 0,
> nbuckets = 0, ready = 0, pad = {0, 0}}, alloc_arena = 139997319462912,
>
>         fmt_fn = 0x0}, non_empty_dst_address_length_bitmap =
> 0x7f57ef0d683c, prefix_lengths_in_search_order = 0x7f57eefd9adc "\n",
>
>       dst_address_length_refcounts = {1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0
> <repeats 118 times>}}, {ip6_hash = {buckets = 0x7f53a87f0000,
>
>         alloc_lock = 0x7f53a8870000, working_copies = 0x0,
> working_copy_lengths = 0x0, saved_bucket = {{{offset = 0, lock = 0,
> linear_search = 0,
>
>               log2_pages = 0, refcnt = 0}, as_u64 = 0}}, nbuckets = 65536,
> log2_nbuckets = 16, name = 0x3b046164a4 "ip6 FIB non-fwding table",
>
>         freelists = 0x7f57eefea3cc, sh = {alloc_arena_next = 524608,
> alloc_arena_size = 33554432, alloc_lock_as_u64 = 0, buckets_as_u64 = 0,
>
>           freelists_as_u64 = 0, nbuckets = 0, ready = 0, pad = {0, 0}},
> alloc_arena = 139997285908480, fmt_fn = 0x0},
>
>       non_empty_dst_address_length_bitmap = 0x7f57ef0d687c,
> prefix_lengths_in_search_order = 0x7f57eefd970c "\n",
> dst_address_length_refcounts = {1, 0, 0,
>
>         0, 0, 0, 0, 0, 0, 0, 1, 0 <repeats 118 times>}}}, ip6_mtable =
> {ip6_mhash = {buckets = 0x7f53a67f0000, alloc_lock = 0x7f53a6870000,
>
>       working_copies = 0x0, working_copy_lengths = 0x0, saved_bucket =
> {{{offset = 0, lock = 0, linear_search = 0, log2_pages = 0, refcnt = 0},
>
>           as_u64 = 0}}, nbuckets = 65536, log2_nbuckets = 16, name =
> 0x3b046164bd "ip6 mFIB table", freelists = 0x7f57eefbe90c, sh = {
>
>         alloc_arena_next = 525312, alloc_arena_size = 33554432,
> alloc_lock_as_u64 = 0, buckets_as_u64 = 0, freelists_as_u64 = 0, nbuckets =
> 0, ready = 0,
>
>         pad = {0, 0}}, alloc_arena = 139997252354048, fmt_fn = 0x0},
> non_empty_dst_address_length_bitmap = 0x7f57ef0d70ac,
>
>     prefix_lengths_in_search_order = 0x7f57eefd309c,
> dst_address_length_refcounts = {1, 0 <repeats 103 times>, 1, 0 <repeats 23
> times>, 3,
>
>       0 <repeats 128 times>}}, lookup_main = {if_address_pool = 0x0,
> address_to_if_address_index = {key_vector_or_heap = 0x0,
>
>       key_vector_free_indices = 0x0, key_tmps = 0x0, n_key_bytes = 20,
> hash_seed = 0, hash = 0x7f57ef0ce16c, format_key = 0x0},
>
>     if_address_pool_index_by_sw_if_index = 0x7f57ef20f3dc,
> classify_table_index_by_sw_if_index = 0x0, mcast_feature_arc_index = 7 '\a',
>
>     ucast_feature_arc_index = 8 '\b', output_feature_arc_index = 6 '\006',
> fib_result_n_bytes = 8, fib_result_n_words = 0, is_ip6 = 1,
>
>     format_address_and_length = 0x3b03d2a7eb
> <format_ip6_address_and_length>,
>
>     local_next_by_ip_protocol = '\001' <repeats 17 times>, "\002", '\001'
> <repeats 26 times>,
> "\004\001\001\006\001\001\001\001\001\001\001\001\001\001\003", '\001'
> <repeats 56 times>, "\005", '\001' <repeats 140 times>,
>
>     builtin_protocol_by_ip_protocol = '\002' <repeats 17 times>, "\000",
> '\002' <repeats 40 times>, "\001", '\002' <repeats 197 times>},
>
>   fibs = 0x7f57eeeeb2dc, v6_fibs = 0x7f57ef0d6780, mfibs = 0x7f57ef0c4900,
> fib_masks = {{as_u8 = '\000' <repeats 15 times>, as_u16 = {0, 0, 0, 0, 0,
> 0, 0,
>
> -----------snip---
>
>       as_u8 = '\377' <repeats 15 times>, "\376", as_u16 = {65535, 65535,
> 65535, 65535, 65535, 65535, 65535, 65279}, as_u32 = {4294967295, 4294967295,
>
>         4294967295, 4278190079}, as_u64 = {18446744073709551615,
> 18374686479671623679}, as_uword = {18446744073709551615,
> 18374686479671623679}}, {
>
> ---Type <return> to continue, or q <return> to quit---
>
>       as_u8 = '\377' <repeats 16 times>, as_u16 = {65535, 65535, 65535,
> 65535, 65535, 65535, 65535, 65535}, as_u32 = {4294967295, 4294967295,
> 4294967295,
>
>         4294967295}, as_u64 = {18446744073709551615,
> 18446744073709551615}, as_uword = {18446744073709551615,
> 18446744073709551615}}},
>
>   fib_index_by_sw_if_index = 0x7f57ef20f39c, mfib_index_by_sw_if_index =
> 0x0, ip_enabled_by_sw_if_index = 0x7f57eefb1a8c "\001",
>
>   fib_index_by_table_id = 0x7f57eeb5fd3c, mfib_index_by_table_id =
> 0x7f57eeff331c, interface_route_adj_index_by_sw_if_index = 0x0,
>
>   add_del_interface_address_callbacks = 0x7f57ef0d618c,
> table_bind_callbacks = 0x7f57eefea5ac, discover_neighbor_packet_template = {
>
>     packet_data = 0x7f57ef0d74bc "`", min_n_buffers_each_alloc = 8, name =
> 0x0}, lookup_table_nbuckets = 65536, lookup_table_size = 33554432,
>
>   flow_hash_seed = 3735928559, host_config = {ttl = 64 '@', pad =
> "\000\000"}, hbh_enabled = 0 '\000', nd_throttle = {time = 0.001,
>
>     bitmaps = 0x7f57ef1c9bcc, seeds = 0x7f57ef1cb59c,
> last_seed_change_time = 0x7f57ef1c9c8c}}
>
>
>
>
>
> DBGvpp# show ver
>
> vpp v19.04-rc0~178-g6fef74a built by  on rraghuvaran-dev at Thu Feb 14 14:20:07 IST 2019
>
> DBGvpp# show interface
>
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
> GigabitEthernet0/14/0             2      up          1500/0/0/0     rx packets                  1706
>                                                                     rx bytes                  113473
>                                                                     tx packets                   673
>                                                                     tx bytes                   41215
>                                                                     drops                       1531
>                                                                     tx-error                       2
> GigabitEthernet0/14/1             3     down         9000/0/0/0
> GigabitEthernet0/14/2             4      up          1500/0/0/0     tx-error                    2094
> local0                            0     down          0/0/0/0       drops                          1
> loop1                             5      up          9000/0/0/0     rx packets                  1706
>                                                                     rx bytes                   89589
>                                                                     tx packets                  1350
>                                                                     tx bytes                   83718
>                                                                     drops                       2098
>                                                                     punt                         127
>                                                                     ip4                          173
> tuntap-0                          1      up           0/0/0/0       rx packets                   627
>                                                                     rx bytes                   48233
>                                                                     tx packets                   127
>                                                                     tx bytes                   14755
>                                                                     drops                        558
>                                                                     ip4                          627
>
> DBGvpp# show interface address
>
> GigabitEthernet0/14/0 (up):
>
>   L2 bridge bd-id 1 idx 1 shg 0
>
> GigabitEthernet0/14/1 (dn):
>
> GigabitEthernet0/14/2 (up):
>
>   L2 bridge bd-id 1 idx 1 shg 0
>
> local0 (dn):
>
> loop1 (up):
>
>   L2 bridge bd-id 1 idx 1 shg 0 bvi
>
>   L3 192.168.10.128/24
>
> tuntap-0 (up):
>
> DBGvpp# show bridge-domain 1 detail
>
>   BD-ID   Index   BSN  Age(min)  Learning  U-Forwrd   UU-Flood   Flooding  ARP-Term   BVI-Intf
>     1       1      0     off        on        on       flood        on        off       loop1
>
>            Interface           If-idx ISN  SHG  BVI  TxFlood   VLAN-Tag-Rewrite
>              loop1               5     1    0    *      *            none
>      GigabitEthernet0/14/2       4     1    0    -      *            none
>      GigabitEthernet0/14/0       2     1    0    -      *            none
> DBGvpp#
>
>
>
> Following is the startup.conf:
>
>
>
>
>
> unix {
>
>     nodaemon
>
>     log /tmp/vpp.log
>
>     full-coredump
>
>     cli-listen localhost:5002
>
> }
>
>
>
> api-segment {
>
>    gid 0
>
> }
>
>
>
> tuntap {
>
>     enable
>
> }
>
>
>
> api-trace {
>
>   on
>
> }
>
> cpu {
>
>         skip-cores 4
>
> }
>
> dpdk {
>
>         dev 0000:00:14.0
>
>         dev 0000:00:14.1
>
>         dev 0000:00:14.2
>
> }
>
>