Re: [vpp-dev] How to check stack size of process node

2023-03-26 Thread chetan bhasin
Thanks a lot Dave!

On Sat, Mar 25, 2023, 19:45 Dave Barach  wrote:

> Find the current process index: vlib_node_main_t *nm = &vm->node_main;
> current_process_index = nm->current_process_index;
>
>
>
> Find the process object:   vlib_process_t *p = vec_elt (nm->processes,
> current_process_index);
>
>
>
> Find the stack base: p->stack
>
>
>
> Take the address of a local variable in a function at what you think is
> the maximum stack depth, and subtract. That’s the instantaneous stack
> depth.
>
>
>
> You can sprinkle “stack_depth_now = p->stack - &foo; if (stack_depth_now >
> max_stack_depth) max = now” cookies in various places.
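>
> Putting those steps together, a minimal sketch (record_stack_depth and
> max_stack_depth are illustrative names, not existing VPP symbols; depending
> on your VPP version you may need to flip the subtraction, since p->stack
> may point at the low end of the stack):
>
> static uword max_stack_depth;
>
> static void
> record_stack_depth (vlib_main_t *vm)
> {
>   vlib_node_main_t *nm = &vm->node_main;
>   vlib_process_t *p = vec_elt (nm->processes, nm->current_process_index);
>   u8 foo; /* the address of a local approximates the stack pointer */
>   uword stack_depth_now = (uword) p->stack - (uword) &foo;
>
>   if (stack_depth_now > max_stack_depth)
>     max_stack_depth = stack_depth_now;
> }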
>
>
>
> HTH...
>
>
>
> *From:* vpp-dev@lists.fd.io  *On Behalf Of *chetan
> bhasin
> *Sent:* Friday, March 24, 2023 9:24 AM
> *To:* vpp-dev 
> *Subject:* Re: [vpp-dev] How to check stack size of process node
>
>
>
> Hi,
>
>
>
> Please share any thoughts on this.
>
>
>
> Thanks,
>
>
>
> On Sat, Mar 18, 2023, 13:18 chetan bhasin via lists.fd.io
>  wrote:
>
> Hi vpp-team,
>
>
>
>
>
> Could you please suggest a way to calculate the stack size of a process
> node? We are creating a process node; how do we figure out the value of
> process_log2_n_stack_bytes?
>
>
>
> Thanks,
>
>
>
>
>
>
> 
>
>




Re: [vpp-dev] How to check stack size of process node

2023-03-24 Thread chetan bhasin
Hi,

Please share any thoughts on this.

Thanks,


On Sat, Mar 18, 2023, 13:18 chetan bhasin via lists.fd.io  wrote:

> Hi vpp-team,
>
>
> Could you please suggest a way to calculate the stack size of a process
> node? We are creating a process node; how do we figure out the value of
> process_log2_n_stack_bytes?
>
> Thanks,
>
>
> 
>
>




[vpp-dev] How to check stack size of process node

2023-03-18 Thread chetan bhasin
Hi vpp-team,


Could you please suggest a way to calculate the stack size of a process
node? We are creating a process node; how do we figure out the value of
process_log2_n_stack_bytes?
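
For reference, this value is supplied at node registration time. A minimal
sketch of a process-node registration (the node name, handler, and the
2^17-byte stack size are illustrative):

static uword
my_process_fn (vlib_main_t *vm, vlib_node_runtime_t *rt, vlib_frame_t *f)
{
  while (1)
    {
      /* wake up once per second; periodic work goes here */
      vlib_process_wait_for_event_or_clock (vm, 1.0);
      vlib_process_get_events (vm, NULL);
    }
  return 0;
}

VLIB_REGISTER_NODE (my_process_node) = {
  .function = my_process_fn,
  .type = VLIB_NODE_TYPE_PROCESS,
  .name = "my-process",
  /* 128 KiB stack; size this to the deepest call chain in the handler */
  .process_log2_n_stack_bytes = 17,
};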

Thanks,




Re: [vpp-dev] process node suspended indefinitely

2023-03-02 Thread chetan bhasin
Hi Sudhir,

Is your issue resolved?

Actually, we are facing the same issue on VPP 21.06.
In our case "api-rx-ring" is not getting called.
In our use case, workers call some functions in main-thread context via RPC
messages, and the memory for each RPC is allocated from the API segment.
The API-segment memory therefore fills up completely and leads to a crash.
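
The pattern in question looks roughly like this (my_handler and
my_rpc_args_t are our own illustrative names):

#include <vlibmemory/api.h>

typedef struct { u32 sw_if_index; } my_rpc_args_t;

static void
my_handler (void *arg) /* runs in main-thread context */
{
  my_rpc_args_t *a = arg;
  /* ... main-thread-only work using a->sw_if_index ... */
}

static void
worker_side (my_rpc_args_t *a)
{
  /* each RPC message is allocated from the API segment; if the main
     thread cannot drain them fast enough, the segment fills up */
  vl_api_rpc_call_main_thread (my_handler, (u8 *) a, sizeof (*a));
}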

Thanks,
Chetan


On Mon, Feb 20, 2023, 18:24 Sudhir CR via lists.fd.io  wrote:

> Hi Dave,
> Thank you very much for your inputs. I will try this out and get back to
> you with the results.
>
> Regards,
> Sudhir
>
> On Mon, Feb 20, 2023 at 6:01 PM Dave Barach  wrote:
>
>> Please try something like this, to eliminate the possibility that some
>> bit of code is sending this process an event. It’s not a good idea to skip
>> the vec_reset_length (event_data) step.
>>
>>
>>
>> while (1)
>>   {
>>     uword event_type, *event_data = 0;
>>     int i;
>>
>>     vlib_process_wait_for_event_or_clock (vm, 1e-2 /* 10 ms */);
>>
>>     event_type = vlib_process_get_events (vm, &event_data);
>>
>>     switch (event_type)
>>       {
>>       case ~0: /* handle timer expirations */
>>         rtb_event_loop_run_once ();
>>         break;
>>
>>       default: /* bug! */
>>         ASSERT (0);
>>       }
>>
>>     vec_reset_length (event_data);
>>   }
>>
>>
>>
>> *From:* vpp-dev@lists.fd.io  *On Behalf Of *Sudhir
>> CR via lists.fd.io
>> *Sent:* Monday, February 20, 2023 4:02 AM
>> *To:* vpp-dev@lists.fd.io
>> *Subject:* Re: [vpp-dev] process node suspended indefinitely
>>
>>
>>
>> Hi Dave,
>> Thank you for your response and help.
>>
>>
>>
>> Please find the additional details below.
>>
>> VPP Version *21.10*
>>
>>
>> We are creating a process node* rtb-vpp-epoll-process *to handle control
>> plane events like interface add/delete, route add/delete.
>> This process node waits for *10ms* of time (Not Interested in any events
>> ) once 10ms is expired it will process control plane events mentioned above.
>>
>> code snippet looks like below
>>
>>
>>
>> ```
>>
>> static uword
>> rtb_vpp_epoll_process (vlib_main_t *vm,
>>vlib_node_runtime_t  *rt,
>>vlib_frame_t *f)
>> {
>>
>> ...
>> ...
>> while (1) {
>> vlib_process_wait_for_event_or_clock (vm, 10e-3);
>> vlib_process_get_events (vm, NULL);
>>
>> rtb_event_loop_run_once ();   /* control-plane event handling */
>>
>> }
>> }
>> ```
>>
>> What we observed is that sometimes (when there is a high control-plane
>> load, e.g. a request to install many routes) "rtb-vpp-epoll-process" is
>> suspended and never scheduled again. We found this by using "show runtime
>> rtb-vpp-epoll-process": in that command's output the suspends counter
>> stops incrementing.
>>
>> Show runtime output in the working case:
>>
>> ```
>> DBGvpp# show runtime rtb-vpp-epoll-process
>>  Name                    State     Calls  Vectors  Suspends  Clocks  Vectors/Call
>> rtb-vpp-epoll-process    any wait  0      0        192246    1.91e6  0.00
>> DBGvpp#
>>
>> DBGvpp# show runtime rtb-vpp-epoll-process
>>  Name                    State     Calls  Vectors  Suspends  Clocks  Vectors/Call
>> rtb-vpp-epoll-process    any wait  0      0        193634    1.89e6  0.00
>> DBGvpp#
>> ```
>>
>> Show runtime output in the issue case:
>>
>> ```
>> DBGvpp# show runtime rtb-vpp-epoll-process
>>  Name                    State     Calls  Vectors  Suspends  Clocks  Vectors/Call
>> rtb-vpp-epoll-process    any wait  0      0        81477     7.08e6  0.00
>>
>> DBGvpp# show runtime rtb-vpp-epoll-process
>>  Name                    State     Calls  Vectors  Suspends  Clocks  Vectors/Call
>> rtb-vpp-epoll-process    any wait  0      0        81477     7.08e6  0.00
>> ```
>>
>> Other process nodes like lldp-process, ip4-neighbor-age-process and
>> ip6-ra-process are running without any issue; only the
>> "rtb-vpp-epoll-process" process node is suspended forever.
>>
>>
>>
>> Please let me know if any additional information is required.
>>
>> Hi Jinsh,
>> Thanks for pointing me to the issue you faced. The issue I am facing
>> looks similar.
>> I will verify with the given patch.
>>
>>
>> Thanks and Regards,
>>
>> Sudhir
>>
>>
>>
>> On Sun, Feb 19, 2023 at 6:19 AM jinsh11  wrote:
>>
>> Hi:
>>
>> I have the same problem: the bfd process node stops running. I raised this
>> issue at https://lists.fd.io/g/vpp-dev/message/22380
>> I think there is a problem with the process scheduling module when using
>> the time wheel.
>>
>>
>>
>>
>>

Re: [vpp-dev] Is Vpp21.06 compatible with Redhat8.5 ?

2022-06-22 Thread chetan bhasin
Hi Ben,

You mean we don't support CentOS anymore and only support Ubuntu?

Thanks,
Chetan


On Wed, Jun 22, 2022, 12:48 Benoit Ganne (bganne) via lists.fd.io  wrote:

> We do not run CentOS/opensource RHEL-equivalent anymore and I do not think
> anybody is looking at it.
> Chances are it is quietly bit-rotting...
>
> ben
>
> > -----Original Message-----
> > From: vpp-dev@lists.fd.io  On Behalf Of chetan
> bhasin
> > Sent: Wednesday, June 22, 2022 8:00
> > To: vpp-dev 
> > Subject: [vpp-dev] Is Vpp21.06 compatible with Redhat8.5 ?
> >
> > Hi ,
> >
> > We are trying to compile vpp21.06 over Redhat 8.5 . We are facing some
> > error in make install-dep. Is it compatible?
> >
> > Thanks,
> > CB
>
> 
>
>




[vpp-dev] Is Vpp21.06 compatible with Redhat8.5 ?

2022-06-21 Thread chetan bhasin
Hi ,

We are trying to compile vpp21.06 over Redhat 8.5 . We are facing some
error in make install-dep. Is it compatible?

Thanks,
CB




[vpp-dev] VPP2202 Facing compilation issue with out of tree plugin

2022-06-05 Thread chetan bhasin
Hi,

We are facing compilation issues while migrating from vpp_2106 to vpp_2202,
when compiling an out-of-tree plugin. Please suggest the correct approach.

vpp/build-root/install-vpp-native/vpp/include/vppinfra/memcpy_x86_64.h: In
function 'clib_memcpy_const_le32':
/home/mahesh-vpp/vpp-setup/vpp/build-root/install-vpp-native/vpp/include/vppinfra/memcpy_x86_64.h:118:7:
warning: implicit declaration of function 'clib_memcpy16'; did you mean
'clib_memcpy1'? [-Wimplicit-function-declaration]
  118 |   clib_memcpy16 (dst, src);

The reason for the compilation error: in src/vppinfra/memcpy_x86_64.h, the
function clib_memcpy16 is compiled under the CLIB_HAVE_VEC128 macro but is
used in some functions that are not guarded by that macro.

These macros are defined in the following include:

#include 

/* Vector types. */

#if defined (__aarch64__) && defined(__ARM_NEON) || defined (__i686__)
#define CLIB_HAVE_VEC128
#endif

#if defined (__SSE4_2__) && __GNUC__ >= 4
#define CLIB_HAVE_VEC128
#endif

#if defined (__ALTIVEC__)
#define CLIB_HAVE_VEC128
#endif

#if defined (__AVX2__)
#define CLIB_HAVE_VEC256
#if defined (__clang__)  && __clang_major__ < 4
#undef CLIB_HAVE_VEC256
#endif
#endif

#if defined (__AVX512BITALG__)
#define CLIB_HAVE_VEC512
#endif
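
A sketch of the kind of guard that avoids the implicit declaration
(copy16() is an illustrative wrapper, not an existing VPP symbol):

#include <vppinfra/string.h> /* clib_memcpy_fast; clib_memcpy16 on x86_64 */

static inline void
copy16 (void *dst, const void *src)
{
#if defined(CLIB_HAVE_VEC128)
  clib_memcpy16 (dst, src);        /* SIMD path, defined only with VEC128 */
#else
  clib_memcpy_fast (dst, src, 16); /* portable fallback */
#endif
}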




Re: [vpp-dev] VPP 2106 Main thread is 99% CPU

2022-05-22 Thread chetan bhasin
Hi,

As per GDB, the backtrace comes back with 214 frames:


Program received signal SIGINT, Interrupt.
0x7fa6d0f1fca4 in clib_time_now_internal (n=35626371070243907,
c=0x7fa30edf8680) at /vpp/src/vppinfra/time.h:215
215 /vpp/src/vppinfra/time.h: No such file or directory.
(gdb) bt
#0  0x7fa6d0f1fca4 in clib_time_now_internal (n=35626371070243907,
c=0x7fa30edf8680) at /vpp/src/vppinfra/time.h:215
#1  vlib_main_or_worker_loop (is_main=1, vm=) at
/vpp/src/vlib/main.c:3076
#2  vlib_main_loop (vm=) at /vpp/src/vlib/main.c:3108
#3  vlib_main (vm=, vm@entry=0x7fa30edf8680,
input=input@entry=0x7fa308277fa0) at /vpp/src/vlib/main.c:3399
#4  0x7fa6d0f71bc8 in thread0 (arg=140338305926784) at
/vpp/src/vlib/unix/main.c:671
#5  0x7fa6cf6298ec in clib_calljmp () at /vpp/src/vppinfra/longjmp.S:123
#6  0x7ffc98d63e90 in ?? ()



(gdb) bt
#0  clib_cpu_time_now () at /vpp/src/vppinfra/time.h:85
#1  vlib_main_or_worker_loop (is_main=1, vm=) at
/vpp/src/vlib/main.c:2939
#2  vlib_main_loop (vm=) at /vpp/src/vlib/main.c:3108
#3  vlib_main (vm=, vm@entry=0x7fa30edf8680,
input=input@entry=0x7fa308277fa0) at /vpp/src/vlib/main.c:3399
#4  0x7fa6d0f71bc8 in thread0 (arg=140338305926784) at
/vpp/src/vlib/unix/main.c:671
#5  0x7fa6cf6298ec in clib_calljmp () at /vpp/src/vppinfra/longjmp.S:123
#6  0x7ffc98d63e90 in ?? ()
#7  0x7fa6d0f73147 in vlib_unix_main (argc=,
argv=) at /vpp/src/vlib/unix/main.c:751
#8  0x in ?? ()
#9  0x in ?? ()
#10 0x in ?? ()
#11 0x02e0 in ?? ()

Program received signal SIGINT, Interrupt.
0x7fa6ceff7d47 in epoll_pwait () from /lib64/libc.so.6
(gdb) thr 1
[Switching to thread 1 (Thread 0x7fa6d0ed47c0 (LWP 1533))]
#0  0x7fa6ceff7d47 in epoll_pwait () from /lib64/libc.so.6
(gdb) bt
#0  0x7fa6ceff7d47 in epoll_pwait () from /lib64/libc.so.6
#1  0x7fa6d0f70b5d in linux_epoll_input_inline (thread_index=0,
frame=, node=, vm=0x7fa30edf8680)
at /vpp/src/vlib/unix/input.c:219
#2  linux_epoll_input (vm=0x7fa30edf8680, node=,
frame=) at /vpp/src/vlib/unix/input.c:365
#3  0x7fa6d0f1f7e1 in dispatch_node (last_time_stamp=,
frame=0x0, dispatch_state=VLIB_NODE_STATE_POLLING,
type=VLIB_NODE_TYPE_PRE_INPUT, node=0x7fa31189a300, vm=)
at /vpp/src/vlib/main.c:1024
#4  vlib_main_or_worker_loop (is_main=1, vm=) at
/vpp/src/vlib/main.c:2941
#5  vlib_main_loop (vm=) at /vpp/src/vlib/main.c:3108
#6  vlib_main (vm=, vm@entry=0x7fa30edf8680,
input=input@entry=0x7fa308277fa0) at /vpp/src/vlib/main.c:3399
#7  0x7fa6d0f71bc8 in thread0 (arg=140338305926784) at
/vpp/src/vlib/unix/main.c:671
#8  0x7fa6cf6298ec in clib_calljmp () at /vpp/src/vppinfra/longjmp.S:123
#9  0x7ffc98d63e90 in ?? ()
#10 0x7fa6d0f73147 in vlib_unix_main (argc=,
argv=) at /vpp/src/vlib/unix/main.c:751
#7  0x in ?? ()
#8  0x in ?? ()
#9  0x in ?? ()
#10 0x02e0 in ?? ()
#11 0x in ?? ()
#12 0x in ?? ()
#13 0x in ?? ()
#14 0x in ?? ()
#15 0x in ?? ()
...

#208 0x7eff27bec1dc in ?? ()
#209 0x001f003e in ?? ()
#210 0x6289f8e7 in ?? ()
#211 0x0e11 in ?? ()
#212 0x00010002 in ?? ()
#213 0x00020005 in ?? ()
#214 0x in ?? ()


On Sat, May 21, 2022 at 5:08 PM  wrote:

> Attach to the process in gdb, switch to the main thread (if necessary),
> and type “backtrace” or “bt”. You may need to type “continue” a few times
> to build a picture of what’s going on.
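>
> For example (a session sketch; the pid lookup and prompts are
> illustrative):
>
> $ gdb -p $(pidof vpp)
> (gdb) thread 1        # the main thread
> (gdb) bt
> (gdb) continue        # let it run, Ctrl-C after a moment, then 'bt' again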
>
>
>
>
>
> D.
>
>
>
> *From:* vpp-dev@lists.fd.io  *On Behalf Of *chetan
> bhasin
> *Sent:* Saturday, May 21, 2022 2:50 AM
> *To:* vpp-dev 
> *Subject:* [vpp-dev] VPP 2106 Main thread is 99% CPU
>
>
>
> Hello Everyone,
>
>
>
> We are running VPP 21.06 with our plugin, and we are observing a problem on
> a loaded setup where the CPU of the main thread suddenly reaches 99% and
> stays there even if we stop the load, although we don't have problems such
> as any of our own nodes polling on the main thread, and even the CLI is not
> blocking.
>
> Could you please provide any direction for debugging where the main thread
> is getting stuck?
>
>
>
> Thanks,
>
> Chetan
>




[vpp-dev] VPP 2106 Main thread is 99% CPU

2022-05-20 Thread chetan bhasin
Hello Everyone,

We are running VPP 21.06 with our plugin, and we are observing a problem on
a loaded setup where the CPU of the main thread suddenly reaches 99% and
stays there even if we stop the load, although we don't have problems such
as any of our own nodes polling on the main thread, and even the CLI is not
blocking.

Could you please provide any direction for debugging where the main thread
is getting stuck?


Thanks,

Chetan




Re: [vpp-dev] Query Regarding Bihash (VPP 2106)

2022-04-28 Thread chetan bhasin
Hi,

I have tried increasing the number of buckets to 4,997,120 for a 10M-entry
DB, but I still see the VPP RSZ memory keep growing even after 24 hours of
the test. Please provide suggestions.

Hash table 'v4_flow_hash'
10789283 active elements 6244300 active buckets
7 free lists
   [len 2] 230496 free elts
   [len 16] 2 free elts
12223 linear search buckets
heap: 7286 chunk(s) allocated
  bytes: used 1.48g, scrap 400.81k

After 12 hours:
Hash table 'v4_flow_hash'
10882016 active elements 6268886 active buckets
7 free lists
   [len 2] 917835 free elts
   [len 4] 52 free elts
   [len 8] 3 free elts
15488 linear search buckets
heap: 10779 chunk(s) allocated
  bytes: used 1.79g, scrap 802.44k

Thanks,
Chetan

On Thu, Apr 7, 2022 at 5:38 PM  wrote:

> Not enough buckets for the number of active key/value pairs. Buckets are
> cheap. Start with nBuckets = nActiveSessions / (BIHASH_KVP_PER_PAGE / 2) or
> some such and increase the number of buckets as necessary...
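>
> A sketch of that sizing rule at init time (using the 16_8 bihash template;
> the table name, entry count, and heap size are illustrative):
>
> #include <vppinfra/bihash_16_8.h>
>
> clib_bihash_16_8_t h;
> u64 n_active = 10000000;             /* expected active sessions */
> u32 n_buckets = n_active / (4 / 2);  /* BIHASH_KVP_PER_PAGE is 4 here */
> clib_bihash_init_16_8 (&h, "v4_flow_hash", n_buckets,
>                        2ULL << 30 /* heap size in bytes */);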
>
>
>
> HTH... Dave
>
>
>
> *From:* vpp-dev@lists.fd.io  *On Behalf Of *chetan
> bhasin
> *Sent:* Thursday, April 7, 2022 7:28 AM
> *To:* vpp-dev 
> *Subject:* [vpp-dev] Query Regarding Bihash (VPP 2106)
>
>
>
> Hi,
>
>
>
> We are using a bihash whose key is the 5-tuple of the packet. Different
> free lists are maintained, and once an entry is allocated and then removed
> from the bihash, that memory still remains with the bihash for future use.
>
> The problem we are seeing is consistent growth in the bihash memory even
> after 24 hours of traffic; the memory never settles at a stable level.
>
>
>
> Any suggestions here are really appreciated.
>
>
>
>
>
> With a 5-tuple key and fewer buckets:
>
> Hash table 'abc_v4_flow_hash'
>
> 5260666 active elements 131072 active buckets
>
> 11 free lists
>
>    [len 2] 50214 free elts
>    [len 4] 58542 free elts
>    [len 8] 69288 free elts
>    [len 16] 74395 free elts
>    [len 32] 77187 free elts
>    [len 64] 53558 free elts
>    [len 128] 1 free elts
>
> 1788 linear search buckets
>
> heap: 25363 chunk(s) allocated
>
>   bytes: used 2.33g, scrap 48.84m
>




[vpp-dev] Query Regarding Bihash (VPP 2106)

2022-04-07 Thread chetan bhasin
Hi,

We are using a bihash whose key is the 5-tuple of the packet. Different
free lists are maintained, and once an entry is allocated and then removed
from the bihash, that memory still remains with the bihash for future use.

The problem we are seeing is consistent growth in the bihash memory even
after 24 hours of traffic; the memory never settles at a stable level.

Any suggestions here are really appreciated.


With a 5-tuple key and fewer buckets:
Hash table 'abc_v4_flow_hash'
5260666 active elements 131072 active buckets
11 free lists
   [len 2] 50214 free elts
   [len 4] 58542 free elts
   [len 8] 69288 free elts
   [len 16] 74395 free elts
   [len 32] 77187 free elts
   [len 64] 53558 free elts
   [len 128] 1 free elts
1788 linear search buckets
heap: 25363 chunk(s) allocated
  bytes: used 2.33g, scrap 48.84m




Re: [vpp-dev] fail_over_mac=1 (Active) Bonding

2021-09-15 Thread chetan bhasin
Thanks a lot Steven!

On Wed, Sep 15, 2021 at 11:47 AM Steven Luong (sluong) 
wrote:

> Chetan,
>
>
>
> I have had a patch in gerrit for a long time, and I just rebased it to
> the latest master:
>
> https://gerrit.fd.io/r/c/vpp/+/30866
>
> Please feel free to test it thoroughly and let me know if you encounter
> any problem or not.
>
>
>
> Steven
>
>
>
> *From: * on behalf of chetan bhasin <
> chetan.bhasin...@gmail.com>
> *Date: *Tuesday, September 14, 2021 at 10:16 PM
> *To: *vpp-dev 
> *Subject: *[vpp-dev] fail_over_mac=1 (Active) Bonding
>
>
>
> Hi,
>
>
>
> We have a requirement to support bonding fail_over_mac=1 (active) in VPP.
> Are there any plans to implement this?
>
> If we want to implement it ourselves, could anybody please provide some
> direction?
>
>
>
> Thanks,
>
> Chetan
>
>
>
>
>




[vpp-dev] fail_over_mac=1 (Active) Bonding

2021-09-14 Thread chetan bhasin
Hi,

We have a requirement to support bonding fail_over_mac=1 (active) in VPP.
Are there any plans to implement this?

If we want to implement it ourselves, could anybody please provide some
direction?

Thanks,
Chetan




Re: [vpp-dev] VPP 2106 with Sanitizer enabled

2021-09-09 Thread chetan bhasin
Hi Benoit,

Thanks a lot!

We are not observing any crash in the VPP TCP host stack after applying the
second patch you provided, with a SANITIZER-enabled build.

Thanks,
Chetan



On Wed, Sep 8, 2021 at 9:41 PM chetan bhasin via lists.fd.io
 wrote:

> Thanks Benoit.
>
> Actually, the same out-of-tree plugins work fine with SANITIZER enabled
> on vpp 2101.
>
> I will update you with the results after applying your patch and running
> the test Florin suggested.
>
> Thanks & Regards,
> Chetan
>
> On Wed, Sep 8, 2021 at 8:13 PM Benoit Ganne (bganne) 
> wrote:
>
>> Here is maybe a better fix: https://gerrit.fd.io/r/c/vpp/+/33693
>> My previous changeset was only hiding the underlying issue which is
>> arrays passed to vlib_buffer_enqueue_to_next() must be big enough to allow
>> for some overflow (because of the use of clib_mask_compare_u16() and
>> clib_mask_compare_u32() that overflow for optimization reasons).
>> This is usually fine for your typical VPP node which allocates
>> buffer[VLIB_FRAME_SIZE] and nexts[VLIB_FRAME_SIZE] but not when using
>> vectors.
>> This patch should actually fix the issue (and tests should pass), but
>> there might be better ways.
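>>
>> A sketch of the constraint described above (illustrative only; the real
>> fix is the gerrit change, and round_pow2() comes from vppinfra):
>>
>> u32 count = vec_len (wrk->pending_tx_nexts);
>> if (count)
>>   {
>>     /* pad the vectors so the vectorized compares may read past 'count' */
>>     vec_validate (wrk->pending_tx_buffers, round_pow2 (count, 64) - 1);
>>     vec_validate (wrk->pending_tx_nexts, round_pow2 (count, 64) - 1);
>>     vlib_buffer_enqueue_to_next (wrk->vm, node, wrk->pending_tx_buffers,
>>                                  wrk->pending_tx_nexts, count);
>>   }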
>> However I do not think the last issue reported by Chetan will be fixed
>> with that one, looks like a new one.
>> Chetan, Florin's request is good: do you reproduce it only with your
>> plugin (in which case that could be an issue in your plugin) or also with
>> stock VPP?
>>
>> ben
>>
>> > -----Original Message-----
>> > From: Florin Coras 
>> > Sent: mercredi 8 septembre 2021 08:39
>> > To: chetan bhasin 
>> > Cc: Benoit Ganne (bganne) ; vpp-dev > > d...@lists.fd.io>
>> > Subject: Re: [vpp-dev] VPP 2106 with Sanitizer enabled
>> >
>> > Hi Chetan,
>> >
>> > Something like the following should exercise using iperf and homegrown
>> > test apps most of the components of the host stack, i.e., vcl, session
>> > layer and the transports, including tcp:
>> >
>> > make test-debug VPP_EXTRA_CMAKE_ARGS=-DVPP_ENABLE_SANITIZE_ADDR=ON
>> CC=gcc-
>> > 10 TEST=vcl CACHE_OUTPUT=0
>> >
>> > Having said that, it seems some issue has crept in since the last time
>> > I’ve tested (granted, it’s been a long time). That is, most of the tests
>> > seem to fail, but I’ve only checked without Benoit’s patch.
>> >
>> > Regards,
>> > Florin
>> >
>> >
>> >   On Sep 7, 2021, at 9:57 PM, chetan bhasin
>> > mailto:chetan.bhasin...@gmail.com> >
>> wrote:
>> >
>> >   Hi Florin ,
>> >
>> >   This issue is coming when we are testing with iperf traffic with
>> our
>> > plugins that involve VPP TCP Host stack (VPP 21.06).
>> >
>> >   What's the best way to test VPP TCP host stack with Sanitizer
>> > without involving our out of tree plugins ?
>> >
>> >
>> >   Thanks,
>> >   Chetan
>> >
>> >   On Tue, Sep 7, 2021 at 7:47 PM Florin Coras <
>> fcoras.li...@gmail.com
>> > <mailto:fcoras.li...@gmail.com> > wrote:
>> >
>> >
>> >   Hi Chetan,
>> >
>> >   Out of curiosity, is this while running some make test
>> (like
>> > the iperf3 one) or with actual traffic through vpp?
>> >
>> >   Regards,
>> >   Florin
>> >
>> >
>> >
>> >   On Sep 7, 2021, at 12:17 AM, chetan bhasin
>> > mailto:chetan.bhasin...@gmail.com> >
>> wrote:
>> >
>> >   Hi Ben,
>> >
>> >   After applying the patch , now its crashing at
>> below
>> > place
>> >
>> >
>> > static void
>> >   session_flush_pending_tx_buffers (session_worker_t * wrk,
>> >   vlib_node_runtime_t * node)
>> >   {
>> >   vlib_buffer_enqueue_to_next (wrk->vm, node,
>> wrk->pending_tx_buffers,
>> >
>> >   wrk->pending_tx_nexts,
>> >   vec_len (wrk->pending_tx_nexts));
>> >   vec_reset_length (wrk->pending_tx_buffers);
>> >   vec_reset_length (wrk->pending_tx_nexts);
>> >   }
>> >   Program received signal SIGSEGV, Segmentation
>> fault.
>> >   [Switching to Thread 0x7ffb527f9700 (LWP 27762)]
>> &

Re: [vpp-dev] VPP 2106 with Sanitizer enabled

2021-09-08 Thread chetan bhasin
Thanks Benoit.

Actually, the same out-of-tree plugins work fine with SANITIZER enabled on
vpp 2101.

I will update you with the results after applying your patch and running
the test Florin suggested.

Thanks & Regards,
Chetan

On Wed, Sep 8, 2021 at 8:13 PM Benoit Ganne (bganne) 
wrote:

> Here is maybe a better fix: https://gerrit.fd.io/r/c/vpp/+/33693
> My previous changeset was only hiding the underlying issue which is arrays
> passed to vlib_buffer_enqueue_to_next() must be big enough to allow for
> some overflow (because of the use of clib_mask_compare_u16() and
> clib_mask_compare_u32() that overflow for optimization reasons).
> This is usually fine for your typical VPP node which allocates
> buffer[VLIB_FRAME_SIZE] and nexts[VLIB_FRAME_SIZE] but not when using
> vectors.
> This patch should actually fix the issue (and tests should pass), but
> there might be better ways.
> However I do not think the last issue reported by Chetan will be fixed
> with that one, looks like a new one.
> Chetan, Florin's request is good: do you reproduce it only with your
> plugin (in which case that could be an issue in your plugin) or also with
> stock VPP?
>
> ben
>
> > -----Original Message-----
> > From: Florin Coras 
> > Sent: mercredi 8 septembre 2021 08:39
> > To: chetan bhasin 
> > Cc: Benoit Ganne (bganne) ; vpp-dev  > d...@lists.fd.io>
> > Subject: Re: [vpp-dev] VPP 2106 with Sanitizer enabled
> >
> > Hi Chetan,
> >
> > Something like the following should exercise using iperf and homegrown
> > test apps most of the components of the host stack, i.e., vcl, session
> > layer and the transports, including tcp:
> >
> > make test-debug VPP_EXTRA_CMAKE_ARGS=-DVPP_ENABLE_SANITIZE_ADDR=ON
> CC=gcc-
> > 10 TEST=vcl CACHE_OUTPUT=0
> >
> > Having said that, it seems some issue has crept in since the last time
> > I’ve tested (granted, it’s been a long time). That is, most of the tests
> > seem to fail, but I’ve only checked without Benoit’s patch.
> >
> > Regards,
> > Florin
> >
> >
> >   On Sep 7, 2021, at 9:57 PM, chetan bhasin
> > mailto:chetan.bhasin...@gmail.com> > wrote:
> >
> >   Hi Florin ,
> >
> >   This issue is coming when we are testing with iperf traffic with
> our
> > plugins that involve VPP TCP Host stack (VPP 21.06).
> >
> >   What's the best way to test VPP TCP host stack with Sanitizer
> > without involving our out of tree plugins ?
> >
> >
> >   Thanks,
> >   Chetan
> >
> >   On Tue, Sep 7, 2021 at 7:47 PM Florin Coras <
> fcoras.li...@gmail.com
> > <mailto:fcoras.li...@gmail.com> > wrote:
> >
> >
> >   Hi Chetan,
> >
> >   Out of curiosity, is this while running some make test
> (like
> > the iperf3 one) or with actual traffic through vpp?
> >
> >   Regards,
> >   Florin
> >
> >
> >
> >   On Sep 7, 2021, at 12:17 AM, chetan bhasin
> > mailto:chetan.bhasin...@gmail.com> > wrote:
> >
> >   Hi Ben,
> >
> >   After applying the patch , now its crashing at
> below
> > place
> >
> >
> > static void
> >   session_flush_pending_tx_buffers (session_worker_t * wrk,
> >   vlib_node_runtime_t * node)
> >   {
> >   vlib_buffer_enqueue_to_next (wrk->vm, node,
> wrk->pending_tx_buffers,
> >
> >   wrk->pending_tx_nexts,
> >   vec_len (wrk->pending_tx_nexts));
> >   vec_reset_length (wrk->pending_tx_buffers);
> >   vec_reset_length (wrk->pending_tx_nexts);
> >   }
> >   Program received signal SIGSEGV, Segmentation
> fault.
> >   [Switching to Thread 0x7ffb527f9700 (LWP 27762)]
> >   0x771db5c1 in
> > __asan::FakeStack::AddrIsInFakeStack(unsigned long, unsigned long*,
> > unsigned long*) () from /lib64/libasan.so.5
> >   (gdb) bt
> >   #0  0x771db5c1 in
> > __asan::FakeStack::AddrIsInFakeStack(unsigned long, unsigned long*,
> > unsigned long*) () from /lib64/libasan.so.5
> >   #1  0x772c2a11 in
> > __asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*,
> void*)
> > () from /lib64/libasan.so.5
> >   #2  0x772dcdc2 in
> > __sanitizer::ThreadRegistry::FindThreadContextLocked(bool
> > (*)(__sanitizer::ThreadContextBase*, v

Re: [vpp-dev] VPP 2106 with Sanitizer enabled

2021-09-07 Thread chetan bhasin
Hi Florin ,

This issue comes up when we are testing with iperf traffic through our
plugins that involve the VPP TCP host stack (VPP 21.06).

What's the best way to test the VPP TCP host stack with Sanitizer without
involving our out-of-tree plugins?


Thanks,
Chetan

On Tue, Sep 7, 2021 at 7:47 PM Florin Coras  wrote:

> Hi Chetan,
>
> Out of curiosity, is this while running some make test (like the iperf3
> one) or with actual traffic through vpp?
>
> Regards,
> Florin
>
> On Sep 7, 2021, at 12:17 AM, chetan bhasin 
> wrote:
>
> Hi Ben,
>
> After applying the patch, it now crashes at the place below:
>
> static void
> session_flush_pending_tx_buffers (session_worker_t * wrk,
>                                   vlib_node_runtime_t * node)
> {
>   vlib_buffer_enqueue_to_next (wrk->vm, node, wrk->pending_tx_buffers,
>                                wrk->pending_tx_nexts,
>                                vec_len (wrk->pending_tx_nexts));
>   vec_reset_length (wrk->pending_tx_buffers);
>   vec_reset_length (wrk->pending_tx_nexts);   /* crash reported here */
> }
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7ffb527f9700 (LWP 27762)]
> 0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long,
> unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> (gdb) bt
> #0  0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned
> long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> #1  0x772c2a11 in
> __asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*, void*)
> () from /lib64/libasan.so.5
> #2  0x772dcdc2 in
> __sanitizer::ThreadRegistry::FindThreadContextLocked(bool
> (*)(__sanitizer::ThreadContextBase*, void*), void*) () from
> /lib64/libasan.so.5
> #3  0x772c3e5a in __asan::FindThreadByStackAddress(unsigned long)
> () from /lib64/libasan.so.5
> #4  0x771d5fb6 in __asan::GetStackAddressInformation(unsigned
> long, unsigned long, __asan::StackAddressDescription*) () from
> /lib64/libasan.so.5
> #5  0x771d73f9 in
> __asan::AddressDescription::AddressDescription(unsigned long, unsigned
> long, bool) () from /lib64/libasan.so.5
> #6  0x771d9e51 in __asan::ErrorGeneric::ErrorGeneric(unsigned int,
> unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned
> long) () from /lib64/libasan.so.5
> #7  0x772bdc2a in __asan::ReportGenericError(unsigned long,
> unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned
> int, bool) () from /lib64/libasan.so.5
> #8  0x772be927 in __asan_report_load4 () from /lib64/libasan.so.5
> #9  0x758e5185 in session_flush_pending_tx_buffers
> (wrk=0x7fff89101e80, node=0x7fff891a6e80)
> at  src/vnet/session/session_node.c:1658
> #10 0x758e8531 in session_queue_node_fn (vm=0x7fff72bf2c40,
> node=0x7fff891a6e80, frame=0x0)
> at  src/vnet/session/session_node.c:1812
> #11 0x73e04121 in dispatch_node (vm=0x7fff72bf2c40,
> node=0x7fff891a6e80, type=VLIB_NODE_TYPE_INPUT,
> dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0,
> last_time_stamp=2510336347228480)
> at  src/vlib/main.c:1024
> #12 0x73e13394 in vlib_main_or_worker_loop (vm=0x7fff72bf2c40,
> is_main=0) at  src/vlib/main.c:2949
> #13 0x73e14fb8 in vlib_worker_loop (vm=0x7fff72bf2c40) at
> src/vlib/main.c:3114
> #14 0x73eac8f1 in vlib_worker_thread_fn (arg=0x7fff700ef480) at
> src/vlib/threads.c:1560
> #15 0x734d9504 in clib_calljmp () at  src/vppinfra/longjmp.S:123
> #16 0x7ffb527f8230 in ?? ()
> #17 0x73ea0141 in vlib_worker_thread_bootstrap_fn
> (arg=0x7fff700ef480) at  src/vlib/threads.c:431
> #18 0x7fff6d41971b in eal_thread_loop () from
> /opt/opwv/integra/99.9/tools/vpp_2106_asan/bin/../lib/dpdk_plugin.so
> #19 0x738abea5 in start_thread () from /lib64/libpthread.so.0
> #20 0x72e328cd in clone () from /lib64/libc.so.6
> (gdb)
>
>
> On Mon, Sep 6, 2021 at 6:10 PM Benoit Ganne (bganne) 
> wrote:
>
>> Yes we are aware of it, still working on the correct fix though.
>> In the meantime you can try to apply https://gerrit.fd.io/r/c/vpp/+/32765
>>  which should workaround that for now.
>>
>> Best
>> ben
>>
>> > -----Original Message-----
>> > From: chetan bhasin 
>> > Sent: lundi 6 septembre 2021 14:21
>> > To: Benoit Ganne (bganne) 
>> > Cc: vpp-dev 
>> > Subject: Re: [vpp-dev] VPP 2106 with Sanitizer enabled
>> >
>> > Hi,
>> >
>> > The below crash is coming as we involved VPP TCP host stack.
>> >
>> >
>> > Program received signal SIGSEGV, Segmentation fault.
>> > [Switching to Thread 0x7ffb527f9700 (LWP 2013)]
>> > 0x0

Re: [vpp-dev] VPP 2106 with Sanitizer enabled

2021-09-07 Thread chetan bhasin
Hi Ben,

After applying the patch, it now crashes at the place below:

static void
session_flush_pending_tx_buffers (session_worker_t * wrk,
                                  vlib_node_runtime_t * node)
{
  vlib_buffer_enqueue_to_next (wrk->vm, node, wrk->pending_tx_buffers,
                               wrk->pending_tx_nexts,
                               vec_len (wrk->pending_tx_nexts));
  vec_reset_length (wrk->pending_tx_buffers);
  vec_reset_length (wrk->pending_tx_nexts);   /* crash reported here */
}
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffb527f9700 (LWP 27762)]
0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long,
unsigned long*, unsigned long*) () from /lib64/libasan.so.5
(gdb) bt
#0  0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned
long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
#1  0x772c2a11 in
__asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*, void*)
() from /lib64/libasan.so.5
#2  0x772dcdc2 in
__sanitizer::ThreadRegistry::FindThreadContextLocked(bool
(*)(__sanitizer::ThreadContextBase*, void*), void*) () from
/lib64/libasan.so.5
#3  0x772c3e5a in __asan::FindThreadByStackAddress(unsigned long)
() from /lib64/libasan.so.5
#4  0x771d5fb6 in __asan::GetStackAddressInformation(unsigned long,
unsigned long, __asan::StackAddressDescription*) () from /lib64/libasan.so.5
#5  0x771d73f9 in
__asan::AddressDescription::AddressDescription(unsigned long, unsigned
long, bool) () from /lib64/libasan.so.5
#6  0x771d9e51 in __asan::ErrorGeneric::ErrorGeneric(unsigned int,
unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned
long) () from /lib64/libasan.so.5
#7  0x772bdc2a in __asan::ReportGenericError(unsigned long,
unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned
int, bool) () from /lib64/libasan.so.5
#8  0x772be927 in __asan_report_load4 () from /lib64/libasan.so.5
#9  0x758e5185 in session_flush_pending_tx_buffers
(wrk=0x7fff89101e80, node=0x7fff891a6e80)
at  src/vnet/session/session_node.c:1658
#10 0x758e8531 in session_queue_node_fn (vm=0x7fff72bf2c40,
node=0x7fff891a6e80, frame=0x0)
at  src/vnet/session/session_node.c:1812
#11 0x73e04121 in dispatch_node (vm=0x7fff72bf2c40,
node=0x7fff891a6e80, type=VLIB_NODE_TYPE_INPUT,
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0,
last_time_stamp=2510336347228480)
at  src/vlib/main.c:1024
#12 0x73e13394 in vlib_main_or_worker_loop (vm=0x7fff72bf2c40,
is_main=0) at  src/vlib/main.c:2949
#13 0x73e14fb8 in vlib_worker_loop (vm=0x7fff72bf2c40) at
src/vlib/main.c:3114
#14 0x73eac8f1 in vlib_worker_thread_fn (arg=0x7fff700ef480) at
src/vlib/threads.c:1560
#15 0x734d9504 in clib_calljmp () at  src/vppinfra/longjmp.S:123
#16 0x7ffb527f8230 in ?? ()
#17 0x73ea0141 in vlib_worker_thread_bootstrap_fn
(arg=0x7fff700ef480) at  src/vlib/threads.c:431
#18 0x7fff6d41971b in eal_thread_loop () from
/opt/opwv/integra/99.9/tools/vpp_2106_asan/bin/../lib/dpdk_plugin.so
#19 0x738abea5 in start_thread () from /lib64/libpthread.so.0
#20 0x72e328cd in clone () from /lib64/libc.so.6
(gdb)


On Mon, Sep 6, 2021 at 6:10 PM Benoit Ganne (bganne) 
wrote:

> Yes we are aware of it, still working on the correct fix though.
> In the meantime you can try to apply https://gerrit.fd.io/r/c/vpp/+/32765
> which should workaround that for now.
>
> Best
> ben
>
> > -----Original Message-----
> > From: chetan bhasin 
> > Sent: lundi 6 septembre 2021 14:21
> > To: Benoit Ganne (bganne) 
> > Cc: vpp-dev 
> > Subject: Re: [vpp-dev] VPP 2106 with Sanitizer enabled
> >
> > Hi,
> >
> > The below crash is coming as we involved VPP TCP host stack.
> >
> >
> > Program received signal SIGSEGV, Segmentation fault.
> > [Switching to Thread 0x7ffb527f9700 (LWP 2013)]
> > 0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long,
> > unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> > (gdb) bt
> > #0  0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned
> > long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> > #1  0x772c2a11 in
> > __asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*,
> void*)
> > () from /lib64/libasan.so.5
> > #2  0x772dcdc2 in
> > __sanitizer::ThreadRegistry::FindThreadContextLocked(bool
> > (*)(__sanitizer::ThreadContextBase*, void*), void*) () from
> > /lib64/libasan.so.5
> > #3  0x772c3e5a in __asan::FindThreadByStackAddress(unsigned long)
> > () from /lib64/libasan.so.5
> > #4  0x771d5fb6 in __asan::GetStackAddressInformation(unsigned
> > long, unsigned long, __asan::StackAddressDescription*) () from
> > /lib64/libasan.so.5
> > #5  0x771d73f9 i

Re: [vpp-dev] VPP 2106 with Sanitizer enabled

2021-09-06 Thread chetan bhasin
Thanks a ton Ben!

On Mon, Sep 6, 2021 at 6:10 PM Benoit Ganne (bganne) 
wrote:

> Yes we are aware of it, still working on the correct fix though.
> In the meantime you can try to apply https://gerrit.fd.io/r/c/vpp/+/32765
> which should workaround that for now.
>
> Best
> ben
>
> > -----Original Message-----
> > From: chetan bhasin 
> > Sent: lundi 6 septembre 2021 14:21
> > To: Benoit Ganne (bganne) 
> > Cc: vpp-dev 
> > Subject: Re: [vpp-dev] VPP 2106 with Sanitizer enabled
> >
> > Hi,
> >
> > The below crash is coming as we involved VPP TCP host stack.
> >
> >
> > Program received signal SIGSEGV, Segmentation fault.
> > [Switching to Thread 0x7ffb527f9700 (LWP 2013)]
> > 0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long,
> > unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> > (gdb) bt
> > #0  0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned
> > long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> > #1  0x772c2a11 in
> > __asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*,
> void*)
> > () from /lib64/libasan.so.5
> > #2  0x772dcdc2 in
> > __sanitizer::ThreadRegistry::FindThreadContextLocked(bool
> > (*)(__sanitizer::ThreadContextBase*, void*), void*) () from
> > /lib64/libasan.so.5
> > #3  0x772c3e5a in __asan::FindThreadByStackAddress(unsigned long)
> > () from /lib64/libasan.so.5
> > #4  0x771d5fb6 in __asan::GetStackAddressInformation(unsigned
> > long, unsigned long, __asan::StackAddressDescription*) () from
> > /lib64/libasan.so.5
> > #5  0x771d73f9 in
> > __asan::AddressDescription::AddressDescription(unsigned long, unsigned
> > long, bool) () from /lib64/libasan.so.5
> > #6  0x771d9e51 in __asan::ErrorGeneric::ErrorGeneric(unsigned
> int,
> > unsigned long, unsigned long, unsigned long, unsigned long, bool,
> unsigned
> > long) () from /lib64/libasan.so.5
> > #7  0x772bdc2a in __asan::ReportGenericError(unsigned long,
> > unsigned long, unsigned long, unsigned long, bool, unsigned long,
> unsigned
> > int, bool) () from /lib64/libasan.so.5
> > #8  0x772bf194 in __asan_report_load_n () from
> /lib64/libasan.so.5
> > #9  0x73f45463 in clib_mask_compare_u16_x64 (v=2,
> > a=0x7fff89fd6150, n_elts=2) at  src/vppinfra/vector_funcs.h:24
> > #10 0x73f4571e in clib_mask_compare_u16 (v=2, a=0x7fff89fd6150,
> > mask=0x7ffb51fe2310, n_elts=2)
> > at  src/vppinfra/vector_funcs.h:79
> > #11 0x73f45b81 in enqueue_one (vm=0x7fff7314ec40,
> > node=0x7fff89fe1e00, used_elt_bmp=0x7ffb51fe2440, next_index=2,
> > buffers=0x7fff7045a9e0, nexts=0x7fff89fd6150, n_buffers=2, n_left=2,
> > tmp=0x7ffb51fe2480) at  src/vlib/buffer_funcs.c:30
> > #12 0x73f6bdae in vlib_buffer_enqueue_to_next_fn_skx
> > (vm=0x7fff7314ec40, node=0x7fff89fe1e00, buffers=0x7fff7045a9e0,
> > nexts=0x7fff89fd6150, count=2)
> > at  src/vlib/buffer_funcs.c:110
> > #13 0x758cd2b6 in vlib_buffer_enqueue_to_next (vm=0x7fff7314ec40,
> > node=0x7fff89fe1e00, buffers=0x7fff7045a9e0, nexts=0x7fff89fd6150,
> > count=2)
> > at  src/vlib/buffer_node.h:344
> > #14 0x758e4cf1 in session_flush_pending_tx_buffers
> > (wrk=0x7fff8912b780, node=0x7fff89fe1e00)
> > at  src/vnet/session/session_node.c:1654
> > #15 0x758e844f in session_queue_node_fn (vm=0x7fff7314ec40,
> > node=0x7fff89fe1e00, frame=0x0)
> > at  src/vnet/session/session_node.c:1812
> > #16 0x73e0402f in dispatch_node (vm=0x7fff7314ec40,
> > node=0x7fff89fe1e00, type=VLIB_NODE_TYPE_INPUT,
> > dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0,
> > last_time_stamp=2463904994699296)
> > at  src/vlib/main.c:1024
> > #17 0x73e132a2 in vlib_main_or_worker_loop (vm=0x7fff7314ec40,
> > is_main=0) at  src/vlib/main.c:2949
> > #18 0x73e14ec6 in vlib_worker_loop (vm=0x7fff7314ec40) at
> > src/vlib/main.c:3114
> > #19 0x73eac7ff in vlib_worker_thread_fn (arg=0x7fff7050ef40) at
> > src/vlib/threads.c:1560
> > #20 0x734d9504 in clib_calljmp () at  src/vppinfra/longjmp.S:123
> > #21 0x7ffb527f8230 in ?? ()
> > #22 0x73ea004f in vlib_worker_thread_bootstrap_fn
> > (arg=0x7fff7050ef40) at  src/vlib/threads.c:431
> > #23 0x7fff6d41971b in eal_thread_loop () from
> > /opt/opwv/integra/99.9/tools/vpp_2106_asan/bin/../lib/dpdk_plugin.so
> > #24 0x738abea5 in start_thread () from /lib64/libpthread.so.0
> &g

Re: [vpp-dev] VPP 2106 with Sanitizer enabled

2021-09-06 Thread chetan bhasin
Hi,

The crash below is coming once we involve the VPP TCP host stack.


Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffb527f9700 (LWP 2013)]
0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long,
unsigned long*, unsigned long*) () from /lib64/libasan.so.5
(gdb) bt
#0  0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned
long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
#1  0x772c2a11 in
__asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*, void*)
() from /lib64/libasan.so.5
#2  0x772dcdc2 in
__sanitizer::ThreadRegistry::FindThreadContextLocked(bool
(*)(__sanitizer::ThreadContextBase*, void*), void*) () from
/lib64/libasan.so.5
#3  0x772c3e5a in __asan::FindThreadByStackAddress(unsigned long)
() from /lib64/libasan.so.5
#4  0x771d5fb6 in __asan::GetStackAddressInformation(unsigned long,
unsigned long, __asan::StackAddressDescription*) () from /lib64/libasan.so.5
#5  0x771d73f9 in
__asan::AddressDescription::AddressDescription(unsigned long, unsigned
long, bool) () from /lib64/libasan.so.5
#6  0x771d9e51 in __asan::ErrorGeneric::ErrorGeneric(unsigned int,
unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned
long) () from /lib64/libasan.so.5
#7  0x772bdc2a in __asan::ReportGenericError(unsigned long,
unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned
int, bool) () from /lib64/libasan.so.5
#8  0x772bf194 in __asan_report_load_n () from /lib64/libasan.so.5
#9  0x73f45463 in clib_mask_compare_u16_x64 (v=2, a=0x7fff89fd6150,
n_elts=2) at  src/vppinfra/vector_funcs.h:24
#10 0x73f4571e in clib_mask_compare_u16 (v=2, a=0x7fff89fd6150,
mask=0x7ffb51fe2310, n_elts=2)
at  src/vppinfra/vector_funcs.h:79
#11 0x73f45b81 in enqueue_one (vm=0x7fff7314ec40,
node=0x7fff89fe1e00, used_elt_bmp=0x7ffb51fe2440, next_index=2,
buffers=0x7fff7045a9e0, nexts=0x7fff89fd6150, n_buffers=2, n_left=2,
tmp=0x7ffb51fe2480) at  src/vlib/buffer_funcs.c:30
#12 0x73f6bdae in vlib_buffer_enqueue_to_next_fn_skx
(vm=0x7fff7314ec40, node=0x7fff89fe1e00, buffers=0x7fff7045a9e0,
nexts=0x7fff89fd6150, count=2)
at  src/vlib/buffer_funcs.c:110
#13 0x758cd2b6 in vlib_buffer_enqueue_to_next (vm=0x7fff7314ec40,
node=0x7fff89fe1e00, buffers=0x7fff7045a9e0, nexts=0x7fff89fd6150, count=2)
at  src/vlib/buffer_node.h:344
#14 0x758e4cf1 in session_flush_pending_tx_buffers
(wrk=0x7fff8912b780, node=0x7fff89fe1e00)
at  src/vnet/session/session_node.c:1654
#15 0x758e844f in session_queue_node_fn (vm=0x7fff7314ec40,
node=0x7fff89fe1e00, frame=0x0)
at  src/vnet/session/session_node.c:1812
#16 0x73e0402f in dispatch_node (vm=0x7fff7314ec40,
node=0x7fff89fe1e00, type=VLIB_NODE_TYPE_INPUT,
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0,
last_time_stamp=2463904994699296)
at  src/vlib/main.c:1024
#17 0x73e132a2 in vlib_main_or_worker_loop (vm=0x7fff7314ec40,
is_main=0) at  src/vlib/main.c:2949
#18 0x73e14ec6 in vlib_worker_loop (vm=0x7fff7314ec40) at
src/vlib/main.c:3114
#19 0x73eac7ff in vlib_worker_thread_fn (arg=0x7fff7050ef40) at
src/vlib/threads.c:1560
#20 0x734d9504 in clib_calljmp () at  src/vppinfra/longjmp.S:123
#21 0x7ffb527f8230 in ?? ()
#22 0x73ea004f in vlib_worker_thread_bootstrap_fn
(arg=0x7fff7050ef40) at  src/vlib/threads.c:431
#23 0x7fff6d41971b in eal_thread_loop () from
/opt/opwv/integra/99.9/tools/vpp_2106_asan/bin/../lib/dpdk_plugin.so
#24 0x738abea5 in start_thread () from /lib64/libpthread.so.0
#25 0x72e328cd in clone () from /lib64/libc.so.6
(gdb) :q

On Mon, Sep 6, 2021 at 1:52 PM chetan bhasin 
wrote:

> Hi Ben,
>
> Thanks for the direction. It looks like it will fix both of the issues
> mentioned above. I will update you with the results after applying the
> patch.
>
> Is there any ASAN-related patch inside the TCP host stack code? I will
> share the issue with you shortly.
>
> Thanks,
> Chetan
>
> On Mon, Sep 6, 2021 at 1:22 PM Benoit Ganne (bganne) 
> wrote:
>
>> It should be fixed in master by https://gerrit.fd.io/r/c/vpp/+/32643
>>
>> ben
>>
>> > -----Original Message-----
>> > From: vpp-dev@lists.fd.io  On Behalf Of chetan
>> bhasin
>> > Sent: lundi 6 septembre 2021 09:36
>> > To: vpp-dev 
>> > Subject: [vpp-dev] VPP 2106 with Sanitizer enabled
>> >
>> > Hi
>> >
>> >
>> > We are facing two errors with vpp2106 and Address Sanitizer enabled.
>> >
>> > make V=1 -j4 build VPP_EXTRA_CMAKE_ARGS=-DVPP_ENABLE_SANITIZE_ADDR=ON
>> >
>> >
>> > Work-around - After adding the two api’s string_key_sum and
>> > strnlen_s_inline  to ASAN suppression, vpp2106 comes up f

Re: [vpp-dev] VPP 2106 with Sanitizer enabled

2021-09-06 Thread chetan bhasin
Hi Ben,

Thanks for the direction. It looks like it will fix both of the issues
mentioned above. I will update you with the results after applying the
patch.

Is there any ASAN-related patch inside the TCP host stack code? I will
share the issue with you shortly.

Thanks,
Chetan

On Mon, Sep 6, 2021 at 1:22 PM Benoit Ganne (bganne) 
wrote:

> It should be fixed in master by https://gerrit.fd.io/r/c/vpp/+/32643
>
> ben
>
> > -----Original Message-----
> > From: vpp-dev@lists.fd.io  On Behalf Of chetan
> bhasin
> > Sent: lundi 6 septembre 2021 09:36
> > To: vpp-dev 
> > Subject: [vpp-dev] VPP 2106 with Sanitizer enabled
> >
> > Hi
> >
> >
> > We are facing two errors with vpp2106 and Address Sanitizer enabled.
> >
> > make V=1 -j4 build VPP_EXTRA_CMAKE_ARGS=-DVPP_ENABLE_SANITIZE_ADDR=ON
> >
> >
> > Work-around - After adding the two api’s string_key_sum and
> > strnlen_s_inline  to ASAN suppression, vpp2106 comes up fine.
> >
> >
> > Error 1:
> > --
> > Program received signal SIGSEGV, Segmentation fault.
> > 0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long,
> > unsigned long*, unsigned long*) ()
> >from /lib64/libasan.so.5
> > (gdb) bt
> > #0  0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned
> > long, unsigned long*, unsigned long*) ()
> >from /lib64/libasan.so.5
> > #1  0x772c2a11 in
> > __asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*,
> void*)
> > ()
> >from /lib64/libasan.so.5
> > #2  0x772dcdc2 in
> > __sanitizer::ThreadRegistry::FindThreadContextLocked(bool
> > (*)(__sanitizer::ThreadContextBase*, void*), void*) () from
> > /lib64/libasan.so.5
> > #3  0x772c3e5a in __asan::FindThreadByStackAddress(unsigned long)
> > () from /lib64/libasan.so.5
> > #4  0x771d5fb6 in __asan::GetStackAddressInformation(unsigned
> > long, unsigned long, __asan::StackAddressDescription*) () from
> > /lib64/libasan.so.5
> > #5  0x771d73f9 in
> > __asan::AddressDescription::AddressDescription(unsigned long, unsigned
> > long, bool) ()
> >from /lib64/libasan.so.5
> > #6  0x771d9e51 in __asan::ErrorGeneric::ErrorGeneric(unsigned
> int,
> > unsigned long, unsigned long, unsigned long, unsigned long, bool,
> unsigned
> > long) () from /lib64/libasan.so.5
> > #7  0x772bdc2a in __asan::ReportGenericError(unsigned long,
> > unsigned long, unsigned long, unsigned long, bool, unsigned long,
> unsigned
> > int, bool) () from /lib64/libasan.so.5
> > #8  0x7720ef9c in __interceptor_strlen.part.0 () from
> > /lib64/libasan.so.5
> > #9  0x734ce2ec in string_key_sum (h=0x7fff6ff6e970,
> > key=140735097062688)
> > at  src/vppinfra/hash.c:947
> > #10 0x734caf15 in key_sum (h=0x7fff6ff6e970, key=140735097062688)
> > at  src/vppinfra/hash.c:333
> > #11 0x734cbf76 in lookup (v=0x7fff6ff6e9b8, key=140735097062688,
> > op=GET, new_value=0x0, old_value=0x0)
> > at  src/vppinfra/hash.c:557
> > #12 0x734cc59d in _hash_get_pair (v=0x7fff6ff6e9b8,
> > key=140735097062688)
> > at  src/vppinfra/hash.c:653
> > #13 0x0042e885 in lookup_hash_index (name=0x7fff7177c520
> > "/mem/stat segment")
> > at  src/vpp/stats/stat_segment.c:69
> > #14 0x00431790 in stat_segment_new_entry (name=0x7fff7177c520
> > "/mem/stat segment",
> > t=STAT_DIR_TYPE_COUNTER_VECTOR_SIMPLE)
> > at  src/vpp/stats/stat_segment.c:402
> > #15 0x004401fb in vlib_stats_register_mem_heap
> > (heap=0x7ffb67e0)
> > at  src/vpp/stats/stat_segment_provider.c:96
> > #16 0x004327fa in vlib_map_stat_segment_init ()
> > at  src/vpp/stats/stat_segment.c:493
> > ---Type  to continue, or q  to quit---
> > #17 0x73e15d19 in vlib_main (vm=0x7fff6eeff680,
> > input=0x7fff6a0a9f70)
> > at  src/vlib/main.c:3272
> > #18 0x73f0d924 in thread0 (arg=140735054608000)
> > at  src/vlib/unix/main.c:671
> > #19 0x734d9504 in clib_calljmp ()
> > at  src/vppinfra/longjmp.S:123
> > #20 0x7fffc940 in ?? ()
> > #21 0x73f0e67e in vlib_unix_main (argc=282, argv=0x61d1a480)
> > at  src/vlib/unix/main.c:751
> > #22 0x0040b482 in main (argc=282, argv=0x61d1a480)
> > at  src/vpp/vnet/main.c:336
> > (gdb) q
> >
> >
> >
> >
> > Error 2:
> > ---
> > Program received signal SI

[vpp-dev] VPP 2106 with Sanitizer enabled

2021-09-06 Thread chetan bhasin
Hi,

We are facing two errors with vpp2106 and Address Sanitizer enabled.

make V=1 -j4 build VPP_EXTRA_CMAKE_ARGS=-DVPP_ENABLE_SANITIZE_ADDR=ON

Work-around: after adding the two APIs string_key_sum and strnlen_s_inline
to the ASAN suppression list, vpp2106 comes up fine.
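
For reference, a suppression of that shape (the file name and run command
are illustrative; ASan reads the file via ASAN_OPTIONS):

# vpp-asan.supp
interceptor_via_fun:string_key_sum
interceptor_via_fun:strnlen_s_inline

$ ASAN_OPTIONS=suppressions=vpp-asan.supp bin/vpp -c startup.conf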


Error 1:
--
Program received signal SIGSEGV, Segmentation fault.
0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long,
unsigned long*, unsigned long*) ()
   from /lib64/libasan.so.5
(gdb) bt
#0  0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned
long, unsigned long*, unsigned long*) ()
   from /lib64/libasan.so.5
#1  0x772c2a11 in
__asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*, void*)
()
   from /lib64/libasan.so.5
#2  0x772dcdc2 in
__sanitizer::ThreadRegistry::FindThreadContextLocked(bool
(*)(__sanitizer::ThreadContextBase*, void*), void*) () from
/lib64/libasan.so.5
#3  0x772c3e5a in __asan::FindThreadByStackAddress(unsigned long)
() from /lib64/libasan.so.5
#4  0x771d5fb6 in __asan::GetStackAddressInformation(unsigned long,
unsigned long, __asan::StackAddressDescription*) () from /lib64/libasan.so.5
#5  0x771d73f9 in
__asan::AddressDescription::AddressDescription(unsigned long, unsigned
long, bool) ()
   from /lib64/libasan.so.5
#6  0x771d9e51 in __asan::ErrorGeneric::ErrorGeneric(unsigned int,
unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned
long) () from /lib64/libasan.so.5
#7  0x772bdc2a in __asan::ReportGenericError(unsigned long,
unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned
int, bool) () from /lib64/libasan.so.5
#8  0x7720ef9c in __interceptor_strlen.part.0 () from
/lib64/libasan.so.5
#9  0x734ce2ec in string_key_sum (h=0x7fff6ff6e970,
key=140735097062688)
at  src/vppinfra/hash.c:947
#10 0x734caf15 in key_sum (h=0x7fff6ff6e970, key=140735097062688)
at  src/vppinfra/hash.c:333
#11 0x734cbf76 in lookup (v=0x7fff6ff6e9b8, key=140735097062688,
op=GET, new_value=0x0, old_value=0x0)
at  src/vppinfra/hash.c:557
#12 0x734cc59d in _hash_get_pair (v=0x7fff6ff6e9b8,
key=140735097062688)
at  src/vppinfra/hash.c:653
#13 0x0042e885 in lookup_hash_index (name=0x7fff7177c520
"/mem/stat segment")
at  src/vpp/stats/stat_segment.c:69
#14 0x00431790 in stat_segment_new_entry (name=0x7fff7177c520
"/mem/stat segment",
t=STAT_DIR_TYPE_COUNTER_VECTOR_SIMPLE)
at  src/vpp/stats/stat_segment.c:402
#15 0x004401fb in vlib_stats_register_mem_heap (heap=0x7ffb67e0)
at  src/vpp/stats/stat_segment_provider.c:96
#16 0x004327fa in vlib_map_stat_segment_init ()
at  src/vpp/stats/stat_segment.c:493
---Type  to continue, or q  to quit---
#17 0x73e15d19 in vlib_main (vm=0x7fff6eeff680,
input=0x7fff6a0a9f70)
at  src/vlib/main.c:3272
#18 0x73f0d924 in thread0 (arg=140735054608000)
at  src/vlib/unix/main.c:671
#19 0x734d9504 in clib_calljmp ()
at  src/vppinfra/longjmp.S:123
#20 0x7fffc940 in ?? ()
#21 0x73f0e67e in vlib_unix_main (argc=282, argv=0x61d1a480)
at  src/vlib/unix/main.c:751
#22 0x0040b482 in main (argc=282, argv=0x61d1a480)
at  src/vpp/vnet/main.c:336
(gdb) q


Error 2:
---
Program received signal SIGSEGV, Segmentation fault.
0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long,
unsigned long*, unsigned long*) ()
   from /lib64/libasan.so.5
(gdb) bt
#0  0x771db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned
long, unsigned long*, unsigned long*) ()
   from /lib64/libasan.so.5
#1  0x772c2a11 in
__asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*, void*)
()
   from /lib64/libasan.so.5
#2  0x772dcdc2 in
__sanitizer::ThreadRegistry::FindThreadContextLocked(bool
(*)(__sanitizer::ThreadContextBase*, void*), void*) () from
/lib64/libasan.so.5
#3  0x772c3e5a in __asan::FindThreadByStackAddress(unsigned long)
() from /lib64/libasan.so.5
#4  0x771d5fb6 in __asan::GetStackAddressInformation(unsigned long,
unsigned long, __asan::StackAddressDescription*) () from /lib64/libasan.so.5
#5  0x771d73f9 in
__asan::AddressDescription::AddressDescription(unsigned long, unsigned
long, bool) ()
   from /lib64/libasan.so.5
#6  0x771d9e51 in __asan::ErrorGeneric::ErrorGeneric(unsigned int,
unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned
long) () from /lib64/libasan.so.5
#7  0x772bdc2a in __asan::ReportGenericError(unsigned long,
unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned
int, bool) () from /lib64/libasan.so.5
#8  0x7721252c in __interceptor_strnlen.part.0 () from
/lib64/libasan.so.5
#9  0x73595a0c in strnlen_s_inline (s=0x7fff7177c520 "/mem/stat
segment", maxsize=128)
at  src/vppinfra/string.h:800
#10 0x73595f63 in strcpy_s_

Re: [vpp-dev] VPP 2101 (pcap trace)

2021-09-02 Thread chetan bhasin
Hi Dave,

We have tested on VPP 21.06, and it works fine there.

Thanks,
Chetan

On Tue, Aug 31, 2021 at 11:25 PM  wrote:

> The following configuration works as expected:
>
>
>
> set term pag off
>
> loop create
>
> loop create
>
> set int ip address loop0 10.20.36.1/24
>
> set int ip address loop1 10.20.37.1/24
>
> set int state loop0 up
>
> set int state loop1 up
>
>
>
> packet-generator new {
>
> name s0
>
> limit 10
>
> size 128-128
>
> interface loop0
>
> node ethernet-input
>
> data { IP4: feed.face.000 -> dead.0.0
>
>UDP: 10.20.36.168 -> 10.20.37.10
>
>UDP: 1234 -> 2345
>
>incrementing 114
>
> }
>
> }
>
> packet-generator new {
>
> name s1
>
> limit 10
>
> size 128-128
>
> interface loop1
>
> node ethernet-input
>
> data { IP4: feed.face.001 -> dead.0.1
>
>UDP: 10.20.37.10 -> 10.20.36.168
>
>UDP: 1234 -> 2345
>
>incrementing 114
>
> }
>
> }
>
>
>
> ip route add 10.20.36.168/32 via drop
>
> ip route add 10.20.37.10/23 via drop
>
>
>
>
>
> classify filter pcap mask l3 ip4 src match l3 ip4 src 10.20.37.10
>
> classify filter pcap mask l3 ip4 dst match l3 ip4 dst 10.20.37.10
>
>
>
> DBGvpp# sh class filter
>
> Filter Used By Table(s)
>
> -- 
>
> packet tracer: first table none
>
> pcap rx/tx/drop:   first table 1
>
>
>
> DBGvpp# sh cla t index 1 verbose # Note the NextTbl field...
>
>   TableIdx  Sessions   NextTbl  NextNode
>
>  1 1 0-1
>
>   Heap: base 0x7fffaef45000, size 128k, locked, unmap-on-destroy, name
> 'classify'
>
>   page stats: page-size 4K, total 32, mapped 2, not-mapped 30
>
> numa 0: 2 pages, 8k bytes
>
>   total: 127.95K, used: 1.38K, free: 126.58K, trimmable: 126.50K
>
>   nbuckets 8, skip 1 match 2 flag 0 offset 0
>
>   mask 
>
>   linear-search buckets 0
>
>
>
> [4]: heap offset 1280, elts 2, normal
>
> 0: [1280]: next_index 0 advance 0 opaque 0 action 0 metadata 0
>
> k: 0a14250a
>
> hits 10, last_heard 0.00
>
>
>
> 1 active elements
>
> 1 free lists
>
> 0 linear-search buckets
>
>
>
> DBGvpp# sh cla t index 0 verbose
>
>   TableIdx  Sessions   NextTbl  NextNode
>
>  0 1-1-1
>
>   Heap: base 0x7fffaef66000, size 128k, locked, unmap-on-destroy, name
> 'classify'
>
>   page stats: page-size 4K, total 32, mapped 2, not-mapped 30
>
> numa 0: 2 pages, 8k bytes
>
>   total: 127.95K, used: 1.34K, free: 126.61K, trimmable: 126.53K
>
>   nbuckets 8, skip 1 match 1 flag 0 offset 0
>
>   mask 
>
>   linear-search buckets 0
>
>
>
> [7]: heap offset 1280, elts 2, normal
>
> 0: [1280]: next_index 0 advance 0 opaque 0 action 0 metadata 0
>
> k: 0a14250a
>
> hits 10, last_heard 0.00
>
>
>
> 1 active elements
>
> 1 free lists
>
> 0 linear-search buckets
>
>
>
> Classification which matches either an ip4 src or and ip4 dst requires two
> (chained) tables, as shown in the example above.
>
>
>
> Looking at the output you sent, classifier table 1’s NextTbl field is set
> to ~0 which explains why the classification isn’t working. You don’t need
> to send traffic; make sure that you have a two-table chain with the
> required mask/matches installed.
>
>
>
> If you can come up with a reproducible debug CLI sequence which results in
> incorrect table programming, please send it so we can fix it.
>
>
>
> Thanks... Dave
>
>
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Benoit Ganne
> (bganne) via lists.fd.io
> Sent: Tuesday, August 31, 2021 12:36 PM
> To: chetan bhasin ; vpp-dev <
> vpp-dev@lists.fd.io>
> Subject: Re: [vpp-dev] VPp 2101 (pcap trace)
>
>
>
> > Any idea why it's not working? Or what I am doing wrong?
>
>
>
> I am not sure, it should be working but there might be a bug lurking
> somewhere...
>
> I'll look at it when I have time.
>
>
>
> Best
>
> ben
>
>
>
> > On

Re: [vpp-dev] vpp 2101 , we have to reduce the frame size to 128 for our use case

2021-08-30 Thread chetan bhasin
Hi,

As the 6WIND stack uses a vector size of 32, for comparison we are doing
some performance testing with VPP using different vector sizes.

Thanks,
Chetan

On Fri, Aug 27, 2021 at 4:53 PM  wrote:

> Changing VLIB_FRAME_SIZE “should just work” but you could consider
> adjusting whichever input node(s) you’re using to limit the inbound frame
> size.
>
>
>
> Just to ask: why do you want to reduce the frame-size?
>
>
>
> D.
>
>
>
> *From:* vpp-dev@lists.fd.io  *On Behalf Of *chetan
> bhasin
> *Sent:* Friday, August 27, 2021 1:38 AM
> *To:* vpp-dev 
> *Subject:* [vpp-dev] vpp 2101 , we have to reduce the frame size to 128
> for our use case
>
>
>
> Hi,
>
>
>
> We have a use case where we have to reduce the frame-size to 128,
> changing VLIB_FRAME_SIZE to 128 will do the trick ? or do we need to change
> anything else other than this ? Please advise.
>
>
>
> Thanks,
>
> Chetan
>
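
Dave's suggestion above, limiting the burst in the input node rather than
rebuilding with a smaller VLIB_FRAME_SIZE, might look roughly like this
(my_driver_rx_burst() is a hypothetical stand-in for whatever driver rx API
you use):

#include <vlib/vlib.h>

#define MY_MAX_BURST 128	/* cap bursts at 128; VLIB_FRAME_SIZE untouched */

/* hypothetical helper: fills 'bi' with up to 'max' rx buffer indices */
extern u32 my_driver_rx_burst (u32 queue_id, u32 * bi, u32 max);

static uword
my_input_fn (vlib_main_t * vm, vlib_node_runtime_t * node,
	     vlib_frame_t * frame)
{
  u32 bi[MY_MAX_BURST];
  u32 n_rx = my_driver_rx_burst (0 /* queue */ , bi, MY_MAX_BURST);

  if (n_rx)
    vlib_buffer_enqueue_to_single_next (vm, node, bi,
					node->cached_next_index, n_rx);
  return n_rx;
}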

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20050): https://lists.fd.io/g/vpp-dev/message/20050
Mute This Topic: https://lists.fd.io/mt/85179707/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] VPP 2106 Linux-cp plugin supports ipv6 handling ?

2021-08-30 Thread chetan bhasin
Hi,

We are on vpp 21.06 and are using the linux-cp plugin. Does it support IPv6
traffic as well?
I can see IPv4 traffic going to the tap interface created using the command
below, but IPv6 traffic is still handled by VPP plugin nodes:
lcp create | host-if  netns 
[tun]

Thanks,
Chetan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20044): https://lists.fd.io/g/vpp-dev/message/20044
Mute This Topic: https://lists.fd.io/mt/85245480/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPp 2101 (pcap trace)

2021-08-29 Thread chetan bhasin
Hi Ben,

Any idea why it's not working? Or what I am doing wrong?

Thanks,
Chetan



On Thu, Aug 26, 2021, 20:59 chetan bhasin 
wrote:

> Hi Benoit,
>
> I have tried those options but it is not working , it is only capturing
> the section classify filter rule that is based on dst ip address.
>
> 1) Tried with classify filter [Only dest ip with *10.20.36.168 *is coming
> in pcap]
>
> classify filter pcap mask l3 ip4 src match l3 ip4 src 10.20.36.168
>
> *classify filter pcap mask l3 ip4 dst match l3 ip4 dst 10.20.36.168*
>
> pcap trace rx tx max 100 filter file capture.pcap
>
>
> vpp# show classify filter
>
> Filter Used By Table(s)
>
> -- 
>
> packet tracer: first table none
>
> pcap rx/tx/drop:   first table 1
>
>
> vpp# show classify tables index 1 verbose
>
>   TableIdx  Sessions   NextTbl  NextNode
>
>  1 1-1-1
>
>   Heap: base 0x7f5db406c000, size 128k, locked, unmap-on-destroy, name
> 'classify'
>
>   page stats: page-size 4K, total 32, mapped 2, not-mapped 0,
> unknown 30
>
> numa 0: 2 pages, 8k bytes
>
>   total: 127.95K, used: 1.31K, free: 126.64K, trimmable: 126.56K
>
>   nbuckets 8, skip 1 match 2 flag 0 offset 0
>
>   mask 
>
>   linear-search buckets 0
>
>
> [7]: heap offset 1216, elts 2, normal
>
> 0: [1216]: next_index 0 advance 0 opaque 0 action 0 metadata 0
>
> k: 0a1424a8
>
> hits 45, last_heard 0.00
>
>
> 1 active elements
>
> 1 free lists
>
> 0 linear-search buckets
>
>
> [root@bfs-dl360g10-47-vm14 ~]# tcpdump -n -r /tmp/capture.pcap
>
> reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
>
> 01:00:33.671990 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id
> 26102, seq 9478, length 64
>
> 01:00:33.672834 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id
> 26102, seq 9478, length 64
>
> 01:00:33.672839 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id
> 26102, seq 9478, length 64
>
> 01:00:34.674316 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id
> 26102, seq 9479, length 64
>
> 01:00:34.675239 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id
> 26102, seq 9479, length 64
>
> 01:00:34.675239 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id
> 26102, seq 9479, length 64
>
> 01:00:35.676526 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id
> 26102, seq 9480, length 64
>
> 01:00:35.676565 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id
> 26102, seq 9480, length 64
>
> 01:00:35.676566 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id
> 26102, seq 9480, length 64
>
>
>
> 2) Default behaviour without any classify filter .
>
> vpp# *pcap trace rx tx max 100 file capture.pcap*
>
> vpp# *pcap trace off*
>
> Write 100 packets to /tmp/capture.pcap, and stop capture...
>
>
>
>
>
> *tcpdump -n -r /tmp/capture.pcap |grep ICMP*
>
> reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
>
> 01:02:36.635239 IP 10.20.36.168 > 10.20.35.126: ICMP echo request, id
> 26102, seq 11266, length 64
>
> 01:02:36.636018 IP 10.20.36.168 > 10.20.35.126: ICMP echo request, id
> 26102, seq 11266, length 64
>
> 01:02:36.636018 IP 10.20.36.168 > 10.20.35.126: ICMP echo request, id
> 26102, seq 11266, length 64
>
> 01:02:36.636108 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id
> 26102, seq 11266, length 64
>
> 01:02:36.636975 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id
> 26102, seq 11266, length 64
>
> 01:02:36.636975 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id
> 26102, seq 11266, length 64
>
>
> Regards,
>
> Chetan
>
>
>
> On Thu, Aug 26, 2021 at 6:00 PM chetan bhasin 
> wrote:
>
>> Hi Ben,
>>
>> Thanks for your response . Let me try this and get back to you.
>>
>> Thanks,
>> Chetan
>>
>> On Thu, Aug 26, 2021 at 5:52 PM Benoit Ganne (bganne) 
>> wrote:
>>
>>> > We want to capture all packets with src ip or dest ip as 10.20.30.40 .I
>>> > have tried with classify filter but no success. Looks like I am missing
>>> > something.
>>> > Can anybody please suggest the commands .
>>>
>>> Something like this should do it:
>>>
>>> ~# vppctl classify filter pcap mask l3 ip4 src match l3 ip4 src
>>> 10.20.30.40
>>> ~# vppctl classify filter pcap mask l3 ip4 dst match l3 ip4 dst
>>> 10.20.30.40
>>> ~# vppctl pcap trace rx tx max 1000 filter
>>> 
>>> ~# vppctl pcap trace rx tx off
>>>
>>> Best
>>> ben
>>>
>>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20043): https://lists.fd.io/g/vpp-dev/message/20043
Mute This Topic: https://lists.fd.io/mt/85158833/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] vpp 2101 , we have to reduce the frame size to 128 for our use case

2021-08-26 Thread chetan bhasin
Hi,

We have a use case where we have to reduce the frame-size to 128,
changing VLIB_FRAME_SIZE to 128 will do the trick ? or do we need to change
anything else other than this ? Please advise.

Thanks,
Chetan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20035): https://lists.fd.io/g/vpp-dev/message/20035
Mute This Topic: https://lists.fd.io/mt/85179707/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPp 2101 (pcap trace)

2021-08-26 Thread chetan bhasin
Hi Benoit,

I have tried those options but it is not working; it is only capturing per the
second classify filter rule, the one based on dst IP address.

1) Tried with classify filter [Only dest ip with *10.20.36.168 *is coming
in pcap]

classify filter pcap mask l3 ip4 src match l3 ip4 src 10.20.36.168

*classify filter pcap mask l3 ip4 dst match l3 ip4 dst 10.20.36.168*

pcap trace rx tx max 100 filter file capture.pcap


vpp# show classify filter

Filter Used By Table(s)

-- 

packet tracer: first table none

pcap rx/tx/drop:   first table 1


vpp# show classify tables index 1 verbose

  TableIdx  Sessions   NextTbl  NextNode

 1 1-1-1

  Heap: base 0x7f5db406c000, size 128k, locked, unmap-on-destroy, name
'classify'

  page stats: page-size 4K, total 32, mapped 2, not-mapped 0,
unknown 30

numa 0: 2 pages, 8k bytes

  total: 127.95K, used: 1.31K, free: 126.64K, trimmable: 126.56K

  nbuckets 8, skip 1 match 2 flag 0 offset 0

  mask 

  linear-search buckets 0


[7]: heap offset 1216, elts 2, normal

0: [1216]: next_index 0 advance 0 opaque 0 action 0 metadata 0

k: 0a1424a8

hits 45, last_heard 0.00


1 active elements

1 free lists

0 linear-search buckets


[root@bfs-dl360g10-47-vm14 ~]# tcpdump -n -r /tmp/capture.pcap

reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)

01:00:33.671990 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9478, length 64

01:00:33.672834 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9478, length 64

01:00:33.672839 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9478, length 64

01:00:34.674316 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9479, length 64

01:00:34.675239 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9479, length 64

01:00:34.675239 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9479, length 64

01:00:35.676526 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9480, length 64

01:00:35.676565 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9480, length 64

01:00:35.676566 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9480, length 64



2) Default behaviour without any classify filter .

vpp# *pcap trace rx tx max 100 file capture.pcap*

vpp# *pcap trace off*

Write 100 packets to /tmp/capture.pcap, and stop capture...





*tcpdump -n -r /tmp/capture.pcap |grep ICMP*

reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)

01:02:36.635239 IP 10.20.36.168 > 10.20.35.126: ICMP echo request, id
26102, seq 11266, length 64

01:02:36.636018 IP 10.20.36.168 > 10.20.35.126: ICMP echo request, id
26102, seq 11266, length 64

01:02:36.636018 IP 10.20.36.168 > 10.20.35.126: ICMP echo request, id
26102, seq 11266, length 64

01:02:36.636108 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 11266, length 64

01:02:36.636975 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 11266, length 64

01:02:36.636975 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 11266, length 64


Regards,

Chetan



On Thu, Aug 26, 2021 at 6:00 PM chetan bhasin 
wrote:

> Hi Ben,
>
> Thanks for your response . Let me try this and get back to you.
>
> Thanks,
> Chetan
>
> On Thu, Aug 26, 2021 at 5:52 PM Benoit Ganne (bganne) 
> wrote:
>
>> > We want to capture all packets with src ip or dest ip as 10.20.30.40 .I
>> > have tried with classify filter but no success. Looks like I am missing
>> > something.
>> > Can anybody please suggest the commands .
>>
>> Something like this should do it:
>>
>> ~# vppctl classify filter pcap mask l3 ip4 src match l3 ip4 src
>> 10.20.30.40
>> ~# vppctl classify filter pcap mask l3 ip4 dst match l3 ip4 dst
>> 10.20.30.40
>> ~# vppctl pcap trace rx tx max 1000 filter
>> 
>> ~# vppctl pcap trace rx tx off
>>
>> Best
>> ben
>>
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20032): https://lists.fd.io/g/vpp-dev/message/20032
Mute This Topic: https://lists.fd.io/mt/85158833/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPp 2101 (pcap trace)

2021-08-26 Thread chetan bhasin
Hi Ben,

Thanks for your response . Let me try this and get back to you.

Thanks,
Chetan

On Thu, Aug 26, 2021 at 5:52 PM Benoit Ganne (bganne) 
wrote:

> > We want to capture all packets with src ip or dest ip as 10.20.30.40 .I
> > have tried with classify filter but no success. Looks like I am missing
> > something.
> > Can anybody please suggest the commands .
>
> Something like this should do it:
>
> ~# vppctl classify filter pcap mask l3 ip4 src match l3 ip4 src 10.20.30.40
> ~# vppctl classify filter pcap mask l3 ip4 dst match l3 ip4 dst 10.20.30.40
> ~# vppctl pcap trace rx tx max 1000 filter
> 
> ~# vppctl pcap trace rx tx off
>
> Best
> ben
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20029): https://lists.fd.io/g/vpp-dev/message/20029
Mute This Topic: https://lists.fd.io/mt/85158833/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] VPp 2101 (pcap trace)

2021-08-26 Thread chetan bhasin
Hi,

We want to capture all packets with src IP or dest IP 10.20.30.40. I
have tried with a classify filter but had no success. Looks like I am missing
something.

Can anybody please suggest the commands .

Thanks,
Chetan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20024): https://lists.fd.io/g/vpp-dev/message/20024
Mute This Topic: https://lists.fd.io/mt/85158833/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] DPDK compilation under VPP 2101

2021-07-12 Thread chetan bhasin
Hello Everyone,

 I want to include additional DPDK patches for our use case. So whenever I
compile VPP as a non-root user, the DPDK compilation fails: it tries to
fetch the Meson tarball and run pip3, and the user does not have the
required permissions.
>>>>>>>>>Any suggestions regarding this would be helpful?
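
One workaround we are considering (untested sketch, assuming root is available
for the one-off dependency step):

sudo make install-ext-deps    # fetches Meson, runs pip3, installs vpp-ext-deps
make build-release            # no root needed from here on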

Thanks,
Chetan

On Fri, Mar 26, 2021 at 2:29 PM chetan bhasin via lists.fd.io
 wrote:

> Hello Everyone,
>
> I have two queries with respect to DPDK 20.11 lib compilations under VPP -
> 1) I want to enable KNI and Mellanox compilation , So I will be patching
> dpdk.mk file for the same.
> >>>>>>>>Is this the correct approach ?
>
> 2) I want to include additional DPDK patches for our use case. So whenever
> I compile VPP under username cbhasin , DPDK compilation fails as it is
> trying to checkout meason related tar ball and pip3, which does not have
> such permissions.
> >>>>>>>>>Any suggestion regarding this would be helpful?
>
>
> Regards,
> Chetan
>
> 
>
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19747): https://lists.fd.io/g/vpp-dev/message/19747
Mute This Topic: https://lists.fd.io/mt/84158427/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] VPP 2005 crash : How to get DPDK lib source code/symbol info in back-trace

2021-06-30 Thread chetan bhasin
Hi,

I am using VPP 20.05 with DPDK enabled. If the application crashes inside a
DPDK library API, we do not get proper symbol information for the rte_*
frames in gdb.

Will dynamically linking the DPDK libs resolve this?
Any other suggestions?
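
One knob that may help (a sketch; whether the 20.05 tree honors it end to end
is an assumption on my side): rebuild the external deps with DPDK debug info,
so the statically linked rte_* frames keep symbols:

make install-ext-deps DPDK_DEBUG=y   # DPDK_DEBUG lives in build/external/packages/dpdk.mk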

Thanks,
Chetan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19653): https://lists.fd.io/g/vpp-dev/message/19653
Mute This Topic: https://lists.fd.io/mt/83889120/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Facing Abort in VPP with Sanitizer

2021-05-25 Thread chetan bhasin
Hello Everyone,

We have back-merged the ASAN-related changes to vpp_1908. We are using
devtoolset-9 to compile the sanitizer build. The application aborts in
File : src/vpp/api/api_format.c
Function : void vat_api_hookup (vat_main_t * vam)
Code :
22191   /* API messages we can send */
22192 #define _(n,h) hash_set_mem (vam->function_by_name, #n, api_##n);
*22193   foreach_vpe_api_msg;*
22194 #undef _
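
For illustration, with a single entry in the message list, say
#define foreach_vpe_api_msg _(show_version, "")
the block above expands (roughly) to:

hash_set_mem (vam->function_by_name, "show_version", api_show_version);

which matches the _hash_set3/lookup frames in the backtrace below.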

 Any help here would be appreciated.


*console logs*

Starting program: /opt/opwv/integra/99.9/tools/vpp_asan/./bin/vpp -c
root_startup.conf
Missing separate debuginfo for
/opt/opwv/integra/99.9/tools/vpp_asan/bin/../lib/libasan.so.5
Try: yum --enablerepo='*debug*' install
/usr/lib/debug/.build-id/1e/00d6f3d73b509f1b159047be658313f3dc681d.debug
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
==10089==AddressSanitizer: libc interceptors initialized
|| `[0x10007fff8000, 0x7fff]` || HighMem||
|| `[0x02008fff7000, 0x10007fff7fff]` || HighShadow ||
|| `[0x8fff7000, 0x02008fff6fff]` || ShadowGap  ||
|| `[0x7fff8000, 0x8fff6fff]` || LowShadow  ||
|| `[0x, 0x7fff7fff]` || LowMem ||
MemToShadow(shadow): 0x8fff7000 0x91ff6dff 0x004091ff6e00
0x02008fff6fff
redzone=16
max_redzone=2048
quarantine_size_mb=256M
thread_local_quarantine_size_kb=1024K
malloc_context_size=30
SHADOW_SCALE: 3
SHADOW_GRANULARITY: 8
SHADOW_OFFSET: 0x7fff8000
==10089==Installed the sigaction for signal 11
==10089==Installed the sigaction for signal 7
==10089==Installed the sigaction for signal 8
==10089==T0: stack [0x7f7ff000,0x7000) size 0x80;
local=0x7fffdee4
AddressSanitizer: reading suppressions file at
/opt/opwv/integra/99.9/tools/vpp_asan/asan-suppression
==10089==AddressSanitizer Init done
==10089==poisoning: 0x7fffce20 1000
vlib_plugin_early_init:361: plugin path
/opt/opwv/integra/SystemActivePath/tools/vpp_asan/lib/vpp_plugins

*GDB back trace*
*#0  0x73bce8c9 in clib_memcpy_fast (dst=0x1881349000d2f00,
src=0x13010b390b3b0b, n=4182438362655424791)*
*at
/vlad/p4/gcc9/ngp/mainline_gcc/third-party/vpp/vpp_1908/src/vppinfra/memcpy_sse3.h:187*
*#1  0x73be0758 in lookup (v=0x7fffae6b2be8, key=6960224, op=SET,
new_value=0x7fffaea3a2d0, old_value=0x0)*
*at
/vlad/p4/gcc9/ngp/mainline_gcc/third-party/vpp/vpp_1908/src/vppinfra/hash.c:622*
*#2  0x73be1a9c in _hash_set3 (v=0x7fffae6b2be8, key=6960224,
value=0x7fffaea3a2d0, old_value=0x0)*
*at
/vlad/p4/gcc9/ngp/mainline_gcc/third-party/vpp/vpp_1908/src/vppinfra/hash.c:851*
*#3  0x00632b86 in vat_api_hookup (vam=0x70c180 )*
*at
/vlad/p4/gcc9/ngp/mainline_gcc/third-party/vpp/vpp_1908/src/vpp/api/api_format.c:22193*
*#4  0x00655e85 in vat_api_hookup_shim (vm=0x748d4640
)*
*at
/vlad/p4/gcc9/ngp/mainline_gcc/third-party/vpp/vpp_1908/src/vpp/api/api_format.c:22217*
*#5  0x744b4ea2 in call_init_exit_functions_internal
(vm=0x748d4640 ,*
*headp=0x7491b0c8 , call_once=1,
do_sort=1)*
*at
/vlad/p4/gcc9/ngp/mainline_gcc/third-party/vpp/vpp_1908/src/vlib/init.c:350*
*#6  0x744b4f1d in vlib_call_init_exit_functions (vm=0x748d4640
,*
*headp=0x7491b0c8 , call_once=1)*
*at
/vlad/p4/gcc9/ngp/mainline_gcc/third-party/vpp/vpp_1908/src/vlib/init.c:364*
*#7  0x77fa1777 in vl_api_clnt_process (vm=0x748d4640
, node=0x7fffaea36000, f=0x0)*
*at
/vlad/p4/gcc9/ngp/mainline_gcc/third-party/vpp/vpp_1908/src/vlibmemory/vlib_api.c:285*
*#8  0x7450c727 in vlib_process_bootstrap (_a=140736129047184)*
*at
/vlad/p4/gcc9/ngp/mainline_gcc/third-party/vpp/vpp_1908/src/vlib/main.c:2911*
*#9  0x73bec458 in clib_calljmp () from
/opt/opwv/integra/99.9/tools/vpp_asan/bin/../lib/libvppinfra.so.19.08.1*
*#10 0x7fffaefa9a40 in ?? ()*


Thanks,
Chetan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19449): https://lists.fd.io/g/vpp-dev/message/19449
Mute This Topic: https://lists.fd.io/mt/83071004/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Run VPP as non root user

2021-05-20 Thread chetan bhasin
unix {
  cli-listen 
  gid <>
}
Check the access rights on the cli socket path.
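
A fuller sketch, assuming the operator group is named "vpp" (adjust to your
setup):

unix {
  cli-listen /run/vpp/cli.sock
  gid vpp
}

# then put the non-root user in that group and log in again:
usermod -a -G vpp <username>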

On Thu, May 20, 2021 at 3:57 PM  wrote:

> Hi All,
> We are using VPP version 21.01 on our setup. We are able to  run vpp as
> root user , but getting the following error while running VPP as non root
> user:
> $ vppctl
> clib_socket_init: connect (fd 3, '/run/vpp/cli.sock'): Permission denied
>
> Can you please let us know how can we run VPP on our machine as  non root
> user ?
>
> Thanks and Regards,
> Ashish
> 
>
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19418): https://lists.fd.io/g/vpp-dev/message/19418
Mute This Topic: https://lists.fd.io/mt/82958340/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Do Vrrp support N+M HA model as well

2021-05-05 Thread chetan bhasin
Hi,

I just want to understand whether VRRP also supports M+N HA model as well ?
Do we have sample configuration for the same?

Thanks,
Chetan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19331): https://lists.fd.io/g/vpp-dev/message/19331
Mute This Topic: https://lists.fd.io/mt/82599771/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] How to enable Mellanox compilation in VPP 21.01

2021-04-15 Thread chetan bhasin
Hi Mohammed ,

After applying the patch you have mentioned , I am no longer facing the
compilation issue.

Thanks for your support.

With static RDMA linking, do we need to do anything extra when bringing up
our application on a server that has a Mellanox NIC?

Thanks & Regards,
Chetan

On Wed, Apr 14, 2021 at 8:33 PM chetan bhasin via lists.fd.io
 wrote:

> Thanks a lot Mohammed!
>
> Let me try with the rdma-core and the patch you mentioned.
>
> Thanks,
> Chetan
>
> On Wed, Apr 14, 2021 at 6:20 PM Mohammed Hawari 
> wrote:
>
>> Hi Chetan
>>
>> You are building DPDK with Mellanox support relying on dlopen and the
>> glue library which is fine. The default approach implemented in the VPP
>> build system consists of statically linking DPDK with the libraries
>> provided by rdma-core (as it’s also built by the VPP build system). The
>> default approach should work fine, provided this patch
>> https://gerrit.fd.io/r/c/vpp/+/31876 is applied (specific to CentOS).
>> But what you did should work just fine. Now the reason why you need to comment
>> dpdk_depends is kind of mysterious here…
>>
>> Regards,
>>
>> Mohammed
>>
>> On 14 Apr 2021, at 10:07, chetan bhasin 
>> wrote:
>>
>> Hi,
>>
>> I have to do the following to enable Mellanox compilation under VPP21.01
>> with dpdk 20.11.
>>
>> If I dont comment rdma-core dependencies , it will lead to undefined
>> symbols .
>>
>> Can anybody please correct , is this the right way to do it?
>>
>> -DPDK_MLX4_PMD?= n
>> -DPDK_MLX5_PMD?= n
>> -DPDK_MLX5_COMMON_PMD ?= n
>> *+DPDK_MLX4_PMD:= y*
>> *+DPDK_MLX5_PMD:= y*
>> *+DPDK_MLX5_COMMON_PMD = y*
>>
>> -DPDK_MLX_IBV_LINK?= static
>> *+DPDK_MLX_IBV_LINK:= dlopen*
>>
>> -dpdk_depends:= rdma-core $(if $(ARCH_X86_64), ipsec-mb)
>> *+#dpdk_depends   := rdma-core $(if $(ARCH_X86_64), ipsec-mb)*
>>
>> Thanks,
>> Chetan
>>
>> On Tue, Apr 6, 2021 at 4:10 PM chetan bhasin via lists.fd.io
>>  wrote:
>>
>>> Thanks Guys !
>>>
>>> We have a requirement to compile vppp with DPDK mellanox driver .
>>>
>>> Right now issue what i understood is , when we compile vpp_2101  , using
>>> make build-release , it builds mellanox , but when it is trying to install
>>> vpp-ext-dep rpm , it got failed because glue library created of mellanox
>>> has some undefined symbols ,
>>> For your reference , i have set following in dpdk.mk file
>>>
>>> -DPDK_MLX4_PMD?= n
>>> -DPDK_MLX5_PMD?= n
>>> +DPDK_MLX4_PMD:= y
>>> +DPDK_MLX5_PMD:= y
>>> -DPDK_MLX_IBV_LINK?= static
>>> +DPDK_MLX_IBV_LINK:= shared
>>>
>>>  please check below logs for reference-
>>>
>>> FAILED: drivers/net/mlx4/librte_net_mlx4_glue.so.21.0
>>> cc  -o drivers/net/mlx4/librte_net_mlx4_glue.so.21.0
>>> 'drivers/net/mlx4/8672f8e@@rte_net_mlx4_glue@sha/mlx4_glue.c.o'
>>> -Wl,--as-needed -Wl,--no-undefined -Wl,-O1 -shared -fPIC -Wl,--start-group
>>> -Wl,-soname,librte_net_mlx4_glue.so.21.0 -Wl,--no-as-needed -pthread -lm
>>> -ldl -lnuma -Wl,-export-dynamic -Wl,-h,librte_net_mlx4_glue.so.21.0
>>> /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/libmlx4.so
>>> /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/libibverbs.so
>>> -Wl,--end-group
>>> drivers/net/mlx4/8672f8e@@rte_net_mlx4_glue@sha/mlx4_glue.c.o: In
>>> function `mlx4_glue_reg_mr':
>>> mlx4_glue.c:(.text+0x277): undefined reference to `ibv_reg_mr_iova2'
>>> collect2: error: ld returned 1 exit status
>>> [8/547] Compiling C object 'drivers/a715181@@tmp_rte_net_mlx4@sta
>>> /net_mlx4_mlx4_flow.c.o'
>>>
>>> ninja: build stopped: subcommand failed.
>>>
>>> On Thu, Apr 1, 2021 at 4:33 PM Юрий Иванов 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Or you can use the native rdma driver as written in this post and build vpp
>>>> as usual. 😉
>>>>
>>>> Regards.
>>>> --
>>>> *From:* vpp-dev@lists.fd.io  on behalf of Mohammed
>>>> Hawari 
>>>> *Sent:* 1 April 2021 13:16
>>>> *To:* chetan bhasin 
>>>> *Cc:* vpp-dev 
>>>> *Subject:* Re: [vpp-dev] Ho

Re: [vpp-dev] How to enable Mellanox compilation in VPP 21.01

2021-04-14 Thread chetan bhasin
Thanks a lot Mohammed!

Let me try with the rdma-core and the patch you mentioned.

Thanks,
Chetan

On Wed, Apr 14, 2021 at 6:20 PM Mohammed Hawari  wrote:

> Hi Chetan
>
> You are building DPDK with Mellanox support relying on dlopen and the glue
> library which is fine. The default approach implemented in the VPP build
> system consists of statically linking DPDK with the libraries provided
> by rdma-core (as it’s also built by the VPP build system). The default
> approach should work fine, provided this patch
> https://gerrit.fd.io/r/c/vpp/+/31876 is applied (specific to CentOS). But
> what you did should work just fine. Now the reason why you need to comment
> dpdk_depends is kind of mysterious here…
>
> Regards,
>
> Mohammed
>
> On 14 Apr 2021, at 10:07, chetan bhasin 
> wrote:
>
> Hi,
>
> I have to do the following to enable Mellanox compilation under VPP21.01
> with dpdk 20.11.
>
> If I dont comment rdma-core dependencies , it will lead to undefined
> symbols .
>
> Can anybody please correct , is this the right way to do it?
>
> -DPDK_MLX4_PMD?= n
> -DPDK_MLX5_PMD?= n
> -DPDK_MLX5_COMMON_PMD ?= n
> *+DPDK_MLX4_PMD:= y*
> *+DPDK_MLX5_PMD:= y*
> *+DPDK_MLX5_COMMON_PMD = y*
>
> -DPDK_MLX_IBV_LINK?= static
> *+DPDK_MLX_IBV_LINK:= dlopen*
>
> -dpdk_depends:= rdma-core $(if $(ARCH_X86_64), ipsec-mb)
> *+#dpdk_depends       := rdma-core $(if $(ARCH_X86_64), ipsec-mb)*
>
> Thanks,
> Chetan
>
> On Tue, Apr 6, 2021 at 4:10 PM chetan bhasin via lists.fd.io
>  wrote:
>
>> Thanks Guys !
>>
>> We have a requirement to compile vppp with DPDK mellanox driver .
>>
>> Right now issue what i understood is , when we compile vpp_2101  , using
>> make build-release , it builds mellanox , but when it is trying to install
>> vpp-ext-dep rpm , it got failed because glue library created of mellanox
>> has some undefined symbols ,
>> For your reference , i have set following in dpdk.mk file
>>
>> -DPDK_MLX4_PMD?= n
>> -DPDK_MLX5_PMD?= n
>> +DPDK_MLX4_PMD:= y
>> +DPDK_MLX5_PMD:= y
>> -DPDK_MLX_IBV_LINK?= static
>> +DPDK_MLX_IBV_LINK:= shared
>>
>>  please check below logs for reference-
>>
>> FAILED: drivers/net/mlx4/librte_net_mlx4_glue.so.21.0
>> cc  -o drivers/net/mlx4/librte_net_mlx4_glue.so.21.0
>> 'drivers/net/mlx4/8672f8e@@rte_net_mlx4_glue@sha/mlx4_glue.c.o'
>> -Wl,--as-needed -Wl,--no-undefined -Wl,-O1 -shared -fPIC -Wl,--start-group
>> -Wl,-soname,librte_net_mlx4_glue.so.21.0 -Wl,--no-as-needed -pthread -lm
>> -ldl -lnuma -Wl,-export-dynamic -Wl,-h,librte_net_mlx4_glue.so.21.0
>> /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/libmlx4.so
>> /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/libibverbs.so
>> -Wl,--end-group
>> drivers/net/mlx4/8672f8e@@rte_net_mlx4_glue@sha/mlx4_glue.c.o: In
>> function `mlx4_glue_reg_mr':
>> mlx4_glue.c:(.text+0x277): undefined reference to `ibv_reg_mr_iova2'
>> collect2: error: ld returned 1 exit status
>> [8/547] Compiling C object 'drivers/a715181@@tmp_rte_net_mlx4@sta
>> /net_mlx4_mlx4_flow.c.o'
>>
>> ninja: build stopped: subcommand failed.
>>
>> On Thu, Apr 1, 2021 at 4:33 PM Юрий Иванов 
>> wrote:
>>
>>> Hi,
>>>
>>> Or you can use the native rdma driver as written in this post and build vpp
>>> as usual. 😉
>>>
>>> Regards.
>>> --
>>> *From:* vpp-dev@lists.fd.io  on behalf of Mohammed
>>> Hawari 
>>> *Sent:* 1 April 2021 13:16
>>> *To:* chetan bhasin 
>>> *Cc:* vpp-dev 
>>> *Subject:* Re: [vpp-dev] How to enable Mellanox compilation in VPP 21.01
>>>
>>>
>>> Hi Chetan,
>>>
>>> If you are using CentOS, I’d suggest to cherry-pick
>>> https://gerrit.fd.io/r/c/vpp/+/31876. Also please change the dpdk.mk to
>>> also set DPDK_MLX5_COMMON_PMD = y. I hope this solves your issue.
>>> Otherwise, please consider using the rdma native driver that does not rely
>>> on DPDK.
>>>
>>> Regards
>>>
>>> Mohammed
>>>
>>> On 30 Mar 2021, at 12:41, chetan bhasin 
>>> wrote:
>>>
>>> Hello Everyone,
>>>
>>> I am upgrading to vpp 2101 . I am facing a compilation issue after
>>> enabling Mellanox compilation in dpdk.mk .
>>>
>&

Re: [vpp-dev] How to enable Mellanox compilation in VPP 21.01

2021-04-14 Thread chetan bhasin
Hi,

I have to do the following to enable Mellanox compilation under VPP 21.01
with DPDK 20.11.

If I don't comment out the rdma-core dependency, it leads to undefined
symbols.

Can anybody please confirm whether this is the right way to do it?

-DPDK_MLX4_PMD?= n
-DPDK_MLX5_PMD?= n
-DPDK_MLX5_COMMON_PMD ?= n
*+DPDK_MLX4_PMD:= y*
*+DPDK_MLX5_PMD:= y*
*+DPDK_MLX5_COMMON_PMD = y*

-DPDK_MLX_IBV_LINK?= static
*+DPDK_MLX_IBV_LINK:= dlopen*

-dpdk_depends:= rdma-core $(if $(ARCH_X86_64), ipsec-mb)
*+#dpdk_depends   := rdma-core $(if $(ARCH_X86_64), ipsec-mb)*
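
After editing build/external/packages/dpdk.mk the external deps need a
rebuild; roughly (exact targets may vary between trees):

make -C build/external clean
make install-ext-deps
vppctl show dpdk version    # sanity-check which DPDK/PMDs were built in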

Thanks,
Chetan

On Tue, Apr 6, 2021 at 4:10 PM chetan bhasin via lists.fd.io
 wrote:

> Thanks Guys !
>
> We have a requirement to compile vppp with DPDK mellanox driver .
>
> Right now issue what i understood is , when we compile vpp_2101  , using
> make build-release , it builds mellanox , but when it is trying to install
> vpp-ext-dep rpm , it got failed because glue library created of mellanox
> has some undefined symbols ,
> For your reference , i have set following in dpdk.mk file
>
> -DPDK_MLX4_PMD?= n
> -DPDK_MLX5_PMD?= n
> +DPDK_MLX4_PMD:= y
> +DPDK_MLX5_PMD:= y
> -DPDK_MLX_IBV_LINK?= static
> +DPDK_MLX_IBV_LINK:= shared
>
>  please check below logs for reference-
>
> FAILED: drivers/net/mlx4/librte_net_mlx4_glue.so.21.0
> cc  -o drivers/net/mlx4/librte_net_mlx4_glue.so.21.0
> 'drivers/net/mlx4/8672f8e@@rte_net_mlx4_glue@sha/mlx4_glue.c.o'
> -Wl,--as-needed -Wl,--no-undefined -Wl,-O1 -shared -fPIC -Wl,--start-group
> -Wl,-soname,librte_net_mlx4_glue.so.21.0 -Wl,--no-as-needed -pthread -lm
> -ldl -lnuma -Wl,-export-dynamic -Wl,-h,librte_net_mlx4_glue.so.21.0
> /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/libmlx4.so
> /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/libibverbs.so
> -Wl,--end-group
> drivers/net/mlx4/8672f8e@@rte_net_mlx4_glue@sha/mlx4_glue.c.o: In
> function `mlx4_glue_reg_mr':
> mlx4_glue.c:(.text+0x277): undefined reference to `ibv_reg_mr_iova2'
> collect2: error: ld returned 1 exit status
> [8/547] Compiling C object 'drivers/a715181@@tmp_rte_net_mlx4@sta
> /net_mlx4_mlx4_flow.c.o'
>
> ninja: build stopped: subcommand failed.
>
> On Thu, Apr 1, 2021 at 4:33 PM Юрий Иванов  wrote:
>
>> Hi,
>>
>> Or you can use nativa rdma driver as written in this post and build vpp
>> as usual. 😉
>>
>> Regards.
>> --
>> *От:* vpp-dev@lists.fd.io  от имени Mohammed Hawari
>> 
>> *Отправлено:* 1 апреля 2021 г. 13:16
>> *Кому:* chetan bhasin 
>> *Копия:* vpp-dev 
>> *Тема:* Re: [vpp-dev] How to enable Mellanox compilation in VPP 21.01
>>
>>
>> Hi Chetan,
>>
>> If you are using CentOS, I’d suggest to cherry-pick
>> https://gerrit.fd.io/r/c/vpp/+/31876. Also please change the dpdk.mk to
>> also set DPDK_MLX5_COMMON_PMD = y. I hope this solves your issue.
>> Otherwise, please consider using the rdma native driver that does not rely
>> on DPDK.
>>
>> Regards
>>
>> Mohammed
>>
>> On 30 Mar 2021, at 12:41, chetan bhasin 
>> wrote:
>>
>> Hello Everyone,
>>
>> I am upgrading to vpp 2101 . I am facing a compilation issue after
>> enabling Mellanox compilation in dpdk.mk .
>>
>> --- a/build/external/packages/dpdk.mk
>> +++ b/build/external/packages/dpdk.mk
>> @@ -14,8 +14,8 @@
>>  DPDK_PKTMBUF_HEADROOM?= 128
>>  DPDK_USE_LIBBSD  ?= n
>>  DPDK_DEBUG   ?= n
>> *-DPDK_MLX4_PMD?= n*
>> *-DPDK_MLX5_PMD?= n*
>> *+DPDK_MLX4_PMD?= y*
>> *+DPDK_MLX5_PMD?= y*
>>  DPDK_MLX5_COMMON_PMD ?= n
>>
>>
>> Getting below errors . Anybody please help here ?
>>
>> [1344/1846] Compiling C object 'drivers/a715181@@tmp_rte_net_nfp@sta
>> /net_nfp_nfpcore_nfp_nsp_cmds.c.o'
>> [1345/1846] Linking target drivers/librte_net_mlx4.so.21.0
>> FAILED: drivers/librte_net_mlx4.so.21.0
>> cc  -o drivers/librte_net_mlx4.so.21.0 
>> 'drivers/a715181@@rte_net_mlx4@sha/meson-generated_.._rte_net_mlx4.pmd.c.o'
>> 'drivers/a715181@@tmp_r
>> te_net_mlx4@sta/net_mlx4_mlx4.c.o' 
>> 'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_ethdev.c.o'
>> 'drivers/a715181@@tmp_rte_net_mlx4@sta
>> /net_mlx4_mlx4_flow.c.o' 
>> 'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_m

Re: [vpp-dev] How to enable Mellanox compilation in VPP 21.01

2021-04-06 Thread chetan bhasin
Thanks Guys !

We have a requirement to compile vpp with the DPDK Mellanox driver.

The issue as I understand it: when we compile vpp_2101 using
make build-release, it builds Mellanox, but installing the
vpp-ext-deps rpm fails because the Mellanox glue library has some
undefined symbols.
For your reference, I have set the following in the dpdk.mk file:

-DPDK_MLX4_PMD?= n
-DPDK_MLX5_PMD?= n
+DPDK_MLX4_PMD:= y
+DPDK_MLX5_PMD:= y
-DPDK_MLX_IBV_LINK?= static
+DPDK_MLX_IBV_LINK:= shared

 please check below logs for reference-

FAILED: drivers/net/mlx4/librte_net_mlx4_glue.so.21.0
cc  -o drivers/net/mlx4/librte_net_mlx4_glue.so.21.0
'drivers/net/mlx4/8672f8e@@rte_net_mlx4_glue@sha/mlx4_glue.c.o'
-Wl,--as-needed -Wl,--no-undefined -Wl,-O1 -shared -fPIC -Wl,--start-group
-Wl,-soname,librte_net_mlx4_glue.so.21.0 -Wl,--no-as-needed -pthread -lm
-ldl -lnuma -Wl,-export-dynamic -Wl,-h,librte_net_mlx4_glue.so.21.0
/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/libmlx4.so
/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/libibverbs.so
-Wl,--end-group
drivers/net/mlx4/8672f8e@@rte_net_mlx4_glue@sha/mlx4_glue.c.o: In function
`mlx4_glue_reg_mr':
mlx4_glue.c:(.text+0x277): undefined reference to `ibv_reg_mr_iova2'
collect2: error: ld returned 1 exit status
[8/547] Compiling C object 'drivers/a715181@@tmp_rte_net_mlx4@sta
/net_mlx4_mlx4_flow.c.o'

ninja: build stopped: subcommand failed.

On Thu, Apr 1, 2021 at 4:33 PM Юрий Иванов  wrote:

> Hi,
>
> Or you can use the native rdma driver as written in this post and build vpp as
> usual. 😉
>
> Regards.
> --
> *From:* vpp-dev@lists.fd.io  on behalf of Mohammed Hawari <
> moham...@hawari.fr>
> *Sent:* 1 April 2021 13:16
> *To:* chetan bhasin 
> *Cc:* vpp-dev 
> *Subject:* Re: [vpp-dev] How to enable Mellanox compilation in VPP 21.01
>
>
> Hi Chetan,
>
> If you are using CentOS, I’d suggest to cherry-pick
> https://gerrit.fd.io/r/c/vpp/+/31876. Also please change the dpdk.mk to
> also set DPDK_MLX5_COMMON_PMD = y. I hope this solves your issue.
> Otherwise, please consider using the rdma native driver that does not rely
> on DPDK.
>
> Regards
>
> Mohammed
>
> On 30 Mar 2021, at 12:41, chetan bhasin 
> wrote:
>
> Hello Everyone,
>
> I am upgrading to vpp 2101 . I am facing a compilation issue after
> enabling Mellanox compilation in dpdk.mk .
>
> --- a/build/external/packages/dpdk.mk
> +++ b/build/external/packages/dpdk.mk
> @@ -14,8 +14,8 @@
>  DPDK_PKTMBUF_HEADROOM?= 128
>  DPDK_USE_LIBBSD  ?= n
>  DPDK_DEBUG   ?= n
> *-DPDK_MLX4_PMD?= n*
> *-DPDK_MLX5_PMD?= n*
> *+DPDK_MLX4_PMD?= y*
> *+DPDK_MLX5_PMD?= y*
>  DPDK_MLX5_COMMON_PMD ?= n
>
>
> Getting below errors . Anybody please help here ?
>
> [1344/1846] Compiling C object 'drivers/a715181@@tmp_rte_net_nfp@sta
> /net_nfp_nfpcore_nfp_nsp_cmds.c.o'
> [1345/1846] Linking target drivers/librte_net_mlx4.so.21.0
> FAILED: drivers/librte_net_mlx4.so.21.0
> cc  -o drivers/librte_net_mlx4.so.21.0 
> 'drivers/a715181@@rte_net_mlx4@sha/meson-generated_.._rte_net_mlx4.pmd.c.o'
> 'drivers/a715181@@tmp_r
> te_net_mlx4@sta/net_mlx4_mlx4.c.o' 
> 'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_ethdev.c.o'
> 'drivers/a715181@@tmp_rte_net_mlx4@sta
> /net_mlx4_mlx4_flow.c.o' 
> 'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_intr.c.o'
> 'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_ml
> x4_mp.c.o' 'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_mr.c.o'
> 'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_rxq.c.o' 'dri
> vers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_rxtx.c.o'
> 'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_txq.c.o'
> 'drivers/a715181@
> @tmp_rte_net_mlx4@sta/net_mlx4_mlx4_utils.c.o' 'drivers/a715181@
> @tmp_rte_net_mlx4@sta/net_mlx4_mlx4_glue.c.o' -Wl,--as-needed -Wl,--no-und
> efined -Wl,-O1 -shared -fPIC -Wl,--start-group
> -Wl,-soname,librte_net_mlx4.so.21 -Wl,--no-as-needed -pthread -lm -ldl
> -lnuma lib/librte_et
> hdev.so.21.0 lib/librte_eal.so.21.0 lib/librte_kvargs.so.21.0
> lib/librte_telemetry.so.21.0 lib/librte_net.so.21.0 lib/librte_mbuf.so.21.0
> lib/librte_mempool.so.21.0 lib/librte_ring.so.21.0
> lib/librte_meter.so.21.0 drivers/librte_bus_pci.so.21.0
> lib/librte_pci.so.21.0 drivers/
> librte_bus_vdev.so.21.0
> -Wl,--version-script=/root/tmp/vpp/build-root/build-vpp-native/external/src-dpdk/drivers/net/mlx4/vers

[vpp-dev] How to enable Mellanox compilation in VPP 21.01

2021-03-30 Thread chetan bhasin
Hello Everyone,

I am upgrading to vpp 21.01. I am facing a compilation issue after enabling
the Mellanox build in dpdk.mk.

--- a/build/external/packages/dpdk.mk
+++ b/build/external/packages/dpdk.mk
@@ -14,8 +14,8 @@
 DPDK_PKTMBUF_HEADROOM?= 128
 DPDK_USE_LIBBSD  ?= n
 DPDK_DEBUG   ?= n
*-DPDK_MLX4_PMD?= n*
*-DPDK_MLX5_PMD?= n*
*+DPDK_MLX4_PMD?= y*
*+DPDK_MLX5_PMD?= y*
 DPDK_MLX5_COMMON_PMD ?= n


Getting the errors below. Can anybody please help here?

[1344/1846] Compiling C object 'drivers/a715181@@tmp_rte_net_nfp@sta
/net_nfp_nfpcore_nfp_nsp_cmds.c.o'
[1345/1846] Linking target drivers/librte_net_mlx4.so.21.0
FAILED: drivers/librte_net_mlx4.so.21.0
cc  -o drivers/librte_net_mlx4.so.21.0
'drivers/a715181@@rte_net_mlx4@sha/meson-generated_.._rte_net_mlx4.pmd.c.o'
'drivers/a715181@@tmp_r
te_net_mlx4@sta/net_mlx4_mlx4.c.o'
'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_ethdev.c.o'
'drivers/a715181@@tmp_rte_net_mlx4@sta
/net_mlx4_mlx4_flow.c.o'
'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_intr.c.o'
'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_ml
x4_mp.c.o' 'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_mr.c.o'
'drivers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_rxq.c.o' 'dri
vers/a715181@@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_rxtx.c.o' 'drivers/a715181@
@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_txq.c.o' 'drivers/a715181@
@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_utils.c.o' 'drivers/a715181@
@tmp_rte_net_mlx4@sta/net_mlx4_mlx4_glue.c.o' -Wl,--as-needed -Wl,--no-und
efined -Wl,-O1 -shared -fPIC -Wl,--start-group
-Wl,-soname,librte_net_mlx4.so.21 -Wl,--no-as-needed -pthread -lm -ldl
-lnuma lib/librte_et
hdev.so.21.0 lib/librte_eal.so.21.0 lib/librte_kvargs.so.21.0
lib/librte_telemetry.so.21.0 lib/librte_net.so.21.0 lib/librte_mbuf.so.21.0
lib/librte_mempool.so.21.0 lib/librte_ring.so.21.0 lib/librte_meter.so.21.0
drivers/librte_bus_pci.so.21.0 lib/librte_pci.so.21.0 drivers/
librte_bus_vdev.so.21.0
-Wl,--version-script=/root/tmp/vpp/build-root/build-vpp-native/external/src-dpdk/drivers/net/mlx4/version.map
-lpt
hread -L/root/tmp/vpp/build-root/install-vpp-native/external/lib64
-l:libbnxt_re-rdmav25.a -l:libcxgb4-rdmav25.a -l:libefa.a -l:libhns-rdm
av25.a -l:libi40iw-rdmav25.a -l:libmlx4.a -l:libmlx5.a
-l:libmthca-rdmav25.a -l:libocrdma-rdmav25.a -l:libqedr-rdmav25.a
-l:libvmw_pvrdma-rdmav25.a -l:libhfi1verbs-rdmav25.a
-l:libipathverbs-rdmav25.a -l:librxe-rdmav25.a -l:libsiw-rdmav25.a
-l:libibverbs.a -l:librdma_util.a -l:libccan.a -Wl,--end-group
'-Wl,-rpath,$ORIGIN/../lib:$ORIGIN/'
-Wl,-rpath-link,/root/tmp/vpp/build-root/build-vpp-native/external/build-dpdk/lib
-Wl,-rpath-link,/root/tmp/vpp/build-root/build-vpp-native/external/build-dpdk/drivers
/bin/ld: cannot find -l:libbnxt_re-rdmav25.a
/bin/ld: cannot find -l:libcxgb4-rdmav25.a
/bin/ld: cannot find -l:libefa.a
/bin/ld: cannot find -l:libhns-rdmav25.a
/bin/ld: cannot find -l:libi40iw-rdmav25.a
/bin/ld: cannot find -l:libmlx4.a
/bin/ld: cannot find -l:libmlx5.a
/bin/ld: cannot find -l:libmthca-rdmav25.a
/bin/ld: cannot find -l:libocrdma-rdmav25.a
/bin/ld: cannot find -l:libqedr-rdmav25.a
/bin/ld: cannot find -l:libvmw_pvrdma-rdmav25.a
/bin/ld: cannot find -l:libhfi1verbs-rdmav25.a
/bin/ld: cannot find -l:libipathverbs-rdmav25.a
/bin/ld: cannot find -l:librxe-rdmav25.a
/bin/ld: cannot find -l:libsiw-rdmav25.a
/bin/ld: cannot find -l:libibverbs.a
/bin/ld: cannot find -l:librdma_util.a
/bin/ld: cannot find -l:libccan.a
collect2: error: ld returned 1 exit status

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19058): https://lists.fd.io/g/vpp-dev/message/19058
Mute This Topic: https://lists.fd.io/mt/81718770/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] DPDK compilation under VPP 2101

2021-03-26 Thread chetan bhasin
Hello Everyone,

I have two queries with respect to the DPDK 20.11 lib compilation under VPP:
1) I want to enable KNI and Mellanox compilation, so I will be patching the
dpdk.mk file for the same.
Is this the correct approach?

2) I want to include additional DPDK patches for our use case. So whenever
I compile VPP under the username cbhasin, the DPDK compilation fails: it
tries to fetch the Meson tarball and run pip3, and the user does not have
the required permissions.
>Any suggestion regarding this would be helpful?


Regards,
Chetan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19035): https://lists.fd.io/g/vpp-dev/message/19035
Mute This Topic: https://lists.fd.io/mt/81623611/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Requirement of predictable interface name

2021-03-24 Thread chetan bhasin
Thanks a lot Venu.

It works for me.
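
For the archives, here is that stanza cleaned up (the PCI address and the name
are just examples; substitute your own):

dpdk {
  dev 0000:19:00.1 {
    name vpp-intf19/0/1
  }
}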

Regards,
Chetan



On Wed, Mar 24, 2021 at 12:21 PM Venumadhav Josyula 
wrote:

> Hi Chetan,
>
> That ability is already there. Right now we are using vpp 20.09 and
> intend to go to 21.x also; we do it this way:
> ...
> dpdk {
> dev :19:00.1 {
> name vpp-intf19/0/1  <--- "you can give whatever you want; we have
> chosen bus,slot,fn appended to vpp-intf"
>  }
> ..
>
> }
>
> We do it this way and it works for us. The generation of startup.conf
> happens via one of our tasks (a unix process). I guess that should be good
> enough?
>
> Thanks,
> Regards,
> Venu
>
>
> On Wed, 24 Mar 2021 at 11:31, chetan bhasin 
> wrote:
>
>> Hi Team,
>>
>> We have a requirement to have a predictable interface name , say
>> "device_" , instead of GigabitEthernet/TenGigabitEthernet etc
>> , so that external scripts can create vpp.conf automatically.
>>
>> One way is to modify plugins/dpdk/device/format.c code as per our
>> requirement . but we want to avoid changes in vpp code base that will make
>> vpp upgrade easy for the future.
>>  Can you please suggest any additional CLI or configuration via which we
>> could alias interface names ?
>>
>> Thanks,
>> Chetan
>>
>> 
>>
>>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19008): https://lists.fd.io/g/vpp-dev/message/19008
Mute This Topic: https://lists.fd.io/mt/81570537/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Requirement of predictable interface name

2021-03-23 Thread chetan bhasin
Hi Team,

We have a requirement to have a predictable interface name, say
"device_", instead of GigabitEthernet/TenGigabitEthernet etc.,
so that external scripts can create vpp.conf automatically.

One way is to modify the plugins/dpdk/device/format.c code as per our
requirement, but we want to avoid changes in the vpp code base, which will
keep future vpp upgrades easy.
Can you please suggest any additional CLI or configuration via which we
could alias interface names?

Thanks,
Chetan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19005): https://lists.fd.io/g/vpp-dev/message/19005
Mute This Topic: https://lists.fd.io/mt/81570537/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP is not coming up

2021-03-22 Thread chetan bhasin
Hi Vijay,

Have you modprobed uio and uio_pci_generic?

Kindly confirm the PCI address of your NIC again using ethtool -i.

Please share what else is coming on the console.
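
For reference, the checks I mean (sketch; the interface name is an example):

modprobe uio
modprobe uio_pci_generic
ethtool -i <iface> | grep bus-info   # PCI address to put in startup.conf dpdk { dev ... }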

Thanks,
Chetan

On Mon, Mar 22, 2021 at 1:58 PM Vijay Kumar  wrote:

> Hi,
>
> I was running VPP on my VM without any issues. But recently due to a lab
> maintenance, the VM was powered off. But now I am not able to bring it up.
> Getting the error "unknown input dpdk  dev :0b:00.0"
>
> -- I have ensured the ethernet interface is set to DOWN so that VPP takes
> control of it.
> -- As shown below, I have ensured the ethernet port number/PCI number is
> listed in the startup.conf file so that dpdk can start using the port
>
>
> *dpdk {  dev :0b:00.0}*
> -- I have also ensured the line having "*gid vpp"* is commented out in
> the startup.conf. What else could I be missing.
>
> root@ubuntu-10-37-3-75
> :~/VPP-NEW/vpp/build-root/install-vpp_debug-native/vpp/bin# ./vpp -c
> ../etc/vpp/startup.conf
> vlib_call_all_config_functions: unknown input `dpdk  dev :0b:00.0 '
>
>
>
> Regards
>
> 
>
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18986): https://lists.fd.io/g/vpp-dev/message/18986
Mute This Topic: https://lists.fd.io/mt/81519523/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Facing issue Ipv6 routing (Vpp 1908)

2020-11-17 Thread chetan bhasin
Hi Neale,

Thanks for your response!

Note - This issue is not reproducible in our labs but shows up at one of our
deployment sites.

Let me explain the issue -
Client <----> SUT (VPP 19.08 + our customized plugins) <----> Server

While sending ICMPv6 (Echo Request) from client towards server, sometimes
that Request comes out of the wrong interface, i.e.

Client <----> SUT (VPP 19.08 + our customized plugins) <----> Server
       ---ECHO Request-->
       <--ECHO Request--

- Are there any known issues like this?
- Can you suggest anything we should look into in our out-of-tree plugin
code that could narrow down the issue?
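
For example, something along these lines on the SUT may help localize it
(illustrative addresses/counts):

vpp# show ip6 fib <server-address>/128   # which entry/adjacency the lookup hits
vpp# trace add dpdk-input 50             # then replay the ping
vpp# show trace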



Thanks,
Chetan Bhasin


On Tue, Nov 17, 2020 at 3:59 PM Neale Ranns (nranns) 
wrote:

>
>
> Hi Chetan,
>
>
>
> I think you’ll have to expand a bit on ‘sometimes’ for us to help. Under
> what conditions does this happen?
>
>
>
> /neale
>
>
>
>
>
> *From: *vpp-dev@lists.fd.io 
> *Date: *Tuesday, 17 November 2020 at 07:40
> *To: *vpp-dev 
> *Subject: *[vpp-dev] Facing issue Ipv6 routing (Vpp 1908)
>
> Hello Everyone,
>
>
>
> We are facing an issue with respect to ipv6 fib lookup (Vpp-1908).
>
> Sometimes a packet comes out of the wrong interface , looks like due to
> wrong fib lookup.
>
>
>
> I have found one change which is not part of our code
>
> https://gerrit.fd.io/r/c/vpp/+/27255
>
>
>
> Can anybody please suggest , is that change could cause such a problem ?
>
>
>
> Thanks,
>
> Chetan Bhasin
>
>
>
>
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18051): https://lists.fd.io/g/vpp-dev/message/18051
Mute This Topic: https://lists.fd.io/mt/78311419/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Facing issue Ipv6 routing (Vpp 1908)

2020-11-16 Thread chetan bhasin
Hello Everyone,

We are facing an issue with respect to ipv6 fib lookup (Vpp-1908).
Sometimes a packet comes out of the wrong interface , looks like due to
wrong fib lookup.

I have found one change which is not part of our code
https://gerrit.fd.io/r/c/vpp/+/27255

Can anybody please suggest , is that change could cause such a problem ?

Thanks,
Chetan Bhasin

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18049): https://lists.fd.io/g/vpp-dev/message/18049
Mute This Topic: https://lists.fd.io/mt/78311419/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Query regarding bonding in Vpp 19.08

2020-04-20 Thread chetan bhasin
Thanks Steven for the response.

As per VPP 18.01, only the bonded interface state is shown in the "show
interface" CLI.
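
For example (illustrative interface name):

vpp# set interface state device_5d/0/1 up
vpp# show interface device_5d/0/1              # admin state: up
vpp# show hardware-interfaces device_5d/0/1    # carrier may still be down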

Thanks,
Chetan



On Mon, Apr 20, 2020 at 8:49 PM Steven Luong (sluong) 
wrote:

> First, your question has nothing to do with bonding. Whatever you are
> seeing is true regardless of bonding configured or not.
>
>
>
> Show interfaces displays the admin state of the interface. Whenever you
> set the admin state to up, it is displayed as up regardless of the physical
> carrier is up or down. While the admin state may be up, the physical
> carrier may be down.
>
>
>
> Show hardware displays the physical state of the interface, carrier up or
> down. Admin state must be set to up prior to seeing the hardware carrier
> state to up.
>
>
>
> Steven
>
>
>
> *From: * on behalf of chetan bhasin <
> chetan.bhasin...@gmail.com>
> *Date: *Sunday, April 19, 2020 at 11:40 PM
> *To: *vpp-dev 
> *Subject: *[vpp-dev] Query regarding bonding in Vpp 19.08
>
>
>
> Hi,
>
>
>
> I am using vpp 19.08 , When I use bonding configuration , I am seeing
> below output of "show int " CLI .
>
> Query : Is it ok to show the status of slave interface as up in "show
> interface" CLI while as per the show hardware-interface its down ?
>
>
>
> vpp# show int
>               Name        Idx   State   MTU (L3/IP4/IP6/MPLS)   Counter      Count
> BondEthernet0              3     up          9000/0/0/0         rx packets      12
> BondEthernet0.811          4     up             0/0/0/0         rx packets       6
> BondEthernet0.812          5     up             0/0/0/0         rx packets       6
> device_5d/0/0              1     up          9000/0/0/0         rx packets      12
> device_5d/0/1              2     up          9000/0/0/0         rx packets      17
>                                                                 rx bytes      1100
>                                                                 drops           14
> local0                     0    down            0/0/0/0
>
>
>
> Thanks,
>
> Chetan
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16121): https://lists.fd.io/g/vpp-dev/message/16121
Mute This Topic: https://lists.fd.io/mt/73144225/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Query regarding bonding in Vpp 19.08

2020-04-19 Thread chetan bhasin
Hi,

I am using vpp 19.08. When I use a bonding configuration, I am seeing the
"show int" CLI output below.
Query: Is it OK for the "show interface" CLI to show the slave interface
status as up while, per "show hardware-interfaces", it is down?

vpp# show int
              Name        Idx   State   MTU (L3/IP4/IP6/MPLS)   Counter      Count
BondEthernet0              3     up          9000/0/0/0         rx packets      12
BondEthernet0.811          4     up             0/0/0/0         rx packets       6
BondEthernet0.812          5     up             0/0/0/0         rx packets       6
device_5d/0/0              1     up          9000/0/0/0         rx packets      12
device_5d/0/1              2     up          9000/0/0/0         rx packets      17
                                                                rx bytes      1100
                                                                drops           14
local0                     0    down            0/0/0/0

Thanks,
Chetan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16112): https://lists.fd.io/g/vpp-dev/message/16112
Mute This Topic: https://lists.fd.io/mt/73144225/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Facing issue while bringing up vpp 19.08 with Bonding configuration and Mellanox nics

2020-03-03 Thread chetan bhasin
Hi,

So we need to backport the patch below from stable/2001 to fix this in
vpp 19.08:
https://github.com/FDio/vpp/commit/2fd44a00aa26188ca75f0accd734f21758c199bf#diff-3079293198f7807e4e851645851036ab

Vlib: fix cli process stack overflow

Type: fix

Some cli processes, including configuring a test flow
on an i40e interface, consume more than the currently
available stack space.

Signed-off-by: Chenmin Sun 
Change-Id: I3df53d251cd43286f94647384d6e50a463bad15c


Chenmin Sun authored and Dave Barach committed on 8 Oct 2019
commit 2fd44a00aa26188ca75f0accd734f21758c199bf (parent dd60b1b
<https://github.com/FDio/vpp/commit/dd60b1b128d8d6c07dc8b8bcbf932b808cedbaab>)
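
A sketch of the backport itself (assuming a local stable/1908 checkout):

git checkout stable/1908
git cherry-pick 2fd44a00aa26188ca75f0accd734f21758c199bf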

Thanks,
Chetan

On Mon, Mar 2, 2020 at 7:24 PM chetan bhasin 
wrote:

> Thanks Benoit for the response!
>
> I am trying vpp1908 with DPDK 19.02 now and lets see how it behaves.
>
> Thanks,
> Chetan
>
> On Mon, Mar 2, 2020 at 2:13 PM Benoit Ganne (bganne) 
> wrote:
>
>> The backtrace ends up in the Mellanox DPDK driver, so the bug is most
>> probably in DPDK.
>> The fact that (2) works but not (1) might hint towards a race condition /
>> timing issue.
>>
>> ben
>>
>> > -Original Message-
>> > From: vpp-dev@lists.fd.io  On Behalf Of chetan
>> bhasin
>> > Sent: lundi 2 mars 2020 07:53
>> > To: vpp-dev 
>> > Subject: [vpp-dev] Facing issue while bringing up vpp 19.08 with Bonding
>> > configuration and Mellanox nics
>> >
>> > Hello Everyone,
>> >
>> > We are trying to bring-up stable/vpp1908 with bonded configuration
>> > 1) its getting crash in case we mention
>> > "startup-config /root/vanilla_1908/vpp/vpp.conf" ,
>> > 2)if we bring up vpp first and then apply configuration using exec
>> command
>> > , it works fine
>> > "exec /root/vanilla_1908/vpp/vpp.conf"
>> >
>> > Note -  point (1) mentioned above is working fine with stable/vpp2001.
>> > There is dpdk version difference as well between stable/vpp1908(dpdk
>> > 19.05) vs stable/vpp2001 (dpdk 19.08)
>> >
>> > Can anybody please suggest an area in vpp we can look into or there
>> could
>> > be bug in dpdk 19.05 resolved in dpdk19.08.
>> >
>> > Configuration :(vpp.conf)
>> > create bond mode active-backup
>> > bond add BondEthernet0 TwentyFiveGigabitEthernet5d/0/0
>> > bond add BondEthernet0 TwentyFiveGigabitEthernet5d/0/1
>> > set interface state BondEthernet0 up
>> > create sub-interfaces BondEthernet0 811
>> > set interface tag BondEthernet0.811 vbond10.811
>> > set interface state BondEthernet0.811 up
>> > set interface ip address BondEthernet0.811 192.168.10.11/24
>> > <http://192.168.10.11/24>
>> > create sub-interfaces BondEthernet0 812
>> > set interface tag BondEthernet0.812 vbond10.812
>> > set interface state BondEthernet0.812 up
>> > set interface ip address BondEthernet0.812 192.168.20.11/24
>> > <http://192.168.20.11/24>
>> > ip route add   16.0.0.0/8 <http://16.0.0.0/8>  via 192.168.10.1
>> > ip route add   48.0.0.0/8 <http://48.0.0.0/8>  via 192.168.20.1
>> > set interface state TwentyFiveGigabitEthernet5d/0/0 up
>> > set interface state TwentyFiveGigabitEthernet5d/0/1 up
>> >
>> >
>> > VPP CLI's
>> > vpp# show hardware-interfaces
>> >
>> > BondEthernet0  3 up   BondEthernet0
>> >   Link speed: unknown
>> >   Ethernet address b8:83:03:8b:ef:44
>> > TwentyFiveGigabitEthernet5d/0/01 up
>> > TwentyFiveGigabitEthernet5d/0/0
>> >   Link speed: 25 Gbps
>> >   Ethernet address b8:83:03:8b:ef:44
>> >   Mellanox ConnectX-4 Family
>> > carrier up full duplex mtu 9206
>> > flags: admin-up pmd rx-ip4-cksum
>> > rx: queues 4 (max 65535), desc 1024 (min 0 max 65535 align 1)
>> > tx: queues 4 (max 65535), desc 4096 (min 0 max 65535 align 1)
>> > TwentyFiveGigabitEthernet5d/0/12 up
>> > TwentyFiveGigabitEthernet5d/0/1
>> >   Link speed: 25 Gbps
>> >   Ethernet address b8:83:03:8b:ef:44
>> >   Mellanox ConnectX-4 Family
>> > carrier up full duplex mtu 9206
>> > flags: admin-up pmd rx-ip4-cksum
>> > rx: queues 4 (max 65535), desc 1024 (min 0 max 65535 align 1)
>> > tx: queues 4 (max 65535), desc 4096 (min 0 max 65535 align 1)
>> > pci: device 15b3:1015 subsystem 1590:00d3 address :5d:00.01
>> numa 0
>> > max rx packet len: 65536

Re: [vpp-dev] Facing issue while bringing up vpp 19.08 with Bonding configuration and Mellanox nics

2020-03-02 Thread chetan bhasin
Thanks Benoit for the response!

I am trying vpp1908 with DPDK 19.02 now and lets see how it behaves.

Thanks,
Chetan

On Mon, Mar 2, 2020 at 2:13 PM Benoit Ganne (bganne) 
wrote:

> The backtrace ends up in the Mellanox DPDK driver, so the bug is most
> probably in DPDK.
> The fact that (2) works but not (1) might hint towards a race condition /
> timing issue.
>
> ben
>
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of chetan
> bhasin
> > Sent: lundi 2 mars 2020 07:53
> > To: vpp-dev 
> > Subject: [vpp-dev] Facing issue while bringing up vpp 19.08 with Bonding
> > configuration and Mellanox nics
> >
> > Hello Everyone,
> >
> > We are trying to bring-up stable/vpp1908 with bonded configuration
> > 1) its getting crash in case we mention
> > "startup-config /root/vanilla_1908/vpp/vpp.conf" ,
> > 2)if we bring up vpp first and then apply configuration using exec
> command
> > , it works fine
> > "exec /root/vanilla_1908/vpp/vpp.conf"
> >
> > Note -  point (1) mentioned above is working fine with stable/vpp2001.
> > There is dpdk version difference as well between stable/vpp1908(dpdk
> > 19.05) vs stable/vpp2001 (dpdk 19.08)
> >
> > Can anybody please suggest an area in vpp we can look into or there could
> > be bug in dpdk 19.05 resolved in dpdk19.08.
> >
> > Configuration :(vpp.conf)
> > create bond mode active-backup
> > bond add BondEthernet0 TwentyFiveGigabitEthernet5d/0/0
> > bond add BondEthernet0 TwentyFiveGigabitEthernet5d/0/1
> > set interface state BondEthernet0 up
> > create sub-interfaces BondEthernet0 811
> > set interface tag BondEthernet0.811 vbond10.811
> > set interface state BondEthernet0.811 up
> > set interface ip address BondEthernet0.811 192.168.10.11/24
> > <http://192.168.10.11/24>
> > create sub-interfaces BondEthernet0 812
> > set interface tag BondEthernet0.812 vbond10.812
> > set interface state BondEthernet0.812 up
> > set interface ip address BondEthernet0.812 192.168.20.11/24
> > <http://192.168.20.11/24>
> > ip route add   16.0.0.0/8 <http://16.0.0.0/8>  via 192.168.10.1
> > ip route add   48.0.0.0/8 <http://48.0.0.0/8>  via 192.168.20.1
> > set interface state TwentyFiveGigabitEthernet5d/0/0 up
> > set interface state TwentyFiveGigabitEthernet5d/0/1 up
> >
> >
> > VPP CLI's
> > vpp# show hardware-interfaces
> >
> > BondEthernet0  3 up   BondEthernet0
> >   Link speed: unknown
> >   Ethernet address b8:83:03:8b:ef:44
> > TwentyFiveGigabitEthernet5d/0/01 up
> > TwentyFiveGigabitEthernet5d/0/0
> >   Link speed: 25 Gbps
> >   Ethernet address b8:83:03:8b:ef:44
> >   Mellanox ConnectX-4 Family
> > carrier up full duplex mtu 9206
> > flags: admin-up pmd rx-ip4-cksum
> > rx: queues 4 (max 65535), desc 1024 (min 0 max 65535 align 1)
> > tx: queues 4 (max 65535), desc 4096 (min 0 max 65535 align 1)
> > TwentyFiveGigabitEthernet5d/0/12 up
> > TwentyFiveGigabitEthernet5d/0/1
> >   Link speed: 25 Gbps
> >   Ethernet address b8:83:03:8b:ef:44
> >   Mellanox ConnectX-4 Family
> > carrier up full duplex mtu 9206
> > flags: admin-up pmd rx-ip4-cksum
> > rx: queues 4 (max 65535), desc 1024 (min 0 max 65535 align 1)
> > tx: queues 4 (max 65535), desc 4096 (min 0 max 65535 align 1)
> > pci: device 15b3:1015 subsystem 1590:00d3 address :5d:00.01 numa
> 0
> > max rx packet len: 65536
> >
> >
> >
> > Back-trace looks like as below
> > #0  mlx5_read_dev_counters (dev=dev@entry=0x7f71ebf50940
> > , stats=stats@entry=0x1fffc3518)
> > at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-
> > party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-
> > 19.05/drivers/net/mlx5/mlx5_stats.c:186
> > #1  0x7f71eae756b4 in mlx5_stats_init (dev=dev@entry=0x7f71ebf50940
> > )
> > at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-
> > party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-
> > 19.05/drivers/net/mlx5/mlx5_stats.c:309
> > #2  0x7f71eae70b27 in mlx5_dev_start (dev=0x7f71ebf50940
> > )
> > at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-
> > party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-
> > 19.05/drivers/net/mlx5/mlx5_trigger.c:178
> > #3  0x7f71eabc8e18 in rte_eth_dev_start (port_id=0)
> > at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-
> > pa

[vpp-dev] Facing issue while bringing up vpp 19.08 with Bonding configuration and Mellanox nics

2020-03-01 Thread chetan bhasin
Hello Everyone,

We are trying to bring up stable/vpp1908 with a bonded configuration:
1) it crashes if we specify
"startup-config /root/vanilla_1908/vpp/vpp.conf";
2) if we bring up vpp first and then apply the configuration using the exec
command, it works fine:
"exec /root/vanilla_1908/vpp/vpp.conf"

Note: *point (1) mentioned above works fine with stable/vpp2001*.
There is a DPDK version difference as well between stable/vpp1908 (DPDK
19.05) and stable/vpp2001 (DPDK 19.08).

Can anybody please suggest an area in VPP we can look into, or could there
be a bug in DPDK 19.05 that was resolved in DPDK 19.08?

*Configuration :(vpp.conf)*
create bond mode active-backup
bond add BondEthernet0 TwentyFiveGigabitEthernet5d/0/0
bond add BondEthernet0 TwentyFiveGigabitEthernet5d/0/1
set interface state BondEthernet0 up
create sub-interfaces BondEthernet0 811
set interface tag BondEthernet0.811 vbond10.811
set interface state BondEthernet0.811 up
set interface ip address BondEthernet0.811 192.168.10.11/24
create sub-interfaces BondEthernet0 812
set interface tag BondEthernet0.812 vbond10.812
set interface state BondEthernet0.812 up
set interface ip address BondEthernet0.812 192.168.20.11/24
ip route add   16.0.0.0/8 via 192.168.10.1
ip route add   48.0.0.0/8 via 192.168.20.1
set interface state TwentyFiveGigabitEthernet5d/0/0 up
set interface state TwentyFiveGigabitEthernet5d/0/1 up
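
For clarity, these are the two ways we load the file (a sketch; the unix { }
stanza is the relevant part of our startup.conf):

    # (1) crashes on stable/vpp1908:
    unix {
      startup-config /root/vanilla_1908/vpp/vpp.conf
    }

    # (2) works, run from the CLI after vpp is up:
    vpp# exec /root/vanilla_1908/vpp/vpp.conf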

*VPP CLI's*
vpp# show hardware-interfaces
BondEthernet0  3 up   BondEthernet0
  Link speed: unknown
  Ethernet address b8:83:03:8b:ef:44
TwentyFiveGigabitEthernet5d/0/01 up
TwentyFiveGigabitEthernet5d/0/0
  Link speed: 25 Gbps
  Ethernet address b8:83:03:8b:ef:44
  Mellanox ConnectX-4 Family
carrier up full duplex mtu 9206
flags: admin-up pmd rx-ip4-cksum
rx: queues 4 (max 65535), desc 1024 (min 0 max 65535 align 1)
tx: queues 4 (max 65535), desc 4096 (min 0 max 65535 align 1)
TwentyFiveGigabitEthernet5d/0/12 up
TwentyFiveGigabitEthernet5d/0/1
  Link speed: 25 Gbps
  Ethernet address b8:83:03:8b:ef:44
  Mellanox ConnectX-4 Family
carrier up full duplex mtu 9206
flags: admin-up pmd rx-ip4-cksum
rx: queues 4 (max 65535), desc 1024 (min 0 max 65535 align 1)
tx: queues 4 (max 65535), desc 4096 (min 0 max 65535 align 1)
pci: device 15b3:1015 subsystem 1590:00d3 address :5d:00.01 numa 0
max rx packet len: 65536


*Back-trace looks like as below *
#0  mlx5_read_dev_counters (dev=dev@entry=0x7f71ebf50940 ,
stats=stats@entry=0x1fffc3518)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_stats.c:186
#1  0x7f71eae756b4 in mlx5_stats_init (dev=dev@entry=0x7f71ebf50940
)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_stats.c:309
#2  0x7f71eae70b27 in mlx5_dev_start (dev=0x7f71ebf50940
)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_trigger.c:178
#3  0x7f71eabc8e18 in rte_eth_dev_start (port_id=0)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_ethdev/rte_ethdev.c:1429
#4  0x7f71eb74fffd in dpdk_device_start (xd=xd@entry=0x7f71ef09fb80)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/src/plugins/dpdk/device/common.c:168
#5  0x7f71eb7555c8 in dpdk_interface_admin_up_down (vnm=, hw_if_index=, flags=)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/src/plugins/dpdk/device/device.c:427
#6  0x7f72f196d9c8 in vnet_sw_interface_set_flags_helper
(vnm=vnm@entry=0x7f72f2191f20
, sw_if_index=,
flags=VNET_SW_INTERFACE_FLAG_ADMIN_UP, helper_flags=(unknown: 0),
helper_flags@entry=VNET_INTERFACE_SET_FLAGS_HELPER_WANT_REDISTRIBUTE)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/src/vnet/interface.c:455
#7  0x7f72f196f10d in vnet_sw_interface_set_flags
(vnm=vnm@entry=0x7f72f2191f20
, sw_if_index=,
flags=)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/src/vnet/interface.c:504
#8  0x7f72f197ecca in set_state (vm=,
input=0x7f71f141bed0, cmd=)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/src/vnet/interface_cli.c:902
#9  0x7f72f1014f4e in vlib_cli_dispatch_sub_commands
(vm=vm@entry=0x7f72f1299300
,
cm=cm@entry=0x7f72f1299530 ,
input=input@entry=0x7f71f141bed0,
parent_command_index=)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/src/vlib/cli.c:649
#10 0x7f72f1015764 in vlib_cli_dispatch_sub_commands
(vm=vm@entry=0x7f72f1299300
,
cm=cm@en

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-21 Thread chetan bhasin
Thanks a lot Damjan for quick response !

We will try latest stable/1908 that has the given patch.

*With Mellanox Technologies MT27710 Family [ConnectX-4 Lx]:*
1) stable/vpp1908: If we configure buffers (250k) and have 2048 huge pages
of 2MB (4GB total), we see an issue with traffic ("l3 mac mismatch").
2) stable/vpp1908: If we configure 4 huge pages of 1GB via grub parameters,
vpp works even with 400K buffers.

Could you please guide us on the best approach here?
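
For case 2), the kernel command line we boot with looks roughly like this
(a sketch; the exact grub setup is distribution-specific):

    default_hugepagesz=1G hugepagesz=1G hugepages=4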

For point 1) we see the following logs in one of the vpp threads:

#5  0x7f3375afbae2 in rte_vlog (level=, logtype=77,
format=0x7f3376768df8 *"net_mlx5: port %u unable to find virtually
contiguous chunk for address (%p). rte_memseg_contig_walk() failed.\n%.0s",*
ap=ap@entry=0x7f3379c4fac8)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:427
#6  0x7f3375ab2c12 in rte_log (level=level@entry=5, logtype=,
format=format@entry=0x7f3376768df8 "net_mlx5: port %u unable to find
virtually contiguous chunk for address (%p). rte_memseg_contig_walk()
failed.\n%.0s")
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:443
#7  0x7f3375dc47fa in mlx5_mr_create_primary (dev=dev@entry=0x7f3376e9d940
,
entry=entry@entry=0x7ef5c00d02ca, addr=addr@entry=69384463936)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_mr.c:627


Thanks,
Chetan


On Fri, Feb 21, 2020 at 3:13 PM Damjan Marion  wrote:

>
>
> On 21 Feb 2020, at 10:31, chetan bhasin 
> wrote:
>
> Hi Nitin,Damjan,
>
> For 40G *XL710* buffers : 537600  (500K+)
> 1) vpp 19.08 (sept 2019) : it worked with vpp 19.08 (sept release) after
> removing intel_iommu=on from Grub params.
> 2) stable/vpp2001(latest) :  It worked even we have "intel_iommu=on" in
> Grub params
>
>
> On stable/vpp2001 , I found a check-in before which it did not work with "
> intel_iommu=on " as grub params, but after the below change-list it work
> even with grub params.
> commit 45495480c8165090722389b08075df06ccfcd7ef
> Author: Yulong Pei 
> Date:   Thu Oct 17 18:41:52 2019 +0800
> vlib: linux: fix wrong iommu_group value issue when using dpdk-plugin
>
> Before above change in vpp 20.01 , when we bring up vpp with vfio-pci, vpp
> change  /sys/module/vfio/parameters/enable_unsafe_noiommu_mode to "Y" ,
> and we face issue with traffic  but after the change  sys file value remain
> as  "N"  in /sys/module/vfio/parameters/enable_unsafe_noiommu_mode and
> traffic works fine.
>
> As it is bare metal so we can remove intel_iommu=on from grub to make it
> work without any patches . Any suggestions?
>
>
> IOMMU gives you following:
>  - protection and security - it prevents misbehaving NIC to read/write
> intentionally or unintentionally memory it is not supposed to access
>  - VA -> PA translation
>
> If you are running bare-metal, single tenant security is probably not
> concern, but still it can protect NIC from doing something bad eventually
> because of driver issues.
> VA -> PA translation helps with performance, as driver doesn’t need to
> lookup for PA when submitting descriptors but this is not critical perf
> issue.
>
> So it is up to you to decide, work without IOMMU or patch your old VPP
> version….
>
>
> Regards,
> Chetan
>
> On Tue, Feb 18, 2020 at 1:07 PM Nitin Saxena  wrote:
>
>> HI Chethan,
>>
>>
>>
>> Your packet trace shows that packet data is all 0 and that’s why you are
>> running into l3 mac mismatch.
>>
>> I am guessing something messed with IOMMU due to which translation is not
>> happening. Although packet length is correct.
>>
>> You can try out AVF plugin to iron out where problem exists, in dpdk
>> plugin or vlib
>>
>>
>>
>> Thanks,
>>
>> Nitin
>>
>>
>>
>> *From:* chetan bhasin 
>> *Sent:* Tuesday, February 18, 2020 12:50 PM
>> *To:* me 
>> *Cc:* Nitin Saxena ; vpp-dev 
>> *Subject:* Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
>>
>>
>>
>> Hi,
>>
>> One more finding related to intel nic and number of buffers (537600)
>>
>>
>>
>> vpp branch     driver           card       buffers   Traffic   Err
>> stable/1908    uio_pci_generic

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-21 Thread chetan bhasin
Hi Nitin, Damjan,

For the 40G *XL710*, buffers: 537600 (500K+)
1) vpp 19.08 (Sept 2019): it worked with vpp 19.08 (Sept release) after
removing intel_iommu=on from the grub params.
2) stable/vpp2001 (latest): it worked even with "intel_iommu=on" in the
grub params.


On stable/vpp2001, I found a check-in before which it did not work with
"intel_iommu=on" in the grub params, but after the below change-list it
works even with those grub params.
commit 45495480c8165090722389b08075df06ccfcd7ef
Author: Yulong Pei 
Date:   Thu Oct 17 18:41:52 2019 +0800
vlib: linux: fix wrong iommu_group value issue when using dpdk-plugin

Before the above change in vpp 20.01, when we brought up vpp with vfio-pci,
vpp changed /sys/module/vfio/parameters/enable_unsafe_noiommu_mode to "Y"
and we faced a traffic issue; after the change, the sysfs value remains "N"
and traffic works fine.

As it is bare metal, we can remove intel_iommu=on from grub to make it work
without any patches. Any suggestions?
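
A quick way to verify which mode vfio ended up in (the sysfs path is the
one quoted above):

    cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
    # "N" = IOMMU in use, "Y" = unsafe no-IOMMU mode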

Regards,
Chetan

On Tue, Feb 18, 2020 at 1:07 PM Nitin Saxena  wrote:

> HI Chethan,
>
>
>
> Your packet trace shows that packet data is all 0 and that’s why you are
> running into l3 mac mismatch.
>
> I am guessing something messed with IOMMU due to which translation is not
> happening. Although packet length is correct.
>
> You can try out AVF plugin to iron out where problem exists, in dpdk
> plugin or vlib
>
>
>
> Thanks,
>
> Nitin
>
>
>
> *From:* chetan bhasin 
> *Sent:* Tuesday, February 18, 2020 12:50 PM
> *To:* me 
> *Cc:* Nitin Saxena ; vpp-dev 
> *Subject:* Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
>
>
>
> Hi,
>
> One more finding related to intel nic and number of buffers (537600)
>
>
>
> vpp branch     driver           card        buffers   Traffic       Err
> stable/1908    uio_pci_generic  X722(10G)   537600    Working
> stable/1908    vfio-pci         XL710(40G)  537600    Not Working   l3 mac mismatch
> stable/2001    uio_pci_generic  X722(10G)   537600    Working
> stable/2001    vfio-pci         XL710(40G)  537600    Working
>
> Thanks,
>
> Chetan
>
>
>
> On Mon, Feb 17, 2020 at 7:17 PM chetan bhasin via Lists.Fd.Io
>  wrote:
>
> Hi Nitin,
>
>
>
> https://github.com/FDio/vpp/commits/stable/2001/src/vlib
>
> As per stable/2001 branch , the given change is checked-in around Oct 28
> 2019.
>
>
>
> df0191ead2cf39611714b6603cdc5bdddc445b57 is previous commit of
> b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
> Yes (branch vpp 20.01)
>
>
>
> Thanks,
>
> Chetan Bhasin
>
>
>
> On Mon, Feb 17, 2020 at 5:33 PM Nitin Saxena  wrote:
>
> Hi Damjan,
>
> >> if you read Chetan’s email bellow, you will see that this one is
> already excluded…
> Sorry I missed that part. After seeing diffs between stable/1908 and
> stable/2001, commit: b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only
> visible git commit in dpdk plugin which is playing with mempool buffers. If
> it does not solve the problem then I suspect problem lies outside dpdk
> plugin. I am guessing DPDK-19.08 is being used here with VPP-19.08
>
> Hi Chetan,
> > > 3) I took previous commit of  "vlib: don't use vector for keeping
> buffer
> > indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
> > Everything looks fine with Buffers 537600.
> In which branch, Commit: df0191ead2cf39611714b6603cdc5bdddc445b57 is
> previous commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
>
> Thanks,
> Nitin
> > -Original Message-
> > From: Damjan Marion 
> > Sent: Monday, February 17, 2020 3:47 PM
> > To: Nitin Saxena 
> > Cc: chetan bhasin ; vpp-dev@lists.fd.io
> > Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
> >
> >
> > Dear Nitin,
> >
> > if 

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread chetan bhasin
Hi,
One more finding related to the Intel NICs and the number of buffers (537600):

vpp branch     driver           card        buffers   Traffic       Err
stable/1908    uio_pci_generic  X722(10G)   537600    Working
stable/1908    vfio-pci         XL710(40G)  537600    Not Working   l3 mac mismatch
stable/2001    uio_pci_generic  X722(10G)   537600    Working
stable/2001    vfio-pci         XL710(40G)  537600    Working

Thanks,
Chetan

On Mon, Feb 17, 2020 at 7:17 PM chetan bhasin via Lists.Fd.Io
 wrote:

> Hi Nitin,
>
> https://github.com/FDio/vpp/commits/stable/2001/src/vlib
> As per stable/2001 branch , the given change is checked-in around Oct 28
> 2019.
>
> df0191ead2cf39611714b6603cdc5bdddc445b57 is previous commit of
> b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
> Yes (branch vpp 20.01)
>
> Thanks,
> Chetan Bhasin
>
> On Mon, Feb 17, 2020 at 5:33 PM Nitin Saxena  wrote:
>
>> Hi Damjan,
>>
>> >> if you read Chetan’s email bellow, you will see that this one is
>> already excluded…
>> Sorry I missed that part. After seeing diffs between stable/1908 and
>> stable/2001, commit: b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only
>> visible git commit in dpdk plugin which is playing with mempool buffers. If
>> it does not solve the problem then I suspect problem lies outside dpdk
>> plugin. I am guessing DPDK-19.08 is being used here with VPP-19.08
>>
>> Hi Chetan,
>> > > 3) I took previous commit of  "vlib: don't use vector for keeping
>> buffer
>> > indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
>> > Everything looks fine with Buffers 537600.
>> In which branch, Commit: df0191ead2cf39611714b6603cdc5bdddc445b57 is
>> previous commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
>>
>> Thanks,
>> Nitin
>> > -Original Message-
>> > From: Damjan Marion 
>> > Sent: Monday, February 17, 2020 3:47 PM
>> > To: Nitin Saxena 
>> > Cc: chetan bhasin ; vpp-dev@lists.fd.io
>> > Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
>> >
>> >
>> > Dear Nitin,
>> >
>> > if you read Chetan’s email bellow, you will see that this one is already
>> > excluded…
>> >
>> > Also, it will not be easy to explain how this patch blows tx function
>> in dpdk
>> > mlx5 pmd…
>> >
>> > —
>> > Damjan
>> >
>> > > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
>> > >
>> > > Hi Prashant/Chetan,
>> > >
>> > > I would try following change first to solve the problem in 1908
>> > >
>> > > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
>> > > Author: Damjan Marion 
>> > > Date:   Tue Mar 12 18:14:15 2019 +0100
>> > >
>> > >     vlib: don't use vector for keeping buffer indices in
>> > >
>> > > Type: refactor
>> > >
>> > > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
>> > > Signed-off-by: Damjan Marion damar...@cisco.com
>> > >
>> > > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
>> > branch to stable/1908
>> > >
>> > > Thanks,
>> > > Nitin
>> > >
>> > > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
>> > Marion via Lists.Fd.Io
>> > > Sent: Monday, February 17, 2020 1:52 PM
>> > > To: chetan bhasin 
>> > > Cc: vpp-dev@lists.fd.io
>> > > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
>> > >
>> > > External Email
>> > >
>> > > On 17 Feb 2020, at 07:37, chetan bhasin 
>> > wrote:
>> > >
>> > > Bottom line is stable/vpp 908 does not work with higher number of
>> buffers
>> > but stable/vpp2001 does. Could you please advise which area we can look
>> at
>> > ,as it would be difficult for us to move to vpp2001 at this time.
>> > >
>> > > I really don’t have idea what caused this problem to disappear.
>> > > You may try to use “git bisect” to find out which commit fixed it….
>> > >
>> > > —
>> > > Damjan
>> > >
>> > >
>> > >
>> > > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
>> >  wrote:
>> > > Thanks Damjan for the reply!
>> > >
>> > > Following are my observations on Intel X710/XL710 pci-
>> > > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
>> > ethernet-input

Re: [vpp-dev] Issue coming while fib lookup in vpp 18.01 between /8 and default route

2020-02-17 Thread chetan bhasin
After taking the patches related to ip4_mtrie.c, we are no longer seeing
the issue with /8 routes and the default gateway.

Thanks a lot!
Chetan
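
For anyone hitting the same problem, the candidate patches can be listed
with something like this (a sketch; src/vnet/ip/ip4_mtrie.c is the file's
path in current trees):

    git log --oneline v18.01..master -- src/vnet/ip/ip4_mtrie.c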

On Tue, Feb 4, 2020 at 10:55 AM chetan bhasin 
wrote:

> Thanks Neale for response. I will take a look.
>
> On Mon, Feb 3, 2020 at 2:58 PM Neale Ranns (nranns) 
> wrote:
>
>>
>>
>> 18.01 might be missing patches in ip4_mtrie.c.
>>
>>
>>
>> /neale
>>
>>
>>
>> *From: * on behalf of chetan bhasin <
>> chetan.bhasin...@gmail.com>
>> *Date: *Friday 31 January 2020 at 11:47
>> *To: *vpp-dev 
>> *Subject: *[vpp-dev] Issue coming while fib lookup in vpp 18.01 between
>> /8 and default route
>>
>>
>>
>> Hello Everyone,
>>
>>
>>
>> I know that vpp 18.01 is not supported further, but can anybody please
>> provide a direction towards the below issue:
>>
>>
>>
>> We have two routes -
>>
>> 1) defaut gateway via 2.2.2.2
>>
>> 2) 10.0.0.0/8 via 3.3.3.3
>>
>>
>>
>> Trying to ping 10.20.x.x via VPP but it is going via 1) default gateway
>> but it should go via 2) 3.3.3.3
>>
>>
>>
>> Note : its working fine if we add route with subnet /16.
>>
>>
>>
>> show ip fib looks like as below :
>>
>> 0.0.0.0/0
>>   unicast-ip4-chain
>>   [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:43
>> to:[17:1012]]
>> [0] [@5]: ipv4 via 2.2.2.2 device_b/0/0: 0c9ff024000c29b196440800
>> 0.0.0.0/32
>>   unicast-ip4-chain
>>   [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
>> [0] [@0]: dpo-drop ip4
>> 10.0.0.0/8
>>   unicast-ip4-chain
>>   [@0]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:40 to:[0:0]]
>> [0] [@5]: ipv4 via 3.3.3.3 device_13/0/0: 0c9ff032000c29b1964e0800
>>
>>
>>
>> Thanks,
>>
>> Chetan Bhasin
>>
>


Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread chetan bhasin
Hi Nitin,

https://github.com/FDio/vpp/commits/stable/2001/src/vlib
As per the stable/2001 branch, the given change was checked in around
Oct 28, 2019.

Is df0191ead2cf39611714b6603cdc5bdddc445b57 the previous commit of
b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
Yes (branch vpp 20.01).
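
That parent relationship can be double-checked with (a sketch):

    git rev-parse b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b^
    # expected output: df0191ead2cf39611714b6603cdc5bdddc445b57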

Thanks,
Chetan Bhasin

On Mon, Feb 17, 2020 at 5:33 PM Nitin Saxena  wrote:

> Hi Damjan,
>
> >> if you read Chetan’s email bellow, you will see that this one is
> already excluded…
> Sorry I missed that part. After seeing diffs between stable/1908 and
> stable/2001, commit: b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only
> visible git commit in dpdk plugin which is playing with mempool buffers. If
> it does not solve the problem then I suspect problem lies outside dpdk
> plugin. I am guessing DPDK-19.08 is being used here with VPP-19.08
>
> Hi Chetan,
> > > 3) I took previous commit of  "vlib: don't use vector for keeping
> buffer
> > indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
> > Everything looks fine with Buffers 537600.
> In which branch, Commit: df0191ead2cf39611714b6603cdc5bdddc445b57 is
> previous commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
>
> Thanks,
> Nitin
> > -Original Message-----
> > From: Damjan Marion 
> > Sent: Monday, February 17, 2020 3:47 PM
> > To: Nitin Saxena 
> > Cc: chetan bhasin ; vpp-dev@lists.fd.io
> > Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
> >
> >
> > Dear Nitin,
> >
> > if you read Chetan’s email bellow, you will see that this one is already
> > excluded…
> >
> > Also, it will not be easy to explain how this patch blows tx function in
> dpdk
> > mlx5 pmd…
> >
> > —
> > Damjan
> >
> > > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
> > >
> > > Hi Prashant/Chetan,
> > >
> > > I would try following change first to solve the problem in 1908
> > >
> > > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> > > Author: Damjan Marion 
> > > Date:   Tue Mar 12 18:14:15 2019 +0100
> > >
> > > vlib: don't use vector for keeping buffer indices in
> > >
> > > Type: refactor
> > >
> > > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> > > Signed-off-by: Damjan Marion damar...@cisco.com
> > >
> > > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
> > branch to stable/1908
> > >
> > > Thanks,
> > > Nitin
> > >
> > > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
> > Marion via Lists.Fd.Io
> > > Sent: Monday, February 17, 2020 1:52 PM
> > > To: chetan bhasin 
> > > Cc: vpp-dev@lists.fd.io
> > > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
> > >
> > > External Email
> > >
> > > On 17 Feb 2020, at 07:37, chetan bhasin 
> > wrote:
> > >
> > > Bottom line is stable/vpp 908 does not work with higher number of
> buffers
> > but stable/vpp2001 does. Could you please advise which area we can look
> at
> > ,as it would be difficult for us to move to vpp2001 at this time.
> > >
> > > I really don’t have idea what caused this problem to disappear.
> > > You may try to use “git bisect” to find out which commit fixed it….
> > >
> > > —
> > > Damjan
> > >
> > >
> > >
> > > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
> >  wrote:
> > > Thanks Damjan for the reply!
> > >
> > > Following are my observations on Intel X710/XL710 pci-
> > > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
> > ethernet-input l3 mac mismatch"
> > > With Buffers 537600
> > > vpp# show buffers
> > > Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> > > default-numa-0  0     0     2496  2048       537600  510464  1319    25817
> > > default-numa-1  1     1     2496  2048       537600  528896  390     8314
> > >
> > > vpp# show hardware-interfaces
> > >   NameIdx   Link  Hardware
> > > BondEthernet0  3 up   BondEthernet0
> > >   Link speed: unknown
> > >   Ethernet address 3c:fd:fe:b5:5e:40
> > > FortyGigabitEthernet12/0/0 1 up
>  FortyGigabitEthernet12/0/0
> > >   Link speed: 40 Gbps
> > >   Ethernet ad

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread chetan bhasin
Thanks Damjan and Nitin for your time.

I also found the below logs via dmesg (Intel X710/XL710):

[root@bfs-dl360g10-25 vpp]# uname -a
Linux bfs-dl360g10-25 3.10.0-957.5.1.el7.x86_64 #1 SMP Wed Dec 19 10:46:58
EST 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@bfs-dl360g10-25 vpp]# uname -r
3.10.0-957.5.1.el7.x86_64


Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 400
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 402
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Read] Request device
[12:00.0] fault addr 5ec7f31000 [fault reason 06] PTE Read access is not set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 502
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Read] Request device
[12:00.0] fault addr 5ec804 [fault reason 06] PTE Read access is not set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Read] Request device
[12:00.0] fault addr 5ec53be000 [fault reason 06] PTE Read access is not set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 700
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 702
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Read] Request device
[12:00.0] fault addr 5ec6f24000 [fault reason 06] PTE Read access is not set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Write] Request device
[12:00.0] fault addr 5ec60eb000 [fault reason 05] PTE Write access is not
set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Write] Request device
[12:00.0] fault addr 5ec6684000 [fault reason 05] PTE Write access is not
set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Write] Request device
[12:00.0] fault addr 5ec607d000 [fault reason 05] PTE Write access is not
set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 300
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 302
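
These DMAR faults mean the NIC is DMA-ing to addresses the IOMMU has no
mapping for. A quick way to check which kernel driver owns the faulting
device, and which vfio mode is active (a sketch using standard sysfs paths):

    readlink /sys/bus/pci/devices/0000:12:00.0/driver
    cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode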

Thanks,
Chetan

On Mon, Feb 17, 2020 at 3:47 PM Damjan Marion  wrote:

>
> Dear Nitin,
>
> if you read Chetan’s email bellow, you will see that this one is already
> excluded…
>
> Also, it will not be easy to explain how this patch blows tx function in
> dpdk mlx5 pmd…
>
> —
> Damjan
>
> > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
> >
> > Hi Prashant/Chetan,
> >
> > I would try following change first to solve the problem in 1908
> >
> > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> > Author: Damjan Marion 
> > Date:   Tue Mar 12 18:14:15 2019 +0100
> >
> > vlib: don't use vector for keeping buffer indices in
> >
> > Type: refactor
> >
> > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> > Signed-off-by: Damjan Marion damar...@cisco.com
> >
> > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
> branch to stable/1908
> >
> > Thanks,
> > Nitin
> >
> > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
> Marion via Lists.Fd.Io
> > Sent: Monday, February 17, 2020 1:52 PM
> > To: chetan bhasin 
> > Cc: vpp-dev@lists.fd.io
> > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
> >
> > External Email
> >
> > On 17 Feb 2020, at 07:37, chetan bhasin 
> wrote:
> >
> > Bottom line is stable/vpp 908 does not work with higher number of
> buffers but stable/vpp2001 does. Could you please advise which area we can
> look at ,as it would be difficult for us to move to vpp2001 at this time.
> >
> > I really don’t have idea what caused this problem to disappear.
> > You may try to use “git bisect” to find out which commit fixed it….
> >
> > —
> > Damjan
> >
> >
> >
> > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
>  wrote:
> > Thanks Damjan for the reply!
> >
> > Following are my observations on Intel X710/XL710 pci-
> > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
> ethernet-input l3 mac mismatch"
> > With Buffers 537600
> > vpp# show buffers
> > Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> > default-numa-0  0     0     2496  2048       537600  510464  1319    25817
> > default-numa-1  1     1     2496  2048       537600  528896  390     8314
> >
> > vpp# show hardware-interfaces
> >   NameIdx   Link  Hardware
> > BondEthernet0  3 up   BondEthernet0
> >   Link speed: unknown
> >   Ethernet address 3c:fd:fe:b5:5e:40
> > FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
> >   

Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-16 Thread chetan bhasin
Bottom line: stable/vpp1908 does not work with a higher number of buffers
but stable/vpp2001 does. Could you please advise which area we can look at,
as it would be difficult for us to move to vpp2001 at this time?

On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
 wrote:

> Thanks Damjan for the reply!
>
> Following are my observations on Intel X710/XL710 pci-
> 1) I took the latest code base from stable/vpp19.08: seeing the error
> "ethernet-input l3 mac mismatch"
> *With Buffers 537600*
>
>
>
> vpp# show buffers
> Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> default-numa-0  0     0     2496  2048       537600  510464  1319    25817
> default-numa-1  1     1     2496  2048       537600  528896  390     8314
>
>
> vpp# show hardware-interfaces
>   Name   Idx   Link   Hardware
> BondEthernet0  3 up   BondEthernet0
>   Link speed: unknown
>   Ethernet address 3c:fd:fe:b5:5e:40
> FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd rx-ip4-cksum
> rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
>outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
>scatter keep-crc
> rx offload active: ipv4-cksum
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum
> sctp-cksum
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>mbuf-fast-free
> tx offload active: none
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
> ipv6-frag
>ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag
> ipv6-tcp
>ipv6-udp ipv6-other
> tx burst function: i40e_xmit_pkts_vec_avx2
> rx burst function: i40e_recv_pkts_vec_avx2
> tx errors 17
> rx frames ok4585
> rx bytes ok   391078
> extended stats:
>   rx good packets   4585
>   rx good bytes   391078
>   tx errors   17
>   rx multicast packets  4345
>   rx broadcast packets   243
>   rx unknown protocol packets   4588
>   rx size 65 to 127 packets 4529
>   rx size 128 to 255 packets  32
>   rx size 256 to 511 packets  26
>   rx size 1024 to 1522 packets 1
>   tx size 65 to 127 packets   33
> FortyGigabitEthernet12/0/1 2 up   FortyGigabitEthernet12/0/1
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd rx-ip4-cksum
> rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> pci: device 8086:1583 subsystem 8086: address :12:00.01 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
>outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
>scatter keep-crc
> rx offload active: ipv4-cksum
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum
> sctp-cksum
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>mbuf-fast-free
> tx offload active: none
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
> ipv6-frag
>ipv6-tc

Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-16 Thread chetan bhasin
  rx size 65 to 127 packets 4528
  rx size 128 to 255 packets  32
  rx size 256 to 511 packets  26
  rx size 1024 to 1522 packets 1
  tx size 65 to 127 packets   33


*As per packet trace -*
Packet 4
00:00:54:955863: dpdk-input
  FortyGigabitEthernet12/0/0 rx queue 0
  buffer 0x13fc728: current data 0, length 68, buffer-pool 0, ref-count 1,
totlen-nifb 0, trace handle 0x103

ext-hdr-valid
|
l4-cksum-computed l4-cksum-correct
  PKT MBUF: port 0, nb_segs 1, pkt_len 68
buf_len 2176, data_len 68, ol_flags 0x180, data_off 128, phys_addr
0xde91ca80
packet_type 0x1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  0x: 00:00:00:00:00:00 -> 00:00:00:00:00:00
00:00:54:955864: bond-input
  src 00:00:00:00:00:00, dst 00:00:00:00:00:00, FortyGigabitEthernet12/0/0
-> BondEthernet0
00:00:54:955864: ethernet-input
  0x: 00:00:00:00:00:00 -> 00:00:00:00:00:00
00:00:54:955865: error-drop
  rx:BondEthernet0
00:00:54:955865: drop
  ethernet-input: l3 mac mismatch

2) I took the latest code-base from the stable/vpp2001 branch: everything
looks fine with *Buffers 537600*.

3) I took the commit previous to "vlib: don't use vector for keeping buffer
indices in the pool", i.e. "df0191ead2cf39611714b6603cdc5bdddc445b57":
everything looks fine with *Buffers 537600*.
So this clearly shows the above commit will not fix our problem.



Thanks,
Chetan

On Wed, Feb 12, 2020 at 9:07 PM Damjan Marion  wrote:

>
> Shouldn’t be too hard to checkout commit prior to that one and test if
> problem is still there…
>
> —
> Damjan
>
>
> On 12 Feb 2020, at 14:50, chetan bhasin 
> wrote:
>
> Hi,
>
> Looking into the changes in vpp 20.01, the below change looks important,
> as it relates to buffer indices.
>
> *vlib: don't use vector for keeping buffer indices in the pool *
>
> Type: refactor
>
>
> Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
>
> Signed-off-by: Damjan Marion 
>
>
>
> https://github.com/FDio/vpp/commit/b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b#diff-2260a8080303fbcc30ef32f782b4d6df
>
>
> Can anybody suggest  ?
>
> Shouldn’t be too hard to checkout commit prior to that one and test if
> problem is still there…
>
> —
> Damjan
>
>


Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-12 Thread chetan bhasin
Hi,

Looking into the changes in vpp 20.01, the below change looks important, as
it relates to buffer indices.

*vlib: don't use vector for keeping buffer indices in the pool *

Type: refactor



Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4

Signed-off-by: Damjan Marion 



https://github.com/FDio/vpp/commit/b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b#diff-2260a8080303fbcc30ef32f782b4d6df


Can anybody suggest  ?

Thanks,
Chetan
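
Following the "git bisect" suggestion from earlier in the thread, a minimal
session could look like this (a sketch; in practice bisect along the master
history between the two release points, the refs here are only examples):

    git bisect start --term-old=broken --term-new=fixed
    git bisect fixed  stable/2001    # known to work
    git bisect broken stable/1908    # exhibits the problem
    # rebuild and re-test at each step git suggests, then mark it:
    git bisect fixed                 # or: git bisect broken
    git bisect reset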

On Tue, Feb 11, 2020 at 1:17 PM chetan bhasin via Lists.Fd.Io
 wrote:

> Hi,
>
> Any direction regarding the crash when we increase vlib_buffer count ? We
> are using vpp
> vpp# show version verbose
> Version:  v19.08.1-155~g09ca6fa-dirty
> Compiled by:  cbhasin
> Compile host: bfs-dl360g10-14-vm25
> Compile date: Tue 11 Feb 05:52:33 GMT 2020
> Compile location:
> /nfs-bfs/workspace/odc/cbhasin/ngp/mainline/third-party/vpp/vpp_1908
> Compiler: GCC 7.3.1 20180303 (Red Hat 7.3.1-5)
>
> Back-trace are provided by Prashant https://pastebin.com/1YS3ZWeb
>
> Thanks,
> Chetan Bhasin
>
> On Tue, Feb 4, 2020 at 3:07 PM Prashant Upadhyaya 
> wrote:
>
>> Thanks Benoit.
>> I don't have the core files at the moment (still taming the huge cores
>> that are generated, so they were disabled on the setup)
>> Backtraces are present at (with indicated config of the parameter) --
>> https://pastebin.com/1YS3ZWeb
>> It is a dual numa setup.
>>
>> Regards
>> -Prashant
>>
>>
>> On Tue, Feb 4, 2020 at 1:55 PM Benoit Ganne (bganne) 
>> wrote:
>> >
>> > Hi Prashant,
>> >
>> > Can you share your configuration and at least a backtrace of the crash?
>> Or even better a corefile:
>> https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html
>> >
>> > Best
>> > ben
>> >
>> > > -Original Message-
>> > > From: vpp-dev@lists.fd.io  On Behalf Of Prashant
>> > > Upadhyaya
>> > > Sent: mardi 4 février 2020 09:15
>> > > To: vpp-dev@lists.fd.io
>> > > Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
>> > >
>> > >  Woops, my mistake. I think I multiplied by 1024 extra.
>> > > Mbuf's are 2KB's, not 2 MB's (that's the huge page size)
>> > >
>> > > But the fact remains that my usecase is unstable at higher configured
>> > > buffers but is stable at lower values like 10 (this can by all
>> > > means be my usecase/code specific issue)
>> > >
>> > > If anybody else facing issues with higher configured buffers, please
>> do
>> > > share.
>> > >
>> > > Regards
>> > > -Prashant
>> > >
>> > >
>> > > On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya
>> > >  wrote:
>> > > >
>> > > > Hi,
>> > > >
>> > > > I am using DPDK Plugin with VPP19.08.
>> > > > When I set the buffers-per-numa parameter to a high value, say,
>> > > > 25, I am seeing crashes in the system.
>> > > >
>> > > > (The corresponding parameter controlling number of mbufs in VPP18.01
>> > > > used to work well. This was in dpdk config section as num-mbufs)
>> > > >
>> > > > I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields
>> > > > which are uwords :-
>> > > >  uword buffer_mem_start;
>> > > >   uword buffer_mem_size;
>> > > >
>> > > > Is it a mem size overflow in case the buffers-per-numa parameter is
>> > > > set to a high value ?
>> > > > I do need a high number of DPDK mbuf's in my usecase.
>> > > >
>> > > > Regards
>> > > > -Prashant
>>
>> 
>


Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-10 Thread chetan bhasin
Hi,

Any direction regarding the crash when we increase the vlib_buffer count?
We are using this vpp build:
vpp# show version verbose
Version:  v19.08.1-155~g09ca6fa-dirty
Compiled by:  cbhasin
Compile host: bfs-dl360g10-14-vm25
Compile date: Tue 11 Feb 05:52:33 GMT 2020
Compile location:
/nfs-bfs/workspace/odc/cbhasin/ngp/mainline/third-party/vpp/vpp_1908
Compiler: GCC 7.3.1 20180303 (Red Hat 7.3.1-5)

Back-trace are provided by Prashant https://pastebin.com/1YS3ZWeb

Thanks,
Chetan Bhasin
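
On the uword overflow question quoted below: on a 64-bit build a uword is
64 bits, so the buffer memory size itself is nowhere near overflowing. A
back-of-the-envelope check (a standalone C sketch; the values are examples,
with 2496 taken from the Size column of 'show buffers'):

    #include <stdint.h>
    #include <stdio.h>

    int main (void)
    {
      uint64_t buffers_per_numa = 250000; /* example configured count */
      uint64_t buffer_size = 2496;        /* bytes per buffer incl. metadata */
      uint64_t total = buffers_per_numa * buffer_size;

      /* prints 624000000 bytes (~595 MB), far below the 64-bit limit */
      printf ("%llu bytes (~%llu MB)\n",
              (unsigned long long) total,
              (unsigned long long) (total >> 20));
      return 0;
    }

So a plain uword overflow looks unlikely; the DMAR faults seen elsewhere in
this thread point more towards IOMMU/DMA mapping of the buffer memory.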

On Tue, Feb 4, 2020 at 3:07 PM Prashant Upadhyaya 
wrote:

> Thanks Benoit.
> I don't have the core files at the moment (still taming the huge cores
> that are generated, so they were disabled on the setup)
> Backtraces are present at (with indicated config of the parameter) --
> https://pastebin.com/1YS3ZWeb
> It is a dual numa setup.
>
> Regards
> -Prashant
>
>
> On Tue, Feb 4, 2020 at 1:55 PM Benoit Ganne (bganne) 
> wrote:
> >
> > Hi Prashant,
> >
> > Can you share your configuration and at least a backtrace of the crash?
> Or even better a corefile:
> https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html
> >
> > Best
> > ben
> >
> > > -Original Message-
> > > From: vpp-dev@lists.fd.io  On Behalf Of Prashant
> > > Upadhyaya
> > > Sent: mardi 4 février 2020 09:15
> > > To: vpp-dev@lists.fd.io
> > > Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
> > >
> > >  Woops, my mistake. I think I multiplied by 1024 extra.
> > > Mbuf's are 2KB's, not 2 MB's (that's the huge page size)
> > >
> > > But the fact remains that my usecase is unstable at higher configured
> > > buffers but is stable at lower values like 10 (this can by all
> > > means be my usecase/code specific issue)
> > >
> > > If anybody else facing issues with higher configured buffers, please do
> > > share.
> > >
> > > Regards
> > > -Prashant
> > >
> > >
> > > On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya
> > >  wrote:
> > > >
> > > > Hi,
> > > >
> > > > I am using DPDK Plugin with VPP19.08.
> > > > When I set the buffers-per-numa parameter to a high value, say,
> > > > 25, I am seeing crashes in the system.
> > > >
> > > > (The corresponding parameter controlling number of mbufs in VPP18.01
> > > > used to work well. This was in dpdk config section as num-mbufs)
> > > >
> > > > I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields
> > > > which are uwords :-
> > > >  uword buffer_mem_start;
> > > >   uword buffer_mem_size;
> > > >
> > > > Is it a mem size overflow in case the buffers-per-numa parameter is
> > > > set to a high value ?
> > > > I do need a high number of DPDK mbuf's in my usecase.
> > > >
> > > > Regards
> > > > -Prashant
> 
>


Re: [vpp-dev] Issue coming while fib lookup in vpp 18.01 between /8 and default route

2020-02-03 Thread chetan bhasin
Thanks Neale for response. I will take a look.

On Mon, Feb 3, 2020 at 2:58 PM Neale Ranns (nranns) 
wrote:

>
>
> 18.01 might be missing patches in ip4_mtrie.c.
>
>
>
> /neale
>
>
>
> *From: * on behalf of chetan bhasin <
> chetan.bhasin...@gmail.com>
> *Date: *Friday 31 January 2020 at 11:47
> *To: *vpp-dev 
> *Subject: *[vpp-dev] Issue coming while fib lookup in vpp 18.01 between
> /8 and default route
>
>
>
> Hello Everyone,
>
>
>
> I know that vpp 18.01 is not supported further, but can anybody please
> provide a direction towards the below issue:
>
>
>
> We have two routes -
>
> 1) defaut gateway via 2.2.2.2
>
> 2) 10.0.0.0/8 via 3.3.3.3
>
>
>
> Trying to ping 10.20.x.x via VPP but it is going via 1) default gateway
> but it should go via 2) 3.3.3.3
>
>
>
> Note : its working fine if we add route with subnet /16.
>
>
>
> show ip fib looks like as below :
>
> 0.0.0.0/0
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:43
> to:[17:1012]]
> [0] [@5]: ipv4 via 2.2.2.2 device_b/0/0: 0c9ff024000c29b196440800
> 0.0.0.0/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
> [0] [@0]: dpo-drop ip4
> 10.0.0.0/8
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:40 to:[0:0]]
> [0] [@5]: ipv4 via 3.3.3.3 device_13/0/0: 0c9ff032000c29b1964e0800
>
>
>
> Thanks,
>
> Chetan Bhasin
>


[vpp-dev] Issue coming while fib lookup in vpp 18.01 between /8 and default route

2020-01-31 Thread chetan bhasin
Hello Everyone,

I know that vpp 18.01 is no longer supported, but can anybody please
provide direction on the below issue:

We have two routes:
1) default gateway via 2.2.2.2
2) 10.0.0.0/8 via 3.3.3.3

When pinging 10.20.x.x via VPP, traffic goes via 1) the default gateway,
but it should go via 2) 3.3.3.3 (the longest-prefix match).

Note: it works fine if we add the route with a /16 subnet.

show ip fib looks like as below :
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:43 to:[17:1012]]
[0] [@5]: ipv4 via 2.2.2.2 device_b/0/0: 0c9ff024000c29b196440800
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
[0] [@0]: dpo-drop ip4
10.0.0.0/8
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:40 to:[0:0]]
[0] [@5]: ipv4 via 3.3.3.3 device_13/0/0: 0c9ff032000c29b1964e0800
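
A minimal reproduction, in case it helps (a sketch; 10.20.1.1 stands in for
any address under 10.0.0.0/8):

    vpp# ip route add 0.0.0.0/0 via 2.2.2.2
    vpp# ip route add 10.0.0.0/8 via 3.3.3.3
    vpp# show ip fib 10.20.1.1
    # expected: the 10.0.0.0/8 entry (longest prefix match), not 0.0.0.0/0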


Thanks,
Chetan Bhasin


Re: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx queue

2020-01-13 Thread chetan bhasin
Thanks Benoit! I will try the above-mentioned steps.

I am not sure why it works fine with a 2 Rx / 2 Tx queue configuration:


 GigabitEthernet13/0/0  1 up   GigabitEthernet13/0/0
  Link speed: 10 Gbps
  Ethernet address 00:50:56:9b:f5:c5
  VMware VMXNET3
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg rx-ip4-cksum
Devargs:
rx: queues 2 (max 16), desc 1024 (min 128 max 4096 align 1)
tx: queues 2 (max 8), desc 1024 (min 512 max 4096 align 1)
pci: device 15ad:07b0 subsystem 15ad:07b0 address :13:00.00 numa 0
max rx packet len: 16384
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
   vlan-filter jumbo-frame scatter
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   multi-segs
tx offload active: multi-segs
rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6-udp ipv6
rss active:none
tx burst function: vmxnet3_xmit_pkts
rx burst function: vmxnet3_recv_pkts
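
For reference, the dpdk stanza that behaves for us (a sketch based on our
startup.conf; only the queue counts differ from the failing one-queue setup):

    dpdk {
      dev default {
        num-rx-desc 1024
        num-tx-desc 1024
        num-rx-queues 2
        num-tx-queues 2
      }
      dev 0000:13:00.0 { }
      dev 0000:1b:00.0 { }
    }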


Thanks,
Chetan Bhasin

On Mon, Jan 13, 2020 at 9:17 PM Benoit Ganne (bganne) 
wrote:

> Hmm,
>
>  - I suppose you run VPP as root and not in a container
>  - if you use CentOS/RHEL can you check disabling SELinux ('setenforce 0')
>  - can you share the output of Linux dmesg and VPP 'show pci'
>
> Best
> ben
>
> > -Original Message-
> > From: chetan bhasin 
> > Sent: lundi 13 janvier 2020 15:51
> > To: Benoit Ganne (bganne) 
> > Cc: vpp-dev 
> > Subject: Re: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx
> > queue
> >
> > Hi Benoit,
> >
> > Thanks for your prompt response.
> >
> > We are migrating from vpp 18.01 to vpp.19.08 , that's why we want least
> > modification in our build system and we want to use DPDK as we were using
> > earlier
> >
> > .
> > DBGvpp# show log
> > 2020/01/13 14:44:42:014 notice dhcp/clientplugin initialized
> > 2020/01/13 14:44:42:051 warn   dpdk   EAL init args: -c 14 -n
> > 4 --in-memory --log-level debug --file-prefix vpp -w :1b:00.0 -w
> > :13:00.0 --master-lcore 4
> > 2020/01/13 14:44:42:603 notice dpdk   DPDK drivers found 2
> > ports...
> > 2020/01/13 14:44:42:622 notice dpdk   EAL: Detected 6
> lcore(s)
> > 2020/01/13 14:44:42:622 notice dpdk   EAL: Detected 1 NUMA
> > nodes
> >
> > 2020/01/13 14:44:42:623 notice dpdk   EAL: PCI device
> > :13:00.0 on NUMA socket -1
> > 2020/01/13 14:44:42:623 notice dpdk   EAL:   Invalid NUMA
> > socket, default to 0
> > 2020/01/13 14:44:42:623 notice dpdk   EAL:   probe driver:
> > 15ad:7b0 net_vmxnet3
> > 2020/01/13 14:44:42:623 notice dpdk   EAL:   using IOMMU type
> > 8 (No-IOMMU)
> > 2020/01/13 14:44:42:623 notice dpdk   EAL: Ignore mapping IO
> > port bar(3)
> > 2020/01/13 14:44:42:623 notice dpdk   EAL: PCI device
> > :1b:00.0 on NUMA socket -1
> > 2020/01/13 14:44:42:623 notice dpdk   EAL:   Invalid NUMA
> > socket, default to 0
> > 2020/01/13 14:44:42:623 notice dpdk   EAL:   probe driver:
> > 15ad:7b0 net_vmxnet3
> > 2020/01/13 14:44:42:623 notice dpdk   EAL: Ignore mapping IO
> > port bar(3)
> > 2020/01/13 14:45:02:475 err    dpdk   Interface
> > GigabitEthernet13/0/0 error 1: Operation not permitted
> > 2020/01/13 14:45:02:475 notice dpdk
> > vmxnet3_v4_rss_configure(): Set RSS fields (v4) failed: 1
> > 2020/01/13 14:45:02:475 notice dpdk   vmxnet3_dev_start():
> > Failed to configure v4 RSS
> >
> >
> >
> > Thanks,
> > Chetan Bhasin
> >
> > On Mon, Jan 13, 2020 at 7:58 PM Benoit Ganne (bganne)  > <mailto:bga...@cisco.com> > wrote:
> >
> >
> >   Hi Chetan,
> >
> >   Any reason for not using VPP built-in vmxnet3 driver instead of
> > DPDK? That should give you better performance and would be easier for us
> > to debug. See https://docs.fd.io/vpp/20.01/d2/d1a/vmxnet3_doc.html
> >
> >   Otherwise, can you share 'show logging' output?
> >
> >   Ben
> >
> >   > -Original Message-
> >   > From: vpp-dev@lists.fd.io <mailto:vpp-dev@lists.fd.io>   > d...@lists.fd.io <mailto:vpp-dev@lists.fd.io> > On Behalf Of chetan
> bhasin
> >   > Sent: lundi 13 

Re: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx queue

2020-01-13 Thread chetan bhasin
Hi Benoit,

Thanks for your prompt response.

We are migrating from vpp 18.01 to vpp 19.08; that's why we want minimal
modification in our build system, and we want to use DPDK as we were using
earlier.

DBGvpp# show log
2020/01/13 14:44:42:014 notice dhcp/clientplugin initialized
2020/01/13 14:44:42:051 warn   dpdk   EAL init args: -c 14 -n 4
--in-memory --log-level debug --file-prefix vpp -w :1b:00.0 -w
:13:00.0 --master-lcore 4
2020/01/13 14:44:42:603 notice dpdk   DPDK drivers found 2
ports...
2020/01/13 14:44:42:622 notice dpdk   EAL: Detected 6 lcore(s)
2020/01/13 14:44:42:622 notice dpdk   EAL: Detected 1 NUMA nodes
2020/01/13 14:44:42:623 notice dpdk   EAL: PCI device
:13:00.0 on NUMA socket -1
2020/01/13 14:44:42:623 notice dpdk   EAL:   Invalid NUMA
socket, default to 0
2020/01/13 14:44:42:623 notice dpdk   EAL:   probe driver:
15ad:7b0 net_vmxnet3
2020/01/13 14:44:42:623 notice dpdk   EAL:   using IOMMU type 8
(No-IOMMU)
2020/01/13 14:44:42:623 notice dpdk   EAL: Ignore mapping IO
port bar(3)
2020/01/13 14:44:42:623 notice dpdk   EAL: PCI device
:1b:00.0 on NUMA socket -1
2020/01/13 14:44:42:623 notice dpdk   EAL:   Invalid NUMA
socket, default to 0
2020/01/13 14:44:42:623 notice dpdk   EAL:   probe driver:
15ad:7b0 net_vmxnet3
2020/01/13 14:44:42:623 notice dpdk   EAL: Ignore mapping IO
port bar(3)
2020/01/13 14:45:02:475 errdpdk   Interface
GigabitEthernet13/0/0 error 1: Operation not permitted

2020/01/13 14:45:02:475 notice dpdk   vmxnet3_v4_rss_configure(): Set RSS fields (v4) failed: 1
2020/01/13 14:45:02:475 notice dpdk   vmxnet3_dev_start(): Failed to configure v4 RSS


Thanks,
Chetan Bhasin

On Mon, Jan 13, 2020 at 7:58 PM Benoit Ganne (bganne) 
wrote:

> Hi Chetan,
>
> Any reason for not using VPP built-in vmxnet3 driver instead of DPDK? That
> should give you better performance and would be easier for us to debug. See
> https://docs.fd.io/vpp/20.01/d2/d1a/vmxnet3_doc.html
>
> Otherwise, can you share 'show logging' output?
>
> Ben
>
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of chetan
> bhasin
> > Sent: lundi 13 janvier 2020 15:20
> > To: vpp-dev 
> > Subject: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx queue
> >
> > Hello Everyone,
> >
> > I am facing an issue while bringing up vpp with less than 2 rx and 2 tx
> > queue. I am using vpp19.08. I have configured pci's under the dpdk
> section
> > like below -
> >
> > 1)
> > dpdk {
> > # dpdk-config
> >  dev default {
> >  num-rx-desc 1024
> >  num-rx-queues 1
> >  num-tx-desc 1024
> >  num-tx-queues 1
> > # vlan-strip-offload off
> >  }
> > dev :1b:00.0 {
> > }
> > dev :13:00.0 {
> > }
> > }
> >
> > When I bring pci state to up  , it is showing error in "show hardware-
> > interfaces"
> >
> >  DBGvpp# set interface state GigabitEthernet13/0/0 up  DBGvpp# show
> > hardware-interfaces
> >   NameIdx   Link  Hardware
> > GigabitEthernet13/0/0  1down  GigabitEthernet13/0/0
> >   Link speed: 10 Gbps
> >   Ethernet address 00:50:56:9b:f5:c5
> >   VMware VMXNET3
> > carrier down
> > flags: admin-up pmd maybe-multiseg rx-ip4-cksum
> > Devargs:
> > rx: queues 1 (max 16), desc 1024 (min 128 max 4096 align 1)
> > tx: queues 1 (max 8), desc 1024 (min 512 max 4096 align 1)
> > pci: device 15ad:07b0 subsystem 15ad:07b0 address :13:00.00 numa
> 0
> > max rx packet len: 16384
> > promiscuous: unicast off all-multicast off
> > vlan offload: strip off filter off qinq off
> > rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
> >vlan-filter jumbo-frame scatter
> > rx offload active: ipv4-cksum jumbo-frame scatter
> > tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
> >multi-segs
> > tx offload active: multi-segs
> > rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6-udp ipv6
> > rss active:none
> > tx burst function: vmxnet3_xmit_pkts
> > rx burst function: vmxnet3_recv_pkts
> >   Errors:
> >
> > rte_eth_dev_start[port:0, errno:1]: Operation not permitted
> >
> > 2) When bring up system without "dev default " section , still facing the
> > same issue , this time default [Rx-queue is 1 and tx-queue is 2 (main
> > thread + 1 wo

[vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx queue

2020-01-13 Thread chetan bhasin
Hello Everyone,

I am facing an issue while bringing up vpp with *fewer than 2 rx and 2 tx
queues*. I am using vpp 19.08. I have configured the PCIs under the dpdk
section like below -

1)
dpdk {
# dpdk-config
 dev default {
 num-rx-desc 1024
 num-rx-queues 1
 num-tx-desc 1024
 num-tx-queues 1
# vlan-strip-offload off
 }
dev :1b:00.0 {
}
dev :13:00.0 {
}
}

When I bring the PCI state up, it shows an error in "show
hardware-interfaces":

 DBGvpp# set interface state GigabitEthernet13/0/0 up
 DBGvpp# show hardware-interfaces
  NameIdx   Link  Hardware
GigabitEthernet13/0/0  1down  GigabitEthernet13/0/0
  Link speed: 10 Gbps
  Ethernet address 00:50:56:9b:f5:c5
  VMware VMXNET3
carrier down
flags: admin-up pmd maybe-multiseg rx-ip4-cksum
Devargs:


rx: queues 1 (max 16), desc 1024 (min 128 max 4096 align 1)
tx: queues 1 (max 8), desc 1024 (min 512 max 4096 align 1)
pci: device 15ad:07b0 subsystem 15ad:07b0 address :13:00.00 numa 0
max rx packet len: 16384
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
   vlan-filter jumbo-frame scatter
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   multi-segs
tx offload active: multi-segs
rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6-udp ipv6
rss active:none
tx burst function: vmxnet3_xmit_pkts
rx burst function: vmxnet3_recv_pkts
  Errors:
    rte_eth_dev_start[port:0, errno:1]: Operation not permitted

2) When bringing up the system without the "*dev default*" section, I am
still facing the same issue; this time the defaults are [rx-queue 1 and
tx-queue 2 (main thread + 1 worker)].

DBGvpp# show hardware-interfaces
  NameIdx   Link  Hardware
GigabitEthernet13/0/0  1down  GigabitEthernet13/0/0
  Link speed: 10 Gbps
  Ethernet address 00:50:56:9b:f5:c5
  VMware VMXNET3
carrier down
flags: admin-up pmd maybe-multiseg rx-ip4-cksum
Devargs:


rx: queues 1 (max 16), desc 1024 (min 128 max 4096 align 1)
tx: queues 2 (max 8), desc 1024 (min 512 max 4096 align 1)
pci: device 15ad:07b0 subsystem 15ad:07b0 address :13:00.00 numa 0
max rx packet len: 16384
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
   vlan-filter jumbo-frame scatter
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   multi-segs
tx offload active: multi-segs
rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6-udp ipv6
rss active:none
tx burst function: vmxnet3_xmit_pkts
rx burst function: vmxnet3_recv_pkts

  Errors:
    rte_eth_dev_start[port:0, errno:1]: Operation not permitted

Thanks,
Chetan Bhasin


Re: [vpp-dev] Compilation failure in master branch after enabling ENABLE_SANITIZE_ADDR

2020-01-09 Thread chetan bhasin
Thanks Benoit for the clarification of the compilation issue. I will try
with gcc-8.

Thanks and Regards,
Chetan Bhasin

On Thu, Jan 9, 2020 at 7:04 PM Benoit Ganne (bganne) 
wrote:

> Hi,
>
> > I have tried with GCC  (gcc version 7.3.1 20180303 (Red Hat 7.3.1-5)
> > (GCC)) , but still facing the compilation error.
>
> I just reproduced the issue with GCC-7 for release builds, however GCC-8
> and GCC-9 works fine. If possible, please use GCC-8 for that:
> ~# yum install devtoolset-8-libasan-devel devtoolset-8
> ~# make rebuild-release VPP_EXTRA_CMAKE_ARGS=-DENABLE_SANITIZE_ADDR=ON
> CC=/opt/rh/devtoolset-8/root/bin/gcc
>
> To be honest, I am using GCC-9 myself. Address Sanitizer always get a lot
> of bugfixes/enhancements with each update...
>
> > Another issue I am facing while bringing up debug sanitized build in api
>
> Please at least share the whole backtrace, there are not enough
> information here.
>
> Best
> ben
>


Re: [vpp-dev] Compilation failure in master branch after enabling ENABLE_SANITIZE_ADDR

2020-01-08 Thread chetan bhasin
Hi Benoit,

Thank you for your response.

I have tried with GCC (gcc version 7.3.1 20180303 (Red Hat 7.3.1-5) (GCC)),
but am still facing the compilation error.

Another issue I am facing while bringing up the debug sanitized build is in the api path:

#5  0x2bca8a33 in VL_MSG_API_POISON (a=0x2aaab988ed90)
at vpp_1908/src/vlibapi/api_common.h:162
#6  0x2bcad50a in vl_msg_api_send_shmem (q=0x20048480,
elem=0x2aaab988ed90 "\b\311\032")
at vpp_1908/src/vlibmemory/memory_shared.c:793
#7  0x2bce79e3 in send_one_plugin_msg_ids_msg (name=0x2aaaba711080
"abf_56330065", first_msg_id=844, last_msg_id=853)
at vpp_1908/src/vlibmemory/vlib_api.c:203
#8  0x2bce80a3 in vl_api_clnt_process (vm=0x2ee337c0
, node=0x2aaab9886000, f=0x0)
at vpp_1908/src/vlibmemory/vlib_api.c:299


Thanks,
Chetan Bhasin

On Wed, Jan 8, 2020 at 2:42 PM Benoit Ganne (bganne) 
wrote:

> >>> While working with Address sanitizer I am facing compilation
> >>> error in vpp
> >> Can you share on which sha1 you are, which distro and which
> >> compiler and compiler version?
> > gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
>
> Outch... Your problem is right there: gcc-4 is too old. Address Sanitizer
> is still in very active development, I'd recommend *at a bare minimum*
> gcc-7, but if you can use gcc-8 or gcc-9 it is even better.
> I wouldn't invest too much time trying to make it work with gcc-4.
>
> ben
>


Re: [vpp-dev] Compilation failure in master branch after enabling ENABLE_SANITIZE_ADDR

2020-01-07 Thread chetan bhasin
+
1) make build-release (Issue is coming while compilation)
2) make build (Compilation successful.)
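
For what it's worth, a plausible explanation for the debug/release split is
how always_inline is defined; a sketch of src/vppinfra/clib.h from memory
(worth checking against the exact tree):

#if CLIB_DEBUG > 0
#define always_inline static inline
#else
#define always_inline static inline __attribute__ ((__always_inline__))
#endif

In a debug build (make build) the attribute disappears, so GCC-7 never has to
force the inline and the "function attribute mismatch" error cannot trigger;
in a release build it must, and the Address Sanitizer instrumentation is what
makes the caller/callee attributes differ.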


On Wed, Jan 8, 2020 at 10:19 AM chetan bhasin 
wrote:

> Hi Benoit,
>
> Please find the details as below :
>
> 1) As per git log , last check-in is :
>  commit 22e108d9a94a9ccc0c31c2479740c57cf2a09126
>   Author: Ole Troan 
>  Date:   Tue Jan 7 09:30:05 2020 +0100
>  Change-Id: I8bd6bb95135dc280565f357aa5850292f66979a1
>
> 2) gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
>
> 3) -bash-4.2$ uname -a
> Linux bfs-dl360g10-14-vm25 3.10.0-514.16.1.el7.x86_64 #1 SMP Fri Mar 10
> 13:12:32 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
>
>
> Thanks,
> Chetan Bhasin
>
> On Tue, Jan 7, 2020 at 10:22 PM Benoit Ganne (bganne) 
> wrote:
>
>> > While working with Address sanitizer I am facing compilation error in
>> vpp
>> > master branch.
>>
>> Can you share on which sha1 you are, which distro and which compiler and
>> compiler version?
>>
>> Ben
>>
>


Re: [vpp-dev] Compilation failure in master branch after enabling ENABLE_SANITIZE_ADDR

2020-01-07 Thread chetan bhasin
Hi Benoit,

Please find the details as below :

1) As per git log , last check-in is :
 commit 22e108d9a94a9ccc0c31c2479740c57cf2a09126
  Author: Ole Troan 
 Date:   Tue Jan 7 09:30:05 2020 +0100
 Change-Id: I8bd6bb95135dc280565f357aa5850292f66979a1

2) gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)

3) -bash-4.2$ uname -a
Linux bfs-dl360g10-14-vm25 3.10.0-514.16.1.el7.x86_64 #1 SMP Fri Mar 10
13:12:32 EST 2017 x86_64 x86_64 x86_64 GNU/Linux


Thanks,
Chetan Bhasin

On Tue, Jan 7, 2020 at 10:22 PM Benoit Ganne (bganne) 
wrote:

> > While working with Address sanitizer I am facing compilation error in vpp
> > master branch.
>
> Can you share on which sha1 you are, which distro and which compiler and
> compiler version?
>
> Ben
>


[vpp-dev] Compilation failure in master branch after enabling ENABLE_SANITIZE_ADDR

2020-01-07 Thread chetan bhasin
Hello Everyone,

While working with Address sanitizer I am facing compilation error in vpp
master branch.

Can anybody guide me please .

vpp/src/vppinfra/dlmalloc.c:8:
vpp/src/vppinfra/dlmalloc.c: In function ‘mspace_get_aligned’:
vpp/src/vppinfra/clib.h:226:1: error: inlining failed in call to
always_inline ‘max_pow2’: function attribute mismatch
 max_pow2 (uword x)
 ^~~~
vpp/src/vppinfra/dlmalloc.c:4256:9: note: called from here
   align = max_pow2 (align);

Thanks,
Chetan Bhasin


Re: [vpp-dev] vpp 19.08 stuck at internal_mallinfo with 8 workers configuration

2019-12-24 Thread chetan bhasin
Hi Shiva,

After back-porting, VPP is coming up fine even with 16 workers.

Thanks a lot!

Regards,
Chetan Bhasin

On Mon, Dec 23, 2019 at 6:08 PM chetan bhasin 
wrote:

> Thanks Shiva for reply!
>
> We are using VPP 19.08.1. Let me try with fix provided.
>
> Thanks,
> Chetan Bhasin
>
>
> On Mon, Dec 23, 2019 at 5:37 PM Shiva Shankar 
> wrote:
>
>> Hi Chetan,
>> Are you using latest master code? If not, can you verify your issue with
>> below commit?
>> https://gerrit.fd.io/r/#/c/vpp/+/22527/
>>
>> On Mon, Dec 23, 2019 at 4:32 PM chetan bhasin 
>> wrote:
>>
>>> Hello Everyone,
>>>
>>> Merry Christmas everybody!
>>>
>>> We were using VPP 18.01 earlier that worked fine with 16 workers , now
>>> we moved to vpp 19.08 and facing the below problem.
>>>
>>> A direction from you is much appreciated.
>>>
>>> Thread 1 (Thread 0x2b027f144ec0 (LWP 68074)):
>>> #0  0x2b028125d448 in internal_mallinfo (m=0x2b050cacb010) at
>>> third-party/vpp/vpp_1908/src/vppinfra/dlmalloc.c:2094
>>> #1  mspace_mallinfo (msp=0x2b050cacb010) at
>>> /third-party/vpp/vpp_1908/src/vppinfra/dlmalloc.c:4797
>>> #2  0x2b028125fcb6 in mheap_usage (heap=,
>>> usage=usage@entry=0x2b0284d04f40) at
>>> third-party/vpp/vpp_1908/src/vppinfra/mem_dlmalloc.c:389
>>> #3  0x0040a49e in do_stat_segment_updates (sm=0x6cca00
>>> ) at
>>> third-party/vpp/vpp_1908/src/vpp/stats/stat_segment.c:620
>>> #4  stat_segment_collector_process (vm=0x2b028096dc40
>>> , rt=, f=) at
>>> third-party/vpp/vpp_1908/src/vpp/stats/stat_segment.c:717
>>> #5  0x2b02807022d6 in vlib_process_bootstrap (_a=) at
>>> third-party/vpp/vpp_1908/src/vlib/main.c:2754
>>> #6  0x2b02811fff54 in clib_calljmp ()
>>>
>>> Thanks ,
>>> Chetan Bhasin
>>>
>>


Re: [vpp-dev] vpp 19.08 stuck at internal_mallinfo with 8 workers configuration

2019-12-23 Thread chetan bhasin
Thanks Shiva for reply!

We are using VPP 19.08.1. Let me try the fix provided.

Thanks,
Chetan Bhasin


On Mon, Dec 23, 2019 at 5:37 PM Shiva Shankar 
wrote:

> Hi Chetan,
> Are you using latest master code? If not, can you verify your issue with
> below commit?
> https://gerrit.fd.io/r/#/c/vpp/+/22527/
>
> On Mon, Dec 23, 2019 at 4:32 PM chetan bhasin 
> wrote:
>
>> Hello Everyone,
>>
>> Merry Christmas everybody!
>>
>> We were using VPP 18.01 earlier that worked fine with 16 workers , now we
>> moved to vpp 19.08 and facing the below problem.
>>
>> A direction from you is much appreciated.
>>
>> Thread 1 (Thread 0x2b027f144ec0 (LWP 68074)):
>> #0  0x2b028125d448 in internal_mallinfo (m=0x2b050cacb010) at
>> third-party/vpp/vpp_1908/src/vppinfra/dlmalloc.c:2094
>> #1  mspace_mallinfo (msp=0x2b050cacb010) at
>> /third-party/vpp/vpp_1908/src/vppinfra/dlmalloc.c:4797
>> #2  0x2b028125fcb6 in mheap_usage (heap=,
>> usage=usage@entry=0x2b0284d04f40) at
>> third-party/vpp/vpp_1908/src/vppinfra/mem_dlmalloc.c:389
>> #3  0x0040a49e in do_stat_segment_updates (sm=0x6cca00
>> ) at
>> third-party/vpp/vpp_1908/src/vpp/stats/stat_segment.c:620
>> #4  stat_segment_collector_process (vm=0x2b028096dc40 ,
>> rt=, f=) at
>> third-party/vpp/vpp_1908/src/vpp/stats/stat_segment.c:717
>> #5  0x00002b02807022d6 in vlib_process_bootstrap (_a=) at
>> third-party/vpp/vpp_1908/src/vlib/main.c:2754
>> #6  0x2b02811fff54 in clib_calljmp ()
>>
>> Thanks ,
>> Chetan Bhasin
>>
>


[vpp-dev] vpp 19.08 stuck at internal_mallinfo with 8 workers configuration

2019-12-23 Thread chetan bhasin
Hello Everyone,

Merry Christmas everybody!

We were using VPP 18.01 earlier, which worked fine with 16 workers; now we
have moved to vpp 19.08 and are facing the below problem.

A direction from you is much appreciated.

Thread 1 (Thread 0x2b027f144ec0 (LWP 68074)):
#0  0x2b028125d448 in internal_mallinfo (m=0x2b050cacb010) at
third-party/vpp/vpp_1908/src/vppinfra/dlmalloc.c:2094
#1  mspace_mallinfo (msp=0x2b050cacb010) at
/third-party/vpp/vpp_1908/src/vppinfra/dlmalloc.c:4797
#2  0x2b028125fcb6 in mheap_usage (heap=,
usage=usage@entry=0x2b0284d04f40) at
third-party/vpp/vpp_1908/src/vppinfra/mem_dlmalloc.c:389
#3  0x0040a49e in do_stat_segment_updates (sm=0x6cca00
) at
third-party/vpp/vpp_1908/src/vpp/stats/stat_segment.c:620
#4  stat_segment_collector_process (vm=0x2b028096dc40 ,
rt=, f=) at
third-party/vpp/vpp_1908/src/vpp/stats/stat_segment.c:717
#5  0x2b02807022d6 in vlib_process_bootstrap (_a=) at
third-party/vpp/vpp_1908/src/vlib/main.c:2754
#6  0x2b02811fff54 in clib_calljmp ()

Thanks ,
Chetan Bhasin


Re: [vpp-dev] VPP VSZ shoots to 200GB because of DPDK plugin

2019-12-15 Thread chetan bhasin
Hi,

I am also looking for a solution to the problem stated below.

Can anybody please provide direction regarding the same?

Thanks,
Chetan Bhasin

On Wed, Dec 11, 2019 at 11:30 AM siddarth rai  wrote:

> Hello all,
>
> I am working with VPP 19_04. I noticed that the VSZ of VPP is showing 200+
> GB.
>
> On further debugging, I discovered that the 'dpdk_plugin' is the one
> causing this. If I disable the dpdk plugin, the VSZ falls below 20G.
>
> Can anyone help me understand what is it in the dpdk plugin that is
> causing this bulge in VSZ? Is there anyway to reduce it ?
>
> Any help would be appreciated
>
> Regards,
> Siddarth Rai
>


Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-10 Thread chetan bhasin
Sounds good. Thanks Ben for the response!



On Tue, Dec 10, 2019 at 5:00 PM Benoit Ganne (bganne) 
wrote:

> Hi,
>
> > I have used below CLI's to create rdma interfaces over Mellanox , Can you
> > suggest what set of CLi's I should use so that packets from rdma will
> also
> > have mbuff fields set properly , so that we can directly right on KNI?
>
> You do not have to. Just create a KNI interface in VPP with the DPDK
> plugin and switch packets between KNI and rdma interfaces.
> VPP never use DPDK mbuf internally, when you get packets from/to DPDK in
> VPP you have a buffer metadata translation anyway. From our PoV this is not
> different than switching packets between a vhost interface and a DPDK
> hardware interface (eg. VIC).
>
> Best
> ben
>


Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-10 Thread chetan bhasin
Hi Damjan,

I have used the below CLIs to create rdma interfaces over Mellanox. Can you
suggest what set of CLIs I should use so that packets from rdma will also
have mbuf fields set properly, so that we can write directly to KNI?

create interface rdma host-if ens2f0 name device_9/0/0
create interface rdma host-if ens2f1 name device_9/0/1

Thanks,
Chetan Bhasin

On Fri, Dec 6, 2019 at 9:32 PM Damjan Marion via Lists.Fd.Io  wrote:

>
>
> > On 6 Dec 2019, at 07:16, Prashant Upadhyaya 
> wrote:
> >
> > Hi,
> >
> > I use VPP with DPDK driver for I/O with NIC.
> > For high speed switching of packets to and from kernel, I use DPDK KNI
> > (kernel module and user space API's provided by DPDK)
> > This works well because the vlib buffer is backed by the DPDK mbuf
> > (KNI uses DPDK mbuf's)
> >
> > Now, if I choose to use a native driver of VPP for I/O with NIC, is
> > there a native equivalent in VPP to replace KNI as well ? The native
> > equivalent should not lose out on performance as compared to KNI so I
> > believe the tap interface can be ruled out here.
> >
> > If I keep using DPDK KNI and VPP native non-dpdk driver, then I fear I
> > would have to do a data copy between the vlib buffer and an mbuf  in
> > addition to doing all the DPDK pool maintenance etc. The copies would
> > be destructive for performance surely.
> >
> > So I believe, the question is -- in presence of native drivers in VPP,
> > what is the high speed equivalent of DPDK KNI.
>
> You can use dpdk and native drivers on the same time.
> How KNI performance compares to tap with vhost-net backend?
>
>
> --
> Damjan
>
>


Re: [vpp-dev] How to save VPP configuration

2019-07-19 Thread chetan bhasin
Hi,

setup.gate should contain the set of VPP CLI commands which you want to be persistent.
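
For illustration, a minimal setup.gate is just a list of CLI lines executed
in order at startup; the interface name and address below are made up:

comment { bring up the uplink }
set interface state GigabitEthernet13/0/0 up
set interface ip address GigabitEthernet13/0/0 192.168.10.1/24

Anything you can type at vppctl can go into the file.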

Regards,
Chetan Bhasin

On Sat, Jul 20, 2019, 06:43  wrote:

> Hi Dave,
>
> what's the 'startup-config /etc/setup.gate' should be looks like?
> Does it have a doc describe the syntax of this file?
>
> Thanks!
>


Re: [vpp-dev] Vpp 19.04 (process showing 200Gig of virtual memory on VM)

2019-06-24 Thread chetan bhasin
Hi Dave,

Thanks for the response.

Resident memory is coming in around 100 MB.

As virtual memory is around 200 GB, I am worried that if vpp crashes, the
core file is going to be 200 GB.

Thanks,
Chetan Bhasin

On Mon, Jun 24, 2019 at 8:03 PM Dave Barach (dbarach) 
wrote:

> Ack.
>
>
>
> I’ve asked folks to clean up a number of features which create [large]
> data structures only when features are enabled for the first time. That’s
> the fundamental problem. Virtual space not backed by physical pages costs
> very little, so it’s not a drop-everything come-running emergency.
>
>
>
> What is the resident set size? 35mb or so?
>
>
>
> HTH... Dave
>
>
>
> From:  on behalf of chetan bhasin <
> chetan.bhasin...@gmail.com>
> Date: Monday, June 24, 2019 at 9:46 AM
> To: vpp-dev 
> Subject: [vpp-dev] Vpp 19.04 (process showing 200Gig of virtual memory
> on VM)
>
>
>
> Hello Everyone,
>
>
>
> While bringing vpp 19.04 app , we are observing 200Gig of virtual memory
> via top command.
>
>
>
> Any direction/recommendation is welcome.
>
>
>
> Thanks,
>
> Chetan Bhasin
>


[vpp-dev] Vpp 19.04 (process showing 200Gig of virtual memory on VM)

2019-06-24 Thread chetan bhasin
Hello Everyone,

While bringing up the vpp 19.04 app, we are observing 200 GB of virtual
memory via the top command.

Any direction/recommendation is welcome.

Thanks,
Chetan Bhasin


Re: [vpp-dev] I want to disable acl plugin but VPP is not coming up

2019-03-14 Thread chetan bhasin
Thanks a lot Andrew

I will try this and update you the status.
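
For anyone landing here later, the startup.conf stanza this amounts to is
roughly the following -- both plugins have to be disabled together, since
abf links against acl:

plugins {
  plugin acl_plugin.so { disable }
  plugin abf_plugin.so { disable }
}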

Thanks,
Chetan

On Thu, Mar 14, 2019 at 3:38 PM Andrew 👽 Yourtchenko 
wrote:

> hi Chetan,
>
> ABF plugin uses ACL plugin, so you need to disable also abf_plugin.so.
>
> --a
>
> On 3/14/19, chetan bhasin  wrote:
> > Hi,
> >
> > I want to disable acl plugin ,but VPP is not coming up without ACL.
> >
> > Is there any hard dependency with ACl , I want to decrease vsz of VPP
> > process by removing ACL lugin,
> >
> > Can anybody please suggest.
> >
> > Thanks,
> > Chetan
> >
>


[vpp-dev] I want to disable acl plugin but VPP is not coming up

2019-03-14 Thread chetan bhasin
Hi,

I want to disable the acl plugin, but VPP is not coming up without ACL.

Is there any hard dependency on ACL? I want to decrease the VSZ of the VPP
process by removing the ACL plugin.

Can anybody please suggest.

Thanks,
Chetan


Re: [vpp-dev] Getting invalid address while doing pool_alloc_aligned with release build

2019-03-07 Thread chetan bhasin
Hello Everyone,

I found a document where it is mentioned that:

Setting the main heap size to 4GB or more requires recompilation of the
entire system with CLIB_VEC64 > 0. See …/clib/clib/vec_bootstrap.h.

Can anybody please suggest the best way of setting CLIB_VEC64 in the vpp 18.01
release?
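
For context, a sketch of what the knob changes (the typedef below is
illustrative only, not verbatim from vec_bootstrap.h): with CLIB_VEC64 > 0
the vector length/offset arithmetic widens from 32 to 64 bits, which is what
lifts the 4GB heap ceiling. Since every translation unit must agree on the
width, the define has to go into the global CFLAGS of a full rebuild rather
than into any single file.

/* illustrative only -- see src/vppinfra/vec_bootstrap.h for the real code */
#if CLIB_VEC64 > 0
typedef u64 clib_vec_size_t;   /* hypothetical name */
#else
typedef u32 clib_vec_size_t;
#endif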

Thanks,
Chetan Bhasin

On Thu, Mar 7, 2019 at 12:43 PM chetan bhasin 
wrote:

> Hello everyone,
>
> I am getting invalid address while I am doing "pool_alloc_aligned" of 1 M
> sessions for 10 workers on release build.
>
> But when I do the same with debug build , Process getting crash at below
> ASSERT
>
> always_inline mheap_elt_t *
> mheap_elt_at_uoffset (void *v, uword uo)
> {
>   ASSERT (mheap_offset_is_valid (v, uo));   <-- asserts here
>   return (mheap_elt_t *) (v + uo - STRUCT_OFFSET_OF (mheap_elt_t,
> user_data));
> }
>
> Query 1 : If we get invalid address in release build then application is
> crashing at random places because of heap corruption. What's the best way
> to fix this?
> Query 2 : I have increase the "heapsize from 10G to 40G" , still facing
> the same issue, is it because of low memory or issue is somewhere else?
>
> Thanks,
> Chetan Bhasin
>
>


[vpp-dev] Getting invalid address while doing pool_alloc_aligned with release build

2019-03-06 Thread chetan bhasin
Hello everyone,

I am getting invalid address while I am doing "pool_alloc_aligned" of 1 M
sessions for 10 workers on release build.

But when I do the same with a debug build, the process crashes at the below
ASSERT:

always_inline mheap_elt_t *
mheap_elt_at_uoffset (void *v, uword uo)
{
  ASSERT (mheap_offset_is_valid (v, uo));   <-- asserts here
  return (mheap_elt_t *) (v + uo - STRUCT_OFFSET_OF (mheap_elt_t,
                                                     user_data));
}

Query 1: If we get an invalid address in a release build, the application
crashes at random places because of heap corruption. What's the best way
to fix this?
Query 2: I have increased the heapsize from 10G to 40G and am still facing
the same issue; is it because of low memory, or is the issue somewhere else?

Thanks,
Chetan Bhasin


Re: [vpp-dev] VPP doesn't seem to recognise 25GbE link speed NICs logging warning: (18.01)

2019-02-23 Thread chetan bhasin
Thanks Damjan for the response.

On Fri, Feb 22, 2019 at 9:50 PM Damjan Marion  wrote:

>
>
> On 22 Feb 2019, at 16:51, chetan bhasin 
> wrote:
>
> Hi Damjan,
>
> I need your guidance.
>
> I am working on vpp 18.01 .
>
> VPP doesn't seem to recognise 25GbE  as per the link speed NICs logging
> warning.
>
> Query1 Does it have any impact on the packet processing ?
>
>
> likely not, it is just cosmetic...
>
>
> I found a change-list committed by you , is it safe to back-merge  only
> this change to vpp18.01?
>
> commit 7c748bbe40f102f15c70bc33ed491be5283ed69a
> Author: Damjan Marion 
> Date: Sun Feb 25 22:55:03 2018 +0100
>
> vnet: add 25G interface speed flag
>
> Change-Id: I1d3ede2b043e1fd4abc54f540bb1d3ac9863016e
> Signed-off-by: Damjan Marion 
>
> Query 2 : If its safe to back-merge , Do I need to take any other changes
> which have any link with 25G card handling ?
>
>
> No idea, i suggest moving to newer vpp version
>
> --
> Damjan
>
>


[vpp-dev] VPP doesn't seem to recognise 25GbE link speed NICs logging warning: (18.01)

2019-02-22 Thread chetan bhasin
Hi Damjan,

I need your guidance.

I am working on vpp 18.01.

VPP doesn't seem to recognise 25GbE link-speed NICs and logs a warning.

Query 1: Does it have any impact on packet processing?

I found a change-list committed by you; is it safe to back-merge only
this change to vpp 18.01?

commit 7c748bbe40f102f15c70bc33ed491be5283ed69a
Author: Damjan Marion 
Date: Sun Feb 25 22:55:03 2018 +0100

vnet: add 25G interface speed flag

Change-Id: I1d3ede2b043e1fd4abc54f540bb1d3ac9863016e
Signed-off-by: Damjan Marion 
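
For orientation, a rough sketch of what the back-merge surface looks like
(reconstructed from memory, not the verbatim diff, so do check the actual
change): interface.h grows a 25G speed flag, and the dpdk plugin maps DPDK's
25G link speed onto it when refreshing link state, roughly:

switch (link.link_speed)
  {
  case ETH_SPEED_NUM_25G:
    hw_flags |= VNET_HW_INTERFACE_FLAG_SPEED_25G; /* the flag 7c748bb adds */
    break;
  /* ... existing 10G/40G/100G cases unchanged ... */
  }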

Query 2: If it's safe to back-merge, do I need to take any other changes
related to 25G card handling?

Your assistance is much appreciated!

Thanks,
Chetan Bhasin


Re: [vpp-dev] Getting core in vec_resize (vpp 18.01)

2019-02-19 Thread chetan bhasin
Hi Damjan, Dave,

I have tried running the mheap-validation CLI "test heap-validate now" and
it leads to a crash; that means the mheap is corrupted while the system is
coming up.

Then I created an mheap validation function and put it in various places,
just to check which code leg is causing the mheap corruption.

I found that the below code, which is called for 12 workers, is causing the
issue once fm->fp_per_worker_sessions is 1M, but it works fine when we set
it to 524288.

mheap_validation_check ();
pool_alloc_aligned (wk->fp_sessions_pool, fm->fp_per_worker_sessions,
                    CLIB_CACHE_LINE_BYTES);
mheap_validation_check ();   /* --> panic here */
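
For anyone wanting to reproduce this, a minimal sketch of such a validation
cookie against the 18.01-era mheap allocator (assuming vppinfra's
mheap_validate (), which walks the heap and asserts on inconsistencies):

#include <vppinfra/mem.h>
#include <vppinfra/mheap.h>

static void
mheap_validation_check (void)
{
  /* validate whichever heap the calling thread currently uses */
  mheap_validate (clib_mem_get_heap ());
}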

Any suggestion/advice is really helpful.


Thanks,
Chetan Bhasin

On Tue, Jan 29, 2019 at 5:35 PM Damjan Marion  wrote:

> Please search this mailing list archive, Dave provided some hints some
> time ago
>
> 90M is not terribly high, but it can also be victim of something else
> holding memory.
>
>
> On 29 Jan 2019, at 12:54, chetan bhasin 
> wrote:
>
> Hi Damjan,
>
> Thanks for the reply.
>
> what should be a typical way of debugging a corrupt vector pointer eg. can
> we set a watchpoint on some field in vector header which will most
> likelygetting disturbed so that we can nab who is corrupting the vector.
>
> With 1M entries do you think 90M is an issue.
>
>
> Clearly we have a lurking bug somewhere.
>
> Thanks,
> Chetan Bhasin
>
>
> On Tue, Jan 29, 2019, 16:53 Damjan Marion 
>>
>> typically this happens when you run out of memory / main heap size or you
>> have corrupted vector pointer..
>>
>> It will be easier to read your traceback if it is captured with debug
>> image, but according to frame 11, your vector is already 90MB big.
>> Is this expected to be?
>>
>>
>> On 29 Jan 2019, at 11:31, chetan bhasin 
>> wrote:
>>
>> Hello Everyone, I know 18.01 is not supported now , but just want to
>> understand what could be the reason for the below crash, we are adding
>> entries in pool using pool_get_alligned which is causing vec_resize. This
>> issue comes when reaches around 1M entries. Whether it is due to limited
>> memory or some memory corruption or something else? Core was generated by 
>> `bin/vpp
>> -c co'.
>> Program terminated with signal 6, Aborted.
>> #0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at
>> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
>> 56return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
>> Missing separate debuginfos, use: debuginfo-install
>> OPWVmepCR-7.0-el7.x86_64
>> (gdb) bt
>> #0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at
>> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
>> #1  0x2ab5340298f8 in __GI_abort () at abort.c:90
>> #2  0x00405ea9 in os_panic () at
>> /bfs-build/build-area.42/builds/LinuxNBngp_7.X_RH7/2019-01-07-2044/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:266
>> #3  0x2ab53213aad9 in unix_signal_handler (signum=,
>> si=, uc=)
>> at vpp/vpp_1801/build-data/../src/vlib/unix/main.c:126
>> #4  
>> #5  _mm_storeu_si128 (__B=..., __P=) at
>> /usr/lib/gcc/x86_64-redhat-linux/4.8.5/include/emmintrin.h:702
>> #6  clib_mov16 (src=, dst=)
>> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:60
>> #7  clib_mov32 (src=, dst=)
>> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:66
>> #8  clib_mov64 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
>> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:74
>> #9  clib_mov128 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
>> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:80
>> #10 clib_mov256 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
>> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:87
>> #11 clib_memcpy (n=90646888, src=0x2ab62d1b04e0, dst=0x2ab5426e1fe0)
>> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:325
>> #12 vec_resize_allocate_memory (v=,
>> length_increment=length_increment@entry=1, data_bytes=,
>> header_bytes=, header_bytes@entry=48,
>> data_align=data_align@entry=64) at
>> vpp/vpp_1801/build-data/../src/vppinfra/vec.c:95
>> #13 0x2ab7b74a61c1 in _vec_resize (data_align=64, header_bytes=48,
>> data_bytes=, length_increment=1, v=)
>> at include/vppinfra/vec.h:142
>> #14 xxx_allocate_flow (fm=0x2ab7b76c8fc0 )
>> at vpp/plugins/src/fastpath/fastpath.c:1502
>>
>> Regards, Chetan Bhasin

Re: [vpp-dev] Issue coming with memif

2019-02-14 Thread chetan bhasin
Thanks Damjan for looking into this.

I also reviewed the code, but if that were the case then the app would not
work properly even the first time.

Thanks,
Chetan Bhasin

On Thu, Feb 14, 2019 at 9:33 PM Damjan Marion  wrote:

>
>
> On 14 Feb 2019, at 07:54, chetan bhasin 
> wrote:
>
> Hi,
>
> We are using vpp 18.0.1 . We have created a memif client app , which is
> connecting to vpp via memif and sending UDP packet .
>
> One of our plugin is receiving that packet, processing it and generating
> an UDP response.
>
> First time it is works fine , when we stop and re-start our client app ,
> we are getting crash at vpp.
>
> Please provide hint/suggestion
>
> Back-trace for the same is as below -
> (gdb) bt
> #0  0x2b8d3ba61207 in raise () from /lib64/libc.so.6
> #1  0x2b8d3ba628f8 in abort () from /lib64/libc.so.6
> #2  0x00406a29 in os_panic ()
> at vpp_1801/build-data/../src/vpp/vnet/main.c:266
> #3  0x2b8d3ae2538b in debugger ()
> at vpp_1801/build-data/../src/vppinfra/error.c:84
> #4  0x2b8d3ae25792 in _clib_error (how_to_die=2, function_name=0x0,
> line_number=0,
> fmt=0x2b8d3a154290 "%s:%d (%s) assertion `%s' fails")
> at vpp_1801/build-data/../src/vppinfra/error.c:143
> *#5  0x2b8d3986416d in vlib_buffer_advance (b=0x2ac0, l=14)*
> *at vpp_1801/build-data/../src/vlib/buffer.h:210*
> *Here packet is blank*
>
> *210   ASSERT (b->current_length >= l);*
>
> *(gdb) p (b->current_length)*
>
> *$8 = 0*
>
> *(gdb) p *b*
>
> *$9 = {cacheline0 = 0x2ac0 "", template_start = 0x2ac0 "",
> current_data = 0, current_length = 0, flags = 262144,*
>
> *  template_end = 0x2ac8 "", next_buffer = 0, error = 0,
> current_config_index = 0, feature_arc_index = 0 '\000', n_add_refs = 0
> '\000',*
>
> *  buffer_pool_index = 0 '\000', dont_waste_me = "", opaque = {0, 0, 0, 0,
> 2864712168, 10922, 7824, 1, 4072669248, 7},*
>
> *  cacheline1 = 0x2ac00040 "@\001\300\252\252*", trace_index =
> 2864709952, recycle_count = 10922, total_length_not_including_first_buffer
> = 4072669504,*
>
> *  align_pad = 7, opaque2 = {65664, 1, 0, 0, 0, 60, 60, 0, 0, 142606336,
> 0, 0}, cacheline2 = 0x2ac00080 "",*
>
> *  pre_data = "\000\000\000\000\000\000\000\000\300\276O\003\001", '\000'
> , "\200", '\000' ,
> "<\000\000\021\006\300\000\000\000\000\000@\031\000\000\000\000\000\000\000\000\000\001\000\000\000\004\000\000\000\000\000\016",
> '\000' , data = 0x2ac00100 ""}*
>
>
>
> #6  0x2b8d398671ab in ethernet_input_inline (vm=0x2b8d3d0650e0,
> node=0x2b8d3cc25c8c, from_frame=0x2b8d3cc3b1c0,
> variant=ETHERNET_INPUT_VARIANT_ETHERNET)
> at vpp_1801/build-data/../src/vnet/ethernet/node.c:457
> #7  0x2b8d39868252 in ethernet_input (vm=0x2b8d3d0650e0,
> node=0x2b8d3cc25c8c, from_frame=0x2b8d3cc3b1c0)
> at vpp_1801/build-data/../src/vnet/ethernet/node.c:796
> #8  0x2b8d394b58e3 in dispatch_node (vm=0x2b8d3d0650e0,
> node=0x2b8d3cc25c8c, type=VLIB_NODE_TYPE_INTERNAL,
> dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x2b8d3cc3b1c0,
> last_time_stamp=14871127365603654)
> at vpp_1801/build-data/../src/vlib/main.c:988
> #9  0x2b8d394b5ec6 in dispatch_pending_node (vm=0x2b8d3d0650e0,
> pending_frame_index=0,
> last_time_stamp=14871127365603654)
> at vpp_1801/build-data/../src/vlib/main.c:1138
> #10 0x2b8d394b80d7 in vlib_main_or_worker_loop (vm=0x2b8d3d0650e0,
> is_main=0)
> at vpp_1801/build-data/../src/vlib/main.c:1609
> #11 0x2b8d394b81c8 in vlib_worker_loop (vm=0x2b8d3d0650e0)
> at vpp_1801/build-data/../src/vlib/main.c:1634
> #12 0x2b8d394faf09 in vlib_worker_thread_fn (arg=0x2b8d3ddec280)
> at vpp/vpp_1801/build-data/../src/vlib/threads.c:1744
> #13 0x2b8d3ae4a7ac in clib_calljmp ()
> at vpp_1801/build-data/../src/vppinfra/longjmp.S:110
> #14 0x2b8f50924d40 in ?? ()
> ---Type  to continue, or q  to quit---
> #15 0x2b8d394f5dc3 in vlib_worker_thread_bootstrap_fn
> (arg=0x2b8d3ddec280)
>
>
> Looks like your app is not setting packet length correctly. Packet cannot
> be 0 length...
>
>
> --
> Damjan
>
>
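
On the off chance it helps: a minimal libmemif tx sketch showing where the
length has to be set before memif_tx_burst (function names from
extras/libmemif; conn, frame and frame_len are hypothetical):

memif_buffer_t buf;
uint16_t allocated = 0, tx = 0;

/* allocate one buffer on queue 0 of an established connection */
memif_buffer_alloc (conn, 0, &buf, 1, &allocated, 2048);

/* copy the frame in and -- crucially -- set its length */
memcpy (buf.data, frame, frame_len);
buf.len = frame_len;

memif_tx_burst (conn, 0, &buf, 1, &tx);

A buffer queued with buf.len == 0 arrives on the VPP side exactly as the
current_length == 0 buffer in the assertion above.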


[vpp-dev] Issue coming with memif

2019-02-14 Thread chetan bhasin
Hi,

We are using vpp 18.01. We have created a memif client app, which
connects to vpp via memif and sends a UDP packet.

One of our plugins receives that packet, processes it and generates a
UDP response.

The first time it works fine; when we stop and re-start our client app, we
get a crash in vpp.

Please provide a hint/suggestion.

Back-trace for the same is as below -

(gdb) bt

#0  0x2b8d3ba61207 in raise () from /lib64/libc.so.6

#1  0x2b8d3ba628f8 in abort () from /lib64/libc.so.6

#2  0x00406a29 in os_panic ()

at vpp_1801/build-data/../src/vpp/vnet/main.c:266

#3  0x2b8d3ae2538b in debugger ()

at vpp_1801/build-data/../src/vppinfra/error.c:84

#4  0x2b8d3ae25792 in _clib_error (how_to_die=2, function_name=0x0,
line_number=0,

fmt=0x2b8d3a154290 "%s:%d (%s) assertion `%s' fails")

at vpp_1801/build-data/../src/vppinfra/error.c:143

#5  0x2b8d3986416d in vlib_buffer_advance (b=0x2ac0, l=14)
    at vpp_1801/build-data/../src/vlib/buffer.h:210

Here the packet is blank:

210   ASSERT (b->current_length >= l);

(gdb) p (b->current_length)
$8 = 0

(gdb) p *b
$9 = {cacheline0 = 0x2ac0 "", template_start = 0x2ac0 "",
  current_data = 0, current_length = 0, flags = 262144,
  template_end = 0x2ac8 "", next_buffer = 0, error = 0,
  current_config_index = 0, feature_arc_index = 0 '\000', n_add_refs = 0 '\000',
  buffer_pool_index = 0 '\000', dont_waste_me = "", opaque = {0, 0, 0, 0,
  2864712168, 10922, 7824, 1, 4072669248, 7},
  cacheline1 = 0x2ac00040 "@\001\300\252\252*", trace_index = 2864709952,
  recycle_count = 10922, total_length_not_including_first_buffer = 4072669504,
  align_pad = 7, opaque2 = {65664, 1, 0, 0, 0, 60, 60, 0, 0, 142606336, 0, 0},
  cacheline2 = 0x2ac00080 "",
  pre_data = "\000\000\000\000\000\000\000\000\300\276O\003\001", '\000'
  , "\200", '\000' ,
  "<\000\000\021\006\300\000\000\000\000\000@\031\000\000\000\000\000\000\000\000\000\001\000\000\000\004\000\000\000\000\000\016",
  '\000' , data = 0x2ac00100 ""}




#6  0x2b8d398671ab in ethernet_input_inline (vm=0x2b8d3d0650e0,
node=0x2b8d3cc25c8c, from_frame=0x2b8d3cc3b1c0,

variant=ETHERNET_INPUT_VARIANT_ETHERNET)

at vpp_1801/build-data/../src/vnet/ethernet/node.c:457

#7  0x2b8d39868252 in ethernet_input (vm=0x2b8d3d0650e0,
node=0x2b8d3cc25c8c, from_frame=0x2b8d3cc3b1c0)

at vpp_1801/build-data/../src/vnet/ethernet/node.c:796

#8  0x2b8d394b58e3 in dispatch_node (vm=0x2b8d3d0650e0,
node=0x2b8d3cc25c8c, type=VLIB_NODE_TYPE_INTERNAL,

dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x2b8d3cc3b1c0,
last_time_stamp=14871127365603654)

at vpp_1801/build-data/../src/vlib/main.c:988

#9  0x2b8d394b5ec6 in dispatch_pending_node (vm=0x2b8d3d0650e0,
pending_frame_index=0,

last_time_stamp=14871127365603654)

at vpp_1801/build-data/../src/vlib/main.c:1138

#10 0x2b8d394b80d7 in vlib_main_or_worker_loop (vm=0x2b8d3d0650e0,
is_main=0)

at vpp_1801/build-data/../src/vlib/main.c:1609

#11 0x2b8d394b81c8 in vlib_worker_loop (vm=0x2b8d3d0650e0)

at vpp_1801/build-data/../src/vlib/main.c:1634

#12 0x2b8d394faf09 in vlib_worker_thread_fn (arg=0x2b8d3ddec280)

at vpp/vpp_1801/build-data/../src/vlib/threads.c:1744

#13 0x2b8d3ae4a7ac in clib_calljmp ()

at vpp_1801/build-data/../src/vppinfra/longjmp.S:110

#14 0x2b8f50924d40 in ?? ()


#15 0x2b8d394f5dc3 in vlib_worker_thread_bootstrap_fn
(arg=0x2b8d3ddec280)
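
For reference, the assertion that fires lives in src/vlib/buffer.h; a sketch
of the 18.01-era definition (paraphrased from memory):

always_inline void
vlib_buffer_advance (vlib_buffer_t * b, word l)
{
  ASSERT (b->current_length >= l);
  b->current_data += l;
  b->current_length -= l;
}

Any buffer handed to ethernet-input with current_length == 0 trips it as
soon as the node tries to advance past the 14-byte Ethernet header.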


Regards,

Chetan


Re: [vpp-dev] Want to subscribe Ipv6 Router Advertisement packets

2019-02-06 Thread chetan bhasin
Hi Ole,

After registering via the API, are RA packets to be consumed by our node, or
sent further down the line to any next node in VPP? (on your existing
thread)
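
From a quick read of the icmp6 input path (worth verifying on your tree),
registration simply makes your node the per-type next node, so your node's
own next-node choice decides consume vs. forward. A minimal registration
sketch, with my-ra-node purely hypothetical:

extern vlib_node_registration_t my_ra_node;   /* hypothetical node */

static clib_error_t *
my_ra_init (vlib_main_t * vm)
{
  /* steal ICMPv6 RAs from the default handler into our node */
  icmp6_register_type (vm, ICMP6_router_advertisement, my_ra_node.index);
  return 0;
}

VLIB_INIT_FUNCTION (my_ra_init);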

Thanks,
Chetan Bhasin




On Mon, Feb 4, 2019 at 3:57 PM Ole Troan  wrote:

> Chetan,
>
> > Is there a way by which I can get RA packets on my node. I am ready to
> register on any existing arc.
>
> Instead of having the IPv6 ND code in vnet/ip register, you can register
> for the RA type yourself.
>   icmp6_register_type (vm, ICMP6_router_advertisement,
>ip6_icmp_router_advertisement_node.index);
>
> Cheers,
> Ole
>
>


Re: [vpp-dev] Want to subscribe Ipv6 Router Advertisement packets

2019-02-04 Thread chetan bhasin
Hi,

Thanks for the reply.

Is there a way by which I can get RA packets on my node? I am ready to
register on any existing arc.

Thanks,
Chetan Bhasin

On Sat, Feb 2, 2019 at 1:18 AM Ole Troan  wrote:

> Chetan.
>
> >
> > I want to subscribe for Router Advertisement packets on my node,so what
> is the best way, eg. can we use the ip6-unicast arc of ip6-input and do a
> .runs_before of ip6-mfib-forward-lookup?
>
> You could do it via the API.
>
> /** \brief Register for ip6 router advertisement events
> @param client_index - opaque cookie to identify the sender
> @param context - sender context, to match reply w/ request
> @param enable_disable - 1 => register for events, 0 => cancel
> registration
> @param pid - sender's pid
> */
> autoreply define want_ip6_ra_events
> {
>   u32 client_index;
>   u32 context;
>   u8 enable_disable;
>   u32 pid;
> };
>
> Cheers,
> Ole
>
>


[vpp-dev] Want to subscribe Ipv6 Router Advertisement packets

2019-02-01 Thread chetan bhasin
Hello Everyone,

I want to subscribe to Router Advertisement packets on my node, so what is
the best way? E.g., can we use the ip6-unicast arc of ip6-input and do a
.runs_before of *ip6-mfib-forward-lookup*?
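
One hedged sketch of the feature-arc route (note: RAs are addressed to
ff02::1, so they would normally traverse the ip6-multicast arc rather than
ip6-unicast -- worth double-checking; my-ra-tap is a hypothetical node):

VNET_FEATURE_INIT (my_ra_tap, static) = {
  .arc_name = "ip6-multicast",
  .node_name = "my-ra-tap",
  .runs_before = VNET_FEATURES ("ip6-mfib-forward-lookup"),
};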

Thanks,
Chetan Bhasin


Re: [vpp-dev] Getting core in vec_resize (vpp 18.01)

2019-01-29 Thread chetan bhasin
Hi Damjan,


Thanks for the reply.


What would be a typical way of debugging a corrupt vector pointer? E.g., can
we set a watchpoint on some field in the vector header which will most
likely get disturbed, so that we can nab who is corrupting the vector?
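
One concrete thing to watch (a sketch based on the 18.01-era vppinfra
layout; double-check against your tree): the vector header sits immediately
below the user pointer, so a hardware watchpoint on its len field catches
whoever tramples it.

/* src/vppinfra/vec_bootstrap.h, roughly: */
typedef struct
{
  u32 len;              /* number of elements, NOT the allocated length */
  u8 vector_data[0];    /* the user pointer points here */
} vec_header_t;

/* in gdb, with p being the pool/vector user pointer (hypothetical):
   watch -l ((vec_header_t *) p - 1)->len */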


With 1M entries, do you think 90M is an issue?



Clearly we have a lurking bug somewhere.


Thanks,

Chetan Bhasin


On Tue, Jan 29, 2019, 16:53 Damjan Marion 
> typically this happens when you run out of memory / main heap size or you
> have corrupted vector pointer..
>
> It will be easier to read your traceback if it is captured with debug
> image, but according to frame 11, your vector is already 90MB big.
> Is this expected to be?
>
>
> On 29 Jan 2019, at 11:31, chetan bhasin 
> wrote:
>
> Hello Everyone, I know 18.01 is not supported now , but just want to
> understand what could be the reason for the below crash, we are adding
> entries in pool using pool_get_alligned which is causing vec_resize. This
> issue comes when reaches around 1M entries. Whether it is due to limited
> memory or some memory corruption or something else? Core was generated by 
> `bin/vpp
> -c co'.
> Program terminated with signal 6, Aborted.
> #0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
> 56return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
> Missing separate debuginfos, use: debuginfo-install
> OPWVmepCR-7.0-el7.x86_64
> (gdb) bt
> #0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
> #1  0x2ab5340298f8 in __GI_abort () at abort.c:90
> #2  0x00405ea9 in os_panic () at
> /bfs-build/build-area.42/builds/LinuxNBngp_7.X_RH7/2019-01-07-2044/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:266
> #3  0x2ab53213aad9 in unix_signal_handler (signum=,
> si=, uc=)
> at vpp/vpp_1801/build-data/../src/vlib/unix/main.c:126
> #4  
> #5  _mm_storeu_si128 (__B=..., __P=) at
> /usr/lib/gcc/x86_64-redhat-linux/4.8.5/include/emmintrin.h:702
> #6  clib_mov16 (src=, dst=)
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:60
> #7  clib_mov32 (src=, dst=)
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:66
> #8  clib_mov64 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:74
> #9  clib_mov128 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:80
> #10 clib_mov256 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:87
> #11 clib_memcpy (n=90646888, src=0x2ab62d1b04e0, dst=0x2ab5426e1fe0)
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:325
> #12 vec_resize_allocate_memory (v=,
> length_increment=length_increment@entry=1, data_bytes=,
> header_bytes=, header_bytes@entry=48,
> data_align=data_align@entry=64) at
> vpp/vpp_1801/build-data/../src/vppinfra/vec.c:95
> #13 0x2ab7b74a61c1 in _vec_resize (data_align=64, header_bytes=48,
> data_bytes=, length_increment=1, v=)
> at include/vppinfra/vec.h:142
> #14 xxx_allocate_flow (fm=0x2ab7b76c8fc0 )
> at vpp/plugins/src/fastpath/fastpath.c:1502
>
> Regards, Chetan Bhasin
>
>
> --
> Damjan
>
>


[vpp-dev] Getting core in vec_resize (vpp 18.01)

2019-01-29 Thread chetan bhasin
Hello Everyone, I know 18.01 is not supported now, but I just want to
understand what could be the reason for the below crash. We are adding
entries in a pool using pool_get_aligned, which is causing vec_resize. The
issue comes when it reaches around 1M entries. Is it due to limited memory,
some memory corruption, or something else?

Core was generated by `bin/vpp -c co'.
Program terminated with signal 6, Aborted.
#0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
56return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
Missing separate debuginfos, use: debuginfo-install OPWVmepCR-7.0-el7.x86_64
(gdb) bt
#0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x2ab5340298f8 in __GI_abort () at abort.c:90
#2  0x00405ea9 in os_panic () at
/bfs-build/build-area.42/builds/LinuxNBngp_7.X_RH7/2019-01-07-2044/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:266
#3  0x2ab53213aad9 in unix_signal_handler (signum=,
si=, uc=)
at vpp/vpp_1801/build-data/../src/vlib/unix/main.c:126
#4  <signal handler called>
#5  _mm_storeu_si128 (__B=..., __P=) at
/usr/lib/gcc/x86_64-redhat-linux/4.8.5/include/emmintrin.h:702
#6  clib_mov16 (src=, dst=)
at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:60
#7  clib_mov32 (src=, dst=)
at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:66
#8  clib_mov64 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:74
#9  clib_mov128 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:80
#10 clib_mov256 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:87
#11 clib_memcpy (n=90646888, src=0x2ab62d1b04e0, dst=0x2ab5426e1fe0)
at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:325
#12 vec_resize_allocate_memory (v=,
length_increment=length_increment@entry=1, data_bytes=,
header_bytes=, header_bytes@entry=48,
data_align=data_align@entry=64) at
vpp/vpp_1801/build-data/../src/vppinfra/vec.c:95
#13 0x2ab7b74a61c1 in _vec_resize (data_align=64, header_bytes=48,
data_bytes=, length_increment=1, v=)
at include/vppinfra/vec.h:142
#14 xxx_allocate_flow (fm=0x2ab7b76c8fc0 )
at vpp/plugins/src/fastpath/fastpath.c:1502

Regards,
Chetan Bhasin
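
PS: one mitigation sketch, in case it helps. Frames #11/#12 above show
pool_get_aligned() triggering a vec_resize() that memcpys a ~90MB vector;
pre-sizing the pool once at init avoids that entirely. The type names, the
flows member, and the 2M bound below are assumptions for illustration, not
code from the actual plugin:

#include <vppinfra/pool.h>
#include <vppinfra/cache.h>

/* Stand-ins for the plugin's real types, only to make the sketch
 * self-contained. */
typedef struct { u64 key; u64 data; } flow_entry_t;
typedef struct { flow_entry_t *flows; } fastpath_main_t;

#define FLOW_POOL_MAX (2 * 1024 * 1024) /* assumed worst case: 2M flows */

static void
flow_pool_preallocate (fastpath_main_t * fm)
{
  /* Reserve room for FLOW_POOL_MAX elements up front, cache-line
   * aligned, so later pool_get_aligned() calls never trigger a
   * whole-pool vec_resize()/copy. */
  pool_alloc_aligned (fm->flows, FLOW_POOL_MAX, CLIB_CACHE_LINE_BYTES);
}

If resizing cannot be avoided, the other knob is the main heap size
(heapsize in startup.conf): during a resize, mheap briefly needs room for
both the old and the new copy of the vector.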


[vpp-dev] I have back-ported changes from vpp 18.10 to vpp 18.01 (0610039fd06c760924fb92d0fc7b4d3e0ffeb8e3)

2019-01-04 Thread chetan bhasin
Hi Dave,

I have back-ported changes committed in VPP 18.10 to 18.01.

Change details are given below:

commit 0610039fd06c760924fb92d0fc7b4d3e0ffeb8e3
Author: Dave Barach 
Date: Thu Jul 12 13:00:47 2018 -0400

Now we are facing a crash at init time. It looks like firing vppctl while
VPP is coming up causes the crash below; it is not always reproducible. Do
you have any idea about this issue?


Crashing at (file src/vlib/main.c):

569 void
570 vlib_node_sync_stats (vlib_main_t * vm, vlib_node_t * n)
571 {
572   vlib_node_runtime_t *rt;
573
574   if (n->type == VLIB_NODE_TYPE_PROCESS)
575 {
576   /* Nothing to do for PROCESS nodes except in main thread */
577   if (vm != &vlib_global_main)
578 return;
579
580   vlib_process_t *p = vlib_get_process_from_node (vm, n);
581   n->stats_total.suspends += p->n_suspends;
582   p->n_suspends = 0;
583   rt = &p->node_runtime;
584 }
585   else
586 rt =
587   vec_elt_at_index (vm->node_main.nodes_by_type[n->type],
588 n->runtime_index);
589
590   vlib_node_runtime_sync_stats (vm, rt, 0, 0, 0);
591
592   /* Sync up runtime next frame vector counters with main node structure. */
593   {
594 vlib_next_frame_t *nf;
595 uword i;
596 for (i = 0; i < rt->n_next_nodes; i++)
597   {
598 nf = vlib_node_runtime_get_next_frame (vm, rt, i);

599 vec_elt (n->n_vectors_by_next_node, i) +=
600   nf->vectors_since_last_overflow;
601 nf->vectors_since_last_overflow = 0;
602   }
603   }
604 }
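
Given the faulting line is 599, one thing worth checking in the core is
whether rt->n_next_nodes exceeds vec_len (n->n_vectors_by_next_node): if
vppctl connects while VPP is still initializing, the barrier release can
walk a runtime whose next-node count the main node structure has not
caught up with yet. A defensive sketch of the loop (my guess at a
workaround, not the upstream fix):

  for (i = 0; i < rt->n_next_nodes; i++)
    {
      nf = vlib_node_runtime_get_next_frame (vm, rt, i);
      /* Skip slots the main node structure does not (yet) know about,
       * instead of letting vec_elt() index past the end. */
      if (i < vec_len (n->n_vectors_by_next_node))
        vec_elt (n->n_vectors_by_next_node, i) +=
          nf->vectors_since_last_overflow;
      nf->vectors_since_last_overflow = 0;
    }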

The backtrace is as below:

(gdb) info thr
  Id   Target Id                          Frame
  9    Thread 0x2b7d72188700 (LWP 63125)  0x2b7825278eed in nanosleep () from /lib64/libpthread.so.0
  8    Thread 0x2b7d71381700 (LWP 63105)  0x2b7825789113 in epoll_wait () from /lib64/libc.so.6
  7    Thread 0x2b7d71f87700 (LWP 63111)  vlib_worker_thread_barrier_check () at src/vlib/threads.h:403
  6    Thread 0x2b7d71d86700 (LWP 63110)  vlib_worker_thread_barrier_check () at src/vlib/threads.h:403
  5    Thread 0x2b7d71b85700 (LWP 63109)  vlib_worker_thread_barrier_check () at src/vlib/threads.h:403
  4    Thread 0x2b7d71984700 (LWP 63108)  vlib_worker_thread_barrier_check () at src/vlib/threads.h:403
  3    Thread 0x2b7d71783700 (LWP 63107)  vlib_worker_thread_barrier_check () at src/vlib/threads.h:403
  2    Thread 0x2b7d71582700 (LWP 63106)  vlib_worker_thread_barrier_check () at src/vlib/threads.h:403
* 1    Thread 0x2b78231439c0 (LWP 62940)  0x2b78256c0207 in raise () from /lib64/libc.so.6
#0  0x2b78256c0207 in raise () from /lib64/libc.so.6
#1  0x2b78256c18f8 in abort () from /lib64/libc.so.6
#2  0x00405ea9 in os_panic () at src/vpp/vnet/main.c:266
#3  0x2b78237d2ad9 in unix_signal_handler (signum=<optimized out>,
    si=<optimized out>, uc=<optimized out>) at src/vlib/unix/main.c:126
#4  <signal handler called>
#5  0x2b782379ab97 in vlib_node_sync_stats (vm=vm@entry=0x2b78239ed240,
    n=0x2b7826196468) at src/vlib/main.c:599
#6  0x2b78237bda3a in worker_thread_node_runtime_update_internal ()
    at src/vlib/threads.c:1046
#7  vlib_worker_thread_barrier_release (vm=vm@entry=0x2b78239ed240)
    at src/vlib/threads.c:1528
#8  0x2b78237c8463 in unix_cli_file_add (name=,
name@entry=0x2b7827082494 "local:10", fd=49,
cm=0x2b78239ed0e0 , cm=0x2b78239ed0e0 )
src/vlib/unix/cli.c:2607
#9  0x2b78237cd373 in unix_cli_listen_read_ready (uf=)
src/vlib/unix/cli.c:2659
#10 0x2b78237d21f0 in linux_epoll_input (vm=,
node=, frame=)
src/vlib/unix/input.c:203
#11 0x2b782379bc16 in dispatch_node (last_time_stamp=12704185179896535,
frame=0x0, dispatch_state=VLIB_NODE_STATE_POLLING,
type=VLIB_NODE_TYPE_PRE_INPUT, node=0x2b78288142c0, vm=0x2b78239ed240
)
src/vlib/main.c:988
#12 vlib_main_or_worker_loop (is_main=1, vm=0x2b78239ed240
)
src/vlib/main.c:1498
#13 vlib_main_loop (vm=0x2b78239ed240 )
src/vlib/main.c:1628
#14 vlib_main (vm=vm@entry=0x2b78239ed240 ,
input=input@entry=0x2b782858afa0)
src/vlib/main.c:1783
#15 0x2b78237d2c13 in thread0 (arg=47794993680960)
src/vlib/unix/main.c:548
#16 0x00002b7824b07988 in clib_calljmp ()

Thanks,
Chetan Bhasin


Re: [vpp-dev] VPP/DPDK performance with Madvise (Transparent Huge pages)

2019-01-04 Thread chetan bhasin
Hi Damjan,

I want to ensure that VPP uses THP while other Linux applications don't.

Thanks,
Chetan Bhasin
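
PS: for the archive, the split we are aiming for is kernel-side plus
process-side. Kernel-side, set the global THP policy to madvise
(echo madvise > /sys/kernel/mm/transparent_hugepage/enabled), so only
mappings that explicitly opt in are eligible and other applications are
untouched. Process-side, the opted-in range asks for huge pages itself; a
minimal C sketch of that call, where the base/size arguments are
placeholders and not real VPP symbols:

#include <sys/mman.h>

/* Ask the kernel to back [base, base + size) with transparent huge
 * pages. Under the "madvise" global policy, ranges without this call
 * are left alone. Returns 0 on success, -1 with errno set on failure. */
static int
request_thp (void *base, size_t size)
{
  return madvise (base, size, MADV_HUGEPAGE);
}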

On Thu, Jan 3, 2019 at 4:46 PM Damjan Marion  wrote:

>
>
> On 2 Jan 2019, at 16:28, chetan bhasin  wrote:
>
> Hi,
>
> We are using VPP 18.01. We have seen that when Transparent Hugepage is
> enabled in madvise mode, VPP does not take advantage of anonymous huge
> pages. Does anybody have any idea about this?
>
>
> Can you provide more details? What exactly did you do, and what exactly
> did you observe?
>
> --
> Damjan
>
>


[vpp-dev] VPP/DPDK performance with Madvise (Transparent Huge pages)

2019-01-02 Thread chetan bhasin
Hi,

We are using VPP 18.01. We have seen that when Transparent Hugepage is
enabled in madvise mode, VPP does not take advantage of anonymous huge
pages. Does anybody have any idea about this?

Thanks,
CB
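
PS: a quick way to check whether a running VPP actually got THP backing is
the per-mapping counters, e.g.

  grep AnonHugePages /proc/$(pidof vpp)/smaps | grep -v ' 0 kB'

Non-zero AnonHugePages entries mean those anonymous mappings are backed by
transparent huge pages. The command is a sketch; adjust the process name
to your setup.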


Re: [vpp-dev] Getting crash while running load on VPP18.01 for 6 hours

2018-11-20 Thread chetan bhasin
Hi Dave,

Thanks a lot.

One more query: what is the purpose of null_node, and in what scenario is
null_node hit?

Thanks,
Chetan Bhasin
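
PS: for anyone who finds this thread later: node index 0 is a catch-all
sink. Frames whose next index no longer maps to a live node get dispatched
there; it counts them as "blackholed packets", frees the buffers, and
frees the frame. From memory, the 18.01-era body is roughly the following
(check src/vlib/node.c in your tree before relying on it):

static uword
null_node_fn (vlib_main_t * vm,
              vlib_node_runtime_t * node, vlib_frame_t * frame)
{
  u16 n_vectors = frame->n_vectors;

  /* Count the orphaned packets, then free their buffers and the frame. */
  vlib_node_increment_counter (vm, node->node_index, 0, n_vectors);
  vlib_buffer_free (vm, vlib_frame_args (frame), n_vectors);
  vlib_frame_free (vm, node, frame);

  return n_vectors;
}

So reaching null_node under load usually means some node enqueued to a
stale next index; the panic in the quoted trace below is the allocator
failing inside vlib_frame_free, not the null node itself.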

On Tue, Nov 20, 2018 at 10:57 PM Dave Barach (dbarach) 
wrote:

> See
> https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code#Pushing_Code_with_git_review
>
>
>
> *From:* chetan bhasin 
> *Sent:* Tuesday, November 20, 2018 11:43 AM
> *To:* Dave Barach (dbarach) 
> *Subject:* Re: [vpp-dev] Getting crash while running load on VPP18.01 for
> 6 hours
>
>
>
> Thanks Dave!
>
>
>
> I will try with DEBUG too.
>
>
>
> Just want to understand the procedure for checking in patches; we have
> made several fixes in VPP and are planning to check them all in.
>
>
>
> Thanks,
>
> Chetan Bhasin
>
>
>
> On Tue, Nov 20, 2018, 18:02 Dave Barach (dbarach)  wrote:
>
> Several suggestions:
>
>
>
>- Try a debug image (PLATFORM=vpp TAG=vpp_debug) so the crash will be
>more enlightening
>- Switch to 18.10. 18.01 is no longer supported. We don’t use the
>mheap.c memory allocator anymore, and so on and so forth.
>- See https://wiki.fd.io/view/VPP/BugReports
>
>
>
>
>
> *From:* vpp-dev@lists.fd.io  *On Behalf Of *chetan
> bhasin
> *Sent:* Tuesday, November 20, 2018 5:31 AM
> *To:* vpp-dev@lists.fd.io
> *Subject:* [vpp-dev] Getting crash while running load on VPP18.01 for 6
> hours
>
>
>
> Hi Vpp-dev,
>
>
>
> We are facing issues while running load for ~6 hours and are getting the
> crash below.
>
> Your suggestions are really appreciated.
>
>
>
>
>
> #1  0x2b00b990e8f8 in __GI_abort () at abort.c:90
> #2  0x00405f23 in os_panic () at
> /bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:268
> #3  0x2b00b8d60710 in mheap_put (v=0x2b00ba3d8000, uoffset=2382207096)
> at
> /bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mheap.c:798
> #4  0x2b00b8d8959e in clib_mem_free (p=0x2b00c8ba84a0) at
> /bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mem.h:213
> #5  vec_resize_allocate_memory (v=,
> length_increment=length_increment@entry=1, data_bytes=,
> header_bytes=, header_bytes@entry=0,
> data_align=data_align@entry=4) at
> /bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/vec.c:96
> #6  0x2b00b79e899d in _vec_resize (data_align=,
> header_bytes=, data_bytes=,
> length_increment=, v=) at
> /nfs-bfs/workspace/build-data/../src/vppinfra/vec.h:142
> #7  get_frame_size_info (n_scalar_bytes=,
> n_vector_bytes=, nm=0x2b00c87a3160, nm=0x2b00c87a3160) at
> /nfs-bfs/workspace//build-data/../src/vlib/main.c:107
> #8  0x2b00b79e8d79 in vlib_frame_free (vm=vm@entry=0x2b00c87a3050,
> r=r@entry=0x2b00c86ca368, f=f@entry=0x2b014b2ecb80) at
> /nfs-bfs//vpp_1801/build-data/../src/vlib/main.c:221
> #9  0x2b00b79fe6e6 in null_node_fn (vm=0x2b00c87a3050,
> node=0x2b00c86ca368, frame=0x2b014b2ecb80) at
> /nfs-bfs/workspace/build-data/../src/vlib/node.c:512
>
>
>
> Thanks,
>
> Chetan
>
>


[vpp-dev] Getting crash while running load on VPP18.01 for 6 hours

2018-11-20 Thread chetan bhasin
Hi Vpp-dev,

We are facing issues while running load for ~6 hours and are getting the
crash below.

Your suggestions are really appreciated.


#1  0x2b00b990e8f8 in __GI_abort () at abort.c:90
#2  0x00405f23 in os_panic () at
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:268
#3  0x2b00b8d60710 in mheap_put (v=0x2b00ba3d8000, uoffset=2382207096)
at
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mheap.c:798
#4  0x2b00b8d8959e in clib_mem_free (p=0x2b00c8ba84a0) at
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mem.h:213
#5  vec_resize_allocate_memory (v=,
length_increment=length_increment@entry=1, data_bytes=,
header_bytes=, header_bytes@entry=0,
data_align=data_align@entry=4) at
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/vec.c:96
#6  0x2b00b79e899d in _vec_resize (data_align=,
header_bytes=, data_bytes=,
length_increment=, v=) at
/nfs-bfs/workspace/build-data/../src/vppinfra/vec.h:142
#7  get_frame_size_info (n_scalar_bytes=,
n_vector_bytes=, nm=0x2b00c87a3160, nm=0x2b00c87a3160) at
/nfs-bfs/workspace//build-data/../src/vlib/main.c:107
#8  0x2b00b79e8d79 in vlib_frame_free (vm=vm@entry=0x2b00c87a3050,
r=r@entry=0x2b00c86ca368, f=f@entry=0x2b014b2ecb80) at
/nfs-bfs//vpp_1801/build-data/../src/vlib/main.c:221
#9  0x2b00b79fe6e6 in null_node_fn (vm=0x2b00c87a3050,
node=0x2b00c86ca368, frame=0x2b014b2ecb80) at
/nfs-bfs/workspace/build-data/../src/vlib/node.c:512

Thanks,
Chetan
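
PS: one cheap thing to rule out before chasing corruption theories: the
trace dies inside the main-heap allocator (mheap_put under clib_mem_free)
while vlib_frame_free is recycling a frame, and whether that is heap
pressure after ~6 hours or a corrupted/double-freed object is hard to tell
from the trace alone. Two inexpensive experiments: first, grow the main
heap via startup.conf (the 4G figure below is a guess; size it to your own
load):

heapsize 4G

Second, run a debug image (TAG=vpp_debug), whose mheap validation should
trap a bad object header much closer to the real culprit. If the bigger
heap moves or removes the crash, it was heap pressure; if not, it points
at corruption.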


[vpp-dev] Getting crash in null_node after running load for ~6 hours (vpp 18.01)

2018-11-20 Thread chetan bhasin
Hi All,

We are facing issues while running load for ~6 hours and are getting the
crash below.

Your suggestions are really appreciated.


#1  0x2b00b990e8f8 in __GI_abort () at abort.c:90
#2  0x00405f23 in os_panic () at
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:268
#3  0x2b00b8d60710 in mheap_put (v=0x2b00ba3d8000, uoffset=2382207096)
at
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mheap.c:798
#4  0x2b00b8d8959e in clib_mem_free (p=0x2b00c8ba84a0) at
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mem.h:213
#5  vec_resize_allocate_memory (v=,
length_increment=length_increment@entry=1, data_bytes=,
header_bytes=, header_bytes@entry=0,
data_align=data_align@entry=4) at
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/vec.c:96
#6  0x2b00b79e899d in _vec_resize (data_align=,
header_bytes=, data_bytes=,
length_increment=, v=) at
/nfs-bfs/workspace/build-data/../src/vppinfra/vec.h:142
#7  get_frame_size_info (n_scalar_bytes=,
n_vector_bytes=, nm=0x2b00c87a3160, nm=0x2b00c87a3160) at
/nfs-bfs/workspace//build-data/../src/vlib/main.c:107
#8  0x2b00b79e8d79 in vlib_frame_free (vm=vm@entry=0x2b00c87a3050,
r=r@entry=0x2b00c86ca368, f=f@entry=0x2b014b2ecb80) at
/nfs-bfs//vpp_1801/build-data/../src/vlib/main.c:221
#9  0x2b00b79fe6e6 in null_node_fn (vm=0x2b00c87a3050,
node=0x2b00c86ca368, frame=0x2b014b2ecb80) at
/nfs-bfs/workspace/build-data/../src/vlib/node.c:512

Thanks,
Chetan Bhasin


Re: [vpp-dev] "Incompatible UPT version” error when running VPP v18.01 with DPDK v17.11 on VMWare with VMXNET3 interface ,ESXI Version 6.5/6.7

2018-10-03 Thread chetan bhasin
Hi,

We also faced the same issue. Setting "nopku" in grub solved our problem,
although we still need to assess its impact.

Thanks,
Chetan Bhasin
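
PS: for anyone applying this, the change on a RHEL 7 guest looks roughly
like the following (paths and commands assumed for grub2-style setups):

# /etc/default/grub: append nopku to the kernel command line
GRUB_CMDLINE_LINUX="... nopku"

# then regenerate the config and reboot
grub2-mkconfig -o /boot/grub2/grub.cfg

"nopku" disables x86 Memory Protection Keys; why that unblocks the vmxnet3
UPT version check on ESXi 6.5/6.7 is not obvious to us, so treat it as a
workaround rather than a fix.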

On Mon, Oct 1, 2018 at 11:48 PM steven luong via Lists.Fd.Io  wrote:

> DPDK is expecting UPT version > 0, and ESXi 6.5/6.7 seems to be returning
> UPT version 0 when queried, which is not a supported version. I am using
> ESXi 6.0 and it is working fine. You could try ESXi 6.0 to see if it
> helps.
>
>
>
> Steven
>
>
>
> *From: * on behalf of truring truring <
> trurin...@gmail.com>
> *Date: *Monday, October 1, 2018 at 10:12 AM
> *To: *"vpp-dev@lists.fd.io" 
> *Subject: *[vpp-dev] "Incompatible UPT version” error when running VPP
> v18.01 with DPDK v17.11 on VMWare with VMXNET3 interface ,ESXI Version
> 6.5/6.7
>
>
>
> Hi Everyone,
>
>
>
> We're trying to run VPP-18.01 with DPDK plugin in a guest machine running
> Red Hat 7.5. The host is ESXi version 6.5/6.7.
>
>
>
> The guest machine has a VMXNET3 interface; I am getting the following
> error while running VPP:
>
>
>
> PMD: eth_vmxnet3_dev_init():  >>
> PMD: eth_vmxnet3_dev_init(): Hardware version : 1
> PMD: eth_vmxnet3_dev_init(): Using device version 1
> PMD: eth_vmxnet3_dev_init(): UPT hardware version : 0
> PMD: eth_vmxnet3_dev_init(): Incompatible UPT version.
>
>
>
>
>
>
>
> Any help to resolve above issue would be greatly appreciated. Thanks!
>
>
>
>
>
>
>
> Regards
>
> Puneet

