[vpp-dev] Regarding DPDK's rte_timer api's usage with VPP

2023-03-26 Thread Prashant Upadhyaya
Hi,

I am bringing in some legacy code that worked with standalone DPDK and
converting it into a VPP plugin (VPP 22.10).
The legacy code uses the DPDK rte_timer APIs.
Now, as soon as my VPP plugin calls the DPDK API
rte_timer_subsystem_init, I get the following error from DPDK's EAL:

EAL: memzone_reserve_aligned_thread_unsafe(): Number of requested
memzone segments exceeds RTE_MAX_MEMZONE

Of course I could uproot the entire timer implementation in the legacy
code and replace it with one of the various alternatives, but it would
be great if I could somehow get past the issue reported above instead.
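
For what it is worth, vppinfra ships its own timer-wheel templates that can
stand in for rte_timer without touching EAL memzones. Below is a minimal
sketch assuming the 2t_1w_2048sl template; the tick size, handle values and
callback body are placeholders for illustration, not taken from this thread:

  #include <vppinfra/tw_timer_2t_1w_2048sl.h>

  static tw_timer_wheel_2t_1w_2048sl_t wheel;

  /* called with a vector of handles whose timers expired this tick */
  static void
  expired_cb (u32 *expired_handles)
  {
    u32 *h;
    vec_foreach (h, expired_handles)
      {
        /* look up the object encoded in *h and act on the expiry */
      }
  }

  static void
  timer_wheel_example (f64 now)
  {
    /* 100 us per tick, no cap on expirations per run */
    tw_timer_wheel_init_2t_1w_2048sl (&wheel, expired_cb, 100e-6, ~0);

    /* user id 3, timer id 0, fires 50 ticks (5 ms) from now */
    tw_timer_start_2t_1w_2048sl (&wheel, 3, 0, 50);

    /* call this periodically (e.g. from a polling node) */
    tw_timer_expire_timers_2t_1w_2048sl (&wheel, now);
  }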

If anybody has resolved a similar issue, please do advise.

Regards
-Prashant




Re: [vpp-dev] Crash in VPP22.06 in ip4_mtrie_16_lookup_step

2022-10-13 Thread Prashant Upadhyaya
Thanks Benoit.
I found the root cause of this in my plugin: I was using the handoff
functions incorrectly.

There is no problem in VPP. After fixing my plugin, the use case runs as
solidly in 22.06 as it did in 21.06.
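
For anyone else chasing a similar handoff problem, the worker hand-off path
generally has the shape sketched below. This is illustrative only; the
vlib_buffer_enqueue_to_thread() signature has changed across releases, so
check it against the tree you build with:

  #include <vlib/vlib.h>
  #include <vnet/vnet.h>

  /* created once at init time, e.g.
     my_fq_index = vlib_frame_queue_main_init (my_node.index, 0); */
  static u32 my_fq_index;

  static_always_inline u32
  handoff_to_workers (vlib_main_t *vm, vlib_node_runtime_t *node,
                      u32 *from, u16 *thread_indices, u32 n_packets)
  {
    /* thread_indices[i] must already hold the target worker for from[i];
       the return value is the number of buffers actually enqueued */
    return vlib_buffer_enqueue_to_thread (vm, node, my_fq_index, from,
                                          thread_indices, n_packets,
                                          1 /* drop on congestion */);
  }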

Regards
-Prashant


On Wed, 12 Oct 2022, 18:26 Benoit Ganne (bganne) via lists.fd.io,  wrote:

> I have not heard of anything like this.
> Can you try to reproduce with latest master? Do you have some proprietary
> plugins loaded also?
>
> Best
> ben
>
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Prashant
> > Upadhyaya
> > Sent: Wednesday, October 12, 2022 11:32
> > To: vpp-dev 
> > Subject: [vpp-dev] Crash in VPP22.06 in ip4_mtrie_16_lookup_step
> >
> > Hi,
> >
> > I am migrating from VPP21.06 where my usecase works without issues
> > overnight, but in VPP22.06 it gives the following crash in 7 to 8
> > minutes of run.
> > Just wondering if this is a known issue or if anybody else has seen this.
> > Further, when I run in VPP22.06 with a single worker thread, this
> > crash is not seen, but when I run with 2 worker threads, then this
> > crash is seen as below.
> > With VPP21.06 the crash is not seen regardless of the number of worker
> > threads.
> >
> > Thread 5 "vpp_wk_1" received signal SIGSEGV, Segmentation fault.
> > [Switching to Thread 0x7fff69156700 (LWP 5408)]
> > 0x778327f8 in ip4_mtrie_16_lookup_step
> > (dst_address_byte_index=2, dst_address=0x1025d44ea4,
> > current_leaf=1915695212)
> > at /home/centos/vpp/src/vnet/ip/ip4_mtrie.h:215
> > 215   return (ply->leaves[dst_address->as_u8[dst_address_byte_index]]);
> > (gdb) bt
> > #0  0x778327f8 in ip4_mtrie_16_lookup_step
> > (dst_address_byte_index=2, dst_address=0x1025d44ea4,
> > current_leaf=1915695212)
> > at /home/centos/vpp/src/vnet/ip/ip4_mtrie.h:215
> > #1  ip4_fib_forwarding_lookup (addr=0x1025d44ea4, fib_index=1) at
> > /home/centos/vpp/src/vnet/fib/ip4_fib.h:146
> > #2  ip4_lookup_inline (frame=0x7fffbc805a80, node=,
> > vm=0x7fffbc72bc40) at /home/centos/vpp/src/vnet/ip/ip4_forward.h:327
> > #3  ip4_lookup_node_fn_skx (vm=0x7fffbc72bc40, node=0x7fffbc7bc400,
> > frame=0x7fffbc805a80) at
> > /home/centos/vpp/src/vnet/ip/ip4_forward.c:101
> > #4  0x77ea2a45 in dispatch_node (last_time_stamp=<optimized out>, frame=0x7fffbc805a80, dispatch_state=VLIB_NODE_STATE_POLLING,
> > type=VLIB_NODE_TYPE_INTERNAL, node=0x7fffbc7bc400,
> > vm=0x7fffbc7bc400) at /home/centos/vpp/src/vlib/main.c:961
> > #5  dispatch_pending_node (vm=vm@entry=0x7fffbc72bc40,
> > pending_frame_index=pending_frame_index@entry=6,
> > last_time_stamp=)
> > at /home/centos/vpp/src/vlib/main.c:1120
> > #6  0x77ea4639 in vlib_main_or_worker_loop (is_main=0,
> > vm=0x7fffbc72bc40, vm@entry=0x7fffb8c96700)
> > at /home/centos/vpp/src/vlib/main.c:1589
> > #7  vlib_worker_loop (vm=vm@entry=0x7fffbc72bc40) at
> > /home/centos/vpp/src/vlib/main.c:1723
> > #8  0x77edea81 in vlib_worker_thread_fn (arg=0x7fffb8cd5640)
> > at /home/centos/vpp/src/vlib/threads.c:1579
> > #9  0x77edde33 in vlib_worker_thread_bootstrap_fn
> > (arg=0x7fffb8cd5640) at /home/centos/vpp/src/vlib/threads.c:418
> > #10 0x76638ea5 in start_thread (arg=0x7fff69156700) at
> > pthread_create.c:307
> > #11 0x75db5b0d in clone () at
> > ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
> >
> > Regards
> > -Prashant
>
> 
>
>




[vpp-dev] Crash in VPP22.06 in ip4_mtrie_16_lookup_step

2022-10-12 Thread Prashant Upadhyaya
Hi,

I am migrating from VPP 21.06, where my use case runs overnight without
issues, but in VPP 22.06 it produces the crash below within 7 to 8
minutes of running.
I am just wondering whether this is a known issue or whether anybody else has seen it.
Furthermore, when I run VPP 22.06 with a single worker thread the crash
is not seen, but when I run with 2 worker threads the crash below
appears.
With VPP 21.06 the crash is not seen regardless of the number of worker threads.

Thread 5 "vpp_wk_1" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff69156700 (LWP 5408)]
0x778327f8 in ip4_mtrie_16_lookup_step
(dst_address_byte_index=2, dst_address=0x1025d44ea4,
current_leaf=1915695212)
at /home/centos/vpp/src/vnet/ip/ip4_mtrie.h:215
215   return (ply->leaves[dst_address->as_u8[dst_address_byte_index]]);
(gdb) bt
#0  0x778327f8 in ip4_mtrie_16_lookup_step
(dst_address_byte_index=2, dst_address=0x1025d44ea4,
current_leaf=1915695212)
at /home/centos/vpp/src/vnet/ip/ip4_mtrie.h:215
#1  ip4_fib_forwarding_lookup (addr=0x1025d44ea4, fib_index=1) at
/home/centos/vpp/src/vnet/fib/ip4_fib.h:146
#2  ip4_lookup_inline (frame=0x7fffbc805a80, node=,
vm=0x7fffbc72bc40) at /home/centos/vpp/src/vnet/ip/ip4_forward.h:327
#3  ip4_lookup_node_fn_skx (vm=0x7fffbc72bc40, node=0x7fffbc7bc400,
frame=0x7fffbc805a80) at
/home/centos/vpp/src/vnet/ip/ip4_forward.c:101
#4  0x77ea2a45 in dispatch_node (last_time_stamp=, frame=0x7fffbc805a80, dispatch_state=VLIB_NODE_STATE_POLLING,
type=VLIB_NODE_TYPE_INTERNAL, node=0x7fffbc7bc400,
vm=0x7fffbc7bc400) at /home/centos/vpp/src/vlib/main.c:961
#5  dispatch_pending_node (vm=vm@entry=0x7fffbc72bc40,
pending_frame_index=pending_frame_index@entry=6,
last_time_stamp=)
at /home/centos/vpp/src/vlib/main.c:1120
#6  0x77ea4639 in vlib_main_or_worker_loop (is_main=0,
vm=0x7fffbc72bc40, vm@entry=0x7fffb8c96700)
at /home/centos/vpp/src/vlib/main.c:1589
#7  vlib_worker_loop (vm=vm@entry=0x7fffbc72bc40) at
/home/centos/vpp/src/vlib/main.c:1723
#8  0x77edea81 in vlib_worker_thread_fn (arg=0x7fffb8cd5640)
at /home/centos/vpp/src/vlib/threads.c:1579
#9  0x77edde33 in vlib_worker_thread_bootstrap_fn
(arg=0x7fffb8cd5640) at /home/centos/vpp/src/vlib/threads.c:418
#10 0x76638ea5 in start_thread (arg=0x7fff69156700) at
pthread_create.c:307
#11 0x75db5b0d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Regards
-Prashant




Re: [vpp-dev] Prioritized packet sending

2022-01-06 Thread Prashant Upadhyaya
Thanks Benoit !

Regards
-Prashant

On Thu, Jan 6, 2022 at 6:16 PM Benoit Ganne (bganne)  wrote:
>
> Yes, that should be the net effect, unless you have a very weird node hitting 
> the driver's tx node directly - but you'd be aware of it if that's the case.
>
> Best
> ben
>
> > -Original Message-
> > From: Prashant Upadhyaya 
> > Sent: jeudi 6 janvier 2022 12:33
> > To: Benoit Ganne (bganne) 
> > Cc: vpp-dev 
> > Subject: Re: [vpp-dev] Prioritized packet sending
> >
> > Thanks Benoit, let me be more specific than last time --
> >
> > Suppose my node is on the ip4-unicast feature arc and thus handling
> > the packets from ip4-input, say I get a frame here of 25 packets.
> > Now my node runs through these 25 packets, the first packet is special
> > and needs to go out on priority so I enqueue it to interface output at
> > L2 level like you said which is mighty fine. The node also finishes
> > the rest of packets processing and pushes them all to ip4-lookup for
> > further routing and sending.
> > In this scenario is it guaranteed that the first packet would be sent
> > out of the NIC first i.e. before the other 24 packets -- I am not
> > aware of the entire mechanics of the scheduling out here, but if
> > that's the net effect then that's exactly what I want.
> >
> > Regards
> > -Prashant
> >
> > On Thu, Jan 6, 2022 at 4:17 PM Benoit Ganne (bganne) 
> > wrote:
> > >
> > > Depends upon what you mean by "right now".
> > > The normal way to send a packet out in VPP is to just put the buffers to
> > the interface-output node with vlib_buffer_t.sw_if_index[VLIB_TX] set to
> > the correct interface.
> > > There will be some latency, as the interface-output node and then the
> > drivers node must run afterwards, but you'll skip any other nodes (eg.
> > let's say you're on the device-input feature arc and you want to skip the
> > whole L3 path - ip4-lookup and friends).
> > >
> > > Best
> > > ben
> > >
> > > > -Original Message-
> > > > From: vpp-dev@lists.fd.io  On Behalf Of Prashant
> > > > Upadhyaya
> > > > Sent: jeudi 6 janvier 2022 11:31
> > > > To: vpp-dev 
> > > > Subject: [vpp-dev] Prioritized packet sending
> > > >
> > > > Hi,
> > > >
> > > > Assume we are inside the code of a node in a  plugin on a worker.
> > > > Normally we would do the packet processing the usual way, enqueue the
> > > > packets to various other nodes and return and the graph scheduler
> > > > would send the packets out as normal dispatch logic.
> > > > But what if from my node code, I want to send a packet out the NIC
> > > > like right now ? We can assume that I have the fully constructed L2
> > > > packet with me. Is it possible to achieve this somehow from the plugin
> > > > node code so that this packet goes out right away and the rest of the
> > > > packets undergo the normal dispatch logic ?
> > > >
> > > > Regards
> > > > -Prashant
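
Putting Ben's description into code, sending one already-built packet
straight towards the NIC looks roughly like the sketch below. The node lookup
should really be cached at init time and error handling is omitted; treat it
as illustrative rather than production code:

  #include <vlib/vlib.h>
  #include <vnet/vnet.h>

  static void
  send_right_now (vlib_main_t *vm, u32 bi, u32 tx_sw_if_index)
  {
    vlib_buffer_t *b = vlib_get_buffer (vm, bi);
    u32 node_index =
      vlib_get_node_by_name (vm, (u8 *) "interface-output")->index;
    vlib_frame_t *f = vlib_get_frame_to_node (vm, node_index);
    u32 *to_next = vlib_frame_vector_args (f);

    /* tell interface-output which interface to transmit on */
    vnet_buffer (b)->sw_if_index[VLIB_TX] = tx_sw_if_index;

    to_next[0] = bi;
    f->n_vectors = 1;
    vlib_put_frame_to_node (vm, node_index, f);
  }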




Re: [vpp-dev] Prioritized packet sending

2022-01-06 Thread Prashant Upadhyaya
Thanks Benoit, let me be more specific than last time --

Suppose my node is on the ip4-unicast feature arc and is therefore handling
packets coming from ip4-input, and say it receives a frame of 25 packets here.
My node runs through these 25 packets; the first packet is special and needs
to go out with priority, so I enqueue it to interface-output at the L2 level
as you described, which works fine. The node then finishes processing the
remaining packets and pushes them all to ip4-lookup for further routing and
sending.
In this scenario, is it guaranteed that the first packet will be sent out of
the NIC first, i.e. before the other 24 packets? I am not aware of the entire
mechanics of the scheduling here, but if that is the net effect, then it is
exactly what I want.

Regards
-Prashant

On Thu, Jan 6, 2022 at 4:17 PM Benoit Ganne (bganne)  wrote:
>
> Depends upon what you mean by "right now".
> The normal way to send a packet out in VPP is to just put the buffers to the 
> interface-output node with vlib_buffer_t.sw_if_index[VLIB_TX] set to the 
> correct interface.
> There will be some latency, as the interface-output node and then the drivers 
> node must run afterwards, but you'll skip any other nodes (eg. let's say 
> you're on the device-input feature arc and you want to skip the whole L3 path 
> - ip4-lookup and friends).
>
> Best
> ben
>
> > -Original Message-----
> > From: vpp-dev@lists.fd.io  On Behalf Of Prashant
> > Upadhyaya
> > Sent: jeudi 6 janvier 2022 11:31
> > To: vpp-dev 
> > Subject: [vpp-dev] Prioritized packet sending
> >
> > Hi,
> >
> > Assume we are inside the code of a node in a  plugin on a worker.
> > Normally we would do the packet processing the usual way, enqueue the
> > packets to various other nodes and return and the graph scheduler
> > would send the packets out as normal dispatch logic.
> > But what if from my node code, I want to send a packet out the NIC
> > like right now ? We can assume that I have the fully constructed L2
> > packet with me. Is it possible to achieve this somehow from the plugin
> > node code so that this packet goes out right away and the rest of the
> > packets undergo the normal dispatch logic ?
> >
> > Regards
> > -Prashant




[vpp-dev] Prioritized packet sending

2022-01-06 Thread Prashant Upadhyaya
Hi,

Assume we are inside the code of a node in a plugin running on a worker.
Normally we would process the packets the usual way, enqueue them to
various other nodes and return, and the graph scheduler would send the
packets out through the normal dispatch logic.
But what if, from my node code, I want to send a packet out of the NIC
right now? Assume that I have a fully constructed L2 packet with me.
Is it possible to achieve this from the plugin node code, so that this
packet goes out right away while the rest of the packets undergo the
normal dispatch logic?

Regards
-Prashant




[vpp-dev] Calling system("foo.ksh") from main thread

2021-10-01 Thread Prashant Upadhyaya
Hi,

I wanted to find out whether it is safe to call a shell script via
system("foo.ksh") in VPP from the main thread, say, periodically. The
foo.ksh script does some simple operations and does not block for long.

I ask because I have noticed that the fast path seems to stop working
once I do the above. Before I dig further, I wanted to check whether
anything is already known about this.

Regards
-Prashant
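
For reference, one way to keep a blocking system() call off the packet path
is to run it from a detached helper thread. This is a plain POSIX sketch
(function names are made up), not a VPP API, and it only helps if the stall
really comes from blocking in the main thread:

  #include <pthread.h>
  #include <stdlib.h>
  #include <string.h>

  static void *
  run_script (void *arg)
  {
    system ((char *) arg);   /* blocks only this helper thread */
    free (arg);
    return 0;
  }

  static void
  run_script_async (const char *cmd)
  {
    pthread_t t;
    char *copy = strdup (cmd);   /* keep the string alive for the thread */

    if (copy && pthread_create (&t, 0, run_script, copy) == 0)
      pthread_detach (t);
    else
      free (copy);
  }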




Re: [vpp-dev] Regarding assert in vlib_buffer_advance

2021-09-08 Thread Prashant Upadhyaya
Hi Damjan,

Thanks for the feedback.
Out of curiosity, what is the motivation for this contract about the
minimal length of chained buffer data? Surely -- my case being a case in
point -- the chaining framework should not make assumptions about how
the user will use it.

Regards
-Prashant

On Tue, Sep 7, 2021 at 12:59 AM Damjan Marion  wrote:
>
>
> —
> Damjan
>
>
>
> On 06.09.2021., at 15:27, Prashant Upadhyaya  wrote:
>
> Hi,
>
> I am using VPP21.06
> In vlib_buffer_advance there is the following assert --
> ASSERT ((b->flags & VLIB_BUFFER_NEXT_PRESENT) == 0 ||
>  b->current_length >= VLIB_BUFFER_MIN_CHAIN_SEG_SIZE);
>
> The above is problematic as I have a usecase where I construct a chained 
> packet.
> The first packet in the chain is containing just an ip4/udp/gtp header
> and the second packet in the chain is an IP4 packet of arbitrary
> length -- you can see that I am trying to wrap the packet into gtp via
> chaining.
> As a result this assert hits and brings the house down.
> My usecase works fine when I use the non-debug build of VPP.
>
> Perhaps this assert should be removed ?
>
>
> This assert enforces a contract with the rest of the VPP code about the minimal 
> length of chained buffer data.
> You can remove it, but be aware of the consequences. At some point things may 
> just blow up…
>
> —
> Damjan
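
To make the contract Damjan mentions concrete, here is a sketch of the
chaining pattern under discussion, with the first segment padded up to
VLIB_BUFFER_MIN_CHAIN_SEG_SIZE so the assert is satisfied. The helper name
and the padding approach are illustrative only, not something prescribed by
VPP:

  #include <vlib/vlib.h>

  static void
  chain_with_min_first_seg (vlib_main_t *vm, u32 hdr_bi, u32 payload_bi)
  {
    vlib_buffer_t *first = vlib_get_buffer (vm, hdr_bi);
    vlib_buffer_t *rest = vlib_get_buffer (vm, payload_bi);

    /* if the header-only segment is too short, pull some payload bytes
       forward so current_length >= VLIB_BUFFER_MIN_CHAIN_SEG_SIZE */
    if (first->current_length < VLIB_BUFFER_MIN_CHAIN_SEG_SIZE)
      {
        u16 need = VLIB_BUFFER_MIN_CHAIN_SEG_SIZE - first->current_length;
        need = clib_min (need, rest->current_length);
        clib_memcpy_fast ((u8 *) vlib_buffer_get_current (first) +
                            first->current_length,
                          vlib_buffer_get_current (rest), need);
        first->current_length += need;
        vlib_buffer_advance (rest, need);
      }

    first->next_buffer = payload_bi;
    first->flags |= VLIB_BUFFER_NEXT_PRESENT | VLIB_BUFFER_TOTAL_LENGTH_VALID;
    first->total_length_not_including_first_buffer = rest->current_length;
  }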




[vpp-dev] Regarding assert in vlib_buffer_advance

2021-09-06 Thread Prashant Upadhyaya
Hi,

I am using VPP21.06
In vlib_buffer_advance there is the following assert --
ASSERT ((b->flags & VLIB_BUFFER_NEXT_PRESENT) == 0 ||
  b->current_length >= VLIB_BUFFER_MIN_CHAIN_SEG_SIZE);

The above is problematic, as I have a use case where I construct a chained packet.
The first buffer in the chain contains just an ip4/udp/gtp header, and
the second buffer in the chain is an IPv4 packet of arbitrary length --
as you can see, I am trying to wrap the packet into GTP via chaining.
As a result this assert hits and brings the house down.
My use case works fine when I use the non-debug build of VPP.

Perhaps this assert should be removed?

Regards
-Prashant




[vpp-dev] Regarding VPP IPSec pipeline

2021-09-06 Thread Prashant Upadhyaya
Hi,

I am using VPP 21.06.
I have successfully created an IPsec tunnel between VPP and a strongSwan peer.
Packets from VPP are going out as ESP towards the peer, and the peer is
responding with ESP as well (the inner cleartext packets are ICMP).

Now, I have a node of my own which sits on the ip4-unicast arc and has a
runs_before clause like this --
.runs_before = VNET_FEATURES ("ip4-lookup")

I am expecting that when the ESP packet lands at VPP, it will be decrypted
and the inner IP packet will go again to ip4-input and from there hit my
node on the ip4-unicast arc. However, this does not happen; it appears
that the packet goes to ip4-lookup, bypassing my node.

So the question is: how do I get the inner packet, decrypted from ESP, to reach my node?

Regards
-Prashant
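
For reference, the registration in question plus a per-interface enable look
roughly like the sketch below. The per-interface enable is an assumption on
my side (feature arcs are enabled per sw_if_index, so the arc has to be
enabled on whichever interface the decrypted inner packets arrive on), not
something confirmed in this thread:

  #include <vnet/vnet.h>
  #include <vnet/feature/feature.h>

  VNET_FEATURE_INIT (my_feature, static) = {
    .arc_name = "ip4-unicast",
    .node_name = "my-node",
    .runs_before = VNET_FEATURES ("ip4-lookup"),
  };

  /* control-plane side: enable the feature on every interface the
     decrypted (inner) packets can show up on */
  static void
  enable_on_interface (u32 sw_if_index)
  {
    vnet_feature_enable_disable ("ip4-unicast", "my-node", sw_if_index,
                                 1 /* enable */, 0, 0);
  }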




Re: [vpp-dev] Regarding RDMA

2020-10-03 Thread Prashant Upadhyaya
Thanks Benoit for the information.

Regards
-Prashant


On Tue, Sep 29, 2020 at 3:14 PM Benoit Ganne (bganne)  wrote:
>
> Hi Prashant,
>
> > 1. What is the expected performance benefit if RDMA driver is used in
> > VPP instead of DPDK driver ?
>
>  - out-of-the-box support for Mellanox CX4/5 NICs in VPP. DPDK Mellanox 
> support is not compiled by default in VPP, but you can enable it by adding 
> DPDK_MLX[4|5]_PMD=y in your env when building VPP dependencies; however, it is 
> harder for us to support than the RDMA driver
>  - dynamic interface creation/deletion at runtime, contrary to DPDK where 
> everything must be configured at init time (and any config change requires a 
> restart)
>  - does not require hugepages, which simplifies deployment, esp. in containers
>
> > 2. Which NIC's are supported with RDMA driver in VPP ?
>
> Mellanox CX4 and CX5. CX6 should be supported too but has not been widely 
> tested.
>
> > Conversely are there any NIC's which are supported by DPDK
> > driver in VPP which are not supported by RDMA ?
>
> Of course: ark, atlantic, avp, axgbe, bnxt, cxgbe, dpaa2, e1000, ena, enetc, 
> enic, fm10k, hinic, hns3, ice, igc, ixgbe, netvsc, nfp, octeontx, octeontx2, 
> vdev_netvsc to name a few.
>
> ben
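
As a concrete illustration of the runtime interface creation Ben mentions,
the rdma plugin exposes a CLI along these lines (the host interface name is a
placeholder; check "create interface rdma" help on the build in use):

  vpp# create interface rdma host-if enp94s0f0 name rdma-0
  vpp# set interface state rdma-0 up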




[vpp-dev] Regarding RDMA

2020-09-28 Thread Prashant Upadhyaya
Hi,

I am trying to evaluate use of the RDMA driver with VPP.
There are two immediate questions I need to answer for my higher-ups:

1. What is the expected performance benefit if the RDMA driver is used in
VPP instead of the DPDK driver?
2. Which NICs are supported by the RDMA driver in VPP? Conversely, are
there any NICs supported by the DPDK driver in VPP that are not
supported by RDMA?

Would really appreciate any feedback on this.

Regards
-Prashant




Re: [vpp-dev] Regarding worker loop in VPP

2020-07-23 Thread Prashant Upadhyaya
Thanks Dave for the useful suggestions.

Regards
-Prashant

On Thu, Jul 23, 2020 at 4:24 PM Dave Barach (dbarach)  wrote:
>
> You could use the vlib_node_runtime_perf_counter callback hook to run code 
> between node dispatches, which SHOULD give adequate precision.
>
> Alternatively, spin up 1-N threads to run the shaper and driver TX path, and 
> nothing else. See also the handoff node.
>
> HTH... Dave
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Thursday, July 23, 2020 2:39 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Regarding worker loop in VPP
>
> Hi,
>
> I have implemented a shaper as a poll node in VPP. worker.
> The implementation is such that the shaper needs to send packets out which 
> are sitting/scheduled in a timer wheel with microsecond granularity slots.
> The shaper must invoke at a precise regular interval, say every 250 
> microseconds where it will rotate the wheel and if any timers expire then 
> send packets out corresponding to those timers.
>
> Everything works well, till the various other nodes start getting loaded and 
> disturb the invocation of  the shaper poll node at precise intervals. This 
> leads to multiple slots expiring from the timer wheel at times leading to 
> sending out of uneven amount of data depending on how many slots expire in 
> the wheel.
>
> Given the nature of while(1) loop operating in the worker and the graph 
> scheduling present there, is there any way I can have my poll node invoke at 
> high precision time boundary as an exception out of the main loop, do the job 
> there and go back to what the worker loop was doing.
>
> Regards
> -Prashant


[vpp-dev] Regarding worker loop in VPP

2020-07-23 Thread Prashant Upadhyaya
Hi,

I have implemented a shaper as a poll node in a VPP worker.
The shaper needs to send out packets that are sitting/scheduled in a
timer wheel with microsecond-granularity slots.
The shaper must be invoked at a precise regular interval, say every 250
microseconds, at which point it rotates the wheel and, if any timers
expire, sends out the packets corresponding to those timers.

Everything works well until the various other nodes start getting
loaded and disturb the invocation of the shaper poll node at precise
intervals. This leads to multiple slots expiring from the timer wheel at
once, which in turn leads to sending out uneven amounts of data
depending on how many slots expire in the wheel.

Given the while(1) loop operating in the worker and the graph scheduling
present there, is there any way I can have my poll node invoked on a
high-precision time boundary as an exception to the main loop, do the
job there, and then go back to what the worker loop was doing?
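
For context, the shaper described above is implemented as a polling input
node; a bare-bones registration looks roughly like this (the node name and
dispatch body are placeholders, and the actual wheel rotation and transmit
logic are omitted):

  #include <vlib/vlib.h>

  static uword
  shaper_node_fn (vlib_main_t *vm, vlib_node_runtime_t *node,
                  vlib_frame_t *frame)
  {
    f64 now = vlib_time_now (vm);
    /* rotate the microsecond timer wheel here and send whatever expired */
    (void) now;
    return 0;   /* number of packets processed in this dispatch */
  }

  VLIB_REGISTER_NODE (shaper_node) = {
    .function = shaper_node_fn,
    .name = "my-shaper",
    .type = VLIB_NODE_TYPE_INPUT,     /* polled on every main-loop iteration */
    .state = VLIB_NODE_STATE_POLLING,
  };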

Regards
-Prashant


Re: [vpp-dev] Regarding vlib_time_now

2020-06-15 Thread Prashant Upadhyaya
Thanks Dave.
Yes, I have taken care of that in my code. I always use the highest
value seen so far as a ceiling in my logic: if I see a lower value I use
the ceiling, otherwise I move on and record the new ceiling.
The earlier math counted on an ever-increasing value, so bad things were
happening in my code; it looks OK now.
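
For anyone wanting to do the same, the per-worker ceiling logic described
above amounts to something like the sketch below; the wrapper name is made
up and the ceiling must be kept as per-thread state:

  #include <vlib/vlib.h>

  /* returns a never-decreasing view of vlib_time_now() for one worker;
     *ceiling is per-thread state owned by the caller */
  static inline f64
  monotonic_time_now (vlib_main_t *vm, f64 *ceiling)
  {
    f64 now = vlib_time_now (vm);

    if (now < *ceiling)
      now = *ceiling;     /* clock stepped back: reuse the high-water mark */
    else
      *ceiling = now;     /* record the new high-water mark */

    return now;
  }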

Regards
-Prashant

On Mon, Jun 15, 2020 at 7:07 PM Dave Barach (dbarach)  wrote:
>
> That's within reason given that thread time offsets are not recalculated 
> immediately, and that (for stability reasons) the clock-rate update algorithm 
> uses exponential smoothing.
>
> Aside from accounting for the issue in your code, there probably isn't much 
> to be done about it...
>
> D
>
> -----Original Message-
> From: Prashant Upadhyaya 
> Sent: Monday, June 15, 2020 8:58 AM
> To: Dave Barach (dbarach) 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Regarding vlib_time_now
>
> Hi Dave,
>
> Thanks, on a VM I am observing the reduction from a couple of microseconds to 
> 50 microseconds at times NTP was turned on. After turning it off, I don't see 
> the time reduction.
> The output of the command is below
>
> vppctl show clock verbose
>
> Time now 16712.719968, reftime 16712.719967, error .01, clocks/sec
> 2596982853.770165
>
> Time last barrier release 16709.938950671
>
> 1: Time now 16710.417730, reftime 16710.417730, error 0.00, clocks/sec 
> 2596982875.038256
>
> Thread 1 offset 2.302279669 error -.00032
>
> [root@bfs-dl360g9-16-vm4 iptabl]#
> /opt/opwv/integra/SystemActivePath/tools/vpp/bin/vppctl show clock verbose
>
> Time now 16715.621101, reftime 16715.621101, error 0.00, clocks/sec 
> 2596982853.770165
>
> Time last barrier release 16712.721636492
>
> 1: Time now 16713.318854, reftime 16713.318854, error 0.00, clocks/sec 
> 2596982875.038256
>
> Thread 1 offset 2.302279482 error -.8
>
> [root@bfs-dl360g9-16-vm4 iptabl]#
> /opt/opwv/integra/SystemActivePath/tools/vpp/bin/vppctl show clock verbose
>
> Time now 16718.249427, reftime 16718.249427, error 0.00, clocks/sec 
> 2596982853.770165
>
> Time last barrier release 16715.621212275
>
> 1: Time now 16715.947179, reftime 16715.947179, error 0.00, clocks/sec 
> 2596982875.038256
>
> Thread 1 offset 2.302279562 error -.8
>
> [root@bfs-dl360g9-16-vm4 iptabl]#
> /opt/opwv/integra/SystemActivePath/tools/vpp/bin/vppctl show clock verbose
>
> Time now 16719.646461, reftime 16719.646461, error 0.00, clocks/sec 
> 2596982853.770165
>
> Time last barrier release 16718.249525477
>
> 1: Time now 16717.344206, reftime 16717.344206, error 0.00, clocks/sec 
> 2596982875.038256
>
> Thread 1 offset 2.302279598 error -.9
>
> [root@bfs-dl360g9-16-vm4 iptabl]#
> /opt/opwv/integra/SystemActivePath/tools/vpp/bin/vppctl show clock verbose
>
> Time now 16721.162232, reftime 16721.162232, error 0.00, clocks/sec 
> 2596982853.770165
>
> Time last barrier release 16720.702629716
>
> 1: Time now 16718.859979, reftime 16718.859979, error 0.00, clocks/sec 
> 2596982875.038256
>
> Thread 1 offset 2.302279598 error -.8
>
> [root@bfs-dl360g9-16-vm4 iptabl]#
> /opt/opwv/integra/SystemActivePath/tools/vpp/bin/vppctl show clock verbose
>
> Time now 16722.313997, reftime 16722.313997, error 0.00, clocks/sec 
> 2596982853.770165
>
> Time last barrier release 16721.162470894
>
> 1: Time now 16720.011753, reftime 16720.011753, error 0.00, clocks/sec 
> 2596982875.038256
>
> Thread 1 offset 2.302279597 error -.9
>
> Regards
> -Prashant
>
> On Sun, Jun 14, 2020 at 8:12 PM Dave Barach (dbarach)  
> wrote:
> >
> > What is the magnitude of the delta that you observe? What does "show clock 
> > verbose" say about the state of clock-rate convergence? Is a deus ex 
> > machina (e.g. NTP) involved?
> >
> >
> >
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Prashant
> > Upadhyaya
> > Sent: Sunday, June 14, 2020 10:32 AM
> > To: vpp-dev@lists.fd.io
> > Subject: [vpp-dev] Regarding vlib_time_now
> >
> >
> >
> > Hi,
> >
> >
> >
> > I am using VPP 19.08
> >
> > In my worker threads, I am observing that when I am making successive calls 
> > to vlib_time_now in a polling node, sometimes the value of the time reduces.
> >
> > Is this expected to happen ? (presumably because of the implementation 
> > which tries to align the times in workers ?) I have an implementation which 
> > is extremely sensitive to time at microsecond level and depends on the the 
> > vlib_time_now onl

Re: [vpp-dev] Regarding vlib_time_now

2020-06-15 Thread Prashant Upadhyaya
Hi Dave,

Thanks. On a VM I am observing reductions ranging from a couple of
microseconds up to 50 microseconds at times.
NTP was turned on; after turning it off, I no longer see the time reduction.
The output of the command is below

vppctl show clock verbose

Time now 16712.719968, reftime 16712.719967, error .01, clocks/sec
2596982853.770165

Time last barrier release 16709.938950671

1: Time now 16710.417730, reftime 16710.417730, error 0.00,
clocks/sec 2596982875.038256

Thread 1 offset 2.302279669 error -.00032

[root@bfs-dl360g9-16-vm4 iptabl]#
/opt/opwv/integra/SystemActivePath/tools/vpp/bin/vppctl show clock
verbose

Time now 16715.621101, reftime 16715.621101, error 0.00,
clocks/sec 2596982853.770165

Time last barrier release 16712.721636492

1: Time now 16713.318854, reftime 16713.318854, error 0.00,
clocks/sec 2596982875.038256

Thread 1 offset 2.302279482 error -.8

[root@bfs-dl360g9-16-vm4 iptabl]#
/opt/opwv/integra/SystemActivePath/tools/vpp/bin/vppctl show clock
verbose

Time now 16718.249427, reftime 16718.249427, error 0.00,
clocks/sec 2596982853.770165

Time last barrier release 16715.621212275

1: Time now 16715.947179, reftime 16715.947179, error 0.00,
clocks/sec 2596982875.038256

Thread 1 offset 2.302279562 error -.8

[root@bfs-dl360g9-16-vm4 iptabl]#
/opt/opwv/integra/SystemActivePath/tools/vpp/bin/vppctl show clock
verbose

Time now 16719.646461, reftime 16719.646461, error 0.00,
clocks/sec 2596982853.770165

Time last barrier release 16718.249525477

1: Time now 16717.344206, reftime 16717.344206, error 0.00,
clocks/sec 2596982875.038256

Thread 1 offset 2.302279598 error -.9

[root@bfs-dl360g9-16-vm4 iptabl]#
/opt/opwv/integra/SystemActivePath/tools/vpp/bin/vppctl show clock
verbose

Time now 16721.162232, reftime 16721.162232, error 0.00,
clocks/sec 2596982853.770165

Time last barrier release 16720.702629716

1: Time now 16718.859979, reftime 16718.859979, error 0.00,
clocks/sec 2596982875.038256

Thread 1 offset 2.302279598 error -.8

[root@bfs-dl360g9-16-vm4 iptabl]#
/opt/opwv/integra/SystemActivePath/tools/vpp/bin/vppctl show clock
verbose

Time now 16722.313997, reftime 16722.313997, error 0.00,
clocks/sec 2596982853.770165

Time last barrier release 16721.162470894

1: Time now 16720.011753, reftime 16720.011753, error 0.00,
clocks/sec 2596982875.038256

Thread 1 offset 2.302279597 error -.9

Regards
-Prashant

On Sun, Jun 14, 2020 at 8:12 PM Dave Barach (dbarach)  wrote:
>
> What is the magnitude of the delta that you observe? What does "show clock 
> verbose" say about the state of clock-rate convergence? Is a deus ex machina 
> (e.g. NTP) involved?
>
>
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Sunday, June 14, 2020 10:32 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Regarding vlib_time_now
>
>
>
> Hi,
>
>
>
> I am using VPP 19.08
>
> In my worker threads, I am observing that when I am making successive calls 
> to vlib_time_now in a polling node, sometimes the value of the time reduces.
>
> Is this expected to happen ? (presumably because of the implementation which 
> tries to align the times in workers ?) I have an implementation which is 
> extremely sensitive to time at microsecond level and depends on the the 
> vlib_time_now only increasing monotonically across calls individually in the 
> workers (or remain the same but never decrease) on a per worker basis even if 
> the times within the workers are not synchronized.
>
>
>
> Regards
>
> -Prashant


[vpp-dev] Regarding vlib_time_now

2020-06-14 Thread Prashant Upadhyaya
Hi,

I am using VPP 19.08.
In my worker threads, I am observing that when I make successive calls
to vlib_time_now in a polling node, the value of the time sometimes
decreases.
Is this expected to happen? (Presumably because of the implementation
that tries to align the times across workers?)
I have an implementation which is extremely sensitive to time at the
microsecond level and which depends on vlib_time_now only increasing
monotonically (or staying the same, but never decreasing) across
successive calls within each worker, on a per-worker basis, even if the
times between workers are not synchronized.

Regards
-Prashant


Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-04 Thread Prashant Upadhyaya
Thanks Dave for the tip on core compression.

I was able to solve the issue of the huge VSZ resulting in huge cores
after all -- the culprit is DPDK.
There is a parameter in DPDK called CONFIG_RTE_MAX_MEM_MB which can be
set to a lower value than the default.
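
For later readers, the two knobs discussed in this thread look roughly as
follows; the values are placeholders, and CONFIG_RTE_MAX_MEM_MB applies to
the make-based DPDK build configuration:

  # startup.conf (19.08-era syntax; replaces the old dpdk { num-mbufs ... })
  buffers {
    buffers-per-numa 128000
  }

  # DPDK build config (config/common_base)
  CONFIG_RTE_MAX_MEM_MB=4096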

Regards
-Prashant

On Tue, Feb 4, 2020 at 5:22 PM Dave Barach (dbarach)  wrote:
>
> As Ben wrote, please check out: 
> https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html
>
> Note the section(s) on core file handling; in particular, how to set up 
> on-the-fly core file compression...:
>
> Depending on operational requirements, it’s possible to compress corefiles as 
> they are generated. Please note that it takes several seconds’ worth of 
> wall-clock time to compress a vpp core file on the fly, during which all 
> packet processing activities are suspended.
>
> To create compressed core files on the fly, create the following script, e.g. 
> in /usr/local/bin/compressed_corefiles, owned by root, executable:
>
> #!/bin/sh
> exec /bin/gzip -f - >"/tmp/dumps/core-$1.$2.gz"
>
> Adjust the kernel core file pattern as shown:
>
> sysctl -w kernel.core_pattern="|/usr/local/bin/compressed_corefiles %e %t"
>
> HTH... Dave
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Tuesday, February 4, 2020 4:38 AM
> To: Benoit Ganne (bganne) 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
>
> Thanks Benoit.
> I don't have the core files at the moment (still taming the huge cores that 
> are generated, so they were disabled on the setup) Backtraces are present at 
> (with indicated config of the parameter) -- https://pastebin.com/1YS3ZWeb It 
> is a dual numa setup.
>
> Regards
> -Prashant
>
>
> On Tue, Feb 4, 2020 at 1:55 PM Benoit Ganne (bganne)  wrote:
> >
> > Hi Prashant,
> >
> > Can you share your configuration and at least a backtrace of the
> > crash? Or even better a corefile:
> > https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportin
> > gissues.html
> >
> > Best
> > ben
> >
> > > -Original Message-
> > > From: vpp-dev@lists.fd.io  On Behalf Of
> > > Prashant Upadhyaya
> > > Sent: mardi 4 février 2020 09:15
> > > To: vpp-dev@lists.fd.io
> > > Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
> > >
> > >  Woops, my mistake. I think I multiplied by 1024 extra.
> > > Mbuf's are 2KB's, not 2 MB's (that's the huge page size)
> > >
> > > But the fact remains that my usecase is unstable at higher
> > > configured buffers but is stable at lower values like 10 (this
> > > can by all means be my usecase/code specific issue)
> > >
> > > If anybody else facing issues with higher configured buffers, please
> > > do share.
> > >
> > > Regards
> > > -Prashant
> > >
> > >
> > > On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya
> > >  wrote:
> > > >
> > > > Hi,
> > > >
> > > > I am using DPDK Plugin with VPP19.08.
> > > > When I set the buffers-per-numa parameter to a high value, say,
> > > > 25, I am seeing crashes in the system.
> > > >
> > > > (The corresponding parameter controlling number of mbufs in
> > > > VPP18.01 used to work well. This was in dpdk config section as
> > > > num-mbufs)
> > > >
> > > > I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields
> > > > which are uwords :-  uword buffer_mem_start;
> > > >   uword buffer_mem_size;
> > > >
> > > > Is it a mem size overflow in case the buffers-per-numa parameter
> > > > is set to a high value ?
> > > > I do need a high number of DPDK mbuf's in my usecase.
> > > >
> > > > Regards
> > > > -Prashant


Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-04 Thread Prashant Upadhyaya
Thanks Benoit.
I don't have the core files at the moment (I am still taming the huge
cores that get generated, so they were disabled on this setup).
Backtraces, with the corresponding parameter configuration indicated, are at
https://pastebin.com/1YS3ZWeb
It is a dual-NUMA setup.

Regards
-Prashant


On Tue, Feb 4, 2020 at 1:55 PM Benoit Ganne (bganne)  wrote:
>
> Hi Prashant,
>
> Can you share your configuration and at least a backtrace of the crash? Or 
> even better a corefile: 
> https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html
>
> Best
> ben
>
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Prashant
> > Upadhyaya
> > Sent: mardi 4 février 2020 09:15
> > To: vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
> >
> >  Woops, my mistake. I think I multiplied by 1024 extra.
> > Mbuf's are 2KB's, not 2 MB's (that's the huge page size)
> >
> > But the fact remains that my usecase is unstable at higher configured
> > buffers but is stable at lower values like 10 (this can by all
> > means be my usecase/code specific issue)
> >
> > If anybody else facing issues with higher configured buffers, please do
> > share.
> >
> > Regards
> > -Prashant
> >
> >
> > On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya
> >  wrote:
> > >
> > > Hi,
> > >
> > > I am using DPDK Plugin with VPP19.08.
> > > When I set the buffers-per-numa parameter to a high value, say,
> > > 25, I am seeing crashes in the system.
> > >
> > > (The corresponding parameter controlling number of mbufs in VPP18.01
> > > used to work well. This was in dpdk config section as num-mbufs)
> > >
> > > I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields
> > > which are uwords :-
> > >  uword buffer_mem_start;
> > >   uword buffer_mem_size;
> > >
> > > Is it a mem size overflow in case the buffers-per-numa parameter is
> > > set to a high value ?
> > > I do need a high number of DPDK mbuf's in my usecase.
> > >
> > > Regards
> > > -Prashant


Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-04 Thread Prashant Upadhyaya
Whoops, my mistake -- I think I multiplied by an extra 1024.
Mbufs are 2 KB, not 2 MB (that is the huge page size).

But the fact remains that my use case is unstable with a higher number
of configured buffers and stable at lower values like 10 (this may well
be an issue specific to my use case/code).

If anybody else is facing issues with a higher number of configured buffers, please do share.

Regards
-Prashant


On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya
 wrote:
>
> Hi,
>
> I am using DPDK Plugin with VPP19.08.
> When I set the buffers-per-numa parameter to a high value, say,
> 25, I am seeing crashes in the system.
>
> (The corresponding parameter controlling number of mbufs in VPP18.01
> used to work well. This was in dpdk config section as num-mbufs)
>
> I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields
> which are uwords :-
>  uword buffer_mem_start;
>   uword buffer_mem_size;
>
> Is it a mem size overflow in case the buffers-per-numa parameter is
> set to a high value ?
> I do need a high number of DPDK mbuf's in my usecase.
>
> Regards
> -Prashant


[vpp-dev] Regarding buffers-per-numa parameter

2020-02-04 Thread Prashant Upadhyaya
Hi,

I am using the DPDK plugin with VPP 19.08.
When I set the buffers-per-numa parameter to a high value, say 25, I am
seeing crashes in the system.

(The corresponding parameter controlling the number of mbufs in VPP 18.01
used to work well; it was num-mbufs in the dpdk config section.)

I quickly checked that in VPP 19.08 vlib_buffer_main_t uses fields
which are uwords:
 uword buffer_mem_start;
 uword buffer_mem_size;

Is it a memory-size overflow when the buffers-per-numa parameter is set
to a high value?
I do need a high number of DPDK mbufs in my use case.

Regards
-Prashant


Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-07 Thread Prashant Upadhyaya
On Fri, Dec 6, 2019 at 9:32 PM Damjan Marion  wrote:
>
>
>
> > On 6 Dec 2019, at 07:16, Prashant Upadhyaya  wrote:
> >
> > Hi,
> >
> > I use VPP with DPDK driver for I/O with NIC.
> > For high speed switching of packets to and from kernel, I use DPDK KNI
> > (kernel module and user space API's provided by DPDK)
> > This works well because the vlib buffer is backed by the DPDK mbuf
> > (KNI uses DPDK mbuf's)
> >
> > Now, if I choose to use a native driver of VPP for I/O with NIC, is
> > there a native equivalent in VPP to replace KNI as well ? The native
> > equivalent should not lose out on performance as compared to KNI so I
> > believe the tap interface can be ruled out here.
> >
> > If I keep using DPDK KNI and VPP native non-dpdk driver, then I fear I
> > would have to do a data copy between the vlib buffer and an mbuf  in
> > addition to doing all the DPDK pool maintenance etc. The copies would
> > be destructive for performance surely.
> >
> > So I believe, the question is -- in presence of native drivers in VPP,
> > what is the high speed equivalent of DPDK KNI.
>
> You can use dpdk and native drivers at the same time.
> How does KNI performance compare to a tap with the vhost-net backend?
>
>
> --
> Damjan
>

Thanks Damjan.
If I use a native driver for the NIC, would the vlib buffer still be
backed by a DPDK mbuf?
I don't know the performance difference between KNI and a tap with the
vhost-net backend.
I would need to poll the tap to pick up packets from the kernel side,
and write into the tap to send packets to the kernel, in the VPP
workers. I suppose the copies from user space into kernel space and vice
versa would make it more expensive than KNI, where just an exchange of
mbufs takes place in both directions. Plus, I wonder whether system call
usage is a good idea at all in a worker which is also multiplexing
packet I/O with the NIC.
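
For anyone wanting to benchmark Damjan's suggestion, a tap backed by
vhost-net can be created at runtime roughly like this (the id, host interface
name and address are placeholders; see "create tap" help on the build in use):

  vpp# create tap id 0 host-if-name vpp-tap0
  vpp# set interface state tap0 up
  vpp# set interface ip address tap0 192.0.2.1/24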

Regards
-Prashant


[vpp-dev] Regarding high speed I/O with kernel

2019-12-06 Thread Prashant Upadhyaya
Hi,

I use VPP with the DPDK driver for I/O with the NIC.
For high-speed switching of packets to and from the kernel, I use DPDK
KNI (the kernel module and user-space APIs provided by DPDK).
This works well because the vlib buffer is backed by a DPDK mbuf
(KNI uses DPDK mbufs).

Now, if I choose to use a native VPP driver for I/O with the NIC, is
there a native equivalent in VPP to replace KNI as well? The native
equivalent should not lose out on performance compared to KNI, so I
believe the tap interface can be ruled out here.

If I keep using DPDK KNI with a native (non-DPDK) VPP driver, then I
fear I would have to copy data between the vlib buffer and an mbuf, in
addition to doing all the DPDK pool maintenance, etc. The copies would
surely be destructive for performance.

So I believe the question is: in the presence of native drivers in VPP,
what is the high-speed equivalent of DPDK KNI?

Regards
-Prashant


Re: [vpp-dev] Approximate effective CPU utilization of VPP worker

2019-06-10 Thread Prashant Upadhyaya
Thanks Dave for the feedback.
I will give a look at 19.04 and master/latest.

Regards
-Prashant



On Mon, Jun 10, 2019 at 5:11 PM Dave Barach (dbarach)  wrote:
>
> We've added purpose-built callback hooks in the main dispatch loop [in 
> master/latest] so that folks can hook up whatever sort of instrumentation 
> they like. Please have a look at that scheme.
>
> There's no point in attempting to upstream your work to stable/1801. We long 
> since shut down the CSIT performance jobs for 18.01, and we will not merge 
> feature patches into stable/1801.
>
> Many things have changed since 18.01 - thousands of patches' worth, including 
> significant performance tunes and a new build system - IIWY I'd rebase to 
> 19.04 and/or master/latest.
>
> HTH... Dave
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Monday, June 10, 2019 6:13 AM
> To: Dave Barach (dbarach) 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Approximate effective CPU utilization of VPP worker
>
> Hi Dave,
>
> Thanks for your suggestion.
>
> As an attempt towards an alternate strategy, I did go ahead and make a change 
> in the main loop of VPP18.01 (similar change as I describe below should be 
> valid much in a similar fashion to subsequent
> releases) and it seems to be working for me, but there might be many catches 
> so I am disclosing what I did below.
>
> In vlib_main_or_worker_loop, on a per thread basis, I accumulated time in the 
> following areas :- . vl_api_send_pending_rpc_requests (if there were indeed 
> some pending) . dispatch_node (if the node's callback returned n > 0) . 
> vlib_frame_queue_dequeue (again if return value was > 0) The above acted as a 
> numerator for a sampling time of 1 second, the denominator. At the stroke of 
> this window, I divide, store result and clear all accumulations to begin 
> afresh.
> The calculations are valid only for worker threads as I don't care for main 
> thread whose cpu is reported by linux anyway.
>
> Empirically it seems  to work ok for me. I did check by making my own plugins 
> quite busy and so forth.
>
> Do you think, the above is a reasonable custom strategy or is there a chance 
> of destroying performance of main loop (and thus the whole
> system) due to frequent calls to vlib_time_now, would appreciate if you can 
> give your opinion.
>
> Regards
> -Prashant
>
>
>
>
>
> On Fri, Jun 7, 2019 at 5:22 PM Dave Barach (dbarach)  
> wrote:
> >
> > The instantaneous vector size - in the stats segment, per-thread - is the 
> > best measure of how hard vpp is working.
> >
> >
> >
> > It's not a linear function of the offered load, but for any given offered 
> > load / feature set you can model it with three segments:
> >
> >
> >
> > Dead asleep: vector size < 3, offered load up to maybe 1 mpps’ worth of 
> > simple forwarding
> > Linear region: vector size increases smoothly as offered load increases, 
> > up to a vector size of maybe 128 or so
> > Hit the wall: a slight increase in offered load causes a huge jump in the 
> > vector size, traffic loss, etc.
> >
> >
> >
> > HTH... Dave
> >
> >
> >
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Prashant
> > Upadhyaya
> > Sent: Friday, June 7, 2019 5:41 AM
> > To: vpp-dev@lists.fd.io
> > Subject: [vpp-dev] Approximate effective CPU utilization of VPP worker
> >
> >
> >
> > Hi,
> >
> >
> >
> > I understand that VPP workers poll so they are always at 100 % CPU.
> >
> > However, I want to get an estimate of what is the effective CPU utilization 
> > i.e. the amount which was spent in doing the work for packet processing.
> >
> >
> >
> > In the native applications, the crude way I used to employ was to take a 
> > time window of say 1 second and I already know how many cpu cycles it is 
> > worth, this acted as the denominator. Then in every packet poll, I used to 
> > count the cycles spent when the packet was actually brought in and 
> > processed. So empty polls were not counted in the used cycles.
> >
> > This acted like the numerator.  When the window was exhausted, I used the 
> > numerator/denominator to get an approximation of effective CPU utilization 
> > of the worker.
> >
> >
> >
> > Is something like this already possible in VPP, I need to constantly 
> > monitor the effective CPU utilization of the worker at runtime to take some 
> > decisions and inform the system peers eg. to reduce sending traffic etc.
> >
> >
> >
> > Regards
> >
> > -Prashant


Re: [vpp-dev] Approximate effective CPU utilization of VPP worker

2019-06-10 Thread Prashant Upadhyaya
Hi Dave,

Thanks for your suggestion.

As an alternate strategy, I went ahead and made a change in the main
loop of VPP 18.01 (a similar change should apply in much the same way to
subsequent releases), and it seems to be working for me, but there might
be catches, so I am describing what I did below.

In vlib_main_or_worker_loop, on a per-thread basis, I accumulated the
time spent in the following areas:
. vl_api_send_pending_rpc_requests (if there were indeed some pending)
. dispatch_node (if the node's callback returned n > 0)
. vlib_frame_queue_dequeue (again, if the return value was > 0)
The accumulated time acted as the numerator against a sampling window of
1 second, the denominator. At the end of each window I divide, store the
result, and clear all accumulations to begin afresh.
The calculation is done only for the worker threads, as I don't care
about the main thread, whose CPU usage is reported by Linux anyway.
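
In code, the per-worker bookkeeping described above amounts to something like
this (a sketch only, not the upstream perf-counter hooks; the struct and
function names are made up):

  #include <vppinfra/time.h>

  typedef struct
  {
    u64 busy_cycles;        /* cycles spent in dispatches that did work */
    u64 window_start;       /* cycle counter at the start of the window */
    u64 cycles_per_window;  /* CPU cycles in one 1-second sampling window */
    f64 utilization;        /* last computed ratio, 0.0 .. 1.0 */
  } worker_util_t;

  /* call with t0 = clib_cpu_time_now () taken just before the dispatch,
     and n_pkts = what the dispatch returned */
  static inline void
  worker_util_account (worker_util_t *w, u64 t0, u32 n_pkts)
  {
    u64 now = clib_cpu_time_now ();

    if (n_pkts > 0)
      w->busy_cycles += now - t0;

    if (now - w->window_start >= w->cycles_per_window)
      {
        w->utilization = (f64) w->busy_cycles / (f64) w->cycles_per_window;
        w->busy_cycles = 0;
        w->window_start = now;
      }
  }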

Empirically it seems to work OK for me; I checked by making my own
plugins quite busy and so forth.

Do you think the above is a reasonable custom strategy, or is there a
chance of destroying the performance of the main loop (and thus the
whole system) due to the frequent calls to vlib_time_now? I would
appreciate your opinion.

Regards
-Prashant





On Fri, Jun 7, 2019 at 5:22 PM Dave Barach (dbarach)  wrote:
>
> The instantaneous vector size - in the stats segment, per-thread - is the 
> best measure of how hard vpp is working.
>
>
>
> It's not a linear function of the offered load, but for any given offered 
> load / feature set you can model it with three segments:
>
>
>
> Dead asleep: vector size < 3, offered load up to maybe 1 mpps’ worth of 
> simple forwarding
> Linear region: vector size increases smoothly as offered load increases, up to 
> a vector size of maybe 128 or so
> Hit the wall: a slight increase in offered load causes a huge jump in the 
> vector size, traffic loss, etc.
>
>
>
> HTH... Dave
>
>
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Friday, June 7, 2019 5:41 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Approximate effective CPU utilization of VPP worker
>
>
>
> Hi,
>
>
>
> I understand that VPP workers poll so they are always at 100 % CPU.
>
> However, I want to get an estimate of what is the effective CPU utilization 
> i.e. the amount which was spent in doing the work for packet processing.
>
>
>
> In the native applications, the crude way I used to employ was to take a time 
> window of say 1 second and I already know how many cpu cycles it is worth, 
> this acted as the denominator. Then in every packet poll, I used to count the 
> cycles spent when the packet was actually brought in and processed. So empty 
> polls were not counted in the used cycles.
>
> This acted like the numerator.  When the window was exhausted, I used the 
> numerator/denominator to get an approximation of effective CPU utilization of 
> the worker.
>
>
>
> Is something like this already possible in VPP, I need to constantly monitor 
> the effective CPU utilization of the worker at runtime to take some decisions 
> and inform the system peers eg. to reduce sending traffic etc.
>
>
>
> Regards
>
> -Prashant


[vpp-dev] Approximate effective CPU utilization of VPP worker

2019-06-07 Thread Prashant Upadhyaya
Hi,

I understand that VPP workers poll, so they are always at 100% CPU.
However, I want to get an estimate of the effective CPU utilization,
i.e. the share actually spent doing packet-processing work.

In my native applications, the crude approach I used was to take a time
window of, say, 1 second, for which I already know how many CPU cycles
it is worth; this acted as the denominator. Then, in every packet poll,
I counted the cycles spent only when packets were actually brought in
and processed, so empty polls were not counted; this acted as the
numerator. When the window was exhausted, I used numerator/denominator
as an approximation of the worker's effective CPU utilization.

Is something like this already possible in VPP? I need to constantly
monitor the effective CPU utilization of the workers at runtime, to take
decisions and inform system peers, e.g. to reduce the traffic they send.

Regards
-Prashant


Re: [vpp-dev] Regarding vlib_time_now

2019-05-28 Thread Prashant Upadhyaya
Thanks Dave !
I was at an older version of VPP, but I do see these changes in v19.04
and they seem to be easy to retrofit.
Thanks for the education.

Regards
-Prashant


On Tue, May 28, 2019 at 5:28 PM Dave Barach (dbarach)  wrote:
>
> Vlib_time_now(...) works reasonably hard to return the same time on all 
> worker threads:
>
>   /*
>* Note when we let go of the barrier.
>* Workers can use this to derive a reasonably accurate
>* time offset. See vlib_time_now(...)
>*/
>   vm->time_last_barrier_release = vlib_time_now (vm);
>
>
> always_inline f64
> vlib_time_now (vlib_main_t * vm)
> {
>   return clib_time_now (&vm->clib_time) + vm->time_offset;
> }
>
>   /*
>* Recompute the offset from thread-0 time.
>* Note that vlib_time_now adds vm->time_offset, so
>* clear it first. Save the resulting idea of "now", to
>* see how well we're doing. See show_clock_command_fn(...)
>*/
>   {
> f64 now;
> vm->time_offset = 0.0;
> now = vlib_time_now (vm);
> vm->time_offset = vlib_global_main.time_last_barrier_release - now;
> vm->time_last_barrier_release = vlib_time_now (vm);
>   }
>
> See also the "show clock" command.
>
> There should be no need to re-solve this problem yourself.
>
> Thanks...
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Tuesday, May 28, 2019 7:48 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Regarding vlib_time_now
>
> Hi,
>
> I suppose vlib_time_now can return different times on different workers.
> If I want to have some global notion of time between workers (and possibly 
> main thread), what would be the best way, please suggest.
>
> I thought of passing the vm of main thread for the above usecase, from 
> workers, while calling vlib_time_now but that seems like a spectacularly bad 
> idea as the code inside  modifies some variables related to time inside the 
> vm (and therefore looks like one must pass the correct vm from the correct 
> thread)
>
> Is it ok to use a combination of os_cpu_clock_frequency() and 
> clib_cpu_time_now ?
>
> Regards
> -Prashant


[vpp-dev] Regarding vlib_time_now

2019-05-28 Thread Prashant Upadhyaya
Hi,

I suppose vlib_time_now can return different times on different workers.
If I want to have some global notion of time between workers (and
possibly main thread), what would be the best way, please suggest.

I thought of passing the vm of main thread for the above usecase, from
workers, while calling vlib_time_now but that seems like a
spectacularly bad idea as the code inside  modifies some variables
related to time inside the vm (and therefore looks like one must pass
the correct vm from the correct thread)

Is it ok to use a combination of os_cpu_clock_frequency() and
clib_cpu_time_now ?

Regards
-Prashant


[vpp-dev] Regarding interface-output arc and node

2019-03-06 Thread Prashant Upadhyaya
Hi,

I see that there is a node called 'interface-output'
And there is a feature arc called 'interface-output'

My understanding is that if I send a packet to the node
interface-output then that will further send the packet to the device
specific node to accomplish the actual output.

If I make a new node and make it sit on the arc interface-output, then
will my new node get packets if someone tries to send the packets to
the node interface-output ?
If yes, can my node then do a normal inspection of next node with
vnet_feature_next and send the packets down the line effectively
reaching the node interface-output to complete the pipeline.

The objective of my new node sitting on the arc interface-output is to
snoop on all the outgoing packets without breaking the output
pipeline.

Regards
-Prashant


Re: [vpp-dev] Regarding node on a feature arc

2019-03-04 Thread Prashant Upadhyaya
Hi Neale,

In one of the usecases I deviated from the normal boilerplate and
started getting frames to target nodes with vlib_get_frame_to_node and
doing put's to those, the advantage was that I could iterate through
the entire input vector and queue up elements on frames of relevant
nodes and then finally do a put of those frames when I returned from
the node. It could be debated whether this should be done at all, but
then the strategy was dependent on the index of vlib_node_t of the
target node. Hence the question for the usecase when I don't have the
next node index in terms of the vlib_node_t's index of the target
node.

Regards
-Prashant




On Mon, Mar 4, 2019 at 9:31 PM Neale Ranns (nranns)  wrote:
>
>
> I'll bite __ why would you want to do that?
>
> /neale
>
> -Message d'origine-----
> De :  au nom de Prashant Upadhyaya 
> 
> Date : lundi 4 mars 2019 à 16:06
> À : "Dave Barach (dbarach)" 
> Cc : "vpp-dev@lists.fd.io" 
> Objet : Re: [vpp-dev] Regarding node on a feature arc
>
> Thanks Dave, this is cool !
>
> Regards
> -Prashant
>
>
> On Mon, Mar 4, 2019 at 8:29 PM Dave Barach (dbarach)  
> wrote:
> >
> > You have: vlib_node_runtime_t *node. Use n = vlib_get_node(vm, 
> node->node_index) to recover the vlib_node_t.
> >
> > Index n->next_nodes to recover the node index corresponding to the next 
> index you have in mind: nNext = vlib_get_node (vm, n->next_nodes[i])
>     >
> > HTH... Dave
> >
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> > Sent: Monday, March 4, 2019 9:29 AM
> > To: vpp-dev@lists.fd.io
> > Subject: [vpp-dev] Regarding node on a feature arc
> >
> > Hi,
> >
> > When a node is on a feature arc, a good practice for that node is to 
> inspect the next feature with vnet_feature_next, obtain the next0 from that 
> and send the packets to the next0. All this works properly.
> >
> > My question -- how can I obtain the true vlib_node_t pointer 
> corresponding to the next0 as obtained from vnet_feature_next ?
> >
> > Regards
> > -Prashant
>
>


Re: [vpp-dev] Regarding node on a feature arc

2019-03-04 Thread Prashant Upadhyaya
Thanks Dave, this is cool !

Regards
-Prashant


On Mon, Mar 4, 2019 at 8:29 PM Dave Barach (dbarach)  wrote:
>
> You have: vlib_node_runtime_t *node. Use n = vlib_get_node(vm, 
> node->node_index) to recover the vlib_node_t.
>
> Index n->next_nodes to recover the node index corresponding to the next index 
> you have in mind: nNext = vlib_get_node (vm, n->next_nodes[i])
>
> HTH... Dave
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Monday, March 4, 2019 9:29 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Regarding node on a feature arc
>
> Hi,
>
> When a node is on a feature arc, a good practice for that node is to inspect 
> the next feature with vnet_feature_next, obtain the next0 from that and send 
> the packets to the next0. All this works properly.
>
> My question -- how can I obtain the true vlib_node_t pointer corresponding to 
> the next0 as obtained from vnet_feature_next ?
>
> Regards
> -Prashant


[vpp-dev] Regarding node on a feature arc

2019-03-04 Thread Prashant Upadhyaya
Hi,

When a node is on a feature arc, a good practice for that node is to
inspect the next feature with vnet_feature_next, obtain the next0 from
that and send the packets to the next0. All this works properly.

My question -- how can I obtain the true vlib_node_t pointer
corresponding to the next0 as obtained from vnet_feature_next ?

Regards
-Prashant


[vpp-dev] Regarding vec_validate

2019-02-11 Thread Prashant Upadhyaya
Hi,

I am facing a strange issue in vec_validate.

I use an incrementing id and keep doing a vec_validate based on that
id for a certain single dimensional array of structures. Each
structure in the array itself is 216 bytes long. Around the time when
the id reaches 1.3 million, the vec_validate seems to be corrupting
another data structure for which a clib_mem_alloc was done separately
and only once.

I circumvented the problem by knocking off the vector for which I was
doing a vec_validate of, by using an array for which I allocated
memory once at the beginning using clib_mem_alloc (fortunately I know
at startup what will be the max value ever of my incrementing id) and
the usecase works well for me and everybody is happy.

But now I don't have a root cause to explain except to give some dodgy story.
So my question -- is it at all possible that vec_validate can lead to
such issues or I have just been able to throw something under the
carpet which will stink one day in my usecase ? I am at VPP 18.01

Regards
-Prashant


[vpp-dev] Regarding clib_mem_alloc

2019-02-08 Thread Prashant Upadhyaya
Hi,

Assuming the heap is sufficiently dimensioned, can any size be
requested from clib_mem_alloc which fits in u32 ? Are there any issues
if higher sizes are requested eg. can clib_mem_alloc actually return a
pointer but the allocated size is not as much as the requested size ?

Perhaps it is bizarre to ask the above, but one of my usecases works
nicely when I request a size of around 216 MB, but runs into
application level corruption when I request double of this i.e. around
432 MB. So thought of asking. It could by all means be and likely is
an application level problem. This is with VPP 18.01

Regards
-Prashant


[vpp-dev] Fetching all GTP-u packets

2018-12-24 Thread Prashant Upadhyaya
Hi,

I want to write a node which gets all the GTP-u packets which enter
into VPP via an interface.

I understand from
https://docs.fd.io/vpp/17.07/clicmd_src_plugins_gtpu.html
that I have to run the following command for the interface --
set interface ip gtpu-bypass 

In my code, kindly confirm, if the following is the right thing to do for v4 --
VNET_FEATURE_INIT (my_gtp_lookup4, static) = {
  .arc_name = "ip4-unicast",
  .node_name = "my-flow-gtp-lookup",
  .runs_before = VNET_FEATURES ("ip4-gtpu-bypass"),
};

Where  "my-flow-gtp-lookup" is the name of my node which has been
registered to obtain the GTP packets.

If I have to do something else, would appreciate the guidance.

Regards
-Prashant


[vpp-dev] Regarding own pthreads

2018-11-28 Thread Prashant Upadhyaya
Hi,

If I spawn some of my own pthreads from the main thread, then can I
use the clib functions inside my own pthread, eg. clib_mem_alloc/free
safely ?
In general, are there any guidelines to be followed regarding own
pthreads, would appreciate any input on this front.

Regards
-Prashant


Re: [vpp-dev] Difference between PRE_INPUT and INPUT nodes

2018-11-22 Thread Prashant Upadhyaya
Thanks Dave !
I got it working with your suggestions :)
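
For the record, the change boiled down to roughly the following sketch
(the node name is illustrative), run once after the graph is built:

  vlib_node_t *n = vlib_get_node_by_name (vm, (u8 *) "my-poll-node");
  if (n)
    vlib_node_set_state (vlib_mains[0] /* main thread */, n->index,
                         VLIB_NODE_STATE_DISABLED);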

Regards
-Prashant

On Thu, Nov 22, 2018 at 7:31 PM Dave Barach (dbarach)  wrote:
>
> Try setting the node state to VLIB_NODE_STATE_DISABLED on the main thread:
>
> vlib_node_set_state (vlib_mains[0] or &vlib_global_main, ...):
>
>
> /** \brief Set node dispatch state.
>  @param vm vlib_main_t pointer, varies by thread
>  @param node_index index of the node
>  @param new_state new state for node, see vlib_node_state_t
> */
> void vlib_node_set_state (vlib_main_t * vm, u32 node_index, vlib_node_state_t 
> new_state)
>
> -Original Message-
> From: Prashant Upadhyaya 
> Sent: Thursday, November 22, 2018 8:53 AM
> To: Dave Barach (dbarach) 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Difference between PRE_INPUT and INPUT nodes
>
> Thanks Dave.
> I used the VLIB_NODE_TYPE_INPUT, but I now observe that, in presence of 
> worker threads, my main thread is going to 100 % cpu utilization as well.
> Without my new node, the main thread does not go to 100 % cpu utilization.
> I want the polling to happen only in worker threads (or the main thread if 
> there are no workers) Can I do something at the runtime to achieve that ?
>
> Regards
> -Prashant
>
> On Thu, Nov 22, 2018 at 7:02 PM Dave Barach (dbarach)  
> wrote:
> >
> > Use VLIB_NODE_TYPE_INPUT. Pre-input nodes - of which there is one - exist 
> > to make sure that a certain epoll(...) call happens at the top of the loop.
> >
> > D.
> >
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Prashant
> > Upadhyaya
> > Sent: Thursday, November 22, 2018 7:41 AM
> > To: vpp-dev@lists.fd.io
> > Subject: [vpp-dev] Difference between PRE_INPUT and INPUT nodes
> >
> > Hi,
> >
> > What is the difference between --
> > .type = VLIB_NODE_TYPE_PRE_INPUT
> > and
> > .type = VLIB_NODE_TYPE_INPUT
> >
> > when the --
> > .state = VLIB_NODE_STATE_POLLING
> >
> > Typically when should the PRE_INPUT be used and when the INPUT, would 
> > appreciate any advice on this. My usecase needs to do a high speed polling.
> >
> > Regards
> > -Prashant


Re: [vpp-dev] Difference between PRE_INPUT and INPUT nodes

2018-11-22 Thread Prashant Upadhyaya
Thanks Dave.
I used the VLIB_NODE_TYPE_INPUT, but I now observe that, in presence
of worker threads, my main thread is going to 100 % cpu utilization as
well.
Without my new node, the main thread does not go to 100 % cpu utilization.
I want the polling to happen only in worker threads (or the main
thread if there are no workers)
Can I do something at the runtime to achieve that ?

Regards
-Prashant

On Thu, Nov 22, 2018 at 7:02 PM Dave Barach (dbarach)  wrote:
>
> Use VLIB_NODE_TYPE_INPUT. Pre-input nodes - of which there is one - exist to 
> make sure that a certain epoll(...) call happens at the top of the loop.
>
> D.
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Thursday, November 22, 2018 7:41 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Difference between PRE_INPUT and INPUT nodes
>
> Hi,
>
> What is the difference between --
> .type = VLIB_NODE_TYPE_PRE_INPUT
> and
> .type = VLIB_NODE_TYPE_INPUT
>
> when the --
> .state = VLIB_NODE_STATE_POLLING
>
> Typically when should the PRE_INPUT be used and when the INPUT, would 
> appreciate any advice on this. My usecase needs to do a high speed polling.
>
> Regards
> -Prashant


[vpp-dev] Difference between PRE_INPUT and INPUT nodes

2018-11-22 Thread Prashant Upadhyaya
Hi,

What is the difference between --
.type = VLIB_NODE_TYPE_PRE_INPUT
and
.type = VLIB_NODE_TYPE_INPUT

when the --
.state = VLIB_NODE_STATE_POLLING

Typically when should the PRE_INPUT be used and when the INPUT, would
appreciate any advice on this. My usecase needs to do a high speed
polling.

Regards
-Prashant


[vpp-dev] Regarding vlib_time_now

2018-11-20 Thread Prashant Upadhyaya
Hi,

Can the vlib_time_now return way-off different values in different
workers threads/main thread even if it is called around the same time
?

I have the following situation --

I initialize a per worker data structure in the main thread with a
call to vlib_time_now (by passing the vm of the corresponding worker
threads). Then the workers start calling the vlib_time_now eventually.
What I find at the start is that vlib_time_now return value as
observed in the call in worker's context is way ahead of the
vlib_time_now which was seeded in the initialization for that worker
by the main thread. This does not happen on all setups. So wanted to
find out if this is possible at all. At the moment I cured the problem
by doing the initial seeding in the worker itself for the very first
time so that the relative times seen by worker itself in calls to
vlib_time_now are good.

I am on VPP 18.01

Regards
-Prashant


[vpp-dev] Regarding IP Fragmentation

2018-11-14 Thread Prashant Upadhyaya
Hi,

I have a usecase where I have two IP Datagrams, let's call them IP1,
IP2. Each contains UDP Payload.
IP1 size is bigger than mtu.
IP2 size is lesser than mtu.

I ship these both one after the other in that order to ip4-lookup.

IP1 gets fragmented, as expected, to IP1.1 and IP1.2 and the fragments
are shipped out.
IP2 does not get fragmented and gets shipped out.

However I see that the final destination UDP receiver gets UDP payload
of IP2 packet first, then it gets the UDP payload of IP1 packet.
It seems that VPP sent the IP2 packet out first and then the
fragments of IP1.

Is this an expected behaviour ? I do intuitively understand that
fragmentation must be involving further graph nodes to traverse and
thus the fragments are sent out later for IP1.


Regards
-Prashant


Re: [vpp-dev] Regarding vlib_worker_thread_barrier_sync_int

2018-11-13 Thread Prashant Upadhyaya
Sorry about this, this was a brain-freeze case.
I was made to realize that it is the wait_at_barrier field initialized
to 1 and not workers_at_barrier
Once again sorry for spamming on this one. Everything's fine here.

Regards
-Prashant

On Tue, Nov 13, 2018 at 5:50 PM Prashant Upadhyaya
 wrote:
>
> Hi,
>
> I am on VPP 18.01.
>
> I see the following while loop in the
> vlib_worker_thread_barrier_sync_int function
> (src/vlib/threads.c)
>  while (*vlib_worker_threads->workers_at_barrier != count)
> {
>   if ((now = vlib_time_now (vm)) > deadline)
> {
>   fformat (stderr, "%s: worker thread deadlock\n", __FUNCTION__);
>   os_panic ();
> }
> }
> Suppose I have 4 workers (and 1 main thread), so the value of count will be 4.
> Each worker bumps up the workers_at_barrier by 1.
> So if the while loop in its sampling misses seeing the value bumping
> by the third worker, and the 4th worker bumps up the value, then
> workers_at_barrier will become 5 and the while will never break
> eventually missing the deadline.
>
> I am seeing this happening occasionally on some of my setups.
>
> Do you think it is safe to change the while like this instead  ?
>
> while (*vlib_worker_threads->workers_at_barrier <= count)
>
> Regards
> -Prashant


Re: [vpp-dev] Regarding communication from main thread to worker threads

2018-11-11 Thread Prashant Upadhyaya
Thanks Dave !
I could figure out the vm->worker_thread_main_loop_callback with your hint.
It seems to be a newer feature, but looks to be quite easily back
portable to older releases as well -- introduce the field and call it
from the main loop.

So I could set up a piece of shared memory holding the request and the
reply. The main thread fills in the request and points the callback of
vm[i] to the desired function; the worker runs that function, writes the
reply and nulls out the callback; the main thread then polls a field in
the reply to see that it has arrived, and voila !
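
In code form, the handshake is roughly the sketch below (illustrative
names, a single outstanding request, and the assumption that the
callback is invoked once per worker main-loop iteration):

typedef struct {
  volatile u32 request_pending;
  volatile u32 reply_ready;
  u32 arg;
  u32 result;
} my_rpc_t;

static my_rpc_t my_rpc;

static void
my_worker_cb (vlib_main_t * wm)          /* runs on the worker thread */
{
  if (my_rpc.request_pending)
    {
      my_rpc.result = my_rpc.arg + 1;    /* stand-in for the real work */
      my_rpc.request_pending = 0;
      my_rpc.reply_ready = 1;
      wm->worker_thread_main_loop_callback = 0;   /* one-shot */
    }
}

/* main thread side */
static void
my_call_worker (u32 worker_index, u32 arg)
{
  vlib_main_t *wm = vlib_mains[worker_index];
  my_rpc.arg = arg;
  my_rpc.reply_ready = 0;
  my_rpc.request_pending = 1;
  wm->worker_thread_main_loop_callback = (void *) my_worker_cb;
  while (!my_rpc.reply_ready)
    ;                                    /* poll for the reply */
}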

Many thanks for adding this feature !

Regards
-Prashant


On Sun, Nov 11, 2018 at 6:46 PM Dave Barach (dbarach)  wrote:
>
> Check out src/plugins/perfmon/perfmon_periodic.c for one take on that 
> problem...
>
> Dave
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Sunday, November 11, 2018 2:37 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Regarding communication from main thread to worker threads
>
> Hi,
>
> The function, vl_api_rpc_call_main_thread, is useful to do message transfer 
> from worker thread to the main thread.
>
> Is there any mechanism where I can call a function in worker thread from the 
> main thread ? That is, just the reverse of the above.
>
> I can use interrupts (vlib_node_set_interrupt_pending) but I can't send any 
> data with the interrupt I suppose.
>
> So basically I want to find out if there is any infra available where I can 
> send some data to my worker thread from the main thread instead of creating a 
> frame of data and enqueueing it the normal way to the worker towards a node.
>
> Regards
> -Prashant


[vpp-dev] Regarding communication from main thread to worker threads

2018-11-10 Thread Prashant Upadhyaya
Hi,

The function, vl_api_rpc_call_main_thread, is useful to do message
transfer from worker thread to the main thread.

Is there any mechanism where I can call a function in worker thread
from the main thread ? That is, just the reverse of the above.

I can use interrupts (vlib_node_set_interrupt_pending) but I can't
send any data with the interrupt I suppose.

So basically I want to find out if there is any infra available where
I can send some data to my worker thread from the main thread instead
of creating a frame of data and enqueueing it the normal way to the
worker towards a node.

Regards
-Prashant


Re: [vpp-dev] Regarding IP Fragmentation

2018-11-08 Thread Prashant Upadhyaya
Hi Ole,

Thanks for the information.

Suppose my IP datagram gets fragmented into fragments F.1 and F.2, do
these again get submitted to ip4-input ?
I am trying to figure out the path in the graph from the time I submit
my v4 IP datagram to ip4-lookup.
What is crucial for me to know is if the fragments do land up at
ip4-input or not.

Regards
-Prashant


On Thu, Nov 8, 2018 at 10:11 PM Ole Troan  wrote:
>
>
>
> > On 8 Nov 2018, at 23:26, Prashant Upadhyaya  wrote:
> >
> > Hi,
> >
> > If I hand-construct an IP datagram (bigger than mtu) using chained
> > vlib buffers and send this chain to ip4-lookup, would that be
> > fragmented and sent out as per the mtu requirements of the interface
> > from which the packet is determined to be sent out ? (assume that the
> > rx and tx sw_if_index is properly set)
> >
> > If not, which node do I send this datagram to, to achieve effectively
> > the above ?
>
> Yes, if IPv4 and DF==0.
> And you better remember that IP fragments are almost never a good idea.
>
> Cheers
> Ole
>
>


[vpp-dev] Regarding IP Fragmentation

2018-11-08 Thread Prashant Upadhyaya
Hi,

If I hand-construct an IP datagram (bigger than mtu) using chained
vlib buffers and send this chain to ip4-lookup, would that be
fragmented and sent out as per the mtu requirements of the interface
from which the packet is determined to be sent out ? (assume that the
rx and tx sw_if_index is properly set)

If not, which node do I send this datagram to, to achieve effectively
the above ?

Regards
-Prashant


[vpp-dev] Regarding vlib_buffer_t metadata

2018-11-03 Thread Prashant Upadhyaya
Hi,

I am writing a plugin node which will get packets the usual way.
I will pipeline these packets between my worker threads (in the same
plugin node) using the handoff mechanism available.

Thus, when my plugin node gets a packet, it could have been from an
internal handoff from one of my workers or it could be from anywhere
else (eg. ip4-input).
Question is, how can I safely distinguish what kind of a packet it is
i.e. is it coming because of handoff  or from an external node not due
to a handoff ?

It is clear that I can setup one of the opaque fields of vlib_buffer_t
to some non zero custom values when I do internal handoff and then
inspect at packet reception at my node. I will also set these back to
zero when I am sending the packet to any other node. But are they
guaranteed to be zero when the packet originates from the dpdk plugin
and then via traversal eventually lands up at my node for the first
time ?
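
For concreteness, the tagging idea looks roughly like the sketch below
(the field choice -- the tail of opaque2's unused[] area -- and the
magic value are mine, not a VPP convention):

#define MY_HANDOFF_MAGIC 0x5A5A5A5A

/* before handing the packet off to another worker */
vnet_buffer2 (b0)->unused[4] = MY_HANDOFF_MAGIC;

/* on reception in my node */
if (vnet_buffer2 (b0)->unused[4] == MY_HANDOFF_MAGIC)
  {
    /* came via my internal handoff */
    vnet_buffer2 (b0)->unused[4] = 0;   /* clear before sending it on */
  }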

Regards
-Prashant


Re: [vpp-dev] Regarding vlib_buffer_alloc

2018-11-02 Thread Prashant Upadhyaya
Thanks Dave, the link is very useful !

On Sat, Nov 3, 2018 at 2:20 AM Dave Barach (dbarach)  wrote:
>
> See also 
> https://fdio-vpp.readthedocs.io/en/latest/gettingstarted/developers/vnet.html#creating-packets-from-scratch
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Friday, November 2, 2018 12:54 PM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Regarding vlib_buffer_alloc
>
> Hi,
>
> When I allocate a buffer using the vlib_buffer_alloc, are the fields in the 
> vlib_buffer_t guaranteed to be properly initialized (eg. with all zero 
> values) or is there any obligation on the caller to initialize these because 
> the values may be unpredictable ?
>
> Regards
> -Prashant


[vpp-dev] Regarding vlib_buffer_alloc

2018-11-02 Thread Prashant Upadhyaya
Hi,

When I allocate a buffer using the vlib_buffer_alloc, are the fields
in the vlib_buffer_t guaranteed to be properly initialized (eg. with
all zero values) or is there any obligation on the caller to
initialize these because the values may be unpredictable ?

Regards
-Prashant


[vpp-dev] Regarding buffer chains with vlib_buffer_t

2018-10-31 Thread Prashant Upadhyaya
Hi,

I have two buffer chains whose starting vlib_buffer_t's are --
vlib_buffer_t* chainHead1;  (let's call this chain1)
vlib_buffer_t* chainHead2; (let's call this chain2)
The chain1, chain2 may have one or more buffers each.

Is there any convenience function which connects the last buffer of
first chain1 to the first buffer of chain2, so that the entire bigger
chain can be accessed via chainHead1 as the starting point.

So I need something like this --
void vlib_buffer_cat(vlib_buffer_t* chain1, vlib_buffer_t* chain2)

I suppose I will have to chase the last buffer of chain1 and then
connect it to the first of chain2 and then modify the chain1 first
buffer contents suitably for the length, flags etc. not to forget the
possible modifications in the first buffer of chain2.
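
In other words, something like the sketch below (not an existing VPP
API; it assumes both heads are valid chains):

static void
my_vlib_buffer_cat (vlib_main_t * vm, vlib_buffer_t * chain1,
                    vlib_buffer_t * chain2)
{
  vlib_buffer_t *last = chain1;
  int chain1_was_single = !(chain1->flags & VLIB_BUFFER_NEXT_PRESENT);
  u32 chain2_len = chain2->current_length;

  if (chain2->flags & VLIB_BUFFER_NEXT_PRESENT)
    chain2_len += chain2->total_length_not_including_first_buffer;

  /* walk to the last buffer of chain1 and link chain2 behind it */
  while (last->flags & VLIB_BUFFER_NEXT_PRESENT)
    last = vlib_get_buffer (vm, last->next_buffer);
  last->next_buffer = vlib_get_buffer_index (vm, chain2);
  last->flags |= VLIB_BUFFER_NEXT_PRESENT;

  /* the head of the combined chain accounts for everything after itself */
  if (chain1_was_single)
    chain1->total_length_not_including_first_buffer = 0;
  chain1->total_length_not_including_first_buffer += chain2_len;
  chain1->flags |= VLIB_BUFFER_TOTAL_LENGTH_VALID;
}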

If someone has this already, that will save me some rookie mistakes
and hours of debugging when it goofs up my packet processing at my
business logic level :)


Regards
-Prashant


Re: [vpp-dev] Snooping non-IP packets

2018-10-30 Thread Prashant Upadhyaya
Thanks Damjan !
Just to be sure, when I use the device-input feature arc thus --

VNET_FEATURE_INIT (myfeature, static) =
{
  .arc_name = "device-input",
  .node_name = "mynode",
  .runs_before = VNET_FEATURES ("ethernet-input"),
};

I get to see only the non-IP packets inside mynode, is that correct ?
I do get your point about sub-interfacing and that is fine.

Do the IPv4 and IPv6 packets go directly from dpdk-input to ip4-input
and ip6-input, just to be sure ?

I also see a wiki link
https://wiki.fd.io/view/VPP/Feature_Arcs

It is mentioned there how to setup the next0 in mynode. I suppose
doing a call to vnet_feature_next is a shortcut. Perhaps the wiki can
mention that I think.
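
With that, the inner loop of mynode reduces to roughly the sketch below
(standard node boilerplate and error handling omitted; on some older
releases vnet_feature_next also takes the sw_if_index as its first
argument):

  u32 next0;
  vlib_buffer_t *b0 = vlib_get_buffer (vm, bi0);
  /* ... snoop on the packet here, without modifying it ... */
  vnet_feature_next (&next0, b0);   /* next feature on the device-input arc */
  vlib_validate_buffer_enqueue_x1 (vm, node, next_index, to_next,
                                   n_left_to_next, bi0, next0);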

Regards
-Prashant



On Mon, Oct 29, 2018 at 11:31 PM Damjan Marion  wrote:
>
>
> Yes, you can just use device-input feature arc instead.
>
> Please note that packets on that arc are not (optionally) assigned to 
> sub-interfaces yet.
> So if you have vlan subinterface, packets will show up as main interface 
> packets and with
> original l2 header.
>
> --
> Damjan
>
> On 29 Oct 2018, at 18:22, Prashant Upadhyaya  wrote:
>
> Hi,
>
> I am using DPDK Plugin
> I want to write a node which will get to observe all the  non-IP packets.
> By observe, I mean that I want my node to see the non-IP packets, but
> when I return from my node, the packets should traverse the original
> graph of VPP which they would have followed had my node not been
> there.
>
> I am wondering what's the best way to achieve the above.
> I have done VNET_FEATURE_INIT before using which eg. I got to observe
> all the IPv4 packets by making my feature node sit on the ip4-unicast
> arc and running before ip4-lookup.
>
> Can I do something similar for my requirement of observing all the
> non-IP packets in a similar way or some other way, any advice would be
> helpful.
>
> Regards
> -Prashant


[vpp-dev] Snooping non-IP packets

2018-10-29 Thread Prashant Upadhyaya
Hi,

I am using DPDK Plugin
I want to write a node which will get to observe all the  non-IP packets.
By observe, I mean that I want my node to see the non-IP packets, but
when I return from my node, the packets should traverse the original
graph of VPP which they would have followed had my node not been
there.

I am wondering what's the best way to achieve the above.
I have done VNET_FEATURE_INIT before using which eg. I got to observe
all the IPv4 packets by making my feature node sit on the ip4-unicast
arc and running before ip4-lookup.

Can I do something similar for my requirement of observing all the
non-IP packets in a similar way or some other way, any advice would be
helpful.

Regards
-Prashant


[vpp-dev] Regarding clib_file_add API

2018-10-29 Thread Prashant Upadhyaya
Hi,

I understand that when I  do something like the following --
clib_file_t template = {0};
template.read_function = my_read_cb;
template.file_descriptor = my_fd;
template.private_data = my_data;
my_handle = clib_file_add (&file_main, &template);

Then my_read_cb will be called whenever the my_fd has data available to be read.
Question is, in which thread's context is the my_read_cb called ? Is
it the main thread itself or some other thread ?

Regards
-Prashant


Re: [vpp-dev] Regarding vlib_buffer_t construction for tx of a self-made packet

2018-10-29 Thread Prashant Upadhyaya
Thanks a bunch Dave !
The vlib_buffer_add_data(...) does the trick for me ! All I really had
to do was set up the following fields in the vlib_buffer_t for the
buffer index it created:
 vnet_buffer (b)->sw_if_index[VLIB_RX]
 vnet_buffer (b)->sw_if_index[VLIB_TX]

What a nice function !
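
Putting it together, the relevant part of my code is roughly the sketch
below (rx_sw_if_index/tx_sw_if_index are assumed known, and bi is the
buffer index returned by vlib_buffer_add_data, whose exact signature
varies a little between releases):

  vlib_buffer_t *b = vlib_get_buffer (vm, bi);
  vnet_buffer (b)->sw_if_index[VLIB_RX] = rx_sw_if_index;
  vnet_buffer (b)->sw_if_index[VLIB_TX] = tx_sw_if_index;

  /* hand the packet to the interface-output node */
  u32 oi = vlib_get_node_by_name (vm, (u8 *) "interface-output")->index;
  vlib_frame_t *f = vlib_get_frame_to_node (vm, oi);
  u32 *to_next = vlib_frame_vector_args (f);
  to_next[0] = bi;
  f->n_vectors = 1;
  vlib_put_frame_to_node (vm, oi, f);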



On Mon, Oct 29, 2018 at 4:50 AM Dave Barach (dbarach)  wrote:
>
> Look at .../src/vnet/ipfix-export/flow_report.c : send_template_packet(...) 
> for a decent example of most of the mechanics involved. Also look at 
> .../src/vlib/buffer.c : vlib_buffer_add_data(...).
>
> If you're going to send lots of such packets, it may well improve performance 
> to allocate a fair number of buffers at a time, and maintain a private buffer 
> cache. See ../src/vnet/tcp/tcp_output.c : tcp_alloc_tx_buffers(...).
>
> HTH... Dave
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Sunday, October 28, 2018 3:13 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Regarding vlib_buffer_t construction for tx of a self-made 
> packet
>
> Hi,
>
> I have a situation where one of my plugins needs to construct an l2 packet 
> (containing an ip datagram inside).
> So I have a local buffer which contains the following bytes laid out -- dst 
> mac src mac ethertype ip header + payload
>
> Assume for the sake of generality that the above buffer is 4K bytes (because 
> I have an ip payload of that much size) Assume further that I magically 
> already know the sw if indices of the RX and TX.
> It is also ensured that the l2 payload size is less than or equal to the MTU 
> of the TX interface
>
> Now once I have all of the above information, I need to create a 
> vlib_buffer_t (possibly a chained set) So I need some guidelines/best 
> practices of how to go about it flawlessly so that once the vlib_buffer_t is 
> constructed, I can simply send it to the "interface-output" and accomplish 
> the act of transmission.
>
> The high level set of things that I can think of is --
>
> 1. Allocate a vlib_buffer_t (or a chained flavour of those) 2. Start 
> appending data to those buffers 3. What about the various control fields to 
> be set in the above buffers ?
> . I suppose I will have to do the following --
> vnet_buffer (b0)->sw_if_index[VLIB_RX] = <rx sw_if_index>;
> vnet_buffer (b0)->sw_if_index[VLIB_TX] = <tx sw_if_index>;
> Do I need to do anything with b0->buffer_pool_index ?
> What to do with b0->current_data ?
> What to do with b0->flags ?
> What to do with b->current_length ?
> And if it is a chained set, what is to be done on individual vlib_buffer_t of 
> the chain and what on the first one of the chain ?
>
> I suppose I need a convenience function like the following
>
> vlib_buffer_t* foo(char* myl2packet, int lengthOfMyL2Packet, u32 
> mySwIfRxIndex, u32 mySwIfTxIndex)
>
> The above should return to me a nice little vlib_buffer_t (possibly
> chained) which I can then ship to interface output.
> I am willing to write such a function by all means but want to find out if 
> anything like this (or closer) already exists.
> If it doesn't exist, what idioms can I follow from the functions which 
> already exist and what I have to be careful about.
>
> Regards
> -Prashant


[vpp-dev] Regarding vlib_buffer_t construction for tx of a self-made packet

2018-10-28 Thread Prashant Upadhyaya
Hi,

I have a situation where one of my plugins needs to construct an l2
packet (containing an ip datagram inside).
So I have a local buffer which contains the following bytes laid out --
dst mac
src mac
ethertype
ip header + payload

Assume for the sake of generality that the above buffer is 4K bytes
(because I have an ip payload of that much size)
Assume further that I magically already know the sw if indices of the RX and TX.
It is also ensured that the l2 payload size is less than or equal to
the MTU of the TX interface

Now once I have all of the above information, I need to create a
vlib_buffer_t (possibly a chained set)
So I need some guidelines/best practices of how to go about it
flawlessly so that once the vlib_buffer_t is constructed, I can simply
send it to the "interface-output" and accomplish the act of
transmission.

The high level set of things that I can think of is --

1. Allocate a vlib_buffer_t (or a chained flavour of those)
2. Start appending data to those buffers
3. What about the various control fields to be set in the above buffers ?
. I suppose I will have to do the following --
  vnet_buffer (b0)->sw_if_index[VLIB_RX] = <rx sw_if_index>;
  vnet_buffer (b0)->sw_if_index[VLIB_TX] = <tx sw_if_index>;
Do I need to do anything with b0->buffer_pool_index ?
What to do with b0->current_data ?
What to do with b0->flags ?
What to do with b->current_length ?
And if it is a chained set, what is to be done on individual
vlib_buffer_t of the chain and what on the first one of the chain ?

I suppose I need a convenience function like the following

vlib_buffer_t* foo(char* myl2packet, int lengthOfMyL2Packet, u32
mySwIfRxIndex, u32 mySwIfTxIndex)

The above should return to me a nice little vlib_buffer_t (possibly
chained) which I can then ship to interface output.
I am willing to write such a function by all means but want to find
out if anything like this (or closer) already exists.
If it doesn't exist, what idioms can I follow from the functions which
already exist and what I have to be careful about.

Regards
-Prashant


Re: [vpp-dev] Regarding VLIB_REGISTER_NODE

2018-10-25 Thread Prashant Upadhyaya
Thanks Dave, this is exactly what I needed for my usecase for the
next0 in MyNode as you guessed it !
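
For reference, the runtime wiring ends up looking roughly like the
sketch below (executed only when my private configuration says the
optional plugin is present; names as in this thread):

static u32 my_optional_next_index = ~0;

static void
my_wire_optional_node (vlib_main_t * vm)
{
  vlib_node_t *mine = vlib_get_node_by_name (vm, (u8 *) "MyNode");
  vlib_node_t *opt =
    vlib_get_node_by_name (vm, (u8 *) "may-or-may-not-be-loaded-node");
  if (mine && opt)
    my_optional_next_index = vlib_node_add_next (vm, mine->index, opt->index);
}

/* later, in MyNode's dispatch function: next0 = my_optional_next_index; */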

Regards
-Prashant


On Wed, Oct 24, 2018 at 4:28 PM Dave Barach (dbarach)  wrote:
>
> Use vlib_node_add_next(...) to create the graph arc at your convenience. 
> Memorize the arc index when you create it, so you can set e.g. next0 to the 
> correct value in MyNode.
>
> HTH... Dave
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Wednesday, October 24, 2018 2:15 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Regarding VLIB_REGISTER_NODE
>
> Hi,
>
> I have a registered node like the following --
>
> VLIB_REGISTER_NODE (MyNode) = {
> .name = "MyNode",
> .
> .
> .n_next_nodes =N,
> .next_nodes = {
> [firstone] = "error-drop",
> [secondone] = "ip4-lookup",
> [thirdone] = "ip6-lookup",
> [fourthone] = "may-or-may-not-be-loaded-node",
> .
> .
>
> Now I want to be able to run the system whether the fourth one above is 
> loaded or not as a .so The business logic in my code takes care that I never 
> use the fourth one in case it is not loaded. I have a private configuration 
> which tells me whether the fourth one is present in deployment or not.
>
> But the VLIB_REGISTER_NODE requires the wiring at compile time like the above.
> And the system will not startup if the fourth one is not actually present in 
> deployment
>
> So I want to avoid mentioning the fourth one in the VLIB_REGISTER_NODE of 
> MyNode as one of the next_nodes.
> And then how to add it as one of the next_nodes from runtime for MyNode when 
> my private configuration tells me that the fourthone indeed is existing in 
> the deployment ? I am looking for some API to wink in the fourth one at 
> runtime into .next_nodes of MyNode.
>
> Regards
> -Prashant


[vpp-dev] Regarding VLIB_REGISTER_NODE

2018-10-24 Thread Prashant Upadhyaya
Hi,

I have a registered node like the following --

VLIB_REGISTER_NODE (MyNode) = {
.name = "MyNode",
.
.
.n_next_nodes =N,
.next_nodes = {
[firstone] = "error-drop",
[secondone] = "ip4-lookup",
[thirdone] = "ip6-lookup",
[fourthone] = "may-or-may-not-be-loaded-node",
.
.

Now I want to be able to run the system whether the fourth one above
is loaded or not as a .so
The business logic in my code takes care that I never use the fourth
one in case it is not loaded. I have a private configuration which
tells me whether the fourth one is present in deployment or not.

But the VLIB_REGISTER_NODE requires the wiring at compile time like the above.
And the system will not startup if the fourth one is not actually
present in deployment

So I want to avoid mentioning the fourth one in the VLIB_REGISTER_NODE
of MyNode as one of the next_nodes.
And then how to add it as one of the next_nodes from runtime for
MyNode when my private configuration tells me that the fourthone
indeed is existing in the deployment ? I am looking for some API to
wink in the fourth one at runtime into .next_nodes of MyNode.

Regards
-Prashant


[vpp-dev] Regarding clib_bihash_40_8_t

2018-10-09 Thread Prashant Upadhyaya
I am using clib_bihash_40_8_t  (VPP version 18.01)
The number of buckets is 65536.

I am going to add 1 million records into the bihash. My application
takes care that at any point of time there will never be more than 1
million records going into the bihash.

Given the above, what should be the memory size which I should pass
into the init function of bihash to guarantee that there will be no panic
memory allocation failures during the record additions into the bihash
(assuming that init succeeds).
Please make no assumptions about hash collisions.
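
For reference, this is the init call in question (a sketch; the table
name is illustrative and the 512 MB figure is only a placeholder for the
number I am unsure about):

  clib_bihash_40_8_t my_table;
  clib_bihash_init_40_8 (&my_table, "my-table",
                         65536 /* nbuckets */,
                         512ULL << 20 /* memory_size in bytes, to be sized */);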

Regards
-Prashant


Re: [vpp-dev] Regarding pre_data field in vlib_buffer_t

2018-09-21 Thread Prashant Upadhyaya
...and I must quickly add that I do recognize that the field is lying
inside a union where the next bigger member of the union is 16 bytes
long.
So I will be careful to skip the first 16 bytes of the u32 unused[10];
and claim the rest of the real estate as my own.

Please do let me know if I am about to shoot at my own foot.
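
In code form, the plan is roughly the sketch below (assuming the
18.01-era layout discussed here: u32 unused[10] inside opaque2, with the
first 16 bytes shared with a larger union member):

typedef struct { u32 flow_id; u32 state; } my_meta_t;

/* keep clear of the first 16 bytes of the union */
#define my_meta(b) ((my_meta_t *) ((u8 *) vnet_buffer2 (b)->unused + 16))

STATIC_ASSERT (sizeof (my_meta_t) <= 10 * sizeof (u32) - 16,
               "my_meta_t does not fit in the unused tail of opaque2");
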
On Fri, Sep 21, 2018 at 9:58 PM Prashant Upadhyaya
 wrote:
>
> Thanks Dave, Damjan for the shove in the right direction !
> I believe that the safest field to use is vnet_buffer_opaque2_t-> u32 
> unused[10]
> So far as I confine myself to the above, I should be able to use it
> the way I want without affecting anything else.
> If the above is a naive understanding, please do educate me further on this.
>
> Regards
> -Prashant
>
> On Fri, Sep 21, 2018 at 5:32 PM Dave Barach (dbarach)  
> wrote:
> >
> > +1. Note the vnet_buffer(b) and vnet_buffer2(b) macros. Track down the 
> > definitions in .../src/vlib/buffer.h and .../src/vnet/buffer.h and you’ll 
> > understand how these opaque buffer metadata spaces are to be used.
> >
> >
> >
> > Depending on where your graph nodes land, you must be careful not to smash 
> > metadata which will be used downstream.
> >
> >
> >
> > Due to load-store unit pressure, (especially) vnet_buffer(b) fields are 
> > overlaid and have multiple uses. Any new use-case needs careful [manual] 
> > life-cycle analysis.
> >
> >
> >
> > Example: if you set vnet_buffer(b)->sw_if_index[VLIB_TX] to a random number 
> > and ship that buffer to ip4/6-lookup, you’ll get a nasty surprise.
> >
> >
> >
> > D.
> >
> >
> >
> > From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion 
> > via Lists.Fd.Io
> > Sent: Friday, September 21, 2018 6:36 AM
> > To: Prashant Upadhyaya 
> > Cc: vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] Regarding pre_data field in vlib_buffer_t
> >
> >
> >
> >
> >
> > pre_data is not intended for that. It is headroom space which may be used
> >
> > for outer headers in case of different tunnel encapsulation.
> >
> >
> >
> > You can use opaque area in vlib_buffer_t
> >
> >
> >
> > --
> > Damjan
> >
> >
> >
> > On 21 Sep 2018, at 07:59, Prashant Upadhyaya  wrote:
> >
> >
> >
> > Hi,
> >
> > I want to pass around some custom data between my plugins using 
> > vlib_buffer_t.
> > I believe the pre_data field of 128 bytes can be used for this. (I use
> > the DPDK plugin)
> >
> > However, are there some best practices to use this field ? I ask this
> > because I do see that some debug capabilities of VPP use this field
> > and if I start using this field then it might interfere with those
> > debug capabilties.
> >
> > Regards
> > -Prashant


[vpp-dev] Regarding pre_data field in vlib_buffer_t

2018-09-21 Thread Prashant Upadhyaya
Hi,

I want to pass around some custom data between my plugins using vlib_buffer_t.
I believe the pre_data field of 128 bytes can be used for this. (I use
the DPDK plugin)

However, are there some best practices to use this field ? I ask this
because I do see that some debug capabilities of VPP use this field
and if I start using this field then it might interfere with those
debug capabilties.

Regards
-Prashant


[vpp-dev] Regarding symmetric RSS hash with DPDK plugin

2018-08-16 Thread Prashant Upadhyaya
Hi,

I need to use symmetric RSS hash in VPP while using DPDK plugin, so
that both sides of my TCP flows land on the same worker.

Requesting to please advise what would be the correct way of achieving
this in VPP via the hardware (say Intel82599 NIC) -- I believe it is
possible to configure an RSS key which does that. So the question
really is, if this configuration is supported by VPP or the code needs
to be changed somewhere to achieve this.

Regards
-Prashant


Re: [vpp-dev] Regarding CLI command parsing

2018-08-14 Thread Prashant Upadhyaya
Thanks Dave !
Your trick worked for me exactly as I needed.
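
For anyone finding this in the archive later, the working handler ended
up looking roughly like the sketch below (the command body and output
line are illustrative):

static clib_error_t *
mycmd_fn (vlib_main_t * vm, unformat_input_t * input,
          vlib_cli_command_t * cmd)
{
  u8 *line = 0;
  if (unformat (input, "%U", unformat_line, &line))
    {
      vec_add1 (line, 0);              /* null-terminate for %s style use */
      vlib_cli_output (vm, "mycmd got: %s", line);
      vec_free (line);
    }
  return 0;
}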

Regards
-Prashant


On Mon, Aug 13, 2018 at 11:39 PM, Dave Barach (dbarach)
 wrote:
> Try this in mycmd:
>
> u8 * line;
> if (unformat (input, "%U", unformat_line, &line))
>    process_line;
>
> Note that line will be a true u8 * vector: no null-termination. If you need 
> null termination: vec_add1 (line, 0);
>
> Remember to vec_free(...) it unless you're planning to keep it.
>
> HTH... Dave
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Monday, August 13, 2018 1:53 PM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Regarding CLI command parsing
>
> Hi,
>
> I am relatively new to this and trying to learn the art of using 
> format/unformat.
> My requirement is that if my command is "mycmd" followed by some string 
> (which may have spaces in it), then I should be able to read the entire 
> string up to the \n character typed by the user.
>
> Eg. if the command is typed like
>
> mycmd foo1 foo2 foo3
>
> (the "path" of the VLIB_CLI_COMMAND  is "mycmd") Then in my CLI callback 
> function, I want to be able to get the string
> "foo1 foo2 foo3" setup in a u8* variable. If the leading and trailing spaces 
> are trimmed, I am fine by that. I am looking for the correct way to call the 
> unformat functions in my command callback function to achieve this.
>
> Regards
> -Prashant
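
Putting Dave's trick above into a complete handler: the command path "mycmd" and the output are placeholders from this thread, the rest is the usual VLIB_CLI_COMMAND boilerplate and real vppinfra/vlib calls. A minimal sketch:

  #include <vlib/vlib.h>

  static clib_error_t *
  mycmd_fn (vlib_main_t * vm, unformat_input_t * input,
            vlib_cli_command_t * cmd)
  {
    u8 *line = 0;

    /* unformat_line collects everything up to the end of the line
     * into a u8 * vector (no null termination). */
    if (!unformat (input, "%U", unformat_line, &line))
      return clib_error_return (0, "expected some text after 'mycmd'");

    vec_add1 (line, 0);       /* null-terminate if treating it as a C string */
    vlib_cli_output (vm, "got: %s", line);
    vec_free (line);
    return 0;
  }

  VLIB_CLI_COMMAND (mycmd_command, static) = {
    .path = "mycmd",
    .short_help = "mycmd <free-form text>",
    .function = mycmd_fn,
  };
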
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10139): https://lists.fd.io/g/vpp-dev/message/10139
Mute This Topic: https://lists.fd.io/mt/24505388/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Regarding CLI command parsing

2018-08-13 Thread Prashant Upadhyaya
Hi,

I am relatively new to this and trying to learn the art of using
format/unformat.
My requirement is that if my command is "mycmd" followed by some
string (which may have spaces in it), then I should be able to read
the entire string up to the \n character typed by the user.

Eg. if the command is typed like

mycmd foo1 foo2 foo3

(the "path" of the VLIB_CLI_COMMAND  is "mycmd")
Then in my CLI callback function, I want to be able to get the string
"foo1 foo2 foo3" setup in a u8* variable. If the leading and trailing
spaces are trimmed, I am fine by that. I am looking for the correct
way to call the unformat functions in my command callback function to
achieve this.

Regards
-Prashant
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10130): https://lists.fd.io/g/vpp-dev/message/10130
Mute This Topic: https://lists.fd.io/mt/24505388/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Regarding VPP TCP Stack usage

2018-07-30 Thread Prashant Upadhyaya
Hi,

I have compiled VPP and it's running. I have an interface up and can
ping the IP applied there.

Now I am trying to bring up a legacy application TCP server (the one
which uses POSIX calls). So I set the LD_PRELOAD to point to
.../vpp/build-root/install-vpp_debug-native/vpp/lib64/libvcl_ldpreload.so
But the server application now crashes on startup.
Even the ldd command starts crashing.

Can somebody point me to the correct set of steps to be used for
LD_PRELOAD to bring up my legacy tcp server which will then engage the
VPP TCP stack instead of the kernel's

Regards
-Prashant
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9971): https://lists.fd.io/g/vpp-dev/message/9971
Mute This Topic: https://lists.fd.io/mt/23858819/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Regarding /var/run/.rte_config

2018-07-28 Thread Prashant Upadhyaya
Hi again Damjan,

I am currently using VPP release 18.01 which just has the tx capture
functionality.
I see that VPP release 18.07 has introduced the rx capture functionality.

At the moment, upgrading my release to 18.07 is not an option.
Can you please advise how easy or difficult it will be to backport the
rx capture functionality and add it as a patch to 18.01.

Regards
-Prashant

On Thu, Jul 26, 2018 at 4:56 PM, Prashant Upadhyaya
 wrote:
> Thanks Damjan,
>
> I had a temporary brain-freeze, I do see there is a max parameter as well.
>
> Regards
> -Prashant
>
>
> On Thu, Jul 26, 2018 at 4:46 PM, Prashant Upadhyaya
>  wrote:
>> Hi Damjan,
>>
>> Thanks for your CLI pointers.
>> It seems there is a limit of 100 packets which would work against my
>> requirements.
>> If you have any suggestions there before I look into the code to
>> tinker with the 100, please do let me know.
>>
>> Regards
>> -Prashant
>>
>>
>> On Thu, Jul 26, 2018 at 4:24 PM, Damjan Marion  wrote:
>>> Dear Prashant,
>>>
>>> Try "pcap rx trace" and "pcap tx trace" command.
>>>
>>> https://git.fd.io/vpp/tree/src/plugins/dpdk/device/cli.c#n373
>>>
>>> Also please note that VPP packet generator (pg) supports both pcap play and
>>> capture...
>>>
>>> --
>>> Damjan
>>>
>>> On 26 Jul 2018, at 12:42, Prashant Upadhyaya  wrote:
>>>
>>> Hi Damjan,
>>>
>>> I was hoping to use the dpdk-pdump utility (built as part of DPDK) to
>>> generate the pcap files for traffic passing through the various ports.
>>> I have used that successfully with pure vanilla DPDK primary processes
>>> (non VPP usecases). It runs as a secondary process.
>>>
>>> Well, my requirement is to generate pcaps for ports on both tx and rx
>>> -- if you have an alternate VPP-core solution, that would be great.
>>>
>>> Regards
>>> -Prashant
>>>
>>>
>>> On Wed, Jul 25, 2018 at 11:32 PM, Damjan Marion  wrote:
>>>
>>>
>>> Dear Prashant,
>>>
>>> We had a lot of operational issues with dpdk leftover files in the past, so
>>> we took special care to make them disappear ASAP.
>>>
>>> Even if they are present you will not be able to use secondary process,
>>> we simply don't support that use case. Secondary process will not be
>>> able to access buffer memory as buffer memory is allocated by VPP and not by
>>> DPDK.
>>>
>>> Please note that VPP is not an application built on top of DPDK; for us DPDK
>>> is just a source of device drivers with a bit of weight around them. Typical
>>> VPP feature code can work even without DPDK being present.
>>>
>>> If you explain your use case we might be able to advise you how to approach
>>> your problem
>>> in a way native to VPP,  without relying on DPDK legacy features.
>>>
>>> Regards,
>>>
>>> --
>>> Damjan
>>>
>>> On 25 Jul 2018, at 13:11, Prashant Upadhyaya  wrote:
>>>
>>> Hi,
>>>
>>> I am running VPP with DPDK plugin.
>>> I see that when I normally run other non VPP DPDK applications, they
>>> create a file called /var/run/.rte_config
>>> But VPP running as a DPDK application (and calling rte_eal_init) does
>>> not create this file.
>>>
>>> There are consequences of the above in certain usecases where
>>> secondary processes try to look at this file.
>>>
>>> Can somebody advise why this file is not created when VPP is run with
>>> DPDK plugin. Or what needs to be done so that it does get created, or
>>> perhaps it is getting created somewhere that I am not aware of. I am
>>> running VPP as root.
>>>
>>> Regards
>>> -Prashant
>>> -=-=-=-=-=-=-=-=-=-=-=-
>>> Links: You receive all messages sent to this group.
>>>
>>> View/Reply Online (#9929): https://lists.fd.io/g/vpp-dev/message/9929
>>> Mute This Topic: https://lists.fd.io/mt/23811660/675642
>>> Group Owner: vpp-dev+ow...@lists.fd.io
>>> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [dmar...@me.com]
>>> -=-=-=-=-=-=-=-=-=-=-=-
>>>
>>>
>>>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9957): https://lists.fd.io/g/vpp-dev/message/9957
Mute This Topic: https://lists.fd.io/mt/23811660/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Regarding /var/run/.rte_config

2018-07-26 Thread Prashant Upadhyaya
Thanks Damjan,

I had a temporary brain-freeze, I do see there is a max parameter as well.

Regards
-Prashant
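
The max parameter referred to here is part of the pcap trace CLI in the dpdk plugin cli.c linked above, used along the lines of "pcap tx trace on max 10000 intfc any file tx.pcap". The exact option spelling may differ between releases, so check the cli.c of the version in use.
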


On Thu, Jul 26, 2018 at 4:46 PM, Prashant Upadhyaya
 wrote:
> Hi Damjan,
>
> Thanks for your CLI pointers.
> It seems there is a limit of 100 packets which would work against my
> requirements.
> If you have any suggestions there before I look into the code to
> tinker with the 100, please do let me know.
>
> Regards
> -Prashant
>
>
> On Thu, Jul 26, 2018 at 4:24 PM, Damjan Marion  wrote:
>> Dear Prashant,
>>
>> Try "pcap rx trace" and "pcap tx trace" command.
>>
>> https://git.fd.io/vpp/tree/src/plugins/dpdk/device/cli.c#n373
>>
>> Also please note that VPP packet generator (pg) supports both pcap play and
>> capture...
>>
>> --
>> Damjan
>>
>> On 26 Jul 2018, at 12:42, Prashant Upadhyaya  wrote:
>>
>> Hi Damjan,
>>
>> I was hoping to use the dpdk-pdump utility (built as part of DPDK) to
>> generate the pcap files for traffic passing through the various ports.
>> I have used that successfully with pure vanilla DPDK primary processes
>> (non VPP usecases). It runs as a secondary process.
>>
>> Well, my requirement is to generate pcaps for ports on both tx and rx
>> -- if you have an alternate VPP-core solution, that would be great.
>>
>> Regards
>> -Prashant
>>
>>
>> On Wed, Jul 25, 2018 at 11:32 PM, Damjan Marion  wrote:
>>
>>
>> Dear Prashant,
>>
>> We had a lot of operational issues with dpdk leftover files in the past, so
>> we took special care to make them disappear ASAP.
>>
>> Even if they are present you will not be able to use secondary process,
>> we simply don't support that use case. Secondary process will not be
>> able to access buffer memory as buffer memory is allocated by VPP and not by
>> DPDK.
>>
>> Please note that VPP is not an application built on top of DPDK; for us DPDK
>> is just a source of device drivers with a bit of weight around them. Typical
>> VPP feature code can work even without DPDK being present.
>>
>> If you explain your use case we might be able to advise you how to approach
>> your problem
>> in a way native to VPP,  without relying on DPDK legacy features.
>>
>> Regards,
>>
>> --
>> Damjan
>>
>> On 25 Jul 2018, at 13:11, Prashant Upadhyaya  wrote:
>>
>> Hi,
>>
>> I am running VPP with DPDK plugin.
>> I see that when I normally run other non VPP DPDK applications, they
>> create a file called /var/run/.rte_config
>> But VPP running as a DPDK application (and calling rte_eal_init) does
>> not create this file.
>>
>> There are consequences of the above in certain usecases where
>> secondary processes try to look at this file.
>>
>> Can somebody advise why this file is not created when VPP is run with
>> DPDK plugin. Or what needs to be done so that it does get created, or
>> perhaps it is getting created somewhere that I am not aware of. I am
>> running VPP as root.
>>
>> Regards
>> -Prashant
>> -=-=-=-=-=-=-=-=-=-=-=-
>> Links: You receive all messages sent to this group.
>>
>> View/Reply Online (#9929): https://lists.fd.io/g/vpp-dev/message/9929
>> Mute This Topic: https://lists.fd.io/mt/23811660/675642
>> Group Owner: vpp-dev+ow...@lists.fd.io
>> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [dmar...@me.com]
>> -=-=-=-=-=-=-=-=-=-=-=-
>>
>>
>>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9940): https://lists.fd.io/g/vpp-dev/message/9940
Mute This Topic: https://lists.fd.io/mt/23811660/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Regarding /var/run/.rte_config

2018-07-26 Thread Prashant Upadhyaya
Hi Damjan,

Thanks for your CLI pointers.
It seems there is a limit of 100 packets which would work against my
requirements.
If you have any suggestions there before I look into the code to
tinker with the 100, please do let me know.

Regards
-Prashant


On Thu, Jul 26, 2018 at 4:24 PM, Damjan Marion  wrote:
> Dear Prashant,
>
> Try "pcap rx trace" and "pcap tx trace" command.
>
> https://git.fd.io/vpp/tree/src/plugins/dpdk/device/cli.c#n373
>
> Also please note that VPP packet generator (pg) supports both pcap play and
> capture...
>
> --
> Damjan
>
> On 26 Jul 2018, at 12:42, Prashant Upadhyaya  wrote:
>
> Hi Damjan,
>
> I was hoping to use the dpdk-pdump utility (built as part of DPDK) to
> generate the pcap files for traffic passing through the various ports.
> I have used that successfully with pure vanilla DPDK primary processes
> (non VPP usecases). It runs as a secondary process.
>
> Well, my requirement is to generate pcaps for ports on both tx and rx
> -- if you have an alternate VPP-core solution, that would be great.
>
> Regards
> -Prashant
>
>
> On Wed, Jul 25, 2018 at 11:32 PM, Damjan Marion  wrote:
>
>
> Dear Prashant,
>
> We had a lot of operational issues with dpdk leftover files in the past, so
> we took special care to make them disappear ASAP.
>
> Even if they are present you will not be able to use secondary process,
> we simply don't support that use case. Secondary process will not be
> able to access buffer memory as buffer memory is allocated by VPP and not by
> DPDK.
>
> Please note that VPP is not an application built on top of DPDK; for us DPDK
> is just a source of device drivers with a bit of weight around them. Typical
> VPP feature code can work even without DPDK being present.
>
> If you explain your use case we might be able to advise you how to approach
> your problem
> in a way native to VPP,  without relying on DPDK legacy features.
>
> Regards,
>
> --
> Damjan
>
> On 25 Jul 2018, at 13:11, Prashant Upadhyaya  wrote:
>
> Hi,
>
> I am running VPP with DPDK plugin.
> I see that when I normally run other non VPP DPDK applications, they
> create a file called /var/run/.rte_config
> But VPP running as a DPDK application (and calling rte_eal_init) does
> not create this file.
>
> There are consequences of the above in certain usecases where
> secondary processes try to look at this file.
>
> Can somebody advise why this file is not created when VPP is run with
> DPDK plugin. Or what needs to be done so that it does get created, or
> perhaps it is getting created somewhere that I am not aware of. I am
> running VPP as root.
>
> Regards
> -Prashant
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
>
> View/Reply Online (#9929): https://lists.fd.io/g/vpp-dev/message/9929
> Mute This Topic: https://lists.fd.io/mt/23811660/675642
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [dmar...@me.com]
> -=-=-=-=-=-=-=-=-=-=-=-
>
>
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9938): https://lists.fd.io/g/vpp-dev/message/9938
Mute This Topic: https://lists.fd.io/mt/23811660/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Regarding /var/run/.rte_config

2018-07-26 Thread Prashant Upadhyaya
Hi Damjan,

I was hoping to use the dpdk-pdump utility (built as part of DPDK) to
generate the pcap files for traffic passing through the various ports.
I have used that successfully with pure vanilla DPDK primary processes
(non VPP usecases). It runs as a secondary process.

Well, my requirement is to generate pcaps for ports on both tx and rx
-- if you have an alternate VPP-core solution, that would be great.

Regards
-Prashant


On Wed, Jul 25, 2018 at 11:32 PM, Damjan Marion  wrote:
>
> Dear Prashant,
>
> We had a lot of operational issues with dpdk leftover files in the past, so
> we took special care to make them disappear ASAP.
>
> Even if they are present you will not be able to use secondary process,
> we simply don't support that use case. Secondary process will not be
> able to access buffer memory as buffer memory is allocated by VPP and not by
> DPDK.
>
> Please note that VPP is not an application built on top of DPDK; for us DPDK
> is just a source of device drivers with a bit of weight around them. Typical
> VPP feature code can work even without DPDK being present.
>
> If you explain your use case we might be able to advise you how to approach
> your problem
> in a way native to VPP,  without relying on DPDK legacy features.
>
> Regards,
>
> --
> Damjan
>
> On 25 Jul 2018, at 13:11, Prashant Upadhyaya  wrote:
>
> Hi,
>
> I am running VPP with DPDK plugin.
> I see that when I normally run other non VPP DPDK applications, they
> create a file called /var/run/.rte_config
> But VPP running as a DPDK application (and calling rte_eal_init) does
> not create this file.
>
> There are consequences of the above in certain usecases where
> secondary processes try to look at this file.
>
> Can somebody advise why this file is not created when VPP is run with
> DPDK plugin. Or what needs to be done so that it does get created, or
> perhaps it is getting created somewhere that I am not aware of. I am
> running VPP as root.
>
> Regards
> -Prashant
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
>
> View/Reply Online (#9929): https://lists.fd.io/g/vpp-dev/message/9929
> Mute This Topic: https://lists.fd.io/mt/23811660/675642
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [dmar...@me.com]
> -=-=-=-=-=-=-=-=-=-=-=-
>
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9937): https://lists.fd.io/g/vpp-dev/message/9937
Mute This Topic: https://lists.fd.io/mt/23811660/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Regarding /var/run/.rte_config

2018-07-25 Thread Prashant Upadhyaya
Hi,

I am running VPP with DPDK plugin.
I see that when I normally run other non VPP DPDK applications, they
create a file called /var/run/.rte_config
But VPP running as a DPDK application (and calling rte_eal_init) does
not create this file.

There are consequences of the above in certain usecases where
secondary processes try to look at this file.

Can somebody advise why this file is not created when VPP is run with
DPDK plugin. Or what needs to be done so that it does get created, or
perhaps it is getting created somewhere that I am not aware of. I am
running VPP as root.

Regards
-Prashant
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9929): https://lists.fd.io/g/vpp-dev/message/9929
Mute This Topic: https://lists.fd.io/mt/23811660/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Packet tx functions via DPDK

2018-05-14 Thread Prashant Upadhyaya
Thanks a bunch Nitin, your mail helps me connect the dots -- the thing
I was missing was the connection with ethernet_register_interface().
Cool browsing done by you!
Please do check my other mail too on the list (for frames) and it
would be great if we can drill down on that topic too.

Regards
-Prashant


On Fri, May 11, 2018 at 7:08 PM, Nitin Saxena <nitin.sax...@cavium.com> wrote:
> Hi Prashant,
>
> Hope you are doing fine.
>
> Regarding your question, I am not able to see the macswap plugin in the
> current master branch, but I will try to explain wrt dpdk_plugin:
>
> With respect to low level device each VPP device driver registers for
>
> 1) INPUT_NODE (For Rx) VLIB_REGISTER_NODE (This you already figured out)
> 2) And Tx function via VNET_DEVICE_CLASS (), Device class like "dpdk"
> There are a couple more function pointers registered but let's stick to the
> Rx/Tx part.
>
> As part of startup, the low-level plugin/driver calls
> ethernet_register_interface() which in turn calls vnet_register_interface().
>
> vnet_register_interface:
> For a particular interface like an Intel 40G, an interface node is created
> at init time and the tx function of that node is copied from
> VNET_DEVICE_CLASS {.tx_function = ...}. node->tx and node->output functions
> are properly initialized and the node is registered.
>
> The VPP stack sends packets to this low-level Tx node via sw_if_index. I am
> guessing sw_if_index is determined by IPv4 routing or L2 switching.
>
> I think vnet_set_interface_output_node() is called for those interfaces (Tx
> path) whose DEVICE_CLASS does not provide a tx_function, but I am not sure.
>
> "show vlib graph" will tell you how nodes are arranged in vpp graph.
>
> To be specific for your question
>
>   next0 = hi0->output_node_next_index;
>
> output_node_next_index is the index of the next node to which the current
> vector has to be copied (the transition from one node to another along the
> graph).
>
> Note: All this I got through browsing code and if this information is not
> correct, I request VPP experts to correct it.
>
> Thanks,
> Nitin
>
>
> On Thursday 10 May 2018 02:19 PM, Prashant Upadhyaya wrote:
>>
>> Hi,
>>
>> I am trying to walk through the code to see how the packet arrives
>> into the system at dpdk rx side and finally leaves it at the dpdk tx
>> side. I am using the context of the macswap sample plugin for this.
>>
>> It is clear to me that dpdk-input is a graph node and it is an 'input'
>> type graph node so it polls for the packets using dpdk functions. The
>> frame is then eventually passed to the sample plugin because the
>> sample plugin inserts itself at the right place. The sample plugin
>> queues the packets to the interface-output graph node.
>>
>> So now I check the interface-output graph node function.
>> I locate that in vpp/src/vnet/interface_output.c
>> So the dispatch function for the graph node is
>> vnet_per_buffer_interface_output
>> Here the interface-output node is queueing the packets to a next node
>> based on the following code --
>>
>>   hi0 = vnet_get_sup_hw_interface (vnm,
>>                                    vnet_buffer (b0)->sw_if_index[VLIB_TX]);
>>
>>   next0 = hi0->output_node_next_index;
>>
>> Now I am a little lost, what is this output_node_next_index ? Which
>> graph node function is really called for really emitting the packet ?
>> Where exactly is this setup ?
>>
>> I do see that the actual dpdk tx burst function is called from
>> tx_burst_vector_internal, which itself is called from
>> dpdk_interface_tx (vpp/src/plugins/dpdk/device/device.c). But how the
>> code reaches the dpdk_interface_tx after the packets are queued from
>> interface-output graph node is not clear to me. If somebody could help
>> me connect the dots, that would be great.
>>
>> Regards
>> -Prashant
>>
>> 
>>
>
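
A compressed sketch of the registration Nitin walks through above. The driver name and tx function here are hypothetical; the VNET_DEVICE_CLASS macro and the tx-function signature are the real ones from that era's vnet, and the dpdk plugin's own version lives in plugins/dpdk/device/device.c:

  #include <vnet/vnet.h>

  /* Hypothetical driver. The .tx_function below is what vnet copies into the
   * interface's tx node when ethernet_register_interface() /
   * vnet_register_interface() run; those output/tx nodes are what
   * hi->output_node_next_index ultimately leads the output path to. */
  static uword
  my_tx_fn (vlib_main_t * vm, vlib_node_runtime_t * node, vlib_frame_t * f)
  {
    /* hand the frame's buffer indices to the hardware here */
    return f->n_vectors;
  }

  VNET_DEVICE_CLASS (my_device_class) = {
    .name = "my-driver",
    .tx_function = my_tx_fn,
  };
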

-=-=-=-=-=-=-=-=-=-=-=-
Links:

You receive all messages sent to this group.

View/Reply Online (#9275): https://lists.fd.io/g/vpp-dev/message/9275
View All Messages In Topic (3): https://lists.fd.io/g/vpp-dev/topic/19023164
Mute This Topic: https://lists.fd.io/mt/19023164/21656
New Topic: https://lists.fd.io/g/vpp-dev/post

Change Your Subscription: https://lists.fd.io/g/vpp-dev/editsub/21656
Group Home: https://lists.fd.io/g/vpp-dev
Contact Group Owner: vpp-dev+ow...@lists.fd.io
Terms of Service: https://lists.fd.io/static/tos
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Packet tx functions via DPDK

2018-05-10 Thread Prashant Upadhyaya
Hi,

I am trying to walk through the code to see how the packet arrives
into the system at dpdk rx side and finally leaves it at the dpdk tx
side. I am using the context of the macswap sample plugin for this.

It is clear to me that dpdk-input is a graph node and it is an 'input'
type graph node so it polls for the packets using dpdk functions. The
frame is then eventually passed to the sample plugin because the
sample plugin inserts itself at the right place. The sample plugin
queues the packets to the interface-output graph node.

So now I check the interface-output graph node function.
I locate that in vpp/src/vnet/interface_output.c
So the dispatch function for the graph node is vnet_per_buffer_interface_output
Here the interface-output node is queueing the packets to a next node
based on the following code --

  hi0 = vnet_get_sup_hw_interface (vnm,
                                   vnet_buffer (b0)->sw_if_index[VLIB_TX]);

  next0 = hi0->output_node_next_index;

Now I am a little lost, what is this output_node_next_index ? Which
graph node function is really called for really emitting the packet ?
Where exactly is this setup ?

I do see that the actual dpdk tx burst function is called from
tx_burst_vector_internal, which itself is called from
dpdk_interface_tx (vpp/src/plugins/dpdk/device/device.c). But how the
code reaches the dpdk_interface_tx after the packets are queued from
interface-output graph node is not clear to me. If somebody could help
me connect the dots, that would be great.

Regards
-Prashant

-=-=-=-=-=-=-=-=-=-=-=-
Links:

You receive all messages sent to this group.

View/Reply Online (#9244): https://lists.fd.io/g/vpp-dev/message/9244
View All Messages In Topic (1): https://lists.fd.io/g/vpp-dev/topic/19023164
Mute This Topic: https://lists.fd.io/mt/19023164/21656
New Topic: https://lists.fd.io/g/vpp-dev/post

Change Your Subscription: https://lists.fd.io/g/vpp-dev/editsub/21656
Group Home: https://lists.fd.io/g/vpp-dev
Contact Group Owner: vpp-dev+ow...@lists.fd.io
Terms of Service: https://lists.fd.io/static/tos
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Regarding Frames

2018-05-09 Thread Prashant Upadhyaya
Hi,

I am looking for some white paper or some slides which explain deeply
how the data structures related to the frames are organized in fd.io.

 Specifically, I need to create a mind map of the following data
structures in the code –

vlib_frame_t
vlib_next_frame_t
vlib_pending_frame_t

Then the presence of next_frames and pending_frames in
vlib_node_main_t, and the presence of next_frame_index in
vlib_node_runtime_t.

Some graphical description of how the frames are organized, which one
pointing to what from the above and so forth.

This will help in critically understanding the lifecycle of a frame (I
do understand in general in terms of a plugin, but want to see the
organization slightly more deeply). I believe I can figure all this
out if I haggle enough with the code (and which I would have to do
eventually if I want to get far and have taken a pass at it), but if
someone has done this hard work and created some kind of a friendly
document around this, I would lap it up before making another pass at
the code.

Regards
-Prashant
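
In the meantime, the shortest practical view of a frame's lifecycle is the standard node dispatch skeleton below. It exercises vlib_get_next_frame()/vlib_put_next_frame(), which is where the vlib_next_frame_t and vlib_pending_frame_t bookkeeping listed above gets driven; the node function name is a placeholder and the per-packet work is elided:

  #include <vlib/vlib.h>

  /* Skeleton of a node function. vlib_get_next_frame() hands back a
   * (possibly partly filled) vlib_frame_t for the given next index;
   * vlib_put_next_frame() records how much of it was used and queues it
   * as a pending frame for dispatch to the next node. */
  static uword
  my_node_fn (vlib_main_t * vm, vlib_node_runtime_t * node,
              vlib_frame_t * frame)
  {
    u32 *from = vlib_frame_vector_args (frame);
    u32 n_left_from = frame->n_vectors;
    u32 next_index = node->cached_next_index;

    while (n_left_from > 0)
      {
        u32 *to_next, n_left_to_next;

        vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

        while (n_left_from > 0 && n_left_to_next > 0)
          {
            u32 bi0 = from[0];
            from += 1;
            n_left_from -= 1;
            to_next[0] = bi0;
            to_next += 1;
            n_left_to_next -= 1;
            /* per-packet work on vlib_get_buffer (vm, bi0) goes here */
          }

        vlib_put_next_frame (vm, node, next_index, n_left_to_next);
      }

    return frame->n_vectors;
  }
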

-=-=-=-=-=-=-=-=-=-=-=-
Links:

You receive all messages sent to this group.

View/Reply Online (#9219): https://lists.fd.io/g/vpp-dev/message/9219
View All Messages In Topic (1): https://lists.fd.io/g/vpp-dev/topic/18965602
Mute This Topic: https://lists.fd.io/mt/18965602/21656
New Topic: https://lists.fd.io/g/vpp-dev/post

Change Your Subscription: https://lists.fd.io/g/vpp-dev/editsub/21656
Group Home: https://lists.fd.io/g/vpp-dev
Contact Group Owner: vpp-dev+ow...@lists.fd.io
Terms of Service: https://lists.fd.io/static/tos
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub
-=-=-=-=-=-=-=-=-=-=-=-