Hi,

Comments inline.

Br,
Yusuf

On Mon, Feb 13, 2017 at 9:20 PM, Damjan Marion <dmarion.li...@gmail.com>
wrote:

>
> > On 10 Feb 2017, at 18:03, yusuf khan <yusuf.at...@gmail.com> wrote:
> >
> > Hi,
> >
> > I am testing VPP performance for L3 routing. I am pumping traffic from
> MoonGen, which sends packets at 10 Gbps line rate with an 84-byte packet
> size.
> > If I start VPP with a single worker thread (in addition to the main
> thread), VPP is able to route almost at line rate. Almost, because I see
> some drops at the receiving NIC.
> > The average vectors per node is about 97 in this case.
> >
> > Success case stats from moongen below...
> >
> > Thread 1 vpp_wk_0 (lcore 11)
> > Time 122.6, average vectors/node 96.78, last 128 main loops 12.00 per
> node 256.00
> >   vector rates in 3.2663e6, out 3.2660e6, drop 1.6316e-2, punt 0.0000e0
> > ------------------------ MoonGen output ------------------------
> > [Device: id=5] TX: 11.57 Mpps, 8148 Mbit/s (10000 Mbit/s with framing)
> > [Device: id=6] RX: 11.41 Mpps, 8034 Mbit/s (9860 Mbit/s with framing)
>
> Here it seems that MoonGen is not able to send faster….
>
    [Yusuf] Here MoonGen is sending at 10000 Mbit/s but the receive rate is
somewhat less, maybe due to NIC drops...
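
    A quick back-of-envelope check (assuming the 84-byte size reported by
MoonGen excludes the 4-byte Ethernet CRC): each frame occupies 84 + 4 (CRC)
+ 20 (preamble + inter-frame gap) = 108 bytes on the wire, so 10 Gbit/s
line rate works out to 10e9 / (108 * 8) = ~11.57 Mpps, which matches the TX
figure above. So MoonGen is effectively at line rate here, and the ~0.16
Mpps RX shortfall is the drop.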

>
> >
> >
> > But when I start VPP with 2 worker threads, each polling a separate
> NIC, I see the throughput drop by almost 40%! The other thread is not
> receiving any packets; it is just polling an idle NIC, yet it impacts the
> other thread?
>
> Looks like one worker is polling both interfaces and the other one is
> idle. That’s why you see the drop in performance.
>
> Can you provide output of “show dpdk interface placement” command?
>

    [Yusuf] Each thread is polling an individual interface. Please find the
output below:
    Thread 1 (vpp_wk_0 at lcore 11):
      TenGigabitEthernet5/0/1 queue 0
    Thread 2 (vpp_wk_1 at lcore 24):
      TenGigabitEthernet5/0/0 queue 0

In fact, in the case of a single worker thread it polls both interfaces and
I don't see any performance issue. But as soon as the additional worker
thread is created, the throughput drops.
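
For reference, this placement would come from a startup.conf cpu section
along these lines (a sketch; the worker lcores come from the placement
output above, while the main-core value is an assumption):

    cpu {
      main-core 1
      corelist-workers 11,24
    }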


>
> > Is polling the PCI bus causing contention?
>
> We are not polling the PCI bus….
>
   [Yusuf] OK. What I really meant was: is there any PCI command overhead
due to the polling? But I guess not.
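
   My understanding is that the RX descriptor ring lives in host memory and
the NIC updates it via DMA, so an idle poll is just a memory/cache read. A
hypothetical sketch of the pattern (the struct and names below are
illustrative, not the real ixgbe PMD API):

    /* Hypothetical poll-mode RX check, loosely modeled on how a
     * DPDK-style PMD works; names are made up for illustration. */
    #include <stdint.h>

    struct rx_desc {
        volatile uint32_t status;   /* NIC sets the DD bit via DMA */
    };

    #define RX_DESC_DD 0x1          /* "descriptor done" flag */

    /* Returns nonzero if a packet is ready at ring slot 'idx'. On an
     * idle NIC this is a read of host memory (normally a cache hit);
     * the CPU issues no PCIe transaction just to poll. */
    static int rx_ring_poll(const struct rx_desc *ring, uint32_t idx)
    {
        return ring[idx].status & RX_DESC_DD;
    }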

>
> > What could be the reason? In this case the average vectors per node is
> 256! Some excerpts below…
> > Thread 2 vpp_wk_1 (lcore 24)
> > Time 70.9, average vectors/node 256.00, last 128 main loops 12.00 per
> node 256.00
> >   vector rates in 7.2937e6, out 7.2937e6, drop 0.0000e0, punt 0.0000e0
> > ------------------------ MoonGen output ------------------------
> > [Device: id=5] TX: 11.49 Mpps, 8088 Mbit/s (9927 Mbit/s with framing)
> > [Device: id=6] RX: 7.34 Mpps, 5167 Mbit/s (6342 Mbit/s with framing)
> >
> > One more piece of information: it is a dual-port 82599ES NIC on a PCIe
> 2.0 x8 bus.
> >
>
>
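
   [Yusuf] Two more back-of-envelope notes for context. First, 256 is VPP's
maximum frame size, so an average of 256 vectors/node means that worker's
input node finds a full frame on every poll, i.e. it cannot keep up with
the offered load. Second, PCIe 2.0 x8 gives 8 lanes x 5 GT/s x 8b/10b
encoding = 32 Gbit/s of raw bandwidth per direction, shared by both ports,
while two 10G ports at line rate need 20 Gbit/s of payload per direction
plus per-packet descriptor and TLP header overhead, which is substantial at
84-byte frames.
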
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
