Re: [vpp-dev] How to trigger perf test and compare the results

2018-05-30 Thread Zhiyong Yang
Hi Damjan,

I tried to post the comment “csit-dev”, however nothing happens.
What am I doing wrong?

Thanks
Zhiyong

From: Damjan Marion [mailto:dmarion.li...@gmail.com]
Sent: Wednesday, May 30, 2018 5:52 PM
To: Yang, Zhiyong 
Cc: vpp-dev@lists.fd.io; Kinsella, Ray ; csit-dev 

Subject: Re: How to trigger perf test and compare the results

+csit-dev

You can compare with numbers available in perf dashboard.

https://docs.fd.io/csit/master/trending/introduction/index.html

--
Damjan


On 29 May 2018, at 11:17, Yang, Zhiyong <zhiyong.y...@intel.com> wrote:

Hi Guys,

   I need CSIT perf testing to test a patch.
   I know that vpp-verify-perf-l2 can trigger a perf test, but I
only see the X520 results.

My questions are :

1.  How do I trigger the XL710 perf test?
2.  What do we compare against … are there nightly build results?  Where do I
find those?
3.  These are MRR (maximum receive rate) tests; how do we trigger NDR/PDR?


Thanks
Zhiyong



Re: [vpp-dev] syslog in snat

2018-05-30 Thread Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES@Cisco)
Hi,

As far as I know there is no work going on for syslog in the NAT plugin. If you
want to contribute and have questions, do not hesitate to contact me.
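
In case it helps anyone picking this up, here is a minimal standalone sketch of
what per-session syslog output could look like, using the standard syslog(3)
API. The helper name, its arguments and the message format are illustrative
only, not an existing NAT plugin interface:

#include <syslog.h>

/* Hypothetical helper: emit one line per NAT44 session event.
   All parameters are plain strings purely for illustration. */
static void
nat_session_log (const char *event, const char *proto,
                 const char *in_addr_port, const char *out_addr_port)
{
  syslog (LOG_INFO, "NAT44 session %s: proto %s in %s out %s",
          event, proto, in_addr_port, out_addr_port);
}

int
main (void)
{
  openlog ("vpp-nat", LOG_NDELAY | LOG_PID, LOG_DAEMON);
  nat_session_log ("create", "tcp", "10.0.0.1:1234", "192.0.2.1:5678");
  closelog ();
  return 0;
}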

Matus


From: Matt Paska 
Sent: Thursday, May 31, 2018 3:43 AM
To: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 

Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] syslog in snat

Hi all,

I'm just checking in to see if anyone has made progress on syslog? I've tried to
look at the code and submit a patch myself, to no avail.

Thanks.

On Sun, Apr 8, 2018 at 11:44 PM, Matus Fabian -X (matfabia - PANTHEON
TECHNOLOGIES@Cisco) <matfa...@cisco.com> wrote:
Deterministic NAT is dedicated to CGN, so no logging of sessions is planned.
Syslog is still on the todo list, but a patch contribution is welcome.

Matus

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Hamid via
Lists.Fd.Io
Sent: Monday, April 9, 2018 7:53 AM
To: vpp-dev@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] syslog in snat

Another vote for syslog.

Did anyone make any progress?
In deterministic CGN, logging is not required, but you don't have timestamps to
verify the flows. Is there any hook to get NAT ipfix logging for deterministic
CGNAT as well?

Regards,
Hamid




Re: [vpp-dev] syslog in snat

2018-05-30 Thread Matt Paska
Hi all,

I'm just checking in to see if anyone has made progress on syslog? I've tried
to look at the code and submit a patch myself, to no avail.

Thanks.

On Sun, Apr 8, 2018 at 11:44 PM, Matus Fabian -X (matfabia - PANTHEON
TECHNOLOGIES@Cisco)  wrote:

> Deterministic NAT is dedicated to CGN, so no logging of sessions is planned.
>
> Syslog is still on the todo list, but a patch contribution is welcome.
>
>
>
> Matus
>
>
>
> *From:* vpp-dev@lists.fd.io  *On Behalf Of *Hamid
> via Lists.Fd.Io
> *Sent:* Monday, April 9, 2018 7:53 AM
> *To:* vpp-dev@lists.fd.io
> *Cc:* vpp-dev@lists.fd.io
> *Subject:* Re: [vpp-dev] syslog in snat
>
>
>
> Another vote for syslog.
>
> Did anyone make any progress?
> In deterministic CGN, logging is not required, but you don't have timestamps
> to verify the flows. Is there any hook to get NAT ipfix logging for
> deterministic CGNAT as well?
>
> Regards,
> Hamid
>
> 
>
>


Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-30 Thread Ravi Kerur
Hi Steven,

I am testing both memif and vhost-virtio; unfortunately memif is not
working either. I posted a question to the list, let me know if
something is wrong. Below is the link

https://lists.fd.io/g/vpp-dev/topic/q_on_memif_between_vpp/20371922?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,20371922

Thanks.

On Wed, May 30, 2018 at 4:41 PM, Ravi Kerur  wrote:
> Hi Steve,
>
> Thank you for your inputs. I added feature-mask to see if it helps in
> setting up queues correctly; it didn't, so I will remove it. I have
> tried the following combinations
>
> (1) VPP->vhost-user (on host) and DPDK/testpmd->virtio-user (in a
> container)  -- VPP crashes
> (2) DPDK/testpmd->vhost-user (on host) and DPDK/testpmd->virtio-user
> (in a container) -- works fine
>
> To use DPDK vhost-user inside VPP, I defined configuration in
> startup.conf as mentioned by you and it looks as follows
>
> unix {
>   nodaemon
>   log /var/log/vpp/vpp.log
>   full-coredump
>   cli-listen /run/vpp/cli.sock
>   gid vpp
> }
>
> api-segment {
>   gid vpp
> }
>
> cpu {
> main-core 1
> corelist-workers 6-9
> }
>
> dpdk {
> dev 0000:04:10.4
> dev 0000:04:10.6
> uio-driver vfio-pci
> vdev net_vhost0,iface=/var/run/vpp/sock1.sock
> vdev net_vhost1,iface=/var/run/vpp/sock2.sock
> huge-dir /dev/hugepages_1GB
> socket-mem 2048,2048
> }
>
> From VPP logs
> dpdk: EAL init args: -c 3c2 -n 4 --vdev
> net_vhost0,iface=/var/run/vpp/sock1.sock --vdev
> net_vhost1,iface=/var/run/vpp/sock2.sock --huge-dir /dev/hugepages_1GB
> -w 0000:04:10.4 -w 0000:04:10.6 --master-lcore 1 --socket-mem
> 2048,2048
>
> However, VPP doesn't create interface at all
>
> vpp# show interface
>   Name                           Idx   State   Counter   Count
> VirtualFunctionEthernet4/10/4   1     down
> VirtualFunctionEthernet4/10/6   2     down
> local0                          0     down
>
> Since it is a static mapping, I am assuming it should be created, correct?
>
> Thanks.
>
> On Wed, May 30, 2018 at 3:43 PM, Steven Luong (sluong)  
> wrote:
>> Ravi,
>>
>> First and foremost, get rid of the feature-mask option. I don't know what
>> 0x4040 does for you. If that does not help, try testing with dpdk-based
>> vhost-user instead of VPP native vhost-user, to make sure that they can
>> work well with each other first. To use dpdk vhost-user, add a vdev command
>> in the startup.conf for each vhost-user device that you have.
>>
>> dpdk { vdev net_vhost0,iface=/var/run/vpp/sock1.sock }
>>
>> The dpdk-based vhost-user interfaces are named VhostEthernet0, VhostEthernet1,
>> etc. Make sure you use the right interface name to set the state to up.
>>
>> If dpdk-based vhost-user does not work with testpmd either, it is likely a
>> problem with the way that you invoke testpmd.
>>
>> If dpdk-based vhost-user works well with the same testpmd device driver and
>> not VPP native vhost-user, I can set up something similar to yours to look
>> into it.
>>
>> The device driver, testpmd, is supposed to pass the shared memory region to
>> VPP for the TX/RX queues. It looks like VPP vhost-user might have run into a
>> bump there when using the shared memory (txvq->avail).
>>
>> Steven
>>
>> PS. vhost-user is not an optimal interface for containers. You may want to
>> look into using memif if you don't already know about it.
>>
>>
>> On 5/30/18, 2:06 PM, "Ravi Kerur"  wrote:
>>
>> I am not sure whether something is wrong with the setup or it is a bug in
>> vpp; vpp crashes with vhost<-->virtio communication.
>>
>> (1) Vhost-interfaces are created and attached to bridge-domain as follows
>>
>> create vhost socket /var/run/vpp/sock1.sock server feature-mask 
>> 0x4040
>> create vhost socket /var/run/vpp/sock2.sock server feature-mask 
>> 0x4040
>> set interface state VirtualEthernet0/0/0 up
>> set interface state VirtualEthernet0/0/1 up
>>
>> set interface l2 bridge VirtualEthernet0/0/0 1
>> set interface l2 bridge VirtualEthernet0/0/1 1
>>
>>
>> (2) DPDK/testpmd is started in a container to talk to vpp/vhost-user
>> interface as follows
>>
>> docker run -it --privileged -v
>> /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
>> /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
>> /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
>> 4 --log-level=9 -m 64 --no-pci --single-file-segments
>> --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01
>> --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 --
>> -i
>>
>> (3) show vhost-user VirtualEthernet0/0/1
>> Virtio vhost-user interfaces
>> Global:
>>   coalesce frames 32 time 1e-3
>>   number of rx virtqueues in interrupt mode: 0
>> Interface: VirtualEthernet0/0/1 (ifindex 4)
>> virtio_net_hdr_sz 10
>>  features mask (0x4040):
>>  features (0x0):
>>   protocol features (0x0)
>>
>>  socket filename 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-30 Thread steven luong
Ravi,

First and foremost, get rid of the feature-mask option. I don't know what
0x4040 does for you. If that does not help, try testing with dpdk-based
vhost-user instead of VPP native vhost-user, to make sure that they can work
well with each other first. To use dpdk vhost-user, add a vdev command in the
startup.conf for each vhost-user device that you have.

dpdk { vdev net_vhost0,iface=/var/run/vpp/sock1.sock }

The dpdk-based vhost-user interfaces are named VhostEthernet0, VhostEthernet1,
etc. Make sure you use the right interface name to set the state to up.
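
For example, with the vdev above (interface name assumed from that naming
scheme):

set interface state VhostEthernet0 up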

If dpdk-based vhost-user does not work with testpmd either, it is likely a
problem with the way that you invoke testpmd.

If dpdk-based vhost-user works well with the same testpmd device driver and not
VPP native vhost-user, I can set up something similar to yours to look into it.

The device driver, testpmd, is supposed to pass the shared memory region to VPP
for the TX/RX queues. It looks like VPP vhost-user might have run into a bump
there when using the shared memory (txvq->avail).
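
As an aside, the crash in the backtrace below is a direct dereference of
txvq->avail. Purely to illustrate the kind of defensive check implied here,
a standalone sketch with stub types (not the actual VPP code or fix):

#include <stddef.h>
#include <stdint.h>

/* Stub of the virtio avail ring layout; names follow the virtio spec,
   not VPP's internal definitions. */
typedef struct { uint16_t flags; uint16_t idx; uint16_t ring[]; } vring_avail_t;
typedef struct { vring_avail_t *avail; } vhost_vring_t;

/* Hypothetical guard: treat a vring whose avail ring has not been mmapped
   yet (e.g. mid-negotiation) as not ready, instead of dereferencing a
   NULL or stale pointer. */
static int
vring_ready (const vhost_vring_t *txvq)
{
  return txvq != NULL && txvq->avail != NULL;
}

int
main (void)
{
  vhost_vring_t q = { 0 };
  return vring_ready (&q) ? 0 : 1;
}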

Steven

PS. vhost-user is not an optimal interface for containers. You may want to look
into using memif if you don't already know about it.


On 5/30/18, 2:06 PM, "Ravi Kerur"  wrote:

I am not sure whether something is wrong with the setup or it is a bug in vpp;
vpp crashes with vhost<-->virtio communication.

(1) Vhost-interfaces are created and attached to bridge-domain as follows

create vhost socket /var/run/vpp/sock1.sock server feature-mask 0x4040
create vhost socket /var/run/vpp/sock2.sock server feature-mask 0x4040
set interface state VirtualEthernet0/0/0 up
set interface state VirtualEthernet0/0/1 up

set interface l2 bridge VirtualEthernet0/0/0 1
set interface l2 bridge VirtualEthernet0/0/1 1


(2) DPDK/testpmd is started in a container to talk to vpp/vhost-user
interface as follows

docker run -it --privileged -v
/var/run/vpp/sock1.sock:/var/run/usvhost1 -v
/var/run/vpp/sock2.sock:/var/run/usvhost2 -v
/dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
4 --log-level=9 -m 64 --no-pci --single-file-segments
--vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01
--vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 --
-i

(3) show vhost-user VirtualEthernet0/0/1
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/1 (ifindex 4)
virtio_net_hdr_sz 10
 features mask (0x4040):
 features (0x0):
  protocol features (0x0)

 socket filename /var/run/vpp/sock2.sock type server errno "Success"

 rx placement:
 tx placement: spin-lock
   thread 0 on vring 0
   thread 1 on vring 0
   thread 2 on vring 0
   thread 3 on vring 0
   thread 4 on vring 0

 Memory regions (total 1)
 region  fd  guest_phys_addr  memory_size  userspace_addr  mmap_offset  mmap_addr
 ======  ==  ===============  ===========  ==============  ===========  =========
      0  55  0x7ff7c000       0x4000       0x7ff7c000      0x           0x7ffbc000

vpp# show vhost-user VirtualEthernet0/0/0
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 3)
virtio_net_hdr_sz 10
 features mask (0x4040):
 features (0x0):
  protocol features (0x0)

 socket filename /var/run/vpp/sock1.sock type server errno "Success"

 rx placement:
 tx placement: spin-lock
   thread 0 on vring 0
   thread 1 on vring 0
   thread 2 on vring 0
   thread 3 on vring 0
   thread 4 on vring 0

 Memory regions (total 1)
 region  fd  guest_phys_addr  memory_size  userspace_addr  mmap_offset  mmap_addr
 ======  ==  ===============  ===========  ==============  ===========  =========
      0  51  0x7ff7c000       0x4000       0x7ff7c000      0x           0x7ffc

(4) vpp stack trace
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffd0e090700 (LWP 46570)]
0x77414642 in vhost_user_if_input
(mode=VNET_HW_INTERFACE_RX_MODE_POLLING,
node=0x7fffb76bab00, qid=, vui=0x7fffb6739700,
vum=0x778f4480 , vm=0x7fffb672a9c0)
at 
/var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1596
1596  if (PREDICT_FALSE (txvq->avail->flags & 0xFFFE))
(gdb) bt
#0  0x77414642 in vhost_user_if_input
(mode=VNET_HW_INTERFACE_RX_MODE_POLLING,
node=0x7fffb76bab00, qid=, vui=0x7fffb6739700,
vum=0x778f4480 , vm=0x7fffb672a9c0)

Re: [vpp-dev] Rx stuck to 0 after a while

2018-05-30 Thread Andrew Yourtchenko
Dear Rubina,

okay, I think I am reasonably happy with the change in
https://gerrit.fd.io/r/#/c/12770/ -
I also have rebased it onto the latest master so that it is ready to
commit if it works for you.

Please give it a shot and let me know. Note that you might need to
adjust the bihash memory, as I am now storing the forward and reverse
entries explicitly (rather than calculating them per-packet).

Please let me know how it works in your test setup.

thanks,
andrew

On 5/30/18, Andrew Yourtchenko  wrote:
> Dear Rubina,
>
> Thanks for checking it!
>
> yeah, actually that patch was leaking sessions in the session reuse
> path. I got the setup running in the lab locally yesterday and am working
> on a better way to do it...
>
> Will get back to you when I am happy with the way the code works..
>
> --a
>
>
>
> On 5/29/18, Rubina Bianchi  wrote:
>> Dear Andrew
>>
>> I cleaned everything and created new deb packages with your patch once
>> again. With your patch I never see the deadlock again, but I still have
>> a throughput problem in my scenario.
>>
>> -Per port stats table
>>   ports |   0 |   1
>> -
>>opackets |   474826597 |   452028770
>>  obytes |207843848531 |199591809555
>>ipackets |71010677 |72028456
>>  ibytes | 31441646551 | 31687562468
>> ierrors |   0 |   0
>> oerrors |   0 |   0
>>   Tx Bw |   9.56 Gbps |   9.16 Gbps
>>
>> -Global stats enabled
>>  Cpu Utilization : 88.4  %  7.1 Gb/core
>>  Platform_factor : 1.0
>>  Total-Tx:  18.72 Gbps
>>  Total-Rx:  59.30 Mbps
>>  Total-PPS   :   5.31 Mpps
>>  Total-CPS   :  79.79 Kcps
>>
>>  Expected-PPS:   9.02 Mpps
>>  Expected-CPS: 135.31 Kcps
>>  Expected-BPS:  31.77 Gbps
>>
>>  Active-flows:88837  Clients :  252   Socket-util : 0.5598 %
>>  Open-flows  : 14708455  Servers :65532   Socket :88837
>> Socket/Clients :  352.5
>>  Total_queue_full : 328355248
>>  drop-rate   :  18.66 Gbps
>>  current time: 180.9 sec
>>  test duration   : 99819.1 sec
>>
>> In the best case (4 interfaces on one numa, of which only 2 have acl) my
>> device (HP DL380 G9) throughput is at the maximum (18.72 Gbps), but in the
>> worst case (4 interfaces on one numa, all of which have acl) my device
>> throughput drops from the maximum to around 60 Mbps. Actually the patch
>> just prevents the deadlock in my case, but throughput is the same as before.
>>
>> 
>> From: Andrew  Yourtchenko 
>> Sent: Tuesday, May 29, 2018 10:11 AM
>> To: Rubina Bianchi
>> Cc: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] Rx stuck to 0 after a while
>>
>> Dear Rubina,
>>
>> thank you for quickly checking it!
>>
>> Judging by the logs, VPP quits, so I would say there should be a
>> core file; could you check?
>>
>> If you find it (doublecheck by the timestamps that it is indeed the
>> fresh one), you can load it in gdb (using gdb 'path-to-vpp-binary'
>> 'path-to-core') and then get the backtrace using 'bt'; this will give
>> a better idea of what is going on.
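>>
>> For example (binary and core file paths here are placeholders):
>>
>>   $ gdb /usr/bin/vpp /path/to/core
>>   (gdb) bt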
>>
>> --a
>>
>> On 5/29/18, Rubina Bianchi  wrote:
>>> Dear Andrew
>>>
>>> I tested your patch and my problem still exists, but my service status
>>> changed and now there isn't any information about the deadlock problem.
>>> Do you have any idea how I can provide you more information?
>>>
>>> root@MYRB:~# service vpp status
>>> * vpp.service - vector packet processing engine
>>>Loaded: loaded (/lib/systemd/system/vpp.service; disabled; vendor
>>> preset:
>>> enabled)
>>>Active: inactive (dead)
>>>
>>> May 29 09:27:06 MYRB /usr/bin/vpp[30805]: load_one_vat_plugin:67: Loaded
>>> plugin: udp_ping_test_plugin.so
>>> May 29 09:27:06 MYRB /usr/bin/vpp[30805]: load_one_vat_plugin:67: Loaded
>>> plugin: stn_test_plugin.so
>>> May 29 09:27:06 MYRB vpp[30805]: /usr/bin/vpp[30805]: dpdk: EAL init
>>> args:
>>> -c 1ff -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w
>>> 0000:08:00.0
>>> -w 0000:08:00.1 -w 0000:08
>>> May 29 09:27:06 MYRB /usr/bin/vpp[30805]: dpdk: EAL init args: -c 1ff -n
>>> 4
>>> --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:08:00.0 -w
>>> 0000:08:00.1 -w 0000:08:00.2 -w 000
>>> May 29 09:27:07 MYRB vnet[30805]: dpdk_ipsec_process:1012: not enough
>>> DPDK
>>> crypto resources, default to OpenSSL
>>> May 29 09:27:13 MYRB vnet[30805]: unix_signal_handler:124: received
>>> signal
>>> SIGCONT, PC 0x7fa535dfbac0
>>> May 29 09:27:13 MYRB vnet[30805]: received SIGTERM, exiting...
>>> May 29 09:27:13 MYRB systemd[1]: Stopping vector packet processing
>>> engine...
>>> May 29 09:27:13 MYRB vnet[30805]: unix_signal_handler:124: received
>>> signal
>>> SIGTERM, PC 0x7fa534121867
>>> May 29 09:27:13 MYRB systemd[1]: Stopped vector packet processing
>>> engine.
>>>
>>>
>>> 

Re: [vpp-dev] overflow hardening for vpp

2018-05-30 Thread Jin Sheng (jisheng)
Thank you, Dave!

From: "Dave Barach (dbarach)" 
Date: Wednesday, May 30, 2018 at 3:03 PM
To: Jin Sheng , "vpp-dev@lists.fd.io" 
Subject: RE: overflow hardening for vpp

Yup. Looks like -pie disappeared for no reason that I can remember. I’ll turn 
it back on.

D.

From: vpp-dev@lists.fd.io  On Behalf Of Jin Sheng (jisheng)
Sent: Wednesday, May 30, 2018 12:11 PM
To: vpp-dev@lists.fd.io
Cc: Dave Wallace 
Subject: [vpp-dev] overflow hardening for vpp

Hi,

We noticed that PIE and immediate binding aren’t enabled for vpp:

/usr/bin/vpp:
Position Independent Executable: no, normal executable!
Stack protected: yes
Fortify Source functions: yes (some protected functions found)
Read-only relocations: yes
Immediate binding: no, not found!

In the wiki page https://wiki.fd.io/view/VPP/Build_System_Deep_Dive, I see at 
least PIE should be enabled:

vpp_TAG_CFLAGS = -g -O2 -DFORTIFY_SOURCE=2 -march=$(MARCH) \
-fstack-protector -fPIC -pie
vpp_TAG_LDFLAGS = -g -O2 -DFORTIFY_SOURCE=2 -march=$(MARCH) \
-fstack-protector -fPIC -pie

But in the repository, it’s not even included in the initial commit in 2015.

Should we enable those hardening options? If so is vpp.mk the right place to 
add them?

Thanks,
Jin



Re: [vpp-dev] overflow hardening for vpp

2018-05-30 Thread Dave Barach
Yup. Looks like -pie disappeared for no reason that I can remember. I’ll turn 
it back on.

D.

From: vpp-dev@lists.fd.io  On Behalf Of Jin Sheng (jisheng)
Sent: Wednesday, May 30, 2018 12:11 PM
To: vpp-dev@lists.fd.io
Cc: Dave Wallace 
Subject: [vpp-dev] overflow hardening for vpp

Hi,

We noticed that PIE and immediate binding aren’t enabled for vpp:

/usr/bin/vpp:
Position Independent Executable: no, normal executable!
Stack protected: yes
Fortify Source functions: yes (some protected functions found)
Read-only relocations: yes
Immediate binding: no, not found!

In the wiki page https://wiki.fd.io/view/VPP/Build_System_Deep_Dive, I see at 
least PIE should be enabled:

vpp_TAG_CFLAGS = -g -O2 -DFORTIFY_SOURCE=2 -march=$(MARCH) \
-fstack-protector -fPIC -pie
vpp_TAG_LDFLAGS = -g -O2 -DFORTIFY_SOURCE=2 -march=$(MARCH) \
-fstack-protector -fPIC -pie

But in the repository, it’s not even included in the initial commit in 2015.

Should we enable those hardening options? If so is vpp.mk the right place to 
add them?

Thanks,
Jin



[vpp-dev] overflow hardening for vpp

2018-05-30 Thread Jin Sheng (jisheng)
Hi,

We noticed that PIE and immediate binding aren’t enabled for vpp:

/usr/bin/vpp:
Position Independent Executable: no, normal executable!
Stack protected: yes
Fortify Source functions: yes (some protected functions found)
Read-only relocations: yes
Immediate binding: no, not found!
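
The output above is in the format produced by Debian's hardening-check tool
(from devscripts); assuming that is what was used, the check is simply:

hardening-check /usr/bin/vpp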

In the wiki page https://wiki.fd.io/view/VPP/Build_System_Deep_Dive, I see at 
least PIE should be enabled:

vpp_TAG_CFLAGS = -g -O2 -DFORTIFY_SOURCE=2 -march=$(MARCH) \
-fstack-protector -fPIC -pie
vpp_TAG_LDFLAGS = -g -O2 -DFORTIFY_SOURCE=2 -march=$(MARCH) \
-fstack-protector -fPIC -pie

But in the repository, it’s not even included in the initial commit in 2015.

Should we enable those hardening options? If so is vpp.mk the right place to 
add them?

Thanks,
Jin


Re: [vpp-dev] VCL and LD Preload

2018-05-30 Thread Dave Wallace

Ville,

VCL is still under development and in the experimental phase. There are 
a number of known issues and it has yet to be tuned for performance 
(connections per second, throughput, or latency) or scale.


LDP was an early means of comparison-testing the host stack vs. the
linux kernel, but there are currently no plans to enhance it any
further.  The current focus is utilizing VCL directly in applications,
because the overhead of multiplexing VPP sessions and kernel sockets in
LDP is problematic in high-performance applications.
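
For reference, an LDP run preloads the VCL shim into an unmodified binary via
the standard LD_PRELOAD mechanism; the library name and path below are
assumptions, check your build tree for the actual location:

LD_PRELOAD=/path/to/libvcl_ldpreload.so ./my_tcp_app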


However, that being said, contributions to LDP are welcome.

Thanks,
-daw-

On 5/29/2018 11:27 AM, Kapanen, Ville (Nokia - FI/Espoo) wrote:


Hi!

I have been trying to test VPP’s potential for TCP/IP latency
optimization. When I run socket_test.sh without any special parameters
(so just a comparison between native-kernel, native-vcl and
native-preload), it seems that preload is around six times worse than
the kernel. I have also tried this same test with a few different hardware
setups. I also tried to compare LD preload to kernel performance
between cloud VMs without using the provided script, and ended up
getting similar results. All tests are run on 18.07 (build-release).


What is the status of VCL and LD preload? Should it be working, and am I
missing something obvious?


Thanks,

Ville Kapanen






Re: [vpp-dev] anomaly in deleting tcp idle session in vpp

2018-05-30 Thread Andrew Yourtchenko
If the table is full, it should FIFO-reuse the TCP transient sessions, not the
established ones.
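
A conceptual sketch of that reuse policy (my illustration, not the VPP code):
on a full table, take a free slot if one exists, otherwise evict the oldest
transient session, and never touch established ones.

#include <stdio.h>

enum { TABLE_SIZE = 4 };
typedef enum { FREE, TRANSIENT, ESTABLISHED } sess_state_t;
typedef struct { sess_state_t state; unsigned created; } session_t;

static session_t table[TABLE_SIZE];
static unsigned now_tick;

/* Find a free slot; failing that, FIFO-reuse the oldest transient
   session. Established sessions are never evicted. */
static int
session_alloc (void)
{
  int i, oldest = -1;
  for (i = 0; i < TABLE_SIZE; i++)
    if (table[i].state == FREE)
      goto take;
  for (i = 0; i < TABLE_SIZE; i++)
    if (table[i].state == TRANSIENT
        && (oldest < 0 || table[i].created < table[oldest].created))
      oldest = i;
  if (oldest < 0)
    return -1;   /* every session is established: nothing to reuse */
  i = oldest;
take:
  table[i].state = TRANSIENT;
  table[i].created = now_tick++;
  return i;
}

int
main (void)
{
  printf ("allocated slot %d\n", session_alloc ());
  return 0;
}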

--a

> On 30 May 2018, at 14:00, emma sdi  wrote:
> 
> Dear Folks,
> I have a problem with vpp stateful mode. I observed that vpp starts to delete
> tcp idle sessions when the session table is full. My question is: is this
> behavior implemented as a normal routine, or is it an anomaly? I ask because
> this behavior is not normal generally (for example in conntrack), where an
> established session has to exist until its timeout reaches zero. I expect vpp
> to hold all old tcp idle sessions instead of creating new sessions when the
> session table doesn't have any empty entry.
> Best Regards,
> 


[vpp-dev] Nexus fd.io.master.centos7 VPP artifacts

2018-05-30 Thread Peter Mikus
Hello,

I have recently spotted that the CentOS repo got reduced and old binaries are
missing [1].

Is this expected?
Will something similar be done for the Ubuntu repos?

Was this announced somewhere?

Thank you.

[1] https://nexus.fd.io/content/repositories/fd.io.master.centos7/io/fd/vpp/vpp/

Peter Mikus
Engineer - Software
Cisco Systems Limited



Re: [vpp-dev] query on use_pthread in startup.conf

2018-05-30 Thread bindiya Kurle
Thanks Damjan. I was trying that because I want to move threads to another
partition. At system start, if the main core is associated with some control
core group, the worker thread inherits this from the parent thread, and later
set-affinity fails if you try to move the worker thread to a different
partition. Hence I wanted to explore pthreads with dpdk. Is there any way to
specify the worker thread group association?

Regards,
Bindiya

On Wed, May 23, 2018 at 9:31 PM, Damjan Marion 
wrote:

> You cannot do that if you use DPDK. If you disable DPDK you will get all
> worker threads started as pthreads.
>
> And to be even more precise, DPDK threads are also pthreads, they just
> come with some extra stuff.
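>
> For example (a sketch, using the standard plugin disable syntax), a
> startup.conf stanza that disables the DPDK plugin so that all workers
> start as plain pthreads:
>
> plugins {
>   plugin dpdk_plugin.so { disable }
> }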
>
> What do you want to achieve?
>
> —
> Damjan
>
> On 17 May 2018, at 13:43, bindiya Kurle  wrote:
>
> Hi all,
> I tried to use use-pthreads in startup.conf, but the code still takes the
> else path below. I tried the following option for creating multiple threads:
>
> cpu {
>   use-pthreads
>   main-core 1
>   corelist-workers 2,3
> }
>
> With this config, the code always hits the else part of the code below,
> from file src/vlib/threads.c:
>
>  if (tr->use_pthreads || tm->use_pthreads)
>    {
>      for (j = 0; j < tr->count; j++)
>        {
>          w = vlib_worker_threads + worker_thread_index++;
>          err = vlib_launch_thread_int (vlib_worker_thread_bootstrap_fn,
>                                        w, 0);
>          if (err)
>            clib_error_report (err);
>        }
>    }
>  else
>    {
>      uword c;
>      /* *INDENT-OFF* */
>      clib_bitmap_foreach (c, tr->coremask, ({
>        w = vlib_worker_threads + worker_thread_index++;
>        err = vlib_launch_thread_int (vlib_worker_thread_bootstrap_fn,
>                                      w, c);
>        if (err)
>          clib_error_report (err);
>      }));
>      /* *INDENT-ON* */
>    }
>
> Can somebody help with how I can create multiple threads using pthreads
> instead of dpdk_launch_thread?
>
>
> Regards,
> Bindiya
> 
>
>


[vpp-dev] anomaly in deleting tcp idle session in vpp

2018-05-30 Thread emma sdi
Dear Folks,
I have a problem with vpp stateful mode. I observed that vpp starts to
delete tcp idle sessions when the session table is full. My question is: is
this behavior implemented as a normal routine, or is it an anomaly? I ask
because this behavior is not normal generally (for example in conntrack),
where an established session has to exist until its timeout reaches zero. I
expect vpp to hold all old tcp idle sessions instead of creating new sessions
when the session table doesn't have any empty entry.
Best Regards,


Re: [vpp-dev] How to trigger perf test and compare the results

2018-05-30 Thread Zhiyong Yang
Thank you very much, Damjan.

From: Damjan Marion [mailto:dmarion.li...@gmail.com]
Sent: Wednesday, May 30, 2018 5:52 PM
To: Yang, Zhiyong 
Cc: vpp-dev@lists.fd.io; Kinsella, Ray ; csit-dev 

Subject: Re: How to trigger perf test and compare the results

+csit-dev

You can compare with numbers available in perf dashboard.

https://docs.fd.io/csit/master/trending/introduction/index.html

--
Damjan


On 29 May 2018, at 11:17, Yang, Zhiyong <zhiyong.y...@intel.com> wrote:

Hi Guys,

   I need CSIT perf testing to test a patch.
   I know that vpp-verify-perf-l2 can trigger a perf test, but I
only see the X520 results.

My questions are :

1.  How do I trigger the XL710 perf test?
2.  What do we compare against … are there nightly build results?  Where do I
find those?
3.  These are MRR (maximum receive rate) tests; how do we trigger NDR/PDR?


Thanks
Zhiyong



Re: [vpp-dev] Rx stuck to 0 after a while

2018-05-30 Thread Andrew Yourtchenko
Dear Rubina,

Thanks for checking it!

yeah, actually that patch was leaking sessions in the session reuse
path. I got the setup running in the lab locally yesterday and am working
on a better way to do it...

Will get back to you when I am happy with the way the code works..

--a



On 5/29/18, Rubina Bianchi  wrote:
> Dear Andrew
>
> I cleaned everything and created new deb packages with your patch once
> again. With your patch I never see the deadlock again, but I still have
> a throughput problem in my scenario.
>
> -Per port stats table
>   ports |   0 |   1
> -
>opackets |   474826597 |   452028770
>  obytes |207843848531 |199591809555
>ipackets |71010677 |72028456
>  ibytes | 31441646551 | 31687562468
> ierrors |   0 |   0
> oerrors |   0 |   0
>   Tx Bw |   9.56 Gbps |   9.16 Gbps
>
> -Global stats enabled
>  Cpu Utilization : 88.4  %  7.1 Gb/core
>  Platform_factor : 1.0
>  Total-Tx:  18.72 Gbps
>  Total-Rx:  59.30 Mbps
>  Total-PPS   :   5.31 Mpps
>  Total-CPS   :  79.79 Kcps
>
>  Expected-PPS:   9.02 Mpps
>  Expected-CPS: 135.31 Kcps
>  Expected-BPS:  31.77 Gbps
>
>  Active-flows:88837  Clients :  252   Socket-util : 0.5598 %
>  Open-flows  : 14708455  Servers :65532   Socket :88837
> Socket/Clients :  352.5
>  Total_queue_full : 328355248
>  drop-rate   :  18.66 Gbps
>  current time: 180.9 sec
>  test duration   : 99819.1 sec
>
> In the best case (4 interfaces on one numa, of which only 2 have acl) my device
> (HP DL380 G9) throughput is at the maximum (18.72 Gbps), but in the worst case
> (4 interfaces on one numa, all of which have acl) my device throughput drops
> from the maximum to around 60 Mbps. Actually the patch just prevents the
> deadlock in my case, but throughput is the same as before.
>
> 
> From: Andrew  Yourtchenko 
> Sent: Tuesday, May 29, 2018 10:11 AM
> To: Rubina Bianchi
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Rx stuck to 0 after a while
>
> Dear Rubina,
>
> thank you for quickly checking it!
>
> Judging by the logs, VPP quits, so I would say there should be a
> core file; could you check?
>
> If you find it (doublecheck by the timestamps that it is indeed the
> fresh one), you can load it in gdb (using gdb 'path-to-vpp-binary'
> 'path-to-core') and then get the backtrace using 'bt'; this will give
> a better idea of what is going on.
>
> --a
>
> On 5/29/18, Rubina Bianchi  wrote:
>> Dear Andrew
>>
>> I tested your patch and my problem still exists, but my service status
>> changed and now there isn't any information about the deadlock problem.
>> Do you have any idea how I can provide you more information?
>>
>> root@MYRB:~# service vpp status
>> * vpp.service - vector packet processing engine
>>Loaded: loaded (/lib/systemd/system/vpp.service; disabled; vendor
>> preset:
>> enabled)
>>Active: inactive (dead)
>>
>> May 29 09:27:06 MYRB /usr/bin/vpp[30805]: load_one_vat_plugin:67: Loaded
>> plugin: udp_ping_test_plugin.so
>> May 29 09:27:06 MYRB /usr/bin/vpp[30805]: load_one_vat_plugin:67: Loaded
>> plugin: stn_test_plugin.so
>> May 29 09:27:06 MYRB vpp[30805]: /usr/bin/vpp[30805]: dpdk: EAL init
>> args:
>> -c 1ff -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w
>> 0000:08:00.0
>> -w 0000:08:00.1 -w 0000:08
>> May 29 09:27:06 MYRB /usr/bin/vpp[30805]: dpdk: EAL init args: -c 1ff -n
>> 4
>> --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:08:00.0 -w
>> 0000:08:00.1 -w 0000:08:00.2 -w 000
>> May 29 09:27:07 MYRB vnet[30805]: dpdk_ipsec_process:1012: not enough
>> DPDK
>> crypto resources, default to OpenSSL
>> May 29 09:27:13 MYRB vnet[30805]: unix_signal_handler:124: received
>> signal
>> SIGCONT, PC 0x7fa535dfbac0
>> May 29 09:27:13 MYRB vnet[30805]: received SIGTERM, exiting...
>> May 29 09:27:13 MYRB systemd[1]: Stopping vector packet processing
>> engine...
>> May 29 09:27:13 MYRB vnet[30805]: unix_signal_handler:124: received
>> signal
>> SIGTERM, PC 0x7fa534121867
>> May 29 09:27:13 MYRB systemd[1]: Stopped vector packet processing engine.
>>
>>
>> 
>> From: Andrew  Yourtchenko 
>> Sent: Monday, May 28, 2018 5:58 PM
>> To: Rubina Bianchi
>> Cc: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] Rx stuck to 0 after a while
>>
>> Dear Rubina,
>>
>> Thanks for catching and reporting this!
>>
>> I suspect what might be happening is that my recent change to using two
>> unidirectional sessions in bihash vs. the single one triggered a race,
>> whereby as the owning worker is deleting the session,
>> the non-owning worker is trying to update it. That would logically
>> explain the "BUG: .." line (since you don't change the interfaces or
>> move the traffic around, the 5 tuples 

Re: [vpp-dev] How to trigger perf test and compare the results

2018-05-30 Thread Damjan Marion
+csit-dev

You can compare with numbers available in perf dashboard.

https://docs.fd.io/csit/master/trending/introduction/index.html 


-- 
Damjan

> On 29 May 2018, at 11:17, Yang, Zhiyong  wrote:
> 
> Hi Guys,
>  
>I need CSIT perf testing to test a patch.
>I know that vpp-verify-perf-l2 can trigger a perf test, but I
> only see the X520 results.
>  
> My questions are :
>  
> 1.  How do I trigger the XL710 perf test?
> 2.  What do we compare against … are there nightly build results?  Where do
> I find those?
> 3.  These are MRR (maximum receive rate) tests; how do we trigger NDR/PDR?
>  
>  
> Thanks
> Zhiyong