Re: [vpp-dev] Query for IPSec support on VPP

2017-09-05 Thread Mukesh Yadav (mukyadav)
HI Sergio,

As I mentioned that transport mode is working now.
Next I tried tunnel mode.
Here I can see successfully packet decryption. But later inner packet gets 
dropped.


Outer IPSec packet is like 172.28.128.4 -> 172.28.128.5
Inner packet is 1.1.1.1 -> 2.2.2.2
I have added 2.2.2.2 on same interface as is Outer IPSec end-point


vpp# show int address

GigabitEthernet0/8/0 (up):

  172.28.128.5/24

  2.2.2.2/24

local0 (dn):

vpp# q

Above I have configured as below:

set int ip address GigabitEthernet0/8/0 172.28.128.5/24

set int ip address GigabitEthernet0/8/0 2.2.2.2/24

set interface state GigabitEthernet0/8/0 up



Trace looks like:

00:39:17:383051: dpdk-input

  GigabitEthernet0/8/0 rx queue 0

  buffer 0x4a5b: current data 14, length 152, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0

  PKT MBUF: port 0, nb_segs 1, pkt_len 166

buf_len 2176, data_len 166, ol_flags 0x0, data_off 128, phys_addr 0x9cf29700

packet_type 0x0

  IP4: 08:00:27:f5:2b:b9 -> 08:00:27:a9:8e:d4

  IPSEC_ESP: 172.28.128.4 -> 172.28.128.5

tos 0x00, ttl 64, length 152, checksum 0xf9d5

fragment id 0xe81b, flags DONT_FRAGMENT

00:39:17:383093: ip4-input

  IPSEC_ESP: 172.28.128.4 -> 172.28.128.5

tos 0x00, ttl 64, length 152, checksum 0xf9d5

fragment id 0xe81b, flags DONT_FRAGMENT

00:39:17:383099: ipsec-input-ip4

  esp: sa_id 20 spi 1000 seq 7

00:39:17:383101: dpdk-esp-decrypt

  esp: crypto aes-cbc-128 integrity sha1-96

00:39:17:383105: dpdk-crypto-input

  dpdk_crypto: cryptodev-id 1 queue-pair 0 next-index 2 status 1 sa-idx 1



00:39:17:383115: dpdk-esp-decrypt-post



00:39:17:383116: ip4-input

  ICMP: 1.1.1.1 -> 2.2.2.2

tos 0x00, ttl 64, length 84, checksum 0x17a6

fragment id 0x1cfe, flags DONT_FRAGMENT

  ICMP echo_request checksum 0x80dc

00:39:17:383117: ipsec-input-ip4

  esp: no esp packet

00:39:17:383117: ip4-lookup

  fib 0 dpo-idx 7 flow hash: 0x

  ICMP: 1.1.1.1 -> 2.2.2.2

tos 0x00, ttl 64, length 84, checksum 0x17a6

fragment id 0x1cfe, flags DONT_FRAGMENT

  ICMP echo_request checksum 0x80dc

00:39:17:383122: ip4-local

ICMP: 1.1.1.1 -> 2.2.2.2

  tos 0x00, ttl 64, length 84, checksum 0x17a6

  fragment id 0x1cfe, flags DONT_FRAGMENT

ICMP echo_request checksum 0x80dc

00:39:17:383123: error-drop

  ip4-input: ip4 source lookup miss



Do I have to configure Inner IP in some other way to avoid “ip4 source lookup 
miss”.

Thanks
Mukesh
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Query for IPSec support on VPP

2017-09-05 Thread Mukesh Yadav (mukyadav)
Thanks Sergio,
DPDK based IPsec basic tunnel worked with multi-core config.

cpu {
main-core 0
corelist-workers 1
#skip-cores 4
workers 1
}

Now since DPDK basic IPSec is working. I will try to dig in more in detail.
One query I posted in early threads, possibly got missed.
Currently ESP is supported, is there a plan to support AH in future?


Thanks
Mukesh


On 05/09/17, 7:49 PM, "Sergio Gonzalez Monroy" 
 wrote:

There are a few different ways to set cores/workers, best explained in 
the following link:
 https://wiki.fd.io/view/VPP/Using_VPP_In_A_Multi-thread_Model

Thanks,
Sergio

 


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] mheap performance

2017-09-05 Thread Ole Troan
Jacek,

It's also been on my list for a while to add a better bulk add for MAP domains 
/ rules.
Any idea of the scale you are looking at here?

Best regards,
Ole


> On 5 Sep 2017, at 15:07, Jacek Siuda  wrote:
> 
> Hi,
> 
> I'm conducting a tunnel test using VPP (vnet) map with the following 
> parameters:
> ea_bits_len=0, psid_offset=16, psid=length, single rule for each domain; 
> total number of tunnels: 30, total number of control messages: 600k.
> 
> My problem is with simple adding tunnels. After adding more than ~150k-200k, 
> performance drops significantly: first 100k is added in ~3s (on asynchronous 
> C client), next 100k in another ~5s, but the last 100k takes ~37s to add; in 
> total: ~45s. Python clients are performing even worse: 32 minutes(!) for 300k 
> tunnels with synchronous (blocking) version and ~95s with asynchronous. The 
> python clients are expected to perform a bit worse according to vpp docs, but 
> I was worried by non-linear time of single tunnel addition that is visible 
> even on C client.
> 
> While investigating this using perf, I found the culprit: it is the memory 
> allocation done for ip address by rule addition request.
> The memory is allocated by clib, which is using mheap library (~98% of cpu 
> consumption). I looked into mheap and it looks a bit complicated for 
> allocating a short object.
> I've done a short experiment by replacing (in vnet/map/ only) clib allocation 
> with DPDK rte_malloc() and achieved a way better performance: 300k tunnels in 
> ~5-6s with the same C-client, and respectively ~70s and ~30-40s with Python 
> clients. Also, I haven't noticed any negative impact on packet throughput 
> with my experimental allocator.
> 
> So, here are my questions:
> 1) Did someone other reported performance penalties for using mheap library? 
> I've searched the list archive and could not find any related questions.
> 2) Why mheap library was chosen to be used in clib? Are there any performance 
> benefits in some scenarios?
> 3) Are there any (long- or short-term) plans to replace memory management in 
> clib with some other library?
> 4) I wonder, if I'd like to upstream my solution, how should I approach 
> customization of memory allocation, so it would be accepted by community. 
> Installable function pointers defaulting to clib?
> 
> Best Regards,
> Jacek Siuda.
> 
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev



signature.asc
Description: Message signed with OpenPGP
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Running CLI against named vpp instance

2017-09-05 Thread Dave Wallace

Marek,

What is the uid/gid of /dev/shm/vpe-api ?

Is the user a member of the vpp group?

Does your VPP workspace include the patch c900ccc34 "Enabled gid vpp in 
startup.conf to allow non-root vppctl access" ?


Thanks,
-daw-

On 09/05/2017 06:08 AM, Marek Gradzki -X (mgradzki - PANTHEON 
TECHNOLOGIES at Cisco) wrote:


Hi,

I am having problems with running CLI against named vpp instance 
(g809bc74):


sudo vpp api-segment { prefix vpp0 }

sudo vppctl -p vpp0 show int

clib_socket_init: connect: Connection refused

But ps shows vpp process is running.

It worked with 17.07.

Is it no longer supported or I need some additional configuration?

Regards,

Marek



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07

2017-09-05 Thread Billy McFall
Thanks for the update John. I'll this along to our test team. Not sure when
we can schedule a retest, but when we do, I'll provide our results.

Thanks again,
Billy

On Tue, Sep 5, 2017 at 10:10 AM, John Lo (loj)  wrote:

> Hi Billy,
>
>
>
> I submitted fixes for VPP-963, now merged in both 17.07 and master/17.10,
> that I believe should address the NDR/PDR performance issue with the 10K
> and 1M flow cases. The regression was caused by a bug fix in the L2
> learning path to update stale time stamp and sequence number of MAC entries
> in L2FIB. Because the time stamp is in unit of  minutes, whenever the clock
> hits the minute mark, there can be a prolonged burst of MAC updates
> affecting forwarding performance with large number of MACs in L2 FIB
> needing updates. My fix would smooth out the update burst to reduce the
> impact. I believe you should now find the 17.07 or 17.10 performance for
> 10K and 1M flows slightly lower but fairly close to the level of 17.04,
> instead of somewhere between 1/3 to 1/2 to that of the 17.04 as you
> measured before.
>
>
>
> I also doubled the memory size of L2FIB table to fit 4M MACs and set the
> learn limit to 4M entries. During my test, I found L2FIB will run out of
> memory at around 2.8M MACs with the previous memory size.
>
>
>
> Regards,
>
> John
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On
> Behalf Of *Billy McFall
> *Sent:* Monday, August 28, 2017 12:47 PM
> *To:* Maciek Konstantynowicz (mkonstan) 
> *Cc:* csit-...@lists.fd.io; vpp-dev 
> *Subject:* Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07
>
>
>
>
>
>
>
> On Mon, Aug 28, 2017 at 8:53 AM, Maciek Konstantynowicz (mkonstan) <
> mkons...@cisco.com> wrote:
>
> + csit-dev
>
>
>
> Billy,
>
>
>
> Per the last week CSIT project call, from CSIT perspective, we
>
> classified your reported issue as Test coverage escape.
>
>
>
> Summary
>
> ===
>
> CSIT test coverage got fixed, see more detail below. The CSIT tests
>
> uncovered regression for L2BD with MAC learning with higher total number
>
> of MACs in L2FIB, >>10k MAC, for multi-threaded configurations. Single-
>
> threaded configurations seem to be not impacted.
>
>
>
> Billy, Karl, Can you confirm this aligns with your findings?
>
>
>
> When you say "multi-threaded configuration", I assume you mean multiple
> worker threads? Karl's tests had 4 workers, one for each NIC (physical
> and vhost-user). He only tested multi-threaded, so we can not confirm that 
> single-threaded
> configurations seem to be not impacted.
>
>
>
> Our numbers are a little different from yours, but we are both seeing
> drops between releases. We had a bigger drop off with 10k flows, but seems
> to be similar with the million flow tests.
>
>
>
> I was a little disappointed the MAC limit change by John Lo on 8/23 didn't
> improve master number some.
>
>
>
> Thanks for all the hard work and adding these additional test cases.
>
>
>
> Billy
>
>
>
>
>
> More detail
>
> ===
>
> MAC scale tests have been now added L2BD and L2BD+vhost CSIT suites, as
>
> a simple extension to existing L2 testing suites. Some known issues with
>
> TG prevented CSIT to add those tests in the past, but now as TG issues
>
> have been addressed, the tests could be added swiftly. The complete list
>
> of added tests is listed in [1] - thanks to Peter Mikus for great work
>
> there!
>
>
>
> Results from running those tests multiple times within FD.io
>  CSIT lab
>
> infra can be glanced over by checking dedicated test trigger commits
>
> [2][3][4], summary graphs in linked xls [5]. The results confirm there
>
> is regression in VPP l2fib code affecting all scaled up MAC tests in
>
> multi-thread configuration. Single-thread configurations seems not be
>
> impacted.
>
>
>
> The tests in commit [1] are not merged yet, as they're waiting for
>
> TG/TRex team to fix TRex issue with mis-calculating Ethernet FCS with
>
> large number of L2 MAC flows (>10k MAC flows). Issue is tracked by [6],
>
> TRex v2.29 with the fix ETA is w/e 1-Sep i.e. this week. Reported CSIT test
>
> results are using Ethernet frames with UDP headers that's masking the
>
> TRex issue.
>
>
>
> We have also vpp git bisected the problem between v17.04 (good) and
>
> v17.07 (bad) in a separate IXIA based lab in SJC, and found the culprit
>
> vpp patch [7]. Awaiting fix from vpp-dev, jira ticket raised [8].
>
>
>
> Many thanks for reporting this regression and working with CSIT to plug
>
> this hole in testing.
>
>
>
> -Maciek
>
>
>
> [1] CSIT-786 L2FIB scale testing [https://gerrit.fd.io/r/#/c/8145/
> ge8145] [https://jira.fd.io/browse/CSIT-786 CSIT-786];
>
> L2FIB scale testing for 10k, 100k, 1M FIB entries
>
>  ./l2:
>
>  10ge2p1x520-eth-l2bdscale10kmaclrn-ndrpdrdisc.robot
>
>  10ge2p1x520-eth-l2bdscale100kmaclrn-ndrpdrdisc.robot
>
>  10ge2p1x520-eth-l2bdscale1mmaclrn-ndrpdrdisc.robot
>
>  

Re: [vpp-dev] Query for IPSec support on VPP

2017-09-05 Thread Sergio Gonzalez Monroy
There are a few different ways to set cores/workers, best explained in 
the following link:

https://wiki.fd.io/view/VPP/Using_VPP_In_A_Multi-thread_Model

Thanks,
Sergio

On 05/09/2017 15:10, Mukesh Yadav (mukyadav) wrote:

Thanks Sergio,

I will for sure try latest clone with a fix.
Besides what is configuration to test same with worker core.
Will be helpful for me in future..

Thanks
Mukesh

On 05/09/17, 6:22 PM, "Sergio Gonzalez Monroy" 
 wrote:

 Hi Mukesh,
 
 I was able to find the bug. It was not directly related to transport

 mode but to the setup when using single core (master core) without
 workers ( https://gerrit.fd.io/r/8302 ).
 
 You can either apply the change or setup VPP to use workers (at the

 moment you are running with single core, no workers).
 
 Regards,

 Sergio
  





___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Query for IPSec support on VPP

2017-09-05 Thread Mukesh Yadav (mukyadav)
Thanks Sergio,

I will for sure try latest clone with a fix.
Besides what is configuration to test same with worker core.
Will be helpful for me in future..

Thanks
Mukesh

On 05/09/17, 6:22 PM, "Sergio Gonzalez Monroy" 
 wrote:

Hi Mukesh,

I was able to find the bug. It was not directly related to transport 
mode but to the setup when using single core (master core) without 
workers ( https://gerrit.fd.io/r/8302 ).

You can either apply the change or setup VPP to use workers (at the 
moment you are running with single core, no workers).

Regards,
Sergio
 


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] mheap performance

2017-09-05 Thread Dave Barach (dbarach)
Dear Jacek,

Use of the clib memory allocator is mainly historical. It’s elegant in a couple 
of ways - including built-in leak-finding - but it has been known to backfire 
in terms of performance. Individual mheaps are limited to 4gb in a [typical] 
32-bit vector length image.

Note that the idiosyncratic mheap API functions “tell me how long this object 
really is” and “allocate N bytes aligned to a boundary at a certain offset” are 
used all over the place.

I wouldn’t mind replacing it - so long as we don’t create a hard dependency on 
the dpdk - but before we go there...: Tell me a bit about the scenario at hand. 
What are we repeatedly allocating / freeing? That’s almost never necessary...

Can you easily share the offending backtrace?

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jacek Siuda
Sent: Tuesday, September 5, 2017 9:08 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] mheap performance

Hi,
I'm conducting a tunnel test using VPP (vnet) map with the following parameters:
ea_bits_len=0, psid_offset=16, psid=length, single rule for each domain; total 
number of tunnels: 30, total number of control messages: 600k.
My problem is with simple adding tunnels. After adding more than ~150k-200k, 
performance drops significantly: first 100k is added in ~3s (on asynchronous C 
client), next 100k in another ~5s, but the last 100k takes ~37s to add; in 
total: ~45s. Python clients are performing even worse: 32 minutes(!) for 300k 
tunnels with synchronous (blocking) version and ~95s with asynchronous. The 
python clients are expected to perform a bit worse according to vpp docs, but I 
was worried by non-linear time of single tunnel addition that is visible even 
on C client.
While investigating this using perf, I found the culprit: it is the memory 
allocation done for ip address by rule addition request.
The memory is allocated by clib, which is using mheap library (~98% of cpu 
consumption). I looked into mheap and it looks a bit complicated for allocating 
a short object.
I've done a short experiment by replacing (in vnet/map/ only) clib allocation 
with DPDK rte_malloc() and achieved a way better performance: 300k tunnels in 
~5-6s with the same C-client, and respectively ~70s and ~30-40s with Python 
clients. Also, I haven't noticed any negative impact on packet throughput with 
my experimental allocator.
So, here are my questions:
1) Did someone other reported performance penalties for using mheap library? 
I've searched the list archive and could not find any related questions.
2) Why mheap library was chosen to be used in clib? Are there any performance 
benefits in some scenarios?
3) Are there any (long- or short-term) plans to replace memory management in 
clib with some other library?
4) I wonder, if I'd like to upstream my solution, how should I approach 
customization of memory allocation, so it would be accepted by community. 
Installable function pointers defaulting to clib?

Best Regards,
Jacek Siuda.


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Query for IPSec support on VPP

2017-09-05 Thread Sergio Gonzalez Monroy

Hi Mukesh,

I was able to find the bug. It was not directly related to transport 
mode but to the setup when using single core (master core) without 
workers ( https://gerrit.fd.io/r/8302 ).


You can either apply the change or setup VPP to use workers (at the 
moment you are running with single core, no workers).


Regards,
Sergio

On 05/09/2017 09:28, Sergio Gonzalez Monroy wrote:

On 04/09/2017 17:07, Mukesh Yadav (mukyadav) wrote:

HI Sergio,


I see new document as updated in latest clone is as below:
dpdk {
 dev :81:00.0
 dev :81:00.1
 dev :85:01.0
 dev :85:01.1
 vdev crypto_aesni_mb0,socket_id=1
 vdev crypto_aesni_mb1,socket_id=1
}
I think it shall be “vdev crypto_aesni_mb0” instead as you mentioned 
earlier in one of mail.

With “vdev crypto_aesni_mb0,socket_id=1”, I get error.
CRYPTODEV: [crypto_aesni_mb] cryptodev_aesni_mb_create() line 727: 
failed to create cryptodev vdev
CRYPTODEV: [crypto_aesni_mb] cryptodev_aesni_mb_create() line 769: 
driver crypto_aesni_mb0: cryptodev_aesni_create failed

EAL: failed to initialize crypto_aesni_mb0 device


The socket_id=1 was just an example of how to place the vdev in socket 
1 if you run with cores in that socket.




When I am trying “vdev crypto_aesni_mb0”, it seems it works fine and 
get no error. Log is as below.

DPDK physical memory layout:
Segment 0: phys:0x6f40, len:1073741824, virt:0x7f526520, 
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0

DPDK Cryptodevs info:
dev_idn_qpnb_objcache_size
182512


Still when I am sending ESP packet to VPP, it is getting silently 
dropped.
Note: Same config is working when it is non-dpdk i.e VPP core IPsec 
core.




Trace:
vpp# show trace
--- Start of thread 0 vpp_main ---
Packet 1

00:02:14:020870: dpdk-input
   GigabitEthernet0/8/0 rx queue 0
   buffer 0x4cf2: current data 14, length 136, free-list 0, 
clone-count 0, totlen-nifb 0, trace 0x0

   PKT MBUF: port 0, nb_segs 1, pkt_len 150
 buf_len 2176, data_len 150, ol_flags 0x0, data_off 128, 
phys_addr 0x9b9cf140

 packet_type 0x0
   IP4: 08:00:27:f5:2b:b9 -> 08:00:27:a9:8e:d4
   IPSEC_ESP: 172.28.128.4 -> 172.28.128.5
 tos 0x00, ttl 64, length 136, checksum 0x7fa7
 fragment id 0x625a, flags DONT_FRAGMENT
00:02:14:020903: ip4-input
   IPSEC_ESP: 172.28.128.4 -> 172.28.128.5
 tos 0x00, ttl 64, length 136, checksum 0x7fa7
 fragment id 0x625a, flags DONT_FRAGMENT
00:02:14:020909: ipsec-input-ip4
   esp: sa_id 20 spi 1000 seq 3
00:02:14:020910: dpdk-esp-decrypt
   esp: crypto aes-cbc-128 integrity sha1-96

vpp# q


vagrant@localhost:~$ sudo vppctl show errors
 _____   _  ___
  __/ __/ _ \  (_)__| | / / _ \/ _ \
  _/ _// // / / / _ \   | |/ / ___/ ___/
  /_/ /(_)_/\___/   |___/_/  /_/

vpp# show errors
CountNode  Reason
  1dpdk-esp-decryptESP pkts received
  1 ipsec-input-ip4IPSEC pkts received
vpp# q



Config done on VPP:
vpp# ipsec sa add 10 spi 1001 esp crypto-alg aes-cbc-128 crypto-key 
4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 
4339314b55523947594d6d3547666b45764e6a58
vpp# ipsec sa add 20 spi 1000 esp crypto-alg aes-cbc-128 crypto-key 
4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 
4339314b55523947594d6d3547666b45764e6a58

vpp# ipsec spd add 1
vpp# set interface ipsec spd GigabitEthernet0/8/0 1
vpp# ipsec policy add spd 1 priority 100 inbound action bypass 
protocol 50
vpp# ipsec policy add spd 1 priority 100 outbound action bypass 
protocol 50
vpp# ipsec policy add spd 1 priority 10 inbound action protect sa 20 
local-ip-range 172.28.128.5 - 172.28.128.5 remote-ip-range 
172.28.128.4 - 172.28.128.4
vpp# ipsec policy add spd 1 priority 10 outbound action protect sa 10 
local-ip-range 172.28.128.5 - 172.28.128.5 remote-ip-range 
172.28.128.4 - 172.28.128.4

vpp#

Do you think, it is unresolved issue. Because I think same config 
unchanged at both IPsec end-point i.e VPP and host Ubuntu, it works 
fine.

Just when I start VPP with below config, it shows above errors.


Agree, same configuration should work, so it is likely that there is a 
bug for transport mode.

I'll investigate.

Thanks,
Sergio


dpdk {
  socket-mem 1024,1024
  num-mbufs 131072
  dev :00:08.0
  vdev crypto_aesni_mb0
  vdev crypto_aesni_mb1
}

If I remove above config and all setup same at Ubunto and vpp, things 
works fine.




Thanks
Mukesh

On 01/09/17, 1:52 PM, "Sergio Gonzalez Monroy" 
 wrote:


 FYI I updated the doc, hopefully everything is correct and up to 
date now.

 https://gerrit.fd.io/r/#/c/8273/
  Thanks,
 Sergio
  On 31/08/2017 10:00, Sergio Gonzalez Monroy wrote:
 > On 31/08/2017 09:37, Mukesh Yadav (mukyadav) wrote:
 >>
 >> Thanks a lot Sergio for lot of patience and help,

[vpp-dev] Running CLI against named vpp instance

2017-09-05 Thread Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Hi,

I am having problems with running CLI against named vpp instance (g809bc74):

sudo vpp api-segment { prefix vpp0 }

sudo vppctl -p vpp0 show int
clib_socket_init: connect: Connection refused

But ps shows vpp process is running.

It worked with 17.07.
Is it no longer supported or I need some additional configuration?

Regards,
Marek
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] query on hugepages usage in VPP

2017-09-05 Thread Balaji Kn
Hello,

Can you help me on below query related to 1G huge pages usage in VPP.

Regards,
Balaji


On Thu, Aug 31, 2017 at 5:19 PM, Balaji Kn  wrote:

> Hello,
>
> I am using *v17.07*. I am trying to configure huge page size as 1GB and
> reserve 16 huge pages for VPP.
> I went through /etc/sysctl.d/80-vpp.conf file and found options only for
> huge page of size 2M.
>
> *output of vpp-conf file.*
> .# Number of 2MB hugepages desired
> vm.nr_hugepages=1024
>
> # Must be greater than or equal to (2 * vm.nr_hugepages).
> vm.max_map_count=3096
>
> # All groups allowed to access hugepages
> vm.hugetlb_shm_group=0
>
> # Shared Memory Max must be greator or equal to the total size of
> hugepages.
> # For 2MB pages, TotalHugepageSize = vm.nr_hugepages * 2 * 1024 * 1024
> # If the existing kernel.shmmax setting  (cat /sys/proc/kernel/shmmax)
> # is greater than the calculated TotalHugepageSize then set this parameter
> # to current shmmax value.
> kernel.shmmax=2147483648 <(214)%20748-3648>
>
> Please can you let me know configurations i need to do so that VPP runs
> with 1GB huge pages.
>
> Host OS is supporting 1GB huge pages.
>
> Regards,
> Balaji
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Compile error

2017-09-05 Thread 薛欣颖
Hi Sergio,

I have  run 'make install-dep'.
The nasm version is 2.10.09.

Thanks,
xyxue



 
From: Sergio Gonzalez Monroy
Date: 2017-09-05 16:24
To: 薛欣颖; vpp-dev
Subject: Re: [vpp-dev] Compile error
Hi,

Have you run 'make install-dep' ?
Which nasm version do you have in your system?

Thanks,
Sergio

On 04/09/2017 11:59, 薛欣颖 wrote:

Hi,

I got code by : git clone https://gerrit.fd.io/r/vpp.

input ‘make  dpdk-install-dev’ the error infomation is shown below:
Building IPSec-MB 0.46 library
make[5]: Entering directory `/home/vpp_communication/vpp/dpdk'
mkdir -p /home/vpp_communication/vpp/dpdk/deb/debian/tmp/usr/lib/
# Do not build GCM stuff if we are building ISA_L
make -C /home/vpp_communication/vpp/dpdk/deb/_build/intel-ipsec-mb-0.46 -j 
NO_GCM=n
make: Entering an unknown directory
make: *** /home/vpp_communication/vpp/dpdk/deb/_build/intel-ipsec-mb-0.46: No 
such file or directory.  Stop.
make: Leaving an unknown directory
make[5]: *** [build-ipsec-mb] Error 2
make[5]: Leaving directory `/home/vpp_communication/vpp/dpdk'
make[4]: *** [override_dh_install] Error 2
make[4]: Leaving directory `/home/vpp_communication/vpp/dpdk/deb'
make[3]: *** [binary] Error 2
make[3]: Leaving directory `/home/vpp_communication/vpp/dpdk/deb'
dpkg-buildpackage: error: debian/rules binary gave error exit status 2
make[2]: *** [vpp-dpdk-dev_17.08-vpp1_amd64.deb] Error 2
make[2]: Leaving directory `/home/vpp_communication/vpp/dpdk'
make[1]: *** [install-deb] Error 2
make[1]: Leaving directory `/home/vpp_communication/vpp/dpdk'
make: *** [dpdk-install-dev] Error 2

input ‘make build-release’,the error information is shown below:
Making object file obj/mb_mgr_hmac_sha_224_submit_avx.o 
nasm -o obj/mb_mgr_hmac_sha_224_submit_avx.o -felf64 -Xgnu -gdwarf -DLINUX 
-D__linux__ -Iinclude/ -I./ -Iavx/ -Iavx2/ -Iavx512/ -Isse/ 
avx/mb_mgr_hmac_sha_224_submit_avx.asm
Making object file obj/mb_mgr_hmac_sha_224_submit_avx2.o 
Making object file obj/mb_mgr_hmac_sha_224_submit_avx512.o 
nasm -o obj/mb_mgr_hmac_sha_224_submit_avx2.o -felf64 -Xgnu -gdwarf -DLINUX 
-D__linux__ -Iinclude/ -I./ -Iavx/ -Iavx2/ -Iavx512/ -Isse/ 
avx2/mb_mgr_hmac_sha_224_submit_avx2.asm
nasm -o obj/mb_mgr_hmac_sha_224_submit_avx512.o -felf64 -Xgnu -gdwarf -DLINUX 
-D__linux__ -Iinclude/ -I./ -Iavx/ -Iavx2/ -Iavx512/ -Isse/ 
avx512/mb_mgr_hmac_sha_224_submit_avx512.asm
mb_mgr_hmac_sha_256_submit_avx512.asm:170: error: parser: instruction expected
mb_mgr_hmac_sha_256_submit_avx512.asm:171: error: symbol `vmovdqu32' redefined
mb_mgr_hmac_sha_256_submit_avx512.asm:171: error: parser: instruction expected
Making object file obj/mb_mgr_hmac_sha_224_submit_sse.o 
nasm -o obj/mb_mgr_hmac_sha_224_submit_sse.o -felf64 -Xgnu -gdwarf -DLINUX 
-D__linux__ -Iinclude/ -I./ -Iavx/ -Iavx2/ -Iavx512/ -Isse/ 
sse/mb_mgr_hmac_sha_224_submit_sse.asm
--
make[4]: *** [obj/mb_mgr_hmac_sha_224_submit_avx512.o] Error 1
make[4]: *** Waiting for unfinished jobs
--
--
--
make[4]: Leaving directory 
`/home/vpp_communication/vpp/build-root/build-vpp-native/dpdk/intel-ipsec-mb-0.46'
make[3]: *** [build-ipsec-mb] Error 2
make[3]: Leaving directory `/home/vpp_communication/vpp/dpdk'
make[2]: *** [ebuild-install] Error 2
make[2]: Leaving directory `/home/vpp_communication/vpp/dpdk'
make[1]: *** [dpdk-install] Error 2
make[1]: Leaving directory `/home/vpp_communication/vpp/build-root'
make: *** [build-release] Error 2
root@ubuntu:/home/vpp_communication/vpp# 

Is this a problem ? How can I solve it?

Thanks,
xyxue




___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Compile error

2017-09-05 Thread Sergio Gonzalez Monroy

Hi,

Have you run 'make install-dep' ?
Which nasm version do you have in your system?

Thanks,
Sergio

On 04/09/2017 11:59, 薛欣颖 wrote:


Hi,

I got code by : git clone https://gerrit.fd.io/r/vpp.

input ‘make  dpdk-install-dev’ the error infomation is shown below:
Building IPSec-MB 0.46 library
make[5]: Entering directory `/home/vpp_communication/vpp/dpdk'
mkdir -p /home/vpp_communication/vpp/dpdk/deb/debian/tmp/usr/lib/
# Do not build GCM stuff if we are building ISA_L
make -C /home/vpp_communication/vpp/dpdk/deb/_build/intel-ipsec-mb-0.46 -j 
NO_GCM=n
make: Entering an unknown directory
make: *** /home/vpp_communication/vpp/dpdk/deb/_build/intel-ipsec-mb-0.46: No 
such file or directory.  Stop.
make: Leaving an unknown directory
make[5]: *** [build-ipsec-mb] Error 2
make[5]: Leaving directory `/home/vpp_communication/vpp/dpdk'
make[4]: *** [override_dh_install] Error 2
make[4]: Leaving directory `/home/vpp_communication/vpp/dpdk/deb'
make[3]: *** [binary] Error 2
make[3]: Leaving directory `/home/vpp_communication/vpp/dpdk/deb'
dpkg-buildpackage: error: debian/rules binary gave error exit status 2
make[2]: *** [vpp-dpdk-dev_17.08-vpp1_amd64.deb] Error 2
make[2]: Leaving directory `/home/vpp_communication/vpp/dpdk'
make[1]: *** [install-deb] Error 2
make[1]: Leaving directory `/home/vpp_communication/vpp/dpdk'
make: *** [dpdk-install-dev] Error 2

input ‘make build-release’,the error information is shown below:
Making object file obj/mb_mgr_hmac_sha_224_submit_avx.o
nasm -o obj/mb_mgr_hmac_sha_224_submit_avx.o -felf64 -Xgnu -gdwarf -DLINUX 
-D__linux__ -Iinclude/ -I./ -Iavx/ -Iavx2/ -Iavx512/ -Isse/ 
avx/mb_mgr_hmac_sha_224_submit_avx.asm
Making object file obj/mb_mgr_hmac_sha_224_submit_avx2.o
Making object file obj/mb_mgr_hmac_sha_224_submit_avx512.o
nasm -o obj/mb_mgr_hmac_sha_224_submit_avx2.o -felf64 -Xgnu -gdwarf -DLINUX 
-D__linux__ -Iinclude/ -I./ -Iavx/ -Iavx2/ -Iavx512/ -Isse/ 
avx2/mb_mgr_hmac_sha_224_submit_avx2.asm
nasm -o obj/mb_mgr_hmac_sha_224_submit_avx512.o -felf64 -Xgnu -gdwarf -DLINUX 
-D__linux__ -Iinclude/ -I./ -Iavx/ -Iavx2/ -Iavx512/ -Isse/ 
avx512/mb_mgr_hmac_sha_224_submit_avx512.asm
mb_mgr_hmac_sha_256_submit_avx512.asm:170: error: parser: instruction expected
mb_mgr_hmac_sha_256_submit_avx512.asm:171: error: symbol `vmovdqu32' redefined
mb_mgr_hmac_sha_256_submit_avx512.asm:171: error: parser: instruction expected
Making object file obj/mb_mgr_hmac_sha_224_submit_sse.o
nasm -o obj/mb_mgr_hmac_sha_224_submit_sse.o -felf64 -Xgnu -gdwarf -DLINUX 
-D__linux__ -Iinclude/ -I./ -Iavx/ -Iavx2/ -Iavx512/ -Isse/ 
sse/mb_mgr_hmac_sha_224_submit_sse.asm
--
make[4]: *** [obj/mb_mgr_hmac_sha_224_submit_avx512.o] Error 1
make[4]: *** Waiting for unfinished jobs
--
--
--
make[4]: Leaving directory 
`/home/vpp_communication/vpp/build-root/build-vpp-native/dpdk/intel-ipsec-mb-0.46'
make[3]: *** [build-ipsec-mb] Error 2
make[3]: Leaving directory `/home/vpp_communication/vpp/dpdk'
make[2]: *** [ebuild-install] Error 2
make[2]: Leaving directory `/home/vpp_communication/vpp/dpdk'
make[1]: *** [dpdk-install] Error 2
make[1]: Leaving directory `/home/vpp_communication/vpp/build-root'
make: *** [build-release] Error 2
root@ubuntu:/home/vpp_communication/vpp#

Is this a problem ? How can I solve it?

Thanks,
xyxue



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev