Re: [vpp-dev] PEP8 expert needed

2019-01-29 Thread Paul Vinciguerra
They started popping up because a new version was released earlier today.

Project: https://pypi.org/project/pycodestyle/
Changelog: 2.5.0 (2019-01-29)
New checks:

* E117: over-indented code blocks
* W505: maximum doc-string length (only when configured with --max-doc-length)
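For illustration, E117 fires when a statement block is indented deeper than its context requires. A minimal sketch (the function and variable names are hypothetical, not from the VPP test suite):

```python
# Over-indented body: 8 spaces under `def` where 4 are expected -> E117
# def log_invalid(capture):
#         print("invalid packet:", capture[0])

# Conforming version: exactly one 4-space level per block
def log_invalid(capture):
    print("invalid packet:", capture[0])


def first_or_none(capture):
    # Nested blocks also add exactly one extra level each
    if capture:
        return capture[0]
    return None
```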
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12055): https://lists.fd.io/g/vpp-dev/message/12055
Mute This Topic: https://lists.fd.io/mt/29585853/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP send map-register with R=0 on LISP

2019-01-29 Thread Florin Coras
Does this [1] solve the issue?

Florin

[1] https://gerrit.fd.io/r/#/c/17151/

> On Jan 29, 2019, at 4:34 PM, Yosvany  wrote:
> 
> VPP with LISP sends map-registers with the reachable bit (R) set to 0 when 
> using an IID-to-VNI mapping different from 0.
> 
> How can I change the reachable bit to 1 in the map-register?
> 
> I am testing with a Cisco router as the MS/MR.
> -- 
> Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: [vpp-dev] Dual stack con VPP and VRF

2019-01-29 Thread Yosvany
Then what is the difference between these two commands:

ip table X and ip6 table X?

Great work on VPP.



On January 29, 2019 3:54:07 AM GMT-05:00, "Neale Ranns via Lists.Fd.Io" 
 wrote:
>Hi,
>
>You just need to give the interface an IPv4 and IPv6 address.
>
>DBGvpp# loop cre
>DBGvpp# ip table 1
>DBGvpp# set int ip table loop0 1
>DBGvpp# set int state loop0 up
>DBGvpp# set int ip address loop0 10.10.10.10/24
>DBGvpp# set int ip address loop0 2001::10/64
>
>The creation of the IP table 1 is optional, it would work in the
>‘default’ table too.
>
>/neale
>
>From:  on behalf of Yosvany 
>Date: Tuesday, January 29, 2019 at 02:34
>To: "dmar...@me.com" , "Damjan Marion via Lists.Fd.Io"
>, Marco Varlese 
>Cc: "vpp-dev@lists.fd.io" 
>Subject: [vpp-dev] Dual stack con VPP and VRF
>
>Can someone show me an example of how to use dual stack on one
>interface and VRF?
>--
>Enviado desde mi dispositivo Android con K-9 Mail. Por favor, disculpa
>mi brevedad.
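For reference, the addressing in Neale's quoted example can be checked from the VPP CLI afterwards. A sketch (command spellings vary slightly between releases):

```
DBGvpp# show interface address
DBGvpp# show ip fib
DBGvpp# show ip6 fib
```

`show interface address` should list both the IPv4 and IPv6 addresses on loop0, and the two FIB dumps should each show a table 1 entry for the respective address family.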

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


[vpp-dev] VPP send map-register with R=0 on LISP

2019-01-29 Thread Yosvany
VPP with LISP sends map-registers with the reachable bit (R) set to 0 when 
using an IID-to-VNI mapping different from 0.

How can I change the reachable bit to 1 in the map-register?

I am testing with a Cisco router as the MS/MR.
-- 
Enviado desde mi dispositivo Android con K-9 Mail. Por favor, disculpa mi 
brevedad.


Re: [vpp-dev] PEP8 expert needed

2019-01-29 Thread Ed Kern via Lists.Fd.Io
I’m sure someone beat me to it but if not..

https://gerrit.fd.io/r/17146

Ed



On Jan 29, 2019, at 12:42 PM, Damjan Marion via Lists.Fd.Io 
mailto:dmarion=me@lists.fd.io>> wrote:



Can somebody with Python skills take care of these checkstyle errors? Not sure 
why they started popping up now...

Thanks!

19:13:35 /w/workspace/vpp-checkstyle-verify-master/test/test_syslog.py:132:17: 
E117 over-indented
19:13:35 self.logger.error(ppp("invalid packet:", capture[0]))
19:13:35 ^
19:13:35 /w/workspace/vpp-checkstyle-verify-master/test/test_syslog.py:187:17: 
E117 over-indented
19:13:35 self.logger.error(ppp("invalid packet:", capture[0]))
19:13:35 ^

--
Damjan



[vpp-dev] PEP8 expert needed

2019-01-29 Thread Damjan Marion via Lists.Fd.Io


Can somebody with Python skills take care of these checkstyle errors? Not sure 
why they started popping up now...

Thanks!

19:13:35 /w/workspace/vpp-checkstyle-verify-master/test/test_syslog.py:132:17: 
E117 over-indented 
19:13:35 self.logger.error(ppp("invalid packet:", capture[0])) 
19:13:35 ^ 
19:13:35 /w/workspace/vpp-checkstyle-verify-master/test/test_syslog.py:187:17: 
E117 over-indented 
19:13:35 self.logger.error(ppp("invalid packet:", capture[0])) 
19:13:35 ^

-- 
Damjan



Re: [vpp-dev] Question about crypto dev queue pairs #vpp

2019-01-29 Thread Lee Roberts
Sergio,

I encountered the same problem when attempting to enable the AMD CCP poll mode 
driver
in VPP 18.10.

As mentioned earlier in this e-mail thread, with max_qp = 1, max_res_idx 
becomes 65535
in the following statement:

 max_res_idx = (dev->max_qp / 2) - 1;

I hadn't found the time to study/debug the code to understand whether this was a
VPP or DPDK issue.  If you have a patch available in the next few days, I could
test it with the AMD CCP device.


-Lee Roberts


From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Sergio 
Gonzalez Monroy
Sent: Tuesday, January 29, 2019 2:18 AM
To: manuel.alo...@cavium.com; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Question about crypto dev queue pairs #vpp

Hi Manuel,

This is likely a mismatch in VPP side. I only tested it with QAT (2 qps per VF) 
and SW cryptodevs (default 8 qps) at the time (over a year ago).

I only tested it with SW cryptodevs and QAT, that was the HW I had access to.

So like I mentioned before, if you do not want to rework the code to support 1 
qp per resource, then a check for at least 2 qps per device is required to use 
that device.

I could provide a patch to use 1 qp per resource over the next few days if you 
are interested, or I could review if you decide to do the work.

Which device do you want to use?

Regards,
Sergio

From: vpp-dev@lists.fd.io 
mailto:vpp-dev@lists.fd.io>> on behalf of 
manuel.alo...@cavium.com 
mailto:manuel.alo...@cavium.com>>
Sent: Monday, January 28, 2019 4:15 PM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Question about crypto dev queue pairs #vpp

Hi Sergio,

thank you for the explanation; I see that there are 2 (or more) qps. My concern 
was due to DPDK, since a few device drivers export only one queue pair for 
their crypto devices.
(I followed the code assuming one qp, based on a DPDK 18.11 exported value.)
So I do not know where the mismatch is: VPP or DPDK?


BR,
Manuel


Re: [vpp-dev] How do I get the "dpdk-shared" in VPP ?

2019-01-29 Thread Marco Varlese


On 1/29/19 1:51 PM, Damjan Marion via Lists.Fd.Io wrote:
> 
> As discussed on the last community call, japi is now a new fd.io
> project and will be in a separate repo...
Thanks!

> 
> 
>> On 29 Jan 2019, at 13:10, Marco Varlese > > wrote:
>>
>> How does the new build-system build the japi (e.g. JAR)?
>> I can't get them to build...
>>
>>
>> On 1/28/19 5:57 PM, Damjan Marion via Lists.Fd.Io wrote:
>>>
>>> With this change, I'm able to compile VPP out of tarball produced by
>>> "make dist".
>>>
>>> https://gerrit.fd.io/r/#/c/17125/
>>>
>>>
 On 28 Jan 2019, at 13:35, Damjan Marion via Lists.Fd.Io
 >>>  >
 wrote:



> On 28 Jan 2019, at 12:08, Marco Varlese  
> > wrote:
>
> Is there still a way to use the old infrastructure to build the code?

 No, that doesn't make sense.

>
> Apparently, cmake works when used inside the GIT repo but fails to
> build
> when using the tarball generated via "make dist" (required indeed for
> downstream consumption).

 that should be easily fixable

>
> On 1/26/19 2:22 PM, Damjan Marion via Lists.Fd.Io wrote:
>>
>> Here it is: https://gerrit.fd.io/r/17094
>>
>>
>> $ mkdir build-vpp stage
>>
>> $ git clone 
>>
>> $ cd dpdk
>>
>> $ cat << _EOF_ | patch -p1
>> diff --git a/config/common_base b/config/common_base
>> index d12ae98bc..42d6f53dd 100644
>> --- a/config/common_base
>> +++ b/config/common_base
>> @@ -38,7 +38,7 @@ CONFIG_RTE_ARCH_STRICT_ALIGN=n
>>  #
>>  # Compile to share library
>>  #
>> -CONFIG_RTE_BUILD_SHARED_LIB=n
>> +CONFIG_RTE_BUILD_SHARED_LIB=y
>>
>>  #
>>  # Use newest code breaking previous ABI
>> _EOF_
>>
>>
>> $ make -j install T=x86_64-native-linuxapp-gcc DESTDIR=../stage
>>
>> $ cd ../build-vpp
>>
>> $ cmake -G Ninja -DCMAKE_PREFIX_PATH:PATH=$PWD/../stage
>> /path/to/vpp/src
>>
>> $ ninja
>>
>> $ LD_LIBRARY_PATH=../stage/lib ldd lib/vpp_plugins/dpdk_plugin.so
>> linux-vdso.so.1 (0x7ffe2a3b7000)
>> librte_cryptodev.so.5.1 => ../stage/lib/librte_cryptodev.so.5.1
>> (0x7fd5e1fa)
>> librte_eal.so.9.1 => ../stage/lib/librte_eal.so.9.1
>> (0x7fd5e1ed1000)
>> librte_ethdev.so.11.1 => ../stage/lib/librte_ethdev.so.11.1
>> (0x7fd5e1e3)
>> librte_mbuf.so.4.1 => ../stage/lib/librte_mbuf.so.4.1
>> (0x7fd5e1e28000)
>> librte_mempool.so.5.1 => ../stage/lib/librte_mempool.so.5.1
>> (0x7fd5e1e1f000)
>> librte_pmd_bond.so.2.1 => ../stage/lib/librte_pmd_bond.so.2.1
>> (0x7fd5e1dfe000)
>> librte_ring.so.2.1 => ../stage/lib/librte_ring.so.2.1
>> (0x7fd5e1df9000)
>> librte_sched.so.1.1 => ../stage/lib/librte_sched.so.1.1
>> (0x7fd5e1ded000)
>> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7fd5e1be9000)
>> /lib64/ld-linux-x86-64.so.2 (0x7fd5e211d000)
>> librte_kvargs.so.1.1 => ../stage/lib/librte_kvargs.so.1.1
>> (0x7fd5e1be4000)
>> libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7fd5e1bdc000)
>> libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
>> (0x7fd5e1bbb000)
>> librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7fd5e1bb1000)
>> libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1
>> (0x7fd5e19a6000)
>> librte_cmdline.so.2.1 => ../stage/lib/librte_cmdline.so.2.1
>> (0x7fd5e199a000)
>> librte_pci.so.1.1 => ../stage/lib/librte_pci.so.1.1
>> (0x7fd5e1993000)
>> librte_bus_vdev.so.2.1 => ../stage/lib/librte_bus_vdev.so.2.1
>> (0x7fd5e198c000)
>> libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7fd5e17ff000)
>>
>> -- 
>> Damjan
>>
>>
>>
>>
>>> On 25 Jan 2019, at 18:03, Kinsella, Ray >> 
>>> 
>>> > wrote:
>>>
>>> I tried doing this recently and it barfed.
>>> How did you get it working?
>>>
>>> Ray K
>>>
 -Original Message-
 From: vpp-dev@lists.fd.io 
  
 [mailto:vpp-dev@lists.fd.io] On Behalf Of Marco
 Varlese
 Sent: Friday 25 January 2019 12:38
 To: Damjan Marion mailto:dmar...@me.com>
  >
 Cc: vpp-dev@lists.fd.io 
  
 Subject: Re: [vpp-dev] How do I get the "dpdk-shared" in VPP ?

 Never mind... I did find the issue. All good ;)

 Thank you Damjan!!!

Re: [vpp-dev] How do I get the "dpdk-shared" in VPP ?

2019-01-29 Thread Damjan Marion via Lists.Fd.Io

As discussed on the last community call, japi is now a new fd.io project and 
will be in a separate repo...


> On 29 Jan 2019, at 13:10, Marco Varlese  wrote:
> 
> How does the new build-system build the japi (e.g. JAR)?
> I can't get them to build...
> 
> 
> On 1/28/19 5:57 PM, Damjan Marion via Lists.Fd.Io wrote:
>> 
>> With this change, I'm able to compile VPP out of tarball produced by
>> "make dist".
>> 
>> https://gerrit.fd.io/r/#/c/17125/ 
>> 
>> 
>>> On 28 Jan 2019, at 13:35, Damjan Marion via Lists.Fd.Io
>>> mailto:dmarion=me@lists.fd.io> 
>>> >> 
>>> wrote:
>>> 
>>> 
>>> 
 On 28 Jan 2019, at 12:08, Marco Varlese >>> 
 >> wrote:
 
 Is there still a way to use the old infrastructure to build the code?
>>> 
>>> No, that doesn't make sense.
>>> 
 
 Apparently, cmake works when used inside the GIT repo but fails to build
 when using the tarball generated via "make dist" (required indeed for
 downstream consumption).
>>> 
>>> that should be easily fixable
>>> 
 

Re: [vpp-dev] How do I get the "dpdk-shared" in VPP ?

2019-01-29 Thread Marco Varlese
Hi Damjan,

On 1/29/19 10:39 AM, Damjan Marion wrote:
> 
> Dear Marco,
> 
> Maybe my first explanation was not clear enough.
> 
> (1) In the VPP repo we use cmake + (ninja or gnumake) for compiling VPP,
> which includes searching for dependencies (different libs like dpdk, openssl, 
> uuid).
> To compile VPP, everything you need is in the src/ directory.
> If cmake is not able to find some dependency like DPDK, it will warn
> you and disable that component (i.e. the DPDK plugin).
> 
> (2) Then we have our build environment crafted out of different Makefiles,
> which is there mainly to support developers and for our internal packaging.
> What those sets of makefiles do is:
>  - download, compile (and optionally package) dependencies like dpdk, 
> ipsecmb, nasm
>  - compile VPP by passing the right arguments to (1) so cmake is able to find 
> libraries in the right place
> 
> If you are working on distro packaging, especially if you are linking against
> the distro version of libraries like DPDK, there is no sense in using (2).
> Just call cmake with the right arguments from your .spec file, followed
> by "cmake --build", similar to the majority of open source projects.
> Simply forget about anything in the build-root/, build-data/ or build/
> directories. They are all part of (2).
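The packaging flow Damjan describes, i.e. calling (1) directly and skipping (2), boils down to something like the following in a .spec %build/%install section. A sketch only: the source path, generator choice, and install prefix are placeholders, not the actual packaging layout:

```
mkdir build && cd build
cmake /path/to/vpp/src -G Ninja -DCMAKE_INSTALL_PREFIX=/usr
cmake --build .
DESTDIR="$RPM_BUILD_ROOT" cmake --build . --target install
```

With system libraries installed (DPDK, openssl, uuid), cmake's dependency search finds them without the downloads that (2) would otherwise perform.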
I managed to get a new .spec in place...
Many paths have changed so it was quite a bit of refactoring.
I'm just struggling right now to have the good, old JAR files built...
are they still available or no longer built?
> 
> Hope this explains,
It definitely helps! Thanks Damjan.

Cheers,
Marco
> 
> 
>> On 29 Jan 2019, at 08:07, Marco Varlese  wrote:
>>
>> Thanks Damjan. I will try that too.
>>
>> A last question: I assume I can keep using the "make -C build-root
>> install-packages" if I pull your last patches. Am I right / wrong?
>>
>>
>> Thanks,
>> Marco
>>
>> On 1/28/19 5:57 PM, Damjan Marion via Lists.Fd.Io wrote:
>>>
>>> With this change, I'm able to compile VPP out of tarball produced by
>>> "make dist".
>>>
>>> https://gerrit.fd.io/r/#/c/17125/
>>>
>>>
 On 28 Jan 2019, at 13:35, Damjan Marion via Lists.Fd.Io
 mailto:dmarion=me@lists.fd.io>> wrote:



> On 28 Jan 2019, at 12:08, Marco Varlese  > wrote:
>
> Is there still a way to use the old infrastructure to build the code?

 No, that doesn't make sense.

>
> Apparently, cmake works when used inside the GIT repo but fails to build
> when using the tarball generated via "make dist" (required indeed for
> downstream consumption).

 that should be easily fixable

>
> On 1/26/19 2:22 PM, Damjan Marion via Lists.Fd.Io wrote:
>>
>> Here it is: https://gerrit.fd.io/r/17094
>>
>>
>> $ mkdir build-vpp stage
>>
>> $ git clone 
>>
>> $ cd dpdk
>>
>> $ cat << _EOF_ | patch -p1
>> diff --git a/config/common_base b/config/common_base
>> index d12ae98bc..42d6f53dd 100644
>> --- a/config/common_base
>> +++ b/config/common_base
>> @@ -38,7 +38,7 @@ CONFIG_RTE_ARCH_STRICT_ALIGN=n
>>  #
>>  # Compile to share library
>>  #
>> -CONFIG_RTE_BUILD_SHARED_LIB=n
>> +CONFIG_RTE_BUILD_SHARED_LIB=y
>>
>>  #
>>  # Use newest code breaking previous ABI
>> _EOF_
>>
>>
>> $ make -j install T=x86_64-native-linuxapp-gcc DESTDIR=../stage
>>
>> $ cd ../build-vpp
>>
>> $ cmake -G Ninja -DCMAKE_PREFIX_PATH:PATH=$PWD/../stage /path/to/vpp/src
>>
>> $ ninja
>>
>> $ LD_LIBRARY_PATH=../stage/lib ldd lib/vpp_plugins/dpdk_plugin.so
>> linux-vdso.so.1 (0x7ffe2a3b7000)
>> librte_cryptodev.so.5.1 => ../stage/lib/librte_cryptodev.so.5.1
>> (0x7fd5e1fa)
>> librte_eal.so.9.1 => ../stage/lib/librte_eal.so.9.1 (0x7fd5e1ed1000)
>> librte_ethdev.so.11.1 => ../stage/lib/librte_ethdev.so.11.1
>> (0x7fd5e1e3)
>> librte_mbuf.so.4.1 => ../stage/lib/librte_mbuf.so.4.1
>> (0x7fd5e1e28000)
>> librte_mempool.so.5.1 => ../stage/lib/librte_mempool.so.5.1
>> (0x7fd5e1e1f000)
>> librte_pmd_bond.so.2.1 => ../stage/lib/librte_pmd_bond.so.2.1
>> (0x7fd5e1dfe000)
>> librte_ring.so.2.1 => ../stage/lib/librte_ring.so.2.1
>> (0x7fd5e1df9000)
>> librte_sched.so.1.1 => ../stage/lib/librte_sched.so.1.1
>> (0x7fd5e1ded000)
>> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7fd5e1be9000)
>> /lib64/ld-linux-x86-64.so.2 (0x7fd5e211d000)
>> librte_kvargs.so.1.1 => ../stage/lib/librte_kvargs.so.1.1
>> (0x7fd5e1be4000)
>> libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7fd5e1bdc000)
>> libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
>> (0x7fd5e1bbb000)
>> librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7fd5e1bb1000)
>> libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1
>> (0x7fd5e19a6000)
>> librte_cmdline.so.2.1 => 

Re: [vpp-dev] How do I get the "dpdk-shared" in VPP ?

2019-01-29 Thread Marco Varlese
How does the new build-system build the japi (e.g. JAR)?
I can't get them to build...


On 1/28/19 5:57 PM, Damjan Marion via Lists.Fd.Io wrote:
> 
> With this change, I'm able to compile VPP out of tarball produced by
> "make dist".
> 
> https://gerrit.fd.io/r/#/c/17125/
> 
> 
>> On 28 Jan 2019, at 13:35, Damjan Marion via Lists.Fd.Io
>> mailto:dmarion=me@lists.fd.io>> wrote:
>>
>>
>>
>>> On 28 Jan 2019, at 12:08, Marco Varlese >> > wrote:
>>>
>>> Is there still a way to use the old infrastructure to build the code?
>>
>> No, that doesn't make sense.
>>
>>>
>>> Apparently, cmake works when used inside the GIT repo but fails to build
>>> when using the tarball generated via "make dist" (required indeed for
>>> downstream consumption).
>>
>> that should be easily fixable
>>
>>>
>>> On 1/26/19 2:22 PM, Damjan Marion via Lists.Fd.Io wrote:

 Here it is: https://gerrit.fd.io/r/17094


 $ mkdir build-vpp stage

 $ git clone 

 $ cd dpdk

 $ cat << _EOF_ | patch -p1
 diff --git a/config/common_base b/config/common_base
 index d12ae98bc..42d6f53dd 100644
 --- a/config/common_base
 +++ b/config/common_base
 @@ -38,7 +38,7 @@ CONFIG_RTE_ARCH_STRICT_ALIGN=n
  #
  # Compile to share library
  #
 -CONFIG_RTE_BUILD_SHARED_LIB=n
 +CONFIG_RTE_BUILD_SHARED_LIB=y

  #
  # Use newest code breaking previous ABI
 _EOF_


 $ make -j install T=x86_64-native-linuxapp-gcc DESTDIR=../stage

 $ cd ../build-vpp

 $ cmake -G Ninja -DCMAKE_PREFIX_PATH:PATH=$PWD/../stage /path/to/vpp/src

 $ ninja


 -- 
 Damjan




> On 25 Jan 2019, at 18:03, Kinsella, Ray  
> > wrote:
>
> I tried doing this recently and it barfed.
> How did you get it working?
>
> Ray K
>
>> -Original Message-
>> From: vpp-dev@lists.fd.io
>>  
>> [mailto:vpp-dev@lists.fd.io] On Behalf Of Marco
>> Varlese
>> Sent: Friday 25 January 2019 12:38
>> To: Damjan Marion >  >
>> Cc: vpp-dev@lists.fd.io
>>  
>> Subject: Re: [vpp-dev] How do I get the "dpdk-shared" in VPP ?
>>
>> Never mind... I did find the issue. All good ;)
>>
>> Thank you Damjan!!!
>>
>> On 1/25/19 1:26 PM, Marco Varlese wrote:
>>>
>>>
>>> On 1/25/19 11:14 AM, Damjan Marion wrote:


> On 25 Jan 2019, at 10:49, Marco Varlese  
> 
> > wrote:
>
> Hi Damjan,
>
> On 1/24/19 10:46 PM, Damjan Marion via Lists.Fd.Io wrote:
>>
>> In theory like any other cmake project:
>>
>> $ mkdir build
>> $ cd build
>> $ cmake /path/to/vpp/src  $ make $ make install
> Hmmm, not sure if I explained myself in the right way.
>
> The problem today is that I cannot find a way to tell VPP _not_ to
> download the dpdk 

Re: [vpp-dev] Getting core in vec_resize (vpp 18.01)

2019-01-29 Thread Damjan Marion via Lists.Fd.Io
Please search this mailing-list archive; Dave provided some hints some time 
ago.

90M is not terribly high, but it can also be the victim of something else 
holding memory.
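If the process really is running out of main heap, the heap can be enlarged in startup.conf. A sketch: `heapsize` is the relevant top-level startup.conf token in VPP of this era, and 2G is an arbitrary example value, not a recommendation:

```
heapsize 2G
```

This only helps if the allocation failure is genuine exhaustion; it will not mask a corrupted vector pointer, which crashes regardless of heap size.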


> On 29 Jan 2019, at 12:54, chetan bhasin  wrote:
> 
> Hi Damjan,
> 
> Thanks for the reply.
> 
> what should be a typical way of debugging a corrupt vector pointer? E.g. can we 
> set a watchpoint on some field in the vector header that will most likely get 
> disturbed, so that we can nab whoever is corrupting the vector.
> 
> With 1M entries, do you think 90M is an issue?
>  
> Clearly we have a lurking bug somewhere.
> 
> Thanks,
> Chetan Bhasin
> 
> 
> On Tue, Jan 29, 2019, 16:53 Damjan Marion wrote:
> 
> typically this happens when you run out of memory / main heap size, or you 
> have a corrupted vector pointer...
> 
> It will be easier to read your traceback if it is captured with a debug image, 
> but according to frame 11, your vector is already 90MB big.
> Is that expected?
> 
> 
>> On 29 Jan 2019, at 11:31, chetan bhasin > > wrote:
>> 
>> Hello Everyone,
>> 
>> I know 18.01 is not supported now, but I just want to understand what could 
>> be the reason for the below crash: we are adding entries to a pool using 
>> pool_get_aligned, which is causing vec_resize.
>> 
>> This issue comes when we reach around 1M entries.
>> 
>> Is it due to limited memory, memory corruption, or something else?
>> 
>> Core was generated by `bin/vpp -c co'.
>> Program terminated with signal 6, Aborted.
>> #0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at 
>> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
>> 56      return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
>> Missing separate debuginfos, use: debuginfo-install OPWVmepCR-7.0-el7.x86_64
>> (gdb) bt
>> #0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at 
>> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
>> #1  0x2ab5340298f8 in __GI_abort () at abort.c:90
>> #2  0x00405ea9 in os_panic () at 
>> /bfs-build/build-area.42/builds/LinuxNBngp_7.X_RH7/2019-01-07-2044/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:266
>> #3  0x2ab53213aad9 in unix_signal_handler (signum=, 
>> si=, uc=)
>> at vpp/vpp_1801/build-data/../src/vlib/unix/main.c:126
>> #4  
>> #5  _mm_storeu_si128 (__B=..., __P=) at 
>> /usr/lib/gcc/x86_64-redhat-linux/4.8.5/include/emmintrin.h:702
>> #6  clib_mov16 (src=, dst=)
>> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:60
>> #7  clib_mov32 (src=, dst=)
>> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:66
>> #8  clib_mov64 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
>> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:74
>> #9  clib_mov128 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
>> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:80
>> #10 clib_mov256 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
>> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:87
>> #11 clib_memcpy (n=90646888, src=0x2ab62d1b04e0, dst=0x2ab5426e1fe0)
>> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:325
>> #12 vec_resize_allocate_memory (v=, 
>> length_increment=length_increment@entry=1, data_bytes=, 
>> header_bytes=, header_bytes@entry=48,
>> data_align=data_align@entry=64) at 
>> vpp/vpp_1801/build-data/../src/vppinfra/vec.c:95
>> #13 0x2ab7b74a61c1 in _vec_resize (data_align=64, header_bytes=48, 
>> data_bytes=, length_increment=1, v=)
>> at include/vppinfra/vec.h:142
>> #14 xxx_allocate_flow (fm=0x2ab7b76c8fc0 )
>> at vpp/plugins/src/fastpath/fastpath.c:1502
>> 
>> 
>> Regards,
>> Chetan Bhasin
> 
> -- 
> Damjan
> 

-- 
Damjan



Re: [vpp-dev] Getting core in vec_resize (vpp 18.01)

2019-01-29 Thread chetan bhasin
Hi Damjan,


Thanks for the reply.


what should be a typical way of debugging a corrupt vector pointer? E.g. can
we set a watchpoint on some field in the vector header that will most
likely get disturbed, so that we can nab whoever is corrupting the vector.


With 1M entries, do you think 90M is an issue?



Clearly we have a lurking bug somewhere.
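One way to attempt the watchpoint idea from gdb, as a sketch: `vec_header_t` and its `len` field come from vppinfra's vec.h of that era, `v` stands for the suspect vector pointer, and `watch -l` sets a location (hardware) watchpoint on the header field:

```
(gdb) set $hdr = (vec_header_t *)((u8 *)v - sizeof (vec_header_t))
(gdb) print $hdr->len
(gdb) watch -l $hdr->len
(gdb) continue
```

gdb then stops whenever the header's length field changes, which catches both legitimate resizes and any stray write that corrupts the header; comparing the backtrace at each stop against the expected resize path points at the corrupting code.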


Thanks,

Chetan Bhasin


On Tue, Jan 29, 2019, 16:53 Damjan Marion wrote:
> typically this happens when you run out of memory / main heap size, or you
> have a corrupted vector pointer...
>
> It will be easier to read your traceback if it is captured with a debug
> image, but according to frame 11, your vector is already 90MB big.
> Is that expected?
>
>
> On 29 Jan 2019, at 11:31, chetan bhasin 
> wrote:
>
> Hello Everyone, I know 18.01 is not supported now, but I just want to
> understand what could be the reason for the below crash. We are adding
> entries to a pool using pool_get_aligned, which is causing vec_resize. This
> issue comes when we reach around 1M entries. Is it due to limited memory,
> memory corruption, or something else? Core was generated by `bin/vpp
> -c co'.
> Program terminated with signal 6, Aborted.
> #0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
> 56      return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
> Missing separate debuginfos, use: debuginfo-install
> OPWVmepCR-7.0-el7.x86_64
> (gdb) bt
> #0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
> #1  0x2ab5340298f8 in __GI_abort () at abort.c:90
> #2  0x00405ea9 in os_panic () at
> /bfs-build/build-area.42/builds/LinuxNBngp_7.X_RH7/2019-01-07-2044/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:266
> #3  0x2ab53213aad9 in unix_signal_handler (signum=,
> si=, uc=)
> at vpp/vpp_1801/build-data/../src/vlib/unix/main.c:126
> #4  
> #5  _mm_storeu_si128 (__B=..., __P=) at
> /usr/lib/gcc/x86_64-redhat-linux/4.8.5/include/emmintrin.h:702
> #6  clib_mov16 (src=, dst=)
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:60
> #7  clib_mov32 (src=, dst=)
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:66
> #8  clib_mov64 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:74
> #9  clib_mov128 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:80
> #10 clib_mov256 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:87
> #11 clib_memcpy (n=90646888, src=0x2ab62d1b04e0, dst=0x2ab5426e1fe0)
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:325
> #12 vec_resize_allocate_memory (v=,
> length_increment=length_increment@entry=1, data_bytes=,
> header_bytes=, header_bytes@entry=48,
> data_align=data_align@entry=64) at
> vpp/vpp_1801/build-data/../src/vppinfra/vec.c:95
> #13 0x2ab7b74a61c1 in _vec_resize (data_align=64, header_bytes=48,
> data_bytes=, length_increment=1, v=)
> at include/vppinfra/vec.h:142
> #14 xxx_allocate_flow (fm=0x2ab7b76c8fc0 )
> atvpp/plugins/src/fastpath/fastpath.c:1502 Regards, Chetan Bhasin
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
>
> View/Reply Online (#12039): https://lists.fd.io/g/vpp-dev/message/12039
> Mute This Topic: https://lists.fd.io/mt/29580803/675642
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [dmar...@me.com]
> -=-=-=-=-=-=-=-=-=-=-=-
>
>
> --
> Damjan
>
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12042): https://lists.fd.io/g/vpp-dev/message/12042
Mute This Topic: https://lists.fd.io/mt/29580803/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Getting core in vec_resize (vpp 18.01)

2019-01-29 Thread Damjan Marion via Lists.Fd.Io

Typically this happens when you run out of memory (main heap size) or you have 
a corrupted vector pointer.

It will be easier to read your traceback if it is captured with a debug image, 
but according to frame 11, your vector is already 90MB big.
Is this expected?


> On 29 Jan 2019, at 11:31, chetan bhasin  wrote:
> 
> Hello Everyone,
> 
> I know 18.01 is not supported now, but I just want to understand what could be 
> the reason for the below crash: we are adding entries in a pool using 
> pool_get_aligned, which is causing vec_resize.
> 
> This issue comes when we reach around 1M entries.
> 
> Is it due to limited memory, some memory corruption, or something 
> else?
> 
> Core was generated by `bin/vpp -c co'.
> Program terminated with signal 6, Aborted.
> #0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
> 56    return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
> Missing separate debuginfos, use: debuginfo-install OPWVmepCR-7.0-el7.x86_64
> (gdb) bt
> #0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
> #1  0x2ab5340298f8 in __GI_abort () at abort.c:90
> #2  0x00405ea9 in os_panic () at 
> /bfs-build/build-area.42/builds/LinuxNBngp_7.X_RH7/2019-01-07-2044/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:266
> #3  0x2ab53213aad9 in unix_signal_handler (signum=<optimized out>, 
> si=<optimized out>, uc=<optimized out>)
> at vpp/vpp_1801/build-data/../src/vlib/unix/main.c:126
> #4  <signal handler called>
> #5  _mm_storeu_si128 (__B=..., __P=<optimized out>) at 
> /usr/lib/gcc/x86_64-redhat-linux/4.8.5/include/emmintrin.h:702
> #6  clib_mov16 (src=<optimized out>, dst=<optimized out>)
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:60
> #7  clib_mov32 (src=<optimized out>, dst=<optimized out>)
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:66
> #8  clib_mov64 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:74
> #9  clib_mov128 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:80
> #10 clib_mov256 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:87
> #11 clib_memcpy (n=90646888, src=0x2ab62d1b04e0, dst=0x2ab5426e1fe0)
> at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:325
> #12 vec_resize_allocate_memory (v=<optimized out>, 
> length_increment=length_increment@entry=1, data_bytes=<optimized out>, 
> header_bytes=<optimized out>, header_bytes@entry=48,
> data_align=data_align@entry=64) at 
> vpp/vpp_1801/build-data/../src/vppinfra/vec.c:95
> #13 0x2ab7b74a61c1 in _vec_resize (data_align=64, header_bytes=48, 
> data_bytes=<optimized out>, length_increment=1, v=<optimized out>)
> at include/vppinfra/vec.h:142
> #14 xxx_allocate_flow (fm=0x2ab7b76c8fc0 ) 
> at vpp/plugins/src/fastpath/fastpath.c:1502
> 
> 
> Regards,
> Chetan Bhasin
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#12039): https://lists.fd.io/g/vpp-dev/message/12039
> Mute This Topic: https://lists.fd.io/mt/29580803/675642
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [dmar...@me.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-- 
Damjan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12041): https://lists.fd.io/g/vpp-dev/message/12041
Mute This Topic: https://lists.fd.io/mt/29580803/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Getting core in vec_resize (vpp 18.01)

2019-01-29 Thread chetan bhasin
Hello Everyone, I know 18.01 is not supported now, but I just want to
understand what could be the reason for the below crash: we are adding
entries in a pool using pool_get_aligned, which is causing vec_resize. This
issue comes when we reach around 1M entries. Is it due to limited
memory, some memory corruption, or something else?

Core was generated by `bin/vpp -c co'.
Program terminated with signal 6, Aborted.
#0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
56    return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
Missing separate debuginfos, use: debuginfo-install OPWVmepCR-7.0-el7.x86_64
(gdb) bt
#0  0x2ab534028207 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x2ab5340298f8 in __GI_abort () at abort.c:90
#2  0x00405ea9 in os_panic () at
/bfs-build/build-area.42/builds/LinuxNBngp_7.X_RH7/2019-01-07-2044/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:266
#3  0x2ab53213aad9 in unix_signal_handler (signum=<optimized out>,
si=<optimized out>, uc=<optimized out>)
at vpp/vpp_1801/build-data/../src/vlib/unix/main.c:126
#4  <signal handler called>
#5  _mm_storeu_si128 (__B=..., __P=<optimized out>) at
/usr/lib/gcc/x86_64-redhat-linux/4.8.5/include/emmintrin.h:702
#6  clib_mov16 (src=<optimized out>, dst=<optimized out>)
at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:60
#7  clib_mov32 (src=<optimized out>, dst=<optimized out>)
at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:66
#8  clib_mov64 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:74
#9  clib_mov128 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:80
#10 clib_mov256 (src=0x2ab62d1b04e0 "", dst=0x2ab5426e1fe0 "")
at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:87
#11 clib_memcpy (n=90646888, src=0x2ab62d1b04e0, dst=0x2ab5426e1fe0)
at vpp/vpp_1801/build-data/../src/vppinfra/memcpy_sse3.h:325
#12 vec_resize_allocate_memory (v=<optimized out>,
length_increment=length_increment@entry=1, data_bytes=<optimized out>,
header_bytes=<optimized out>, header_bytes@entry=48,
data_align=data_align@entry=64) at
vpp/vpp_1801/build-data/../src/vppinfra/vec.c:95
#13 0x2ab7b74a61c1 in _vec_resize (data_align=64, header_bytes=48,
data_bytes=<optimized out>, length_increment=1, v=<optimized out>)
at include/vppinfra/vec.h:142
#14 xxx_allocate_flow (fm=0x2ab7b76c8fc0 )
at vpp/plugins/src/fastpath/fastpath.c:1502

Regards, Chetan Bhasin
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12039): https://lists.fd.io/g/vpp-dev/message/12039
Mute This Topic: https://lists.fd.io/mt/29580803/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] multi-queue tap interface #vpp

2019-01-29 Thread Mohsin Kazmi via Lists.Fd.Io
Current implementation of tap doesn't support multi-queue, but it should not 
be hard to implement multi-queue support for the tap driver. You are welcome to 
contribute.



From: vpp-dev@lists.fd.io  on behalf of Ranadip Das 

Sent: Tuesday, January 29, 2019 1:57 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] multi-queue tap interface #vpp

Does vpp support multi-queue tap interface? If yes, how do I create a tap 
interface with multi-queue support?
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12038): https://lists.fd.io/g/vpp-dev/message/12038
Mute This Topic: https://lists.fd.io/mt/29577498/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] How do I get the "dpdk-shared" in VPP ?

2019-01-29 Thread Damjan Marion via Lists.Fd.Io

Dear Marco,

Maybe my first explanation was not clear enough.

(1) In the VPP repo we use cmake + (ninja or gnumake) for compiling VPP,
which includes searching for dependencies (different libs like dpdk, openssl, 
uuid).
To compile VPP, everything you need is in the src/ directory.
If cmake is not able to find some dependencies like DPDK, it will warn
you and disable that component (i.e. the DPDK plugin).

(2) Then we have our build environment crafted out of different Makefiles,
which is there mainly to support developers and for our internal packaging.
What those sets of makefiles are doing is:
 - downloading, compiling (and optionally packaging) dependencies like dpdk, 
ipsecmb, nasm
 - compiling VPP by passing the right arguments to (1) so cmake is able to find 
libraries in the right place

If you are working on distro packaging, especially if you are linking against
the distro version of libraries like DPDK, there is no sense in using (2).
Just call cmake with the right arguments from your .spec file, followed
by "cmake --build", similar to the majority of open source projects.
Simply forget about anything in the build-root/, build-data/, or build/
directories. They are all part of (2).

Hope this explains,


> On 29 Jan 2019, at 08:07, Marco Varlese  wrote:
> 
> Thanks Damjan. I will try that too.
> 
> A last question: I assume I can keep using the "make -C build-root
> install-packages" if I pull your last patches. Am I right / wrong?
> 
> 
> Thanks,
> Marco
> 
> On 1/28/19 5:57 PM, Damjan Marion via Lists.Fd.Io wrote:
>> 
>> With this change, I'm able to compile VPP out of tarball produced by
>> "make dist".
>> 
>> https://gerrit.fd.io/r/#/c/17125/
>> 
>> 
>>> On 28 Jan 2019, at 13:35, Damjan Marion via Lists.Fd.Io
>>> <dmarion=me@lists.fd.io> wrote:
>>> 
>>> 
>>> 
>>>> On 28 Jan 2019, at 12:08, Marco Varlese wrote:
>>>> 
>>>> Is there still a way to use the old infrastructure to build the code?
>>> 
>>> No, that doesn't make sense.
>>> 
>>>> 
>>>> Apparently, cmake works when used inside the GIT repo but fails to build
>>>> when using the tarball generated via "make dist" (required indeed for
>>>> downstream consumption).
>>> 
>>> that should be easy fixable
>>> 
>>>> 
>>>> On 1/26/19 2:22 PM, Damjan Marion via Lists.Fd.Io wrote:
> 
> Here it is: https://gerrit.fd.io/r/17094
> 
> 
> $ mkdir build-vpp stage
> 
> $ git clone 
> 
> $ cd dpdk
> 
> $ cat << _EOF_ | patch -p1
> diff --git a/config/common_base b/config/common_base
> index d12ae98bc..42d6f53dd 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -38,7 +38,7 @@ CONFIG_RTE_ARCH_STRICT_ALIGN=n
>  #
>  # Compile to share library
>  #
> -CONFIG_RTE_BUILD_SHARED_LIB=n
> +CONFIG_RTE_BUILD_SHARED_LIB=y
> 
>  #
>  # Use newest code breaking previous ABI
> _EOF_
> 
> 
> $ make -j install T=x86_64-native-linuxapp-gcc DESTDIR=../stage
> 
> $ cd ../build-vpp
> 
> $ cmake -G Ninja -DCMAKE_PREFIX_PATH:PATH=$PWD/../stage /path/to/vpp/src
> 
> $ ninja
> 
> $ LD_LIBRARY_PATH=../stage/lib ldd lib/vpp_plugins/dpdk_plugin.so
> linux-vdso.so.1 (0x7ffe2a3b7000)
> librte_cryptodev.so.5.1 => ../stage/lib/librte_cryptodev.so.5.1
> (0x7fd5e1fa)
> librte_eal.so.9.1 => ../stage/lib/librte_eal.so.9.1 (0x7fd5e1ed1000)
> librte_ethdev.so.11.1 => ../stage/lib/librte_ethdev.so.11.1
> (0x7fd5e1e3)
> librte_mbuf.so.4.1 => ../stage/lib/librte_mbuf.so.4.1
> (0x7fd5e1e28000)
> librte_mempool.so.5.1 => ../stage/lib/librte_mempool.so.5.1
> (0x7fd5e1e1f000)
> librte_pmd_bond.so.2.1 => ../stage/lib/librte_pmd_bond.so.2.1
> (0x7fd5e1dfe000)
> librte_ring.so.2.1 => ../stage/lib/librte_ring.so.2.1
> (0x7fd5e1df9000)
> librte_sched.so.1.1 => ../stage/lib/librte_sched.so.1.1
> (0x7fd5e1ded000)
> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7fd5e1be9000)
> /lib64/ld-linux-x86-64.so.2 (0x7fd5e211d000)
> librte_kvargs.so.1.1 => ../stage/lib/librte_kvargs.so.1.1
> (0x7fd5e1be4000)
> libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7fd5e1bdc000)
> libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
> (0x7fd5e1bbb000)
> librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7fd5e1bb1000)
> libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1
> (0x7fd5e19a6000)
> librte_cmdline.so.2.1 => ../stage/lib/librte_cmdline.so.2.1
> (0x7fd5e199a000)
> librte_pci.so.1.1 => ../stage/lib/librte_pci.so.1.1 (0x7fd5e1993000)
> librte_bus_vdev.so.2.1 => ../stage/lib/librte_bus_vdev.so.2.1
> (0x7fd5e198c000)
> libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7fd5e17ff000)
> 
> -- 
> Damjan
> 
> 
> 
> 
>> On 25 Jan 2019, at 18:03, Kinsella, Ray > 
>> 

Re: [vpp-dev] Question about crypto dev queue pairs #vpp

2019-01-29 Thread manuel . alonso
Hi Sergio,

I would prefer that you provide the patch to use 1 qp, since I have only been 
inspecting the source code for two days (I might add other bugs...).
I could test your patch on an Octeon board that is supposed to set up 1 qp.

BR,
Manuel
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12036): https://lists.fd.io/g/vpp-dev/message/12036
Mute This Topic: https://lists.fd.io/mt/29538345/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Question about crypto dev queue pairs #vpp

2019-01-29 Thread Sergio Gonzalez Monroy
Hi Manuel,

This is likely a mismatch on the VPP side. At the time (over a year ago) I only 
tested it with QAT (2 qps per VF) and SW cryptodevs (default 8 qps); that was 
the HW I had access to.

So like I mentioned before, if you do not want to rework the code to support 1 
qp per resource, then a check for at least 2 qps per device is required to use 
that device.

I could provide a patch to use 1 qp per resource over the next few days if you 
are interested in it, or review your changes if you decide to do the work.

Which device do you want to use?

Regards,
Sergio

From: vpp-dev@lists.fd.io  on behalf of 
manuel.alo...@cavium.com 
Sent: Monday, January 28, 2019 4:15 PM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Question about crypto dev queue pairs #vpp

Hi Sergio,

Thank you for the explanation; I see that there are 2 (or more) qps. My concern 
was due to DPDK, since there are a few device drivers exporting only one queue 
pair for their crypto devices.
(I followed the code assuming one qp, based on a DPDK 18.11 exported value.)
So I do not know where the mismatch is: VPP or DPDK?


BR,
Manuel
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12035): https://lists.fd.io/g/vpp-dev/message/12035
Mute This Topic: https://lists.fd.io/mt/29538345/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Dual stack con VPP and VRF

2019-01-29 Thread Neale Ranns via Lists.Fd.Io
Hi,

You just need to give the interface an IPv4 and IPv6 address.

DBGvpp# loop cre
DBGvpp# ip table 1
DBGvpp# set int ip table loop0 1
DBGvpp# set int state loop0 up
DBGvpp# set int ip address loop0 10.10.10.10/24
DBGvpp# set int ip address loop0 2001::10/64

The creation of the IP table 1 is optional, it would work in the ‘default’ 
table too.

/neale

From:  on behalf of Yosvany 
Date: Tuesday, 29 January 2019 at 02:34
To: "dmar...@me.com" , "Damjan Marion via Lists.Fd.Io" 
, Marco Varlese 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Dual stack con VPP and VRF

Can someone show me an example of how to use dual stack on one interface with a 
VRF?
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12034): https://lists.fd.io/g/vpp-dev/message/12034
Mute This Topic: https://lists.fd.io/mt/29577845/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-