[vpp-dev] acl priority

2017-09-06 Thread yug...@telincn.com
Hi all,
Does the VPP ACL support adjusting priority?
I have configured ten ACL rules; if I want to move the tenth ACL to be the
first, is there an easy way to do this?

Regards,
Ewan



yug...@telincn.com
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Running CLI against named vpp instance

2017-09-06 Thread Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Dave,

please find replies inline.

From: Dave Wallace [mailto:dwallac...@gmail.com]
Sent: 5 September 2017 16:41
To: Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco) 
; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Running CLI against named vpp instance

Marek,

What is the uid/gid of /dev/shm/vpe-api ?
root/vpp

Is the user a member of the vpp group?
yes

Does your VPP workspace include the patch c900ccc34 "Enabled gid vpp in 
startup.conf to allow non-root vppctl access" ?
yes (I've built master with HEAD @ 809bc74, also tested corresponding package 
from nexus)

Thanks,
-daw-
On 09/05/2017 06:08 AM, Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at 
Cisco) wrote:
Hi,

I am having problems running the CLI against a named vpp instance (g809bc74):

sudo vpp api-segment { prefix vpp0 }

sudo vppctl -p vpp0 show int
clib_socket_init: connect: Connection refused

But ps shows the vpp process is running.

It worked with 17.07.
Is it no longer supported, or do I need some additional configuration?

Regards,
Marek





Re: [vpp-dev] acl priority

2017-09-06 Thread Andrew Yourtchenko
Hi,

If you are talking about the acl plugin, then the ACLs are evaluated in the
order they were applied, and the same holds for the ACEs within an ACL. To
change the order, you can apply a differently sorted list, or call
acl_add_replace with the new contents of the ACL.

If you are talking about the built-in ACLs using classifier tables, then
within the same table the rules don't overlap, so there is no order. To change
the order of evaluation of multiple tables, in case you have more than one,
you need to recreate the entire chain.

Hope this helps!

--a


Re: [vpp-dev] query on hugepages usage in VPP

2017-09-06 Thread Damjan Marion (damarion)

Why do you need so much memory? Currently, for the default number of buffers
(16K per socket), VPP needs around 40 MB of hugepage memory, so allocating 1 GB
would be a huge waste of memory.
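As a rough sanity check of the ~40 MB figure (the per-buffer sizes below are illustrative assumptions, not exact VPP internals):

```shell
buffers=16384              # default number of buffers per CPU socket
per_buffer=$((2048 + 256)) # assumed: 2 KB of data plus metadata, in bytes
total_mb=$((buffers * per_buffer / 1024 / 1024))
echo "${total_mb} MB"      # same order of magnitude as the ~40 MB quoted
```

Either way, the result is tens of megabytes, nowhere near a 1 GB reservation.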

Thanks,

Damjan

On 5 Sep 2017, at 11:15, Balaji Kn wrote:

Hello,

Can you help me with the query below, related to 1 GB huge page usage in VPP?

Regards,
Balaji


On Thu, Aug 31, 2017 at 5:19 PM, Balaji Kn wrote:
Hello,

I am using v17.07. I am trying to configure a huge page size of 1 GB and
reserve 16 huge pages for VPP.
I went through the /etc/sysctl.d/80-vpp.conf file and found options only for
huge pages of size 2 MB.

Output of the vpp conf file:
# Number of 2MB hugepages desired
vm.nr_hugepages=1024

# Must be greater than or equal to (2 * vm.nr_hugepages).
vm.max_map_count=3096

# All groups allowed to access hugepages
vm.hugetlb_shm_group=0

# Shared Memory Max must be greater than or equal to the total size of hugepages.
# For 2MB pages, TotalHugepageSize = vm.nr_hugepages * 2 * 1024 * 1024
# If the existing kernel.shmmax setting (cat /proc/sys/kernel/shmmax)
# is greater than the calculated TotalHugepageSize then set this parameter
# to current shmmax value.
kernel.shmmax=2147483648
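The relationship the comments describe can be checked with the same numbers:

```shell
nr_hugepages=1024                          # from vm.nr_hugepages above
shmmax=2147483648                          # from kernel.shmmax above
total=$((nr_hugepages * 2 * 1024 * 1024))  # TotalHugepageSize in bytes
echo "$total"
[ "$total" -le "$shmmax" ] && echo "kernel.shmmax is large enough"
```

So the shipped defaults (1024 pages of 2 MiB, shmmax of 2 GiB) are exactly at the limit.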

Can you please let me know what configuration I need so that VPP runs with
1 GB huge pages?

The host OS supports 1 GB huge pages.
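For reference, 1 GB pages cannot be reserved through vm.nr_hugepages in 80-vpp.conf, which only counts pages of the kernel's default size; they are reserved through the kernel itself. A hedged sketch (the count of 16 pages is just this thread's example, and runtime allocation may fail if memory is already fragmented):

```shell
# Option 1 (most reliable): reserve at boot via the kernel command line:
#   default_hugepagesz=1G hugepagesz=1G hugepages=16
# Option 2: attempt a runtime reservation through sysfs (as root):
echo 16 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
grep -i huge /proc/meminfo     # verify what was actually reserved
```

Note Damjan's point above, though: the default VPP buffer pools need only tens of megabytes of hugepage memory, so a 16 GB reservation is far more than VPP itself requires.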

Regards,
Balaji



Re: [vpp-dev] query on hugepages usage in VPP

2017-09-06 Thread Balaji Kn
Hi Damjan,

I was trying to create 4k sub-interfaces on an interface and associate each
sub-interface with a VRF, and observed a limitation in VPP 17.07: it supported
only 874 VRFs, and the shared memory was unlinked on the 875th VRF.

I felt this might be because of a shortage of heap memory in VPP, and might be
solved by increasing the huge page memory.

Regards,
Balaji


Re: [vpp-dev] Running CLI against named vpp instance

2017-09-06 Thread Dave Wallace

Marek,

Please check the vpp startup configuration (/etc/vpp/startup.conf) to 
ensure that "unix { cli-listen /run/vpp/cli.sock }" is present.  This is 
the default socket used by the 'c' implementation of vppctl.


I'm going to fix the error message to output the socket file name to 
make this easier to debug.


Thanks,
-daw-


[vpp-dev] Hugepage/Memory Allocation Rework

2017-09-06 Thread Billy McFall
Damjan,

On the VPP call yesterday, you described the patch you are working on to
rework how VPP allocates and uses hugepages. Per request from Jerome
Tollet, I wrote VPP-958  to document
some issues they were seeing. I believe your patch will address this issue.
I added a comment to the JIRA. Is my comment in the JIRA accurate?

Save you from having to follow the link:

Damjan Marion is working on a patch that reworks how VPP uses memory. With
the patch, VPP will not need to allocate memory using 80-vpp.conf. Instead,
when VPP is started, it will check to ensure there are enough free
hugepages for it to function. If so, it will not touch the current huge
page allocation. If not, it will attempt to allocate what it needs.
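The check-then-allocate behavior described above can be sketched like this (illustrative only, not the actual patch logic; the sample numbers stand in for values parsed from /proc/meminfo):

```shell
needed=64        # hugepages VPP computes it needs
free_pages=16    # sample HugePages_Free value
total_pages=128  # sample HugePages_Total value

if [ "$free_pages" -lt "$needed" ]; then
  # Writing this new total to /proc/sys/vm/nr_hugepages (as root)
  # would allocate only the shortfall, leaving existing pages alone.
  new_total=$((total_pages + needed - free_pages))
  echo "would set vm.nr_hugepages=$new_total"
else
  echo "enough free hugepages; leave the current allocation untouched"
fi
```

The key property is that a pre-existing reservation (e.g. one made for other applications) is never shrunk.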

This patch also reduces the default amount of memory VPP requires. This is
a fairly big change so it will probably not be merged until after 17.10. I
believe this patch will address the concerns of this JIRA. I will update
this JIRA as progress is made.

This may not be the final patch, but here is the current work in progress:
https://gerrit.fd.io/r/#/c/7701/

Thanks,
Billy McFall

Re: [vpp-dev] query on hugepages usage in VPP

2017-09-06 Thread Damjan Marion (damarion)

On 6 Sep 2017, at 16:49, Balaji Kn wrote:

Hi Damjan,

I was trying to create 4k sub-interfaces for an interface and associate each 
sub-interface with vrf and observed a limitation in VPP 17.07 that was 
supporting only 874 VRFs and shared memory was unlinked for 875th VRF.

What do you mean by “shared memory was unlinked” ?
Which shared memory?


I felt this might be because of shortage of heap memory used in VPP and might 
be solved with  increase of huge page memory.

VPP heap is not using hugepages.



Re: [vpp-dev] Hugepage/Memory Allocation Rework

2017-09-06 Thread Damjan Marion (damarion)
HI Billy,

On 6 Sep 2017, at 16:55, Billy McFall wrote:

Damjan,

On the VPP call yesterday, you described the patch you are working on to rework 
how VPP allocates and uses hugepages. Per request from Jerome Tollet, I wrote 
VPP-958 to document some issues they were 
seeing. I believe your patch will address this issue. I added a comment to the 
JIRA. Is my comment in the JIRA accurate?

Save you from having to follow the link:

Damjan Marion is working on a patch that reworks how VPP uses memory. With the
patch, VPP will not need to allocate memory using 80-vpp.conf. Instead, when
VPP is started, it will check to ensure there are enough free hugepages for it
to function. If so, it will not touch the current huge page allocation. If not,
it will attempt to allocate what it needs.

yes, it will pre-allocate the delta.

This patch also reduces the default amount of memory VPP requires. This is a 
fairly big change so it will probably not be merged until after 17.10. I 
believe this patch will address the concerns of this JIRA. I will update this 
JIRA as progress is made.

yes

This may not be the final patch, but here is the current work in progress: 
https://gerrit.fd.io/r/#/c/7701/

yes

Thanks,

Damjan


Re: [vpp-dev] Running CLI against named vpp instance

2017-09-06 Thread Ed Warnicke
Dave,

I think we would need to be sure that different vpp instances have
different cli-listen socket files, and that vppctl has a mechanism to
address them easily.

I'd suggest a pattern like

"unix { cli-listen /run/vpp.cli-${prefix}.sock"

and

vppctl -p ${prefix}

to be in line with current usage.

Ed





Re: [vpp-dev] query on hugepages usage in VPP

2017-09-06 Thread Balaji Kn
Hi Damjan,

I am creating VRFs using "set interface ip table  ".
The /dev/shm/vpe-api shared memory gets unlinked, and I see the following
error message on the vppctl console:

exec error: Misc

After this, if I execute "show int" in vppctl, all the VPP configuration I had
done so far is lost, and VPP starts over with the default configuration from
/etc/vpp/startup.conf.

You mentioned that the VPP heap does not use huge pages. In that case, can I
increase the heap memory with the startup configuration "heapsize" parameter?

Regards,
Balaji



Re: [vpp-dev] query on hugepages usage in VPP

2017-09-06 Thread Damjan Marion (damarion)
Yes. You can also try to execute "show memory verbose" before the failing
command, to see the stats.
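For reference, a sketch of the heapsize setting being discussed, in /etc/vpp/startup.conf (4G is an example value; the main heap lives in ordinary, non-hugepage memory):

```shell
# /etc/vpp/startup.conf
heapsize 4G
```

Increasing this gives the per-VRF FIB structures more room to grow without touching the hugepage reservation.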


Re: [vpp-dev] Running CLI against named vpp instance

2017-09-06 Thread Dave Wallace

Ed,

vppctl already has a command-line arg (-s) which allows the user to specify a
specific socket pathname, which must match the "cli-listen " configuration in
a given vpp instance.  Adding a naming convention to both vpp and vppctl is
going to over-complicate the matter.
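A hedged example of the existing -s mechanism, with a made-up socket path and config file name: each instance gets its own cli-listen socket in its own startup config, and vppctl is pointed at it explicitly:

```shell
# /etc/vpp/startup-vpp0.conf (example per-instance config):
#   unix { cli-listen /run/vpp/cli-vpp0.sock }
#   api-segment { prefix vpp0 }
vpp -c /etc/vpp/startup-vpp0.conf
vppctl -s /run/vpp/cli-vpp0.sock show int
```

No extra naming convention is needed; the socket path is simply part of each instance's configuration.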


Thanks,
-daw-




Re: [vpp-dev] Running CLI against named vpp instance

2017-09-06 Thread Ed Warnicke
All good.  As long as we have a consistent way to handle it on both sides,
I'm good :)

Ed


Re: [vpp-dev] query on hugepages usage in VPP

2017-09-06 Thread Balaji Kn
Hi Damjan,

I was able to create the 4k VRFs after increasing the heap memory size to 4G.
Thanks for the help.

Regards,
Balaji


[vpp-dev] [FD.io Helpdesk #45343] [linuxfoundation.org #45343] Re: More build timeouts for vpp-verify-master-ubuntu1604

2017-09-06 Thread Florin Coras via RT
Hi, 

Any news regarding this? We are 1 week away from API freeze and the infra makes 
it almost impossible to merge patches! 

Thanks, 
Florin

> On Sep 4, 2017, at 9:44 PM, Dave Wallace  wrote:
> 
> Dear helpd...@fd.io ,
> 
> There has been another string of build timeouts for 
> vpp-verify-master-ubuntu1604:
> https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/buildTimeTrend 
> 
> Please change the timeout for build failures from 360 minutes to 120 minutes 
> in addition to addressing the slow minion issue.
> 
> Thanks,
> -daw-


Re: [vpp-dev] More build timeouts for vpp-verify-master-ubuntu1604

2017-09-06 Thread Florin Coras
Hi, 

Any news regarding this? We are 1 week away from API freeze and the infra makes 
it almost impossible to merge patches! 

Thanks, 
Florin

> On Sep 4, 2017, at 9:44 PM, Dave Wallace  wrote:
> 
> Dear helpd...@fd.io ,
> 
> There has been another string of build timeouts for 
> vpp-verify-master-ubuntu1604:
> https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/buildTimeTrend 
> 
> Please change the timeout for build failures from 360 minutes to 120 minutes 
> in addition to addressing the slow minion issue.
> 
> Thanks,
> -daw-


Re: [vpp-dev] Hugepage/Memory Allocation Rework

2017-09-06 Thread Jerome Tollet (jtollet)
Hi Billy & Damjan,
That’s really a nice evolution and that will certainly fix the issue we are 
facing.
Anyway, I am wondering if we shouldn’t modify the specfile according to the 
proposal I made in the JIRA ticket:

%config /etc/sysctl.d/80-vpp.conf
%config /etc/vpp/startup.conf

could be modified by:

%config(noreplace) /etc/sysctl.d/80-vpp.conf
%config(noreplace) /etc/vpp/startup.conf

Wouldn’t that be better?

Jerome

From:  on behalf of "Damjan Marion (damarion)" 

Date: Wednesday, September 6, 2017 at 16:59
To: "bmcf...@redhat.com" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Hugepage/Memory Allocation Rework

HI Billy,

On 6 Sep 2017, at 16:55, Billy McFall <bmcf...@redhat.com> wrote:

Damjan,

On the VPP call yesterday, you described the patch you are working on to rework 
how VPP allocates and uses hugepages. Per request from Jerome Tollet, I wrote 
VPP-958 to document some issues they were 
seeing. I believe your patch will address this issue. I added a comment to the 
JIRA. Is my comment in the JIRA accurate?

To save you from having to follow the link:
Damjan Marion is working on a patch that reworks how VPP uses memory. With the 
patch, VPP will not need to allocate memory using 80-vpp.conf. Instead, when 
VPP is started, it will check to ensure there are enough free hugepages for it 
to function. If so, it will not touch the current huge page allocation. If not, 
it will attempt to allocate what it needs.
yes, it will pre-allocate delta.
This patch also reduces the default amount of memory VPP requires. This is a 
fairly big change so it will probably not be merged until after 17.10. I 
believe this patch will address the concerns of this JIRA. I will update this 
JIRA as progress is made.
yes

This may not be the final patch, but here is the current work in progress: 
https://gerrit.fd.io/r/#/c/7701/
yes
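
A rough sketch of the check-and-pre-allocate-the-delta behavior described
above (illustrative Python only, not the actual patch; the function names
are invented for this example):

```python
# Illustrative sketch of "pre-allocate the delta" -- NOT the actual
# VPP patch. On Linux, free/total 2 MB page counts are exposed under
# /sys/kernel/mm/hugepages/hugepages-2048kB/.

def hugepage_delta(required, free):
    """Extra pages to reserve: zero if enough are already free."""
    return max(0, required - free)

def plan_allocation(required, free, total):
    """Return the new total page count to request; leaves the current
    allocation untouched when enough pages are already free."""
    return total + hugepage_delta(required, free)

# 512 pages needed, 200 free out of 1024 reserved -> grow pool to 1336.
print(plan_allocation(512, 200, 1024))  # 1336
# Enough pages already free -> allocation is left as-is.
print(plan_allocation(512, 600, 1024))  # 1024
```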

Thanks,

Damjan


Re: [vpp-dev] More build timeouts for vpp-verify-master-ubuntu1604

2017-09-06 Thread Vanessa Valderrama
We are in the process of switching to dedicated instances that should
resolve this issue.  We hope to have this complete tomorrow around
9:00am PDT


On 09/06/2017 02:40 PM, Florin Coras wrote:
> Hi, 
>
> Any news regarding this? We are 1 week away from API freeze and the
> infra makes it almost impossible to merge patches! 
>
> Thanks, 
> Florin
>
>> On Sep 4, 2017, at 9:44 PM, Dave Wallace wrote:
>>
>> Dear helpd...@fd.io,
>>
>> There has been another string of build timeouts for
>> vpp-verify-master-ubuntu1604:
>>
>> https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/buildTimeTrend
>>
>> Please change the timeout for build failures from 360 minutes to 120
>> minutes in addition to addressing the slow minion issue.
>>
>> Thanks,
>> -daw-
>




[vpp-dev] [FD.io Helpdesk #45343] [linuxfoundation.org #45343] Re: More build timeouts for vpp-verify-master-ubuntu1604

2017-09-06 Thread Vanessa Valderrama via RT
We are in the process of switching to dedicated instances that should
resolve this issue.  We hope to have this complete tomorrow around
9:00am PDT


On 09/06/2017 02:40 PM, Florin Coras wrote:
> Hi, 
>
> Any news regarding this? We are 1 week away from API freeze and the
> infra makes it almost impossible to merge patches! 
>
> Thanks, 
> Florin
>
>> On Sep 4, 2017, at 9:44 PM, Dave Wallace wrote:
>>
>> Dear helpd...@fd.io,
>>
>> There has been another string of build timeouts for
>> vpp-verify-master-ubuntu1604:
>>
>> https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/buildTimeTrend
>>
>> Please change the timeout for build failures from 360 minutes to 120
>> minutes in addition to addressing the slow minion issue.
>>
>> Thanks,
>> -daw-
>

