Re: [vpp-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

2018-01-25 Thread Jerome Tollet (jtollet)
Congratulations!

From:  on behalf of Dave Wallace 
Date: Wednesday, 24 January 2018 at 21:23
To: "vpp-dev@lists.fd.io" , "csit-...@lists.fd.io" 

Subject: [vpp-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

Folks,

The VPP 18.01 Release artifacts are now available on nexus.fd.io

The ubuntu.xenial and centos packages can be installed following the recipe on 
the wiki: https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages

Thank you to all of the VPP community who have contributed to the 18.01 VPP 
Release.


Elvis has left the building!
-daw-


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] How can i use VPP as MPLS PE/P device

2017-12-11 Thread Jerome Tollet (jtollet)
Hello Michael,
HC BGP currently doesn’t support EVPN, and we have no current plans for OSPF or 
LDP in HC.
However, suggestions and contributions are more than welcome!

BTW, you may also be interested in having a look at the Ligato project, which 
supports GoBGP.

Regards,
Jerome

From:  on behalf of Michael Borokhovich 

Date: Monday, 11 December 2017 at 19:11
To: "Kinsella, Ray" 
Cc: "vppsb-...@lists.fd.io" , "wangchuan...@163.com" 
, vpp-dev 
Subject: Re: [vpp-dev] How can i use VPP as MPLS PE/P device

Thanks, Ray. Do you know if HC BGP supports EVPN? Also, is there current or 
planned OSPF and LDP support in HC?

On Mon, Dec 11, 2017 at 6:39 AM, Kinsella, Ray <m...@ashroe.eu> wrote:
At the moment - there is no direct/easy way to do this AFAIK.

The router plugin is the best example of this; the other option is to use HC 
instead of FRR for BGP.

Ray K


On 08/12/2017 21:23, Michael Borokhovich wrote:
So, for the control plane, we can use e.g. FRR, which will populate Linux's FIB 
and MPLS table. Then we need to sync this info to VPP's FIB and VPP's MPLS 
table.

While the "router plugin" provides support for FIB synchronization, there is no 
support for MPLS sync. Does anyone know if there are plans to add this MPLS 
support to the router plugin? Otherwise, what would be the best alternative way 
of synchronizing the Linux MPLS table with VPP?

Thanks!
Michael.


On Wed, Dec 6, 2017 at 2:15 PM, Luke, Chris <chris_l...@comcast.com> wrote:

But to make sure we’re clear: while VPP can provide the dataplane
of a P/PE, something else has to provide the control plane
(e.g. LDP, BGP, an SDN controller, etc.)

Chris.

*From: *<vpp-dev-boun...@lists.fd.io> on behalf of "Neale Ranns (nranns)" <nra...@cisco.com>
*Date: *Wednesday, December 6, 2017 at 09:33
*To: *"wangchuan...@163.com" <wangchuan...@163.com>, vpp-dev <vpp-dev@lists.fd.io>
*Subject: *Re: [vpp-dev] How can i use VPP as MPLS PE/P device

Another hastily assembled, on-demand guide:

https://wiki.fd.io/view/VPP/MPLS_FIB

/neale

*From: *<vpp-dev-boun...@lists.fd.io> on behalf of "wangchuan...@163.com" <wangchuan...@163.com>
*Date: *Wednesday, 6 December 2017 at 09:11
*To: *vpp-dev <vpp-dev@lists.fd.io>
*Subject: *[vpp-dev] How can i use VPP as MPLS PE/P device

Hi all,

I want to configure my testing MPLS network.

How can I configure VPP to act as a PE or P using CLI commands?

Who can help?

Best regards!

Simon Wang


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP management protocol

2017-12-03 Thread Jerome Tollet (jtollet)
Hi Avi,
VPP comes with a shared-memory interface that exposes its APIs (e.g. add/remove 
interface, ACLs, …). You’ll find all the APIs in “.api” files in the source tree. 
There are also various language bindings that support those APIs (C, C++, Java, 
Python, …).
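
For illustration, here is a minimal sketch of calling the binary API from the
Python binding (vpp_papi). The JSON definition path and the exact constructor
arguments are assumptions and may differ between VPP releases:

    import glob
    from vpp_papi import VPP

    # Load the generated API message definitions (path is an assumption).
    jsonfiles = glob.glob('/usr/share/vpp/api/*.api.json')
    vpp = VPP(jsonfiles)

    # Attach to VPP's shared-memory API segment and issue a simple request.
    vpp.connect('example-client')
    reply = vpp.api.show_version()
    print(reply.version)
    vpp.disconnect()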

In addition to that, there are various “agents” leveraging those VPP APIs to 
interact with controllers:
- HoneyComb is a generic NETCONF/YANG agent for VPP
- Ligato is another project to drive VPP through REST, gRPC or etcd
- Networking-vpp is a specific project to integrate VPP with OpenStack Neutron
HTH,
Jerome


On 03/12/2017 09:36, "vpp-dev-boun...@lists.fd.io on behalf of Avi Cohen (A)" 
 wrote:

Hello,
What is the protocol used by a management system/controller to configure the 
VPP device (other than the CLI)? NETCONF? Something else?
Are there plugins for OpenFlow?

Best Regards
Avi 
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] [networking-vpp] new version available

2017-11-21 Thread Jerome Tollet (jtollet)
Folks,
Please find below the announcement of networking-vpp 17.10. The main improvements 
are:

  *   Support for JSON Web Tokens (RFC 7519) to sign messages between the 
controller and compute nodes
  *   Improved Layer 3 data model in etcd to prepare for HA
  *   VXLAN-GPE ARP population.
http://lists.openstack.org/pipermail/openstack-dev/2017-November/124744.html

We have already started development of the next version, 18.02, which will come 
with a bunch of interesting new features…
Jerome
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] FOSDEM 2018

2017-11-17 Thread Jerome Tollet (jtollet)
Hi Ole,
I did submit one as well on VPP/networking-vpp for OpenStack integration.
Jerome

On 17/11/2017 10:25, "vpp-dev-boun...@lists.fd.io on behalf of Ole Troan" 
 wrote:

Ray,

I did submit a loosely defined one on VPP. Anyone else?
Great if we can coordinate a little up front so we don't end up repeating 
ourselves too much.

cheers,
Ole

> On 17 Nov 2017, at 00:19, Kinsella, Ray  wrote:
> 
> Folks,
> 
> The Call for Content for FOSDEM 2018 is closing today!
> The "SDN and NFV room" at FOSDEM is a great way to get the FD.io message 
> out there.
> 
> Last-minute submissions would be most welcome!
> To submit, see:
> 
> 
https://blogs.gnome.org/bolsh/2017/11/01/fosdem-2018-sdnnfv-devroom-call-for-content/
> 
> Thanks,
> 
> Ray K
> 
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Mac Address Api Changes

2017-10-10 Thread Jerome Tollet (jtollet)
+1, it would be nice to harmonize those API calls.
Jerome

From:  on behalf of Mohsin Kazmi 
Date: Monday, 9 October 2017 at 19:06
To: vpp-dev 
Subject: [vpp-dev] Mac Address Api Changes


Hello,



I am writing regarding a proposal to change the API messages related to l2fib 
MAC addresses.



In the vpp/src/vnet/l2/l2.api file, two API messages related to l2fib currently 
use a u64 for the MAC address instead of u8 mac[6], while the rest of the VPP 
API calls use an array of six bytes to store the MAC address.



Using a u64 to store the MAC address is inconsistent with the rest of VPP and 
can create conversion trouble when translating to/from u8 mac[6].
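
For illustration, here is a small Python sketch of the kind of conversion every 
API consumer currently has to carry; the helper names and the byte ordering 
shown are assumptions, not the exact l2fib packing:

    def mac_bytes_to_u64(mac):
        # Pack a 6-byte MAC address into a single integer (MSB first, assumed).
        assert len(mac) == 6
        value = 0
        for byte in mac:
            value = (value << 8) | byte
        return value

    def mac_u64_to_bytes(value):
        # Unpack an integer back into the u8 mac[6] form.
        return bytes((value >> shift) & 0xFF for shift in range(40, -8, -8))

    mac = bytes([0xde, 0xad, 0xbe, 0xef, 0x00, 0x01])
    assert mac_u64_to_bytes(mac_bytes_to_u64(mac)) == mac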



The proposal is to change those API message definitions to use the standard u8 
mac[6] to store the MAC address. This change may impact users of the VPP API, 
who will need to update their code.



That said, please let the community know if there is any specific objection to 
or opinion on the above proposal.



Thanks,

Mohsin




___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Hugepage/Memory Allocation Rework

2017-09-06 Thread Jerome Tollet (jtollet)
Hi Billy & Damjan,
That’s really a nice evolution, and it will certainly fix the issue we are 
facing.
Still, I am wondering whether we shouldn’t also modify the spec file according 
to the proposal I made in the JIRA ticket:

%config /etc/sysctl.d/80-vpp.conf
%config /etc/vpp/startup.conf

could be changed to:

%config(noreplace) /etc/sysctl.d/80-vpp.conf
%config(noreplace) /etc/vpp/startup.conf

Wouldn’t that be better?

Jerome

From:  on behalf of "Damjan Marion (damarion)" 

Date: Wednesday, 6 September 2017 at 16:59
To: "bmcf...@redhat.com" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Hugepage/Memory Allocation Rework

Hi Billy,

On 6 Sep 2017, at 16:55, Billy McFall <bmcf...@redhat.com> wrote:

Damjan,

On the VPP call yesterday, you described the patch you are working on to rework 
how VPP allocates and uses hugepages. Per request from Jerome Tollet, I wrote 
VPP-958 to document some issues they were 
seeing. I believe your patch will address this issue. I added a comment to the 
JIRA. Is my comment in the JIRA accurate?

To save you from having to follow the link:
Damjan Marion is working on a patch that reworks how VPP uses memory. With the 
patch, VPP will not need to allocate memory using 80-vpp.conf. Instead, when 
VPP is started, it will check to ensure there are enough free huge pages for it 
to function. If so, it will not touch the current huge page allocation. If not, 
it will attempt to allocate what it needs.
Yes, it will pre-allocate the delta.
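
For illustration, a rough Python sketch of that idea (not the actual patch): 
check the kernel's free huge page count and grow the pool only by the missing 
delta. The /proc paths are the standard kernel interfaces; the page count below 
is purely illustrative, and writing nr_hugepages requires root:

    def free_hugepages():
        # Number of free huge pages reported by the kernel.
        with open('/proc/meminfo') as f:
            for line in f:
                if line.startswith('HugePages_Free:'):
                    return int(line.split()[1])
        return 0

    def ensure_hugepages(required):
        # Grow the huge page pool only if fewer than `required` pages are free.
        free = free_hugepages()
        if free >= required:
            return  # enough already reserved; leave the allocation untouched
        with open('/proc/sys/vm/nr_hugepages', 'r+') as f:
            total = int(f.read())
            f.seek(0)
            f.write(str(total + (required - free)))  # pre-allocate the delta

    ensure_hugepages(1024)  # illustrative page count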
This patch also reduces the default amount of memory VPP requires. This is a 
fairly big change so it will probably not be merged until after 17.10. I 
believe this patch will address the concerns of this JIRA. I will update this 
JIRA as progress is made.
Yes.

This may not be the final patch, but here is the current work in progress: 
https://gerrit.fd.io/r/#/c/7701/
Yes.

Thanks,

Damjan

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] FW: [openstack-dev] [neutron][networking-vpp]networking-vpp 17.07.1 for VPP 17.07 is available

2017-08-07 Thread Jerome Tollet (jtollet)
Dear FD.io-ers,
I know some of you may have missed this announcement on the OpenStack mailing list.
Regards,
Jerome

From: Ian Wells 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, 31 July 2017 at 01:07
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [neutron][networking-vpp]networking-vpp 17.07.1 for VPP 
17.07 is available

In conjunction with the release of VPP 17.07, I'd like to invite you all to try 
out networking-vpp 17.07.1 for VPP 17.07.  VPP is a fast userspace forwarder 
based on the DPDK toolkit, and uses vector packet processing algorithms to 
minimise the CPU time spent on each packet and maximise throughput.  
networking-vpp is an ML2 mechanism driver that controls VPP on your control and 
compute hosts to provide fast L2 forwarding under Neutron.
This version has a few additional enhancements, along with supporting the VPP 
17.07 API:
- remote security group IDs are now supported
- VXLAN GPE support now includes proxy ARP at the local forwarder

Along with this, there have been the usual bug fixes, code and test 
improvements.

The README [1] explains how you can try out VPP using devstack: the devstack 
plugin will deploy etcd, the mechanism driver and VPP itself and should give 
you a working system with a minimum of hassle.
We will continue development between now and VPP's 17.10 release in October.  
There are several features we're planning to work on (you'll find a list in our 
RFE bugs at [2]), and we welcome anyone who would like to come help us.

Everyone is welcome to join our biweekly IRC meetings, every other Monday (the 
next one is due in a week), 0900 PDT = 1600 GMT.
--
Ian.

[1]https://github.com/openstack/networking-vpp/blob/master/README.rst
[2]http://goo.gl/i3TzAt
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] DPDK PMD

2017-06-27 Thread Jerome Tollet (jtollet)
Have you looked at poll-sleep (https://gerrit.fd.io/r/#/c/5674/)?
Jerome

From:  on behalf of Burt Silverman 
Date: Tuesday, 27 June 2017 at 20:07
To: vpp-dev 
Subject: [vpp-dev] DPDK PMD

I came across the idea of running DPDK in non-polling mode for low power (albeit 
lower performance), but I don't remember where. I am just wondering if anyone in 
VPP has done that, and whether there is an easy way to configure it when running 
VPP. Thanks.
Burt
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] QoS/Policy

2017-06-14 Thread Jerome Tollet (jtollet)
Hi Dana,
Perhaps, you could take it from here: https://jira.fd.io/browse/HC2VPP-39
Jerome

From:  on behalf of Dana Kutenicsova 

Date: Wednesday, 14 June 2017 at 04:21
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] QoS/Policy

Hi all,
I’m looking for some information about QoS/Policy implementation in VPP.
I’ve found only pieces of documentation about the Hierarchical Scheduler and 
the policer API.
Can you please point me to any documentation or presentations dealing
with this topic?
Thanks,
Dana Kutenicsova
Software Engineer
Frinx s.r.o.
Mlynské Nivy 48 / 821 09 Bratislava / Slovakia
+421 2 20 91 01 41 / dkutenics...@frinx.io / 
www.frinx.io

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [discuss] CSIT rls1704 report published

2017-04-28 Thread Jerome Tollet (jtollet)
Congratulations! Performance is improving from version to version!

On 27/04/2017 21:50, "discuss-boun...@lists.fd.io on behalf of Maciek 
Konstantynowicz (mkonstan)"  wrote:

The CSIT rls1704 report is published on FD.io docs site: 
https://docs.fd.io/csit/rls1704/report/

Many Thanks to All Contributors and Committers that worked on csit
rls1704 and made it happen !

And of course Many Thanks to VPP contributors and committers that gave
us a solid piece of VPP rls1704 to test and report on :)

Linked report includes results of i) VPP performance and functional
tests, ii) Testpmd performance tests, iii) Honeycomb functional tests,
and iv) reference to VPP unit tests.

Points of note in the report:

- Added tests including Centos7, crypto-HW, cgnat


https://docs.fd.io/csit/rls1704/report/vpp_performance_tests/csit_release_notes.html#changes-in-csit-release

https://docs.fd.io/csit/rls1704/report/vpp_functional_tests/csit_release_notes.html#changes-in-csit-release

https://docs.fd.io/csit/rls1704/report/testpmd_performance_tests/csit_release_notes.html#changes-in-csit-release

- Measured VPP performance improvements


https://docs.fd.io/csit/rls1704/report/vpp_performance_tests/csit_release_notes.html#performance-improvements

- Performance graphs - throughput and latency for VPP and DPDK-Testpmd


https://docs.fd.io/csit/rls1704/report/vpp_performance_tests/packet_throughput_graphs/index.html

https://docs.fd.io/csit/rls1704/report/vpp_performance_tests/packet_latency_graphs/index.html

https://docs.fd.io/csit/rls1704/report/testpmd_performance_tests/packet_throughput_graphs/index.html

https://docs.fd.io/csit/rls1704/report/testpmd_performance_tests/packet_latency_graphs/index.html

- VPP configs used per test case


https://docs.fd.io/csit/rls1704/report/test_configuration/vpp_performance_configuration/index.html

https://docs.fd.io/csit/rls1704/report/test_configuration/vpp_functional_configuration/index.html

- VPP operational data - "show runtime" outputs at NDR rate with CPU core
  clock cycles spent per worker node per packet


https://docs.fd.io/csit/rls1704/report/test_operational_data/vpp_performance_operational_data/index.html

And for those addicted to numbers and stats - here are the total numbers of VPP 
tests per type in CSIT rls1704:

- 360 performance non-drop-rate discovery tests.
- 319 performance partial-drop-rate discovery tests.
- 258 system functional VIRL tests for VPP.
- Executed on demand, per patch, nightly and semi-weekly depending on 
category and requirements.

Please send your feedback and comments by email to csit-...@lists.fd.io.

-Maciek (CSIT PTL)
On behalf of FD.io CSIT project.

P.S. All test data in the report, incl. graphs and tables, are auto-
generated by CSIT scripts. The CSIT project team did their best to ensure
the scripts are bug-free and the content is pleasant to the human eye,
but as the content generated by test executors is dynamic, some errors are
likely. If you see any formatting issues or data inconsistencies,
please report them by email to csit-...@lists.fd.io, so that errors can
be corrected.
___
discuss mailing list
disc...@lists.fd.io
https://lists.fd.io/mailman/listinfo/discuss

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [honeycomb-dev] Honeycomb 1.17.04 released

2017-04-27 Thread Jerome Tollet (jtollet)
Congratulations!


From:  on behalf of "Marek Gradzki -X (mgradzki 
- PANTHEON TECHNOLOGIES at Cisco)" 
Date: Thursday, 27 April 2017 at 17:04
To: honeycomb-dev 
Cc: "t...@lists.fd.io" , "hc2...@lists.fd.io" 
, "csit-...@lists.fd.io" , 
"nsh_sfc-...@lists.fd.io" , "vpp-dev@lists.fd.io" 

Subject: [honeycomb-dev] Honeycomb 1.17.04 released

The honeycomb 1.17.04 release is up.

Honeycomb artifacts can be found on nexus:
https://nexus.fd.io/content/repositories/fd.io.release/io/fd/honeycomb/

More details can be found in release notes:
https://docs.fd.io/honeycomb/1.17.04/release-notes-aggregator/release_notes.html

Many thanks to all contributors, testers and LF infra engineers,

Marek
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] OpenStack networking-vpp 17.04 released

2017-04-25 Thread Jerome Tollet (jtollet)
Dear FD.io community,
The new version of networking-vpp, the OpenStack Neutron ML2 driver for VPP 
17.04, has been released today.
This version contains significant evolutions, including:
- VXLAN-GPE support to set up overlays
- Layer 3 (Neutron routers) with full support for IPv6, floating IPs and SNAT
- State resync to enable seamless component restarts

Many thanks to the VPP team for your support, as well as to the people who 
contributed to networking-vpp development.
You’ll find attached the original announcement sent to the OpenStack mailing 
list.
Jerome

--- Begin Message ---
In conjunction with the release of VPP 17.04, I'd like to invite you all to try 
out networking-vpp for VPP 17.04.  VPP is a fast userspace forwarder based on 
the DPDK toolkit, and uses vector packet processing algorithms to minimise the 
CPU time spent on each packet and maximise throughput.  networking-vpp is an 
ML2 mechanism driver that controls VPP on your control and compute hosts to 
provide fast L2 forwarding under Neutron.

This version has a few additional features:
- resync - this allows you to upgrade the agent while packets continue to flow 
through VPP, and to update VPP and get it promptly reconfigured, and should 
mean you can do maintenance operations on your cloud with little to no network 
service interruption  (per NFV requirements)
- VXLAN GPE - this is a VXLAN overlay with a LISP-based control plane to 
provide horizontally scalable networking with L2FIB propagation.  You can also 
continue to use the standard VLAN and flat networking.
- L3 support - networking-vpp now includes a L3 plugin and driver code within 
the agent to use the L3 functionality of VPP to provide Neutron routers.

Along with this, there have been the usual bug fixes, code and test 
improvements.

The README [1] explains how you can try out VPP using devstack, which is even 
simpler than before: the devstack plugin will deploy etcd, the mechanism driver 
and VPP itself and should give you a working system with a minimum of hassle.

We will continue development between now and VPP's 17.07 release in July.  
There are several features we're planning to work on (you'll find a list in our 
RFE bugs at [2]), and we welcome anyone who would like to come help us.

Everyone is welcome to join our new biweekly IRC meetings, every Monday 
(including next Monday), 0900 PST = 1600 GMT. 

[1]https://github.com/openstack/networking-vpp/blob/master/README.rst
-- 
Ian.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--- End Message ---
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [honeycomb-dev] Honeycomb 1.17.01 Released

2017-01-30 Thread Jerome Tollet (jtollet)
Timely message.
Jerome

From:  on behalf of "Marek Gradzki -X (mgradzki 
- PANTHEON TECHNOLOGIES at Cisco)" 
Date: Monday, 30 January 2017 at 17:00
To: honeycomb-dev 
Cc: "t...@lists.fd.io" , "hc2...@lists.fd.io" 
, "csit-...@lists.fd.io" , 
"nsh_sfc-...@lists.fd.io" , "vpp-dev@lists.fd.io" 

Subject: [honeycomb-dev] Honeycomb 1.17.01 Released

The honeycomb 1.17.01 release is up.

Honeycomb artifacts can be found on nexus:
https://nexus.fd.io/content/repositories/fd.io.release/io/fd/honeycomb/

Honeycomb is a generic NETCONF/RESTCONF management agent.
The VPP-specific distribution of Honeycomb was moved to the hc2vpp project (also 
to be released today).

More details can be found in release notes:
https://docs.fd.io/honeycomb/1.17.01/release-notes-aggregator/release_notes.html

Many thanks to all contributors and testers,

Marek

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] OpenStack networking-vpp 17.01

2017-01-26 Thread Jerome Tollet (jtollet)
Dear FD.io community,
The new version of networking-vpp, the OpenStack Neutron ML2 driver for VPP 
17.01, has been released today.
This version contains significant evolutions, including support for Security 
Groups based on VPP stateful ACLs.
Many thanks to the VPP team for your support, as well as to the people who 
contributed to networking-vpp development.

You’ll find attached the original announcement sent to the OpenStack mailing 
list.

Jerome

--- Begin Message ---
In conjunction with the release of VPP 17.01, I'd like to invite you all to try 
out networking-vpp for VPP 17.01.  VPP is a fast userspace forwarder based on 
the DPDK toolkit, and uses vector packet processing algorithms to minimise the 
CPU time spent on each packet and maximise throughput.  networking-vpp is an 
ML2 mechanism driver that controls VPP on your control and compute hosts to 
provide fast L2 forwarding under Neutron.

The latest version has been updated to work with the new features of VPP 17.01, 
including security group support based on VPP's ACL functionality. 

The README [1] explains how you can try out VPP using devstack, which is now 
pleasantly simple; the devstack plugin will deploy the mechanism driver and VPP 
itself and should give you a working system with a minimum of hassle.

We plan on continuing development between now and VPP's 17.04 release in April. 
 There are several features we're planning to work on (you'll find a list in 
our RFE bugs at [2]), and we welcome anyone who would like to come help us.

Everyone is welcome to join our new biweekly IRC meetings, Monday 0800 PST = 
1600 GMT, due to start next Monday.
-- 
Ian.

[1]https://github.com/openstack/networking-vpp/blob/17.01/README.rst
[2]https://bugs.launchpad.net/networking-vpp/+bugs?orderby=milestone_name&start=0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--- End Message ---
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Interface bonding (binary APIs)

2016-11-10 Thread Jerome Tollet (jtollet)
Thanks for this input. That’s helpful.
Jerome

From: "murali Venkateshaiah (muraliv)" 
Date: Thursday, 10 November 2016 at 16:15
To: Jerome Tollet , "Frank Brockners (fbrockne)" 
, "Maros Marsalek -X (mmarsale - PANTHEON TECHNOLOGIES at 
Cisco)" 
Cc: vpp-dev 
Subject: Re: [vpp-dev] Interface bonding (binary APIs)


Quick FYI: at least from our experience with NFVi customers where bonding is 
enabled, runtime changes aren’t a requirement.
VPP bonding configured at startup has been good enough.

From: <vpp-dev-boun...@lists.fd.io> on behalf of "Jerome Tollet (jtollet)" <jtol...@cisco.com>
Date: Thursday, November 10, 2016 at 6:53 AM
To: "Frank Brockners (fbrockne)" <fbroc...@cisco.com>, "Maros Marsalek -X (mmarsale - 
PANTHEON TECHNOLOGIES at Cisco)" <mmars...@cisco.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Interface bonding (binary APIs)

Frank,
I am not aware of customers changing their bonding configuration at runtime. Of 
course, we could always imagine scenarios, but nothing concrete. It would be good 
to get input from real users on this topic.
Anyway, my email below was just pointing out design differences.
Jerome

From: "Frank Brockners (fbrockne)" <fbroc...@cisco.com>
Date: Thursday, 10 November 2016 at 15:37
To: Jerome Tollet <jtol...@cisco.com>, "Maros Marsalek -X (mmarsale - 
PANTHEON TECHNOLOGIES at Cisco)" <mmars...@cisco.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: RE: [vpp-dev] Interface bonding (binary APIs)

Jerome,

quick question: In which case do you see customers changing the configuration 
for link-aggregation/interface-bonding at runtime? I would typically see that 
as an install-time feature, which is why even with OVS things are done through 
the installer.

Thanks, Frank

From: Jerome Tollet (jtollet)
Sent: Thursday, 10 November 2016 10:47
To: Frank Brockners (fbrockne) <fbroc...@cisco.com>; Maros Marsalek -X 
(mmarsale - PANTHEON TECHNOLOGIES at Cisco) <mmars...@cisco.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Interface bonding (binary APIs)

Frank, Maros,
One important difference I see between OVS-DPDK and VPP is that VPP's bonding 
support relies on DPDK. AFAIK, all DPDK parameters are set at startup, and it 
is then impossible to modify them.
On the other hand, OVS-DPDK provides its own implementation of bonding and 
LACP, so it is possible to modify bonded interfaces at runtime.
Jerome

From: <vpp-dev-boun...@lists.fd.io> on behalf of "Frank Brockners (fbrockne)" <fbroc...@cisco.com>
Date: Wednesday, 9 November 2016 at 14:47
To: "Maros Marsalek -X (mmarsale - PANTHEON TECHNOLOGIES at Cisco)" 
<mmars...@cisco.com>, vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Interface bonding (binary APIs)

Hi Maros,

to broaden the question: Which way do we want to go to configure “interface 
bonding”?

From a solutions stack perspective, we need the installer to configure bonding 
as part of the network setup. Installers like Fuel or TripleO/APEX do this 
today for OVS.  In case of FDS, we need to have the mechanics in TripleO/Apex, 
i.e.

· Have a config option similar to “BondInterfaceOvsOptions” for VPP in 
APEX/TripleO, e.g. BondInterfaceVPPOptions
(see e.g. 
http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/network_isolation.html)
 along with the associated puppet manifests.

· These puppet manifests would be expected to drive the associated 
config on VPP/DPDK. We could deal with CLI, but this is less desirable. Config 
through HC would be the obvious choice from a FDS perspective.
That said, we have systems (like the ML2-VPP based setup), where we don’t have 
HC present (yet), but would still need interface bonding to be configured 
through TripleO/APEX.
This somewhat leads to the more general question how we want VPP system level 
config to be driven while avoiding duplicated implementations. Do we default to 
CLI, or do we default to HC, or what would be the common denominator?

Thanks, Frank


From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Maros Marsalek -X (mmarsale - PANTHEON TECHNOLOGIES at Cisco)
Sent: Thursday, 3 November 2016 12:29
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: [vpp-dev] Interface bonding (binary APIs)

Hey,

VPP supports interface bonding and can be configured using DPDK configuration 
(https://wiki.fd.io/view/VPP/Command-line_Arguments#.22dpdk.22_parameters).

Is there any support for interface bonding over binary APIs ?

Thanks,
Maros
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Interface bonding (binary APIs)

2016-11-10 Thread Jerome Tollet (jtollet)
Frank,
I am not aware of customers changing their bonding configuration at runtime. Of 
course, we could always imagine scenarios, but nothing concrete. It would be good 
to get input from real users on this topic.
Anyway, my email below was just pointing out design differences.
Jerome

From: "Frank Brockners (fbrockne)" 
Date: Thursday, 10 November 2016 at 15:37
To: Jerome Tollet , "Maros Marsalek -X (mmarsale - PANTHEON 
TECHNOLOGIES at Cisco)" 
Cc: vpp-dev 
Subject: RE: [vpp-dev] Interface bonding (binary APIs)

Jerome,

quick question: In which case do you see customers changing the configuration 
for link-aggregation/interface-bonding at runtime? I would typically see that 
as an install-time feature, which is why even with OVS things are done through 
the installer.

Thanks, Frank

From: Jerome Tollet (jtollet)
Sent: Thursday, 10 November 2016 10:47
To: Frank Brockners (fbrockne) ; Maros Marsalek -X 
(mmarsale - PANTHEON TECHNOLOGIES at Cisco) 
Cc: vpp-dev 
Subject: Re: [vpp-dev] Interface bonding (binary APIs)

Frank, Maros,
One important difference I see between OVS-DPDK and VPP is that VPP Bonding 
supports relies on DPDK. AFAIK, all DPDK parameters are set at startup and it 
is then impossible to modify them.
On the other side, OVS-DPDK provides its own implementation of bonding and 
LCAP. So it is possible to modify bonded interfaces at runtime.
Jerome

From: <vpp-dev-boun...@lists.fd.io> on behalf of "Frank Brockners (fbrockne)" <fbroc...@cisco.com>
Date: Wednesday, 9 November 2016 at 14:47
To: "Maros Marsalek -X (mmarsale - PANTHEON TECHNOLOGIES at Cisco)" 
<mmars...@cisco.com>, vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Interface bonding (binary APIs)

Hi Maros,

to broaden the question: Which way do we want to go to configure “interface 
bonding”?

From a solutions stack perspective, we need the installer to configure bonding 
as part of the network setup. Installers like Fuel or TripleO/APEX do this 
today for OVS.  In case of FDS, we need to have the mechanics in TripleO/Apex, 
i.e.

· Have a config option similar to “BondInterfaceOvsOptions” for VPP in 
APEX/TripleO, e.g. BondInterfaceVPPOptions
(see e.g. 
http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/network_isolation.html)
 along with the associated puppet manifests.

· These puppet manifests would be expected to drive the associated 
config on VPP/DPDK. We could deal with CLI, but this is less desirable. Config 
through HC would be the obvious choice from a FDS perspective.
That said, we have systems (like the ML2-VPP based setup), where we don’t have 
HC present (yet), but would still need interface bonding to be configured 
through TripleO/APEX.
This somewhat leads to the more general question how we want VPP system level 
config to be driven while avoiding duplicated implementations. Do we default to 
CLI, or do we default to HC, or what would be the common denominator?

Thanks, Frank


From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Maros Marsalek -X (mmarsale - PANTHEON TECHNOLOGIES at Cisco)
Sent: Thursday, 3 November 2016 12:29
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: [vpp-dev] Interface bonding (binary APIs)

Hey,

VPP supports interface bonding and can be configured using DPDK configuration 
(https://wiki.fd.io/view/VPP/Command-line_Arguments#.22dpdk.22_parameters).

Is there any support for interface bonding over binary APIs ?

Thanks,
Maros
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Interface bonding (binary APIs)

2016-11-10 Thread Jerome Tollet (jtollet)
Frank, Maros,
One important difference I see between OVS-DPDK and VPP is that VPP's bonding 
support relies on DPDK. AFAIK, all DPDK parameters are set at startup, and it 
is then impossible to modify them.
On the other hand, OVS-DPDK provides its own implementation of bonding and 
LACP, so it is possible to modify bonded interfaces at runtime.
Jerome

From:  on behalf of "Frank Brockners (fbrockne)" 

Date: Wednesday, 9 November 2016 at 14:47
To: "Maros Marsalek -X (mmarsale - PANTHEON TECHNOLOGIES at Cisco)" 
, vpp-dev 
Subject: Re: [vpp-dev] Interface bonding (binary APIs)

Hi Maros,

to broaden the question: Which way do we want to go to configure “interface 
bonding”?

From a solutions stack perspective, we need the installer to configure bonding 
as part of the network setup. Installers like Fuel or TripleO/APEX do this 
today for OVS.  In case of FDS, we need to have the mechanics in TripleO/Apex, 
i.e.

· Have a config option similar to “BondInterfaceOvsOptions” for VPP in 
APEX/TripleO, e.g. BondInterfaceVPPOptions
(see e.g. 
http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/network_isolation.html)
 along with the associated puppet manifests.

· These puppet manifests would be expected to drive the associated 
config on VPP/DPDK. We could deal with CLI, but this is less desirable. Config 
through HC would be the obvious choice from a FDS perspective.
That said, we have systems (like the ML2-VPP based setup), where we don’t have 
HC present (yet), but would still need interface bonding to be configured 
through TripleO/APEX.
This somewhat leads to the more general question how we want VPP system level 
config to be driven while avoiding duplicated implementations. Do we default to 
CLI, or do we default to HC, or what would be the common denominator?

Thanks, Frank


From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Maros Marsalek -X (mmarsale - PANTHEON TECHNOLOGIES at Cisco)
Sent: Thursday, 3 November 2016 12:29
To: vpp-dev 
Subject: [vpp-dev] Interface bonding (binary APIs)

Hey,

VPP supports interface bonding and can be configured using DPDK configuration 
(https://wiki.fd.io/view/VPP/Command-line_Arguments#.22dpdk.22_parameters).

Is there any support for interface bonding over binary APIs ?

Thanks,
Maros
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] updated ovs vs. vpp results for 0.002% and 0% loss

2016-10-24 Thread Jerome Tollet (jtollet)
+ Pierre Pfister (ppfister) who ran a lot of benchmarks for VPP/vhostuser

From:  on behalf of Thomas F Herbert 

Date: Monday, 24 October 2016 at 21:32
To: "kris...@redhat.com" , Andrew Theurer 
, Franck Baudin , Rashid Khan 
, Bill Michalowski , Billy McFall 
, Douglas Shakshober 
Cc: vpp-dev , "Damjan Marion (damarion)" 

Subject: Re: [vpp-dev] updated ovs vs. vpp results for 0.002% and 0% loss


+Maciek Konstantynowicz CSIT (mkonstan)

+vpp-dev

+Damjan Marion (damarion)

Karl, Thanks!

Your results seem roughly consistent with VPP's CSIT vhost testing for 16.09, 
but for broader visibility I am including some people from the VPP team: Damjan, 
who is working on multi-queue etc. (I see that some perf-related vhost patches 
that might help have been merged since 16.09), and Maciek, who works on the CSIT 
project and has done the testing of VPP.

I want to open up the discussion with regard to the following:

1. Optimizing for maximum vhost performance with VPP, including vhost-user multi-queue.

2. Comparison with CSIT results for vhost. Following are two links for CSIT.

3. Statistics:

4. Tuning suggestions:

Following are some CSIT results:

compiled 16.09 results for vhost-user: 
https://wiki.fd.io/view/CSIT/VPP-16.09_Test_Report#VM_vhost-user_Throughput_Measurements

Latest CSIT output from top of master, 16.12-rc0

https://jenkins.fd.io/view/csit/job/csit-vpp-verify-perf-master-nightly-all/1085/console

--Tom
On 10/21/2016 04:06 PM, Karl Rister wrote:

Hi All



Below are updated performance results for OVS and VPP on our new

Broadwell testbed.  I've tried to include all the relevant details, let

me know if I have forgotten anything of interest to you.



Karl







Processor: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell)

Environment: RT + Hyperthreading (see [1] for details on KVM-RT)

Kernel: 3.10.0-510.rt56.415.el7.x86_64

Tuned: 2.7.1-3.el7



/proc/cmdline:

<...> default_hugepagesz=1G iommu=pt intel_iommu=on isolcpus=4-55

nohz=on nohz_full=4-55 rcu_nocbs=4-55 intel_pstate=disable nosoftlockup



Versions:

- OVS: openvswitch-2.5.0-10.git20160727.el7fdb + BZ fix [2]

- VPP: v16.09



NUMA node 0 CPU sibling pairs:

- (0,28)(2,30)(4,32)(6,34)(8,36)(10,38)(12,40)(14,42)(16,44)(18,46)

  (20,48)(22,50)(24,52)(26,54)



Host PMD Assignment:

- dpdk0 = CPU 6

- vhost-user1 = CPU 34

- dpdk1 = CPU 8

- vhost-user2 = CPU 36



Guest CPU Assignment:

- Emulator = CPU 20

- VCPU 0 (Housekeeping) = CPU 22

- VCPU 1 (PMD) = CPU 24

- VCPU 2 (PMD) = CPU 26



Configuration Details:

- OVS: custom OpenFlow rules direct packets similarly to VPP L2 xconnect

- VPP: L2 xconnect

- DPDK v16.07.0 testpmd in guest

- SCHED_FIFO priority 95 applied to all PMD threads (OVS/VPP/testpmd)

- SCHED_FIFO priority 1 applied to Guest VCPUs used for PMDs



Test Parameters:

- 64B packet size

- L2 forwarding test

  - All tests are bidirectional PVP (physical<->virtual<->physical)

  - Packets enter on a NIC port and are forwarded to the guest

  - Inside the guests, received packets are sent out the opposite

direction

- Binary search starting at line rate (14.88 Mpps each way)

- 10 Minute Search Duration

- 2 Hour Validation Duration follows passing run for 10 Minute Search

  - If validation fails, search continues



Mergeable Buffers Disabled:

- OVS:

  - 0.002% Loss: 11.5216 Mpps bidirectional (5.7608 Mpps each way)

  - 0% Loss: 11.5216 Mpps bidirectional (5.7608 Mpps each way)

- VPP:

  - 0.002% Loss: 7.5537 Mpps bidirectional (3.7769 Mpps each way)

  - 0% Loss: 5.2971 Mpps bidirectional (2.6486 Mpps each way)



Mergeable Buffers Enabled:

- OVS:

  - 0.002% Loss: 6.5626 Mpps bidirectional (3.2813 Mpps each way)

  - 0% Loss: 6.3622 Mpps bidirectional (3.1811 Mpps each way)

- VPP:

  - 0.002% Loss: 7.8134 Mpps bidirectional (3.9067 Mpps each way)

  - 0% Loss: 5.1029 Mpps bidirectional (2.5515 Mpps each way)



Mergeable Buffers Disabled + VPP no-multi-seg:

- VPP:

  - 0.002% Loss: 8.0654 Mpps bidirectional (4.0327 Mpps each way)

  - 0% Loss: 5.6442 Mpps bidirectional (2.8221 Mpps each way)



The details of these results (including latency metrics and links to the

raw data) are available at [3].



[1]: https://virt-wiki.lab.eng.brq.redhat.com/KVM/RealTime

[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1344787

[3]:

https://docs.google.com/a/redhat.com/spreadsheets/d/1K6zDVgZYPJL-7EsIYMBIZCn65NAkVL_GtkBrAnAdXao/edit?usp=sharing



--
Thomas F Herbert
SDN Group
Office of Technology
Red Hat
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev