Hi Pasi,

An RFC-2544 sweep can indeed be nice to do. The NFVbench tool will be able to 
support any frame size list in the sweep, so it is possible to add as many 
sizes as desired, as long as the additional time needed to collect all this 
data is not a problem.
With the default configuration, an NDR-PDR measurement takes about 15 minutes 
to complete (per packet path). This can vary based on a few parameters that 
control the convergence algorithm (which is derived from the one used in FD.io 
CSIT but optimized for speed).
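
For readers unfamiliar with this class of algorithm, the classic shape of an NDR/PDR search is a binary search over the offered rate. The sketch below is purely illustrative: the toy device model and all parameter names are made up, and NFVbench's actual implementation differs in its details.

```python
# Illustrative sketch of an RFC-2544-style NDR/PDR rate search.
# The device under test is simulated by a toy function; in a real tool
# (NFVbench, FD.io CSIT) each probe is a timed traffic run.

def measure_drop_pct(offered_rate_pct):
    """Toy DUT model (hypothetical): drops appear above 73% of line rate."""
    capacity = 73.0
    return 0.0 if offered_rate_pct <= capacity else offered_rate_pct - capacity

def search_rate(max_drop_pct, lo=0.0, hi=100.0, precision=0.1):
    """Binary-search the highest offered rate (% of line rate) whose
    measured drop rate stays at or below max_drop_pct."""
    best = lo
    while hi - lo > precision:
        mid = (lo + hi) / 2.0
        if measure_drop_pct(mid) <= max_drop_pct:
            best, lo = mid, mid   # probe passed: search higher
        else:
            hi = mid              # probe failed: search lower
    return best

ndr = search_rate(max_drop_pct=0.0)   # NDR: zero drop tolerance
pdr = search_rate(max_drop_pct=0.1)   # PDR: small drop tolerance (e.g. 0.1%)
print(f"NDR ~ {ndr:.1f}% of line rate, PDR ~ {pdr:.1f}%")
```

The search precision and drop tolerances bound how many probe runs are needed, which is the kind of trade-off the tuning parameters mentioned above control.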

What is meant by “well-defined subset of RFC-2544” is a simpler sweep with 
[64, IMIX, 1518] byte frame sizes, which will take less than an hour to run. 
Such a short run already provides a very good view of the capabilities of the 
stack under test.
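
As a side note on the IMIX entry: it is not a single frame size but a weighted mix, and the exact definition varies by tool. Assuming the commonly used simple IMIX (7 parts 64B, 4 parts 570B, 1 part 1518B frames), the average frame size works out to roughly 354 bytes:

```python
# Average frame size of the commonly used simple IMIX.
# The 7:4:1 weights over 64/570/1518 byte frames are an assumption;
# actual IMIX definitions vary by traffic generator.
imix = [(64, 7), (570, 4), (1518, 1)]
total_frames = sum(w for _, w in imix)
avg_size = sum(size * w for size, w in imix) / total_frames
print(f"average IMIX frame size: {avg_size:.1f} bytes")  # ~353.8
```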
We have not seen much value in doing finer-grained sweeps relative to the 
extra time they take. Another practical consideration when testing a lot of 
frame sizes is properly analyzing the additional data. The 3 frame sizes above, 
combined with all the other variations (such as packet paths and encapsulation 
types), already generate a very large data set. I think adding even more data 
could hinder understanding for non-expert readers.
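
To make the combinatorics concrete, here is a back-of-the-envelope count of a sweep matrix. The packet path and encapsulation values below are examples only, and the 15-minute figure is the approximate per-run NDR-PDR duration mentioned above:

```python
from itertools import product

# Hypothetical sweep matrix; the path and encap values are examples only.
frame_sizes = ["64", "IMIX", "1518"]
packet_paths = ["PVP", "PVVP"]      # e.g. single-VM and dual-VM chains
encaps = ["VLAN", "VxLAN"]
minutes_per_run = 15                # approximate NDR-PDR run duration

runs = list(product(frame_sizes, packet_paths, encaps))
total_h = len(runs) * minutes_per_run / 60
print(f"{len(runs)} NDR-PDR runs, roughly {total_h:.0f} hours of test time")
```

Doubling the frame size list doubles the total, which is why a 3-size sweep is a practical sweet spot.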

Other aspects of RFC-2544 that we have clarified are:

·         The reporting of results. As a simple example, even the reported 
throughput of an RFC-2544 benchmark is not clearly defined in the spec; I have 
seen benchmarks report throughput (frames per second) in no less than 4 
different ways.

·         Latency reporting

·         Exact frame formatting used (including how flows/sessions are formed)
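
On the throughput-reporting point in the first bullet, the ambiguity is easy to make concrete: one and the same bidirectional 10GbE run (the traffic figures below are made up) can legitimately be reported as at least four different "throughput" numbers:

```python
# One hypothetical bidirectional 10GbE run, reported 4 different ways.
# Each frame on the wire also carries 20B of overhead (preamble + IPG).
frame_size = 64
line_rate_bps = 10e9
line_rate_fps = line_rate_bps / ((frame_size + 20) * 8)  # ~14.88 Mpps/direction

tx_fps_per_dir = 10e6    # offered load per direction (made-up figure)
rx_fps_per_dir = 9.5e6   # received per direction (some frames dropped)

reports = {
    "offered, per direction (fps)":  tx_fps_per_dir,
    "received, per direction (fps)": rx_fps_per_dir,
    "received, aggregate both dirs": 2 * rx_fps_per_dir,
    "received, % of line rate":      100 * rx_fps_per_dir / line_rate_fps,
}
for name, value in reports.items():
    print(f"{name:32s} {value:,.2f}")
```

All four numbers describe the same measurement, which is why the reporting convention has to be pinned down explicitly.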

There are many more benchmarking parameters that are not covered by RFC-2544 
(as they relate more to packet paths and OpenStack Neutron mapping) and that 
NFVbench clarifies.
You mentioned below the fine-grained control of flows (how they are generated 
and timed). That is not yet implemented, but we have started discussions on 
this topic with the TRex team (along with bleeding-edge enhancements in latency 
measurement/reporting).
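
To illustrate what "how flows are generated" means in practice: a common approach (sketched below with made-up addresses and parameters; not necessarily what NFVbench or TRex will do) is to rotate the source/destination IP pair round-robin over a fixed flow count, so that the flow count directly shapes how traffic spreads across the device under test:

```python
import ipaddress
from itertools import islice

def flow_tuples(flow_count, src_base="10.0.0.1", dst_base="20.0.0.1"):
    """Yield (src_ip, dst_ip) pairs round-robin over flow_count flows.
    A made-up illustration of flow formation, not NFVbench's actual scheme."""
    src0 = ipaddress.ip_address(src_base)
    dst0 = ipaddress.ip_address(dst_base)
    i = 0
    while True:
        yield str(src0 + i % flow_count), str(dst0 + i % flow_count)
        i += 1

# First 5 frames of a 4-flow stream: the 5th frame reuses flow 0.
frames = list(islice(flow_tuples(4), 5))
print(frames)
```

How the flows are interleaved and timed (back-to-back bursts vs. evenly paced) is exactly the kind of detail a benchmark spec needs to state.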

Thanks

   Alec



From: Pasi Vaananen <opnfv.pasiv...@gmail.com>
Date: Wednesday, April 12, 2017 at 10:18 AM
To: 'Don Clarke' <d.cla...@cablelabs.com>, "Frank Brockners (fbrockne)" 
<fbroc...@cisco.com>, 'TSC OPNFV' <opnfv-...@lists.opnfv.org>, 'TECH-DISCUSS 
OPNFV' <opnfv-tech-discuss@lists.opnfv.org>
Cc: 'Carsten Rossenhoevel' <cr...@eantc.de>, "Alec Hothan (ahothan)" 
<ahot...@cisco.com>
Subject: RE: [opnfv-tsc] New project proposal: NFVbench

Don, Frank, Carsten,

Agreed. Traffic models and/or anonymized PCAP traces, possibly in different 
application-specific contexts, would be of interest and are currently pretty 
hard to come by, since traffic does vary at the edge (where lots of NFV use 
cases are expected to live) depending on what is being done. (CAIDA used to do 
a great job here, at least at the aggregate level, but it is not so much their 
focus anymore.) Things like the average packet sizes, rates, number of 
sessions, session lengths, delay constraints, etc. associated with specific 
use cases are all interesting information.

While we wait for the data, as an interim observation, RFC-2544-type sweeps 
are a nice way to get an idea of where a given chain stands in terms of 
performance. I am, however, wondering how someone will be able to define “a 
well-defined subset of RFC-2544 that is the most representative of NFV 
traffic”, as “NFV traffic” is expected to vary by position in the network as 
well as by application. Why would we not just do a fuller sweep (or is there 
any further info on what this subset is supposed to be and why)?

Pasi
From: opnfv-tsc-boun...@lists.opnfv.org 
[mailto:opnfv-tsc-boun...@lists.opnfv.org] On Behalf Of Don Clarke
Sent: Wednesday, April 12, 2017 12:46 PM
To: Frank Brockners (fbrockne); TSC OPNFV; TECH-DISCUSS OPNFV
Cc: Carsten Rossenhoevel; Alec Hothan (ahothan)
Subject: Re: [opnfv-tsc] New project proposal: NFVbench

This is a very important area. Being able to validate networking performance 
is a basic requirement for NFV implementations that hasn’t been adequately 
addressed in the rush to apply cloud technologies to networking.

Last week at ONS I was asked an interesting related question on whether there 
could be some broadly agreed traffic models created for validating NFV 
implementations. I’m raising this in the ETSI NFV NOC next week. OPNFV would be 
an obvious beneficiary of such models.

Don.

From: opnfv-tsc-boun...@lists.opnfv.org 
[mailto:opnfv-tsc-boun...@lists.opnfv.org] On Behalf Of Frank Brockners 
(fbrockne)
Sent: Wednesday, April 12, 2017 11:40 AM
To: TSC OPNFV; TECH-DISCUSS OPNFV
Cc: Carsten Rossenhoevel; Alec Hothan (ahothan)
Subject: [opnfv-tsc] New project proposal: NFVbench

Hi OPNFV,

over the past few weeks we’ve distilled a proposal to create a toolkit to 
allow for black-box performance testing of NFVI with a network focus:
NFVbench: https://wiki.opnfv.org/display/nfvbench/NFVbench+Project+Proposal

The NFVbench project will develop a toolkit that allows developers, 
integrators, testers and customers to measure and assess the L2/L3 forwarding 
performance of an NFV infrastructure solution stack (i.e. an OPNFV scenario) 
using a black-box approach.

We’re hoping for a discussion in the technical community meeting on April 20, 
and are also asking for an official TSC review, following the technical 
community review, on May 2, so that NFVbench can participate in Euphrates. 
Consequently, NFVbench asks for tentative inclusion in Euphrates.

Your thoughts and ideas are greatly appreciated.

Thanks much, Frank, Carsten, Alec


_______________________________________________
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
