Hi Alec,

Please see answers inline.

Thanks,
Gabriel


From: Alec Hothan (ahothan) [mailto:ahot...@cisco.com]
Sent: Tuesday, May 09, 2017 11:11 PM
To: Yuyang (Gabriel); morgan.richo...@orange.com; Cooper, Trevor; 
opnfv-tech-discuss@lists.opnfv.org; test...@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] On Stress Test Demo//RE: [test-wg] Notes from 
OPNFV Plugfest meeting - "Testing group Euphrates collaborative work"

Hi Gabriel,



From: "Yuyang (Gabriel)" 
<gabriel.yuy...@huawei.com<mailto:gabriel.yuy...@huawei.com>>
Date: Friday, May 5, 2017 at 12:06 AM
To: "Alec Hothan (ahothan)" <ahot...@cisco.com<mailto:ahot...@cisco.com>>, 
"morgan.richo...@orange.com<mailto:morgan.richo...@orange.com>" 
<morgan.richo...@orange.com<mailto:morgan.richo...@orange.com>>, "Cooper, 
Trevor" <trevor.coo...@intel.com<mailto:trevor.coo...@intel.com>>, 
"opnfv-tech-discuss@lists.opnfv.org<mailto:opnfv-tech-discuss@lists.opnfv.org>" 
<opnfv-tech-discuss@lists.opnfv.org<mailto:opnfv-tech-discuss@lists.opnfv.org>>,
 "test...@lists.opnfv.org<mailto:test...@lists.opnfv.org>" 
<test...@lists.opnfv.org<mailto:test...@lists.opnfv.org>>
Subject: RE: [opnfv-tech-discuss] On Stress Test Demo//RE: [test-wg] Notes from 
OPNFV Plugfest meeting - "Testing group Euphrates collaborative work"

Hi Alec,

Please see my answers inline!

Thanks!
Gabriel

From: Alec Hothan (ahothan) [mailto:ahot...@cisco.com]
Sent: Friday, May 05, 2017 11:11 AM
To: Yuyang (Gabriel); morgan.richo...@orange.com; Cooper, Trevor; 
opnfv-tech-discuss@lists.opnfv.org; test...@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] On Stress Test Demo//RE: [test-wg] Notes from 
OPNFV Plugfest meeting - "Testing group Euphrates collaborative work"

Hi Gabriel,

Regarding TC1, do you have the results and original chart available somewhere? 
(The chart in the picture and in the slides is kind of small.)
Gabriel: We no longer have the original results and chart, since they came from 
a one-time local test. We have discussed the results plotting within Testperf.
The test results will be shown in the community ELK stack maintained by 
Bitergia in Q2.
Some results can be found in the community CI, e.g., 
https://build.opnfv.org/ci/view/bottlenecks/job/bottlenecks-compass-posca_stress_traffic-baremetal-daily-danube/5/console

[Alec] It seems there are about 16 “runners” returning netperf results in a span 
of 10 minutes (at unequal intervals) with ~941 Mbps of throughput (presumably 
that would be TCP from VM guest user space to VM guest user space).

Gabriel: In TC1, no VM pair is created. One compute node is installed with the 
netperf client and another with netserver (TCP is used). The sizes of the sent 
and received packets increase iteratively during the test. The sizes are 
configured as follows (lines 14-15 of 
testsuites/posca/testcase_cfg/posca_factor_system_bandwidth.yaml, see 
https://gerrit.opnfv.org/gerrit/gitweb?p=bottlenecks.git;a=blob;f=testsuites/posca/testcase_cfg/posca_factor_system_bandwidth.yaml;h=de2966b7ee262cfa5915eb4ebfbc58f81f41151a;hb=HEAD#l14):

   tx_pkt_sizes: 64, 256, 1024, 4096, 8192, 16384, 32768, 65536
   rx_pkt_sizes: 64, 256, 1024, 4096, 8192, 16384, 32768, 65536

For each (tx_pkt_size, rx_pkt_size) pair, the throughput is measured. The test 
stops once it detects the traffic limit.
The packet sizes/intervals can be flexibly configured for different purposes. 
For the CI pipeline, we use the above settings mainly to save time.
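The iteration described above can be sketched as follows. This is only an illustrative sketch, not the actual Bottlenecks code: `measure_throughput` is a hypothetical placeholder for a real netperf run, the sizes are stepped in lockstep for simplicity, and the 5% plateau margin is an assumed stop criterion.

```python
# Sketch of the TC1 search: step through the configured packet sizes and
# stop once throughput stops improving, i.e. the traffic limit is detected.
PKT_SIZES = [64, 256, 1024, 4096, 8192, 16384, 32768, 65536]

def find_traffic_limit(measure_throughput, margin=0.05):
    """Return (tx, rx, throughput_mbps) at which throughput plateaus.

    measure_throughput(tx, rx) is a placeholder for one netperf run
    with the given send/receive sizes.
    """
    best = 0.0
    for tx, rx in zip(PKT_SIZES, PKT_SIZES):
        mbps = measure_throughput(tx, rx)
        if best and mbps < best * (1 + margin):
            # Less than `margin` gain over the previous best:
            # treat this as the traffic limit and stop.
            return tx, rx, mbps
        best = max(best, mbps)
    return tx, rx, best  # no plateau found within the configured sizes
```
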

Do you also have a detailed description of the test:
Gabriel: I assume you are asking for details about TC3 here, since we only run 
TC1 on a vPOD or bare metal POD.

[Alec] ok it was not clear that TC3 also does throughput testing.

                Gabriel: Sorry for the unclear message here. In TC1, no VM pair 
is created; netperf is installed on the virtual/bare metal compute nodes.
In TC3, multiple VM pairs are expected, and they only do a ping test.


·         what kind of packet gen you are running in the first VM and what kind 
of forwarder you are running in the second VM
Gabriel: For TC3, we use netperf to send packets and retransmit them in the 
second VM.

[Alec] so that would be netperf client in VM1 and netperf server in VM2, 
unidirectional traffic.

                       Gabriel: First, sorry for the misleading information and 
typos. Yes, for TC1, netperf and netserver are installed on separate compute 
nodes and TCP is used.


·         what flavor
Gabriel: Bottlenecks calls Yardstick to run each test; the flavor is specified 
in Yardstick as yardstick-flavor: nova flavor-create yardstick-flavor 100 512 3 
1 (flavor ID 100, 512 MB RAM, 3 GB disk, 1 vCPU).

·         what vswitch was used
Gabriel: OVS is used. If you have any recommendations, please let us know :)

[Alec] no special recommendation.


·         how are the VMs placed (wrt compute node, numa node)
Gabriel: VMs are created using the heat client without specifying how or where 
to place them.

·         I assume the test will ramp up the number of VM pairs, if so how do 
you synchronize your VM starts/measurements (VMs don’t tend to all start the 
test at the same time due to the latency for bringing up individual VMs)
        Gabriel: Yes, the VMs will not start to ping simultaneously. We can 
only send the requests to create all the VMs simultaneously, wait for the 
creation to finish, and then start the ping operation.
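The create-then-wait pattern described above can be sketched like this. It is a minimal illustration, not the actual Bottlenecks code: `create_stack` and `stack_status` are hypothetical stand-ins for Heat API calls.

```python
import time

def create_and_wait(create_stack, stack_status, count, timeout=600, poll=5):
    """Fire all stack-creation requests back to back, then poll until every
    stack reports CREATE_COMPLETE (or the timeout expires) before pinging.

    create_stack(i) and stack_status(stack_id) are placeholders for the
    real Heat client calls; they are not part of the Bottlenecks API.
    """
    ids = [create_stack(i) for i in range(count)]  # all requests sent at once
    deadline = time.time() + timeout
    pending = set(ids)
    while pending and time.time() < deadline:
        pending = {s for s in pending if stack_status(s) != 'CREATE_COMPLETE'}
        if pending:
            time.sleep(poll)
    # Only stacks that finished creating take part in the ping step.
    return [s for s in ids if s not in pending]
```

This captures why the measurements are not tightly synchronized: creation latency varies per VM, so the ping phase only starts once all stacks report ready.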

·         How do you track drop rate and latency across VM pairs
Gabriel: I assume you mean the packet drop rate and latency here. For these 
metrics, we use netperf to do the monitoring.

[Alec] I was more interested in the throughput test. From what I can gather in 
the Jenkins log, it looks like there is no special synchronization between 
netperf clients, so they just start as soon as they’re able to send (and the 
server is ready).
The placement of the VMs does not seem to be tightly controlled, making it 
possible for some VM pairs to co-reside on the same compute node (which can 
impact the results).
The unequal intervals of the reporting show that there is a phasing of the test 
start over time, which makes it difficult to guarantee that the reported 
throughputs are all concurrent (they are likely only partially overlapping).
So it is not easy to make good use of the results for concurrent VM-to-VM 
traffic tests under those conditions.

Gabriel: Since in TC1 only one netperf client and one netserver are installed, 
we do not do any synchronization. Netperf and netserver are installed on 
different compute nodes, so there is no concurrent-VM issue for TC1 either.

I see from the summary below that this stress test demo is perhaps more geared 
towards control plane stress testing. What is called a “stack” is, I think, a 
VM pair.
Do you have the detailed log or results of these tests somewhere?

Gabriel: “Stack” here refers to an OpenStack stack with a VM pair in it. We are 
planning to generate more VMs within a stack in the Euphrates release and get 
rid of the use of floating IPs.
For detailed log example, please refer to 
https://build.opnfv.org/ci/view/bottlenecks/job/bottlenecks-compass-posca_stress_ping-virtual-daily-danube/6/console
If you have further questions, feel free to ask, and welcome to the Bottlenecks 
weekly meetings! :)


Thanks

   Alec





Gabriel: You could also run the test locally. For more information, please 
refer to the link below.
Bottlenecks Testing Guide: 
http://docs.opnfv.org/en/stable-danube/submodules/bottlenecks/docs/testing/developer/devguide/posca_guide.html



Thanks!

  Alec





From: <opnfv-tech-discuss-boun...@lists.opnfv.org> on behalf of "Yuyang 
(Gabriel)" <gabriel.yuy...@huawei.com>
Date: Thursday, May 4, 2017 at 7:48 PM
To: "morgan.richo...@orange.com" <morgan.richo...@orange.com>, "Cooper, 
Trevor" <trevor.coo...@intel.com>, 
"opnfv-tech-discuss@lists.opnfv.org" <opnfv-tech-discuss@lists.opnfv.org>, 
"test...@lists.opnfv.org" <test...@lists.opnfv.org>
Subject: [opnfv-tech-discuss] On Stress Test Demo//RE: [test-wg] Notes from 
OPNFV Plugfest meeting - "Testing group Euphrates collaborative work"

Hi,

For the stress test demo, we have added more details.
The YouTube link is provided below:
https://youtu.be/TPd4NZr__HI
The slide deck is uploaded to the wiki page: 
https://wiki.opnfv.org/display/bottlenecks/Sress+Testing+over+OPNFV+Platform

The demo contents and results are briefly summarized below.

•  Testing Contents
   o  Executing the stress test and providing comparison results for different 
      installers (Installer A and Installer B)
      -  Up to 100 stacks for Installer A (completes the test)
      -  Up to 40 stacks for Installer B (the system fails to complete the test)
   o  Testing Steps
      -  Enter the Bottlenecks repo: cd /home/opnfv/bottlenecks
      -  Prepare the virtual environment: . pre_virt_env.sh
      -  Execute the ping test case: bottlenecks testcase run posca_factor_ping
      -  Clean up the virtual environment: . rm_virt_env.sh
•  Testing Results
   o  Testing for Installer A
      -  Up to 100 stacks in the configuration file for Installer A
      -  1 stack SSH error when the number of stacks is raised to 50
      -  When the stack number reaches 100, most of the errors are heat 
         response timeouts
      -  100 stacks are established successfully in the end
   o  Testing for Installer B
      -  Up to 40 stacks in the configuration file for Installer B
      -  When the stack number reaches 30, the system fails to create all the 
         stacks
      -  21 stacks either fail to be created or remain stuck in creation
      -  To verify the system performance, we choose to clean up and run the 
         test again
      -  When the stack number reaches 20, the same situation happens as in 
         the last test
      -  The system performance degrades
      -  Different from the test for Installer A, we do the verification step 
         because the system clearly malfunctions
      -  What is not shown in the demo is that after 3 rounds of the stress 
         test, the system fails to create even 5 stacks
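The ramp-up behaviour summarized above (increase the stack count until the system can no longer create them all) can be sketched as below. This is only an illustration of the pattern, not the Bottlenecks implementation: `create_stacks` is a hypothetical stand-in that returns how many of the requested stacks actually came up, and the step size is an assumed parameter.

```python
def ramp_until_failure(create_stacks, step=10, limit=100):
    """Increase the requested stack count by `step` until create_stacks(n)
    reports fewer stacks than requested; return the last fully successful
    count. create_stacks is a placeholder for one full test round."""
    last_ok = 0
    for n in range(step, limit + step, step):
        if create_stacks(n) < n:
            break  # the system could not bring up all n stacks
        last_ok = n
    return last_ok
```

For example, a system that can only sustain 25 stacks would pass the 10- and 20-stack rounds and fail at 30, so the reported capacity would be 20.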

Best,
Gabriel

From: test-wg-boun...@lists.opnfv.org [mailto:test-wg-boun...@lists.opnfv.org] 
On Behalf Of morgan.richo...@orange.com
Sent: Tuesday, May 02, 2017 3:47 PM
To: Cooper, Trevor; opnfv-tech-discuss@lists.opnfv.org; 
test...@lists.opnfv.org
Subject: Re: [test-wg] [opnfv-tech-discuss] Notes from OPNFV Plugfest meeting - 
"Testing group Euphrates collaborative work"

Thanks Trevor

I added some points reflecting the notes for the next test weekly meeting 
planned on the 4th of May: https://wiki.opnfv.org/display/meetings/TestPerf
Feel free to add additional points.
I will not be able to join this time; could you chair the meeting?

/Morgan


On 28/04/2017 at 16:36, Cooper, Trevor wrote:
Status of Danube and what improved

            1. landing page
                        - 
http://testresults.opnfv.org/reporting2/reporting/index.html
                        - 
http://testresults.opnfv.org/reporting2/reporting/index.html#!/landingpage/table
                                    § Meaning of info displayed? Test ran, test 
passed … agree for consistency … TBD

            2. catalogue - diagram with roll-over
                        - 
http://testresults.opnfv.org/reporting2/reporting/index.html#!/select/visual
                        - All PTLs to review if test cases are valid and if not 
remove
                        - Add short description that is human readable (add API 
field)
                        ○ Define test domain categories - start by using labels 
on test ecosystem diagram

            3. Stress tests - video of presentation is available - simultaneous 
vPING which increases until system fails

            4. Documentation
                        ○ Ecosystem diagram
                        ○ Testing guide
                        ○ Add agenda item to TWG in June (2 sessions on how to 
improve)
                        ○ New test projects … add to common docs
                        ○ Developer guide - move to docs

            5. Reporting status / dashboard
                        ○ Bottlenecks TBD?

            6. Bitergia
                        ○ Morgan meeting 10th May
                        ○ 
https://wiki.opnfv.org/display/testing/Result+alignment+for+ELK+post-processing
                        ○ Add to TWG agenda next week to revisit

Wish list for Infra group

            - One POD dedicated to long duration tests (with reservation 
mechanism)

            - Per installer
                        ○ Stable of previous release
                        ○ Master

            - OPNFV POD on demand for tester (before merging)
                        ○ Per installer if possible
                        ○ Infra group working on it
                        ○ Today one can file a ticket to ask if there is a free 
resource

Micro services
            - Deploy VNF
                        ○ Retrieve image
                        ○ Deploy
                        ○ Prebuild image with tools - standardise on a framework
                        ○ Take image and copy over tools - Ansible (infra doing 
this with Open Stack Ansible)
                        ○ Catalog of roles for Ansible?
                                    § Ansible Galaxy is a tool for deploying 
and managing roles
                                                □ e.g. Install TREX TG
                                                □ Turn on live migration in Nova

            - Test generator?

            - Collect / display results
                        ○ Test API and DB
                        ○ Reuse collectd lib for VNF KPIs

            - Analytics of results

            - What APIs to expose to other test projects?
                        ○ Functest
                                    § Deploy orchestrator VNF
                                    § Use Traffic gen and generate load


/Trevor





_______________________________________________
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
