The recheck failed.
https://jenkins.fd.io/job/vpp-csit-verify-virl-master/1937/console

Are you sure you updated qemu?

Thanks,

- Pierre


On 26 Oct 2016, at 07:32, Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at
Cisco) <pmi...@cisco.com> wrote:

Hello,

[+csit-dev]

After the upgrade of the VIRL images and PhyTB to 16.04.1, and some code
changes, we are now using Qemu v2.5.0 in CSIT (previously v2.2.1). All the
CSIT changes should now be part of oper-161024. The patch to use this branch
in VPP was merged yesterday: https://gerrit.fd.io/r/#/c/3553/

@Pierre: Can you please rebase your commit https://gerrit.fd.io/r/#/c/2922/ and
recheck? Please report back to csit-dev.

If there is a requirement for a version higher than 2.5.0, I suggest opening a
Jira ticket with CSIT.

Thank you.

Peter Mikus
Engineer – Software
Cisco Systems Limited

Planned absence: 28.10., 1.11., 17.11., 9.12., 19.-31.12.

From: vpp-dev-boun...@lists.fd.io On Behalf Of Thomas F Herbert
Sent: Tuesday, October 25, 2016 5:57 PM
To: Edward Warnicke <hagb...@gmail.com>; Pierre Pfister (ppfister) <ppfis...@cisco.com>
Cc: Andrew Theurer <atheu...@redhat.com>; Douglas Shakshober <dsh...@redhat.com>; Damjan Marion (damarion) <damar...@cisco.com>; Bill Michalowski <bmich...@redhat.com>; vpp-dev <vpp-dev@lists.fd.io>; Rashid Khan <rk...@redhat.com>; kris...@redhat.com
Subject: Re: [vpp-dev] updated ovs vs. vpp results for 0.002% and 0% loss

On 10/25/2016 11:19 AM, Edward Warnicke wrote:
Pierre,

Do you have a ticket requesting an update of the Jenkins qemu so we can get
your patch unblocked?
+1


Ed

On Tue, Oct 25, 2016 at 12:14 AM, Pierre Pfister (ppfister)
<ppfis...@cisco.com> wrote:
Hello,

For now, the multi-queue patch is still stuck in Gerrit because Jenkins is
running an old, buggy qemu version...
I made some measurements on vhost:
https://wiki.fd.io/images/c/cc/FD.io_mini-summit_916_Vhost_Performance_and_Optimization.pptx

I see you tried different combinations with and without mergeable descriptors.
Did you do the same with 'indirect descriptors'? They have been supported by
VPP since September or so.
The issue with the zillion ways a buffer may be forwarded is that we only know
which modes are enabled or disabled, but we never know exactly what is
happening for real.
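
One way to take some of the guesswork out of which modes actually get
negotiated is to set the virtio feature bits explicitly on the qemu device
rather than relying on defaults. A minimal sketch, assuming a vhost-user setup
like the ones discussed here; the socket path, ids and the elided options are
illustrative, not a complete or verified command line:

  # force the feature bits so the negotiated modes are unambiguous:
  # mergeable RX buffers off, indirect descriptors on
  qemu-system-x86_64 ... \
    -chardev socket,id=char0,path=/tmp/vhost-user1.sock \
    -netdev vhost-user,id=net0,chardev=char0,queues=1 \
    -device virtio-net-pci,netdev=net0,mrg_rxbuf=off,indirect_desc=on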

Using indirect descriptors, I got VPP to 10 Mpps with 0% loss (5 Mpps each
way), and the setup was stricter than yours, as VPP had only 2 threads on the
same core.
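
For reference, that thread placement is the kind of thing set in the cpu
section of VPP's startup.conf. A minimal sketch; the core numbers are
illustrative, with 4 and 32 assumed to be hyperthread siblings:

  cpu {
    main-core 2             # main thread on its own core
    corelist-workers 4,32   # two worker threads sharing one physical core
  }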

You may also want to try 'chrt -r' on your worker processes; it improves the
real-time scheduling properties.
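
A minimal sketch of what that can look like, assuming the process shows up as
the stock vpp binary; the priority value is illustrative, and in practice you
would apply it only to the polling worker threads:

  # give every thread of the vpp process SCHED_RR priority 80
  for tid in /proc/$(pidof vpp)/task/*; do
    chrt -r -p 80 "${tid##*/}"
  done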

Thanks,

- Pierre




On 25 Oct 2016, at 06:36, Jerome Tollet (jtollet)
<jtol...@cisco.com> wrote:

+ Pierre Pfister (ppfister), who ran a lot of benchmarks for VPP vhost-user

From: <vpp-dev-boun...@lists.fd.io> on behalf of Thomas F Herbert <therb...@redhat.com>
Date: Monday, 24 October 2016 at 21:32
To: "kris...@redhat.com" <kris...@redhat.com>, Andrew Theurer <atheu...@redhat.com>, Franck Baudin <fbau...@redhat.com>, Rashid Khan <rk...@redhat.com>, Bill Michalowski <bmich...@redhat.com>, Billy McFall <bmcf...@redhat.com>, Douglas Shakshober <dsh...@redhat.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>, "Damjan Marion (damarion)" <damar...@cisco.com>
Subject: Re: [vpp-dev] updated ovs vs. vpp results for 0.002% and 0% loss


+Maciek Konstantynowicz CSIT (mkonstan)

+vpp-dev

+Damjan Marion (damarion)

Karl, Thanks!

Your results seem roughly consistent with VPP's CSIT vhost testing for 16.09,
but for broader visibility I am including some people on the VPP team: Damjan,
who is working on multi-queue (I see that some performance-related vhost
patches that might help have been merged since 16.09), and Maciek, who works
on the CSIT project and has done the VPP testing.

I want to open up the discussion with respect to the following:

1. Optimizing for maximum vhost performance with VPP, including vhost-user multi-queue.

2. Comparison with the CSIT results for vhost (two CSIT links below).

3. Statistics.

4. Tuning suggestions.

Following are some CSIT results:

Compiled 16.09 results for vhost-user:
https://wiki.fd.io/view/CSIT/VPP-16.09_Test_Report#VM_vhost-user_Throughput_Measurements

Latest CSIT output from the top of master (16.12-rc0):

https://jenkins.fd.io/view/csit/job/csit-vpp-verify-perf-master-nightly-all/1085/console

--Tom

On 10/21/2016 04:06 PM, Karl Rister wrote:

Hi All,

Below are updated performance results for OVS and VPP on our new
Broadwell testbed. I've tried to include all the relevant details; let
me know if I have forgotten anything of interest to you.

Karl


Processor: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell)
Environment: RT + Hyperthreading (see [1] for details on KVM-RT)
Kernel: 3.10.0-510.rt56.415.el7.x86_64
Tuned: 2.7.1-3.el7

/proc/cmdline:
<...> default_hugepagesz=1G iommu=pt intel_iommu=on isolcpus=4-55
nohz=on nohz_full=4-55 rcu_nocbs=4-55 intel_pstate=disable nosoftlockup



Versions:
- OVS: openvswitch-2.5.0-10.git20160727.el7fdb + BZ fix [2]
- VPP: v16.09

NUMA node 0 CPU sibling pairs:
- (0,28)(2,30)(4,32)(6,34)(8,36)(10,38)(12,40)(14,42)(16,44)(18,46)
  (20,48)(22,50)(24,52)(26,54)

Host PMD Assignment:
- dpdk0 = CPU 6
- vhost-user1 = CPU 34
- dpdk1 = CPU 8
- vhost-user2 = CPU 36

Guest CPU Assignment:
- Emulator = CPU 20
- VCPU 0 (Housekeeping) = CPU 22
- VCPU 1 (PMD) = CPU 24
- VCPU 2 (PMD) = CPU 26



Configuration Details:
- OVS: custom OpenFlow rules direct packets similarly to VPP L2 xconnect
  (see the sketch after this list)
- VPP: L2 xconnect
- DPDK v16.07.0 testpmd in guest
- SCHED_FIFO priority 95 applied to all PMD threads (OVS/VPP/testpmd)
- SCHED_FIFO priority 1 applied to Guest VCPUs used for PMDs
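
A note on the first item: below is a hedged sketch of port-to-port OpenFlow
rules that approximate an L2 xconnect; the bridge name and port numbers are
illustrative, not Karl's actual configuration.

  # cross-connect NIC port 1 <-> vhost-user port 3, NIC port 2 <-> vhost-user port 4
  ovs-ofctl del-flows br0
  ovs-ofctl add-flow br0 in_port=1,actions=output:3
  ovs-ofctl add-flow br0 in_port=3,actions=output:1
  ovs-ofctl add-flow br0 in_port=2,actions=output:4
  ovs-ofctl add-flow br0 in_port=4,actions=output:2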



Test Parameters:
- 64B packet size
- L2 forwarding test
  - All tests are bidirectional PVP (physical<->virtual<->physical)
  - Packets enter on a NIC port and are forwarded to the guest
  - Inside the guest, received packets are sent back out in the opposite
    direction (see the testpmd sketch after this list)
- Binary search starting at line rate (14.88 Mpps each way)
- 10 minute search duration
- A 2 hour validation run follows each passing 10 minute search run
  - If validation fails, the search continues
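
The guest-side forwarding described above is what testpmd's io forward mode
does; a minimal sketch of such an invocation, with illustrative lcore and
memory values rather than Karl's exact command line:

  # DPDK testpmd: blindly forward packets between the two virtio ports
  testpmd -l 1,2,3 -n 4 --socket-mem 1024 -- \
    --nb-cores=2 --forward-mode=io --auto-start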



Mergeable Buffers Disabled:
- OVS:
  - 0.002% Loss: 11.5216 Mpps bidirectional (5.7608 Mpps each way)
  - 0% Loss: 11.5216 Mpps bidirectional (5.7608 Mpps each way)
- VPP:
  - 0.002% Loss: 7.5537 Mpps bidirectional (3.7769 Mpps each way)
  - 0% Loss: 5.2971 Mpps bidirectional (2.6486 Mpps each way)



Mergeable Buffers Enabled:
- OVS:
  - 0.002% Loss: 6.5626 Mpps bidirectional (3.2813 Mpps each way)
  - 0% Loss: 6.3622 Mpps bidirectional (3.1811 Mpps each way)
- VPP:
  - 0.002% Loss: 7.8134 Mpps bidirectional (3.9067 Mpps each way)
  - 0% Loss: 5.1029 Mpps bidirectional (2.5515 Mpps each way)



Mergeable Buffers Disabled + VPP no-multi-seg:
- VPP:
  - 0.002% Loss: 8.0654 Mpps bidirectional (4.0327 Mpps each way)
  - 0% Loss: 5.6442 Mpps bidirectional (2.8221 Mpps each way)
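
For anyone reproducing the no-multi-seg case: it corresponds to the dpdk
section of VPP's startup.conf. A minimal sketch:

  dpdk {
    no-multi-seg    # single-segment buffers only; no multi-segment mbufs
  }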



The details of these results (including latency metrics and links to the
raw data) are available at [3].

[1]: https://virt-wiki.lab.eng.brq.redhat.com/KVM/RealTime
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1344787
[3]: https://docs.google.com/a/redhat.com/spreadsheets/d/1K6zDVgZYPJL-7EsIYMBIZCn65NAkVL_GtkBrAnAdXao/edit?usp=sharing




--
Thomas F Herbert
SDN Group, Office of Technology
Red Hat

_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev