On Fri, Jul 16, 2021 at 7:30 PM Stokes, Ian <ian.sto...@intel.com> wrote:
>
> > On 16/07/2021 17:21, David Marchand wrote:
> > > Users complained that per rxq pmd usage was confusing: summing those
> > > values per pmd would never reach 100% even if increasing traffic load
> > > beyond pmd capacity.
> > >
> > > This is because the dpif-netdev/pmd-rxq-show command only reports "pure"
> > > rxq cycles while some cycles are used in the pmd mainloop and add up to
> > > the total pmd load.
> > >
> > > dpif-netdev/pmd-stats-show does report per pmd load usage.
> > > This load is measured since the last dpif-netdev/pmd-stats-clear call.
> > > On the other hand, the per rxq pmd usage reflects the pmd load on a 10s
> > > sliding window which makes it non trivial to correlate.
> > >
> > > Gather per pmd busy cycles with the same periodicity and report the
> > > difference as overhead in dpif-netdev/pmd-rxq-show so that we have all
> > > info in a single command.
> > >
> > > Example:
> > > $ ovs-appctl dpif-netdev/pmd-rxq-show
> > > pmd thread numa_id 1 core_id 3:
> > >   isolated : true
> > >   port: dpdk0    queue-id: 0 (enabled)   pmd usage: 90 %
> > >   overhead:  4 %
> > > pmd thread numa_id 1 core_id 5:
> > >   isolated : false
> > >   port: vhost0   queue-id: 0 (enabled)   pmd usage:  0 %
> > >   port: vhost1   queue-id: 0 (enabled)   pmd usage: 93 %
> > >   port: vhost2   queue-id: 0 (enabled)   pmd usage:  0 %
> > >   port: vhost6   queue-id: 0 (enabled)   pmd usage:  0 %
> > >   overhead:  6 %
> > > pmd thread numa_id 1 core_id 31:
> > >   isolated : true
> > >   port: dpdk1    queue-id: 0 (enabled)   pmd usage: 86 %
> > >   overhead:  4 %
> > > pmd thread numa_id 1 core_id 33:
> > >   isolated : false
> > >   port: vhost3   queue-id: 0 (enabled)   pmd usage:  0 %
> > >   port: vhost4   queue-id: 0 (enabled)   pmd usage:  0 %
> > >   port: vhost5   queue-id: 0 (enabled)   pmd usage: 92 %
> > >   port: vhost7   queue-id: 0 (enabled)   pmd usage:  0 %
> > >   overhead:  7 %
> > >
> > > Signed-off-by: David Marchand <david.march...@redhat.com>
> > > ---
> > > Changes since v2:
> > > - rebased on master, dynamically allocating added stats array to avoid
> > >   exposing internal dpif-netdev array size,
> > > - fixed UT on FreeBSD,
> > > - rebased on top of Kevin series to ease merging wrt UT update,
> > > - GHA result: https://github.com/david-marchand/ovs/runs/3087888172
> >
> > Changes lgtm. UTs passing (thanks for the rebase for the new ones I
> > added). GHA passing. checkpatch passing. I didn't re-test as there's no
> > logic changes.
> >
> > Acked-by: Kevin Traynor <ktray...@redhat.com>
>
> Thanks for the patch David & thanks for reviewing Kevin. LGTM and tested ok.
>
> Applied to master.
Cool, thanks Ian.

--
David Marchand
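
For readers who want to relate the new "overhead" line to the existing per-rxq
numbers, below is a minimal C sketch of the arithmetic described in the commit
message above. It is not the actual dpif-netdev code; all names
(pmd_total_cycles, pmd_busy_cycles, rxq_cycles) and the sample numbers are
hypothetical stand-ins for the cycle counters OVS maintains over its 10-second
measurement window.

/* Illustrative sketch only, not the actual dpif-netdev implementation.
 * pmd_total_cycles: all cycles (busy + idle) in the measurement window.
 * pmd_busy_cycles:  cycles the pmd spent busy in that window.
 * rxq_cycles[i]:    cycles attributed to polling/processing rxq i. */
#include <stdint.h>
#include <stdio.h>

static void
print_rxq_usage(uint64_t pmd_total_cycles, uint64_t pmd_busy_cycles,
                const uint64_t *rxq_cycles, size_t n_rxqs)
{
    uint64_t attributed = 0;
    size_t i;

    for (i = 0; i < n_rxqs; i++) {
        /* Per-rxq usage: this queue's cycles relative to the whole window. */
        unsigned usage = pmd_total_cycles
                         ? (unsigned) (rxq_cycles[i] * 100 / pmd_total_cycles)
                         : 0;

        printf("  queue-id: %2zu  pmd usage: %2u %%\n", i, usage);
        attributed += rxq_cycles[i];
    }

    /* Overhead: busy cycles not attributed to any rxq, i.e. the pmd mainloop
     * cost.  This is what previously kept the per-rxq numbers from summing up
     * to the pmd load reported by pmd-stats-show. */
    uint64_t extra = pmd_busy_cycles > attributed
                     ? pmd_busy_cycles - attributed : 0;
    unsigned overhead = pmd_total_cycles
                        ? (unsigned) (extra * 100 / pmd_total_cycles) : 0;

    printf("  overhead: %2u %%\n", overhead);
}

int
main(void)
{
    /* Made-up numbers: a pmd busy for 97% of the window, with 90%
     * attributable to its single rxq; prints usage 90 % and overhead 7 %. */
    uint64_t rxqs[] = { 90 };

    print_rxq_usage(100, 97, rxqs, 1);
    return 0;
}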