On 08.04.2019 16:44, David Marchand wrote:
> Hello Ilya,
> 
> On Mon, Apr 8, 2019 at 10:27 AM Ilya Maximets <i.maxim...@samsung.com> wrote:
> 
>     On 04.04.2019 22:49, David Marchand wrote:
>     > We tried to lower the number of rebalances but we don't have a
>     > satisfying solution at the moment, so this patch rebalances on each
>     > update.
> 
>     Hi.
> 
>     Triggering the reconfiguration on each vring state change is a bad thing.
>     This could be abused by the guest to break the host networking by infinite
>     disabling/enabling queues. Each reconfiguration leads to removing ports
>     from the PMD port caches and their reloads. On rescheduling all the ports
> 
> 
> I'd say the reconfiguration itself is not wanted here.
> Only rebalancing the queues would be enough.

As you correctly mentioned in the commit message, there will be no real port
configuration changes.

By 'reconfiguration' I mean the datapath reconfiguration; 'rebalancing' is
one of its stages.
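
To make that distinction concrete, here is a minimal sketch (purely
illustrative, not OVS code; the function names are invented for this
example) of what the heavyweight path does on every vring state change
today, with rebalancing being just one of its stages:

/* Purely illustrative model -- not OVS code. */

struct pmd { int id; };                    /* stand-in for a PMD thread   */
struct rxq { struct pmd *pmd; };           /* rx queue pinned to one PMD  */

/* "Rebalancing": only reassign rx queues to PMD threads. */
static void
rxq_rebalance(struct rxq *rxqs, int n_rxqs, struct pmd *pmds, int n_pmds)
{
    for (int i = 0; i < n_rxqs; i++) {
        rxqs[i].pmd = &pmds[i % n_pmds];   /* round-robin for the sketch  */
    }
}

/* "Datapath reconfiguration": the heavyweight path that a vring state
 * change triggers today.  Rebalancing is only one of its stages. */
static void
datapath_reconfigure(struct rxq *rxqs, int n_rxqs,
                     struct pmd *pmds, int n_pmds)
{
    /* 1. remove ports from the PMD port caches and reload the PMDs     */
    /* 2. apply port/queue configuration changes (none in this case)    */
    rxq_rebalance(rxqs, n_rxqs, pmds, n_pmds);   /* 3. rebalance        */
    /* 4. reload the PMDs; EMC/SMC/dpcls entries for the moved queues
     *    are effectively lost                                          */
}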

> 
> 
>     could be moved to different PMD threads, resulting in EMC/SMC/dpcls
>     invalidation and subsequent upcalls/packet reorderings.
> 
> 
> I agree that rebalancing does trigger EMC/SMC/dpcls invalidation when moving 
> queues.
> However, EMC/SMC/dpcls are per pmd specific, where would we have packet 
> reordering ?

An Rx queue could be rescheduled to a different PMD thread --> new packets
will go out via a different Tx queue. It's unlikely, but which packets get
sent first will depend on the device/driver. The main issue is that this
happens to other ports as well, not only to the port we're trying to
reconfigure.
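
Roughly, because each PMD thread transmits on its own Tx queue of the output
port, moving an Rx queue between PMDs also changes which Tx queue its traffic
leaves on. A toy illustration (not OVS code; the mapping below is a
simplification of how a per-thread Tx queue id is chosen):

#include <stdio.h>

struct pmd { int thread_idx; };

/* Simplified stand-in for per-PMD tx queue selection on an output port. */
static int
tx_qid_for_pmd(const struct pmd *pmd, int n_txq)
{
    return pmd->thread_idx % n_txq;
}

int
main(void)
{
    struct pmd pmd_a = { .thread_idx = 0 };
    struct pmd pmd_b = { .thread_idx = 1 };
    int n_txq = 2;

    /* Before rescheduling the Rx queue, flow F is polled by PMD A. */
    printf("before: tx queue %d\n", tx_qid_for_pmd(&pmd_a, n_txq));
    /* After rescheduling, the same flow is polled by PMD B, so its
     * packets leave on a different tx queue.  Packets still sitting in
     * the old tx queue and new packets in the other one are drained by
     * the device/driver in an order we do not control. */
    printf("after:  tx queue %d\n", tx_qid_for_pmd(&pmd_b, n_txq));
    return 0;
}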

> 
> 
> 
>     The same issue was discussed previously while looking at the possibility of
>     vhost-pmd integration (with some test results):
>     https://mail.openvswitch.org/pipermail/ovs-dev/2016-August/320430.html
> 
> 
> Thanks for the link, I will test this.
> 
> 
> 
>     One more reference:
>     7f5f2bd0ce43 ("netdev-dpdk: Avoid reconfiguration on reconnection of same 
> vhost device.")
> 
> 
> Yes, I saw this patch.
> Are we safe against guest drivers/applications that play with 
> VIRTIO_NET_F_MQ, swapping it continuously ?

Good point. I didn't test that, but it looks like we're not safe here.
The kernel and DPDK drivers have F_MQ enabled by default, so it would require
some changes/explicit disabling. But I agree that this could produce issues
if someone does that.

We could probably handle this with a 'max seen qp_num' approach, but I'm not
a fan of it. We need to figure out how to do this correctly.
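
Something along these lines is what I mean by the 'max seen qp_num' approach
(just a sketch to show the idea, not a proposed patch; the structure and
field names are invented):

#include <stdbool.h>

/* Sketch of a "max seen qp_num" policy -- names are invented. */
struct vhost_qp_state {
    int cur_qp_num;       /* queue pairs currently advertised by the guest */
    int max_seen_qp_num;  /* high-water mark since the device appeared     */
    int requested_n_rxq;  /* what we would ask the datapath to configure   */
};

/* Returns true only when a reconfiguration is actually needed. */
static bool
update_qp_num(struct vhost_qp_state *st, int new_qp_num)
{
    st->cur_qp_num = new_qp_num;
    if (new_qp_num <= st->max_seen_qp_num) {
        /* A guest shrinking its queue count (e.g. toggling
         * VIRTIO_NET_F_MQ) never triggers a reconfiguration. */
        return false;
    }
    st->max_seen_qp_num = new_qp_num;
    st->requested_n_rxq = new_qp_num;
    return true;   /* grow only --> at most a few reconfigurations */
}

The obvious downside is that the datapath keeps queues configured that the
guest has abandoned, which is exactly why I'm not a fan of it.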

In general, I think we should not allow cases where the guest is able to
manipulate the host configuration.

> 
> 
> 
> 
>     Anyway, do you have some numbers for how much time a PMD thread spends
>     polling disabled queues? What performance improvement are you able to
>     achieve by avoiding that?
> 
> 
> With a simple pvp setup of mine.
> 1c/2t poll two physical ports.
> 1c/2t poll four vhost ports with 16 queues each.
>   Only one queue is enabled on each virtio device attached by the guest.
>   The first two virtio devices are bound to the virtio kmod.
>   The last two virtio devices are bound to vfio-pci and used to forward 
> incoming traffic with testpmd.
> 
> The forwarding zeroloss rate goes from 5.2Mpps (polling all 64 vhost queues) 
> to 6.2Mpps (polling only the 4 enabled vhost queues).

That's interesting. However, this doesn't look like a realistic scenario.
In practice you'll need many more PMD threads to handle so many queues.
If you add more threads, a zero-loss test could show even worse results
if one of the idle VMs periodically changes its number of queues. Periodic
latency spikes will cause queue overruns and subsequent packet drops on
hot Rx queues. This could be partially solved by allowing n_rxq to grow only.
However, I'd be happy to have a different solution that does not hide the
number of queues from the datapath.
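
For example (only a rough sketch of the direction I'd prefer, not a
worked-out proposal; the 'enabled' flag and the helper are hypothetical),
the datapath could keep all negotiated queues visible and merely skip the
disabled ones cheaply in the PMD poll loop, with no reconfiguration
involved:

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical per-rxq state; 'enabled' would be flipped from the vhost
 * vring state change callback, without any datapath reconfiguration. */
struct polled_rxq {
    atomic_bool enabled;
    /* ... queue handle, stats, ... */
};

/* PMD poll loop fragment: a disabled queue costs one relaxed load. */
static int
poll_enabled_rxqs(struct polled_rxq *rxqs, int n_rxqs)
{
    int rx_total = 0;

    for (int i = 0; i < n_rxqs; i++) {
        if (!atomic_load_explicit(&rxqs[i].enabled, memory_order_relaxed)) {
            continue;   /* skip disabled queue, no rx burst call */
        }
        /* rx_total += rx_burst(&rxqs[i], ...); */
    }
    return rx_total;
}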

> -- 
> David Marchand