On 03.08.2016 12:21, Loftus, Ciara wrote:
>>
>> I've applied this patch and performed the following test:
>>
>> OVS with 2 VMs connected via vhost-user ports.
>> Each vhost-user port has 4 queues.
>>
>> VM1 executes ping on LOCAL port.
>> In a normal situation the ping results are as follows:
>>
>> 100 packets transmitted, 100 received, 0% packet loss, time 99144ms
>> rtt min/avg/max/mdev = 0.231/0.459/0.888/0.111 ms
>>
>> After that, VM2 starts executing this script:
>>
>> while true; do
>>     ethtool -L eth0 combined 4;
>>     ethtool -L eth0 combined 1;
>> done
>>
>> Now the results of a ping between VM1 and the LOCAL port are:
>>
>> 100 packets transmitted, 100 received, 0% packet loss, time 99116ms
>> rtt min/avg/max/mdev = 5.466/150.327/356.201/85.208 ms
>>
>> The minimum time increased from 0.231 ms to 5.466 ms.
>> The average time increased from 0.459 ms to 150.327 ms (~300 times)!
>>
>> This happens because of constant reconfiguration requests from
>> the 'vring_state_changed_callback()'.
>>
>> As Ciara said, "Previously we could work with only reconfiguring during
>> link status change as we had full information available to us
>> ie. virtio_net->virt_qp_nb. We don't have that any more, so we need to
>> count the queues in OVS now every time we get a vring_change."
>>
>> The test above shows that it is unacceptable for OVS to perform a
>> reconfiguration every time the vring state changes, because it gives
>> a guest user the ability to break normal networking on all ports
>> connected to the same Open vSwitch instance.
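To make the failure mode concrete: each 'ethtool -L' in the guest
toggles the state of every vring, and the host receives one callback
per vring. Below is a rough standalone sketch of that counting logic,
in the style of DPDK's vring_state_changed() callback -- the names and
state are illustrative, not the actual OVS code:
-------------------------------------------------------------
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_QNUM 2          /* rx + tx vring per queue pair */
#define MAX_VRINGS  64

/* Illustrative per-device state; real code would keep this under the
 * device mutex. */
static bool vring_enabled[MAX_VRINGS];
static int  cur_qp_num;

/* Sketch of a per-vring state-change callback.  The queue-pair count
 * has to be re-derived on every call, and every change triggers a
 * reconfiguration request. */
static int
vring_state_changed(int vid, uint16_t queue_id, int enable)
{
    int i, qp_num = 0;

    vring_enabled[queue_id] = enable;
    for (i = 0; i < MAX_VRINGS; i += VIRTIO_QNUM) {
        if (vring_enabled[i]) {
            qp_num++;          /* queue pairs with an enabled rx vring */
        }
    }

    if (qp_num != cur_qp_num) {
        cur_qp_num = qp_num;
        /* In OVS this is where netdev_request_reconfigure() would be
         * called, stopping and reconfiguring the whole port. */
        printf("vid %d: %d queue pair(s), reconfiguring\n", vid, qp_num);
    }
    return 0;
}

int main(void)
{
    uint16_t q;

    /* One pass of the guest's 'ethtool -L eth0 combined 4' followed
     * by 'combined 1': the vrings of queues 1-3 are enabled and then
     * disabled again, and each count change costs a reconfiguration. */
    for (q = VIRTIO_QNUM; q < 4 * VIRTIO_QNUM; q++) {
        vring_state_changed(0, q, 1);
    }
    for (q = VIRTIO_QNUM; q < 4 * VIRTIO_QNUM; q++) {
        vring_state_changed(0, q, 0);
    }
    return 0;
}
-------------------------------------------------------------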
>
> Hi Ilya,
>
> Another thought on this. With the current master branch, isn't the above
> also possible with a script like this:
>
> while true; do
>     echo "0000:00:03.0" > /sys/bus/pci/drivers/virtio-pci/bind
>     echo "0000:00:03.0" > /sys/bus/pci/drivers/virtio-pci/unbind
> done
>
> The bind/unbind calls trigger new_device()/destroy_device(), which in
> turn call reconfigure() each time.
Hmm, yes. You're right.
But this can be easily fixed by the following patch:
-------------------------------------------------------------
diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 41ca91d..b9e72b8 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -2313,10 +2313,14 @@ new_device(int vid)
                 newnode = dev->socket_id;
             }
 
-            dev->requested_socket_id = newnode;
-            dev->requested_n_rxq = qp_num;
-            dev->requested_n_txq = qp_num;
-            netdev_request_reconfigure(&dev->up);
+            if (dev->requested_n_txq != qp_num
+                || dev->requested_n_rxq != qp_num
+                || dev->requested_socket_id != newnode) {
+                dev->requested_socket_id = newnode;
+                dev->requested_n_rxq = qp_num;
+                dev->requested_n_txq = qp_num;
+                netdev_request_reconfigure(&dev->up);
+            }
 
             exists = true;
@@ -2376,9 +2380,6 @@ destroy_device(int vid)
             dev->vid = -1;
             /* Clear tx/rx queue settings. */
             netdev_dpdk_txq_map_clear(dev);
-            dev->requested_n_rxq = NR_QUEUE;
-            dev->requested_n_txq = NR_QUEUE;
-            netdev_request_reconfigure(&dev->up);
 
             netdev_change_seq_changed(&dev->up);
             ovs_mutex_unlock(&dev->mutex);
-------------------------------------------------------------
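The idea is simply to make new_device() idempotent with respect to
repeated connections of the same device. As a standalone sketch of the
guard pattern (illustrative names, not the real netdev_dpdk
structures):
-------------------------------------------------------------
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the relevant netdev_dpdk fields. */
struct dev_state {
    int requested_n_rxq;
    int requested_n_txq;
    int requested_socket_id;
};

/* Request a reconfiguration only if something actually changed.
 * Returns true if one was requested. */
static bool
maybe_request_reconfigure(struct dev_state *dev, int qp_num, int newnode)
{
    if (dev->requested_n_txq != qp_num
        || dev->requested_n_rxq != qp_num
        || dev->requested_socket_id != newnode) {
        dev->requested_socket_id = newnode;
        dev->requested_n_rxq = qp_num;
        dev->requested_n_txq = qp_num;
        return true;           /* netdev_request_reconfigure() here */
    }
    return false;              /* repeated new_device(): no-op */
}

int main(void)
{
    struct dev_state dev = { 1, 1, 0 };

    /* First bind: the queue count changes, so we reconfigure once. */
    printf("%d\n", maybe_request_reconfigure(&dev, 4, 0));   /* 1 */
    /* Rebind with the same parameters: no reconfiguration. */
    printf("%d\n", maybe_request_reconfigure(&dev, 4, 0));   /* 0 */
    return 0;
}
-------------------------------------------------------------
Combined with not resetting requested_n_rxq/requested_n_txq in
destroy_device(), a bind/unbind loop from the guest no longer causes
any reconfiguration after the first one.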
With this change we will not decrease the number of queues on
disconnect, but I think that's not very important.
Thanks for pointing this out.
I'll post the above diff as a separate patch.
Best regards, Ilya Maximets.