Thanks for the patch; I think this moves in the right direction.  I like how this patch removes "real_n_txq": as you pointed out, it was confusing and, as proven here, unnecessary.
I don't like very much that the netdev implementation decides to create
max_tx_queue_id + 1 queues.  I still think the request should come from the
datapath with netdev_dpdk_set_tx_multiq().  How about this?

* For phy devices netdev_dpdk_set_tx_multiq() stays as it is:
  requested_n_txq comes only from the datapath.

* For vhost devices netdev_dpdk_set_tx_multiq() becomes a no-op, because it
  doesn't matter how many queues the datapath requests, we're locking on
  every transmission anyway: requested_n_txq comes only from the
  new_device() callback.  (A rough sketch of what I mean is below.)
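To make the second point concrete, here is a minimal sketch of the split I
have in mind.  It assumes the existing netdev_dpdk_cast(), dev->mutex,
requested_n_txq and netdev_request_reconfigure() helpers/fields, and the
names (in particular the vhost variant) are only illustrative, not an exact
diff against your patch:

    /* Phy devices: unchanged, the tx queue request keeps coming from the
     * datapath and triggers a reconfiguration when it changes. */
    static int
    netdev_dpdk_set_tx_multiq(struct netdev *netdev, unsigned int n_txq)
    {
        struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);

        ovs_mutex_lock(&dev->mutex);
        if (dev->requested_n_txq != n_txq) {
            dev->requested_n_txq = n_txq;
            netdev_request_reconfigure(netdev);
        }
        ovs_mutex_unlock(&dev->mutex);

        return 0;
    }

    /* vhost devices: ignore the datapath request.  We take the tx lock on
     * every transmission anyway, and requested_n_txq is set only from the
     * new_device() callback. */
    static int
    netdev_dpdk_vhost_set_tx_multiq(struct netdev *netdev OVS_UNUSED,
                                    unsigned int n_txq OVS_UNUSED)
    {
        return 0;
    }

The vhost class would then point its set_tx_multiq hook at the no-op variant
(or simply leave it NULL, if that turns out to be cleaner), while the phy
class keeps the current behavior.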
Other than this, I tested the patch and it appears to work, at least the
automatic assignment of the number of queues from qemu, so thanks!

On 07/07/2016 06:58, "Ilya Maximets" <[email protected]> wrote:

>Currently, there are a few inconsistencies between the dpif-netdev
>and netdev layers:
>
> * dpif-netdev can't know the exact number of tx queues
>   allocated inside the netdev.
>   This leads to constant mapping of queue-ids to 'real' ones.

Now n_txq is always the real number of transmission queues in the device.
I think this is an improvement.

>
> * dpif-netdev is able to change the number of tx queues while
>   it knows nothing about the real hardware or the number of queues
>   allocated in the VM.
>   This leads to complications in the reconfiguration of vhost-user
>   ports, because setting 'n_txq' from different sources
>   (dpif-netdev and the 'new_device()' call) requires additional
>   synchronization between these two layers.

I suggested above a way to avoid this synchronization problem while
maintaining the netdev_dpdk_set_tx_multiq() call.

>
>Also: We are able to configure 'n_rxq' for vhost-user devices, but
>      there is only one sane number of rx queues which must be used and
>      configured manually (the number of queues allocated in QEMU).
>
>This patch moves all configuration of queues to the netdev layer and
>disables configuration of 'n_rxq' for vhost devices.
>
>Configuration of rx and tx queues is now automatically applied from the
>connected virtio device.  The standard reconfiguration mechanism is used
>to apply these changes.
>
>The number of tx queues is set to 'n_cores + 1' by default for physical
>ports, and the old 'needs_locking' logic is preserved.
>
>For dummy-pmd ports, a new undocumented option, 'n_txq', is introduced to
>configure the number of tx queues.
>
>Ex.:
>  ovs-vsctl set interface dummy-pmd0 options:n_txq=32
>
>Signed-off-by: Ilya Maximets <[email protected]>
>---
> INSTALL.DPDK-ADVANCED.md |  26 +++-----
> NEWS                     |   2 +
> lib/dpif-netdev.c        |  31 ++-------
> lib/netdev-bsd.c         |   1 -
> lib/netdev-dpdk.c        | 162 +++++++++++++++++++----------------------------
> lib/netdev-dummy.c       |  31 ++-------
> lib/netdev-linux.c       |   1 -
> lib/netdev-provider.h    |  16 -----
> lib/netdev-vport.c       |   1 -
> lib/netdev.c             |  30 ---------
> lib/netdev.h             |   1 -
> vswitchd/vswitch.xml     |   3 +-
> 12 files changed, 90 insertions(+), 215 deletions(-)

[...]

>
>diff --git a/vswitchd/vswitch.xml b/vswitchd/vswitch.xml
>index 072fef4..a32f4ef 100644
>--- a/vswitchd/vswitch.xml
>+++ b/vswitchd/vswitch.xml
>@@ -2346,12 +2346,13 @@
>         Only PMD netdevs support these options.
>       </p>
>
>-      <column name="options" key="n_rxqs"
>+      <column name="options" key="n_rxq"

I'm embarrassed, I didn't notice this pretty obvious documentation mistake
for a long time.  Thanks for fixing it!

>              type='{"type": "integer", "minInteger": 1}'>
>        <p>
>          Specifies the maximum number of rx queues to be created for PMD
>          netdev.  If not specified or specified to 0, one rx queue will
>          be created by default.
>+         Not supported by vHost interfaces.

Maybe "DPDK vHost" instead of "vHost"?

>        </p>
>      </column>
>    </group>
>--
>2.7.4
>

_______________________________________________
dev mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/dev
