On 26.06.2017 00:52, Bodireddy, Bhanuprakash wrote:
>>> +
>>> +/* Flush the txq if there are any packets available.
>>> + * dynamic_txqs/concurrent_txq is disabled for vHost User ports as
>>> + * 'OVS_VHOST_MAX_QUEUE_NUM' txqs are preallocated.
>>> + */
>>
>> This comment is completely untrue. You may ignore 'concurrent_txq'
>> because you *must* lock the queue in any case because of dynamic txq
>> remapping inside netdev-dpdk. You must take the spinlock for the
>> 'dev->tx_q[qid % netdev->n_txq].map' before sending packets.
> 
> Thanks for catching this; the lock should be taken before flushing the
> queue. Below is the new logic with spinlocks.
> 
> /* Flush the txq if there are any packets available. */
> static int
> netdev_dpdk_vhost_txq_flush(struct netdev *netdev, int qid,
>                             bool concurrent_txq OVS_UNUSED)
> {
>     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
>     struct dpdk_tx_queue *txq;
> 
>     qid = dev->tx_q[qid % netdev->n_txq].map;
> 
>     txq = &dev->tx_q[qid];
>     if (OVS_LIKELY(txq->vhost_pkt_cnt)) {
>         rte_spinlock_lock(&dev->tx_q[qid].tx_lock);
>         netdev_dpdk_vhost_tx_burst(dev, qid);
>         rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
>     }
> 
>     return 0;
> }
> 
>>
>> In the current implementation you're able to call send and flush simultaneously
>> for the same queue from different threads because 'flush' doesn't care about
>> queue remapping.
>>
>> See '__netdev_dpdk_vhost_send' and 'netdev_dpdk_remap_txqs' for detail.
>>
>> Additionally, the flushing logic will be broken in case of txq remapping
>> because you may have a different underlying queue each time you try to send
>> or flush.
> 
> I remember you raised this point earlier. To handle this case,
> 'last_used_qid' was introduced in tx_port. With this we can track any
> change in qid and make sure the packets are flushed on the old queue.
> This logic is in patch 3/6 of this series.
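
If I read it right, the idea in patch 3/6 is roughly the sketch below
('last_used_qid' is the field from your patch; the helper name, the
netdev_txq_flush() call and the -1 "not used yet" value are my assumptions
and may not match the actual code):

/* Illustrative sketch only, not the code from patch 3/6. */
static inline void
tx_port_flush_on_qid_change(struct tx_port *p, int cur_qid)
{
    if (p->last_used_qid != -1 && p->last_used_qid != cur_qid) {
        /* The PMD switched to a different tx qid (e.g. XPS reassignment):
         * flush packets still buffered on the previously used queue. */
        netdev_txq_flush(p->port->netdev, p->last_used_qid,
                         p->port->dynamic_txqs);
    }
    p->last_used_qid = cur_qid;
}

That only handles qid changes visible at the dpif-netdev (XPS) level.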

I'm talking about txq remapping inside netdev-dpdk not about XPS.
You're trying to flush 'qid = tx_q[...].map', but the 'map' value can be
changed at any time because of enabling/disabling of vrings inside the
guest. Refer to the 'vring_state_changed()' callback that triggers
'netdev_dpdk_remap_txqs()', which I already mentioned.
It's actually the reason why we're using unconditional locking for
vhost-user ports, ignoring the 'concurrent_txq' value.
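
Something that at least matches the locking in __netdev_dpdk_vhost_send()
would look roughly like the sketch below ('vhost_pkt_cnt' and
netdev_dpdk_vhost_tx_burst() are from your patch, the rest is only an
illustration, and it still doesn't address packets left buffered under an
old 'map' value after a remap):

/* Sketch only.  Re-read and validate the mapped qid and take the tx_lock
 * unconditionally, because 'map' may change at any time via
 * vring_state_changed() -> netdev_dpdk_remap_txqs(). */
static int
netdev_dpdk_vhost_txq_flush(struct netdev *netdev, int qid,
                            bool concurrent_txq OVS_UNUSED)
{
    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);

    qid = dev->tx_q[qid % netdev->n_txq].map;
    if (OVS_UNLIKELY(qid < 0)) {
        /* Queue is disabled or not mapped yet; nothing to flush. */
        return 0;
    }

    rte_spinlock_lock(&dev->tx_q[qid].tx_lock);
    if (dev->tx_q[qid].vhost_pkt_cnt) {
        netdev_dpdk_vhost_tx_burst(dev, qid);
    }
    rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);

    return 0;
}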

>> Have you ever tested this with multiqueue vhost?
>> With disabling/enabling queues inside the guest?
> 
> I did basic sanity testing with vhost multiqueue to verify throughput and to
> check that the flush logic works at a low rate (1 packet each sent to the VMs)
> on multiple VMs.
> When you say queue 'enable/disable' in the guest, are you referring to using
> 'ethtool -L <iface> combined <channel no>'?
> If so, I did test this by configuring 5 rxqs (DPDK and vhost-user ports),
> changing the channel numbers, and verifying the flushing scenarios again with
> the testpmd app. I didn't test with kernel forwarding here.
> 
> Regards,
> Bhanuprakash.
> 