> -----Original Message-----
> From: Ilya Maximets [mailto:[email protected]]
> Sent: Wednesday, March 9, 2016 3:39 PM
> To: Wang, Zhihong <[email protected]>; [email protected]
> Cc: Flavio Leitner <[email protected]>; Traynor, Kevin
> <[email protected]>;
> Dyasly Sergey <[email protected]>
> Subject: Re: vhost-user invalid txqid cause discard of packets
>
> OK. Finally I got it.
>
> The rx queues of the dpdk0 port are not well distributed
> between the pmd threads.
>
> > # ./ovs/utilities/ovs-appctl dpif-netdev/pmd-rxq-show
> > pmd thread numa_id 0 core_id 13:
> > port: vhost-user1 queue-id: 1
> > port: dpdk0 queue-id: 3
> > pmd thread numa_id 0 core_id 14:
> > port: vhost-user1 queue-id: 2
> > pmd thread numa_id 0 core_id 16:
> > port: dpdk0 queue-id: 0
> > pmd thread numa_id 0 core_id 17:
> > port: dpdk0 queue-id: 1
> > pmd thread numa_id 0 core_id 12:
> > port: vhost-user1 queue-id: 0
> > port: dpdk0 queue-id: 2
> > pmd thread numa_id 0 core_id 15:
> > port: vhost-user1 queue-id: 3
> > ------------------------------------------------------
>
> As we can see above, the dpdk0 port is polled by the threads on
> cores 12, 13, 16 and 17.
> By design of dpif-netdev, there is only one TX queue-id assigned
> to each pmd thread. These queue-ids are sequential, similar to
> core-ids, and a thread will send packets to the queue with exactly
> this queue-id regardless of the port.
>
> In our case:
> pmd thread on core 12 will send packets to tx queue 0
> pmd thread on core 13 will send packets to tx queue 1
> ...
> pmd thread on core 17 will send packets to tx queue 5
>
> So, for the dpdk0 port:
> core 12 --> TX queue-id 0
> core 13 --> TX queue-id 1
> core 16 --> TX queue-id 4
> core 17 --> TX queue-id 5
>
> After truncation in netdev-dpdk (qid modulo the port's 4 TX queues):
> core 12 --> TX queue-id 0 % 4 == 0
> core 13 --> TX queue-id 1 % 4 == 1
> core 16 --> TX queue-id 4 % 4 == 0
> core 17 --> TX queue-id 5 % 4 == 1
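>
> For illustration, the same arithmetic as a tiny standalone program;
> the variable names are mine, not the actual dpif-netdev fields:
>
> #include <stdio.h>
>
> /* Illustration only: the TX queue-id is sequential per pmd core
>  * (core 12 gets qid 0), then truncated by the port's real number
>  * of TX queues. */
> int
> main(void)
> {
>     int cores[] = { 12, 13, 16, 17 };  /* cores polling dpdk0 */
>     int n_txq = 4;                     /* TX queues of dpdk0 */
>
>     for (int i = 0; i < 4; i++) {
>         int qid = cores[i] - 12;       /* sequential, like core-ids */
>         printf("core %d --> TX queue-id %d %% %d == %d\n",
>                cores[i], qid, n_txq, qid % n_txq);
>     }
>     return 0;
> }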
>
> As a result, only 2 of the 4 queues are used.
> This is not good behaviour. Thanks for reporting.
> I'll try to fix the rx queue distribution in dpif-netdev.
>
> Best regards, Ilya Maximets.
>
> P.S. There will be no packet loss at low speeds, only a 2x
> performance drop.
Yeah, it seems a better algorithm will be needed.
I also see a behavior which I think will lead to packet loss:
In the source code, qid is calculated at runtime in
__netdev_dpdk_vhost_send():

    qid = vhost_dev->tx_q[qid % vhost_dev->real_n_txq].map;
With 8 cores:
    vhost txq: 4, 5, 6, 7 (become 0, 1, 2, 3)
With 6 cores:
    vhost txq: 0, 1, 4, 5 (4 and 5 become -1 after the qid
    calculation at runtime)
And qid == -1 will lead to:
    if (OVS_UNLIKELY(!is_vhost_running(virtio_dev) || qid == -1)) {
        rte_spinlock_lock(&vhost_dev->stats_lock);
        vhost_dev->stats.tx_dropped += cnt;
        rte_spinlock_unlock(&vhost_dev->stats_lock);
        goto out;
    }
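
Here is a minimal standalone sketch of the effect I mean. The struct
layout, the array size and the map values are simplified assumptions
made for illustration (real_n_txq is set to 8 purely to reproduce the
-1 lookups above); they are not the real netdev-dpdk definitions:

#include <stdio.h>

/* Simplified stand-ins for the netdev-dpdk structures. */
struct txq { int map; };

struct vhost_dev {
    struct txq tx_q[8];
    int real_n_txq;
    unsigned long long tx_dropped;
};

static void
vhost_send(struct vhost_dev *dev, int qid, int cnt)
{
    qid = dev->tx_q[qid % dev->real_n_txq].map;
    if (qid == -1) {
        /* Same effect as the branch quoted above: every packet
         * aimed at an unmapped queue is counted as dropped. */
        dev->tx_dropped += cnt;
        return;
    }
    printf("qid %d: %d packets sent\n", qid, cnt);
}

int
main(void)
{
    struct vhost_dev dev = {
        /* Assumed map: 4 queues enabled by the guest, the rest -1. */
        .tx_q = { {0}, {1}, {2}, {3}, {-1}, {-1}, {-1}, {-1} },
        .real_n_txq = 8,
    };
    int qids[] = { 0, 1, 4, 5 };       /* qids from the 6-core case */

    for (int i = 0; i < 4; i++) {
        vhost_send(&dev, qids[i], 32);
    }
    printf("tx_dropped: %llu\n", dev.tx_dropped);
    return 0;
}

With this mapping, the packets sent on qid 4 and qid 5 are silently
accounted as tx_dropped, which is the loss I expect.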
Could you please check if you see this behavior?