Hi, Daniele.
Thanks for the review.

On 13.07.2016 04:15, Daniele Di Proietto wrote:
> Thanks for the patch.
> 
> This is not a complete review, but I have some preliminary comments.
> 
> If I understand correctly 'port_mutex' is converted to rwlock because
> we want the pmd threads in dpif_netdev_xps_get_tx_qid() to be able to
> grab it concurrently.  I think that we can add a pointer from 'struct
> tx_port' to 'struct dp_netdev_port' and access that without locking.
> As long as a port is in a pmd thread tx_port cache it cannot be
> deleted from the datapath.  This way we can avoid the rwlock.

Yes. Thank you for the suggestion. This greatly simplifies the patch set;
it became almost two times smaller.
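
To make sure we mean the same thing, the layout would be roughly as
follows (names approximate, just a sketch, not the actual OVS
definitions):

/* The cached tx_port keeps a plain pointer back to the datapath port.
 * No rwlock is needed because the port cannot be removed from the
 * datapath while it stays in a pmd thread's tx cache. */
struct dp_netdev_port;               /* Datapath port, owned by dp_netdev. */

struct tx_port {
    struct dp_netdev_port *port;     /* Borrowed, valid while cached. */
    int qid;                         /* Tx queue id used by this pmd. */
};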

> 'last_cycles' is only used to monitor the performance of the pmd
> threads and it is always 0 if we compile without DPDK.  Perhaps
> we can add a 'now' parameter to dp_netdev_execute_actions(),
> pass it from packet_batch_per_flow_execute() and use that instead.

Thanks, I've implemented this in v4.
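
For reference, the shape of the change is roughly the following
(prototypes simplified, not the exact OVS signatures): the caller takes
the timestamp once and passes it down, so nothing depends on the
DPDK-only 'last_cycles' counter anymore.

struct dp_netdev_pmd_thread;
struct dp_packet_batch;

static void
dp_netdev_execute_actions(struct dp_netdev_pmd_thread *pmd,
                          struct dp_packet_batch *batch, long long now)
{
    /* ... use 'now' wherever 'last_cycles' was used before ... */
    (void) pmd; (void) batch; (void) now;
}

static void
packet_batch_per_flow_execute(struct dp_packet_batch *batch,
                              struct dp_netdev_pmd_thread *pmd,
                              long long now)
{
    dp_netdev_execute_actions(pmd, batch, now);
}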

> Maybe we can improve this in the future, but with this patch
> dpif-netdev calls netdev_send() taking into account n_txq, which
> is the real number of queues.  Perhaps txq_needs_locking for
> phy devices should be stored in dpif-netdev and passed to every
> invocation of netdev_send()?

Yes, I had this idea, but decided to implement it later because it
doesn't bring any real benefit to the XPS patch set.
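
If we do it later, I imagine something along these lines (purely
hypothetical prototype, the real netdev_send() differs): dpif-netdev
would remember whether the device has fewer tx queues than threads and
pass that decision down on every send.

#include <stdbool.h>

struct netdev;
struct dp_packet_batch;

/* Imaginary extended send call: 'concurrent_txq' tells the netdev
 * whether other threads may use the same tx queue, so it knows
 * whether it has to lock. */
int netdev_send_with_hint(struct netdev *netdev, int qid,
                          struct dp_packet_batch *batch,
                          bool concurrent_txq);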

> Finally, have you thought about avoiding txq_used_mutex and
> using some form of atomic_compare_exchange() on the number
> of users, perhaps?  I'm not sure it's better than the
> mutex, I just wanted to throw this here, in case someone
> comes up with a good idea.

I thought about an atomic solution, but it would lead to retries or a
non-optimal txq distribution. So, I chose a simple mutex instead of a
complex solution with atomics. If this place becomes a bottleneck, we
could replace the loop with a more efficient data structure (a heap?) or
come up with an atomics-based scheme in the future.
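
Just to illustrate why I went with the mutex, the atomic variant would
look roughly like this (purely hypothetical, not code from the patch).
Picking the least used queue and claiming it are two separate steps, so
the compare-exchange can fail and force a retry, or a thread can end up
on a queue that is no longer the least used one:

#include <stdatomic.h>

#define MAX_TXQS 64

static atomic_int txq_users[MAX_TXQS];

static int
xps_pick_txq_atomically(int n_txq)
{
    for (;;) {
        int best = 0;
        int best_users = atomic_load(&txq_users[0]);

        /* Step 1: find the queue with the fewest users right now. */
        for (int i = 1; i < n_txq; i++) {
            int users = atomic_load(&txq_users[i]);
            if (users < best_users) {
                best = i;
                best_users = users;
            }
        }

        /* Step 2: try to claim it.  Another thread may have bumped the
         * counter in the meantime, in which case we rescan. */
        if (atomic_compare_exchange_weak(&txq_users[best], &best_users,
                                         best_users + 1)) {
            return best;
        }
    }
}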

Best regards, Ilya Maximets.