On 31.10.2019 20:21, William Tu wrote:
> The patch detects the NUMA node id from the name of the netdev
> by reading '/sys/class/net/<devname>/device/numa_node'.
> If that is not available (e.g. for a virtual device) or any error
> happens, NUMA id 0 is returned.  Currently only the afxdp netdev
> type uses it; other Linux netdev types are left disabled since
> there is no use case for them.
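
(For reference, a minimal sketch of the sysfs lookup described above;
the helper name below is made up and this is not the patch code
itself.)

/* Read .../device/numa_node for a netdev; fall back to node 0 for
 * virtual devices or on any error, as the commit message describes. */
#include <stdio.h>

static int
netdev_sysfs_numa_node(const char *devname)
{
    char path[256];
    FILE *f;
    int node;

    snprintf(path, sizeof path,
             "/sys/class/net/%s/device/numa_node", devname);
    f = fopen(path, "r");
    if (!f) {
        return 0;               /* Virtual device or no sysfs entry. */
    }
    if (fscanf(f, "%d", &node) != 1 || node < 0) {
        node = 0;               /* Kernel reports -1 when unknown. */
    }
    fclose(f);
    return node;
}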

Hi.

This version looks good in general, but I'm a bit concerned about
the global effect it will have.  Let me explain: since this patch
doesn't manage any memory allocations, and umem/pool allocations
happen in the OVS main thread, all the memory will still end up on
the original NUMA node (NUMA 0 in most cases).  In native mode the
umem is locked by the kernel and cannot migrate.  So, without this
patch all devices are polled by threads on NUMA 0 and all the
memory sits on NUMA 0.  The only cross-NUMA access is the device
DMA, which is OK and usually the fastest cross-NUMA scenario.  But
with this patch applied the device will be polled by a thread on
NUMA 1, part of the memory will migrate, creating random
performance spikes, while the locked umem remains on NUMA 0.  In
that case both the DMA and the PMD thread perform cross-NUMA memory
accesses all the time, stressing the QPI and degrading performance
significantly.
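
(Not OVS code, but a quick way to check where a buffer's pages
actually land is to ask the kernel via move_pages() with a NULL node
list; libnuma is required.  Something along these lines:)

#include <numaif.h>      /* move_pages(); link with -lnuma. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    size_t page_size = (size_t) sysconf(_SC_PAGESIZE);
    void *buf = aligned_alloc(page_size, page_size);
    int status = -1;

    memset(buf, 0, page_size);   /* Fault the page in first. */

    /* With 'nodes' == NULL, move_pages() only reports placement. */
    if (move_pages(0, 1, &buf, NULL, &status, 0) == 0) {
        printf("page resides on NUMA node %d\n", status);
    }
    free(buf);
    return 0;
}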

So, the question is: should we merge this now, or wait and merge it
along with correct NUMA-aware memory allocations?
Thoughts?
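
(To illustrate what "correct NUMA-aware memory allocations" could
look like, here is a rough sketch using libnuma's numa_alloc_onnode().
The helper name and the use of libnuma are assumptions on my side,
not a proposal for the actual fix.)

#include <numa.h>        /* numa_alloc_onnode(); link with -lnuma. */
#include <stddef.h>

static void *
alloc_umem_on_node(size_t size, int numa_node)
{
    if (numa_available() < 0) {
        return NULL;     /* No NUMA support; caller falls back. */
    }
    /* Pages are bound to 'numa_node', so a PMD thread pinned to that
     * node and the device DMA both stay local.  Free with
     * numa_free(ptr, size). */
    return numa_alloc_onnode(size, numa_node);
}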

Best regards, Ilya Maximets.