On Wed, Nov 06, 2019 at 08:49:53AM +0100, Eelco Chaudron wrote:
> 
> 
> On 5 Nov 2019, at 23:16, William Tu wrote:
> 
> >The RFC patch enables shared umem support. It requires kernel and
> >libbpf changes, which I will post in another thread. I tested with
> >multiple afxdp ports using skb mode.  For example:
> >  ovs-vsctl -- set interface afxdp-p0 options:n_rxq=1 type="afxdp" options:xdpmode=skb
> >  ovs-vsctl -- set interface afxdp-p1 options:n_rxq=1 type="afxdp" options:xdpmode=skb
> >These two ports will share one umem instead of two.
> >
> >Note that once a shared umem is created with a specific mode (e.g.
> >XDP_COPY), a netdev that shares this umem cannot change its mode.  So
> >I'm thinking about using just one shared umem for all skb-mode netdevs
> >and keeping dedicated umems for the drv-mode netdevs. Or should we
> >create one umem per mode, so the drv-mode netdevs also share one umem?
> 
> 
> Hi William,
> 
> I did not go through the entire patch, but wasn't Magnus hinting at not
> sharing the umem rings, only the buffers they point to?
> 
> This way you do not need any locking for the ring operations, only when
> getting buffers. I guess this is more like the mbuf implementation in
> DPDK.
Hi Eelco,

You're right, thanks! I was following the old sample code in the kernel,
xdpsock_user.c, which shares a single cq and fq among multiple xsks.
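
For context, a minimal sketch of that old pattern in libbpf terms
(illustrative only; buffer allocation and error handling are elided,
and the names are placeholders, not actual OVS code):

    #include <bpf/xsk.h>

    static struct xsk_umem *umem;
    static struct xsk_ring_prod fq;   /* the one shared fill ring */
    static struct xsk_ring_cons cq;   /* the one shared completion ring */

    static int
    shared_umem_init(void *bufs, __u64 size)
    {
        /* Every xsk later created on 'umem' pushes/pulls through this
         * same fq/cq pair, which is what forces locking around all
         * fill/completion ring operations. */
        return xsk_umem__create(&umem, bufs, size, &fq, &cq, NULL);
    }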

Based on Magnus's suggestion, we could do:
"
you can register the same umem
area multiple times (creating multiple umem handles and multiple fqs
and cqs) to be able to support xsk sockets that have different queue
ids, but the same umem area. In both cases you need a mempool that can
handle multiple threads.
"

Magnus has replied in another email thread; I will wait for his patch
set.

Regards,
William

