The problem has actually gotten worse since we got rid of the
dispatcher thread: now each thread has its own channel per port.

I wonder if the right approach is to simply ditch per-port fairness in
the case where mmap netlink is enabled, i.e., have one channel per
thread and call it a day.

Anyways, I don't have a lot of context on this thread, so take
everything above with a grain of salt.

Ethan

On Wed, Apr 23, 2014 at 1:05 PM, Zoltan Kiss <zoltan.k...@citrix.com> wrote:
> Hi,
>
> I would like to ask: what's the status of enabling Netlink MMAP in
> userspace? I'm interested in seeing this progress, but digging through
> the mail archive I found it ran into scalability issues:
>
> http://openvswitch.org/pipermail/dev/2013-December/034546.html
>
> It also can't handle frags properly at the moment:
>
> http://openvswitch.org/pipermail/dev/2014-March/037337.html
>
> I was thinking about this scalability issue, and maybe we shouldn't
> stick to the 16 KB frame size. In most cases the packets we actually
> send up are small ones; in the case of TCP, the handshake packets are
> less than 100 bytes. For the rest, we can call genlmsg_new_unicast()
> with the full packet size, so it will fall back to non-mmaped
> communication. That would solve the frag-handling problem as well.
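>
> Roughly what I mean on the kernel side (only a sketch of the upcall
> path; upcall_info, skb and info come from the surrounding function,
> and upcall_msg_size() stands in for whatever computes the full
> genetlink payload length):
>
> /* (inside queue_userspace_packet(), roughly) */
> struct sk_buff *user_skb;
> size_t len;
>
> /* Ask for the full message size instead of padding everything to a
>  * 16 KB frame: netlink's mmap allocator uses a ring frame when the
>  * request fits and falls back to an ordinary skb when it doesn't,
>  * which would also cover skbs with frags. */
> len = upcall_msg_size(upcall_info, skb->len);
> user_skb = genlmsg_new_unicast(len, &info, GFP_ATOMIC);
> if (!user_skb)
>     return -ENOMEM;
>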
> Another approach to keeping the frame size low is to pass references to
> the frags, pointing outside the ring buffer to the actual pages (which
> would need to be mapped for userspace). I don't know how feasible that
> would be, but huge linear buffers also have to be sliced up into frags.
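>
> To illustrate that second idea, each frame in the ring could carry
> something like this (purely hypothetical layout):
>
> /* Reference to frag data living outside the ring, in a page that is
>  * separately mapped into userspace. */
> struct ovs_frag_ref {
>     __u64 page_token;   /* identifies the user-mapped page */
>     __u32 offset;       /* start of the frag within that page */
>     __u32 len;          /* length of the frag */
> };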
>
> Regards,
>
> Zoli
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev