On Sun, 29 Jul 2018 09:00:27 -0700 (PDT) David Miller <da...@davemloft.net> wrote:
> From: Caleb Raitto <caleb.rai...@gmail.com>
> Date: Mon, 23 Jul 2018 16:11:19 -0700
> 
> > From: Caleb Raitto <carai...@google.com>
> > 
> > The driver disables tx napi if it's not certain that completions will
> > be processed affine with tx service.
> > 
> > Its heuristic doesn't account for some scenarios where it is, such as
> > when the queue pair count matches the core but not hyperthread count.
> > 
> > Allow userspace to override the heuristic. This is an alternative
> > solution to that in the linked patch. That added more logic in the
> > kernel for these cases, but the agreement was that this was better
> > left to user control.
> > 
> > Do not expand the existing napi_tx variable to a ternary value,
> > because doing so can break user applications that expect
> > boolean ('Y'/'N') instead of integer output. Add a new param instead.
> > 
> > Link: https://patchwork.ozlabs.org/patch/725249/
> > Acked-by: Willem de Bruijn <will...@google.com>
> > Acked-by: Jon Olson <jonol...@google.com>
> > Signed-off-by: Caleb Raitto <carai...@google.com>
> 
> So I looked into the history surrounding these issues.
> 
> First of all, it always ends up turning out crummy when drivers start
> to set affinities themselves. The worst possible case is to do it
> _conditionally_, and that is exactly what virtio_net is doing.
> 
> From the user's perspective, this provides a really bad experience.
> 
> So if I have a 32-queue device and there are 32 cpus, you'll do all
> the affinity settings, stopping irqbalanced from doing anything,
> right?
> 
> So if I add one more cpu, you'll say "oops, no idea what to do in
> this situation" and not touch the affinities at all?
> 
> That makes no sense at all.
> 
> If the driver is going to set affinities at all, OWN that decision
> and set it all the time to something reasonable.
> 
> Or accept that you shouldn't be touching this stuff in the first
> place and leave the affinities alone.
> 
> Right now we're kinda in a situation where the driver has been
> setting affinities in the ncpus==nqueues cases for some time, so we
> can't stop doing it.
> 
> Which means we have to set them in all cases to make the user
> experience sane again.
> 
> I looked at the linked-to patch again:
> 
> https://patchwork.ozlabs.org/patch/725249/
> 
> And I think the strategy should be made more generic, to get rid of
> the hyperthreading assumptions. I also agree that the "assign to
> first N cpus" logic doesn't make much sense either.
> 
> Just distribute across the available cpus evenly, and be done with
> it. If you have 64 cpus and 32 queues, this assigns queues to every
> other cpu.
> 
> Then we don't need this weird new module parameter.

I wonder if it would be possible to give irqbalanced hints with
irq_set_affinity_hint() instead of doing direct affinity setting?
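
Something like this untested sketch is what I have in mind; vq_irq()
is a made-up stand-in for however the driver would map a queue pair
to its MSI-X interrupt, the rest is stock kernel API:

#include <linux/cpumask.h>
#include <linux/interrupt.h>

/* Spread the queue pairs evenly over the online cpus and record each
 * choice as a hint.  The hint shows up in /proc/irq/<n>/affinity_hint,
 * where irqbalanced can pick it up.
 */
static void virtnet_set_affinity_hints(struct virtnet_info *vi)
{
	int i, cpu;

	for (i = 0; i < vi->max_queue_pairs; i++) {
		/* With 64 cpus and 32 queues this lands on every
		 * other cpu, per the even distribution above.
		 */
		cpu = i * num_online_cpus() / vi->max_queue_pairs;
		irq_set_affinity_hint(vq_irq(vi, i), cpumask_of(cpu));
	}
}

cpumask_local_spread() could replace the manual arithmetic too, and
would keep NUMA locality in the picture.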