On Thu, May 28, 2020 at 05:35:55PM +0200, Cornelia Huck wrote:
> On Wed, 27 May 2020 11:29:21 +0100
> Stefan Hajnoczi <stefa...@redhat.com> wrote:
> 
> > Multi-queue devices achieve the best performance when each vCPU has a
> > dedicated queue. This ensures that virtqueue used notifications are
> > handled on the same vCPU that submitted virtqueue buffers.  When another
> > vCPU handles the notification, an IPI will be necessary to wake the
> > submitting vCPU, and this incurs a performance overhead.
> > 
> > Provide a helper function that virtio-pci devices will use in later
> > patches to automatically select the optimal number of queues.
> > 
> > Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
> > ---
> >  hw/virtio/virtio-pci.h | 9 +++++++++
> >  hw/virtio/virtio-pci.c | 7 +++++++
> >  2 files changed, 16 insertions(+)
> 
> That looks like a good idea, since the policy can be easily tweaked in
> one place later.
> 
> For ccw, I don't see a good way to arrive at an optimal number of
> queues. Is there something we should do for mmio? If yes, should this
> be a callback in VirtioBusClass?

I looked at this, but virtio-pci devices need to go num_queues ->
num_vectors -> .realize() in that order, which makes it hard to
introduce a meaningful VirtioBusClass method. (The problem is that
some devices automatically calculate the number of PCI MSI-X vectors
from the number of queues, but that must happen before .realize() and
involves PCI-specific qdev properties.)
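
To illustrate, the virtio-blk-pci .realize() path sizes the MSI-X
vectors from the queue count before realizing the generic device,
roughly like this (simplified sketch, not the verbatim code):

  static void virtio_blk_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
  {
      VirtIOBlkPCI *dev = VIRTIO_BLK_PCI(vpci_dev);
      DeviceState *vdev = DEVICE(&dev->vdev);
      VirtIOBlkConf *conf = &dev->vdev.conf;

      /* One vector per virtqueue plus one for config interrupts */
      if (vpci_dev->nvectors == DEV_NVECTORS_UNSPECIFIED) {
          vpci_dev->nvectors = conf->num_queues + 1;
      }

      qdev_realize(vdev, BUS(&vpci_dev->bus), errp);
  }

So num_queues has to be final before the transport-level realize runs
and sizes the MSI-X BAR.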

Trying to go through a common interface for all transports doesn't
simplify things here.
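
For reference, the policy itself is tiny. Something along these lines
(simplified sketch; see the patch for the exact code and naming):

  /* Pick one queue per vCPU, leaving room for fixed queues (e.g. a
   * config or control queue) within the transport's VIRTIO_QUEUE_MAX
   * limit. Uses current_machine from "hw/boards.h" and MIN() from
   * "qemu/osdep.h".
   */
  unsigned virtio_pci_optimal_num_queues(unsigned fixed_queues)
  {
      return MIN(current_machine->smp.cpus, VIRTIO_QUEUE_MAX - fixed_queues);
  }

Keeping the policy in one helper is what makes it easy to tweak later,
as you say.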

Stefan
