On Thu, Jan 30, 2020 at 01:29:16AM +0100, Paolo Bonzini wrote:
> On 29/01/20 16:44, Stefan Hajnoczi wrote:
> > On Mon, Jan 27, 2020 at 02:10:31PM +0100, Cornelia Huck wrote:
> >> On Fri, 24 Jan 2020 10:01:57 +0000
> >> Stefan Hajnoczi <stefa...@redhat.com> wrote:
> >>> @@ -47,10 +48,15 @@ static void vhost_scsi_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
> >>>  {
> >>>      VHostSCSIPCI *dev = VHOST_SCSI_PCI(vpci_dev);
> >>>      DeviceState *vdev = DEVICE(&dev->vdev);
> >>> -    VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(vdev);
> >>> +    VirtIOSCSIConf *conf = &dev->vdev.parent_obj.parent_obj.conf;
> >>> +
> >>> +    /* 1:1 vq to vcpu mapping is ideal because it avoids IPIs */
> >>> +    if (conf->num_queues == VIRTIO_SCSI_AUTO_NUM_QUEUES) {
> >>> +        conf->num_queues = current_machine->smp.cpus;
> >> This now maps the request vqs 1:1 to the vcpus. What about the fixed
> >> vqs? If they don't really matter, amend the comment to explain that?
> > The fixed vqs don't matter.  They are typically not involved in the data
> > path, only the control path where performance doesn't matter.
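
For illustration, the amended comment Cornelia asked for could read
something like this (my sketch, not the posted patch's wording):

    /*
     * 1:1 vq to vcpu mapping is ideal because it avoids IPIs.  Only
     * the request vqs scale with the vcpu count; the fixed control
     * and event vqs are off the data path, so their number doesn't
     * matter for performance.
     */
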
> 
> Should we put a limit on the number of vCPUs?  For anything above ~128
> the guest is probably not going to be disk or network bound.

Michael Tsirkin pointed out there's a hard limit of VIRTIO_QUEUE_MAX
(1024).  At the very least we need to stay under that limit.
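
For example, a minimal clamp could look like this (a sketch only; the
"- 2" assumes the two fixed control/event vqs also count against
VIRTIO_QUEUE_MAX, which I haven't verified):

    if (conf->num_queues == VIRTIO_SCSI_AUTO_NUM_QUEUES) {
        /* 1:1 vq to vcpu mapping, capped below VIRTIO_QUEUE_MAX */
        conf->num_queues = MIN(current_machine->smp.cpus,
                               VIRTIO_QUEUE_MAX - 2);
    }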

Should the guest have >128 virtqueues?  Each virtqueue requires guest
RAM and 2 host eventfds (an ioeventfd for guest->host kicks and an
irqfd for host->guest interrupts).  Eventually these resource
requirements will become a scalability problem, but how do we choose a
hard limit, and what happens to guest performance above that limit?
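
Back-of-the-envelope, from the numbers above: at the 1024-vq ceiling a
single device would need 1024 * 2 = 2048 host eventfds, already above
the common default RLIMIT_NOFILE soft limit of 1024, so per-process fd
limits would bite before any architectural limit does.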

Stefan
