On 29/01/20 16:44, Stefan Hajnoczi wrote:
> On Mon, Jan 27, 2020 at 02:10:31PM +0100, Cornelia Huck wrote:
>> On Fri, 24 Jan 2020 10:01:57 +0000
>> Stefan Hajnoczi <stefa...@redhat.com> wrote:
>>> @@ -47,10 +48,15 @@ static void vhost_scsi_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
>>>  {
>>>      VHostSCSIPCI *dev = VHOST_SCSI_PCI(vpci_dev);
>>>      DeviceState *vdev = DEVICE(&dev->vdev);
>>> -    VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(vdev);
>>> +    VirtIOSCSIConf *conf = &dev->vdev.parent_obj.parent_obj.conf;
>>> +
>>> +    /* 1:1 vq to vcpu mapping is ideal because it avoids IPIs */
>>> +    if (conf->num_queues == VIRTIO_SCSI_AUTO_NUM_QUEUES) {
>>> +        conf->num_queues = current_machine->smp.cpus;
>> This now maps the request vqs 1:1 to the vcpus. What about the fixed
>> vqs? If they don't really matter, amend the comment to explain that?
> The fixed vqs don't matter.  They are typically not involved in the data
> path, only the control path where performance doesn't matter.

Should we put a cap on the number of queues we derive from the vCPU count?
For anything above ~128 vCPUs the guest is probably not going to be disk-
or network-bound anyway.
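
If we do cap it, a minimal sketch on top of Stefan's hunk could clamp the
auto value; VHOST_SCSI_MAX_AUTO_NUM_QUEUES is a placeholder name, not an
existing symbol, and 128 is just the ballpark figure above:

    /* placeholder ceiling; pick whatever benchmarking suggests */
    #define VHOST_SCSI_MAX_AUTO_NUM_QUEUES 128

    if (conf->num_queues == VIRTIO_SCSI_AUTO_NUM_QUEUES) {
        conf->num_queues = MIN(current_machine->smp.cpus,
                               VHOST_SCSI_MAX_AUTO_NUM_QUEUES);
    }

That keeps the 1:1 mapping for small guests while bounding the vq count
(and the memory behind it) on huge ones.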

Paolo
