Re: [PATCH RFC 4/4] xen, blkback: add support for multiple block rings

2014-09-12 Thread David Vrabel
On 12/09/14 00:45, Arianna Avanzini wrote:
> On Fri, Aug 22, 2014 at 02:15:58PM +0100, David Vrabel wrote:
>> What
>> behaviour do we want when a domain is migrated to a host with different
>> storage?
>>
> 
> This first patchset does not include support to migrate a multi-queue-capable
> domU to a host with different storage. The second version, which I am posting
> now, includes it. The behavior I have implemented as of now lets the frontend
> use the same number of rings if the backend is still multi-queue-capable
> after the migration; otherwise it exposes only one ring.

It would be preferable to allow the number of queues to be renegotiated
on reconnection.  This is what netfront does (although netfront has it
easier, since it can safely discard any queued packets, whereas blkfront
cannot).

If the number of queues is fixed, then a maximum number of queues must be
part of the ABI specification, i.e., all backends must support at least
N queues (even if this is more than their preferred number).

The backend can still hint what its preferred number of queues is, but
this can never be more than the maximum.
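
For illustration, a rough sketch of what such a renegotiation might look
like on the frontend side.  The XenStore key names are assumptions
borrowed from the netfront/netback convention, not part of any agreed
blkif ABI, and the function is illustrative rather than code from this
series:

#include <xen/xenbus.h>
#include <linux/cpumask.h>
#include <linux/kernel.h>

/*
 * Sketch only: renegotiate the ring count on every (re)connection so
 * that a migrated domU adapts to whatever the new backend can serve.
 * XenStore key names are assumed, modelled on xen-netfront/netback.
 */
static unsigned int blkfront_negotiate_queues(struct xenbus_device *dev)
{
        unsigned int backend_max, nr_queues;

        /* How many rings can this (possibly new) backend serve at most? */
        if (xenbus_scanf(XBT_NIL, dev->otherend,
                         "multi-queue-max-queues", "%u", &backend_max) != 1)
                backend_max = 1;        /* legacy backend: single ring */

        /* Never request more rings than there are vCPUs to service them. */
        nr_queues = min_t(unsigned int, backend_max, num_online_cpus());

        /* Tell the backend how many rings we will actually set up. */
        if (xenbus_printf(XBT_NIL, dev->nodename,
                          "multi-queue-num-queues", "%u", nr_queues))
                nr_queues = 1;          /* fall back to a single ring */

        return nr_queues;
}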

David


Re: [PATCH RFC 4/4] xen, blkback: add support for multiple block rings

2014-09-11 Thread Bob Liu

On 09/12/2014 07:45 AM, Arianna Avanzini wrote:
> On Fri, Aug 22, 2014 at 02:15:58PM +0100, David Vrabel wrote:
>> On 22/08/14 12:20, Arianna Avanzini wrote:
>>> This commit adds to xen-blkback the support to retrieve the block
>>> layer API being used and the number of available hardware queues,
>>> in case the block layer is using the multi-queue API. This commit
>>> also lets the driver advertise the number of available hardware
>>> queues to the frontend via XenStore, therefore allowing for actual
>>> multiple I/O rings to be used.
>>
>> Does it make sense for the number of queues to depend on the number
>> of queues available in the underlying block device?
> 
> Thank you for raising that point. It probably is not the best solution.
> 
> Bob Liu suggested to have the number of I/O rings depend on the number
> of vCPUs in the driver domain. Konrad Wilk suggested to compute the
> number of I/O rings according to the following formula to preserve the
> possibility to explicitly define the number of hardware queues to be
> exposed to the frontend:
> what_backend_exposes = some_module_parameter ?:
>     min(nr_online_cpus(), nr_hardware_queues());
> io_rings = min(nr_online_cpus(), what_backend_exposes);
> 
> (Please do correct me if I misunderstood your point)

Since the xen-netfront/xen-netback drivers have already implemented
multi-queue support, I'd prefer that we negotiate the number of queues
the same way the net drivers do.
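
As a rough sketch of what that could look like on the backend side (the
key names below follow the netback convention and are only an assumption
here, not something this patchset defines):

#include <xen/xenbus.h>

/* Sketch only: advertise the maximum number of rings we can serve. */
static int blkback_advertise_queues(struct xenbus_device *dev,
                                    unsigned int max_queues)
{
        return xenbus_printf(XBT_NIL, dev->nodename,
                             "multi-queue-max-queues", "%u", max_queues);
}

/* Sketch only: read back how many rings the frontend decided to use. */
static unsigned int blkback_requested_queues(struct xenbus_device *dev)
{
        unsigned int requested;

        if (xenbus_scanf(XBT_NIL, dev->otherend,
                         "multi-queue-num-queues", "%u", &requested) != 1)
                requested = 1;          /* legacy frontend: single ring */

        return requested;
}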

Thanks,
-Bob


Re: [PATCH RFC 4/4] xen, blkback: add support for multiple block rings

2014-09-11 Thread Arianna Avanzini
On Fri, Aug 22, 2014 at 02:15:58PM +0100, David Vrabel wrote:
> On 22/08/14 12:20, Arianna Avanzini wrote:
> > This commit adds to xen-blkback the support to retrieve the block
> > layer API being used and the number of available hardware queues,
> > in case the block layer is using the multi-queue API. This commit
> > also lets the driver advertise the number of available hardware
> > queues to the frontend via XenStore, therefore allowing for actual
> > multiple I/O rings to be used.
> 
> Does it make sense for the number of queues to depend on the number
> of queues available in the underlying block device?

Thank you for raising that point. It probably is not the best solution.

Bob Liu suggested to have the number of I/O rings depend on the number
of vCPUs in the driver domain. Konrad Wilk suggested to compute the
number of I/O rings according to the following formula to preserve the
possibility to explicitly define the number of hardware queues to be
exposed to the frontend:
what_backend_exposes = some_module_parameter ?:
    min(nr_online_cpus(), nr_hardware_queues());
io_rings = min(nr_online_cpus(), what_backend_exposes);

(Please do correct me if I misunderstood your point)
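
For concreteness, a minimal sketch of that computation (illustrative
only; nr_hardware_queues would come from the multi-queue block layer,
and xen_blkif_max_queues stands in for "some_module_parameter"):

#include <linux/cpumask.h>
#include <linux/kernel.h>

static unsigned int blkback_nr_rings(unsigned int nr_hardware_queues,
                                     unsigned int xen_blkif_max_queues)
{
        unsigned int exposed;

        /* A non-zero module parameter overrides the computed default. */
        exposed = xen_blkif_max_queues ?:
                  min(num_online_cpus(), nr_hardware_queues);

        /* Never expose more rings than there are online CPUs. */
        return min(num_online_cpus(), exposed);
}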

> What
> behaviour do we want when a domain is migrated to a host with different
> storage?
> 

This first patchset does not include support to migrate a multi-queue-capable
domU to a host with different storage. The second version, which I am posting
now, includes it. The behavior I have implemented as of now lets the frontend
use the same number of rings if the backend is still multi-queue-capable
after the migration; otherwise it exposes only one ring.

> Can you split this patch up as well?

Sure, thank you for the comments.

> 
> David


Re: [PATCH RFC 4/4] xen, blkback: add support for multiple block rings

2014-08-22 Thread David Vrabel
On 22/08/14 12:20, Arianna Avanzini wrote:
> This commit adds to xen-blkback the support to retrieve the block
> layer API being used and the number of available hardware queues,
> in case the block layer is using the multi-queue API. This commit
> also lets the driver advertise the number of available hardware
> queues to the frontend via XenStore, therefore allowing for actual
> multiple I/O rings to be used.

Does it make sense for the number of queues to depend on the number of
queues available in the underlying block device?  What
behaviour do we want when a domain is migrated to a host with different
storage?

Can you split this patch up as well?

David