Re: [Xen-devel] [PATCH v5 00/10] xen-block: multi hardware-queues/rings support

2015-11-30 Thread Konrad Rzeszutek Wilk
On Thu, Nov 26, 2015 at 03:09:02PM +0800, Bob Liu wrote:
> 
> On 11/26/2015 10:57 AM, Konrad Rzeszutek Wilk wrote:
> > On Thu, Nov 26, 2015 at 10:28:10AM +0800, Bob Liu wrote:
> >>
> >> On 11/26/2015 06:12 AM, Konrad Rzeszutek Wilk wrote:
> >>> On Wed, Nov 25, 2015 at 03:56:03PM -0500, Konrad Rzeszutek Wilk wrote:
>  On Wed, Nov 25, 2015 at 02:25:07PM -0500, Konrad Rzeszutek Wilk wrote:
> >>   xen/blkback: separate ring information out of struct xen_blkif
> >>   xen/blkback: pseudo support for multi hardware queues/rings
> >>   xen/blkback: get the number of hardware queues/rings from blkfront
> >>   xen/blkback: make pool of persistent grants and free pages per-queue
> >
> > OK, got to those as well. I have put them in 'devel/for-jens-4.5' and
> > am going to test them overnight before pushing them out.
> >
> > I see two bugs in the code that we MUST deal with:
> >
> >  - print_stats() is going to show zero values.
> >  - the sysfs code (VBD_SHOW) isn't converted over to fetch data
> >    from all the rings.
> 
>  - kthread_run can't handle the two "name, i" arguments. I see:
> 
>  root  5101 2  0 20:47 ?00:00:00 [blkback.3.xvda-]
>  root  5102 2  0 20:47 ?00:00:00 [blkback.3.xvda-]
> >>>
> >>> And doing save/restore:
> >>>
> >>> xl save  /tmp/A;
> >>> xl restore /tmp/A;
> >>>
> >>> ends up losing the proper state and not getting the ring setup back.
> >>> I see this in the backend:
> >>>
> >>> [ 2719.448600] vbd vbd-22-51712: -1 guest requested 0 queues, exceeding 
> >>> the maximum of 3.
> >>>
> >>> And XenStore agrees:
> >>> tool = ""
> >>>  xenstored = ""
> >>> local = ""
> >>>  domain = ""
> >>>   0 = ""
> >>>domid = "0"
> >>>name = "Domain-0"
> >>>device-model = ""
> >>> 0 = ""
> >>>  state = "running"
> >>>error = ""
> >>> backend = ""
> >>>  vbd = ""
> >>>   2 = ""
> >>>51712 = ""
> >>> error = "-1 guest requested 0 queues, exceeding the maximum of 3."
> >>>
> >>> .. which also leads to a memory leak as xen_blkbk_remove never gets
> >>> called.
> >>
> >> I think that was already fixed by your patch:
> >> [PATCH RFC 2/2] xen/blkback: Free resources if connect_ring failed.
> > 
> > Nope. I get that with or without the patch.
> > 
> 
> The attached patch should fix this issue.

I reworked it a bit.

From 214635bd2d1c331d984a8170be30c7ba82a11fb2 Mon Sep 17 00:00:00 2001
From: Bob Liu 
Date: Wed, 25 Nov 2015 17:52:55 -0500
Subject: [PATCH] xen/blkfront: realloc ring info in blkif_resume

Reallocate the ring info in the resume path, because info->rinfo was freed
in blkif_free() and the 'multi-queue-max-queues' value the backend reports
may have changed.

Signed-off-by: Bob Liu 
Reported-and-Tested-by: Konrad Rzeszutek Wilk 
Signed-off-by: Konrad Rzeszutek Wilk 
---
 drivers/block/xen-blkfront.c | 74 +++-
 1 file changed, 45 insertions(+), 29 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index ef5ce43..4f77d36 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1676,6 +1676,43 @@ again:
return err;
 }
 
+static int negotiate_mq(struct blkfront_info *info)
+{
+   unsigned int backend_max_queues = 0;
+   int err;
+   unsigned int i;
+
+   BUG_ON(info->nr_rings);
+
+   /* Check if backend supports multiple queues. */
+   err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+  "multi-queue-max-queues", "%u", &backend_max_queues);
+   if (err < 0)
+   backend_max_queues = 1;
+
+   info->nr_rings = min(backend_max_queues, xen_blkif_max_queues);
+   /* We need at least one ring. */
+   if (!info->nr_rings)
+   info->nr_rings = 1;
+
+   info->rinfo = kzalloc(sizeof(struct blkfront_ring_info) * info->nr_rings, GFP_KERNEL);
+   if (!info->rinfo) {
+   xenbus_dev_fatal(info->xbdev, -ENOMEM, "allocating ring_info structure");
+   return -ENOMEM;
+   }
+
+   for (i = 0; i < info->nr_rings; i++) {
+   struct blkfront_ring_info *rinfo;
+
+   rinfo = &info->rinfo[i];
+   INIT_LIST_HEAD(&rinfo->indirect_pages);
+   INIT_LIST_HEAD(&rinfo->grants);
+   rinfo->dev_info = info;
+   INIT_WORK(&rinfo->work, blkif_restart_queue);
+   spin_lock_init(&rinfo->ring_lock);
+   }
+   return 0;
+}
 /**
  * Entry point to this code when a new device is created.  Allocate the basic
  * structures and the ring buffer for communication with the backend, and
@@ -1686,9 +1723,7 @@ static int blkfront_probe(struct xenbus_device *dev,
  const struct xenbus_device_id *id)
 {
int err, vdevice;
-   unsigned int r_index;
struct blkfront_info *info;
-   unsigned int 
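
The diff is cut off above; roughly, negotiate_mq() ends up called from both
entry points. A minimal sketch of that wiring: the probe-side call replaces
the inline allocation the hunk above starts to remove, while the resume-side
call follows from the commit message rather than the literal diff, so treat
the error handling here as illustrative:

	/* blkfront_probe(): allocate the per-ring state once at creation. */
	err = negotiate_mq(info);
	if (err) {
		kfree(info);
		return err;
	}

	/* blkfront_resume(): blkif_free() freed info->rinfo, and the
	 * backend's "multi-queue-max-queues" may have changed, so
	 * renegotiate and reallocate before reconnecting. */
	blkif_free(info, info->connected == BLKIF_STATE_CONNECTED);
	err = negotiate_mq(info);
	if (err)
		return err;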

Re: [Xen-devel] [PATCH v5 00/10] xen-block: multi hardware-queues/rings support

2015-11-25 Thread Konrad Rzeszutek Wilk
On Thu, Nov 26, 2015 at 10:28:10AM +0800, Bob Liu wrote:
> 
> On 11/26/2015 06:12 AM, Konrad Rzeszutek Wilk wrote:
> > On Wed, Nov 25, 2015 at 03:56:03PM -0500, Konrad Rzeszutek Wilk wrote:
> >> On Wed, Nov 25, 2015 at 02:25:07PM -0500, Konrad Rzeszutek Wilk wrote:
>    xen/blkback: separate ring information out of struct xen_blkif
>    xen/blkback: pseudo support for multi hardware queues/rings
>    xen/blkback: get the number of hardware queues/rings from blkfront
>    xen/blkback: make pool of persistent grants and free pages per-queue
> >>>
> >>> OK, got to those as well. I have put them in 'devel/for-jens-4.5' and
> >>> am going to test them overnight before pushing them out.
> >>>
> >>> I see two bugs in the code that we MUST deal with:
> >>>
> >>>  - print_stats() is going to show zero values.
> >>>  - the sysfs code (VBD_SHOW) isn't converted over to fetch data
> >>>    from all the rings.
> >>
> >> - kthread_run can't handle the two "name, i" arguments. I see:
> >>
> >> root  5101 2  0 20:47 ?00:00:00 [blkback.3.xvda-]
> >> root  5102 2  0 20:47 ?00:00:00 [blkback.3.xvda-]
> > 
> > And doing save/restore:
> > 
> > xl save  /tmp/A;
> > xl restore /tmp/A;
> > 
> > ends up losing the proper state and not getting the ring setup back.
> > I see this in the backend:
> > 
> > [ 2719.448600] vbd vbd-22-51712: -1 guest requested 0 queues, exceeding the 
> > maximum of 3.
> > 
> > And XenStore agrees:
> > tool = ""
> >  xenstored = ""
> > local = ""
> >  domain = ""
> >   0 = ""
> >domid = "0"
> >name = "Domain-0"
> >device-model = ""
> > 0 = ""
> >  state = "running"
> >error = ""
> > backend = ""
> >  vbd = ""
> >   2 = ""
> >51712 = ""
> > error = "-1 guest requested 0 queues, exceeding the maximum of 3."
> > 
> > .. which also leads to a memory leak as xen_blkbk_remove never gets
> > called.
> 
> I think that was already fixed by your patch:
> [PATCH RFC 2/2] xen/blkback: Free resources if connect_ring failed.

Nope. I get that with or without the patch.

I pushed the patches in
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
#devel/for-jens-4.5 tree. It also has some extra patches that should be
soon going via the x86 tree.

With the xen-blkback compiled with #define DEBUG 1 I see:

[   63.887741] xen-blkback: xen_blkbk_probe 880026a8cc00 1
[   63.894302] xen-blkback: backend_changed 880026a8cc00 1
[   63.895748] xen-blkback: frontend_changed 880026a8cc00 Initialising
[   63.922700] xen-blkback: xen_blkbk_probe 8800269da800 1
[   63.927849] xen-blkback: backend_changed 8800269da800 1
[   63.929117] xen-blkback: Successful creation of handle=ca00 (dom=1)
[   63.930605] xen-blkback: frontend_changed 8800269da800 Initialising
[   64.097161] xen-blkback: backend_changed 880026a8cc00 1
[   64.098992] xen-blkback: Successful creation of handle=1600 (dom=1)
[   64.345913] device vif1.0 entered promiscuous mode
[   64.351469] IPv6: ADDRCONF(NETDEV_UP): vif1.0: link is not ready
[   64.538682] device vif1.0-emu entered promiscuous mode
[   64.546592] switch: port 3(vif1.0-emu) entered forwarding state
[   64.548357] switch: port 3(vif1.0-emu) entered forwarding state
[   79.544475] switch: port 3(vif1.0-emu) entered forwarding state
[   84.090637] switch: port 3(vif1.0-emu) entered disabled state
[   84.091545] device vif1.0-emu left promiscuous mode
[   84.092416] switch: port 3(vif1.0-emu) entered disabled state
[   89.286901] vif vif-1-0 vif1.0: Guest Rx ready
[   89.287921] IPv6: ADDRCONF(NETDEV_CHANGE): vif1.0: link becomes ready
[   89.288943] switch: port 2(vif1.0) entered forwarding state
[   89.289747] switch: port 2(vif1.0) entered forwarding state
[   89.456176] xen-blkback: frontend_changed 880026a8cc00 Closed
[   89.481945] xen-blkback: frontend_changed 8800269da800 Initialised
[   89.482802] xen-blkback: connect_ring /local/domain/1/device/vbd/51712
[   89.484068] xen-blkback: backend/vbd/1/51712: using 2 queues, protocol 2 
(x86_32-abi) persistent grants
[   89.532755] xen-blkback: connect /local/domain/1/device/vbd/51712
[   89.541694] xen_update_blkif_status: name=[blkback.1.xvda-0]
[   89.542667] xen_update_blkif_status: name=[blkback.1.xvda-1]
[   89.561913] xen-blkback: frontend_changed 8800269da800 Connected

.. so here the guest booted and now we are suspending it.

[  104.300579] switch: port 2(vif1.0) entered forwarding state
[  208.057752] xen-blkback: frontend_changed 880026a8cc00 Unknown
[  208.061282] xen-blkback: xen_blkbk_remove 880026a8cc00 1
[  208.081888] xen-blkback: frontend_changed 8800269da800 Unknown
[  208.082759] xen-blkback: xen_blkbk_remove 8800269da800 1
[  208.102745] switch: port 2(vif1.0) entered disabled state
[  208.109089] switch: port 2(vif1.0) entered disabled state
[  208.109934] device vif1.0 left promiscuous mode
[  208.110734] switch: port 2(vif1.0) entered disabled state

We are done 

Re: [Xen-devel] [PATCH v5 00/10] xen-block: multi hardware-queues/rings support

2015-11-25 Thread Bob Liu

On 11/26/2015 06:12 AM, Konrad Rzeszutek Wilk wrote:
> On Wed, Nov 25, 2015 at 03:56:03PM -0500, Konrad Rzeszutek Wilk wrote:
>> On Wed, Nov 25, 2015 at 02:25:07PM -0500, Konrad Rzeszutek Wilk wrote:
   xen/blkback: separate ring information out of struct xen_blkif
   xen/blkback: pseudo support for multi hardware queues/rings
   xen/blkback: get the number of hardware queues/rings from blkfront
   xen/blkback: make pool of persistent grants and free pages per-queue
>>>
>>> OK, got to those as well. I have put them in 'devel/for-jens-4.5' and
>>> am going to test them overnight before pushing them out.
>>>
>>> I see two bugs in the code that we MUST deal with:
>>>
>>>  - print_stats() is going to show zero values.
>>>  - the sysfs code (VBD_SHOW) isn't converted over to fetch data
>>>    from all the rings.
>>
>> - kthread_run can't handle the two "name, i" arguments. I see:
>>
>> root  5101 2  0 20:47 ?00:00:00 [blkback.3.xvda-]
>> root  5102 2  0 20:47 ?00:00:00 [blkback.3.xvda-]
> 
> And doing save/restore:
> 
> xl save  /tmp/A;
> xl restore /tmp/A;
> 
> ends up losing the proper state and not getting the ring setup back.
> I see this in the backend:
> 
> [ 2719.448600] vbd vbd-22-51712: -1 guest requested 0 queues, exceeding the 
> maximum of 3.
> 
> And XenStore agrees:
> tool = ""
>  xenstored = ""
> local = ""
>  domain = ""
>   0 = ""
>domid = "0"
>name = "Domain-0"
>device-model = ""
> 0 = ""
>  state = "running"
>error = ""
> backend = ""
>  vbd = ""
>   2 = ""
>51712 = ""
> error = "-1 guest requested 0 queues, exceeding the maximum of 3."
> 
> .. which also leads to a memory leak as xen_blkbk_remove never gets
> called.

I think that was already fixed by your patch:
[PATCH RFC 2/2] xen/blkback: Free resources if connect_ring failed.

P.S. I didn't see your git tree updated with these patches.

-- 
Regards,
-Bob



Re: [Xen-devel] [PATCH v5 00/10] xen-block: multi hardware-queues/rings support

2015-11-25 Thread Konrad Rzeszutek Wilk
>   xen/blkback: separate ring information out of struct xen_blkif
>   xen/blkback: pseudo support for multi hardware queues/rings
>   xen/blkback: get the number of hardware queues/rings from blkfront
>   xen/blkback: make pool of persistent grants and free pages per-queue

OK, got to those as well. I have put them in 'devel/for-jens-4.5' and
am going to test them overnight before pushing them out.

I see two bugs in the code that we MUST deal with:

 - print_stats() is going to show zero values.
 - the sysfs code (VBD_SHOW) isn't converted over to fetch data
   from all the rings.
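
Both of those boil down to the same thing: the counters now live per ring, so
anything reporting them has to sum across the rings. A minimal sketch of the
aggregation, assuming the per-ring stat fields (st_rd_req and friends) this
series introduces; the helper name and the rings array layout are
illustrative:

	/* Sum one per-ring counter so the VBD_SHOW attributes and
	 * print_stats() report totals for the whole device again. */
	static unsigned long long vbd_sum_rd_req(struct xen_blkif *blkif)
	{
		unsigned long long total = 0;
		unsigned int i;

		for (i = 0; i < blkif->nr_rings; i++)
			total += blkif->rings[i].st_rd_req;
		return total;
	}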

> 
>  drivers/block/xen-blkback/blkback.c | 386 ++-
>  drivers/block/xen-blkback/common.h  |  78 ++--
>  drivers/block/xen-blkback/xenbus.c  | 359 --
>  drivers/block/xen-blkfront.c| 718 
> ++--
>  include/xen/interface/io/blkif.h|  48 +++
>  5 files changed, 971 insertions(+), 618 deletions(-)
> 
> -- 
> 1.8.3.1
> 



Re: [Xen-devel] [PATCH v5 00/10] xen-block: multi hardware-queues/rings support

2015-11-25 Thread Konrad Rzeszutek Wilk
On Wed, Nov 25, 2015 at 03:56:03PM -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Nov 25, 2015 at 02:25:07PM -0500, Konrad Rzeszutek Wilk wrote:
> > >   xen/blkback: separate ring information out of struct xen_blkif
> > >   xen/blkback: pseudo support for multi hardware queues/rings
> > >   xen/blkback: get the number of hardware queues/rings from blkfront
> > >   xen/blkback: make pool of persistent grants and free pages per-queue
> > 
> > OK, got to those as well. I have put them in 'devel/for-jens-4.5' and
> > am going to test them overnight before pushing them out.
> > 
> > I see two bugs in the code that we MUST deal with:
> > 
> >  - print_stats() is going to show zero values.
> >  - the sysfs code (VBD_SHOW) isn't converted over to fetch data
> >    from all the rings.
> 
> - kthread_run can't handle the two "name, i" arguments. I see:
> 
> root  5101 2  0 20:47 ?00:00:00 [blkback.3.xvda-]
> root  5102 2  0 20:47 ?00:00:00 [blkback.3.xvda-]

And doing save/restore:

xl save  /tmp/A;
xl restore /tmp/A;

ends up losing the proper state and not getting the ring setup back.
I see this in the backend:

[ 2719.448600] vbd vbd-22-51712: -1 guest requested 0 queues, exceeding the 
maximum of 3.

And XenStore agrees:
tool = ""
 xenstored = ""
local = ""
 domain = ""
  0 = ""
   domid = "0"
   name = "Domain-0"
   device-model = ""
0 = ""
 state = "running"
   error = ""
backend = ""
 vbd = ""
  2 = ""
   51712 = ""
error = "-1 guest requested 0 queues, exceeding the maximum of 3."

.. which also leads to a memory leak as xen_blkbk_remove never gets
called.
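
That teardown should go along the lines of the RFC mentioned elsewhere in the
thread ("xen/blkback: Free resources if connect_ring failed"): when the ring
handshake fails, the backend must free its own state, since xen_blkbk_remove()
never runs for a device that never finished connecting. A rough sketch of the
idea; the exact call site in frontend_changed() is an assumption:

	err = connect_ring(be);
	if (err) {
		/* Without this the xen_blkif and its rings leak, because
		 * xen_blkbk_remove() is never invoked for this device. */
		xen_blkif_disconnect(be->blkif);
		xen_blkif_put(be->blkif);
		be->blkif = NULL;
	}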
> 
> 
> > 
> > > 
> > >  drivers/block/xen-blkback/blkback.c | 386 ++-
> > >  drivers/block/xen-blkback/common.h  |  78 ++--
> > >  drivers/block/xen-blkback/xenbus.c  | 359 --
> > >  drivers/block/xen-blkfront.c| 718 
> > > ++--
> > >  include/xen/interface/io/blkif.h|  48 +++
> > >  5 files changed, 971 insertions(+), 618 deletions(-)
> > > 
> > > -- 
> > > 1.8.3.1
> > > 



Re: [Xen-devel] [PATCH v5 00/10] xen-block: multi hardware-queues/rings support

2015-11-25 Thread Konrad Rzeszutek Wilk
On Wed, Nov 25, 2015 at 02:25:07PM -0500, Konrad Rzeszutek Wilk wrote:
> >   xen/blkback: separate ring information out of struct xen_blkif
> >   xen/blkback: pseudo support for multi hardware queues/rings
> >   xen/blkback: get the number of hardware queues/rings from blkfront
> >   xen/blkback: make pool of persistent grants and free pages per-queue
> 
> OK, got to those as well. I have put them in 'devel/for-jens-4.5' and
> am going to test them overnight before pushing them out.
> 
> I see two bugs in the code that we MUST deal with:
> 
>  - print_stats() is going to show zero values.
>  - the sysfs code (VBD_SHOW) isn't converted over to fetch data
>    from all the rings.

- kthread_run can't handle the two "name, i" arguments. I see:

root  5101 2  0 20:47 ?00:00:00 [blkback.3.xvda-]
root  5102 2  0 20:47 ?00:00:00 [blkback.3.xvda-]
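
The duplicate names are a comm-length problem: the kernel clips thread names
to TASK_COMM_LEN (16 bytes, 15 visible), so "blkback.3.xvda-0" and
"blkback.3.xvda-1" both truncate to "blkback.3.xvda-". A sketch of the
per-ring naming that has to fit inside that limit (per-ring fields follow the
series; the error handling is illustrative):

	/* Fold the ring index into the kthread name; the whole
	 * "blkback.<domid>.<dev>-<ring>" string must stay under
	 * TASK_COMM_LEN or the suffix gets cut off, as seen above. */
	ring->xenblkd = kthread_run(xen_blkif_schedule, ring,
				    "%s-%d", name, i);
	if (IS_ERR(ring->xenblkd)) {
		err = PTR_ERR(ring->xenblkd);
		ring->xenblkd = NULL;
		xenbus_dev_fatal(blkif->be->dev, err, "start xenblkd");
	}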


> 
> > 
> >  drivers/block/xen-blkback/blkback.c | 386 ++-
> >  drivers/block/xen-blkback/common.h  |  78 ++--
> >  drivers/block/xen-blkback/xenbus.c  | 359 --
> >  drivers/block/xen-blkfront.c| 718 
> > ++--
> >  include/xen/interface/io/blkif.h|  48 +++
> >  5 files changed, 971 insertions(+), 618 deletions(-)
> > 
> > -- 
> > 1.8.3.1
> > 



Re: [Xen-devel] [PATCH v5 00/10] xen-block: multi hardware-queues/rings support

2015-11-25 Thread Bob Liu

On 11/26/2015 10:57 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Nov 26, 2015 at 10:28:10AM +0800, Bob Liu wrote:
>>
>> On 11/26/2015 06:12 AM, Konrad Rzeszutek Wilk wrote:
>>> On Wed, Nov 25, 2015 at 03:56:03PM -0500, Konrad Rzeszutek Wilk wrote:
 On Wed, Nov 25, 2015 at 02:25:07PM -0500, Konrad Rzeszutek Wilk wrote:
>>   xen/blkback: separate ring information out of struct xen_blkif
>>   xen/blkback: pseudo support for multi hardware queues/rings
>>   xen/blkback: get the number of hardware queues/rings from blkfront
>>   xen/blkback: make pool of persistent grants and free pages per-queue
>
> OK, got to those as well. I have put them in 'devel/for-jens-4.5' and
> am going to test them overnight before pushing them out.
>
> I see two bugs in the code that we MUST deal with:
>
>  - print_stats() is going to show zero values.
>  - the sysfs code (VBD_SHOW) isn't converted over to fetch data
>    from all the rings.

 - kthread_run can't handle the two "name, i" arguments. I see:

 root  5101 2  0 20:47 ?00:00:00 [blkback.3.xvda-]
 root  5102 2  0 20:47 ?00:00:00 [blkback.3.xvda-]
>>>
>>> And doing save/restore:
>>>
>>> xl save  /tmp/A;
>>> xl restore /tmp/A;
>>>
> >>> ends up losing the proper state and not getting the ring setup back.
> >>> I see this in the backend:
>>>
>>> [ 2719.448600] vbd vbd-22-51712: -1 guest requested 0 queues, exceeding the 
>>> maximum of 3.
>>>
>>> And XenStore agrees:
>>> tool = ""
>>>  xenstored = ""
>>> local = ""
>>>  domain = ""
>>>   0 = ""
>>>domid = "0"
>>>name = "Domain-0"
>>>device-model = ""
>>> 0 = ""
>>>  state = "running"
>>>error = ""
>>> backend = ""
>>>  vbd = ""
>>>   2 = ""
>>>51712 = ""
>>> error = "-1 guest requested 0 queues, exceeding the maximum of 3."
>>>
>>> .. which also leads to a memory leak as xen_blkbk_remove never gets
>>> called.
>>
>> I think that was already fixed by your patch:
>> [PATCH RFC 2/2] xen/blkback: Free resources if connect_ring failed.
> 
> Nope. I get that with or without the patch.
> 

The attached patch should fix this issue.

-- 
Regards,
-Bob
From f297a05fc27fb0bc9a3ed15407f8cc6ffd5e2a00 Mon Sep 17 00:00:00 2001
From: Bob Liu 
Date: Wed, 25 Nov 2015 14:56:32 -0500
Subject: [PATCH 1/2] xen:blkfront: fix compile error
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Fix this build error:
drivers/block/xen-blkfront.c: In function ‘blkif_free’:
drivers/block/xen-blkfront.c:1234:6: error: ‘struct blkfront_info’ has no
member named ‘ring’
  info->ring = NULL;

Signed-off-by: Bob Liu 
---
 drivers/block/xen-blkfront.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 625604d..ef5ce43 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1231,7 +1231,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 		blkif_free_ring(&info->rinfo[i]);

 	kfree(info->rinfo);
-	info->ring = NULL;
+	info->rinfo = NULL;
 	info->nr_rings = 0;
 }

--
1.8.3.1

From aab0bb1690213e665966ea22b021e0eeaacfc717 Mon Sep 17 00:00:00 2001
From: Bob Liu 
Date: Wed, 25 Nov 2015 17:52:55 -0500
Subject: [PATCH 2/2] xen/blkfront: realloc ring info in blkif_resume

Reallocate the ring info in the resume path, because info->rinfo was freed
in blkif_free() and the 'multi-queue-max-queues' value the backend reports
may have changed.

Signed-off-by: Bob Liu 
---
 drivers/block/xen-blkfront.c | 28 +++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index ef5ce43..9634a65 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1926,12 +1926,38 @@ static int blkif_recover(struct blkfront_info *info)
 static int blkfront_resume(struct xenbus_device *dev)
 {
 	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
-	int err;
+	int err = 0;
+	unsigned int max_queues = 0, r_index;
 
 	dev_dbg(&dev->dev, "blkfront_resume: %s\n", dev->nodename);
 
 	blkif_free(info, info->connected == BLKIF_STATE_CONNECTED);
 
+	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
+			"multi-queue-max-queues", "%u", &max_queues, NULL);
+	if (err)
+		max_queues = 1;
+
+	info->nr_rings = min(max_queues, xen_blkif_max_queues);
+	/* We need at least one ring. */
+	if (!info->nr_rings)
+		info->nr_rings = 1;
+
+	info->rinfo = kzalloc(sizeof(struct blkfront_ring_info) * info->nr_rings, GFP_KERNEL);
+	if (!info->rinfo)
+		return -ENOMEM;
+
+	for (r_index = 0; r_index < info->nr_rings; r_index++) {
+		struct blkfront_ring_info *rinfo;
+
+		rinfo = &info->rinfo[r_index];
+		INIT_LIST_HEAD(&rinfo->indirect_pages);
+		INIT_LIST_HEAD(&rinfo->grants);
+		rinfo->dev_info = info;
+		INIT_WORK(&rinfo->work, blkif_restart_queue);
+		

Re: [Xen-devel] [PATCH v5 00/10] xen-block: multi hardware-queues/rings support

2015-11-16 Thread Konrad Rzeszutek Wilk
On Sat, Nov 14, 2015 at 11:12:09AM +0800, Bob Liu wrote:
> Note: These patches are based on the original work from Arianna's internship
> for GNOME's Outreach Program for Women.
> 
> After switching to the blk-mq API, a guest has more than one (nr_vcpus)
> software request queue associated with each block front. These queues can be
> mapped over several rings (hardware queues) to the backend, making it very
> easy for us to run multiple threads on the backend for a single virtual disk.
> 
> By having different threads issue requests at the same time, the performance
> of the guest can be improved significantly.
> 
> Test was done based on null_blk driver:
> dom0: v4.3-rc7 16vcpus 10GB "modprobe null_blk"

Surely v4.4-rc1?

> domU: v4.3-rc7 16vcpus 10GB

Ditto.

> 
> [test]
> rw=read
> direct=1
> ioengine=libaio
> bs=4k
> time_based
> runtime=30
> filename=/dev/xvdb
> numjobs=16
> iodepth=64
> iodepth_batch=64
> iodepth_batch_complete=64
> group_reporting
> 
> Results:
> iops1: After commit("xen/blkfront: make persistent grants per-queue").
> iops2: After commit("xen/blkback: make persistent grants and free pages pool 
> per-queue").
> 
> Queues:        1    4           8           16
> Iops orig(k):  810  1064        780         700
> Iops1(k):      810  1230(~20%)  1024(~20%)  850(~20%)
> Iops2(k):      810  1410(~35%)  1354(~75%)  1440(~100%)

Holy cow. That is some contention on a lock (iops1 vs iops2). Thank you for
running these numbers.

> 
> With 4 queues after this series we can get ~75% increase in IOPS, and
> performance won't drop when the queue count increases.
> 
> Please find the respective chart in this link:
> https://www.dropbox.com/s/agrcy2pbzbsvmwv/iops.png?dl=0

Thank you for that link.





[Xen-devel] [PATCH v5 00/10] xen-block: multi hardware-queues/rings support

2015-11-13 Thread Bob Liu
Note: These patches are based on the original work from Arianna's internship
for GNOME's Outreach Program for Women.

After switching to the blk-mq API, a guest has more than one (nr_vcpus)
software request queue associated with each block front. These queues can be
mapped over several rings (hardware queues) to the backend, making it very
easy for us to run multiple threads on the backend for a single virtual disk.

By having different threads issue requests at the same time, the performance
of the guest can be improved significantly.
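
To make the queue/ring mapping concrete, here is a minimal sketch of the
frontend's blk-mq setup, sizing nr_hw_queues from the negotiated ring count.
Names such as blkfront_mq_ops, tag_set, rq and BLK_RING_SIZE() follow
xen-blkfront, but treat the block as an illustration rather than the series'
literal diff:

	/* One blk-mq hardware queue per negotiated ring.  blk-mq then maps
	 * the per-CPU software queues onto these hardware queues, so
	 * requests from different vCPUs are issued on different rings. */
	memset(&info->tag_set, 0, sizeof(info->tag_set));
	info->tag_set.ops = &blkfront_mq_ops;
	info->tag_set.nr_hw_queues = info->nr_rings;
	info->tag_set.queue_depth = BLK_RING_SIZE(info);
	info->tag_set.numa_node = NUMA_NO_NODE;
	info->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
	info->tag_set.driver_data = info;

	err = blk_mq_alloc_tag_set(&info->tag_set);
	if (err)
		return err;

	info->rq = blk_mq_init_queue(&info->tag_set);
	if (IS_ERR(info->rq)) {
		blk_mq_free_tag_set(&info->tag_set);
		return PTR_ERR(info->rq);
	}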

Test was done based on null_blk driver:
dom0: v4.3-rc7 16vcpus 10GB "modprobe null_blk"
domU: v4.3-rc7 16vcpus 10GB

[test]
rw=read
direct=1
ioengine=libaio
bs=4k
time_based
runtime=30
filename=/dev/xvdb
numjobs=16
iodepth=64
iodepth_batch=64
iodepth_batch_complete=64
group_reporting

Results:
iops1: After commit("xen/blkfront: make persistent grants per-queue").
iops2: After commit("xen/blkback: make persistent grants and free pages pool 
per-queue").

Queues:        1    4           8           16
Iops orig(k):  810  1064        780         700
Iops1(k):      810  1230(~20%)  1024(~20%)  850(~20%)
Iops2(k):      810  1410(~35%)  1354(~75%)  1440(~100%)

With 4 queues after this series we can get ~75% increase in IOPS, and
performance won't drop when the queue count increases.

Please find the respective chart in this link:
https://www.dropbox.com/s/agrcy2pbzbsvmwv/iops.png?dl=0

---
v5:
 * Rebase to xen/tip.git tags/for-linus-4.4-rc0-tag.
 * Comments from Konrad.

v4:
 * Rebase to v4.3-rc7.
 * Comments from Roger.

v3:
 * Rebased to v4.2-rc8.

Bob Liu (10):
  xen/blkif: document blkif multi-queue/ring extension
  xen/blkfront: separate per ring information out of device info
  xen/blkfront: pseudo support for multi hardware queues/rings
  xen/blkfront: split per device io_lock
  xen/blkfront: negotiate number of queues/rings to be used with backend
  xen/blkback: separate ring information out of struct xen_blkif
  xen/blkback: pseudo support for multi hardware queues/rings
  xen/blkback: get the number of hardware queues/rings from blkfront
  xen/blkfront: make persistent grants per-queue
  xen/blkback: make pool of persistent grants and free pages per-queue

 drivers/block/xen-blkback/blkback.c | 386 ++-
 drivers/block/xen-blkback/common.h  |  78 ++--
 drivers/block/xen-blkback/xenbus.c  | 359 --
 drivers/block/xen-blkfront.c| 718 ++--
 include/xen/interface/io/blkif.h|  48 +++
 5 files changed, 971 insertions(+), 618 deletions(-)

-- 
1.8.3.1

