The per-device io_lock became a coarse-grained lock after multi-queues/rings
were introduced; this patch introduces a fine-grained ring_lock for each ring.
The old io_lock is renamed to dev_lock and only protects the ->grants list,
which is shared by all rings.
Signed-off-by: Bob
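The locking split this description outlines can be sketched in plain user-space C. Everything below is illustrative: the structure layout, field names, and pthread mutexes are assumptions standing in for the kernel's spinlocks, not the actual patch.

```c
#include <pthread.h>

/* Each ring owns a fine-grained ring_lock for its own I/O state; the old
 * per-device lock survives as dev_lock, guarding only the shared grants. */
struct ring {
	pthread_mutex_t ring_lock;	/* protects this ring's state only */
	int inflight;			/* per-ring state, under ring_lock */
};

struct dev {
	pthread_mutex_t dev_lock;	/* protects only the shared grants */
	int grants;			/* stand-in for the shared ->grants list */
	struct ring ring[2];
};

static void ring_submit(struct ring *r)
{
	pthread_mutex_lock(&r->ring_lock);	/* no cross-ring contention */
	r->inflight++;
	pthread_mutex_unlock(&r->ring_lock);
}

static void dev_take_grant(struct dev *d)
{
	pthread_mutex_lock(&d->dev_lock);	/* shared list still serialized */
	d->grants--;
	pthread_mutex_unlock(&d->dev_lock);
}
```

Two rings submitting I/O never contend on each other's ring_lock; they only meet on dev_lock when touching the shared grants.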
Preparatory patch for multiple hardware queues (rings). The number of
rings is unconditionally set to 1; larger numbers will be enabled in the next
patch so as to keep every single patch small and readable.
Signed-off-by: Arianna Avanzini
Signed-off-by: Bob Liu
---
drivers/block/xen-blkback
Preparatory patch for multiple hardware queues (rings). The number of
rings is unconditionally set to 1; larger numbers will be enabled in the next
patch so as to keep every single patch small and readable.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkfront.c | 327
supporting multi hardware queues/rings.
Signed-off-by: Arianna Avanzini
Signed-off-by: Bob Liu
---
drivers/block/xen-blkback/blkback.c | 233
drivers/block/xen-blkback/common.h | 64 ++
drivers/block/xen-blkback/xenbus.c | 107 ++---
3 fil
multi hardware queues/rings.
Signed-off-by: Arianna Avanzini
Signed-off-by: Bob Liu
---
drivers/block/xen-blkfront.c | 321 ---
1 file changed, 178 insertions(+), 143 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
ind
The backend advertises "multi-queue-max-queues" to the frontend, then gets the
negotiated number from "multi-queue-num-queues", which is written by blkfront.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkback/blkback.c | 11 +++
drivers/block/xen-blkback/common.h | 1 +
drivers/block/
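The negotiation this description implies reduces to a clamp. The helper below is a sketch only: the function name is made up here, and the real blkback reads these values through xenbus rather than taking them as arguments.

```c
/* The backend advertised backend_max via "multi-queue-max-queues"; the
 * frontend wrote its request to "multi-queue-num-queues". The usable
 * count can never exceed what the backend offered, and at least one
 * queue must remain. */
unsigned int negotiate_queues(unsigned int backend_max,
			      unsigned int frontend_requested)
{
	if (frontend_requested == 0)
		return 1;
	if (frontend_requested > backend_max)
		return backend_max;
	return frontend_requested;
}
```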
Make the pool of persistent grants and free pages per-queue/ring instead of
per-device to get better scalability.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkback/blkback.c | 212 +---
drivers/block/xen-blkback/common.h | 32 +++---
drivers/block/xen-blkback
Make persistent grants per-queue/ring instead of per-device, so that we can
drop the 'dev_lock' and get better scalability.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkfront.c | 89 +---
1 file changed, 34 insertions(+), 55 deletions(-)
diff --git
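A minimal sketch of what "per-queue/ring instead of per-device" buys: once each ring owns its own free list of grants, no lock shared with other rings is needed to touch it. Structure names below are illustrative, not the driver's.

```c
#include <stddef.h>

struct grant {
	struct grant *next;
};

struct ring {
	struct grant *grants;	/* per-ring free list: private to this ring,
				 * so the old shared dev_lock is unnecessary */
};

/* Return a grant to this ring's free list (LIFO). */
static void ring_put_grant(struct ring *r, struct grant *g)
{
	g->next = r->grants;
	r->grants = g;
}

/* Take a grant from this ring's free list, or NULL if empty. */
static struct grant *ring_get_grant(struct ring *r)
{
	struct grant *g = r->grants;

	if (g)
		r->grants = g->next;
	return g;
}
```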
Document the multi-queue/ring feature in terms of XenStore keys to be written by
the backend and by the frontend.
Signed-off-by: Bob Liu
--
v2:
Add descriptions together with multi-page ring buffer.
---
include/xen/interface/io/blkif.h | 48
1 file
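As a small illustration of the per-queue XenStore layout such a document would describe, a helper composing "queue-N/<leaf>" paths; the exact directory shape here is an assumption based on this summary, not a quote from blkif.h.

```c
#include <stdio.h>

/* e.g. q = 0, leaf = "ring-ref" -> "queue-0/ring-ref" */
static int queue_key(char *buf, size_t len, unsigned int q, const char *leaf)
{
	return snprintf(buf, len, "queue-%u/%s", q, leaf);
}
```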
ueue-num-queues", blkback needs to read this negotiated number.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkfront.c | 166 +++
1 file changed, 120 insertions(+), 46 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkf
e and real SSD storage.
---
v4:
* Rebase to v4.3-rc7
* Comments from Roger
v3:
* Rebased to v4.2-rc8
Bob Liu (10):
xen/blkif: document blkif multi-queue/ring extension
xen/blkfront: separate per ring information out of device info
xen/blkfront: pseudo support for multi hardware queues/r
On 11/02/2015 12:49 PM, kbuild test robot wrote:
> Hi Bob,
>
> [auto build test ERROR on v4.3-rc7 -- if it's inappropriate base, please
> suggest rules for selecting the more suitable base]
>
> url:
> https://github.com/0day-ci/linux/commits/Bob-Liu/xen-block-multi-
On 10/19/2015 05:36 PM, Roger Pau Monné wrote:
> El 10/10/15 a les 6.08, Bob Liu ha escrit:
>> On 10/05/2015 10:55 PM, Roger Pau Monné wrote:
>>> The same for the pool of persistent grants, it should be per-device and
>>> not per-ring.
>>>
>>> And I
On 10/03/2015 01:02 AM, Roger Pau Monné wrote:
> El 05/09/15 a les 14.39, Bob Liu ha escrit:
>> Split per ring information to an new structure:blkfront_ring_info, also
>> rename
>> per blkfront_info to blkfront_dev_info.
> ^ removed.
>>
>> A ring is th
On 10/05/2015 10:55 PM, Roger Pau Monné wrote:
> El 05/09/15 a les 14.39, Bob Liu ha escrit:
>> Split per ring information to an new structure:xen_blkif_ring, so that one
>> vbd
>> device can associate with one or more rings/hardware queues.
>>
>> This patch is a
On 10/07/2015 07:46 PM, Roger Pau Monné wrote:
> El 07/10/15 a les 12.39, Bob Liu ha escrit:
>> On 10/05/2015 10:40 PM, Roger Pau Monné wrote:
>>> El 05/09/15 a les 14.39, Bob Liu ha escrit:
>>>> @@ -2267,6 +2335,12 @@ static int __init xlblk_init(void)
>>&g
On 10/05/2015 11:15 PM, Roger Pau Monné wrote:
> El 05/09/15 a les 14.39, Bob Liu ha escrit:
>> Backend advertises "multi-queue-max-queues" to front, and then read back the
>> final negotiated queues/rings from "multi-queue-num-queues" which is wrote by
>&g
On 10/05/2015 11:08 PM, Roger Pau Monné wrote:
> El 05/09/15 a les 14.39, Bob Liu ha escrit:
>> Prepare patch for multi hardware queues/rings, the ring number was set to 1
>> by
>> force.
>
> This should be:
>
> Preparatory patch for multiple hardware queu
On 10/05/2015 10:40 PM, Roger Pau Monné wrote:
> El 05/09/15 a les 14.39, Bob Liu ha escrit:
>> The max number of hardware queues for xen/blkfront is set by parameter
>> 'max_queues', while the number xen/blkback supported is notified through
>> xenstore(&q
On 10/05/2015 10:13 PM, Roger Pau Monné wrote:
> El 05/09/15 a les 14.39, Bob Liu ha escrit:
>> The per device io_lock became a coarser grained lock after multi-queues/rings
>> was introduced, this patch converts it to a fine-grained per ring lock.
>>
>> NOTE:
On 10/05/2015 06:52 PM, Roger Pau Monné wrote:
> El 05/09/15 a les 14.39, Bob Liu ha escrit:
>> Prepare patch for multi hardware queues/rings, the ring number was set to 1
>> by
>> force.
>>
>> * Use 'nr_rings' in per dev_info to identify how many h
On 10/03/2015 12:22 AM, Roger Pau Monné wrote:
> El 02/10/15 a les 18.12, Wei Liu ha escrit:
>> On Fri, Oct 02, 2015 at 06:04:35PM +0200, Roger Pau Monné wrote:
>>> El 05/09/15 a les 14.39, Bob Liu ha escrit:
>>>> Document multi queues/rings of xen-block.
&g
On 09/14/2015 01:47 AM, Julien Grall wrote:
>
>
> On 13/09/2015 13:44, Bob Liu wrote:
>> I may misunderstood here.
>> But I think same changes are also required even if backend supports indirect
>> grant when frontend is using 64KB page granularity.
>> Else
On 09/13/2015 08:06 PM, Julien Grall wrote:
>
>
> On 12/09/2015 10:46, Bob Liu wrote:
>> Hi Julien,
>
> Hi Bob,
>
>
>> On 09/12/2015 03:31 AM, Julien Grall wrote:
>>> Hi all,
>>>
>>> This is a follow-up on the previous discussi
Hi Julien,
On 09/12/2015 03:31 AM, Julien Grall wrote:
> Hi all,
>
> This is a follow-up on the previous discussion [1] related to guest using 64KB
> page granularity not booting with backend using non-indirect grant.
>
> This has been successly tested on ARM64 with both 64KB and 4KB page
>
On 09/07/2015 07:10 PM, Julien Grall wrote:
> On 07/09/15 07:07, Bob Liu wrote:
>> Hi Julien,
>
> Hi Bob,
>
>> On 09/04/2015 09:51 PM, Julien Grall wrote:
>>> Hi Roger,
>>>
>>> On 04/09/15 11:08, Roger Pau Monne wrote:
>>>> Req
Hi Julien,
On 09/04/2015 09:51 PM, Julien Grall wrote:
> Hi Roger,
>
> On 04/09/15 11:08, Roger Pau Monne wrote:
>> Request allocation has been moved to connect_ring, which is called every
>> time blkback connects to the frontend (this can happen multiple times during
>> a blkback instance life
Preparatory patch for multiple hardware queues/rings; the number of rings is
unconditionally set to 1.
Signed-off-by: Arianna Avanzini
Signed-off-by: Bob Liu
---
drivers/block/xen-blkback/common.h |3 +-
drivers/block/xen-blkback/xenbus.c | 328 +++-
2 files changed, 209
Split per-ring information out into a new structure, xen_blkif_ring, so that
one vbd device can be associated with one or more rings/hardware queues.
This patch is a preparation for supporting multi hardware queues/rings.
Signed-off-by: Arianna Avanzini
Signed-off-by: Bob Liu
---
drivers/block/xen
ueues".
Signed-off-by: Bob Liu
---
drivers/block/xen-blkfront.c | 142 --
1 file changed, 108 insertions(+), 34 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 1cae76b..1aa66c9 100644
--- a/drivers/block/
The backend advertises "multi-queue-max-queues" to the frontend, and then reads
back the final negotiated number of queues/rings from "multi-queue-num-queues",
which is written by blkfront.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkback/blkback.c |8
drivers/block/xen-blkb
The per-device io_lock became a coarse-grained lock after multi-queues/rings
were introduced; this patch converts it to a fine-grained per-ring lock.
NOTE: The per-dev_info structure is no longer protected by any lock.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkfront.c | 44
ter.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkfront.c | 513 +-
1 file changed, 308 insertions(+), 205 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index bf416d5..bf45c99 100644
--- a/drivers/block/
.
This patch is a preparation for supporting real multi hardware queues/rings.
Signed-off-by: Arianna Avanzini
Signed-off-by: Bob Liu
---
drivers/block/xen-blkfront.c | 854 ++
1 file changed, 445 insertions(+), 409 deletions(-)
diff --git a/drivers/block/xen
, and nvme.
Also dropped one unnecessary acquisition of info->io_lock when calling
blk_mq_stop_hw_queues().
Signed-off-by: Arianna Avanzini
Signed-off-by: Bob Liu
Reviewed-by: Christoph Hellwig
Acked-by: Jens Axboe
Signed-off-by: David Vrabel
---
drivers/block/xen-blkfront.c |
Document multi queues/rings of xen-block.
Signed-off-by: Bob Liu
---
include/xen/interface/io/blkif.h | 32
1 file changed, 32 insertions(+)
diff --git a/include/xen/interface/io/blkif.h b/include/xen/interface/io/blkif.h
index c33e1c4..b453b70 100644
ps: 1310k279k810k(+200%) 871k 1000k
Only with 4 queues did iops for domU improve a lot and nearly catch up with
dom0. There were also similar huge improvements for writes and real SSD storage.
---
v3: Rebased to v4.2-rc8
Bob Liu (9):
xen-blkfront: convert to blk-mq
Hi Rafal,
Please try adding "--iodepth_batch=32 --iodepth_batch_complete=32" to
the fio command line.
I didn't see this issue any more, not even for domU.
Thanks,
-Bob
On 08/21/2015 04:46 PM, Rafal Mielniczuk wrote:
> On 19/08/15 12:12, Bob Liu wrote:
>> Hi Jens &
s,
mint=30002msec, maxt=30002msec
Disk stats (read/write):
xvdb: ios=734048/0, merge=0/0, ticks=843584/0, in_queue=843080, util=99.72%
Regards,
-Bob
On 07/13/2015 05:55 PM, Bob Liu wrote:
> Note: This patch is based on original work of Arianna's internship for
> GNOME's Outreach Program for
On 08/13/2015 12:46 AM, Rafal Mielniczuk wrote:
> On 12/08/15 11:17, Bob Liu wrote:
>> On 08/12/2015 01:32 AM, Jens Axboe wrote:
>>> On 08/11/2015 03:45 AM, Rafal Mielniczuk wrote:
>>>> On 11/08/15 07:08, Bob Liu wrote:
>>>>> On 08/10/2015 11:52 PM, Je
On 08/12/2015 01:32 AM, Jens Axboe wrote:
> On 08/11/2015 03:45 AM, Rafal Mielniczuk wrote:
>> On 11/08/15 07:08, Bob Liu wrote:
>>> On 08/10/2015 11:52 PM, Jens Axboe wrote:
>>>> On 08/10/2015 05:03 AM, Rafal Mielniczuk wrote:
...
>>>>> Hello,
>&g
On 08/10/2015 11:52 PM, Jens Axboe wrote:
> On 08/10/2015 05:03 AM, Rafal Mielniczuk wrote:
>> On 01/07/15 04:03, Jens Axboe wrote:
>>> On 06/30/2015 08:21 AM, Marcus Granado wrote:
Hi,
Our measurements for the multiqueue patch indicate a clear improvement
in iops when more
8 32 16K   84K  105K  82K
> 8 32 32K   50K   54K  36K
> 8 32 64K   24K   27K  16K
> 8 32 128K  11K   13K  11K
>
/scheduler.
What is the result when using the noop scheduler?
Thanks,
Bob Liu
As I understand it, the blk-mq layer bypasses the I/O scheduler, which also
effectively disables merges.
Could you explain why it is difficult to enable merging in the blk-mq layer?
That could help close the performance gap we
descriptors and
flush/barrier features to a separate function and call it from both
blkfront_connect and blkif_recover
Signed-off-by: Bob Liu
---
Changes in v2:
* Also put blkfront_setup_indirect() inside
---
drivers/block/xen-blkfront.c | 122 +++---
1 file
The BUG_ON() in purge_persistent_gnt() will be triggered when the previous
purge work hasn't finished.
There is a work_pending() check before this BUG_ON(), but it doesn't account
for the case where the work is still currently running.
Signed-off-by: Bob Liu
---
Change in v2:
* Replace with work_busy()
---
drivers/block
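The race the fix targets can be modeled with two flags: a work item stops being *pending* the moment it is dequeued, but may still be *running*. work_busy() covers both states, work_pending() only the first. The toy model below is illustrative, not the kernel's workqueue implementation.

```c
#include <stdbool.h>

struct work {
	bool pending;	/* queued, not yet picked up by a worker */
	bool running;	/* currently executing on a worker */
};

/* Models work_pending(): misses the already-dequeued-but-running case. */
static bool work_pending_model(const struct work *w)
{
	return w->pending;
}

/* Models work_busy(): true while the item is pending *or* running. */
static bool work_busy_model(const struct work *w)
{
	return w->pending || w->running;
}
```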
We should consider info->feature_persistent when adding an indirect page to the
list info->indirect_pages, else the BUG_ON() in blkif_free() would be triggered.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkfront.c |6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/d
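The shape of the fix can be modeled in a few lines: pages are stashed on the indirect_pages list only when persistent grants are off, so the emptiness check at teardown holds. Field and function names here are illustrative stand-ins, not the driver's code.

```c
#include <stdbool.h>

struct dev_info {
	bool feature_persistent;
	int indirect_pages;	/* stand-in for the list's length */
};

static void complete_indirect(struct dev_info *info, int npages)
{
	/* Without persistent grants the pages go back on the list for
	 * reuse; with them, the pages stay granted and must not be listed,
	 * or the emptiness BUG_ON() in teardown would fire. */
	if (!info->feature_persistent)
		info->indirect_pages += npages;
}
```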
On 07/22/2015 12:43 PM, Bob Liu wrote:
>
> On 07/21/2015 05:25 PM, Roger Pau Monné wrote:
>> El 21/07/15 a les 5.30, Bob Liu ha escrit:
>>> This BUG_ON() in blkif_free() is incorrect, because indirect page can be
>>> added
>>> to list info->indi
On 07/21/2015 05:25 PM, Roger Pau Monné wrote:
> El 21/07/15 a les 5.30, Bob Liu ha escrit:
>> This BUG_ON() in blkif_free() is incorrect, because indirect page can be
>> added
>> to list info->indirect_pages in blkif_completion() no matter
>> feature_
On 07/21/2015 05:13 PM, Roger Pau Monné wrote:
> El 21/07/15 a les 5.30, Bob Liu ha escrit:
>> This BUG_ON() will be triggered when previous purge work haven't finished.
>> It's reasonable under pretty extreme load and should not panic the system.
>>
>> Signed-off-by: