Re: [PATCH v8 3/4] block: add block timer and throttling algorithm

2011-09-26 Thread Zhi Yong Wu
On Sat, Sep 24, 2011 at 12:19 AM, Kevin Wolf kw...@redhat.com wrote:
 Am 08.09.2011 12:11, schrieb Zhi Yong Wu:
 Note:
      1.) When bps/iops limits are set to a small value such as 511
 bytes/s, the VM will hang. We are considering how to handle this scenario.
      2.) When a dd command is issued in the guest, if its bs option is set to a
 large value such as bs=1024K, the resulting speed will be slightly bigger than
 the limits.

 For these problems, if you have any nice thoughts, please let us know. :)

 Signed-off-by: Zhi Yong Wu wu...@linux.vnet.ibm.com
 ---
  block.c |  259 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
  block.h |    1 -
  2 files changed, 248 insertions(+), 12 deletions(-)

 One general comment: What about synchronous and/or coroutine I/O
 operations? Do you think they are just not important enough to consider
 here or were they forgotten?
For sync ops, we assume that they will be converted into async mode at
some point in the future, right?
For coroutine I/O, it is introduced in the image driver layer, behind
bdrv_aio_readv/writev. I think we need not consider it here, right?


 Also, do I understand correctly that you're always submitting the whole
Right, when the block timer fires, it will flush the whole request queue.
 queue at once? Does this effectively enforce the limit all the time or
 will it lead to some peaks and then no requests at all for a while until
In fact, it only tries to submit those enqueued requests one by one. If one
fails to pass the limit, that request will be enqueued again.
 the average is right again?
Yeah, it is possible. Do you have a better idea?

 Maybe some documentation on how it all works from a high level
 perspective would be helpful.
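
Roughly, the current flow is (a simplified sketch using the patch's names,
not a quote of the code; bdrv_aio_writev takes the analogous path):

    /* on each bdrv_aio_readv, when io_limits_enabled */
    if (bdrv_exceed_io_limits(bs, nb_sectors, false, &wait_time)) {
        /* over budget for the current slice: queue the request and
         * (re)arm the timer for when the budget should catch up */
        qemu_block_queue_enqueue(bs->block_queue, bs, bdrv_aio_readv,
                                 sector_num, qiov, nb_sectors, cb, opaque);
        qemu_mod_timer(bs->block_timer,
                       wait_time + qemu_get_clock_ns(vm_clock));
    } else {
        /* under budget: dispatch now and charge the accounting */
        bs->io_disps.bytes[BLOCK_IO_LIMIT_READ] +=
            nb_sectors * BDRV_SECTOR_SIZE;
        bs->io_disps.ios[BLOCK_IO_LIMIT_READ]++;
    }

When the timer fires, bdrv_block_timer() flushes the queue; each dequeued
request goes through the same limit check again, so anything still over
budget is re-queued instead of dispatched.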

 diff --git a/block.c b/block.c
 index cd75183..c08fde8 100644
 --- a/block.c
 +++ b/block.c
 @@ -30,6 +30,9 @@
  #include "qemu-objects.h"
  #include "qemu-coroutine.h"

 +#include "qemu-timer.h"
 +#include "block/blk-queue.h"
 +
  #ifdef CONFIG_BSD
  #include <sys/types.h>
  #include <sys/stat.h>
 @@ -72,6 +75,13 @@ static int coroutine_fn bdrv_co_writev_em(BlockDriverState *bs,
                                            QEMUIOVector *iov);
  static int coroutine_fn bdrv_co_flush_em(BlockDriverState *bs);

 +static bool bdrv_exceed_bps_limits(BlockDriverState *bs, int nb_sectors,
 +        bool is_write, double elapsed_time, uint64_t *wait);
 +static bool bdrv_exceed_iops_limits(BlockDriverState *bs, bool is_write,
 +        double elapsed_time, uint64_t *wait);
 +static bool bdrv_exceed_io_limits(BlockDriverState *bs, int nb_sectors,
 +        bool is_write, int64_t *wait);
 +
  static QTAILQ_HEAD(, BlockDriverState) bdrv_states =
      QTAILQ_HEAD_INITIALIZER(bdrv_states);

 @@ -745,6 +755,11 @@ int bdrv_open(BlockDriverState *bs, const char *filename, int flags,
              bs->change_cb(bs->change_opaque, CHANGE_MEDIA);
      }

 +    /* throttling disk I/O limits */
 +    if (bs->io_limits_enabled) {
 +        bdrv_io_limits_enable(bs);
 +    }
 +
      return 0;

  unlink_and_fail:
 @@ -783,6 +798,18 @@ void bdrv_close(BlockDriverState *bs)
          if (bs->change_cb)
              bs->change_cb(bs->change_opaque, CHANGE_MEDIA);
      }
 +
 +    /* throttling disk I/O limits */
 +    if (bs->block_queue) {
 +        qemu_del_block_queue(bs->block_queue);
 +        bs->block_queue = NULL;
 +    }
 +
 +    if (bs->block_timer) {
 +        qemu_del_timer(bs->block_timer);
 +        qemu_free_timer(bs->block_timer);
 +        bs->block_timer = NULL;
 +    }

 Why not io_limits_disable() instead of copying the code here?
Good point, thanks.
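With that change, the cleanup here would collapse to something like this
(a sketch, assuming bdrv_io_limits_disable() does the same queue/timer
teardown as the lines above):

    /* in bdrv_close() */
    if (bs->io_limits_enabled) {
        bdrv_io_limits_disable(bs);
    }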

  }

  void bdrv_close_all(void)
 @@ -2341,16 +2368,48 @@ BlockDriverAIOCB *bdrv_aio_readv(BlockDriverState *bs, int64_t sector_num,
                                   BlockDriverCompletionFunc *cb, void *opaque)
  {
      BlockDriver *drv = bs->drv;
 -
 +    BlockDriverAIOCB *ret;
 +    int64_t wait_time = -1;
 +printf("sector_num=%ld, nb_sectors=%d\n", sector_num, nb_sectors);

 Debugging leftover (more of them follow, won't comment on each one)
Removed.

      trace_bdrv_aio_readv(bs, sector_num, nb_sectors, opaque);

 -    if (!drv)
 -        return NULL;
 -    if (bdrv_check_request(bs, sector_num, nb_sectors))
 +    if (!drv || bdrv_check_request(bs, sector_num, nb_sectors)) {
          return NULL;
 +    }

 This part is unrelated.
I have changed it back to the original.

 +
 +    /* throttling disk read I/O */
 +    if (bs->io_limits_enabled) {
 +        if (bdrv_exceed_io_limits(bs, nb_sectors, false, &wait_time)) {
 +            ret = qemu_block_queue_enqueue(bs->block_queue, bs, bdrv_aio_readv,
 +                           sector_num, qiov, nb_sectors, cb, opaque);
 +            printf("wait_time=%ld\n", wait_time);
 +            if (wait_time != -1) {
 +                printf("reset block timer\n");
 +                qemu_mod_timer(bs->block_timer,
 +                               wait_time + qemu_get_clock_ns(vm_clock));
 +            }
 +
 +            if (ret) {
 +                printf("ori ret is not null\n");

Re: [PATCH v8 3/4] block: add block timer and throttling algorithm

2011-09-26 Thread Zhi Yong Wu
On Tue, Sep 20, 2011 at 8:34 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
 On Mon, Sep 19, 2011 at 05:55:41PM +0800, Zhi Yong Wu wrote:
 On Wed, Sep 14, 2011 at 6:50 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
  On Tue, Sep 13, 2011 at 11:09:46AM +0800, Zhi Yong Wu wrote:
  On Fri, Sep 9, 2011 at 10:44 PM, Marcelo Tosatti mtosa...@redhat.com 
  wrote:
   On Thu, Sep 08, 2011 at 06:11:07PM +0800, Zhi Yong Wu wrote:
   Note:
        1.) When bps/iops limits are set to a small value such as 
   511 bytes/s, the VM will hang. We are considering how to handle 
   this scenario.
  
   You can increase the length of the slice, if the request is larger than
   slice_time * bps_limit.
  Yeah, but it is a challenge to decide how to increase it. Do you have a 
  nice idea?
 
  If the queue is empty, and the request being processed does not fit the
  queue, increase the slice so that the request fits.
 Sorry for the late reply. Actually, do you think this scenario is
 meaningful for the user?
 Even if we implement this, when the user limits bps below 512
 bytes/second, the VM still cannot run every task.
 Can you let us know why we need to make such an effort?

 It would be good to handle requests larger than the slice.

 It is not strictly necessary, but in case it's not handled, a minimum
 should be in place, to reflect the maximum request size known. Being able to
 specify something which crashes is not acceptable.
Hi, Marcelo,

Any comments? I have posted the implementation based on your suggestions.

-- 
Regards,

Zhi Yong Wu


Re: [PATCH v8 3/4] block: add block timer and throttling algorithm

2011-09-23 Thread Kevin Wolf
Am 08.09.2011 12:11, schrieb Zhi Yong Wu:
 Note:
  1.) When bps/iops limits are set to a small value such as 511 
 bytes/s, the VM will hang. We are considering how to handle this scenario.
  2.) When a dd command is issued in the guest, if its bs option is set to a 
 large value such as bs=1024K, the resulting speed will be slightly bigger than 
 the limits.
 
 For these problems, if you have any nice thoughts, please let us know. :)
 
 Signed-off-by: Zhi Yong Wu wu...@linux.vnet.ibm.com
 ---
  block.c |  259 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
  block.h |    1 -
  2 files changed, 248 insertions(+), 12 deletions(-)

One general comment: What about synchronous and/or coroutine I/O
operations? Do you think they are just not important enough to consider
here or were they forgotten?

Also, do I understand correctly that you're always submitting the whole
queue at once? Does this effectively enforce the limit all the time or
will it lead to some peaks and then no requests at all for a while until
the average is right again?

Maybe some documentation on how it all works from a high level
perspective would be helpful.

 diff --git a/block.c b/block.c
 index cd75183..c08fde8 100644
 --- a/block.c
 +++ b/block.c
 @@ -30,6 +30,9 @@
  #include "qemu-objects.h"
  #include "qemu-coroutine.h"

 +#include "qemu-timer.h"
 +#include "block/blk-queue.h"
 +
  #ifdef CONFIG_BSD
  #include <sys/types.h>
  #include <sys/stat.h>
 @@ -72,6 +75,13 @@ static int coroutine_fn bdrv_co_writev_em(BlockDriverState *bs,
                                            QEMUIOVector *iov);
  static int coroutine_fn bdrv_co_flush_em(BlockDriverState *bs);
  
 +static bool bdrv_exceed_bps_limits(BlockDriverState *bs, int nb_sectors,
 +        bool is_write, double elapsed_time, uint64_t *wait);
 +static bool bdrv_exceed_iops_limits(BlockDriverState *bs, bool is_write,
 +        double elapsed_time, uint64_t *wait);
 +static bool bdrv_exceed_io_limits(BlockDriverState *bs, int nb_sectors,
 +        bool is_write, int64_t *wait);
 +
  static QTAILQ_HEAD(, BlockDriverState) bdrv_states =
      QTAILQ_HEAD_INITIALIZER(bdrv_states);
  
 @@ -745,6 +755,11 @@ int bdrv_open(BlockDriverState *bs, const char *filename, int flags,
              bs->change_cb(bs->change_opaque, CHANGE_MEDIA);
      }
  
 +    /* throttling disk I/O limits */
 +    if (bs->io_limits_enabled) {
 +        bdrv_io_limits_enable(bs);
 +    }
 +
  return 0;
  
  unlink_and_fail:
 @@ -783,6 +798,18 @@ void bdrv_close(BlockDriverState *bs)
          if (bs->change_cb)
              bs->change_cb(bs->change_opaque, CHANGE_MEDIA);
      }
 +
 +    /* throttling disk I/O limits */
 +    if (bs->block_queue) {
 +        qemu_del_block_queue(bs->block_queue);
 +        bs->block_queue = NULL;
 +    }
 +
 +    if (bs->block_timer) {
 +        qemu_del_timer(bs->block_timer);
 +        qemu_free_timer(bs->block_timer);
 +        bs->block_timer = NULL;
 +    }

Why not io_limits_disable() instead of copying the code here?

  }
  
  void bdrv_close_all(void)
 @@ -2341,16 +2368,48 @@ BlockDriverAIOCB *bdrv_aio_readv(BlockDriverState *bs, int64_t sector_num,
                                   BlockDriverCompletionFunc *cb, void *opaque)
  {
      BlockDriver *drv = bs->drv;
 -
 +    BlockDriverAIOCB *ret;
 +    int64_t wait_time = -1;
 +printf("sector_num=%ld, nb_sectors=%d\n", sector_num, nb_sectors);

Debugging leftover (more of them follow, won't comment on each one)

      trace_bdrv_aio_readv(bs, sector_num, nb_sectors, opaque);
 
 -    if (!drv)
 -        return NULL;
 -    if (bdrv_check_request(bs, sector_num, nb_sectors))
 +    if (!drv || bdrv_check_request(bs, sector_num, nb_sectors)) {
          return NULL;
 +    }

This part is unrelated.

 +
 +    /* throttling disk read I/O */
 +    if (bs->io_limits_enabled) {
 +        if (bdrv_exceed_io_limits(bs, nb_sectors, false, &wait_time)) {
 +            ret = qemu_block_queue_enqueue(bs->block_queue, bs, bdrv_aio_readv,
 +                           sector_num, qiov, nb_sectors, cb, opaque);
 +            printf("wait_time=%ld\n", wait_time);
 +            if (wait_time != -1) {
 +                printf("reset block timer\n");
 +                qemu_mod_timer(bs->block_timer,
 +                               wait_time + qemu_get_clock_ns(vm_clock));
 +            }
 +
 +            if (ret) {
 +                printf("ori ret is not null\n");
 +            } else {
 +                printf("ori ret is null\n");
 +            }
 +
 +            return ret;
 +        }
 +    }
  
 -    return drv->bdrv_aio_readv(bs, sector_num, qiov, nb_sectors,
 +    ret = drv->bdrv_aio_readv(bs, sector_num, qiov, nb_sectors,
                                 cb, opaque);
 +    if (ret) {
 +        if (bs->io_limits_enabled) {
 +            bs->io_disps.bytes[BLOCK_IO_LIMIT_READ] +=
 +                      (unsigned) nb_sectors * BDRV_SECTOR_SIZE;
 +            bs->io_disps.ios[BLOCK_IO_LIMIT_READ]++;
 +        }

I wonder if you can't reuse 

Re: [PATCH v8 3/4] block: add block timer and throttling algorithm

2011-09-21 Thread Zhi Yong Wu
On Tue, Sep 20, 2011 at 8:34 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
 On Mon, Sep 19, 2011 at 05:55:41PM +0800, Zhi Yong Wu wrote:
 On Wed, Sep 14, 2011 at 6:50 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
  On Tue, Sep 13, 2011 at 11:09:46AM +0800, Zhi Yong Wu wrote:
  On Fri, Sep 9, 2011 at 10:44 PM, Marcelo Tosatti mtosa...@redhat.com 
  wrote:
   On Thu, Sep 08, 2011 at 06:11:07PM +0800, Zhi Yong Wu wrote:
   Note:
        1.) When bps/iops limits are set to a small value such as 
   511 bytes/s, the VM will hang. We are considering how to handle 
   this scenario.
  
   You can increase the length of the slice, if the request is larger than
   slice_time * bps_limit.
  Yeah, but it is a challenge to decide how to increase it. Do you have a 
  nice idea?
 
  If the queue is empty, and the request being processed does not fit the
  queue, increase the slice so that the request fits.
 Sorry for the late reply. Actually, do you think this scenario is
 meaningful for the user?
 Even if we implement this, when the user limits bps below 512
 bytes/second, the VM still cannot run every task.
 Can you let us know why we need to make such an effort?

 It would be good to handle requests larger than the slice.
Below are the code changes for your approach. I used simple tracing and did a dd
test on the guest, then found that only the first read/write request is handled,
and subsequent requests stay enqueued. After several minutes, the guest prints
the info below on its terminal:
INFO: task kdmflush:326 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

I'm not sure if this is correct. Do you have a better way to verify it?
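
So far I only time a direct-I/O transfer in the guest and compare dd's
reported rate with the configured limit, e.g. (device name, sizes and the
bps value here are only examples):

    # writes 10 MiB; with bps=1048576 this should take about 10 s, and
    # the reported rate should sit at, not above, the limit
    dd if=/dev/zero of=/dev/vdb bs=4k count=2560 oflag=direct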


 It is not strictly necessary, but in case it's not handled, a minimum
 should be in place, to reflect the maximum request size known. Being able to
 specify something which crashes is not acceptable.



diff --git a/block.c b/block.c
index af19784..f88c22a 100644
--- a/block.c
+++ b/block.c
@@ -132,9 +132,10 @@ void bdrv_io_limits_disable(BlockDriverState *bs)
         bs->block_timer = NULL;
     }
 
-    bs->slice_start = 0;
-
-    bs->slice_end   = 0;
+    bs->slice_time    = 0;
+    bs->slice_start   = 0;
+    bs->slice_end     = 0;
+    bs->first_time_rw = false;
 }

 static void bdrv_block_timer(void *opaque)
@@ -151,9 +152,10 @@ void bdrv_io_limits_enable(BlockDriverState *bs)
     bs->block_queue = qemu_new_block_queue();
     bs->block_timer = qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);
 
+    bs->slice_time  = BLOCK_IO_SLICE_TIME;
     bs->slice_start = qemu_get_clock_ns(vm_clock);
-
-    bs->slice_end   = bs->slice_start + BLOCK_IO_SLICE_TIME;
+    bs->slice_end   = bs->slice_start + bs->slice_time;
+    bs->first_time_rw = true;
 }

 bool bdrv_io_limits_enabled(BlockDriverState *bs)
@@ -2846,11 +2848,23 @@ static bool bdrv_exceed_bps_limits(BlockDriverState *bs, int nb_sectors,
     /* Calc approx time to dispatch */
     wait_time = (bytes_disp + bytes_res) / bps_limit - elapsed_time;
 
-    if (wait) {
-        *wait = wait_time * BLOCK_IO_SLICE_TIME * 10;
-    }
+    if (!bs->first_time_rw
+        || !qemu_block_queue_is_empty(bs->block_queue)) {
+        if (wait) {
+            *wait = wait_time * BLOCK_IO_SLICE_TIME * 10;
+        }
 
-    return true;
+        return true;
+    } else {
+        bs->slice_time = wait_time * BLOCK_IO_SLICE_TIME * 10;
+        bs->slice_end += bs->slice_time - BLOCK_IO_SLICE_TIME;
+        if (wait) {
+            *wait = 0;
+        }
+
+        bs->first_time_rw = false;
+        return false;
+    }
 }

 static bool bdrv_exceed_iops_limits(BlockDriverState *bs, bool is_write,
@@ -2895,11 +2909,23 @@ static bool bdrv_exceed_iops_limits(BlockDriverState *bs, bool is_write,
         wait_time = 0;
     }
 
-    if (wait) {
-        *wait = wait_time * BLOCK_IO_SLICE_TIME * 10;
-    }
+    if (!bs->first_time_rw
+        || !qemu_block_queue_is_empty(bs->block_queue)) {
+        if (wait) {
+            *wait = wait_time * BLOCK_IO_SLICE_TIME * 10;
+        }
 
-    return true;
+        return true;
+    } else {
+        bs->slice_time = wait_time * BLOCK_IO_SLICE_TIME * 10;
+        bs->slice_end += bs->slice_time - BLOCK_IO_SLICE_TIME;
+        if (wait) {
+            *wait = 0;
+        }
+
+        bs->first_time_rw = false;
+        return false;
+    }
 }

 static bool bdrv_exceed_io_limits(BlockDriverState *bs, int nb_sectors,
@@ -2912,10 +2938,10 @@ static bool bdrv_exceed_io_limits(BlockDriverState *bs, int nb_sectors,
     now = qemu_get_clock_ns(vm_clock);
     if ((bs->slice_start < now)
         && (bs->slice_end > now)) {
-        bs->slice_end = now + BLOCK_IO_SLICE_TIME;
+        bs->slice_end = now + bs->slice_time;
     } else {
         bs->slice_start = now;
-        bs->slice_end   = now + BLOCK_IO_SLICE_TIME;
+        bs->slice_end   = now + bs->slice_time;
 
         bs->io_disps.bytes[is_write]  = 0;
         bs->io_disps.bytes[!is_write] = 0;
diff --git a/block/blk-queue.c b/block/blk-queue.c
index adef497..04e52ad 100644
--- a/block/blk-queue.c

Re: [PATCH v8 3/4] block: add block timer and throttling algorithm

2011-09-20 Thread Marcelo Tosatti
On Mon, Sep 19, 2011 at 05:55:41PM +0800, Zhi Yong Wu wrote:
 On Wed, Sep 14, 2011 at 6:50 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
  On Tue, Sep 13, 2011 at 11:09:46AM +0800, Zhi Yong Wu wrote:
  On Fri, Sep 9, 2011 at 10:44 PM, Marcelo Tosatti mtosa...@redhat.com 
  wrote:
   On Thu, Sep 08, 2011 at 06:11:07PM +0800, Zhi Yong Wu wrote:
   Note:
        1.) When bps/iops limits are set to a small value such as 
   511 bytes/s, the VM will hang. We are considering how to handle 
   this scenario.
  
   You can increase the length of the slice, if the request is larger than
   slice_time * bps_limit.
  Yeah, but it is a challenge to decide how to increase it. Do you have a 
  nice idea?
 
  If the queue is empty, and the request being processed does not fit the
  queue, increase the slice so that the request fits.
 Sorry for the late reply. Actually, do you think this scenario is
 meaningful for the user?
 Even if we implement this, when the user limits bps below 512
 bytes/second, the VM still cannot run every task.
 Can you let us know why we need to make such an effort?

It would be good to handle requests larger than the slice.

It is not strictly necessary, but in case it's not handled, a minimum
should be in place, to reflect the maximum request size known. Being able to
specify something which crashes is not acceptable.
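
Something along these lines would be enough (a sketch: MAX_REQUEST_BYTES
stands for whatever upper bound on request size the block layer can
guarantee, and the field name just follows the BLOCK_IO_LIMIT_* convention
used elsewhere in this series):

    /* reject limits that could never admit a maximal request */
    if (io_limits->bps[BLOCK_IO_LIMIT_TOTAL] > 0 &&
        io_limits->bps[BLOCK_IO_LIMIT_TOTAL] < MAX_REQUEST_BYTES) {
        error_report("bps limit is below the maximum request size");
        return -EINVAL;
    }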



Re: [PATCH v8 3/4] block: add block timer and throttling algorithm

2011-09-20 Thread Zhi Yong Wu
On Tue, Sep 20, 2011 at 8:34 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
 On Mon, Sep 19, 2011 at 05:55:41PM +0800, Zhi Yong Wu wrote:
 On Wed, Sep 14, 2011 at 6:50 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
  On Tue, Sep 13, 2011 at 11:09:46AM +0800, Zhi Yong Wu wrote:
  On Fri, Sep 9, 2011 at 10:44 PM, Marcelo Tosatti mtosa...@redhat.com 
  wrote:
   On Thu, Sep 08, 2011 at 06:11:07PM +0800, Zhi Yong Wu wrote:
   Note:
        1.) When bps/iops limits are set to a small value such as 
   511 bytes/s, the VM will hang. We are considering how to handle 
   this scenario.
  
   You can increase the length of the slice, if the request is larger than
   slice_time * bps_limit.
  Yeah, but it is a challenge to decide how to increase it. Do you have a 
  nice idea?
 
  If the queue is empty, and the request being processed does not fit the
  queue, increase the slice so that the request fits.
 Sorry for the late reply. Actually, do you think this scenario is
 meaningful for the user?
 Even if we implement this, when the user limits bps below 512
 bytes/second, the VM still cannot run every task.
 Can you let us know why we need to make such an effort?

 It would be good to handle requests larger than the slice.
OK. Let me spend some time trying your approach.

 It is not strictly necessary, but in case it's not handled, a minimum
 should be in place, to reflect the maximum request size known. Being able to
In fact, slice_time is dynamic now, and is adjusted within some range.
 specify something which crashes is not acceptable.
Do you mean that a warning should be displayed if the specified
limit is smaller than the minimum capability?

-- 
Regards,

Zhi Yong Wu


Re: [PATCH v8 3/4] block: add block timer and throttling algorithm

2011-09-20 Thread Zhi Yong Wu
On Wed, Sep 21, 2011 at 11:14 AM, Zhi Yong Wu zwu.ker...@gmail.com wrote:
 On Tue, Sep 20, 2011 at 8:34 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
 On Mon, Sep 19, 2011 at 05:55:41PM +0800, Zhi Yong Wu wrote:
 On Wed, Sep 14, 2011 at 6:50 PM, Marcelo Tosatti mtosa...@redhat.com 
 wrote:
  On Tue, Sep 13, 2011 at 11:09:46AM +0800, Zhi Yong Wu wrote:
  On Fri, Sep 9, 2011 at 10:44 PM, Marcelo Tosatti mtosa...@redhat.com 
  wrote:
   On Thu, Sep 08, 2011 at 06:11:07PM +0800, Zhi Yong Wu wrote:
   Note:
        1.) When bps/iops limits are set to a small value such as 
   511 bytes/s, the VM will hang. We are considering how to handle 
   this scenario.
  
   You can increase the length of the slice, if the request is larger than
   slice_time * bps_limit.
  Yeah, but it is a challenge to decide how to increase it. Do you have a 
  nice idea?
 
  If the queue is empty, and the request being processed does not fit the
  queue, increase the slice so that the request fits.
 Sorry for the late reply. Actually, do you think this scenario is
 meaningful for the user?
 Even if we implement this, when the user limits bps below 512
 bytes/second, the VM still cannot run every task.
 Can you let us know why we need to make such an effort?

 It would be good to handle requests larger than the slice.
 OK. Let me spend some time trying your approach.

 It is not strictly necessary, but in case it's not handled, a minimum
 should be in place, to reflect the maximum request size known. Being able to
 In fact, slice_time is dynamic now, and is adjusted within some range.
Sorry, I made a mistake. Currently it is fixed.
 specify something which crashes is not acceptable.
 Do you mean that a warning should be displayed if the specified
 limit is smaller than the minimum capability?

 --
 Regards,

 Zhi Yong Wu




-- 
Regards,

Zhi Yong Wu


Re: [PATCH v8 3/4] block: add block timer and throttling algorithm

2011-09-19 Thread Zhi Yong Wu
On Wed, Sep 14, 2011 at 6:50 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
 On Tue, Sep 13, 2011 at 11:09:46AM +0800, Zhi Yong Wu wrote:
 On Fri, Sep 9, 2011 at 10:44 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
  On Thu, Sep 08, 2011 at 06:11:07PM +0800, Zhi Yong Wu wrote:
  Note:
       1.) When bps/iops limits are set to a small value such as 511 
  bytes/s, the VM will hang. We are considering how to handle this 
  scenario.
 
  You can increase the length of the slice, if the request is larger than
  slice_time * bps_limit.
 Yeah, but it is a challenge to decide how to increase it. Do you have a nice 
 idea?

 If the queue is empty, and the request being processed does not fit the
 queue, increase the slice so that the request fits.
Sorry for the late reply. Actually, do you think this scenario is
meaningful for the user?
Even if we implement this, when the user limits bps below 512
bytes/second, the VM still cannot run every task.
Can you let us know why we need to make such an effort?


 That is, make BLOCK_IO_SLICE_TIME dynamic and adjust it as described
 above (if the bps or io limits change, reset it to the default
 BLOCK_IO_SLICE_TIME).

       2.) When a dd command is issued in the guest, if its bs option is set to 
  a large value such as bs=1024K, the resulting speed will be slightly bigger 
  than the limits.
 
  Why?
 This issue no longer exists. I will remove it.
 When the drive bps=100, I did some testing on the guest VM.
 1.) bs=1024K
 18+0 records in
 18+0 records out
 18874368 bytes (19 MB) copied, 26.6268 s, 709 kB/s
 2.) bs=2048K
 18+0 records in
 18+0 records out
 37748736 bytes (38 MB) copied, 46.5336 s, 811 kB/s
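
A guess at why a larger bs overshoots more, assuming the slice accounting
shown elsewhere in this thread (the io_disps counters are zeroed whenever a
new slice starts): a large request is only charged to the slice it was
dispatched in, so part of its cost is forgotten at each slice rollover.
With made-up numbers:

    bps_limit = 700000 B/s, slice = 0.1 s  =>  budget = 70000 B/slice
    a 2 MiB request is worth ~30 slices of budget, but the counters
    reset at each new slice, so most of that debt is never repaid

The forgotten remainder grows with bs, which would match 709 kB/s for
bs=1024K versus 811 kB/s for bs=2048K.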

 
  There are lots of debugging leftovers in the patch.
 Sorry, I forgot to remove them.

-- 
Regards,

Zhi Yong Wu


Re: [PATCH v8 3/4] block: add block timer and throttling algorithm

2011-09-14 Thread Marcelo Tosatti
On Tue, Sep 13, 2011 at 11:09:46AM +0800, Zhi Yong Wu wrote:
 On Fri, Sep 9, 2011 at 10:44 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
  On Thu, Sep 08, 2011 at 06:11:07PM +0800, Zhi Yong Wu wrote:
  Note:
       1.) When bps/iops limits are set to a small value such as 511 
  bytes/s, the VM will hang. We are considering how to handle this 
  scenario.
 
  You can increase the length of the slice, if the request is larger than
  slice_time * bps_limit.
 Yeah, but it is a challenge to decide how to increase it. Do you have a nice 
 idea?

If the queue is empty, and the request being processed does not fit the
queue, increase the slice so that the request fits.

That is, make BLOCK_IO_SLICE_TIME dynamic and adjust it as described
above (if the bps or io limits change, reset it to the default
BLOCK_IO_SLICE_TIME).
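
Something like this, say (a sketch: bytes_this_request, slice_time_seconds
and the unit conversion are illustrative, and the queue-emptiness check is a
hypothetical helper on the request queue):

    /* in bdrv_exceed_bps_limits(): the only pending request does not
     * fit the default slice, so stretch the slice until it fits */
    if (qemu_block_queue_is_empty(bs->block_queue)
        && bytes_this_request > bps_limit * slice_time_seconds) {
        bs->slice_time = (double)bytes_this_request / bps_limit
                         * 1000000000.0;           /* seconds -> ns */
        bs->slice_end  = bs->slice_start + bs->slice_time;
        /* let the request pass instead of queueing it */
    }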

       2.) When a dd command is issued in the guest, if its bs option is set to 
  a large value such as bs=1024K, the resulting speed will be slightly bigger 
  than the limits.
 
  Why?
 This issue no longer exists. I will remove it.
 When the drive bps=100, I did some testing on the guest VM.
 1.) bs=1024K
 18+0 records in
 18+0 records out
 18874368 bytes (19 MB) copied, 26.6268 s, 709 kB/s
 2.) bs=2048K
 18+0 records in
 18+0 records out
 37748736 bytes (38 MB) copied, 46.5336 s, 811 kB/s
 
 
  There are lots of debugging leftovers in the patch.
 Sorry, I forgot to remove them.
 
 



Re: [PATCH v8 3/4] block: add block timer and throttling algorithm

2011-09-09 Thread Marcelo Tosatti
On Thu, Sep 08, 2011 at 06:11:07PM +0800, Zhi Yong Wu wrote:
 Note:
  1.) When bps/iops limits are set to a small value such as 511 
 bytes/s, the VM will hang. We are considering how to handle this scenario.

You can increase the length of the slice, if the request is larger than
slice_time * bps_limit.
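
Concretely, with made-up numbers:

    slice budget = slice_time * bps_limit = 0.1 s * 1000000 B/s = 100000 B
    request      = 512 KiB = 524288 B  >  100000 B
    => it can never fit one slice; it needs
       slice_time >= request / bps_limit ~= 0.53 s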

  2.) When a dd command is issued in the guest, if its bs option is set to a 
 large value such as bs=1024K, the resulting speed will be slightly bigger than 
 the limits.

Why?

There are lots of debugging leftovers in the patch.
