Re: [PATCH 9/9] qemu-file: Account for rate_limit usage on qemu_fflush()

2023-05-05 Thread Juan Quintela
Daniel P. Berrangé  wrote:
>> >
>> > This has a slight semantic behavioural change.
>> 
>> Yeap.
>> 
>> See the answer to Peter.  But three things came to mind:
>> 
>> a - the size of the buffer is small (between 32KB and 256KB depending on
>> how you count it).  So we are going to call qemu_fflush() really
>> soon.
>> 
>> b - We are using this value to calculate how much we can send through
>> the wire.  Here we are saying how much we have queued to send.
>> 
>> c - When using multifd the number of bytes that we send through the qemu
>> file is even smaller: the migration-test multifd test sends 300MB of data
>> through the multifd channels and around 300KB on the qemu_file channel.
>> 
>> 
>> >
>> > By accounting for rate limit in the qemu_put functions, we ensure
>> > that we stop growing the iovec when rate limiting activates.
>> >
>> > If we only apply the rate limit in the flush function, that will
>> > let f->iov continue to accumulate buffers, while we have
>> > rate-limited the actual transfer.
>> 
>> 256KB maximum.  Our accounting has bigger errors than that.
>> 
>> 
>> > This makes me uneasy - it feels like a bad idea to continue to
>> > accumulate buffers if we're not ready to send them.
>> 
>> I still think that the change is correct.  But as you and Peter have
>> concerns about it, I will think a bit more about it.
>
> If Peter's calculations are correct, then I don't have any objection,
> as that's a small overhead.

#define IOV_MAX 1024

#define IO_BUF_SIZE 32768
#define MAX_IOV_SIZE MIN_CONST(IOV_MAX, 64)

struct QEMUFile {
    ...
    uint8_t buf[IO_BUF_SIZE];

    struct iovec iov[MAX_IOV_SIZE];
};
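
A quick worked example with those constants (assuming 4 KiB guest
pages, as on x86; this is an illustration, not QEMU code) gives the
worst case that can sit in one QEMUFile before qemu_fflush() runs:

#include <stdio.h>

#define IO_BUF_SIZE  32768   /* the internal copy buffer       */
#define MAX_IOV_SIZE 64      /* capped number of iovec entries */
#define PAGE_SIZE    4096    /* assumed x86 guest page size    */

int main(void)
{
    /* Worst case: buf is full and every iovec slot points at a
     * separate guest page queued by qemu_put_buffer_async(). */
    unsigned long bound = IO_BUF_SIZE + MAX_IOV_SIZE * PAGE_SIZE;
    printf("%lu bytes (%lu KiB)\n", bound, bound / 1024);
    /* prints: 294912 bytes (288 KiB) */
    return 0;
}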

Later, Juan.




Re: [PATCH 9/9] qemu-file: Account for rate_limit usage on qemu_fflush()

2023-05-05 Thread Daniel P. Berrangé
On Thu, May 04, 2023 at 07:22:25PM +0200, Juan Quintela wrote:
> Daniel P. Berrangé  wrote:
> > On Thu, May 04, 2023 at 01:38:41PM +0200, Juan Quintela wrote:
> >> That is the moment we know we have transferred something.
> >> 
> >> Signed-off-by: Juan Quintela 
> >> ---
> >>  migration/qemu-file.c | 7 +++
> >>  1 file changed, 3 insertions(+), 4 deletions(-)
> >> 
> >> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> >> index ddebfac847..309b4c56f4 100644
> >> --- a/migration/qemu-file.c
> >> +++ b/migration/qemu-file.c
> >> @@ -300,7 +300,9 @@ void qemu_fflush(QEMUFile *f)
> >> &local_error) < 0) {
> >>  qemu_file_set_error_obj(f, -EIO, local_error);
> >>  } else {
> >> -f->total_transferred += iov_size(f->iov, f->iovcnt);
> >> +uint64_t size = iov_size(f->iov, f->iovcnt);
> >> +qemu_file_acct_rate_limit(f, size);
> >> +f->total_transferred += size;
> >>  }
> >>  
> >>  qemu_iovec_release_ram(f);
> >> @@ -527,7 +529,6 @@ void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, size_t size,
> >>  return;
> >>  }
> >>  
> >> -f->rate_limit_used += size;
> >>  add_to_iovec(f, buf, size, may_free);
> >>  }
> >>  
> >> @@ -545,7 +546,6 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, size_t size)
> >>  l = size;
> >>  }
> >>  memcpy(f->buf + f->buf_index, buf, l);
> >> -f->rate_limit_used += l;
> >>  add_buf_to_iovec(f, l);
> >>  if (qemu_file_get_error(f)) {
> >>  break;
> >> @@ -562,7 +562,6 @@ void qemu_put_byte(QEMUFile *f, int v)
> >>  }
> >>  
> >>  f->buf[f->buf_index] = v;
> >> -f->rate_limit_used++;
> >>  add_buf_to_iovec(f, 1);
> >>  }
> >
> > This has a slight semantic behavioural change.
> 
> Yeap.
> 
> See the answer to Peter.  But three things came to mind:
> 
> a - the size of the buffer is small (between 32KB and 256KB depending on
> how you count it).  So we are going to call qemu_fflush() really
> soon.
> 
> b - We are using this value to calculate how much we can send through
> the wire.  Here we are saying how much we have queued to send.
> 
> c - When using multifd the number of bytes that we send through the qemu
> file is even smaller: the migration-test multifd test sends 300MB of data
> through the multifd channels and around 300KB on the qemu_file channel.
> 
> 
> >
> > By accounting for rate limit in the qemu_put functions, we ensure
> > that we stop growing the iovec when rate limiting activates.
> >
> > If we only apply the rate limit in the flush function, that will
> > let f->iov continue to accumulate buffers, while we have
> > rate-limited the actual transfer.
> 
> 256KB maximum.  Our accounting has bigger errors than that.
> 
> 
> > This makes me uneasy - it feels like a bad idea to continue to
> > accumulate buffers if we're not ready to send them.
> 
> I still think that the change is correct.  But as you and Peter have
> concerns about it, I will think a bit more about it.

If Peter's calculations are correct, then I don't have any objection,
as that's a small overhead.

With regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




Re: [PATCH 9/9] qemu-file: Account for rate_limit usage on qemu_fflush()

2023-05-04 Thread Juan Quintela
Daniel P. Berrangé  wrote:
> On Thu, May 04, 2023 at 01:38:41PM +0200, Juan Quintela wrote:
>> That is the moment we know we have transferred something.
>> 
>> Signed-off-by: Juan Quintela 
>> ---
>>  migration/qemu-file.c | 7 +++
>>  1 file changed, 3 insertions(+), 4 deletions(-)
>> 
>> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
>> index ddebfac847..309b4c56f4 100644
>> --- a/migration/qemu-file.c
>> +++ b/migration/qemu-file.c
>> @@ -300,7 +300,9 @@ void qemu_fflush(QEMUFile *f)
>> &local_error) < 0) {
>>  qemu_file_set_error_obj(f, -EIO, local_error);
>>  } else {
>> -f->total_transferred += iov_size(f->iov, f->iovcnt);
>> +uint64_t size = iov_size(f->iov, f->iovcnt);
>> +qemu_file_acct_rate_limit(f, size);
>> +f->total_transferred += size;
>>  }
>>  
>>  qemu_iovec_release_ram(f);
>> @@ -527,7 +529,6 @@ void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, size_t size,
>>  return;
>>  }
>>  
>> -f->rate_limit_used += size;
>>  add_to_iovec(f, buf, size, may_free);
>>  }
>>  
>> @@ -545,7 +546,6 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, size_t size)
>>  l = size;
>>  }
>>  memcpy(f->buf + f->buf_index, buf, l);
>> -f->rate_limit_used += l;
>>  add_buf_to_iovec(f, l);
>>  if (qemu_file_get_error(f)) {
>>  break;
>> @@ -562,7 +562,6 @@ void qemu_put_byte(QEMUFile *f, int v)
>>  }
>>  
>>  f->buf[f->buf_index] = v;
>> -f->rate_limit_used++;
>>  add_buf_to_iovec(f, 1);
>>  }
>
> This has a slight semantic behavioural change.

Yeap.

See the answer to Peter.  But three things came to mind:

a - the size of the buffer is small (between 32KB and 256KB depending on
how you count it).  So we are going to call qemu_fflush() really
soon.

b - We are using this value to calculate how much we can send through
the wire.  Here we are saying how much we have queued to send; see the
sketch after this list.

c - When using multifd the number of bytes that we send through the qemu
file is even smaller: the migration-test multifd test sends 300MB of data
through the multifd channels and around 300KB on the qemu_file channel.
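
To make (b) concrete, this is roughly how that counter gates the
sender (a sketch, not the literal code: ram_iteration_sketch(),
more_pages_to_send() and send_one_page() are made-up stand-ins, and
rate_limit_max is the assumed name of the budget field):

/* Paraphrase of qemu_file_rate_limit(): non-zero once this period's
 * budget is used up. */
static int qemu_file_rate_limit(QEMUFile *f)
{
    return f->rate_limit_used > f->rate_limit_max;
}

/* Schematic migration iteration loop. */
static void ram_iteration_sketch(QEMUFile *f)
{
    while (more_pages_to_send()) {
        if (qemu_file_rate_limit(f)) {
            break;            /* over budget; resume next period */
        }
        send_one_page(f);     /* ends up in the qemu_put_*() calls */
    }
}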


>
> By accounting for rate limit in the qemu_put functions, we ensure
> that we stop growing the iovec when rate limiting activates.
>
> If we only apply the rate limit in the flush function, that will
> let f->iov continue to accumulate buffers, while we have
> rate-limited the actual transfer.

256KB maximum.  Our accounting has bigger errors than that.


> This makes me uneasy - it feels like a bad idea to continue to
> accumulate buffers if we're not ready to send them.

I still think that the change is correct.  But as you and Peter have
concerns about it, I will think a bit more about it.

Thanks, Juan.




Re: [PATCH 9/9] qemu-file: Account for rate_limit usage on qemu_fflush()

2023-05-04 Thread Daniel P. Berrangé
On Thu, May 04, 2023 at 01:38:41PM +0200, Juan Quintela wrote:
> That is the moment we know we have transferred something.
> 
> Signed-off-by: Juan Quintela 
> ---
>  migration/qemu-file.c | 7 +++
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> index ddebfac847..309b4c56f4 100644
> --- a/migration/qemu-file.c
> +++ b/migration/qemu-file.c
> @@ -300,7 +300,9 @@ void qemu_fflush(QEMUFile *f)
> &local_error) < 0) {
>  qemu_file_set_error_obj(f, -EIO, local_error);
>  } else {
> -f->total_transferred += iov_size(f->iov, f->iovcnt);
> +uint64_t size = iov_size(f->iov, f->iovcnt);
> +qemu_file_acct_rate_limit(f, size);
> +f->total_transferred += size;
>  }
>  
>  qemu_iovec_release_ram(f);
> @@ -527,7 +529,6 @@ void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, size_t size,
>  return;
>  }
>  
> -f->rate_limit_used += size;
>  add_to_iovec(f, buf, size, may_free);
>  }
>  
> @@ -545,7 +546,6 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, size_t size)
>  l = size;
>  }
>  memcpy(f->buf + f->buf_index, buf, l);
> -f->rate_limit_used += l;
>  add_buf_to_iovec(f, l);
>  if (qemu_file_get_error(f)) {
>  break;
> @@ -562,7 +562,6 @@ void qemu_put_byte(QEMUFile *f, int v)
>  }
>  
>  f->buf[f->buf_index] = v;
> -f->rate_limit_used++;
>  add_buf_to_iovec(f, 1);
>  }

This has a slight semantic behavioural change.

By accounting for rate limit in the qemu_put functions, we ensure
that we stop growing the iovec when rate limiting activates.

If we only apply the rate limit in the flush function, that will
let f->iov continue to accumulate buffers, while we have
rate-limited the actual transfer.  This makes me uneasy - it feels
like a bad idea to continue to accumulate buffers if we're not
ready to send them.
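
A side-by-side sketch of the two accounting points (heavily
simplified from the patch; error handling and the actual
qio_channel_writev_all() call are elided):

/* Before the patch: counted when queued, so qemu_file_rate_limit()
 * trips before the iovec can grow much further. */
void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf,
                           size_t size, bool may_free)
{
    f->rate_limit_used += size;          /* accounted at queue time */
    add_to_iovec(f, buf, size, may_free);
}

/* After the patch: counted when flushed, so f->iov keeps
 * accumulating (up to ~288KB on x86) after the limit is reached. */
void qemu_fflush(QEMUFile *f)
{
    uint64_t size = iov_size(f->iov, f->iovcnt);
    /* ... writev the iovec out ... */
    qemu_file_acct_rate_limit(f, size);  /* accounted at send time */
    f->total_transferred += size;
}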

With regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




Re: [PATCH 9/9] qemu-file: Account for rate_limit usage on qemu_fflush()

2023-05-04 Thread Juan Quintela
Peter Xu  wrote:
> On Thu, May 04, 2023 at 01:38:41PM +0200, Juan Quintela wrote:
>> That is the moment we know we have transferred something.
>> 
>> Signed-off-by: Juan Quintela 
>
> There'll be a slight side effect that qemu_file_rate_limit() can be
> triggered later than before because data cached in the QEMUFile won't be
> accounted until flushed.
>
> Two limits here:
>
> - IO_BUF_SIZE==32K, the real buffer
> - MAX_IOV_SIZE==64 (I think), the async buffer to put guest page ptrs
>   directly, on x86 it's 64*4K=256K
>
> So the impact should be no more than 288KB on x86.  Looks still fine.

This side effect is on purpose: today we count data as sent when it has
not actually been sent yet.

With this change, we account for the data written when we do the real
write.

And yes, I fully agree that with the buffer sizes we have it shouldn't
make much of a difference in the speed calculation.

The difference here is that I will move that counter somewhere else, and
the fewer places I have to use it, the better O:-)

> Reviewed-by: Peter Xu 

Thanks.




Re: [PATCH 9/9] qemu-file: Account for rate_limit usage on qemu_fflush()

2023-05-04 Thread Peter Xu
On Thu, May 04, 2023 at 01:38:41PM +0200, Juan Quintela wrote:
> That is the moment we know we have transferred something.
> 
> Signed-off-by: Juan Quintela 

There'll be a slight side effect that qemu_file_rate_limit() can be
triggered later than before because data cached in the QEMUFile won't be
accounted until flushed.

Two limits here:

- IO_BUF_SIZE==32K, the real buffer
- MAX_IOV_SIZE==64 (I think), the async buffer to put guest page ptrs
  directly, on x86 it's 64*4K=256K

So the impact should be no more than 288KB on x86.  Looks still fine.
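
Spelled out (assuming 4 KiB pages):

  IO_BUF_SIZE + MAX_IOV_SIZE * 4K = 32K + 64 * 4K = 32K + 256K = 288K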

Reviewed-by: Peter Xu 

-- 
Peter Xu




[PATCH 9/9] qemu-file: Account for rate_limit usage on qemu_fflush()

2023-05-04 Thread Juan Quintela
That is the moment we know we have transferred something.

Signed-off-by: Juan Quintela 
---
 migration/qemu-file.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index ddebfac847..309b4c56f4 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -300,7 +300,9 @@ void qemu_fflush(QEMUFile *f)
&local_error) < 0) {
 qemu_file_set_error_obj(f, -EIO, local_error);
 } else {
-f->total_transferred += iov_size(f->iov, f->iovcnt);
+uint64_t size = iov_size(f->iov, f->iovcnt);
+qemu_file_acct_rate_limit(f, size);
+f->total_transferred += size;
 }
 
 qemu_iovec_release_ram(f);
@@ -527,7 +529,6 @@ void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, size_t size,
 return;
 }
 
-f->rate_limit_used += size;
 add_to_iovec(f, buf, size, may_free);
 }
 
@@ -545,7 +546,6 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, size_t size)
 l = size;
 }
 memcpy(f->buf + f->buf_index, buf, l);
-f->rate_limit_used += l;
 add_buf_to_iovec(f, l);
 if (qemu_file_get_error(f)) {
 break;
@@ -562,7 +562,6 @@ void qemu_put_byte(QEMUFile *f, int v)
 }
 
 f->buf[f->buf_index] = v;
-f->rate_limit_used++;
 add_buf_to_iovec(f, 1);
 }
 
-- 
2.40.0
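
The helper qemu_file_acct_rate_limit() is not part of this diff; a
minimal sketch of its likely shape, inferred from the inline updates
it replaces (the real body may differ):

/* Inferred, not copied from QEMU: the counter bump that the removed
 * qemu_put_*() lines used to do inline, now done once per flush. */
void qemu_file_acct_rate_limit(QEMUFile *f, uint64_t len)
{
    f->rate_limit_used += len;
}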