On Tue, May 24, 2022 at 12:02:17PM +0100, Daniel P. Berrangé wrote:
> This removes one further custom impl of QEMUFile, in favour of a
> QIOChannel based impl.
>
> Signed-off-by: Daniel P. Berrangé
> ---
> migration/ram.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
On Fri, May 20, 2022 at 09:59:22AM +0200, Fabian Ebner wrote:
> On 64-bit platforms, assigning SIZE_MAX to the int64_t max_pdiscard
> results in a negative value, and the following assertion would trigger
> down the line (it's not the same max_pdiscard, but computed from the
> other one):
>
On Thu, May 19, 2022 at 03:48:36PM +0100, Alberto Faria wrote:
> For consistency with other I/O functions, and in preparation to
> implement bdrv_{pread,pwrite}() using generated_co_wrapper.
>
> unsigned int fits in int64_t, so all callers remain correct.
>
> Signed-off-by: Alberto Faria
> ---
On Tue, May 24, 2022 at 12:02:16PM +0100, Daniel P. Berrangé wrote:
> This is for code which needs a portable equivalent to a QIOChannelFile
> connected to /dev/null.
>
> Signed-off-by: Daniel P. Berrangé
> ---
> include/io/channel-null.h | 55 +++
> io/channel-null.c
On 5/24/22 20:30, Emanuele Giuseppe Esposito wrote:
It seems that aio_wait_kick always required a memory barrier
or atomic operation in the caller, but nobody actually
took care of doing it.
Let's put the barrier in the function instead, and pair it
with another one in AIO_WAIT_WHILE. Read aio_wait_kick()
comment for further explanation.
On 24/05/2022 at 09:08, Paolo Bonzini wrote:
> On 5/23/22 18:04, Vladimir Sementsov-Ogievskiy wrote:
>>
>> I have a doubt about how aio_wait_bh_oneshot() works. Exactly, I see
>> that data->done is not accessed atomically, and doesn't have any
>> barrier protecting it..
>>
>> Is following
It seems that aio_wait_kick always required a memory barrier
or atomic operation in the caller, but nobody actually
took care of doing it.
Let's put the barrier in the function instead, and pair it
with another one in AIO_WAIT_WHILE. Read aio_wait_kick()
comment for further explanation.
On 5/24/22 12:20, Stefan Hajnoczi wrote:
Maybe it's safe to run it without a lock because it runs after
virtio_set_status(vdev, 0) but I'd rather play it safe and protect s->rq
with a lock.
What does the lock protect?
A lock can prevent s->rq or req->vq corruption but it cannot prevent
On 5/24/22 14:40, Kevin Wolf wrote:
Why is the barrier in aio_bh_enqueue() not enough? Is the comment there
wrong?
aio_notify() has another barrier. This is a little bit too late, but if
I misunderstood the aio_bh_enqueue() one, it could explain why it was
never observed.
The missing one that
On 5/24/22 15:40, Kevin Wolf wrote:
On 24.05.2022 at 09:08, Paolo Bonzini wrote:
On 5/23/22 18:04, Vladimir Sementsov-Ogievskiy wrote:
I have a doubt about how aio_wait_bh_oneshot() works. Exactly, I see
that data->done is not accessed atomically, and doesn't have any barrier
On 17.05.2022 at 13:12, Vladimir Sementsov-Ogievskiy wrote:
> From: Vladimir Sementsov-Ogievskiy
>
> v4:
> 01,03: add Kevin's r-b
> 02: add hbitmap_free() on success patch if local_backup is not needed
>
> Vladimir Sementsov-Ogievskiy (3):
> block: block_dirty_bitmap_merge(): fix
On 24.05.2022 at 09:08, Paolo Bonzini wrote:
> On 5/23/22 18:04, Vladimir Sementsov-Ogievskiy wrote:
> >
> > I have a doubt about how aio_wait_bh_oneshot() works. Exactly, I see
> > that data->done is not accessed atomically, and doesn't have any barrier
> > protecting it..
> >
> > Is
On 18.05.2022 at 15:09, Stefan Hajnoczi wrote:
> Commit 1b7fd729559c ("block: rename buffer_alignment to
> guest_block_size") noted:
>
> At this point, the field is set by the device emulation, but completely
> ignored by the block layer.
>
> The last time the value of
On 21.05.2022 at 14:27, Stefan Hajnoczi wrote:
> bdrv_co_drain() has not been used since commit 9a0cec664eef ("mirror:
> use bdrv_drained_begin/bdrv_drained_end") in 2016. Remove it so there
> are fewer drain scenarios to worry about.
>
> Use bdrv_drained_begin()/bdrv_drained_end()
On 18.05.2022 at 14:28, Emanuele Giuseppe Esposito wrote:
> label: // read till the end to see why I wrote this here
>
> I was hoping someone from the "No" party would answer to your question,
> because as you know we reached this same conclusion together.
>
> We thought about dropping
Now that all QEMUFile callbacks are removed, the entire concept can be
deleted.
Signed-off-by: Daniel P. Berrangé
---
migration/channel.c | 4 +--
migration/colo.c | 5 ++--
migration/meson.build | 1 -
migration/migration.c | 7 ++---
This directly implements the close logic using QIOChannel APIs.
Signed-off-by: Daniel P. Berrangé
---
migration/qemu-file-channel.c | 12
migration/qemu-file.c | 12 ++--
migration/qemu-file.h | 10 --
3 files changed, 6 insertions(+), 28
The QEMUFile 'save_hook' callback has a 'size_t size' parameter.
The RDMA impl of this has logic that takes different actions
depending on whether the value is zero or non-zero. It has
commented out logic that would have taken further actions
if the value was negative.
The only place where the
This directly implements the writev_buffer logic using QIOChannel APIs.
Signed-off-by: Daniel P. Berrangé
---
migration/qemu-file-channel.c | 43 ---
migration/qemu-file.c | 24 +++
migration/qemu-file.h | 9
3 files
This directly implements the shutdown logic using QIOChannel APIs.
Signed-off-by: Daniel P. Berrangé
---
migration/qemu-file-channel.c | 27 ---
migration/qemu-file.c | 10 +++---
migration/qemu-file.h | 10 --
3 files changed, 7
Signed-off-by: Daniel P. Berrangé
---
migration/qemu-file.h | 4 ----
1 file changed, 4 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index 07c86bfea3..674c2c409b 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -46,10 +46,6 @@ typedef ssize_t
This makes the field name align with the newly introduced method
names in the previous commit.
Signed-off-by: Daniel P. Berrangé
---
migration/qemu-file.c | 19 ++-
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index
Introduce a QIOChannelBlock class that exposes the BlockDriverState
VMState region for I/O.
This is kept in the migration/ directory rather than io/, to avoid
a mutual dependency between block/ <-> io/ directories. Also the
VMState should only be used by the migration code.
Reviewed-by: Daniel
This directly implements the get_buffer logic using QIOChannel APIs.
Signed-off-by: Daniel P. Berrangé
---
migration/qemu-file-channel.c | 29 -
migration/qemu-file.c | 18 --
migration/qemu-file.h | 9 -
3 files changed, 16
The only user of the hooks is RDMA which provides a QIOChannel backed
impl of QEMUFile. It can thus use the qemu_file_get_ioc() method.
Signed-off-by: Daniel P. Berrangé
---
migration/qemu-file.c | 8
migration/qemu-file.h | 14 ++
migration/rdma.c | 19
The only callers of qemu_fopen_ops pass 'true' for the 'has_ioc'
parameter, so hardcode this assumption in QEMUFile, by passing in
the QIOChannel object as a non-opaque parameter.
Signed-off-by: Daniel P. Berrangé
---
migration/qemu-file-channel.c | 4 ++--
migration/qemu-file.c | 35
The name 'ftell' gives the misleading impression that the QEMUFile
objects are seekable. This is not the case, as in general we just
have an opaque stream. The users of this method are only interested
in the total bytes processed. This switches to a new name that
reflects the intended usage.
On 18.05.2022 at 14:43, Paolo Bonzini wrote:
> On 5/18/22 14:28, Emanuele Giuseppe Esposito wrote:
> > For example, all callers of bdrv_open() always take the AioContext lock.
> > Often it is taken very high in the call stack, but it's always taken.
>
> I think it's actually not a
With this change, all QEMUFile usage is backed by QIOChannel at
last.
Signed-off-by: Daniel P. Berrangé
---
migration/savevm.c | 42 --
1 file changed, 4 insertions(+), 38 deletions(-)
diff --git a/migration/savevm.c b/migration/savevm.c
index
This directly implements the get_return_path logic using QIOChannel APIs.
Signed-off-by: Daniel P. Berrangé
---
migration/qemu-file-channel.c | 16
migration/qemu-file.c | 22 ++
migration/qemu-file.h | 6 --
3 files changed, 10
This directly implements the set_blocking logic using QIOChannel APIs.
Signed-off-by: Daniel P. Berrangé
---
migration/qemu-file-channel.c | 14 --
migration/qemu-file.c | 4 +---
migration/qemu-file.h | 5 -
3 files changed, 1 insertion(+), 22 deletions(-)
This is for code which needs a portable equivalent to a QIOChannelFile
connected to /dev/null.
Signed-off-by: Daniel P. Berrangé
---
include/io/channel-null.h | 55 +++
io/channel-null.c | 237 ++
io/meson.build | 1 +
Prepare for the elimination of QEMUFileOps by introducing a pair of new
constructors. This lets us distinguish between an input and output file
object explicitly rather than via the existence of specific callbacks.
Signed-off-by: Daniel P. Berrangé
---
migration/qemu-file-channel.c | 4 ++--
Quite a while ago now, the majority of QEMUFile implementations were
switched over to use QIOChannel APIs, but a couple remained.
The newish multifd code is directly using QIOChannel, only calling
in to QEMUFile for the VMState transfer and for rate limiting
purposes.
This series finishes the
This renames the following QEMUFile fields
* bytes_xfer -> rate_limit_used
* xfer_limit -> rate_limit_max
The intent is to make it clear that 'bytes_xfer' is specifically related
to rate limiting of data and applies to data queued, which need not have
been transferred on the wire yet if a
The qemu_update_position method name gives the misleading impression
that it is changing the current file offset. Most of the files are
just streams, however, so there's no concept of a file offset in the
general case.
What this method is actually used for is to report on the number of
bytes that
This removes one further custom impl of QEMUFile, in favour of a
QIOChannel based impl.
Signed-off-by: Daniel P. Berrangé
---
migration/ram.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 5f5e37f64d..89082716d6 100644
---
On Tue, May 24, 2022 at 11:17:06AM +0200, Paolo Bonzini wrote:
> On 5/24/22 10:08, Stefan Hajnoczi wrote:
> > On Tue, May 24, 2022 at 09:55:39AM +0200, Paolo Bonzini wrote:
> > > On 5/22/22 17:06, Stefan Hajnoczi wrote:
> > > > However, I hit on a problem that I think Emanuele and Paolo have
Queued, thanks. The same change needs to be done in hw/ide/macio.c:
diff --git a/hw/ide/macio.c b/hw/ide/macio.c
index f08318cf97..1c15c37ec5 100644
--- a/hw/ide/macio.c
+++ b/hw/ide/macio.c
@@ -267,7 +267,9 @@ static uint64_t pmac_ide_read(void *opaque, hwaddr addr,
unsigned size)
switch
On 5/24/22 10:08, Stefan Hajnoczi wrote:
On Tue, May 24, 2022 at 09:55:39AM +0200, Paolo Bonzini wrote:
On 5/22/22 17:06, Stefan Hajnoczi wrote:
However, I hit on a problem that I think Emanuele and Paolo have already
pointed out: draining is GS & IO. This might have worked under the 1
On Tue, May 24, 2022 at 09:55:39AM +0200, Paolo Bonzini wrote:
> On 5/22/22 17:06, Stefan Hajnoczi wrote:
> > However, I hit on a problem that I think Emanuele and Paolo have already
> > pointed out: draining is GS & IO. This might have worked under the 1
> > IOThread
> > model but it does not
On 5/22/22 17:06, Stefan Hajnoczi wrote:
However, I hit on a problem that I think Emanuele and Paolo have already
pointed out: draining is GS & IO. This might have worked under the 1 IOThread
model but it does not make sense for multi-queue. It is possible to submit I/O
requests in drained
On 5/23/22 18:04, Vladimir Sementsov-Ogievskiy wrote:
I have a doubt about how aio_wait_bh_oneshot() works. Exactly, I see
that data->done is not accessed atomically, and doesn't have any barrier
protecting it..
Is following possible:
main-loop iothread