Bring the block files in line with the QEMU coding style, with spaces
for indentation.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/371
Signed-off-by: Yeqi Fu
---
block/bochs.c | 14 +++
block/file-posix.c | 92 ++---
block/file-win32
The HMP monitor runs in the main loop thread. Calling
AIO_WAIT_WHILE(qemu_get_aio_context(), ...) from the main loop thread is
equivalent to AIO_WAIT_WHILE_UNLOCKED(NULL, ...) because neither unlocks
the AioContext and the latter's assertion that we're in the main loop
succeeds.
Reviewed-by: Phili
Since the AioContext argument was already NULL, AIO_WAIT_WHILE() was
never going to unlock the AioContext. Therefore it is possible to
replace AIO_WAIT_WHILE() with AIO_WAIT_WHILE_UNLOCKED().
Reviewed-by: Philippe Mathieu-Daudé
Tested-by: Philippe Mathieu-Daudé
Reviewed-by: Kevin Wolf
Signed-of
monitor_cleanup() is called from the main loop thread. Calling
AIO_WAIT_WHILE(qemu_get_aio_context(), ...) from the main loop thread is
equivalent to AIO_WAIT_WHILE_UNLOCKED(NULL, ...) because neither unlocks
the AioContext and the latter's assertion that we're in the main loop
succeeds.
Reviewed-
There is no need for the AioContext lock in bdrv_drain_all() because
nothing in AIO_WAIT_WHILE() needs the lock and the condition is atomic.
AIO_WAIT_WHILE_UNLOCKED() has no use for the AioContext parameter other
than performing a check that is nowadays already done by the
GLOBAL_STATE_CODE()/IO_C
v2:
- Clarify NULL ctx argument in Patch 1 commit description [Kevin]
AIO_WAIT_WHILE_UNLOCKED() is the future replacement for AIO_WAIT_WHILE(). Most
callers haven't been converted yet because they rely on the AioContext lock. I
looked through the code and found the easy cases that can be converted
There is no change in behavior. Switch to AIO_WAIT_WHILE_UNLOCKED()
instead of AIO_WAIT_WHILE() to document that this code has already been
audited and converted. The AioContext argument is already NULL so
aio_context_release() is never called anyway.
Reviewed-by: Philippe Mathieu-Daudé
Tested-by
The following conversion is safe and does not change behavior:
GLOBAL_STATE_CODE();
...
- AIO_WAIT_WHILE(qemu_get_aio_context(), ...);
+ AIO_WAIT_WHILE_UNLOCKED(NULL, ...);
Since we're in GLOBAL_STATE_CODE(), qemu_get_aio_context() is our home
thread's AioContext. Thus AIO_WAIT_WH
On Fri, Mar 10, 2023 at 12:10:08AM +0800, Yeqi Fu wrote:
> Bring the block files in line with the QEMU coding style, with spaces
> for indentation.
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/371
>
> Signed-off-by: Yeqi Fu
> ---
> block/bochs.c | 12 +--
> block/file-
Am 09.03.2023 um 17:31 hat Stefan Hajnoczi geschrieben:
> This looks like a copy-paste or merge error. BDRV_POLL_WHILE() is
> already called above. It's not needed in the qemu_in_coroutine() case.
>
> Fixes: 9fb4dfc570ce ("qed: make bdrv_qed_do_open a coroutine_fn")
> Signed-off-by: Stefan Hajnoczi
Am 09.03.2023 um 14:59 hat Paolo Bonzini geschrieben:
> On 3/9/23 13:31, Hanna Czenczek wrote:
> > On 09.03.23 13:08, Paolo Bonzini wrote:
> > > On Thu, Mar 9, 2023 at 1:05 PM Paolo Bonzini wrote:
> > > > I think having to do this is problematic, because the blk_drain should
> > > > leave no pendi
Bring the block files in line with the QEMU coding style, with spaces
for indentation.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/371
Signed-off-by: Yeqi Fu
---
block/bochs.c | 12 +--
block/file-posix.c | 52 ++---
block/file-w
This looks like a copy-paste or merge error. BDRV_POLL_WHILE() is
already called above. It's not needed in the qemu_in_coroutine() case.
Fixes: 9fb4dfc570ce ("qed: make bdrv_qed_do_open a coroutine_fn")
Signed-off-by: Stefan Hajnoczi
---
block/qed.c | 1 -
1 file changed, 1 deletion(-)
diff --g
On Thu, Mar 09, 2023 at 09:44:51AM +0100, Paolo Bonzini wrote:
> Signed-off-by: Paolo Bonzini
> ---
> nbd/server.c | 48
> 1 file changed, 24 insertions(+), 24 deletions(-)
>
> diff --git a/nbd/server.c b/nbd/server.c
> index a4750e41880a..6f5fcad
On 3/9/23 13:31, Hanna Czenczek wrote:
On 09.03.23 13:08, Paolo Bonzini wrote:
On Thu, Mar 9, 2023 at 1:05 PM Paolo Bonzini wrote:
I think having to do this is problematic, because the blk_drain should
leave no pending operation.
Here it seems okay because you do it in a controlled situation,
On 3/9/23 13:31, Stefan Hajnoczi wrote:
On Thu, Mar 09, 2023 at 10:07:40AM +0100, Paolo Bonzini wrote:
On 3/7/23 22:04, Stefan Hajnoczi wrote:
This field is accessed by multiple threads without a lock. Use explicit
qatomic_read()/qatomic_set() calls. There is no need for acquire/release
because
On Wed, Mar 08, 2023 at 06:25:43PM +0100, Kevin Wolf wrote:
> Am 08.03.2023 um 15:26 hat Stefan Hajnoczi geschrieben:
> > On Wed, Mar 08, 2023 at 09:48:17AM +0100, Kevin Wolf wrote:
> > > Am 07.03.2023 um 20:20 hat Stefan Hajnoczi geschrieben:
> > > > On Tue, Mar 07, 2023 at 06:17:22PM +0100, Kevin
On 09.03.23 13:08, Paolo Bonzini wrote:
On Thu, Mar 9, 2023 at 1:05 PM Paolo Bonzini wrote:
I think having to do this is problematic, because the blk_drain should
leave no pending operation.
Here it seems okay because you do it in a controlled situation, but the
main thread can also issue blk_
On Thu, Mar 09, 2023 at 10:07:40AM +0100, Paolo Bonzini wrote:
> On 3/7/23 22:04, Stefan Hajnoczi wrote:
> > This field is accessed by multiple threads without a lock. Use explicit
> > qatomic_read()/qatomic_set() calls. There is no need for acquire/release
> > because blk_set_disable_request_queui
On Thu, Mar 09, 2023 at 12:18:00PM +0100, Paolo Bonzini wrote:
> On 3/7/23 22:04, Stefan Hajnoczi wrote:
> > static int coroutine_fn GRAPH_RDLOCK
> > @@ -1271,7 +1271,8 @@ static void coroutine_fn
> > blk_wait_while_drained(BlockBackend *blk)
> > {
> > assert(blk->in_flight > 0);
> > -
On Thu, Mar 9, 2023 at 1:05 PM Paolo Bonzini wrote:
> I think having to do this is problematic, because the blk_drain should
> leave no pending operation.
>
> Here it seems okay because you do it in a controlled situation, but the
> main thread can also issue blk_drain(), or worse bdrv_drained_beg
On 3/9/23 12:44, Hanna Czenczek wrote:
+ *
+ * Note that TRIM operations call blk_aio_pdiscard() multiple
+ * times (and finally increment s->blk's in-flight counter while
+ * ide_trim_bh_cb() is scheduled), so we have to loop blk_drain()
+ * until the whole operation is done.
Commit 7e5cdb345f77d76cb4877fe6230c4e17a7d0d0ca ("ide: Increment BB
in-flight counter for TRIM BH") fixed a problem when cancelling trims:
ide_cancel_dma_sync() drains the BB to stop a potentially ongoing
request, and then asserts that no request is active anymore. When
trying to cancel a trim, th
Add multi-conn option to the NBD client. This commit just adds the
option, it is not functional.
Setting this to a value > 1 permits multiple connections to the NBD
server; a typical value might be 4. The default is 1, meaning only a
single connection is made. If the NBD server does not adverti
To implement multi-conn, we will put multiple underlying NBD
connections (ie. NBDClientConnection) inside the NBD block device
handle (BDRVNBDState). This requires first breaking the one-to-one
relationship between NBDClientConnection and BDRVNBDState.
To do this a new structure (NBDConnState) is
If the user opts in to multi-conn and the server advertises it, open
multiple NBD connections. They are opened with the same parameters as
the initial connection. Similarly on the close path the connections
are closed.
However, only the zeroth connection is used, so this commit does not
actually
Enable NBD multi-conn by spreading operations across multiple
connections.
(XXX) This uses a naive round-robin approach which could be improved.
For example we could look at how many requests are in flight and
assign operations to the connections with fewest. Or we could try to
estimate (based on
[ Patch series also available here, along with this cover letter and the
script used to generate test results:
https://gitlab.com/rwmjones/qemu/-/commits/2023-nbd-multi-conn-v1 ]
This patch series adds multi-conn support to the NBD block driver in
qemu. It is only meant for discussion and tes
On 3/7/23 22:04, Stefan Hajnoczi wrote:
static int coroutine_fn GRAPH_RDLOCK
@@ -1271,7 +1271,8 @@ static void coroutine_fn
blk_wait_while_drained(BlockBackend *blk)
{
assert(blk->in_flight > 0);
-if (qatomic_read(&blk->quiesce_counter) && !blk->disable_request_queuing) {
+
On 9/3/23 09:50, Paolo Bonzini wrote:
The value of the bdrv_file_open is sometimes checked to distinguish
protocol and format drivers, but apart from that there is no difference
between bdrv_file_open and bdrv_open.
However, they can all be distinguished by the non-NULL .protocol_name
member. C
On Thursday, March 9, 2023 9:44:52 AM CET Paolo Bonzini wrote:
> Signed-off-by: Paolo Bonzini
> ---
> hw/9pfs/9p.h| 4 ++--
> hw/9pfs/codir.c | 6 +++---
> 2 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/hw/9pfs/9p.h b/hw/9pfs/9p.h
> index 2fce4140d1e9..1b0d805b9c12 100644
On 3/7/23 22:04, Stefan Hajnoczi wrote:
This field is accessed by multiple threads without a lock. Use explicit
qatomic_read()/qatomic_set() calls. There is no need for acquire/release
because blk_set_disable_request_queuing() doesn't provide any
guarantees (it helps that it's used at BlockBacken
bdrv_file_open and bdrv_open are completely equivalent; they are
never checked except to see which one to invoke. So merge them
into a single one.
Signed-off-by: Paolo Bonzini
---
block.c | 2 --
block/blkdebug.c | 2 +-
block/blkio.c
The value of the bdrv_file_open is sometimes checked to distinguish
protocol and format drivers, but apart from that there is no difference
between bdrv_file_open and bdrv_open.
However, they can all be distinguished by the non-NULL .protocol_name
member. Change the checks to use .protocol_name i
The set of BlockDrivers that have .bdrv_file_open coincides with those
that have .protocol_name and guess what---checking drv->bdrv_file_open
is done to see if the driver is a protocol. So check drv->protocol_name
instead.
Signed-off-by: Paolo Bonzini
---
block.c | 13 ++---
1 file chan
.bdrv_needs_filename is only set for drivers that also set bdrv_file_open,
i.e. protocol drivers.
So we can make the assertion unconditional; it will always pass for those
drivers that use bdrv_open.
Signed-off-by: Paolo Bonzini
---
block.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff -
do_sgio can suspend via the coroutine function thread_pool_submit_co, so it
has to be coroutine_fn as well---and the same is true of all its direct and
indirect callers.
Signed-off-by: Paolo Bonzini
---
scsi/qemu-pr-helper.c | 22 +++---
1 file changed, 11 insertions(+), 11 delet
Signed-off-by: Paolo Bonzini
---
tests/unit/test-thread-pool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/unit/test-thread-pool.c b/tests/unit/test-thread-pool.c
index 6020e65d6986..493b966b94f9 100644
--- a/tests/unit/test-thread-pool.c
+++ b/tests/unit/test-thread
Functions that can do I/O (including calling bdrv_is_allocated
and bdrv_block_status functions) are prime candidates for being
coroutine_fns. Make the change for those that are themselves called
only from coroutine_fns. Also annotate that they are called with the
graph rdlock taken, thus allowing
Signed-off-by: Paolo Bonzini
---
nbd/server.c | 48
1 file changed, 24 insertions(+), 24 deletions(-)
diff --git a/nbd/server.c b/nbd/server.c
index a4750e41880a..6f5fcade2a54 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -1409,8 +1409,8 @@ nbd_
Functions that can do I/O are prime candidates for being coroutine_fns.
Make the change for the one that is itself called only from coroutine_fns.
Unfortunately vmdk does not use a coroutine_fn for the bulk of the open
(like qcow2 does), so vmdk_read_cid cannot have the same treatment.
Signed-of
"Mostly" because there is a 9pfs patch in here too.
The series was developed with the help of vrc and the clang TSA annotations.
Paolo
Paolo Bonzini (9):
vvfat: mark various functions as coroutine_fn
blkdebug: add missing coroutine_fn annotation
mirror: make mirror_flush a coroutine_fn
n
Functions that can do I/O are prime candidates for being coroutine_fns.
Make the change for those that are themselves called only from coroutine_fns.
In addition, coroutine_fns should do I/O using bdrv_co_*() functions, for
which it is required to hold the BlockDriverState graph lock. So also n
mirror_flush calls a mixed function blk_flush but it is only called
from mirror_run; so call the coroutine version and make mirror_flush
a coroutine_fn too.
Signed-off-by: Paolo Bonzini
---
block/mirror.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/mirror.c b/bl
Signed-off-by: Paolo Bonzini
---
block/blkdebug.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/blkdebug.c b/block/blkdebug.c
index 978c8cff9e33..addad914b3f7 100644
--- a/block/blkdebug.c
+++ b/block/blkdebug.c
@@ -583,8 +583,8 @@ out:
return ret;
}
-stati
Signed-off-by: Paolo Bonzini
---
hw/9pfs/9p.h| 4 ++--
hw/9pfs/codir.c | 6 +++---
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/hw/9pfs/9p.h b/hw/9pfs/9p.h
index 2fce4140d1e9..1b0d805b9c12 100644
--- a/hw/9pfs/9p.h
+++ b/hw/9pfs/9p.h
@@ -203,7 +203,7 @@ typedef struct V9fsDi