On 7/3/23 22:04, Stefan Hajnoczi wrote:
This field is accessed by multiple threads without a lock. Use explicit
qatomic_read()/qatomic_set() calls. There is no need for acquire/release
because blk_set_disable_request_queuing() doesn't provide any
guarantees (it helps that it's used at BlockBacken
On 7/3/23 22:04, Stefan Hajnoczi wrote:
The main loop thread increments/decrements BlockBackend->quiesce_counter
when drained sections begin/end. The counter is read in the I/O code
path. Therefore this field is used to communicate between threads
without a lock.
Acquire/release are not necessar
This field is accessed by multiple threads without a lock. Use explicit
qatomic_read()/qatomic_set() calls. There is no need for acquire/release
because blk_set_disable_request_queuing() doesn't provide any
guarantees (it helps that it's used at BlockBackend creation time and
not when there is I/O
The CoQueue API offers thread-safety via the lock argument that
qemu_co_queue_wait() and qemu_co_enter_next() take. BlockBackend
currently does not make use of the lock argument. This means that
multiple threads submitting I/O requests can corrupt the CoQueue's
QSIMPLEQ.
Add a QemuMutex and pass i
v2:
- Use qatomic_fetch_inc/dec() for readability in Patch 1 [Hanna]
QEMU block layer multi-queue support involves running I/O requests from
multiple threads. Shared state must be protected somehow to avoid thread-safety
issues.
The BlockBackend->queued_requests CoQueue is accessed without a lock
The main loop thread increments/decrements BlockBackend->quiesce_counter
when drained sections begin/end. The counter is read in the I/O code
path. Therefore this field is used to communicate between threads
without a lock.
Acquire/release are not necessary because the BlockBackend->in_flight
coun
On 1/3/23 21:58, Stefan Hajnoczi wrote:
monitor_cleanup() is called from the main loop thread. Calling
AIO_WAIT_WHILE(qemu_get_aio_context(), ...) from the main loop thread is
equivalent to AIO_WAIT_WHILE_UNLOCKED(NULL, ...) because neither unlocks
the AioContext and the latter's assertion that w
On 1/3/23 21:58, Stefan Hajnoczi wrote:
The HMP monitor runs in the main loop thread. Calling
AIO_WAIT_WHILE(qemu_get_aio_context(), ...) from the main loop thread is
equivalent to AIO_WAIT_WHILE_UNLOCKED(NULL, ...) because neither unlocks
the AioContext and the latter's assertion that we're in t
On 1/3/23 21:57, Stefan Hajnoczi wrote:
Since the AioContext argument was already NULL, AIO_WAIT_WHILE() was
never going to unlock the AioContext. Therefore it is possible to
replace AIO_WAIT_WHILE() with AIO_WAIT_WHILE_UNLOCKED().
Signed-off-by: Stefan Hajnoczi
---
block/io.c | 2 +-
1 file
On 2/3/23 11:19, Philippe Mathieu-Daudé wrote:
On 1/3/23 21:57, Stefan Hajnoczi wrote:
The following conversion is safe and does not change behavior:
GLOBAL_STATE_CODE();
...
- AIO_WAIT_WHILE(qemu_get_aio_context(), ...);
+ AIO_WAIT_WHILE_UNLOCKED(NULL, ...);
Since we're in
On 1/3/23 21:57, Stefan Hajnoczi wrote:
There is no change in behavior. Switch to AIO_WAIT_WHILE_UNLOCKED()
instead of AIO_WAIT_WHILE() to document that this code has already been
audited and converted. The AioContext argument is already NULL so
aio_context_release() is never called anyway.
Sign
On Tue, Mar 07, 2023 at 06:17:22PM +0100, Kevin Wolf wrote:
> Am 01.03.2023 um 21:57 hat Stefan Hajnoczi geschrieben:
> > There is no need for the AioContext lock in bdrv_drain_all() because
> > nothing in AIO_WAIT_WHILE() needs the lock and the condition is atomic.
> >
> > Note that the NULL AioC
Am 01.03.2023 um 21:57 hat Stefan Hajnoczi geschrieben:
> AIO_WAIT_WHILE_UNLOCKED() is the future replacement for AIO_WAIT_WHILE(). Most
> callers haven't been converted yet because they rely on the AioContext lock. I
> looked through the code and found the easy cases that can be converted today.
Am 01.03.2023 um 21:57 hat Stefan Hajnoczi geschrieben:
> There is no need for the AioContext lock in bdrv_drain_all() because
> nothing in AIO_WAIT_WHILE() needs the lock and the condition is atomic.
>
> Note that the NULL AioContext argument to AIO_WAIT_WHILE() is odd. In
> the future it can be
On 7.03.2023 15:02, Kevin Wolf wrote:
Commit a4b15a8b introduced a new function blk_pread_nonzeroes(). Instead
of reading directly from the root node of the BlockBackend, it reads
from its 'file' child node. This can happen to mostly work for raw
images (as long as the 'raw' format driver is in u
On Mon, Mar 06, 2023 at 03:34:29PM +0100, Klaus Jensen wrote:
> From: Joel Granados
>
> Move the rounding of bytes read/written into nvme_smart_log which
> reports in units of 512 bytes, rounded up in thousands. This is in
> preparation for adding the Endurance Group Information log page which
>
On 07.03.23 14:44, Hanna Czenczek wrote:
On 07.03.23 13:22, Fiona Ebner wrote:
Hi,
I suspect that commit 7e5cdb345f ("ide: Increment BB in-flight
counter for TRIM BH") introduced an issue in combination with draining.
From a debug session on a customer's machine I gathered the following
Am 27.02.2023 um 11:47 hat Hanna Czenczek geschrieben:
> Hi,
>
> https://gitlab.com/qemu-project/qemu/-/issues/1507 reports a bug in FUSE
> exports: fallocate(PUNCH_HOLE) is implemented with blk_pdiscard(), but
> its man page documents that a successful call will result in the data
> being read as
On 3/7/23 15:02, Kevin Wolf wrote:
Commit a4b15a8b introduced a new function blk_pread_nonzeroes(). Instead
of reading directly from the root node of the BlockBackend, it reads
from its 'file' child node. This can happen to mostly work for raw
images (as long as the 'raw' format driver is in use,
On Tue, Mar 07, 2023 at 09:48:51AM +0100, Kevin Wolf wrote:
> Am 01.03.2023 um 17:16 hat Stefan Hajnoczi geschrieben:
> > On Fri, Feb 03, 2023 at 08:17:28AM -0500, Emanuele Giuseppe Esposito wrote:
> > > Remove usage of aio_context_acquire by always submitting asynchronous
> > > AIO to the current
On 3/7/23 15:00, Kevin Wolf wrote:
Am 03.03.2023 um 23:51 hat Maciej S. Szmigiero geschrieben:
On 8.02.2023 12:19, Cédric Le Goater wrote:
On 2/7/23 13:48, Kevin Wolf wrote:
Am 07.02.2023 um 10:19 hat Cédric Le Goater geschrieben:
On 2/7/23 09:38, Kevin Wolf wrote:
Am 06.02.2023 um 16:54 hat
Commit a4b15a8b introduced a new function blk_pread_nonzeroes(). Instead
of reading directly from the root node of the BlockBackend, it reads
from its 'file' child node. This can happen to mostly work for raw
images (as long as the 'raw' format driver is in use, but not actually
doing anything), bu
Am 03.03.2023 um 23:51 hat Maciej S. Szmigiero geschrieben:
> On 8.02.2023 12:19, Cédric Le Goater wrote:
> > On 2/7/23 13:48, Kevin Wolf wrote:
> > > Am 07.02.2023 um 10:19 hat Cédric Le Goater geschrieben:
> > > > On 2/7/23 09:38, Kevin Wolf wrote:
> > > > > Am 06.02.2023 um 16:54 hat Cédric Le G
On 07.03.23 13:22, Fiona Ebner wrote:
Hi,
I suspect that commit 7e5cdb345f ("ide: Increment BB in-flight
counter for TRIM BH") introduced an issue in combination with draining.
From a debug session on a customer's machine I gathered the following
information:
* The QEMU process hangs in a
On Mon, 6 Mar 2023 at 14:34, Klaus Jensen wrote:
>
> From: Klaus Jensen
>
> Hi,
>
> The following changes since commit f003dd8d81f7d88f4b1f8802309eaa76f6eb223a:
>
> Merge tag 'pull-tcg-20230305' of https://gitlab.com/rth7680/qemu into
> staging (2023-03-06 10:20:04 +)
>
> are available in
Hi,
I suspect that commit 7e5cdb345f ("ide: Increment BB in-flight
counter for TRIM BH") introduced an issue in combination with draining.
From a debug session on a customer's machine I gathered the following
information:
* The QEMU process hangs in aio_poll called during draining and doesn
On 03.02.23 10:18, Alexander Ivanov wrote:
Fix image inflation when an offset in the BAT is outside the image.
Replace whole-BAT syncing by flushing only dirty blocks.
Move all the checks outside the main check function into
separate functions.
Use WITH_QEMU_LOCK_GUARD for simpler code.
Fix incorrect condi
On 03.02.23 10:18, Alexander Ivanov wrote:
We will add more and more checks, so we need a better code structure
in parallels_co_check. Let each check perform in a separate loop
in a separate helper.
Signed-off-by: Alexander Ivanov
---
block/parallels.c | 80 ---
Am 07.03.2023 um 11:58 hat Paolo Bonzini geschrieben:
> On 3/7/23 09:48, Kevin Wolf wrote:
> > You mean we have a device that has a separate iothread, but a request is
> > submitted from the main thread? This isn't even allowed today; if a node
> > is in an iothread, all I/O must be submitted from
On 03.02.23 10:18, Alexander Ivanov wrote:
We will add more and more checks, so we need a better code structure in
parallels_co_check. Let each check perform in a separate loop in a
separate helper.
Signed-off-by: Alexander Ivanov
Reviewed-by: Denis V. Lunev
---
block/parallels.c | 81 ++
On 3/7/23 09:48, Kevin Wolf wrote:
You mean we have a device that has a separate iothread, but a request is
submitted from the main thread? This isn't even allowed today; if a node
is in an iothread, all I/O must be submitted from that iothread. Do you
know any code that does submit I/O from the
Am 01.03.2023 um 17:16 hat Stefan Hajnoczi geschrieben:
> On Fri, Feb 03, 2023 at 08:17:28AM -0500, Emanuele Giuseppe Esposito wrote:
> > Remove usage of aio_context_acquire by always submitting asynchronous
> > AIO to the current thread's LinuxAioState.
> >
> > In order to prevent mistakes from t
32 matches