Re: [PATCH v3 02/25] include/block/block: split header into I/O and global state API

2021-10-14 Thread Eric Blake
On Tue, Oct 12, 2021 at 04:48:43AM -0400, Emanuele Giuseppe Esposito wrote:
> block.h currently contains a mix of functions:
> some of them run under the BQL and modify the block layer graph,
> others are instead thread-safe and perform I/O in iothreads.
> It is not easy to understand which function is part of which
> group (I/O vs GS), and this patch aims to clarify it.
> 
> The "GS" functions need the BQL, and often use
> aio_context_acquire/release and/or drain to be sure they
> can modify the graph safely.
> The I/O functions are instead thread-safe, and can run in
> any AioContext.
> 
> By splitting the header into two files, block-io.h
> and block-global-state.h, we get a clearer view of what
> needs what kind of protection. block-common.h
> instead contains common structures shared by both headers.

s/instead //

> 
> block.h is left there for legacy reasons and to avoid changing
> all includes in all C files that use the block APIs.
> 
> Assertions are added in the next patch.
> 
> Signed-off-by: Emanuele Giuseppe Esposito 
> ---

> diff --git a/include/block/block-common.h b/include/block/block-common.h
> new file mode 100644
> index 00..4f1fd8de21
> --- /dev/null
> +++ b/include/block/block-common.h
> @@ -0,0 +1,389 @@
> +#ifndef BLOCK_COMMON_H
> +#define BLOCK_COMMON_H

As a new file, it probably deserves a copyright/license blurb copied
from the file it is split out of.

> diff --git a/include/block/block-global-state.h 
> b/include/block/block-global-state.h
> new file mode 100644
> index 00..b57e275da9
> --- /dev/null
> +++ b/include/block/block-global-state.h
> @@ -0,0 +1,263 @@
> +#ifndef BLOCK_GLOBAL_STATE_H
> +#define BLOCK_GLOBAL_STATE_H

Likewise, here and in all other newly-split files in your series.

> +++ b/include/block/block.h
> @@ -1,864 +1,9 @@
>  #ifndef BLOCK_H
>  #define BLOCK_H

Oh. There wasn't one to copy from :( Well, now's as good a time to fix
that as any.
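For illustration only — the attribution below is a hypothetical placeholder, since the real names and dates would have to be recovered from the file's git history — the conventional blurb in QEMU headers looks like:

```c
/*
 * QEMU block layer definitions
 *
 * Copyright (c) YEAR AUTHOR   (placeholders; see git history)
 *
 * This work is licensed under the terms of the GNU GPL, version 2 or
 * later.  See the COPYING file in the top-level directory.
 */
```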

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v4 6/8] iotests/300: avoid abnormal shutdown race condition

2021-10-14 Thread Vladimir Sementsov-Ogievskiy

14.10.2021 00:57, John Snow wrote:

Wait for the destination VM to close itself instead of racing to shut it
down first, which produces different error log messages from AQMP
depending on precisely when we tried to shut it down.

(For example: We may try to issue 'quit' immediately prior to the target
VM closing its QMP socket, which will cause an ECONNRESET error to be
logged. Waiting for the VM to exit itself avoids the race on shutdown
behavior.)

Reported-by: Hanna Reitz
Signed-off-by: John Snow


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: iotest 030 SIGSEGV

2021-10-14 Thread Vladimir Sementsov-Ogievskiy

14.10.2021 00:50, John Snow wrote:

In trying to replace the QMP library backend, I have now twice stumbled upon a 
SIGSEGV in iotest 030 in the last three weeks or so.

I didn't have debug symbols on at the time, so I've got only this stack trace:

(gdb) thread apply all bt

Thread 8 (Thread 0x7f0a6b8c4640 (LWP 1873554)):
#0  0x7f0a748a53ff in poll () at /lib64/libc.so.6
#1  0x7f0a759bfa36 in g_main_context_iterate.constprop () at 
/lib64/libglib-2.0.so.0
#2  0x7f0a7596d163 in g_main_loop_run () at /lib64/libglib-2.0.so.0
#3  0x557dac31d121 in iothread_run (opaque=opaque@entry=0x557dadd98800) at 
../../iothread.c:73
#4  0x557dac4d7f89 in qemu_thread_start (args=0x7f0a6b8c3650) at 
../../util/qemu-thread-posix.c:557
#5  0x7f0a74b683f9 in start_thread () at /lib64/libpthread.so.0
#6  0x7f0a748b04c3 in clone () at /lib64/libc.so.6

Thread 7 (Thread 0x7f0a6b000640 (LWP 1873555)):
#0  0x7f0a747ed7d2 in sigtimedwait () at /lib64/libc.so.6
#1  0x7f0a74b72cdc in sigwait () at /lib64/libpthread.so.0
#2  0x557dac2e403b in dummy_cpu_thread_fn (arg=arg@entry=0x557dae041c10) at 
../../accel/dummy-cpus.c:46
#3  0x557dac4d7f89 in qemu_thread_start (args=0x7f0a6afff650) at 
../../util/qemu-thread-posix.c:557
#4  0x7f0a74b683f9 in start_thread () at /lib64/libpthread.so.0
#5  0x7f0a748b04c3 in clone () at /lib64/libc.so.6

Thread 6 (Thread 0x7f0a56afa640 (LWP 1873582)):
#0  0x7f0a74b71308 in do_futex_wait.constprop () at /lib64/libpthread.so.0
#1  0x7f0a74b71433 in __new_sem_wait_slow.constprop.0 () at 
/lib64/libpthread.so.0
#2  0x557dac4d8f1f in qemu_sem_timedwait (sem=sem@entry=0x557dadd62878, 
ms=ms@entry=1) at ../../util/qemu-thread-posix.c:327
#3  0x557dac4f5ac4 in worker_thread (opaque=opaque@entry=0x557dadd62800) at 
../../util/thread-pool.c:91
#4  0x557dac4d7f89 in qemu_thread_start (args=0x7f0a56af9650) at 
../../util/qemu-thread-posix.c:557
#5  0x7f0a74b683f9 in start_thread () at /lib64/libpthread.so.0
#6  0x7f0a748b04c3 in clone () at /lib64/libc.so.6

Thread 5 (Thread 0x7f0a57dff640 (LWP 1873580)):
#0  0x7f0a74b71308 in do_futex_wait.constprop () at /lib64/libpthread.so.0
#1  0x7f0a74b71433 in __new_sem_wait_slow.constprop.0 () at 
/lib64/libpthread.so.0
#2  0x557dac4d8f1f in qemu_sem_timedwait (sem=sem@entry=0x557dadd62878, 
ms=ms@entry=1) at ../../util/qemu-thread-posix.c:327
#3  0x557dac4f5ac4 in worker_thread (opaque=opaque@entry=0x557dadd62800) at 
../../util/thread-pool.c:91
#4  0x557dac4d7f89 in qemu_thread_start (args=0x7f0a57dfe650) at 
../../util/qemu-thread-posix.c:557
#5  0x7f0a74b683f9 in start_thread () at /lib64/libpthread.so.0
#6  0x7f0a748b04c3 in clone () at /lib64/libc.so.6

Thread 4 (Thread 0x7f0a572fb640 (LWP 1873581)):
#0  0x7f0a74b7296f in pread64 () at /lib64/libpthread.so.0
#1  0x557dac39f18f in pread64 (__offset=, __nbytes=, 
__buf=, __fd=) at /usr/include/bits/unistd.h:105
#2  handle_aiocb_rw_linear (aiocb=aiocb@entry=0x7f0a573fc150, buf=0x7f0a6a47e000 
'\377' ...) at ../../block/file-posix.c:1481
#3  0x557dac39f664 in handle_aiocb_rw (opaque=0x7f0a573fc150) at 
../../block/file-posix.c:1521
#4  0x557dac4f5b54 in worker_thread (opaque=opaque@entry=0x557dadd62800) at 
../../util/thread-pool.c:104
#5  0x557dac4d7f89 in qemu_thread_start (args=0x7f0a572fa650) at 
../../util/qemu-thread-posix.c:557
#6  0x7f0a74b683f9 in start_thread () at /lib64/libpthread.so.0
#7  0x7f0a748b04c3 in clone () at /lib64/libc.so.6

Thread 3 (Thread 0x7f0a714e8640 (LWP 1873552)):
#0  0x7f0a748aaedd in syscall () at /lib64/libc.so.6
#1  0x557dac4d916a in qemu_futex_wait (val=, f=) at /home/jsnow/src/qemu/include/qemu/futex.h:29
#2  qemu_event_wait (ev=ev@entry=0x557dace1f1e8 ) at 
../../util/qemu-thread-posix.c:480
#3  0x557dac4e189a in call_rcu_thread (opaque=opaque@entry=0x0) at 
../../util/rcu.c:258
#4  0x557dac4d7f89 in qemu_thread_start (args=0x7f0a714e7650) at 
../../util/qemu-thread-posix.c:557
#5  0x7f0a74b683f9 in start_thread () at /lib64/libpthread.so.0
#6  0x7f0a748b04c3 in clone () at /lib64/libc.so.6

Thread 2 (Thread 0x7f0a70ae5640 (LWP 1873553)):
#0  0x7f0a74b71308 in do_futex_wait.constprop () at /lib64/libpthread.so.0
#1  0x7f0a74b71433 in __new_sem_wait_slow.constprop.0 () at 
/lib64/libpthread.so.0
#2  0x557dac4d8f1f in qemu_sem_timedwait (sem=sem@entry=0x557dadd62878, 
ms=ms@entry=1) at ../../util/qemu-thread-posix.c:327
#3  0x557dac4f5ac4 in worker_thread (opaque=opaque@entry=0x557dadd62800) at 
../../util/thread-pool.c:91
#4  0x557dac4d7f89 in qemu_thread_start (args=0x7f0a70ae4650) at 
../../util/qemu-thread-posix.c:557
#5  0x7f0a74b683f9 in start_thread () at /lib64/libpthread.so.0
#6  0x7f0a748b04c3 in clone () at /lib64/libc.so.6

Thread 1 (Thread 0x7f0a714ebec0 (LWP 1873551)):
#0  bdrv_inherits_from_recursive (parent=parent@entry=0x557dadfb5050, 
child=0xafafafafafafafaf, 

Re: iotest 030 SIGSEGV

2021-10-14 Thread Vladimir Sementsov-Ogievskiy

14.10.2021 16:20, Hanna Reitz wrote:

On 13.10.21 23:50, John Snow wrote:

[stack trace snipped — identical to the one in the original report above]

Re: iotest 030 SIGSEGV

2021-10-14 Thread John Snow
On Thu, Oct 14, 2021 at 9:20 AM Hanna Reitz  wrote:

> On 13.10.21 23:50, John Snow wrote:
> > [stack trace snipped — identical to the original report above]

Re: iotest 030 SIGSEGV

2021-10-14 Thread Hanna Reitz

On 13.10.21 23:50, John Snow wrote:
[stack trace snipped — identical to the original report above]

Re: [PATCH 0/3] linux-aio: allow block devices to limit aio-max-batch

2021-10-14 Thread Stefano Garzarella
Kind ping :-)

Thanks,
Stefano

On Thu, Sep 23, 2021 at 4:31 PM Stefano Garzarella  wrote:
>
> Commit d7ddd0a161 ("linux-aio: limit the batch size using
> `aio-max-batch` parameter") added a way to limit the batch size
> of Linux AIO backend for the entire AIO context.
>
> The same AIO context can be shared by multiple devices, so
> latency-sensitive devices may want to limit the batch size even
> more to avoid increasing latency.
>
> This series adds the `aio-max-batch` option to the file backend,
> and uses it in laio_co_submit() and laio_io_unplug() to cap the
> Linux AIO batch size below the limit already set by the AIO context.
>
> Stefano Garzarella (3):
>   file-posix: add `aio-max-batch` option
>   linux-aio: add `dev_max_batch` parameter to laio_co_submit()
>   linux-aio: add `dev_max_batch` parameter to laio_io_unplug()
>
>  qapi/block-core.json|  5 +
>  include/block/raw-aio.h |  6 --
>  block/file-posix.c  | 14 --
>  block/linux-aio.c   | 38 +++---
>  4 files changed, 48 insertions(+), 15 deletions(-)
>
> --
> 2.31.1
>
>
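The clamping the series describes — a per-device limit that can only tighten the context-wide `aio-max-batch` — might be sketched like this (illustrative names, not the actual laio_* code):

```c
/* Sketch: combine the AIO-context-wide limit with a per-device limit.
 * A value of 0 means "no limit at this level", mirroring how the
 * series describes dev_max_batch; when both are set, the stricter
 * (smaller) one wins. */
static unsigned effective_max_batch(unsigned ctx_max, unsigned dev_max)
{
    if (dev_max == 0) {
        return ctx_max;     /* no device limit: context limit applies */
    }
    if (ctx_max == 0) {
        return dev_max;     /* no context limit: device limit applies */
    }
    return dev_max < ctx_max ? dev_max : ctx_max;
}
```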




Re: [PATCH] tests/qtest/vhost-user-blk-test: Check whether qemu-storage-daemon is available

2021-10-14 Thread Thomas Huth

On 11/08/2021 13.08, Peter Maydell wrote:

On Wed, 11 Aug 2021 at 11:00, Thomas Huth  wrote:


The vhost-user-blk-test currently hangs if QTEST_QEMU_STORAGE_DAEMON_BINARY
points to a non-existing binary. Let's improve this situation by checking
for the availability of the binary first, so we can fail gracefully if
it is not accessible.

Signed-off-by: Thomas Huth 
---
  tests/qtest/vhost-user-blk-test.c | 8 
  1 file changed, 8 insertions(+)

diff --git a/tests/qtest/vhost-user-blk-test.c 
b/tests/qtest/vhost-user-blk-test.c
index 8796c74ca4..6f108a1b62 100644
--- a/tests/qtest/vhost-user-blk-test.c
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -789,6 +789,14 @@ static const char *qtest_qemu_storage_daemon_binary(void)
  exit(0);
  }

+/* If we've got a path to the binary, check whether we can access it */
+if (strchr(qemu_storage_daemon_bin, '/') &&
+access(qemu_storage_daemon_bin, X_OK) != 0) {
+fprintf(stderr, "ERROR: '%s' is not accessible\n",
+qemu_storage_daemon_bin);
+exit(1);
+}


It makes sense not to bother starting the test if the binary isn't
even present, but why does the test hang? Shouldn't QEMU cleanly
exit rather than hanging if it turns out that it can't contact
the daemon ?


Sorry for the late reply: I think this happens due to the way we run that
qtest. The test program forks to run the storage daemon; if that daemon
binary is not available, or exits prematurely, the original program does not
notice and hangs. Maybe we should intercept the SIGCHLD signal for such cases?


 Thomas