RE: Question: xen + vhost user

2024-07-08 Thread Peng Fan
> Subject: Re: Question: xen + vhost user
> 
> +Edgar
> 
> I don't think we are using vhost-user, so I am unable to help, but I am
> adding Edgar just in case.

Thanks. Just an update: it works after some code changes to QEMU.

Thanks,
Peng.

> 
> On Sun, 30 Jun 2024, Peng Fan wrote:
> > Hi All,
> >
> > I am trying to enable vhost-user input with the Xen hypervisor on i.MX95,
> > using the QEMU vhost-user-input backend, but I get an "Invalid vring_addr
> > message" error. My Xen domU cfg:
> >
> > '-chardev', 'socket,path=/tmp/input.sock,id=mouse0',
> > '-device', 'vhost-user-input-pci,chardev=mouse0',
> >
> > Does anyone know what is missing?
> >
> > Partial error log:
> >  Vhost user message 
> > Request: VHOST_USER_SET_VRING_ADDR (9)
> > Flags:   0x1
> > Size:40
> > vhost_vring_addr:
> > index:  0
> > flags:  0
> > desc_user_addr:   0x889b
> > used_user_addr:   0x889b04c0
> > avail_user_addr:  0x889b0400
> > log_guest_addr:   0x444714c0
> > Setting virtq addresses:
> > vring_desc  at (nil)
> > vring_used  at (nil)
> > vring_avail at (nil)
> >
> > ** (vhost-user-input:1816): CRITICAL **: 07:20:46.077: Invalid vring_addr message
> >
> > Thanks,
> > Peng.
> >
> > The full vhost user debug log:
> > ./vhost-user-input --socket-path=/tmp/input.sock --evdev-path=/dev/input/event1
> >  Vhost user message 
> > Request: VHOST_USER_GET_FEATURES (1)
> > Flags:   0x1
> > Size:0
> > Sending back to guest u64: 0x00017500
> >  Vhost user message 
> > Request: VHOST_USER_GET_PROTOCOL_FEATURES (15)
> > Flags:   0x1
> > Size:0
> >  Vhost user message 
> > Request: VHOST_USER_SET_PROTOCOL_FEATURES (16)
> > Flags:   0x1
> > Size:8
> > u64: 0x8e2b
> >  Vhost user message 
> > Request: VHOST_USER_GET_QUEUE_NUM (17)
> > Flags:   0x1
> > Size:0
> >  Vhost user message 
> > Request: VHOST_USER_GET_MAX_MEM_SLOTS (36)
> > Flags:   0x1
> > Size:0
> > u64: 0x0020
> >  Vhost user message 
> > Request: VHOST_USER_SET_BACKEND_REQ_FD (21)
> > Flags:   0x9
> > Size:0
> > Fds: 6
> > Got backend_fd: 6
> >  Vhost user message 
> > Request: VHOST_USER_SET_OWNER (3)
> > Flags:   0x1
> > Size:0
> >  Vhost user message 
> > Request: VHOST_USER_GET_FEATURES (1)
> > Flags:   0x1
> > Size:0
> > Sending back to guest u64: 0x00017500
> >  Vhost user message 
> > Request: VHOST_USER_SET_VRING_CALL (13)
> > Flags:   0x1
> > Size:8
> > Fds: 7
> > u64: 0x
> > Got call_fd: 7 for vq: 0
> >  Vhost user message 
> > Request: VHOST_USER_SET_VRING_ERR (14)
> > Flags:   0x1
> > Size:8
> > Fds: 8
> > u64: 0x
> >  Vhost user message 
> > Request: VHOST_USER_SET_VRING_CALL (13)
> > Flags:   0x1
> > Size:8
> > Fds: 9
> > u64: 0x0001
> > Got call_fd: 9 for vq: 1
> >  Vhost user message 
> > Request: VHOST_USER_SET_VRING_ERR (14)
> > Flags:   0x1
> > Size:8
> > Fds: 10
> > u64: 0x0001
> > (XEN) d2v0 Unhandled SMC/HVC: 0x8450
> > (XEN) d2v0 Unhandled SMC/HVC: 0x8600ff01
> > (XEN) d2v0: vGICD: RAZ on reserved register offset 0x0c
> > (XEN) d2v0: vGICD: unhandled word write 0x00 to ICACTIVER4
> > (XEN) d2v0: vGICR: SGI: unhandled word write 0x00 to ICACTIVER0
> >  Vhost user message 
> > Request: VHOST_USER_SET_CONFIG (25)
> > Flags:   0x9
> > Size:148
> >  Vhost user message 
> > Request: VHOST_USER_SET_CONFIG (25)
> > Flags:   0x9
> > Size:148
> >  Vhost user message 
> > Request: VHOST_USER_GET_CONFIG (24)
> > Flags:   0x1
> > Size:148

Question: xen + vhost user

2024-06-30 Thread Peng Fan
Hi All,

I am trying to enable vhost-user input with the Xen hypervisor on i.MX95, using
the QEMU vhost-user-input backend, but I get an "Invalid vring_addr message"
error. My Xen domU cfg:

'-chardev', 'socket,path=/tmp/input.sock,id=mouse0',
'-device', 'vhost-user-input-pci,chardev=mouse0',
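
For reference, a vhost-user backend can only translate vring addresses that fall
inside guest RAM regions QEMU has shared with it over the control socket
(VHOST_USER_SET_MEM_TABLE). On a regular QEMU/KVM guest that is normally
arranged with a shared memory backend, roughly like this (the id and size below
are only placeholders):

-object memory-backend-memfd,id=mem0,size=512M,share=on
-machine memory-backend=mem0

Whether and how that carries over to the Xen device model and the Xen mapcache
is part of the question here.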

Does anyone know what is missing?

Partial error log:
 Vhost user message 
Request: VHOST_USER_SET_VRING_ADDR (9)
Flags:   0x1
Size:40
vhost_vring_addr:
index:  0
flags:  0
desc_user_addr:   0x889b
used_user_addr:   0x889b04c0
avail_user_addr:  0x889b0400
log_guest_addr:   0x444714c0
Setting virtq addresses:
vring_desc  at (nil)
vring_used  at (nil)
vring_avail at (nil)

** (vhost-user-input:1816): CRITICAL **: 07:20:46.077: Invalid vring_addr message
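
The CRITICAL line comes from the vring-address check in the vhost-user backend
library. A simplified paraphrase (not a verbatim copy of libvhost-user) of the
VHOST_USER_SET_VRING_ADDR handling:

/*
 * Each *_user_addr in the message is a QEMU virtual address and must fall
 * inside a memory region announced earlier via VHOST_USER_SET_MEM_TABLE;
 * the library-internal qva_to_va() translation returns NULL otherwise,
 * which is what the "(nil)" values above correspond to.
 */
static bool set_vring_addr_sketch(VuDev *dev, VhostUserMsg *msg)
{
    struct vhost_vring_addr *vra = &msg->payload.addr;
    VuVirtq *vq = &dev->vq[vra->index];

    vq->vring.flags = vra->flags;
    vq->vring.desc  = qva_to_va(dev, vra->desc_user_addr);
    vq->vring.used  = qva_to_va(dev, vra->used_user_addr);
    vq->vring.avail = qva_to_va(dev, vra->avail_user_addr);
    vq->vring.log_guest_addr = vra->log_guest_addr;

    if (!(vq->vring.desc && vq->vring.used && vq->vring.avail)) {
        vu_panic(dev, "Invalid vring_addr message");
        return false;
    }
    return true;
}

So user addresses that cannot be resolved against the shared memory table end
up as (nil) and the message is rejected.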

Thanks,
Peng.

The full vhost user debug log:
./vhost-user-input --socket-path=/tmp/input.sock --evdev-path=/dev/input/event1
 Vhost user message 
Request: VHOST_USER_GET_FEATURES (1)
Flags:   0x1
Size:0
Sending back to guest u64: 0x00017500
 Vhost user message 
Request: VHOST_USER_GET_PROTOCOL_FEATURES (15)
Flags:   0x1
Size:0
 Vhost user message 
Request: VHOST_USER_SET_PROTOCOL_FEATURES (16)
Flags:   0x1
Size:8
u64: 0x8e2b
 Vhost user message 
Request: VHOST_USER_GET_QUEUE_NUM (17)
Flags:   0x1
Size:0
 Vhost user message 
Request: VHOST_USER_GET_MAX_MEM_SLOTS (36)
Flags:   0x1
Size:0
u64: 0x0020
 Vhost user message 
Request: VHOST_USER_SET_BACKEND_REQ_FD (21)
Flags:   0x9
Size:0
Fds: 6
Got backend_fd: 6
 Vhost user message 
Request: VHOST_USER_SET_OWNER (3)
Flags:   0x1
Size:0
 Vhost user message 
Request: VHOST_USER_GET_FEATURES (1)
Flags:   0x1
Size:0
Sending back to guest u64: 0x00017500
 Vhost user message 
Request: VHOST_USER_SET_VRING_CALL (13)
Flags:   0x1
Size:8
Fds: 7
u64: 0x
Got call_fd: 7 for vq: 0
 Vhost user message 
Request: VHOST_USER_SET_VRING_ERR (14)
Flags:   0x1
Size:8
Fds: 8
u64: 0x
 Vhost user message 
Request: VHOST_USER_SET_VRING_CALL (13)
Flags:   0x1
Size:8
Fds: 9
u64: 0x0001
Got call_fd: 9 for vq: 1
 Vhost user message 
Request: VHOST_USER_SET_VRING_ERR (14)
Flags:   0x1
Size:8
Fds: 10
u64: 0x0001
(XEN) d2v0 Unhandled SMC/HVC: 0x8450
(XEN) d2v0 Unhandled SMC/HVC: 0x8600ff01
(XEN) d2v0: vGICD: RAZ on reserved register offset 0x0c
(XEN) d2v0: vGICD: unhandled word write 0x00 to ICACTIVER4
(XEN) d2v0: vGICR: SGI: unhandled word write 0x00 to ICACTIVER0
 Vhost user message 
Request: VHOST_USER_SET_CONFIG (25)
Flags:   0x9
Size:148
 Vhost user message 
Request: VHOST_USER_SET_CONFIG (25)
Flags:   0x9
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:148
 Vhost user message 
Request: VHOST_USER_GET_CONFIG (24)
Flags: 

RE: Qemu License question

2024-06-13 Thread Peng Fan
All,

> Subject: Re: Qemu License question
>
>
> IMHO this is largely a non-issue from a licensing compatibility POV, and thus
> not necessary for stable.
>
> This is self-contained test code that, IIUC, is not linking to the bits of
> QEMU that are GPL-2.0-only, so it is valid to have any license. GPL-2.0+ is
> just "nice to have" for consistency of the codebase.

Thanks for the clarification. So it is fine to keep it as it is.

Thanks,
Peng.

>
>
> With regards,
> Daniel
> --
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




Qemu License question

2024-06-12 Thread Peng Fan
Hi All,

The following files are marked as GPL-3.0-or-later. Will these conflict with
the QEMU LICENSE?

Should we update the files to GPL-2.0?

./tests/tcg/aarch64/semicall.h:7: * SPDX-License-Identifier: GPL-3.0-or-later
./tests/tcg/x86_64/system/boot.S:13: * SPDX-License-Identifier: GPL-3.0-or-later
./tests/tcg/riscv64/semicall.h:7: * SPDX-License-Identifier: GPL-3.0-or-later
./tests/tcg/multiarch/float_convs.c:6: * SPDX-License-Identifier: GPL-3.0-or-later
./tests/tcg/multiarch/float_helpers.h:6: * SPDX-License-Identifier: GPL-3.0-or-later
./tests/tcg/multiarch/libs/float_helpers.c:10: * SPDX-License-Identifier: GPL-3.0-or-later
./tests/tcg/multiarch/arm-compat-semi/semihosting.c:7: * SPDX-License-Identifier: GPL-3.0-or-later
./tests/tcg/multiarch/arm-compat-semi/semiconsole.c:7: * SPDX-License-Identifier: GPL-3.0-or-later
./tests/tcg/multiarch/float_convd.c:6: * SPDX-License-Identifier: GPL-3.0-or-later
./tests/tcg/multiarch/float_madds.c:6: * SPDX-License-Identifier: GPL-3.0-or-later
./tests/tcg/i386/system/boot.S:10: * SPDX-License-Identifier: GPL-3.0-or-later
./tests/tcg/arm/semicall.h:7: * SPDX-License-Identifier: GPL-3.0-or-later

Thanks,
Peng.



RE: [PULL 2/3] xen: Drop out of coroutine context xen_invalidate_map_cache_entry

2024-03-13 Thread Peng Fan
> Subject: Re: [PULL 2/3] xen: Drop out of coroutine context
> xen_invalidate_map_cache_entry
> 
> 13.03.2024 20:21, Michael Tokarev:
> > 12.03.2024 17:27, Anthony PERARD wrote:
> >> From: Peng Fan 
> >>
> >> xen_invalidate_map_cache_entry is not expected to run in a coroutine.
> >> Without this, there is a crash:
> >
> > Hi!  Is this a stable material? (It applies cleanly and builds on 8.2
> > and 7.2)
> 
> Actually for 7.2 it needed a minor tweak:
> 
> -void coroutine_mixed_fn xen_invalidate_map_cache_entry(uint8_t *buffer)
> +void xen_invalidate_map_cache_entry(uint8_t *buffer)

I only tested 8.2 with Xen virtio enabled. I am not sure whether 7.2 has the
issue or not.

Thanks,
Peng.

> 
> but the rest is okay.
> 
> /mjt


[PATCH V2] xen: Drop out of coroutine context xen_invalidate_map_cache_entry

2024-01-23 Thread Peng Fan (OSS)
From: Peng Fan 

xen_invalidate_map_cache_entry is not expected to run in a
coroutine. Without this, there is a crash:

signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44
threadid=) at pthread_kill.c:78
at /usr/src/debug/glibc/2.38+git-r0/sysdeps/posix/raise.c:26
fmt=0x9e1ca8a8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=assertion@entry=0xe0d25740 "!qemu_in_coroutine()",
file=file@entry=0xe0d301a8 "../qemu-xen-dir-remote/block/graph-lock.c", 
line=line@entry=260,
function=function@entry=0xe0e522c0 <__PRETTY_FUNCTION__.3> 
"bdrv_graph_rdlock_main_loop") at assert.c:92
assertion=assertion@entry=0xe0d25740 "!qemu_in_coroutine()",
file=file@entry=0xe0d301a8 "../qemu-xen-dir-remote/block/graph-lock.c", 
line=line@entry=260,
function=function@entry=0xe0e522c0 <__PRETTY_FUNCTION__.3> 
"bdrv_graph_rdlock_main_loop") at assert.c:101
at ../qemu-xen-dir-remote/block/graph-lock.c:260
at 
/home/Freenix/work/sw-stash/xen/upstream/tools/qemu-xen-dir-remote/include/block/graph-lock.h:259
host=host@entry=0x742c8000, size=size@entry=2097152)
at ../qemu-xen-dir-remote/block/io.c:3362
host=0x742c8000, size=2097152)
at ../qemu-xen-dir-remote/block/block-backend.c:2859
host=, size=, max_size=)
at ../qemu-xen-dir-remote/block/block-ram-registrar.c:33
size=2097152, max_size=2097152)
at ../qemu-xen-dir-remote/hw/core/numa.c:883
buffer=buffer@entry=0x743c5000 "")
at ../qemu-xen-dir-remote/hw/xen/xen-mapcache.c:475
buffer=buffer@entry=0x743c5000 "")
at ../qemu-xen-dir-remote/hw/xen/xen-mapcache.c:487
as=as@entry=0xe1ca3ae8 , buffer=0x743c5000,
len=, is_write=is_write@entry=true,
access_len=access_len@entry=32768)
at ../qemu-xen-dir-remote/system/physmem.c:3199
dir=DMA_DIRECTION_FROM_DEVICE, len=,
buffer=, as=0xe1ca3ae8 )
at 
/home/Freenix/work/sw-stash/xen/upstream/tools/qemu-xen-dir-remote/include/sysemu/dma.h:236
elem=elem@entry=0xf620aa30, len=len@entry=32769)
at ../qemu-xen-dir-remote/hw/virtio/virtio.c:758
elem=elem@entry=0xf620aa30, len=len@entry=32769, idx=idx@entry=0)
at ../qemu-xen-dir-remote/hw/virtio/virtio.c:919
elem=elem@entry=0xf620aa30, len=32769)
at ../qemu-xen-dir-remote/hw/virtio/virtio.c:994
req=req@entry=0xf620aa30, status=status@entry=0 '\000')
at ../qemu-xen-dir-remote/hw/block/virtio-blk.c:67
ret=0) at ../qemu-xen-dir-remote/hw/block/virtio-blk.c:136
at ../qemu-xen-dir-remote/block/block-backend.c:1559
--Type  for more, q to quit, c to continue without paging--
at ../qemu-xen-dir-remote/block/block-backend.c:1614
i1=) at ../qemu-xen-dir-remote/util/coroutine-ucontext.c:177
at ../sysdeps/unix/sysv/linux/aarch64/setcontext.S:123

Signed-off-by: Peng Fan 
---

V2:
 Drop unused ret in XenMapCacheData (thanks Stefano)

 hw/xen/xen-mapcache.c | 30 --
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
index f7d974677d..8d62b3d2ed 100644
--- a/hw/xen/xen-mapcache.c
+++ b/hw/xen/xen-mapcache.c
@@ -481,11 +481,37 @@ static void xen_invalidate_map_cache_entry_unlocked(uint8_t *buffer)
 g_free(entry);
 }
 
-void xen_invalidate_map_cache_entry(uint8_t *buffer)
+typedef struct XenMapCacheData {
+Coroutine *co;
+uint8_t *buffer;
+} XenMapCacheData;
+
+static void xen_invalidate_map_cache_entry_bh(void *opaque)
 {
+XenMapCacheData *data = opaque;
+
 mapcache_lock();
-xen_invalidate_map_cache_entry_unlocked(buffer);
+xen_invalidate_map_cache_entry_unlocked(data->buffer);
 mapcache_unlock();
+
+aio_co_wake(data->co);
+}
+
+void coroutine_mixed_fn xen_invalidate_map_cache_entry(uint8_t *buffer)
+{
+if (qemu_in_coroutine()) {
+XenMapCacheData data = {
+.co = qemu_coroutine_self(),
+.buffer = buffer,
+};
+aio_bh_schedule_oneshot(qemu_get_current_aio_context(),
+xen_invalidate_map_cache_entry_bh, &data);
+qemu_coroutine_yield();
+} else {
+mapcache_lock();
+xen_invalidate_map_cache_entry_unlocked(buffer);
+mapcache_unlock();
+}
 }
 
 void xen_invalidate_map_cache(void)
-- 
2.35.3
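
The same bounce-to-bottom-half shape can be written generically. A minimal
sketch of the pattern the patch uses, with hypothetical names
(do_work_in_main_context() and run_outside_coroutine() are illustrative, not
part of the patch):

#include "qemu/osdep.h"
#include "qemu/coroutine.h"
#include "block/aio.h"

/* Hypothetical worker that must not run in coroutine context. */
static void do_work_in_main_context(void *arg);

typedef struct {
    Coroutine *co;
    void *arg;
} BounceData;

static void bounce_bh(void *opaque)
{
    BounceData *data = opaque;

    do_work_in_main_context(data->arg);
    aio_co_wake(data->co);              /* resume the waiting coroutine */
}

/* Safe to call from both coroutine and non-coroutine context. */
static void coroutine_mixed_fn run_outside_coroutine(void *arg)
{
    if (qemu_in_coroutine()) {
        BounceData data = {
            .co = qemu_coroutine_self(),
            .arg = arg,
        };
        aio_bh_schedule_oneshot(qemu_get_current_aio_context(),
                                bounce_bh, &data);
        qemu_coroutine_yield();         /* woken by aio_co_wake() above */
    } else {
        do_work_in_main_context(arg);
    }
}

Keeping BounceData on the yielding coroutine's stack is safe because the
coroutine does not resume until the bottom half has finished with it.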




RE: [PATCH] xen: Drop out of coroutine context xen_invalidate_map_cache_entry

2024-01-23 Thread Peng Fan
> Subject: Re: [PATCH] xen: Drop out of coroutine context
> xen_invalidate_map_cache_entry
> 
> On Tue, 16 Jan 2024, Peng Fan (OSS) wrote:
> > From: Peng Fan 
> >
> > xen_invalidate_map_cache_entry is not expected to run in a coroutine.
> > Without this, there is a crash:
> >
> > signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44
> > threadid=) at pthread_kill.c:78
> > at /usr/src/debug/glibc/2.38+git-r0/sysdeps/posix/raise.c:26
> > fmt=0x9e1ca8a8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
> > assertion=assertion@entry=0xe0d25740 "!qemu_in_coroutine()",
> > file=file@entry=0xe0d301a8 "../qemu-xen-dir-remote/block/graph-
> lock.c", line=line@entry=260,
> > function=function@entry=0xe0e522c0 <__PRETTY_FUNCTION__.3>
> "bdrv_graph_rdlock_main_loop") at assert.c:92
> > assertion=assertion@entry=0xe0d25740 "!qemu_in_coroutine()",
> > file=file@entry=0xe0d301a8 "../qemu-xen-dir-remote/block/graph-
> lock.c", line=line@entry=260,
> > function=function@entry=0xe0e522c0 <__PRETTY_FUNCTION__.3>
> "bdrv_graph_rdlock_main_loop") at assert.c:101
> > at ../qemu-xen-dir-remote/block/graph-lock.c:260
> > at /home/Freenix/work/sw-stash/xen/upstream/tools/qemu-xen-dir-
> remote/include/block/graph-lock.h:259
> > host=host@entry=0x742c8000, size=size@entry=2097152)
> > at ../qemu-xen-dir-remote/block/io.c:3362
> > host=0x742c8000, size=2097152)
> > at ../qemu-xen-dir-remote/block/block-backend.c:2859
> > host=, size=, max_size=)
> > at ../qemu-xen-dir-remote/block/block-ram-registrar.c:33
> > size=2097152, max_size=2097152)
> > at ../qemu-xen-dir-remote/hw/core/numa.c:883
> > buffer=buffer@entry=0x743c5000 "")
> > at ../qemu-xen-dir-remote/hw/xen/xen-mapcache.c:475
> > buffer=buffer@entry=0x743c5000 "")
> > at ../qemu-xen-dir-remote/hw/xen/xen-mapcache.c:487
> > as=as@entry=0xe1ca3ae8 ,
> buffer=0x743c5000,
> > len=, is_write=is_write@entry=true,
> > access_len=access_len@entry=32768)
> > at ../qemu-xen-dir-remote/system/physmem.c:3199
> > dir=DMA_DIRECTION_FROM_DEVICE, len=,
> > buffer=, as=0xe1ca3ae8 )
> > at /home/Freenix/work/sw-stash/xen/upstream/tools/qemu-xen-dir-
> remote/include/sysemu/dma.h:236
> > elem=elem@entry=0xf620aa30, len=len@entry=32769)
> > at ../qemu-xen-dir-remote/hw/virtio/virtio.c:758
> > elem=elem@entry=0xf620aa30, len=len@entry=32769,
> idx=idx@entry=0)
> > at ../qemu-xen-dir-remote/hw/virtio/virtio.c:919
> > elem=elem@entry=0xf620aa30, len=32769)
> > at ../qemu-xen-dir-remote/hw/virtio/virtio.c:994
> > req=req@entry=0xf620aa30, status=status@entry=0 '\000')
> > at ../qemu-xen-dir-remote/hw/block/virtio-blk.c:67
> > ret=0) at ../qemu-xen-dir-remote/hw/block/virtio-blk.c:136
> > at ../qemu-xen-dir-remote/block/block-backend.c:1559
> > --Type  for more, q to quit, c to continue without paging--
> > at ../qemu-xen-dir-remote/block/block-backend.c:1614
> > i1=) at ../qemu-xen-dir-remote/util/coroutine-
> ucontext.c:177
> > at ../sysdeps/unix/sysv/linux/aarch64/setcontext.S:123
> >
> > Signed-off-by: Peng Fan 
> 
> Hi Peng! Many thanks for the patch and for the investigation!
> 
> Only one minor question below
> 
> 
> > ---
> >  hw/xen/xen-mapcache.c | 31 +--
> >  1 file changed, 29 insertions(+), 2 deletions(-)
> >
> > diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
> > index f7d974677d..4e1bb665ee 100644
> > --- a/hw/xen/xen-mapcache.c
> > +++ b/hw/xen/xen-mapcache.c
> > @@ -481,11 +481,38 @@ static void xen_invalidate_map_cache_entry_unlocked(uint8_t *buffer)
> >  g_free(entry);
> >  }
> >
> > -void xen_invalidate_map_cache_entry(uint8_t *buffer)
> > +typedef struct XenMapCacheData {
> > +Coroutine *co;
> > +uint8_t *buffer;
> > +int ret;
> 
> Do we need int ret? It doesn't look like we are using it.

Good catch, it is not needed; I will drop it in V2.

Thanks,
Peng.

> 
> 
> > +} XenMapCacheData;
> > +
> > +static void xen_invalidate_map_cache_entry_bh(void *opaque)
> >  {
> > +XenMapCacheData *data = opaque;
> > +
> >  mapcache_lock();
> > -xen_invalidate_map_cache_entry_unlocked(buffer);
> > +xen_invalidate_map_c

[PATCH] xen: Drop out of coroutine context xen_invalidate_map_cache_entry

2024-01-16 Thread Peng Fan (OSS)
From: Peng Fan 

xen_invalidate_map_cache_entry is not expected to run in a
coroutine. Without this, there is a crash:

signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44
threadid=) at pthread_kill.c:78
at /usr/src/debug/glibc/2.38+git-r0/sysdeps/posix/raise.c:26
fmt=0x9e1ca8a8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=assertion@entry=0xe0d25740 "!qemu_in_coroutine()",
file=file@entry=0xe0d301a8 "../qemu-xen-dir-remote/block/graph-lock.c", 
line=line@entry=260,
function=function@entry=0xe0e522c0 <__PRETTY_FUNCTION__.3> 
"bdrv_graph_rdlock_main_loop") at assert.c:92
assertion=assertion@entry=0xe0d25740 "!qemu_in_coroutine()",
file=file@entry=0xe0d301a8 "../qemu-xen-dir-remote/block/graph-lock.c", 
line=line@entry=260,
function=function@entry=0xe0e522c0 <__PRETTY_FUNCTION__.3> 
"bdrv_graph_rdlock_main_loop") at assert.c:101
at ../qemu-xen-dir-remote/block/graph-lock.c:260
at 
/home/Freenix/work/sw-stash/xen/upstream/tools/qemu-xen-dir-remote/include/block/graph-lock.h:259
host=host@entry=0x742c8000, size=size@entry=2097152)
at ../qemu-xen-dir-remote/block/io.c:3362
host=0x742c8000, size=2097152)
at ../qemu-xen-dir-remote/block/block-backend.c:2859
host=, size=, max_size=)
at ../qemu-xen-dir-remote/block/block-ram-registrar.c:33
size=2097152, max_size=2097152)
at ../qemu-xen-dir-remote/hw/core/numa.c:883
buffer=buffer@entry=0x743c5000 "")
at ../qemu-xen-dir-remote/hw/xen/xen-mapcache.c:475
buffer=buffer@entry=0x743c5000 "")
at ../qemu-xen-dir-remote/hw/xen/xen-mapcache.c:487
as=as@entry=0xe1ca3ae8 , buffer=0x743c5000,
len=, is_write=is_write@entry=true,
access_len=access_len@entry=32768)
at ../qemu-xen-dir-remote/system/physmem.c:3199
dir=DMA_DIRECTION_FROM_DEVICE, len=,
buffer=, as=0xe1ca3ae8 )
at 
/home/Freenix/work/sw-stash/xen/upstream/tools/qemu-xen-dir-remote/include/sysemu/dma.h:236
elem=elem@entry=0xf620aa30, len=len@entry=32769)
at ../qemu-xen-dir-remote/hw/virtio/virtio.c:758
elem=elem@entry=0xf620aa30, len=len@entry=32769, idx=idx@entry=0)
at ../qemu-xen-dir-remote/hw/virtio/virtio.c:919
elem=elem@entry=0xf620aa30, len=32769)
at ../qemu-xen-dir-remote/hw/virtio/virtio.c:994
req=req@entry=0xf620aa30, status=status@entry=0 '\000')
at ../qemu-xen-dir-remote/hw/block/virtio-blk.c:67
ret=0) at ../qemu-xen-dir-remote/hw/block/virtio-blk.c:136
at ../qemu-xen-dir-remote/block/block-backend.c:1559
--Type  for more, q to quit, c to continue without paging--
at ../qemu-xen-dir-remote/block/block-backend.c:1614
i1=) at ../qemu-xen-dir-remote/util/coroutine-ucontext.c:177
at ../sysdeps/unix/sysv/linux/aarch64/setcontext.S:123

Signed-off-by: Peng Fan 
---
 hw/xen/xen-mapcache.c | 31 +--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
index f7d974677d..4e1bb665ee 100644
--- a/hw/xen/xen-mapcache.c
+++ b/hw/xen/xen-mapcache.c
@@ -481,11 +481,38 @@ static void xen_invalidate_map_cache_entry_unlocked(uint8_t *buffer)
 g_free(entry);
 }
 
-void xen_invalidate_map_cache_entry(uint8_t *buffer)
+typedef struct XenMapCacheData {
+Coroutine *co;
+uint8_t *buffer;
+int ret;
+} XenMapCacheData;
+
+static void xen_invalidate_map_cache_entry_bh(void *opaque)
 {
+XenMapCacheData *data = opaque;
+
 mapcache_lock();
-xen_invalidate_map_cache_entry_unlocked(buffer);
+xen_invalidate_map_cache_entry_unlocked(data->buffer);
 mapcache_unlock();
+
+aio_co_wake(data->co);
+}
+
+void coroutine_mixed_fn xen_invalidate_map_cache_entry(uint8_t *buffer)
+{
+if (qemu_in_coroutine()) {
+XenMapCacheData data = {
+.co = qemu_coroutine_self(),
+.buffer = buffer,
+};
+aio_bh_schedule_oneshot(qemu_get_current_aio_context(),
+xen_invalidate_map_cache_entry_bh, &data);
+qemu_coroutine_yield();
+} else {
+mapcache_lock();
+xen_invalidate_map_cache_entry_unlocked(buffer);
+mapcache_unlock();
+}
 }
 
 void xen_invalidate_map_cache(void)
-- 
2.35.3




!qemu_in_coroutine() assert on ARM64 XEN

2024-01-07 Thread Peng Fan
Hi All,

When enabling virtio disk and virtio net on Xen, I sometimes see a QEMU block
assertion failure that kills QEMU. This is not 100% reproducible. I am using
the QEMU master branch:

7425b6277f12e82952cede1f531bfc689bf77fb1 (HEAD -> dummy, origin/staging, 
origin/master, origin/HEAD, master) Merge tag 'tracing-pull-request' 
of https://gitlab.com/stefanha/qemu into staging

The QEMU build options are the ones used by the Xen tools Makefile; I just
changed the target to qemu-system-aarch64.

Does anyone have suggestions?
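
The assertion that fires is the coroutine-context check in QEMU's block-graph
locking code; paraphrased (simplified from block/graph-lock.c, around the line
shown in the backtrace below):

void bdrv_graph_rdlock_main_loop(void)
{
    GLOBAL_STATE_CODE();
    /* Taking the graph read lock from the main loop is only legal outside
     * coroutine context, hence the assert that aborts here. */
    assert(!qemu_in_coroutine());
}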

The coredump stack:

Symbols already loaded for /usr/lib/libc.so.6
(gdb) bt
#0  __pthread_kill_implementation (threadid=,
signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44
#1  0x9e100568 in __pthread_kill_internal (signo=6,
threadid=) at pthread_kill.c:78
#2  0x9e0bacd0 in __GI_raise (sig=sig@entry=6)
at /usr/src/debug/glibc/2.38+git-r0/sysdeps/posix/raise.c:26
#3  0x9e0a6ef0 in __GI_abort () at abort.c:79
#4  0x9e0b43f8 in __assert_fail_base (
fmt=0x9e1ca8a8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=assertion@entry=0xe0d25740 "!qemu_in_coroutine()",
file=file@entry=0xe0d301a8 "../qemu-xen-dir-remote/block/graph-lock.c", 
line=line@entry=260,
function=function@entry=0xe0e522c0 <__PRETTY_FUNCTION__.3> 
"bdrv_graph_rdlock_main_loop") at assert.c:92
#5  0x9e0b4470 in __assert_fail (
assertion=assertion@entry=0xe0d25740 "!qemu_in_coroutine()",
file=file@entry=0xe0d301a8 "../qemu-xen-dir-remote/block/graph-lock.c", 
line=line@entry=260,
function=function@entry=0xe0e522c0 <__PRETTY_FUNCTION__.3> 
"bdrv_graph_rdlock_main_loop") at assert.c:101
#6  0xe0a66a60 in bdrv_graph_rdlock_main_loop ()
at ../qemu-xen-dir-remote/block/graph-lock.c:260
#7  0xe0a6d9e0 in graph_lockable_auto_lock_mainloop (x=)
--Type  for more, q to quit, c to continue without paging--
at 
/home/Freenix/work/sw-stash/xen/upstream/tools/qemu-xen-dir-remote/include/block/graph-lock.h:259
#8  bdrv_unregister_buf (bs=bs@entry=0xf619d5a0,
host=host@entry=0x742c8000, size=size@entry=2097152)
at ../qemu-xen-dir-remote/block/io.c:3362
#9  0xe0a5ddd4 in blk_unregister_buf (blk=,
host=0x742c8000, size=2097152)
at ../qemu-xen-dir-remote/block/block-backend.c:2859
#10 0xe060aab4 in ram_block_removed (n=,
host=, size=, max_size=)
at ../qemu-xen-dir-remote/block/block-ram-registrar.c:33
#11 0xe0399318 in ram_block_notify_remove (host=0x742c8000,
size=2097152, max_size=2097152)
at ../qemu-xen-dir-remote/hw/core/numa.c:883
#12 0xe097cf84 in xen_invalidate_map_cache_entry_unlocked (
buffer=buffer@entry=0x743c5000 "")
at ../qemu-xen-dir-remote/hw/xen/xen-mapcache.c:475
#13 0xe097dad0 in xen_invalidate_map_cache_entry (
buffer=buffer@entry=0x743c5000 "")
at ../qemu-xen-dir-remote/hw/xen/xen-mapcache.c:487
#14 0xe0993e18 in address_space_unmap (
as=as@entry=0xe1ca3ae8 , buffer=0x743c5000,
len=, is_write=is_write@entry=true,
--Type  for more, q to quit, c to continue without paging--
access_len=access_len@entry=32768)
at ../qemu-xen-dir-remote/system/physmem.c:3199
#15 0xe095cc9c in dma_memory_unmap (access_len=32768,
dir=DMA_DIRECTION_FROM_DEVICE, len=,
buffer=, as=0xe1ca3ae8 )

at 
/home/Freenix/work/sw-stash/xen/upstream/tools/qemu-xen-dir-remote/include/sysemu/dma.h:236
#16 virtqueue_unmap_sg (vq=vq@entry=0x965cc010,
elem=elem@entry=0xf620aa30, len=len@entry=32769)

at ../qemu-xen-dir-remote/hw/virtio/virtio.c:758
#17 0xe095efa4 in virtqueue_fill (vq=vq@entry=0x965cc010,
elem=elem@entry=0xf620aa30, len=len@entry=32769, idx=idx@entry=0)
at ../qemu-xen-dir-remote/hw/virtio/virtio.c:919
#18 0xe095f0b8 in virtqueue_push (vq=0x965cc010,

elem=elem@entry=0xf620aa30, len=32769)
at ../qemu-xen-dir-remote/hw/virtio/virtio.c:994
#19 0xe091a608 in virtio_blk_req_complete (
req=req@entry=0xf620aa30, status=status@entry=0 '\000')

at ../qemu-xen-dir-remote/hw/block/virtio-blk.c:67
#20 0xe091bdc8 in virtio_blk_rw_complete (opaque=,
ret=0) at ../qemu-xen-dir-remote/hw/block/virtio-blk.c:136
#21 0xe0a5a938 in blk_aio_complete (acb=acb@entry=0x880015f0)

at ../qemu-xen-dir-remote/block/block-backend.c:1559
--Type  for more, q to quit, c to continue without paging--
#22 0xe0a5b58c in blk_aio_read_entry (opaque=0x880015f0)
at ../qemu-xen-dir-remote/block/block-backend.c:1614

#23 0xe0b96c2c in coroutine_trampoline (i0=,
i1=) at ../qemu-xen-dir-remote/util/coroutine-ucontext.c:177
#24 0x9e0bfb40 in ?? ()
at ../sysdeps/unix/sysv/linux/aarch64/setcontext.S:123

   from /usr/lib/libc.so.6

(gdb) thread apply all bt

Thread 10 (Thread 0x951348c0 (LWP 5460)):
#0  0x9e15d8c4 in __GI___libc_read (nbytes=16, buf=0x7c000cf