Re: [PATCH] qtest/x86/numa-test: do not use the obsolete 'pentium' cpu

2024-06-11 Thread Mario Casquero
This patch has been successfully tested. After running the numa-test
binary, '-cpu max' is now used instead of the obsolete 'pentium' model.

# starting QEMU: exec /home/qemu/build/qemu-system-x86_64 -qtest
unix:/tmp/qtest-16915.sock -qtest-log /dev/null -chardev
socket,path=/tmp/qtest-16915.qmp,id=char0 -mon
chardev=char0,mode=control -display none -audio none -object
memory-backend-ram,id=ram,size=128M -cpu max -machine
smp.cpus=8,smp.sockets=2,smp.cores=2,smp.threads=2 -numa
node,nodeid=0,memdev=ram -numa node,nodeid=1 -numa
cpu,node-id=1,socket-id=0 -numa cpu,node-id=0,socket-id=1,core-id=0
-numa cpu,node-id=0,socket-id=1,core-id=1,thread-id=0 -numa
cpu,node-id=1,socket-id=1,core-id=1,thread-id=1 -accel qtest

Tested-by: Mario Casquero 


On Tue, Jun 4, 2024 at 8:24 AM Ani Sinha  wrote:
>
> 'pentium' cpu is old and obsolete and should be avoided for running tests if
> it's not strictly needed. Use the 'max' cpu instead for the generic,
> non-CPU-specific numa test.
>
> CC: th...@redhat.com
> Signed-off-by: Ani Sinha 
> ---
>  tests/qtest/numa-test.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
> index 7aa262dbb9..f01f19592d 100644
> --- a/tests/qtest/numa-test.c
> +++ b/tests/qtest/numa-test.c
> @@ -125,7 +125,8 @@ static void pc_numa_cpu(const void *data)
>  QTestState *qts;
>  g_autofree char *cli = NULL;
>
> -cli = make_cli(data, "-cpu pentium -machine smp.cpus=8,smp.sockets=2,smp.cores=2,smp.threads=2 "
> +cli = make_cli(data,
> +"-cpu max -machine smp.cpus=8,smp.sockets=2,smp.cores=2,smp.threads=2 "
>  "-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
>  "-numa cpu,node-id=1,socket-id=0 "
>  "-numa cpu,node-id=0,socket-id=1,core-id=0 "
> --
> 2.42.0
>
>




Re: [PATCH v4 5/5] backends/hostmem: Report error when memory size is unaligned

2024-06-05 Thread Mario Casquero
This patch has been successfully tested. Allocate some 2M hugepages in
the host, then boot up a VM with an unaligned memory size backed by a
file; QEMU reports the following message:
qemu-system-x86_64: backend 'memory-backend-file' memory size must be
multiple of 2 MiB
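
A minimal reproducer along these lines triggers the check (paths and sizes
are illustrative, not taken from the original report; assumes 2M hugepages
are mounted at /dev/hugepages):

# size=129M is not a multiple of the 2M hugepage size, so creation fails:
qemu-system-x86_64 -display none \
    -object memory-backend-file,id=mem0,mem-path=/dev/hugepages,size=129M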

Tested-by: Mario Casquero 

On Wed, Jun 5, 2024 at 12:45 PM Michal Privoznik  wrote:
>
> If memory-backend-{file,ram} has a size that's not aligned to the
> underlying page size, it is not only wasteful but may also lead
> to hard-to-debug behaviour. For instance, in the case of
> memory-backend-file and hugepages, madvise() and mbind() fail.
> Rightfully so, a page is the smallest unit they can work with. And
> even though an error is reported, the root cause is not very
> clear:
>
>   qemu-system-x86_64: Couldn't set property 'dump' on 'memory-backend-file': Invalid argument
>
> After this commit:
>
>   qemu-system-x86_64: backend 'memory-backend-file' memory size must be multiple of 2 MiB
>
> Signed-off-by: Michal Privoznik 
> Reviewed-by: Philippe Mathieu-Daudé 
> Tested-by: Mario Casquero 
> ---
>  backends/hostmem.c | 10 ++
>  1 file changed, 10 insertions(+)
>
> diff --git a/backends/hostmem.c b/backends/hostmem.c
> index 012a8c190f..4d6c69fe4d 100644
> --- a/backends/hostmem.c
> +++ b/backends/hostmem.c
> @@ -20,6 +20,7 @@
>  #include "qom/object_interfaces.h"
>  #include "qemu/mmap-alloc.h"
>  #include "qemu/madvise.h"
> +#include "qemu/cutils.h"
>  #include "hw/qdev-core.h"
>
>  #ifdef CONFIG_NUMA
> @@ -337,6 +338,7 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
>  HostMemoryBackendClass *bc = MEMORY_BACKEND_GET_CLASS(uc);
>  void *ptr;
>  uint64_t sz;
> +size_t pagesize;
>  bool async = !phase_check(PHASE_LATE_BACKENDS_CREATED);
>
>  if (!bc->alloc) {
> @@ -348,6 +350,14 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
>
>  ptr = memory_region_get_ram_ptr(&backend->mr);
>  sz = memory_region_size(&backend->mr);
> +pagesize = qemu_ram_pagesize(backend->mr.ram_block);
> +
> +if (!QEMU_IS_ALIGNED(sz, pagesize)) {
> +g_autofree char *pagesize_str = size_to_str(pagesize);
> +error_setg(errp, "backend '%s' memory size must be multiple of %s",
> +   object_get_typename(OBJECT(uc)), pagesize_str);
> +return;
> +}
>
>  if (backend->merge &&
>  qemu_madvise(ptr, sz, QEMU_MADV_MERGEABLE)) {
> --
> 2.44.1
>
>
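
For context, the alignment test in the hunk above reduces to a modulo check.
A minimal sketch of the macro's semantics (QEMU defines the real one in
include/qemu/osdep.h; this is an illustration, not the verbatim source):

/* aligned iff the size is an exact multiple of the page size */
#define QEMU_IS_ALIGNED(n, m) (((n) % (m)) == 0)

/* e.g. sz = 128M with 2M pages -> aligned; sz = 129M -> rejected */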




Re: [PATCH v2 4/4] backends/hostmem: Report error when memory size is unaligned

2024-06-05 Thread Mario Casquero
This patch has been successfully tested by QE. After allocating some
hugepages in the host, boot up a VM with the memory backed by a file
and an unaligned size, then check the message now displayed by QEMU:
qemu-system-x86_64: backend memory size must be multiple of 0x200000

Tested-by: Mario Casquero 


On Fri, May 31, 2024 at 9:29 AM Michal Privoznik  wrote:
>
> If memory-backend-{file,ram} has a size that's not aligned to the
> underlying page size, it is not only wasteful but may also lead
> to hard-to-debug behaviour. For instance, in the case of
> memory-backend-file and hugepages, madvise() and mbind() fail.
> Rightfully so, a page is the smallest unit they can work with. And
> even though an error is reported, the root cause is not very
> clear:
>
>   qemu-system-x86_64: Couldn't set property 'dump' on 'memory-backend-file': Invalid argument
>
> After this commit:
>
>   qemu-system-x86_64: backend memory size must be multiple of 0x200000
>
> Signed-off-by: Michal Privoznik 
> ---
>  backends/hostmem.c | 8 
>  1 file changed, 8 insertions(+)
>
> diff --git a/backends/hostmem.c b/backends/hostmem.c
> index 012a8c190f..5f9c9ea8f5 100644
> --- a/backends/hostmem.c
> +++ b/backends/hostmem.c
> @@ -337,6 +337,7 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
>  HostMemoryBackendClass *bc = MEMORY_BACKEND_GET_CLASS(uc);
>  void *ptr;
>  uint64_t sz;
> +size_t pagesize;
>  bool async = !phase_check(PHASE_LATE_BACKENDS_CREATED);
>
>  if (!bc->alloc) {
> @@ -348,6 +349,13 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
>
>  ptr = memory_region_get_ram_ptr(&backend->mr);
>  sz = memory_region_size(&backend->mr);
> +pagesize = qemu_ram_pagesize(backend->mr.ram_block);
> +
> +if (!QEMU_IS_ALIGNED(sz, pagesize)) {
> +error_setg(errp, "backend memory size must be multiple of 0x%"
> +   PRIx64, pagesize);
> +return;
> +}
>
>  if (backend->merge &&
>  qemu_madvise(ptr, sz, QEMU_MADV_MERGEABLE)) {
> --
> 2.44.1
>
>




Re: [PATCH 0/3] Remove useless architecture prefix from the CPU list

2024-04-26 Thread Mario Casquero
This series has been successfully tested on x86. Run '-cpu help' and
check that the 'x86' prefix is no longer present in the list.
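
Illustratively, the change looks like this at the top of the list (the
excerpt is paraphrased, not verbatim output):

$ qemu-system-x86_64 -cpu help
before:  x86 EPYC  ...
after:   EPYC      ...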

Tested-by: Mario Casquero 

On Sat, Apr 20, 2024 at 7:47 AM Thomas Huth  wrote:
>
> Printing an architecture prefix in front of each CPU name is not helpful
> at all: It is confusing for the users since they don't know whether they
> have to specify these letters for the "-cpu" parameter, too, and it also
> takes some precious space in the dense output of the CPU entries. Let's
> simply remove those now.
>
> Thomas Huth (3):
>   target/i386/cpu: Remove "x86" prefix from the CPU list
>   target/s390x/cpu_models: Rework the output of "-cpu help"
>   target/ppc/cpu_init: Remove "PowerPC" prefix from the CPU list
>
>  target/i386/cpu.c | 2 +-
>  target/ppc/cpu_init.c | 9 +
>  target/s390x/cpu_models.c | 9 +
>  3 files changed, 11 insertions(+), 9 deletions(-)
>
> --
> 2.44.0
>
>




Re: [PATCH v1] virtio-mem: improve error message when unplug of device fails due to plugged memory

2024-04-26 Thread Mario Casquero
This patch has been successfully tested. Boot up a VM with a
virtio-mem device, hotplug some memory by increasing requested-size,
and finally try to unplug the device to see the new message:
(qemu) device_del vmem0
Error: virtio-mem device cannot get unplugged while some of its memory
is still plugged

Tested-by: Mario Casquero 


On Tue, Apr 16, 2024 at 4:14 PM David Hildenbrand  wrote:
>
> The error message is actually expressive, considering QEMU only. But
> when called from Libvirt, talking about "size" can be confusing, because
> in Libvirt "size" translates to the memory backend size in QEMU (maximum
> size) and "current" translates to the QEMU "size" property.
>
> Let's simply avoid talking about the "size" property and spell out that
> some device memory is still plugged.
>
> Cc: Liang Cong 
> Cc: Mario Casquero 
> Cc: "Michael S. Tsirkin" 
> Signed-off-by: David Hildenbrand 
> ---
>  hw/virtio/virtio-mem.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
> index ffd119ebac..ef64bf1b4a 100644
> --- a/hw/virtio/virtio-mem.c
> +++ b/hw/virtio/virtio-mem.c
> @@ -1832,8 +1832,8 @@ static void virtio_mem_unplug_request_check(VirtIOMEM *vmem, Error **errp)
>  }
>
>  if (vmem->size) {
> -error_setg(errp, "virtio-mem device cannot get unplugged while"
> -   " '" VIRTIO_MEM_SIZE_PROP "' != '0'");
> +error_setg(errp, "virtio-mem device cannot get unplugged while some"
> +   " of its memory is still plugged");
>  return;
>  }
>  if (vmem->requested_size) {
> --
> 2.44.0
>




Re: [PATCH v2 00/14] libvhost-user: support more memslots and cleanup memslot handling code

2024-03-11 Thread Mario Casquero
This series has been successfully tested by QE. Start the
qemu-storage-daemon in the background with a RHEL 9.5 image and
vhost-user-blk. After that, boot up a VM with virtio-mem and
vhost-user-blk-pci. Check with the HMP command 'info mtree' that
virtio-mem is making use of multiple memslots.

Tested-by: Mario Casquero 

On Mon, Mar 11, 2024 at 9:00 PM Mario Casquero  wrote:
>
> This series has been successfully tested by QE. Start the
> qemu-storage-daemon in the background with a rhel 9.5 image and
> vhost-user-blk. After that, boot up a VM with virtio-mem and
> vhost-user-blk-pci. Check with the HMP command 'info mtree' that
> virtio-mem is making use of multiple memslots.
>
>
> On Wed, Feb 14, 2024 at 4:18 PM David Hildenbrand  wrote:
> >
> > This series adds support for more memslots (509) to libvhost-user, to
> > make it fully compatible with virtio-mem, which uses up to 256 memslots
> > across all memory devices in "dynamic-memslot" mode (more details
> > in patch #2).
> >
> > With that in place, this series optimizes and extends memory region
> > handling in libvhost-user:
> > * Heavily deduplicate and clean up the memory region handling code
> > * Speeds up GPA->VA translation with many memslots using binary search
> > * Optimize mmap_offset handling to use it as fd_offset for mmap()
> > * Avoid ring remapping when adding a single memory region
> > * Avoid dumping all guest memory, possibly allocating memory in sparse
> >   memory mappings when the process crashes
> >
> > I'm being very careful to not break some weird corner case that modern
> > QEMU might no longer trigger, but older one could have triggered or some
> > other frontend might trigger.
> >
> > The only thing where I am not careful is to forbid memory regions that
> > overlap in GPA space: it doesn't make any sense.
> >
> > With this series, virtio-mem (with dynamic-memslots=on) +
> > qemu-storage-daemon works flawlessly and as expected in my tests.
> >
> > v1 -> v2:
> > * Drop "libvhost-user: Fix msg_region->userspace_addr computation"
> >  -> Not actually required
> > * "libvhost-user: Factor out adding a memory region"
> >  -> Make debug output more consistent (add missing ":")
> > * "libvhost-user: Use most of mmap_offset as fd_offset"
> >  -> get_fd_pagesize -> get_fd_hugepagesize; remove getpagesize()
> >  -> "mmap_offset:" to "old mmap_offset:" in debug message
> >  -> "adj mmap_offset:" to "new mmap_offset:" in debug message
> >  -> Use "(unsigned int)fs.f_type"; the man page of fstatfs() calls out
> > that the type of f_type can vary depending on the architecture.
> > "unsigned int" is sufficient here.
> >  -> Updated patch description
> > * Added RBs+ACKs
> > * Did a Gitlab CI run, seems to be happy regarding libvhost-user
> >
> > Cc: Michael S. Tsirkin 
> > Cc: Jason Wang 
> > Cc: Stefan Hajnoczi 
> > Cc: Stefano Garzarella 
> > Cc: Germano Veit Michel 
> > Cc: Raphael Norwitz 
> >
> > David Hildenbrand (14):
> >   libvhost-user: Dynamically allocate memory for memory slots
> >   libvhost-user: Bump up VHOST_USER_MAX_RAM_SLOTS to 509
> >   libvhost-user: Factor out removing all mem regions
> >   libvhost-user: Merge vu_set_mem_table_exec_postcopy() into
> > vu_set_mem_table_exec()
> >   libvhost-user: Factor out adding a memory region
> >   libvhost-user: No need to check for NULL when unmapping
> >   libvhost-user: Don't zero out memory for memory regions
> >   libvhost-user: Don't search for duplicates when removing memory
> > regions
> >   libvhost-user: Factor out search for memory region by GPA and simplify
> >   libvhost-user: Speedup gpa_to_mem_region() and vu_gpa_to_va()
> >   libvhost-user: Use most of mmap_offset as fd_offset
> >   libvhost-user: Factor out vq usability check
> >   libvhost-user: Dynamically remap rings after (temporarily?) removing
> > memory regions
> >   libvhost-user: Mark mmap'ed region memory as MADV_DONTDUMP
> >
> >  subprojects/libvhost-user/libvhost-user.c | 595 --
> >  subprojects/libvhost-user/libvhost-user.h |  10 +-
> >  2 files changed, 334 insertions(+), 271 deletions(-)
> >
> > --
> > 2.43.0
> >
> >
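
As background for "libvhost-user: Speedup gpa_to_mem_region() and
vu_gpa_to_va()": once regions are kept sorted by guest physical address, the
lookup becomes a binary search, O(log n) instead of O(n) per translation,
which matters with up to 509 memslots. A minimal standalone sketch of the
idea (struct and function names are illustrative, not the library's actual
API):

#include <stddef.h>
#include <stdint.h>

typedef struct MemRegion {
    uint64_t gpa_start;   /* guest physical start address */
    uint64_t size;        /* region size in bytes */
    uintptr_t hva_start;  /* mapped host virtual start address */
} MemRegion;

/* Binary search over regions sorted by gpa_start (no overlaps allowed). */
static MemRegion *find_region(MemRegion *r, size_t n, uint64_t gpa)
{
    size_t lo = 0, hi = n;                 /* candidate range [lo, hi) */

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;

        if (gpa < r[mid].gpa_start) {
            hi = mid;                      /* target lies below this region */
        } else if (gpa - r[mid].gpa_start >= r[mid].size) {
            lo = mid + 1;                  /* target lies above this region */
        } else {
            return &r[mid];                /* gpa falls inside this region */
        }
    }
    return NULL;                           /* unmapped guest physical address */
}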




Re: [PATCH v4 1/1] oslib-posix: initialize backend memory objects in parallel

2024-02-02 Thread Mario Casquero
This patch has been successfully tested by QE. After configuring two
memory backends with preallocation context objects, each bound to a
host node, QEMU is at least three times faster than before.

# time /usr/libexec/qemu-kvm -M q35 -m 16G,maxmem=32G -numa
node,memdev=mem0,nodeid=0 -numa node,memdev=mem1,nodeid=1 -object
thread-context,id=tc1,node-affinity=0 -object
thread-context,id=tc2,node-affinity=1 -object
memory-backend-ram,id=mem0,size=8G,policy=bind,host-nodes=0 -object
memory-backend-ram,id=mem1,size=8G,policy=bind,host-nodes=1 -nographic
-monitor stdio
QEMU 8.2.0 monitor - type 'help' for more information
qemu-kvm: cannot use stdio by multiple character devices
qemu-kvm: could not connect serial device to character backend 'stdio'

real 0m0.038s
user 0m0.013s
sys 0m0.005s

# time /home/qemu/build/qemu-system-x86_64 -M q35 -m 16G,maxmem=32G
-numa node,memdev=mem0,nodeid=0 -numa node,memdev=mem1,nodeid=1
-object thread-context,id=tc1,node-affinity=0 -object
thread-context,id=tc2,node-affinity=1 -object
memory-backend-ram,id=mem0,size=8G,policy=bind,host-nodes=0 -object
memory-backend-ram,id=mem1,size=8G,policy=bind,host-nodes=1 -nographic
-monitor stdio
QEMU 8.2.50 monitor - type 'help' for more information
qemu-system-x86_64: cannot use stdio by multiple character devices
qemu-system-x86_64: could not connect serial device to character backend 'stdio'

real 0m0.012s
user 0m0.006s
sys 0m0.007s

Tested-by: Mario Casquero 


On Wed, Jan 31, 2024 at 7:24 PM David Hildenbrand  wrote:
>
> On 31.01.24 17:53, Mark Kanda wrote:
> > QEMU initializes preallocated backend memory as the objects are parsed from
> > the command line. This is not optimal in some cases (e.g. memory spanning
> > multiple NUMA nodes) because the memory objects are initialized in series.
> >
> > Allow the initialization to occur in parallel (asynchronously). In order to
> > ensure optimal thread placement, asynchronous initialization requires
> > prealloc context threads to be in use.
> >
> > Signed-off-by: Mark Kanda 
> > Signed-off-by: David Hildenbrand 
> > ---
>
> So, this LGTM. There might be ways to not rely on phases to achieve what
> we want to achieve (e.g., let the machine set an internal property on
> memory backends we create from the cmdline), but this should do as well.
>
> I'll wait a bit for more feedback. If there is none, I'll route this
> through my tree (after doing a quick sanity test).
>
> Thanks!
>
> --
> Cheers,
>
> David / dhildenb
>
>




Re: [PATCH v1 2/2] memory-device: reintroduce memory region size check

2024-01-19 Thread Mario Casquero
This series has also been successfully tested on x86_64.
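
For the x86_64 run, the checks mirror example (a) from the patch
description quoted below, e.g.:

# (a) 1M with 2M THP
qemu-system-x86_64 -m 4g,maxmem=16g,slots=1 -S -nodefaults -nographic \
    -object memory-backend-ram,id=mem1,size=1M \
    -device pc-dimm,id=dimm1,memdev=mem1
-> backend memory size must be multiple of 0x200000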

Tested-by: Mario Casquero 

On Thu, Jan 18, 2024 at 4:08 AM Zhenyu Zhang  wrote:
>
> [PATCH v1 2/2] memory-device: reintroduce memory region size check
>
> Tested on aarch64 with a 64k base page size.
> The patches work well on my Ampere host.
> The test results are as expected.
>
> (a) 1M with 512M THP
> /home/zhenyzha/sandbox/qemu.main/build/qemu-system-aarch64 \
> -name 'avocado-vt-vm1'  \
> -sandbox on \
> :
> -nodefaults \
> -m 4096,maxmem=32G,slots=4 \
> -object memory-backend-ram,id=mem1,size=1M \
> -device pc-dimm,id=dimm1,memdev=mem1 \
> -smp 4  \
> -cpu 'host' \
> -accel kvm \
> -monitor stdio
> -> backend memory size must be multiple of 0x200000
>
> (b) 1G+1byte
> /home/zhenyzha/sandbox/qemu.main/build/qemu-system-aarch64 \
> -name 'avocado-vt-vm1'  \
> -sandbox on \
> :
> -nodefaults \
> -m 4096,maxmem=32G,slots=4 \
> -object memory-backend-ram,id=mem1,size=1073741825B \
> -device pc-dimm,id=dimm1,memdev=mem1 \
> -smp 4  \
> -cpu 'host' \
> -accel kvm \
> -monitor stdio
> -> backend memory size must be multiple of 0x200000
>
> (c) Unaligned hugetlb size (2M)
> /home/zhenyzha/sandbox/qemu.main/build/qemu-system-aarch64 \
> -name 'avocado-vt-vm1'  \
> -sandbox on \
> :
> -nodefaults \
> -m 4096,maxmem=32G,slots=4 \
> -object memory-backend-file,id=mem1,prealloc=yes,mem-path=/mnt/kvm_hugepage,size=2047M \
> -device pc-dimm,id=dimm1,memdev=mem1 \
> -smp 4  \
> -cpu 'host' \
> -accel kvm \
> -monitor stdio
> -> backend memory size must be multiple of 0x200000
>
> (d)  2M with 512M hugetlb size
> /home/zhenyzha/sandbox/qemu.main/build/qemu-system-aarch64 \
> -name 'avocado-vt-vm1'  \
> -sandbox on \
> :
> -nodefaults \
> -m 4096,maxmem=32G,slots=4 \
> -object memory-backend-file,id=mem1,mem-path=/dev/hugepages/tmp,size=2M \
> -device pc-dimm,id=dimm1,memdev=mem1 \
> -smp 4  \
> -cpu 'host' \
> -accel kvm \
> -monitor stdio
> -> memory size 0x200000 must be equal to or larger than page size 0x20000000
>
> (e)  511M with 512M hugetlb size
> /home/zhenyzha/sandbox/qemu.main/build/qemu-system-aarch64 \
> -name 'avocado-vt-vm1'  \
> -sandbox on \
> :
> -nodefaults \
> -m 4096,maxmem=32G,slots=4 \
> -object memory-backend-file,id=mem1,mem-path=/dev/hugepages/tmp,size=511M \
> -device pc-dimm,id=dimm1,memdev=mem1 \
> -smp 4  \
> -cpu 'host' \
> -accel kvm \
> -monitor stdio
> -> memory size 0x1ff00000 must be equal to or larger than page size 0x20000000
>
> Tested-by: Zhenyu Zhang 
>
>
>
> On Wed, Jan 17, 2024 at 9:56 PM David Hildenbrand  wrote:
> >
> > We used to check that the memory region size is a multiple of the overall
> > requested address alignment for the device memory address.
> >
> > We removed that check, because there are cases (i.e., hv-balloon) where
> > devices unconditionally request an address alignment that has a very large
> > alignment (i.e., 32 GiB), but the actual memory device size might not be
> > a multiple of that alignment.
> >
> > However, this change:
> >
> > (a) allows for some practically impossible DIMM sizes, like "1GB+1 byte".
> > (b) allows for DIMMs that partially cover hugetlb pages, previously
> > reported in [1].
> >
> > Both scenarios don't make any sense: we might even waste memory.
> >
> > So let's reintroduce that check, but only check that the
> > memory region size is a multiple of the memory region alignment (i.e.,
> > page size, huge page size), but not any additional memory device
> > requirements communicated using md->get_min_alignment().
> >
> > The following examples now fail again as expected:
> >
> > (a) 1M with 2M THP
> >  qemu-system-x86_64 -m 4g,maxmem=16g,slots=1 -S -nodefaults -nographic \
> >  -object memory-backend-ram,id=mem1,size=1M \
> >  -device pc-dimm,id=dimm1,memdev=mem1
> >  -> backend memory size must be multiple of 0x200000
> >
> > (b) 1G+1byte
> >
> >  qemu-system-x86_64 -m 4g,maxmem=16g,slots=1 -S -nodefaults -nographic \
> >-object memory-backend-ram,id=mem1,size=1073741825B \
> >-device pc-dimm,id=dimm1,memdev=mem1
> >  -> backend memory size must be multiple of 0x200000
> >
> > (c) Unaligned hugetlb size (2M)
> >
> >  qemu-system-x86_64 -m 4g,maxmem=16g,slots=1 -S -nodefaults -nographic \
> >-object memory-backend-file,id=mem1,mem-path=/dev/hugepages/tmp,size=511M \
> >-device pc-dimm,id=dimm1,memdev=mem1

Re: [PATCH 6/7] balloon: Fix a misleading error message

2023-11-02 Thread Mario Casquero
This patch has been successfully tested by QE. Start a VM with a
virtio-balloon device and resize it to an invalid value. Check that
the error message now refers to 'value' instead of 'target'.
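
For comparison, a successful resize looks like this (the value is in bytes
and illustrative):

{"execute": "balloon", "arguments": {"value": 1073741824}}
{"return": {}}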

Tested-by: Mario Casquero 

On Tue, Oct 31, 2023 at 12:12 PM Markus Armbruster  wrote:
>
> The error message
>
> {"execute": "balloon", "arguments":{"value": -1}}
> {"error": {"class": "GenericError", "desc": "Parameter 'target' expects a 
> size"}}
>
> points to 'target' instead of 'value'.  Fix:
>
> {"error": {"class": "GenericError", "desc": "Parameter 'value' expects a 
> size"}}
>
> Root cause: qmp_balloon()'s parameter is named @target.  Rename it to
> @value to match the QAPI schema.
>
> Signed-off-by: Markus Armbruster 
> ---
>  system/balloon.c | 10 +-
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/system/balloon.c b/system/balloon.c
> index e0e8969a4b..fda7af832e 100644
> --- a/system/balloon.c
> +++ b/system/balloon.c
> @@ -90,17 +90,17 @@ BalloonInfo *qmp_query_balloon(Error **errp)
>  return info;
>  }
>
> -void qmp_balloon(int64_t target, Error **errp)
> +void qmp_balloon(int64_t value, Error **errp)
>  {
>  if (!have_balloon(errp)) {
>  return;
>  }
>
> -if (target <= 0) {
> -error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "target", "a size");
> +if (value <= 0) {
> +error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "value", "a size");
>  return;
>  }
>
> -trace_balloon_event(balloon_opaque, target);
> -balloon_event_fn(balloon_opaque, target);
> +trace_balloon_event(balloon_opaque, value);
> +balloon_event_fn(balloon_opaque, value);
>  }
> --
> 2.41.0
>
>




Re: [PATCH v1] virtio-mem: fix division by zero in virtio_mem_activate_memslots_to_plug()

2023-10-24 Thread Mario Casquero
This patch has been successfully tested by QE. With a debug QEMU
build, start a VM with a virtio-mem device and 'dynamic-memslots=off';
the device can be resized seamlessly, with no floating-point exception.
Tested-by: Mario Casquero 
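
The crash pattern is easy to demonstrate in isolation. A standalone sketch
(simplified names, not the QEMU source): before the fix, the index
computation sat in the variable initializers ahead of the early return, so
the division ran even when the function was about to bail out:

#include <stdint.h>
#include <stdio.h>

static unsigned int slot_index(uint64_t offset, uint64_t memslot_size)
{
    /* With dynamic-memslots=off, memslot_size is 0; integer division by
     * zero raises SIGFPE on x86 ("Floating point exception"). */
    return offset / memslot_size;
}

int main(void)
{
    printf("%u\n", slot_index(0x40000000, 0));  /* crashes here */
    return 0;
}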




On Mon, Oct 23, 2023 at 1:13 PM David Hildenbrand  wrote:
>
> When running with "dynamic-memslots=off", we enter
> virtio_mem_activate_memslots_to_plug() to return immediately again
> because "vmem->dynamic_memslots == false". However, the compiler might
> not optimize out calculating start_idx+end_idx, where we divide by
> vmem->memslot_size. In such a configuration, the memslot size is 0 and
> we'll get a division by zero:
>
> (qemu) qom-set vmem0 requested-size 3G
> (qemu) q35.sh: line 38: 622940 Floating point exception(core dumped)
>
> The same is true for virtio_mem_deactivate_unplugged_memslots(), however
> we never really reach that code without a prior
> virtio_mem_activate_memslots_to_plug() call.
>
> Let's fix it by simply calling these functions only with
> "dynamic-memslots=on".
>
> This was found when using a debug build of QEMU.
>
> Reported-by: Mario Casquero 
> Fixes: 177f9b1ee464 ("virtio-mem: Expose device memory dynamically via multiple memslots if enabled")
> Cc: Michael S. Tsirkin 
> Cc: Maciej S. Szmigiero 
> Signed-off-by: David Hildenbrand 
> ---
>  hw/virtio/virtio-mem.c | 22 --
>  1 file changed, 12 insertions(+), 10 deletions(-)
>
> diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
> index 9dc3c61b5a..be4b0b364f 100644
> --- a/hw/virtio/virtio-mem.c
> +++ b/hw/virtio/virtio-mem.c
> @@ -525,9 +525,7 @@ static void virtio_mem_activate_memslots_to_plug(VirtIOMEM *vmem,
>   vmem->memslot_size;
>  unsigned int idx;
>
> -if (!vmem->dynamic_memslots) {
> -return;
> -}
> +assert(vmem->dynamic_memslots);
>
>  /* Activate all involved memslots in a single transaction. */
>  memory_region_transaction_begin();
> @@ -547,9 +545,7 @@ static void virtio_mem_deactivate_unplugged_memslots(VirtIOMEM *vmem,
>   vmem->memslot_size;
>  unsigned int idx;
>
> -if (!vmem->dynamic_memslots) {
> -return;
> -}
> +assert(vmem->dynamic_memslots);
>
>  /* Deactivate all memslots with unplugged blocks in a single transaction. */
>  memory_region_transaction_begin();
> @@ -598,7 +594,9 @@ static int virtio_mem_set_block_state(VirtIOMEM *vmem, uint64_t start_gpa,
>  virtio_mem_notify_unplug(vmem, offset, size);
>  virtio_mem_set_range_unplugged(vmem, start_gpa, size);
>  /* Deactivate completely unplugged memslots after updating the state. */
> -virtio_mem_deactivate_unplugged_memslots(vmem, offset, size);
> +if (vmem->dynamic_memslots) {
> +virtio_mem_deactivate_unplugged_memslots(vmem, offset, size);
> +}
>  return 0;
>  }
>
> @@ -635,9 +633,11 @@ static int virtio_mem_set_block_state(VirtIOMEM *vmem, uint64_t start_gpa,
>   * blocks we are plugging here. The following notification will inform
>   * registered listeners about the blocks we're plugging.
>   */
> -virtio_mem_activate_memslots_to_plug(vmem, offset, size);
> +if (vmem->dynamic_memslots) {
> +virtio_mem_activate_memslots_to_plug(vmem, offset, size);
> +}
>  ret = virtio_mem_notify_plug(vmem, offset, size);
> -if (ret) {
> +if (ret && vmem->dynamic_memslots) {
>  virtio_mem_deactivate_unplugged_memslots(vmem, offset, size);
>  }
>  }
> @@ -749,7 +749,9 @@ static int virtio_mem_unplug_all(VirtIOMEM *vmem)
>  notifier_list_notify(&vmem->size_change_notifiers, &vmem->size);
>
>  /* Deactivate all memslots after updating the state. */
> -virtio_mem_deactivate_unplugged_memslots(vmem, 0, region_size);
> +if (vmem->dynamic_memslots) {
> +virtio_mem_deactivate_unplugged_memslots(vmem, 0, region_size);
> +}
>  }
>
>  trace_virtio_mem_unplugged_all();
> --
> 2.41.0
>




Re: [PATCH v4 07/11] softmmu/physmem: Never return directories from file_ram_open()

2023-09-07 Thread Mario Casquero
This series has been successfully tested by QE. Specify a directory
as the mem-path of a memory-backend-file object and check that the
error message has been improved: it no longer refers to a device,
which could lead to confusion.

Tested-by: Mario Casquero 






On Wed, Sep 6, 2023 at 2:07 PM David Hildenbrand  wrote:
>
> open() does not fail on directories when opening them readonly (O_RDONLY).
>
> Currently, we succeed opening such directories and fail later during
> mmap(), resulting in a misleading error message.
>
> $ ./qemu-system-x86_64 \
> -object memory-backend-file,id=ram0,mem-path=tmp,readonly=true,size=1g
>  qemu-system-x86_64: unable to map backing store for guest RAM: No such device
>
> To identify directories and handle them accordingly in file_ram_open()
> also when readonly=true was specified, detect if we just opened a directory
> using fstat() instead. Then, fail file_ram_open() right away, similarly
> to how we now fail if the file does not exist and we want to open the
> file readonly.
>
> With this change, we get a nicer error message:
>  qemu-system-x86_64: can't open backing store tmp for guest RAM: Is a directory
>
> Note that only memory-backend-file will end up calling
> memory_region_init_ram_from_file() -> qemu_ram_alloc_from_file() ->
> file_ram_open().
>
> Reported-by: Thiner Logoer 
> Reviewed-by: Peter Xu 
> Signed-off-by: David Hildenbrand 
> ---
>  softmmu/physmem.c | 19 +++
>  1 file changed, 19 insertions(+)
>
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 138402b6cf..f1cd3ec28a 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -1299,6 +1299,25 @@ static int file_ram_open(const char *path,
>  for (;;) {
>  fd = open(path, readonly ? O_RDONLY : O_RDWR);
>  if (fd >= 0) {
> +/*
> + * open(O_RDONLY) won't fail with EISDIR. Check manually if we
> + * opened a directory and fail similarly to how we fail ENOENT
> + * in readonly mode. Note that mkstemp() would imply O_RDWR.
> + */
> +if (readonly) {
> +struct stat file_stat;
> +
> +if (fstat(fd, &file_stat)) {
> +close(fd);
> +if (errno == EINTR) {
> +continue;
> +}
> +return -errno;
> +} else if (S_ISDIR(file_stat.st_mode)) {
> +close(fd);
> +return -EISDIR;
> +}
> +}
>  /* @path names an existing file, use it */
>  break;
>  }
> --
> 2.41.0
>
>




Re: [PATCH v3 11/11] machine: Improve error message when using default RAM backend id

2023-08-29 Thread Mario Casquero
This series has been successfully tested by QE. Start a VM using the
pc.ram id but specifying a memory backend different from the default
one. Check that the error message has been improved.
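
An illustrative invocation of the workaround named in the new message
(options abridged):

qemu-system-x86_64 -M q35,memory-backend=pc.ram \
    -object memory-backend-ram,id=pc.ram,size=4G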

Tested-by: Mario Casquero 


On Wed, Aug 23, 2023 at 5:38 PM David Hildenbrand  wrote:
>
> For migration purposes, users might want to reuse the default RAM
> backend id, but specify a different memory backend.
>
> For example, to reuse "pc.ram" on q35, one has to set
> -machine q35,memory-backend=pc.ram
> Only then, can a memory backend with the id "pc.ram" be created
> manually.
>
> Let's improve the error message.
>
> Unfortunately, we cannot use error_append_hint(), because the caller
> passes &error_fatal.
>
> Suggested-by: ThinerLogoer 
> Signed-off-by: David Hildenbrand 
> ---
>  hw/core/machine.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index f0d35c6401..dbcd124d45 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -1382,7 +1382,9 @@ void machine_run_board_init(MachineState *machine, const char *mem_path, Error **errp)
>   machine_class->default_ram_id)) {
>  error_setg(errp, "object name '%s' is reserved for the default"
>  " RAM backend, it can't be used for any other purposes."
> -" Change the object's 'id' to something else",
> +" Change the object's 'id' to something else or disable"
> +" automatic creation of the default RAM backend by setting"
> +" the 'memory-backend' machine property",
>  machine_class->default_ram_id);
>  return;
>  }
> --
> 2.41.0
>
>




Re: [PATCH v3 0/7] virtio-mem: Device unplug support

2023-07-11 Thread Mario Casquero
This series has been successfully tested by QE. Start a VM, plug a
virtio-mem device, resize the device (adding memory) and verify that
the virtio-mem device cannot be unplugged. Finally, resize the device
(removing all the memory) and check that it can be unplugged
seamlessly.
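
An illustrative HMP session for that flow (device id and sizes are
hypothetical):

(qemu) qom-set vmem0 requested-size 4G
(qemu) device_del vmem0        <- rejected while memory is still plugged
(qemu) qom-set vmem0 requested-size 0
(qemu) device_del vmem0        <- succeeds once all memory is unplugged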

Tested-by: Mario Casquero 

On Mon, Jul 10, 2023 at 12:07 PM David Hildenbrand  wrote:
>
> Any further comments? IMHO this is pretty straightforward. I'll wait
> a bit longer for more feedback.
>
>
> One limitation of virtio-mem is that we cannot currently unplug virtio-mem
> devices that have all memory unplugged from the VM.
>
> Let's properly handle forced unplug (as can be triggered by the VM) and
> add support for ordinary unplug (requests) of virtio-mem devices that are
> in a compatible state (no legacy mode, no plugged memory, no plug request).
>
> Briefly tested on both, x86_64 and aarch64.
>
> v2 -> v3:
> - "virtio-md-pci: New parent type for virtio-mem-pci and virtio-pmem-pci"
>  -> Add MAINTAINERS entry
>
> v1 -> v2:
> - Reduce code duplication and implement it in a cleaner way using a
>   new abstract virtio-md-pci parent class
> - "virtio-md-pci: New parent type for virtio-mem-pci and virtio-pmem-pci"
>  -> Added, use a new parent type like virtio-input-pci
> - "pc: Factor out (un)plug handling of virtio-md-pci devices"
>  -> Added, factor it cleanly out
> - "arm/virt: Use virtio-md-pci (un)plug functions"
>  -> Added, reduce code duplication
> - "virtio-md-pci: Handle unplug of virtio based memory devices"
>  -> More generic without any device-specifics
> - "virtio-md-pci: Support unplug requests for compatible devices"
>  -> More generic without any device-specifics
> - "virtio-mem: Prepare for device unplug support"
>  -> Use callback, separated from virtio-mem-pci device change
> - "virtio-mem-pci: Device unplug support"
>  -> Use callback, separated from virtio-mem device change
>
> Cc: Peter Maydell 
> Cc: Paolo Bonzini 
> Cc: Richard Henderson 
> Cc: Eduardo Habkost 
> Cc: "Michael S. Tsirkin" 
> Cc: Marcel Apfelbaum 
> Cc: Igor Mammedov 
> Cc: qemu-...@nongnu.org
> Cc: Gavin Shan 
> Cc: Mario Casquero 
>
> David Hildenbrand (7):
>   virtio-md-pci: New parent type for virtio-mem-pci and virtio-pmem-pci
>   pc: Factor out (un)plug handling of virtio-md-pci devices
>   arm/virt: Use virtio-md-pci (un)plug functions
>   virtio-md-pci: Handle unplug of virtio based memory devices
>   virtio-md-pci: Support unplug requests for compatible devices
>   virtio-mem: Prepare for device unplug support
>   virtio-mem-pci: Device unplug support
>
>  MAINTAINERS   |   6 ++
>  hw/arm/virt.c |  81 +++-
>  hw/i386/pc.c  |  90 +++---
>  hw/virtio/Kconfig |   8 +-
>  hw/virtio/meson.build |   1 +
>  hw/virtio/virtio-md-pci.c | 151 ++
>  hw/virtio/virtio-mem-pci.c|  54 +--
>  hw/virtio/virtio-mem-pci.h|   6 +-
>  hw/virtio/virtio-mem.c|  25 +
>  hw/virtio/virtio-pmem-pci.c   |   5 +-
>  hw/virtio/virtio-pmem-pci.h   |   6 +-
>  include/hw/virtio/virtio-md-pci.h |  44 +
>  include/hw/virtio/virtio-mem.h|   1 +
>  13 files changed, 311 insertions(+), 167 deletions(-)
>  create mode 100644 hw/virtio/virtio-md-pci.c
>  create mode 100644 include/hw/virtio/virtio-md-pci.h
>
> --
> 2.41.0
>




Re: [PATCH v1 0/4] virtio-mem: Support "x-ignore-shared" migration

2023-07-06 Thread Mario Casquero
This series has been successfully tested by QE. Start a VM with an 8G
virtio-mem device and run memtester on it. Enable the x-ignore-shared
capability and then migrate. Migration was successful and the
virtio-mem device could be resized as usual.
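
Illustrative capability setup on both source and destination before
migrating (HMP; the destination URI is hypothetical):

(qemu) migrate_set_capability x-ignore-shared on
(qemu) migrate tcp:dst-host:4444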

Tested-by: Mario Casquero 

BR,
Mario




On Tue, Jun 20, 2023 at 3:05 PM David Hildenbrand  wrote:
>
> Stumbling over "x-ignore-shared" migration support for virtio-mem on
> my todo list, I remember talking to Dave G. a while ago about how
> ram_block_discard_range() in MAP_PIRVATE file mappings is possibly
> harmful when the file is used somewhere else -- for example, with VM
> templating in multiple VMs.
>
> This series adds a warning to ram_block_discard_range() in that problematic
> case and adds "x-ignore-shared" migration support for virtio-mem, which
> is pretty straight-forward. The last patch also documents how VM templating
> interacts with virtio-mem.
>
> Cc: "Michael S. Tsirkin" 
> Cc: Juan Quintela 
> Cc: Peter Xu 
> Cc: Leonardo Bras 
> Cc: Paolo Bonzini 
> Cc: "Philippe Mathieu-Daudé" 
> Cc: Peng Tao 
>
> David Hildenbrand (4):
>   softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE
> file mapping
>   virtio-mem: Skip most of virtio_mem_unplug_all() without plugged
> memory
>   migration/ram: Expose ramblock_is_ignored() as
> migrate_ram_is_ignored()
>   virtio-mem: Support "x-ignore-shared" migration
>
>  hw/virtio/virtio-mem.c   | 67 
>  include/migration/misc.h |  1 +
>  migration/postcopy-ram.c |  2 +-
>  migration/ram.c  | 14 -
>  migration/ram.h  |  3 +-
>  softmmu/physmem.c| 18 +++
>  6 files changed, 76 insertions(+), 29 deletions(-)
>
> --
> 2.40.1
>
>