Dear all,
we are dynamically adding an IVSHMEM device to a VM that is already
running, but apparently the device is not correctly recognized by the Guest OS.
Everything works, however, if we reboot the VM after adding the new
IVSHMEM device.
This is the list of steps we execute:
1) Launch a new Guest VM with Qemu
2) Create a new IVSHMEM metadata file in the Host
3) Map that file as a new IVSHMEM device in the Guest
For this step, we use the "device_add" command in the QEMU monitor:
(qemu) device_add ivshmem,size=2048M,shm=fd:/dev/hugepages/rtemap_0:0x0:0x40000000:/dev/zero:0x0:0x3fffc000:/var/run/.dpdk_ivshmem_metadata_vm_1:0x0:0x4000
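(As a side note, the hot-plug itself can also be checked from the QEMU
monitor, e.g. with:
(qemu) info pci
which should list the new ivshmem device on bus 00, slot 04.)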
4) List the available PCI devices in the Guest with "lshw":
$ sudo lshw
....
*-memory UNCLAIMED
description: RAM memory
product: Virtio Inter-VM shared memory
vendor: Red Hat, Inc
physical id: 4
bus info: pci@0000:00:04.0
version: 00
width: 64 bits
clock: 33MHz (30.3ns)
configuration: latency=0
resources: memory:e0000000-e00000ff
5) Reboot the Guest VM and re-run the 'lshw' command:
$ sudo lshw
...
*-memory UNCLAIMED
description: RAM memory
product: Virtio Inter-VM shared memory
vendor: Red Hat, Inc
physical id: 4
bus info: pci@0000:00:04.0
version: 00
width: 64 bits
clock: 33MHz (30.3ns)
configuration: latency=0
resources: iomemory:10-f memory:febd0000-febd00ff
memory:180000000-1ffffffff
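(In both cases the BAR assignment can also be inspected more directly in the
Guest with something like:
$ sudo lspci -vv -s 00:04.0
where 00:04.0 is the bus address reported by lshw above.)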
It seems to us that, after the reboot, the IVSHMEM device is mapped in a
different way than immediately after hot-plugging it into the running VM
(compare the 'resources' lines above): right after the hot-plug only the
small 256-byte register BAR is assigned (memory:e0000000-e00000ff), while
after the reboot the 2 GB shared-memory BAR is assigned as well
(memory:180000000-1ffffffff). The side effect is that a DPDK application
based on the IVSHMEM works only in the second case; DPDK does not see the
shared memory in the first case.
Is there any way to force the Guest OS to recognize the new device without
rebooting, e.g. something like rmmod/insmod or an equivalent mechanism?
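For instance, we were wondering whether triggering a PCI rescan from sysfs
inside the Guest would be the right approach, something along the lines of:
$ echo 1 | sudo tee /sys/bus/pci/rescan
or removing and re-scanning the single device:
$ echo 1 | sudo tee /sys/bus/pci/devices/0000:00:04.0/remove
$ echo 1 | sudo tee /sys/bus/pci/rescan
but we are not sure whether that is enough to get the large shared-memory
BAR assigned without a reboot.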
Thank you in advance for your help,
Ivano