Hi, I'd like to express my interest in the vhost-user memory isolation project for GSoC 2026.
My background is mostly in systems programming and OS-level work. Last year I ported NestOS (a cloud-native container OS) to a new architecture as part of OSPP [1], during which I used QEMU extensively for boot testing and debugging. As part of the same effort, I backported the QEMU fw_cfg patch to the openEuler kernel, enabling the fw_cfg sysfs interface so that guest kernels can read configuration data from QEMU [2]. I've also worked through xv6, so I'm comfortable with memory management fundamentals (page tables, address space separation, mmap). In addition, I have a competitive programming background (ICPC regional silver medalist), which has been helpful for reasoning about state tracking and concurrent data structure design.

From my reading of the project description, the core idea is to add an opt-in mode where QEMU interposes itself on the data path using Shadow Virtqueues (SVQ). Currently the backend receives guest RAM file descriptors via VHOST_USER_SET_MEM_TABLE and maps them directly, giving it full access to the entire guest address space. In memory-isolation mode, QEMU would instead intercept the kickfd/callfd eventfd notifications, read the guest's virtqueue descriptors, copy the relevant data buffers between guest RAM and a separate isolated memory region, and present the backend with an SVQ backed by that isolated region. The backend never receives the guest RAM fds; it only sees the isolated area. This keeps the change transparent to both guest drivers and existing vhost-user backends, since the vhost-user protocol itself is unchanged.

A few things I've been thinking about:

The project description mentions integrating the existing SVQ code from vhost-shadow-virtqueue.c into vhost-user.c. I understand SVQ is currently used for vDPA live migration, where it temporarily interposes on the data path to track dirty pages.
The main work here seems to be adapting that machinery for memory isolation: QEMU would allocate an isolated memory region at device startup, replace the guest RAM mappings in VHOST_USER_SET_MEM_TABLE with this isolated region, and then on each kick walk the guest virtqueue descriptors, memcpy the relevant buffers from guest RAM into the isolated area (for TX) or back from the isolated area into guest RAM (for RX), and forward the operation through the SVQ to the backend.

On the performance side, the per-I/O memcpy is unavoidable by design, but for typical network packets (64B–1500B) the cost should stay within L1/L2 cache and be fairly small. Larger transfers (e.g. virtio-blk with 128KB+ requests) might be more noticeable. I wonder if there are ways to amortize the cost, such as batching multiple descriptors before forwarding, or sizing the isolated region to stay cache-friendly.

A few questions:

- Is this expected to work with all vhost-user device types (net, blk, etc.), or is it scoped to a specific device initially?
- The existing SVQ code is used for vDPA live migration, where it temporarily interposes during the migration window and is then removed. In memory-isolation mode, the SVQ would need to stay on the data path permanently. Are there any aspects of the current SVQ implementation that assume temporary use and would need rethinking for a persistent mode?
- For testing, the project description mentions extending vhost-user-test.c. Beyond functional correctness (data integrity with isolation on), would you also want performance regression data comparing isolation on vs. off (throughput, latency)?

I'm comfortable with C and have experience with memory management and low-level debugging from my OS work. I haven't worked on QEMU's codebase directly, but I've used it extensively and I'm looking forward to digging into the internals.
Thanks,
Xuhai Chang

[1] https://gitee.com/openeuler/nestos-assembler/pulls/144
[2] https://github.com/RVCK-Project/rvck-olk/pull/107 (backport of mainline Linux commit f2de37a572853d)
