Re: [RFC PATCH v5 00/26] vDPA shadow virtqueue
On Tue, Nov 2, 2021 at 5:26 AM Jason Wang wrote:
>
> On 2021/10/30 2:34 AM, Eugenio Pérez wrote:
> > This series enables shadow virtqueue (SVQ) for vhost-vdpa devices. It
> > is intended as a new method of tracking the memory the devices touch
> > during a migration process: instead of relying on the vhost device's
> > dirty logging capability, SVQ intercepts the VQ dataplane, forwarding
> > the descriptors between VM and device. This way qemu is the effective
> > writer of the guest's memory, just as in qemu's virtio device
> > operation.
> >
> > When SVQ is enabled, qemu offers a new virtual address space to the
> > device to read and write into, and it maps the new vrings and the
> > guest memory in it. SVQ also intercepts kicks and calls between the
> > device and the guest. The used buffer relay would cause the dirty
> > memory to be tracked, but in this RFC SVQ is not enabled
> > automatically on migration.
> >
> > Thanks to being a buffer relay system, SVQ can also be used to
> > connect devices and drivers with different capabilities, like
> > devices that only support packed vrings (not split) and old guests
> > with no packed support in the driver.
> >
> > It is based on the ideas of DPDK SW-assisted LM, in the series at
> > https://patchwork.dpdk.org/cover/48370/ . However, this series does
> > not map the shadow vq in the guest's VA, but in qemu's.
> >
> > For qemu to use shadow virtqueues, the guest virtio driver must not
> > use features like event_idx.
> >
> > SVQ needs to be enabled with the QMP command:
> >
> > { "execute": "x-vhost-set-shadow-vq",
> >   "arguments": { "name": "vhost-vdpa0", "enable": true } }
> >
> > This series includes some patches, to be deleted in the final
> > version, that help with its testing. The first two of the series
> > have been sent separately, but they haven't been included in qemu's
> > main branch.
> >
> > The two after them add the feature of stopping the device and being
> > able to set and get its status. They are intended to be used with
> > the vp_vdpa driver in a nested environment, so they are also
> > external to this series. The vp_vdpa driver also needs modifications
> > to forward the new status bit; they will be proposed separately.
> >
> > Patches 5-12 prepare the SVQ and the QMP command to support
> > guest-to-host notification forwarding. If SVQ is enabled with these
> > applied and the device supports it, that part can be tested in
> > isolation (for example, with networking), hopping through SVQ.
> >
> > The same is true of patches 13-17, but for device-to-guest
> > notifications.
> >
> > Based on them, patches 18 to 22 implement the actual buffer
> > forwarding, using some features introduced in the previous ones.
> > However, they need a host device with no iommu, something that is
> > not available at the moment.
> >
> > The last part of the series makes proper use of the host iommu, so
> > the driver can access this new virtual address space.
> >
> > Comments are welcome.
>
> I think we need to do some benchmarks to see the performance impact.
>
> Thanks

Ok, I will add them for the next revision. Thanks!

> > TODO:
> > * Event, indirect, packed, and other virtio features.
> > * Separate buffer forwarding into its own AIO context, so we can
> >   throw more threads at that task and don't need to stop the main
> >   event loop.
> > * Support multiqueue virtio-net vdpa.
> > * Proper documentation.
> >
> > Changes from v4 RFC:
> > * Support for allocating / freeing iova ranges in the IOVA tree,
> >   extending the already present iova-tree for that.
> > * Proper validation of guest features. Now SVQ can negotiate a
> >   different set of features with the device when enabled.
> > * Support for host notifier memory regions.
> > * Handling of a full SVQ in case the guest's descriptors span
> >   different memory regions (qemu's VA chunks).
> > * Flush pending used buffers at the end of SVQ operation.
> > * The QMP command now looks devices up by NetClientState name.
> >   Other devices will need to implement their own way to enable vdpa.
> > * Renamed the QMP command to "set", so it reads more like a mode of
> >   operation.
> > * Better use of the qemu error system.
> > * Made a few assertions into proper error-handling paths.
> > * Added more documentation.
> > * Less coupling of virtio / vhost that could cause friction on
> >   changes.
> > * Addressed many other small comments and small fixes.
> >
> > Changes from v3 RFC:
> > * Moved everything to the vhost-vdpa backend. A big change; this
> >   allowed some cleanup, but more code has been added in other places.
> > * More use of glib utilities, especially to manage memory.
> > v3 link:
> > https://lists.nongnu.org/archive/html/qemu-devel/2021-05/msg06032.html
> >
> > Changes from v2 RFC:
> > * Added vhost-vdpa device support.
> > * Fixed some memory leaks pointed out in different comments.
> > v2 link:
> > https://lists.nongnu.org/archive/html/qemu-devel/2021-03/msg05600.html
> >
> > Changes from v1 RFC:
> > * Use QMP instead of migration to start SVQ mode.
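For anyone who wants to try the x-vhost-set-shadow-vq command above by
hand, here is a minimal sketch of sending it over a QMP socket. The
socket path is an assumption (start qemu with e.g.
-qmp unix:/tmp/qmp.sock,server,nowait); only the command name and
arguments come from the cover letter:

    import json
    import socket

    def qmp_command(path, cmd):
        # Connect to qemu's QMP UNIX socket and run one command.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(path)
        f = s.makefile("r")
        f.readline()  # discard the QMP greeting banner
        # QMP requires capabilities negotiation before any command.
        s.sendall(json.dumps({"execute": "qmp_capabilities"}).encode())
        f.readline()  # discard the negotiation ack
        s.sendall(json.dumps(cmd).encode())
        return json.loads(f.readline())  # the command's response

    # Enable SVQ on the vhost-vdpa0 NetClientState, as in the cover letter.
    print(qmp_command("/tmp/qmp.sock", {
        "execute": "x-vhost-set-shadow-vq",
        "arguments": {"name": "vhost-vdpa0", "enable": True},
    }))

qemu's scripts/qmp/qmp-shell offers the same interactively.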
Re: [RFC PATCH v5 00/26] vDPA shadow virtqueue
On 2021/10/30 2:34 AM, Eugenio Pérez wrote:
> This series enables shadow virtqueue (SVQ) for vhost-vdpa devices. It
> is intended as a new method of tracking the memory the devices touch
> during a migration process: instead of relying on the vhost device's
> dirty logging capability, SVQ intercepts the VQ dataplane, forwarding
> the descriptors between VM and device. This way qemu is the effective
> writer of the guest's memory, just as in qemu's virtio device
> operation.
>
> When SVQ is enabled, qemu offers a new virtual address space to the
> device to read and write into, and it maps the new vrings and the
> guest memory in it. SVQ also intercepts kicks and calls between the
> device and the guest. The used buffer relay would cause the dirty
> memory to be tracked, but in this RFC SVQ is not enabled automatically
> on migration.
>
> Thanks to being a buffer relay system, SVQ can also be used to connect
> devices and drivers with different capabilities, like devices that
> only support packed vrings (not split) and old guests with no packed
> support in the driver.
>
> It is based on the ideas of DPDK SW-assisted LM, in the series at
> https://patchwork.dpdk.org/cover/48370/ . However, this series does
> not map the shadow vq in the guest's VA, but in qemu's.
>
> For qemu to use shadow virtqueues, the guest virtio driver must not
> use features like event_idx.
>
> SVQ needs to be enabled with the QMP command:
>
> { "execute": "x-vhost-set-shadow-vq",
>   "arguments": { "name": "vhost-vdpa0", "enable": true } }
>
> This series includes some patches, to be deleted in the final version,
> that help with its testing. The first two of the series have been sent
> separately, but they haven't been included in qemu's main branch.
>
> The two after them add the feature of stopping the device and being
> able to set and get its status. They are intended to be used with the
> vp_vdpa driver in a nested environment, so they are also external to
> this series. The vp_vdpa driver also needs modifications to forward
> the new status bit; they will be proposed separately.
>
> Patches 5-12 prepare the SVQ and the QMP command to support
> guest-to-host notification forwarding. If SVQ is enabled with these
> applied and the device supports it, that part can be tested in
> isolation (for example, with networking), hopping through SVQ.
>
> The same is true of patches 13-17, but for device-to-guest
> notifications.
>
> Based on them, patches 18 to 22 implement the actual buffer
> forwarding, using some features introduced in the previous ones.
> However, they need a host device with no iommu, something that is not
> available at the moment.
>
> The last part of the series makes proper use of the host iommu, so the
> driver can access this new virtual address space.
>
> Comments are welcome.

I think we need to do some benchmarks to see the performance impact.

Thanks

> TODO:
> * Event, indirect, packed, and other virtio features.
> * Separate buffer forwarding into its own AIO context, so we can throw
>   more threads at that task and don't need to stop the main event
>   loop.
> * Support multiqueue virtio-net vdpa.
> * Proper documentation.
>
> Changes from v4 RFC:
> * Support for allocating / freeing iova ranges in the IOVA tree,
>   extending the already present iova-tree for that.
> * Proper validation of guest features. Now SVQ can negotiate a
>   different set of features with the device when enabled.
> * Support for host notifier memory regions.
> * Handling of a full SVQ in case the guest's descriptors span
>   different memory regions (qemu's VA chunks).
> * Flush pending used buffers at the end of SVQ operation.
> * The QMP command now looks devices up by NetClientState name. Other
>   devices will need to implement their own way to enable vdpa.
> * Renamed the QMP command to "set", so it reads more like a mode of
>   operation.
> * Better use of the qemu error system.
> * Made a few assertions into proper error-handling paths.
> * Added more documentation.
> * Less coupling of virtio / vhost that could cause friction on
>   changes.
> * Addressed many other small comments and small fixes.
>
> Changes from v3 RFC:
> * Moved everything to the vhost-vdpa backend. A big change; this
>   allowed some cleanup, but more code has been added in other places.
> * More use of glib utilities, especially to manage memory.
> v3 link:
> https://lists.nongnu.org/archive/html/qemu-devel/2021-05/msg06032.html
>
> Changes from v2 RFC:
> * Added vhost-vdpa device support.
> * Fixed some memory leaks pointed out in different comments.
> v2 link:
> https://lists.nongnu.org/archive/html/qemu-devel/2021-03/msg05600.html
>
> Changes from v1 RFC:
> * Use QMP instead of migration to start SVQ mode.
> * Only accept IOMMU devices, for behavior closer to the target devices
>   (vDPA).
> * Fix invalid masking/unmasking of the vhost call fd.
> * Use proper methods for synchronization.
> * No need to modify VirtIO device code; all of the changes are
>   contained in vhost code.
> * Delete superfluous code.
> * An intermediate RFC was sent with only the notification forwarding
>   changes. It can be seen in
>   https://patchew.org/QEMU/20210129205415.876290-1-epere...@redhat.com/
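Purely as an illustration of the kick interception described in the
cover letter (this is not code from the series): SVQ sits between the
guest's notification eventfd and the device's, so every guest kick is
first consumed by qemu, which forwards the descriptors and then
re-kicks the real device. A sketch using Linux eventfds via Python
3.10's os.eventfd as stand-ins for qemu's ioeventfd / vhost kick fd
wiring:

    import os

    # Stand-in eventfds: in qemu, guest_kick would be the ioeventfd the
    # guest writes to, and device_kick the eventfd the vhost device polls.
    guest_kick = os.eventfd(0)
    device_kick = os.eventfd(0)

    def handle_guest_kick():
        os.eventfd_read(guest_kick)       # consume the guest's notification
        # ... here SVQ would copy/translate the available descriptors into
        # the shadow vring, making qemu the effective writer of guest memory
        os.eventfd_write(device_kick, 1)  # re-kick the real device

    os.eventfd_write(guest_kick, 1)       # simulate a guest kick
    handle_guest_kick()
    assert os.eventfd_read(device_kick) == 1  # device side sees one kick

The device-to-guest call path is the mirror image, with qemu consuming
the device's call fd and injecting the interrupt toward the guest.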
Re: [RFC PATCH v5 00/26] vDPA shadow virtqueue
On Fri, Oct 29, 2021 at 8:41 PM Eugenio Pérez wrote:
>
> This series enables shadow virtqueue (SVQ) for vhost-vdpa devices. It
> is intended as a new method of tracking the memory the devices touch
> during a migration process: instead of relying on the vhost device's
> dirty logging capability, SVQ intercepts the VQ dataplane, forwarding
> the descriptors between VM and device. This way qemu is the effective
> writer of the guest's memory, just as in qemu's virtio device
> operation.
>
> When SVQ is enabled, qemu offers a new virtual address space to the
> device to read and write into, and it maps the new vrings and the
> guest memory in it. SVQ also intercepts kicks and calls between the
> device and the guest. The used buffer relay would cause the dirty
> memory to be tracked, but in this RFC SVQ is not enabled automatically
> on migration.
>
> Thanks to being a buffer relay system, SVQ can also be used to connect
> devices and drivers with different capabilities, like devices that
> only support packed vrings (not split) and old guests with no packed
> support in the driver.
>
> It is based on the ideas of DPDK SW-assisted LM, in the series at
> https://patchwork.dpdk.org/cover/48370/ . However, this series does
> not map the shadow vq in the guest's VA, but in qemu's.
>
> For qemu to use shadow virtqueues, the guest virtio driver must not
> use features like event_idx.
>
> SVQ needs to be enabled with the QMP command:
>
> { "execute": "x-vhost-set-shadow-vq",
>   "arguments": { "name": "vhost-vdpa0", "enable": true } }
>
> This series includes some patches, to be deleted in the final version,
> that help with its testing. The first two of the series have been sent
> separately, but they haven't been included in qemu's main branch.
>
> The two after them add the feature of stopping the device and being
> able to set and get its status. They are intended to be used with the
> vp_vdpa driver in a nested environment, so they are also external to
> this series. The vp_vdpa driver also needs modifications to forward
> the new status bit; they will be proposed separately.
>
> Patches 5-12 prepare the SVQ and the QMP command to support
> guest-to-host notification forwarding. If SVQ is enabled with these
> applied and the device supports it, that part can be tested in
> isolation (for example, with networking), hopping through SVQ.
>
> The same is true of patches 13-17, but for device-to-guest
> notifications.
>
> Based on them, patches 18 to 22 implement the actual buffer
> forwarding, using some features introduced in the previous ones.
> However, they need a host device with no iommu, something that is not
> available at the moment.
>
> The last part of the series makes proper use of the host iommu, so the
> driver can access this new virtual address space.
>
> Comments are welcome.
>
> TODO:
> * Event, indirect, packed, and other virtio features.
> * Separate buffer forwarding into its own AIO context, so we can throw
>   more threads at that task and don't need to stop the main event
>   loop.
> * Support multiqueue virtio-net vdpa.
> * Proper documentation.
>
> Changes from v4 RFC:
> * Support for allocating / freeing iova ranges in the IOVA tree,
>   extending the already present iova-tree for that.
> * Proper validation of guest features. Now SVQ can negotiate a
>   different set of features with the device when enabled.
> * Support for host notifier memory regions.
> * Handling of a full SVQ in case the guest's descriptors span
>   different memory regions (qemu's VA chunks).
> * Flush pending used buffers at the end of SVQ operation.
> * The QMP command now looks devices up by NetClientState name. Other
>   devices will need to implement their own way to enable vdpa.
> * Renamed the QMP command to "set", so it reads more like a mode of
>   operation.
> * Better use of the qemu error system.
> * Made a few assertions into proper error-handling paths.
> * Added more documentation.
> * Less coupling of virtio / vhost that could cause friction on
>   changes.
> * Addressed many other small comments and small fixes.
>
> Changes from v3 RFC:
> * Moved everything to the vhost-vdpa backend. A big change; this
>   allowed some cleanup, but more code has been added in other places.
> * More use of glib utilities, especially to manage memory.
> v3 link:
> https://lists.nongnu.org/archive/html/qemu-devel/2021-05/msg06032.html
>
> Changes from v2 RFC:
> * Added vhost-vdpa device support.
> * Fixed some memory leaks pointed out in different comments.
> v2 link:
> https://lists.nongnu.org/archive/html/qemu-devel/2021-03/msg05600.html
>
> Changes from v1 RFC:
> * Use QMP instead of migration to start SVQ mode.
> * Only accept IOMMU devices, for behavior closer to the target devices
>   (vDPA).
> * Fix invalid masking/unmasking of the vhost call fd.
> * Use proper methods for synchronization.
> * No need to modify VirtIO device code; all of the changes are
>   contained in vhost code.
> * Delete superfluous code.
> * An intermediate RFC was sent with only the notification forwarding
>   changes. It can be seen in
>   https://patchew.org/QEMU/20210129205415.876290-1-epere...@redhat.com/