On 10/3/23 17:55, Stefan Hajnoczi wrote:
> On Tue, 3 Oct 2023 at 10:41, Michael S. Tsirkin <m...@redhat.com> wrote:
>>
>> On Sun, Aug 27, 2023 at 08:29:37PM +0200, Laszlo Ersek wrote:
>>> (1) The virtio-1.0 specification
>>> <http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.html> writes:
>>>
>>>> 3 General Initialization And Device Operation
>>>> 3.1 Device Initialization
>>>> 3.1.1 Driver Requirements: Device Initialization
>>>>
>>>> [...]
>>>>
>>>> 7. Perform device-specific setup, including discovery of virtqueues for
>>>> the device, optional per-bus setup, reading and possibly writing the
>>>> device’s virtio configuration space, and population of virtqueues.
>>>>
>>>> 8. Set the DRIVER_OK status bit. At this point the device is “live”.
>>>
>>> and
>>>
>>>> 4 Virtio Transport Options
>>>> 4.1 Virtio Over PCI Bus
>>>> 4.1.4 Virtio Structure PCI Capabilities
>>>> 4.1.4.3 Common configuration structure layout
>>>> 4.1.4.3.2 Driver Requirements: Common configuration structure layout
>>>>
>>>> [...]
>>>>
>>>> The driver MUST configure the other virtqueue fields before enabling the
>>>> virtqueue with queue_enable.
>>>>
>>>> [...]
>>>
>>> These together mean that the following sub-sequence of steps is valid for
>>> a virtio-1.0 guest driver:
>>>
>>> (1.1) set "queue_enable" for the needed queues as the final part of device
>>> initialization step (7),
>>>
>>> (1.2) set DRIVER_OK in step (8),
>>>
>>> (1.3) immediately start sending virtio requests to the device.
>>>
>>> (2) When vhost-user is enabled, and the VHOST_USER_F_PROTOCOL_FEATURES
>>> special virtio feature is negotiated, then virtio rings start in disabled
>>> state, according to
>>> <https://qemu-project.gitlab.io/qemu/interop/vhost-user.html#ring-states>.
>>> In this case, explicit VHOST_USER_SET_VRING_ENABLE messages are needed for
>>> enabling vrings.
>>>
>>> Therefore setting "queue_enable" from the guest (1.1) is a *control plane*
>>> operation, which travels from the guest through QEMU to the vhost-user
>>> backend, using a unix domain socket.
>>>
>>> Whereas sending a virtio request (1.3) is a *data plane* operation, which
>>> evades QEMU -- it travels from guest to the vhost-user backend via
>>> eventfd.
>>>
>>> This means that steps (1.1) and (1.3) travel through different channels,
>>> and their relative order can be reversed, as perceived by the vhost-user
>>> backend.
>>>
>>> That's exactly what happens when OVMF's virtiofs driver (VirtioFsDxe) runs
>>> against the Rust-language virtiofsd version 1.7.2. (Which uses version
>>> 0.10.1 of the vhost-user-backend crate, and version 0.8.1 of the vhost
>>> crate.)
>>>
>>> Namely, when VirtioFsDxe binds a virtiofs device, it goes through the
>>> device initialization steps (i.e., control plane operations), and
>>> immediately sends a FUSE_INIT request too (i.e., performs a data plane
>>> operation). In the Rust-language virtiofsd, this creates a race between
>>> two components that run *concurrently*, i.e., in different threads or
>>> processes:
>>>
>>> - Control plane, handling vhost-user protocol messages:
>>>
>>>   The "VhostUserSlaveReqHandlerMut::set_vring_enable" method
>>>   [crates/vhost-user-backend/src/handler.rs] handles
>>>   VHOST_USER_SET_VRING_ENABLE messages, and updates each vring's "enabled"
>>>   flag according to the message processed.
>>>
>>> - Data plane, handling virtio / FUSE requests:
>>>
>>>   The "VringEpollHandler::handle_event" method
>>>   [crates/vhost-user-backend/src/event_loop.rs] handles the incoming
>>>   virtio / FUSE request, consuming the virtio kick at the same time. If
>>>   the vring's "enabled" flag is set, the virtio / FUSE request is
>>>   processed genuinely. If the vring's "enabled" flag is clear, then the
>>>   virtio / FUSE request is discarded.
>>>
>>> Note that OVMF enables the queue *first*, and sends FUSE_INIT *second*.
>>> However, if the data plane processor in virtiofsd wins the race, then it
>>> sees the FUSE_INIT *before* the control plane processor took notice of
>>> VHOST_USER_SET_VRING_ENABLE and green-lit the queue for the data plane
>>> processor. Therefore the latter drops FUSE_INIT on the floor, and goes
>>> back to waiting for further virtio / FUSE requests with epoll_wait.
>>> Meanwhile OVMF is stuck waiting for the FUSE_INIT response -- a deadlock.
>>>
>>> The deadlock is not deterministic. OVMF hangs infrequently during first
>>> boot. However, OVMF hangs almost certainly during reboots from the UEFI
>>> shell.
>>>
>>> The race can be "reliably masked" by inserting a very small delay -- a
>>> single debug message -- at the top of "VringEpollHandler::handle_event",
>>> i.e., just before the data plane processor checks the "enabled" field of
>>> the vring. That delay suffices for the control plane processor to act upon
>>> VHOST_USER_SET_VRING_ENABLE.
>>>
>>> We can deterministically prevent the race in QEMU, by blocking OVMF inside
>>> step (1.1) -- i.e., in the write to the "queue_enable" register -- until
>>> VHOST_USER_SET_VRING_ENABLE actually *completes*. That way OVMF's VCPU
>>> cannot advance to the FUSE_INIT submission before virtiofsd's control
>>> plane processor takes notice of the queue being enabled.
>>>
>>> Wait for VHOST_USER_SET_VRING_ENABLE completion by:
>>>
>>> - setting the NEED_REPLY flag on VHOST_USER_SET_VRING_ENABLE, and waiting
>>>   for the reply, if the VHOST_USER_PROTOCOL_F_REPLY_ACK vhost-user feature
>>>   has been negotiated, or
>>>
>>> - performing a separate VHOST_USER_GET_FEATURES *exchange*, which requires
>>>   a backend response regardless of VHOST_USER_PROTOCOL_F_REPLY_ACK.
>>>
>>> Cc: "Michael S. Tsirkin" <m...@redhat.com> (supporter:vhost)
>>> Cc: Eugenio Perez Martin <epere...@redhat.com>
>>> Cc: German Maglione <gmagli...@redhat.com>
>>> Cc: Liu Jiang <ge...@linux.alibaba.com>
>>> Cc: Sergio Lopez Pascual <s...@redhat.com>
>>> Cc: Stefano Garzarella <sgarz...@redhat.com>
>>> Signed-off-by: Laszlo Ersek <ler...@redhat.com>
>>
>>
>> So you want me to hold on to this patch 7/7 for now?
>> And maybe merge rest of the patchset?
>
> Up to Laszlo, but I wanted to mention that I support merging this
> patch series. A ring has not been enabled/disabled until the back-end
> replies, so I think this patch series makes sense.
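
To make steps (1.1)-(1.3) from the quoted commit message concrete, here is a
minimal guest-side sketch. This is not OVMF's actual VirtioFsDxe code; the
MMIO pointers and the vq_place_fuse_init() helper are illustrative
assumptions, and the comments only restate which channel each access takes
according to the analysis above.

#include <stdint.h>

#define VIRTIO_STATUS_DRIVER_OK  0x04   /* DRIVER_OK status bit, virtio 1.0 */

/*
 * Hypothetical driver routine; queue_enable, device_status and notify_reg
 * stand for fields of the device's already-mapped virtio-pci common config
 * and notification structures.
 */
void virtiofs_start(volatile uint16_t *queue_enable,
                    volatile uint8_t  *device_status,
                    volatile uint16_t *notify_reg)
{
    /*
     * (1.1) Enable the request queue. Control plane: QEMU traps this write
     * and forwards it to the backend as VHOST_USER_SET_VRING_ENABLE over the
     * unix domain socket.
     */
    *queue_enable = 1;

    /* (1.2) Declare the driver live. */
    *device_status |= VIRTIO_STATUS_DRIVER_OK;

    /*
     * (1.3) Queue FUSE_INIT and kick. Data plane: with ioeventfd, the kick
     * reaches the backend directly, bypassing QEMU, so the backend may see
     * it before it has read SET_VRING_ENABLE from the socket.
     */
    /* vq_place_fuse_init();  -- fill the descriptor/avail rings (omitted) */
    *notify_reg = 0;            /* notify queue index 0 */
}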
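
On the QEMU side, the completion-wait described in the last two bullets could
look roughly like the sketch below. The helper names (vhost_user_write,
process_message_reply, vhost_user_get_features) and the VhostUserMsg layout
are assumed to match hw/virtio/vhost-user.c; treat this as an illustration of
the idea, not as the literal patch 7/7.

/*
 * Sketch only: send VHOST_USER_SET_VRING_ENABLE and do not return until the
 * backend has demonstrably processed it.
 */
static int vhost_user_set_vring_enable_sync(struct vhost_dev *dev,
                                            unsigned int index, bool enable)
{
    bool reply_ack = virtio_has_feature(dev->protocol_features,
                                        VHOST_USER_PROTOCOL_F_REPLY_ACK);
    VhostUserMsg msg = {
        .hdr.request = VHOST_USER_SET_VRING_ENABLE,
        .hdr.flags   = VHOST_USER_VERSION,
        .hdr.size    = sizeof(msg.payload.state),
        .payload.state = { .index = index, .num = enable },
    };
    int ret;

    if (reply_ack) {
        /* Ask the backend for an explicit ACK... */
        msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
    }

    ret = vhost_user_write(dev, &msg, NULL, 0);
    if (ret < 0) {
        return ret;
    }

    if (reply_ack) {
        /* ...and block until it arrives. */
        return process_message_reply(dev, &msg);
    }

    /*
     * Without REPLY_ACK, fall back to a dummy VHOST_USER_GET_FEATURES
     * exchange: the backend must answer it, and since messages on the socket
     * are processed in order, the answer implies SET_VRING_ENABLE has
     * already been consumed.
     */
    {
        uint64_t features;
        return vhost_user_get_features(dev, &features);
    }
}

Either way, the guest's "queue_enable" write only completes after the backend
has acted on SET_VRING_ENABLE, so OVMF cannot reach the FUSE_INIT submission
early.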
Sorry, I didn't get to see this part of the discussion yesterday, and now I
see that Michael has gone ahead with a PR that contains v2 of this set.

The night before yesterday I posted v3
<https://patchwork.ozlabs.org/project/qemu-devel/cover/20231002203221.17241-1-ler...@redhat.com/>,
with commit message updates / improvements only (based on feedback), so
please merge that one.

Thanks!
Laszlo