On 4/21/2026 8:14 PM, Arnd Bergmann wrote:
> On Tue, Apr 21, 2026, at 11:18, Peng Yang wrote:
>> On 4/21/2026 3:38 PM, Arnd Bergmann wrote:
>>>
>>> Which host implementation do you use? The way the virtio_console
>>> driver works really assumes that virtqueue_kick() consumes the
>>> buffer synchronously. Even though that is not how virtio is
>>> specified, this does tend to work. ;-)
>>>
>> We are using crosvm as the host VMM with its virtio-console backend,
>> running on Android. The trigger is Android host reboot/shutdown: when
>> Android initiates a reboot, the crosvm process exits and tears down
>> the virtio-console backend. At that point, the TX virtqueue is no
>> longer being drained by the host and will never be consumed again.
>
> I see, so the normal behavior is likely just fine, but the error
> handling is what goes wrong. Maybe there is a way for the guest
> to detect the device being turn down already so it does not
> actually have to wait any more?
>
Yes, exactly. Normal operation is fine; the problem is purely
in the error/teardown path.

We investigated both the virtqueue_is_broken() path and the
virtio config status register path. Neither works in our
scenario, for the reasons explained below.
>> The crash dump from the actual failure confirms the exact deadlock
>> scenario:
>>
>> Core 3 holds outvq_lock and spins forever in virtqueue_get_buf waiting
>> for the host to consume the buffer:
>>
>> virtqueue_get_buf
>> __send_to_port
>> put_chars
>> hvc_push
>> hvc_write
>> n_tty_write
>> <- writev() syscall
>
> This current loop here is
>
> while (!virtqueue_get_buf(out_vq, &len)
> && !virtqueue_is_broken(out_vq))
> cpu_relax();
>
> which looks like the virtqueue_is_broken() check is meant to
> catch this exact case. Do you know why this does not break
> out of the loop after crosvm tears down the virtio-console
> device?
>
virtqueue_is_broken() only reads the guest-side vq->broken
flag, which is set either by virtio_break_device() or by a
failed virtqueue_notify() kick. Neither happens here:
- When the host VMM exits, it does so as a pure userspace
process termination. No PCI interrupt or notification is
sent to the guest, so virtio_break_device() is never
called from the guest side.
- __send_to_port() runs with IRQs disabled and outvq_lock
held. Even if the host were to send a config change
interrupt, it cannot be delivered in this context, so the
async chain virtio_config_changed() -> config_intr()
-> virtio_break_device() is completely blocked.
As a result, vq->broken remains false forever and the loop
never exits.
>> Core 0 has a watchdog bark ISR fire and attempts printk, holds the
>> console lock, but spins on _raw_spin_lock_irqsave waiting to acquire
>> outvq_lock:
>>
>> queued_spin_lock_slowpath
>> _raw_spin_lock_irqsave
>> __send_to_port
>> put_chars
>> hvc_console_print
>> console_flush_all
>> console_unlock
>> vprintk_emit
>> <- printk (watchdog bark handler)
>
> My first thought here was that __send_to_port() should perhaps
> release the lock during the while() loop, which should avoid
> blocking the other threads on the spin_lock_irqsave() but
> would not avoid blocking on the loop.
>
Releasing the lock during the spin would unblock other CPUs
waiting on outvq_lock (e.g. the watchdog bark handler trying
to printk), but it does not fix the root issue: the CPU
holding the lock would still spin forever. It would also open
a TOCTOU race, since another thread could modify the port or
queue state between the unlock and the re-lock. A bounded
timeout avoids both problems by capping the spin duration
without ever dropping the lock.
>> The 200 ms timeout is intended as a minimal, targeted workaround to prevent
>> the watchdog bite in our specific scenario. We are open to suggestions on a
>> better long-term approach.
>
> Not sure how to do it, but I think finding a way to call
> virtio_break_device() at the point the host device goes away is
> the best solution here. Ideally there would just be a notification
> from the host, but since __send_to_port() may be called with
> interrupts disabled and may be running on the only CPU, that
> would still be unreliable.
>
> Maybe there is a way for virtio_console to read a status
> register in the virtio config that tells it whether the
> host has turned it off? I was thinking vdev->config->get_status(vdev)
> but that seems to only get updated by the guest.
>
> Arnd
We checked this. In our host VMM implementation,
VIRTIO_CONFIG_S_NEEDS_RESET is never set on the teardown
path: the device simply stops responding without updating
any status register. So polling vp_modern_get_status()
inside the spin loop would not help here.
We agree that the ideal long-term fix is for the host to
trigger virtio_break_device() via a clean PCI hot-unplug
sequence, but that is not possible in a crash or forced
reboot scenario.
The 200 ms value is chosen to be well above the normal host
response time (microseconds), to avoid false positives, while
remaining well below the watchdog bark-to-bite window
(3 seconds), so that all CPUs can exit the loop and complete
the bark handler before a bite occurs.