On Wed, Feb 16, 2022 at 12:13 PM Richard W.M. Jones <rjo...@redhat.com> wrote:
> On Tue, Feb 15, 2022 at 05:24:14PM -0600, Eric Blake wrote:
> > Oh. The QMP command (which is immediately visible through
> > nbd-server-add/block-storage-add to qemu and qemu-storage-daemon)
> > gains "multi-conn":"on", but you may be right that qemu-nbd would want
> > a command line option (either that, or we accelerate our plans that
> > qsd should replace qemu-nbd).
>
> I really hope there will always be something called "qemu-nbd"
> that acts like qemu-nbd.

I share this hope. Most projects I work on are based on qemu-nbd.

However, in the oVirt use case we want to provide an NBD socket that gives
clients direct access to disks. One of the issues we need to solve for this
is having a way to tell whether qemu-nbd is active, so we can terminate
idle transfers.

The way we do this with the ovirt-imageio server is to query the status of
the transfer, and use the idle time (time since the last request) and the
active status (whether there are inflight requests) to detect a stale
transfer that should be terminated. An example use case is a process on a
remote host that started an image transfer and was killed or crashed in the
middle of the transfer without cleaning up properly.

To be more specific, every request to the imageio server (read, write,
flush, zero, options) updates a timestamp in the transfer state. When we
get the status, we report the time since that timestamp was last updated.
Additionally, we keep and report the number of inflight requests, so we can
tell when requests are blocked on inaccessible storage (e.g. non-responsive
NFS).

We don't have a way to do this with qemu-nbd, but I guess that using
qemu-storage-daemon, where we have QMP access, will make such monitoring
possible.

Nir
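P.S. The tracking scheme described above (per-request timestamp plus an
inflight counter, polled by a manager that kills stale transfers) could be
sketched roughly like this. This is a hypothetical illustration, not the
actual ovirt-imageio code; all names here are made up:

```python
import threading
import time

class TransferState:
    """Activity tracking for one image transfer (illustrative sketch)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._last_request = time.monotonic()
        self._inflight = 0

    def request_started(self):
        # Called for every read/write/flush/zero/options request.
        with self._lock:
            self._last_request = time.monotonic()
            self._inflight += 1

    def request_finished(self):
        with self._lock:
            self._last_request = time.monotonic()
            self._inflight -= 1

    def status(self):
        # What the status query would report back to the manager.
        with self._lock:
            return {
                "idle_time": time.monotonic() - self._last_request,
                "inflight": self._inflight,
            }

def is_stale(status, timeout):
    # Idle longer than the timeout, and no requests stuck on storage
    # (a blocked request on non-responsive NFS keeps inflight > 0,
    # so we don't kill a transfer that is merely slow).
    return status["inflight"] == 0 and status["idle_time"] > timeout
```

With QMP access to qemu-storage-daemon, the status() side would presumably
be replaced by whatever the server can report about the export.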