On Fri, Aug 02, 2024 at 06:00:32PM GMT, Vladimir Sementsov-Ogievskiy wrote:
> On 02.08.24 04:32, Eric Blake wrote:

> [..]
> 
> > -static void nbd_blockdev_client_closed(NBDClient *client, bool ignored)
> > +static void nbd_blockdev_client_closed(NBDClient *client, uint32_t cookie,
> > +                                       bool ignored)
> >   {
> >       nbd_client_put(client);
> > -    assert(nbd_server->connections > 0);
> > -    nbd_server->connections--;
> > -    nbd_update_server_watch(nbd_server);
> > +    /* Ignore any (late) connection made under a previous server */
> > +    if (cookie == nbd_cookie) {
> 
> Creating a getter nbd_client_get_cookie(client), and using it instead of 
> passing the cookie together with the client, would simplify the patch a lot. [*]

I may be able to avoid the cookie altogether if I can add an
AIO_WAIT_WHILE(, nbd_server->connections > 0) after forcefully closing
all of the client sockets (nbd_client_new _should_ progress pretty
rapidly towards eventually calling nbd_blockdev_client_closed once the
socket is closed) - but that still requires patch 2 to keep a list of
open clients.

> 
> Hmm.. don't we need some atomic accessors for nbd_cookie? And for 
> nbd_server->connections? The function is called from the client, which lives 
> in a coroutine and maybe in another thread. At least the client code does 
> atomic accesses of client->refcount..
> 
> > +        assert(nbd_server->connections > 0);
> > +        nbd_server->connections--;
> > +        nbd_update_server_watch(nbd_server);
> > +    }
> >   }
> > 
> 
> [..]
> 
> > @@ -1621,7 +1622,7 @@ static void client_close(NBDClient *client, bool negotiated)
> > 
> >       /* Also tell the client, so that they release their reference.  */
> >       if (client->close_fn) {
> > -        client->close_fn(client, negotiated);
> > +        client->close_fn(client, client->close_cookie, negotiated);
> 
> [*] passing client->close_cookie together with the client itself suggests we 
> lack a getter for .close_cookie

Whether the cookie is a uint32_t or the void* server object itself, it is
opaque to the client; either way, the server needs to track something.


> 
> >       }
> >   }
> > 
> 
> [..]
> 
> 
> Hmm, instead of cookies and additional NBDConn objects in the next patch, 
> could we simply have a list of connected NBDClient objects in NBDServer and 
> a link to NBDServer in NBDClient? (OK, we actually don't have NBDServer, only 
> NBDServerData in qemu and several global variables in qemu-nbd, so some 
> refactoring is needed to put the common state into NBDServer and add a client 
> list to it.)
> 
> This way, in nbd_server_free we'll just call client_close() in a loop, and in 
> client_close we'll have nbd_server_client_detach(client->server, client) 
> instead of client->close_fn(...). The server is freed only after all clients 
> are closed, and a client never tries to detach from another server.
> 
> This way, we may also implement several NBD servers working simultaneously if 
> we want.

Yes, we do eventually want to get to the point of being able to open
parallel NBD servers on different ports simultaneously, at which point
having a client remember which server it is associated with makes sense (so
at a bare minimum, pass in a void* instead of a uint32_t to
nbd_client_new).  And given that we can have an NBD server with more
than one export, and with exports running in different threads (if
multithread io is enabled), I probably also need to add missing
locking to protect nbd_server (whether or not it stays global or we
eventually reach the point of having parallel servers on separate
ports).

Looks like I have my work cut out for me before posting a v3.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org

