On Thu, Sep 5, 2019 at 8:16 PM Peter Lieven <p...@kamp.de> wrote:
>
> On 05.09.19 at 12:05, ronnie sahlberg wrote:
> > On Thu, Sep 5, 2019 at 7:43 PM Peter Lieven <p...@kamp.de> wrote:
> >> On 04.09.19 at 11:34, Kevin Wolf wrote:
> >>> On 03.09.2019 at 21:52, Peter Lieven wrote:
> >>>>> On 03.09.2019 at 16:56, Kevin Wolf <kw...@redhat.com> wrote:
> >>>>>
> >>>>> On 03.09.2019 at 15:44, Peter Lieven wrote:
> >>>>>> libnfs recently added support for unmounting. Add support
> >>>>>> in Qemu too.
> >>>>>>
> >>>>>> Signed-off-by: Peter Lieven <p...@kamp.de>
> >>>>> Looks trivial enough to review even for me. :-)
> >>>>>
> >>>>> Thanks, applied to the block branch.
> >>>>>
> >>>>> Kevin
> >>>> I am not sure what the reason is, but with this patch I sometimes run
> >>>> into nfs_process_read being called for a cdrom mounted from nfs after
> >>>> I ejected it (and the whole nfs client context is already destroyed).
> >>> Does this mean that nfs_umount() gets some response, but we don't
> >>> properly wait for it? Or is some older request still in flight?
> >>
> >> nfs_umount itself is a sync call and should only terminate when
> >> the call is done. But there is an independent I/O handler in that
> >> function polling on the fd (wait_for_nfs_reply in libnfs-sync.c).
> >> This is why I thought the right solution is to stop the Qemu I/O handler
> >> before calling nfs_close and nfs_umount. nfs_close also uses this
> >> sync I/O handler, but for some reason it seems not to make trouble.
> >>
> >> The other solution would be to use the async versions of close and
> >> umount, but that would make the code in Qemu more complex.
> >>
> > NFS umount is pretty messy, so I think you should continue using the
> > sync version.
> > In NFSv3 (there is no mount protocol in v4) the Mount call (fetch the
> > root filehandle) and the Umount call (tell the server we should no
> > longer show up in showmount -a output) are not part of the NFS
> > protocol but a different service running on a separate port.
> >
> > This does not map well to libnfs since it is centered around a "struct
> > nfs_context".
> >
> > To use nfs_umount() from QEMU I would suggest:
> > 1, make sure all commands in flight have finished, because you will
> > soon disconnect from the NFS server and will never receive any
> > in-flight responses.
> > 2, unregister the nfs->fh filedescriptor from your eventsystem,
> > because the fd is about to be closed and there is a great chance it
> > will be recycled for a completely different purpose if you open any
> > other files from qemu.
> > 3, call nfs_umount(). Internally this will close the socket to the
> > NFS server, then open a new socket to the portmapper to discover the
> > mount server, then close that socket and connect a new socket to the
> > mount server to perform the UMNT call.
>
> What we currently do in Qemu is:
>
> 1) bdrv_drain
> 2) bdrv_close, which in the end calls nfs_client_close from block/nfs.c.
> There we call:
> 2a) nfs_close(client->fh)
> 2b) aio_set_fd_handler(NULL)
> 2c) nfs_destroy_context(client->context);
>
> My first patch added an nfs_umount between 2a) and 2b) so that we have:
> 2a) nfs_close(client->fh)
> 2b) nfs_umount(client->context)
> 2c) aio_set_fd_handler(NULL)
> 2d) nfs_destroy_context(client->context);
>
> This leads to triggering the assertion for an uninitialized
> client->mutex from an invocation of nfs_process_read after
> nfs_destroy_context was called.
>
> If I change the order as follows, I see no more assertions:
> 2a) aio_set_fd_handler(NULL)
> 2b) nfs_close(client->fh)
> 2c) nfs_umount(client->context)
> 2d) nfs_destroy_context(client->context);
That makes sense and looks correct to me.

> I think we should have done this in the first place, because nfs_close
> (and nfs_umount) poll on the nfs_fd in parallel if we use the sync
> calls.
>
> Peter
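The corrected teardown order quoted above could be sketched roughly as below. This is only an illustrative sketch, not the actual block/nfs.c patch: the NFSClient fields and the exact aio_set_fd_handler() argument list are approximations (the signature differs between QEMU versions), while nfs_get_fd(), nfs_close(), nfs_umount() and nfs_destroy_context() are real libnfs calls.

```c
/* Hypothetical sketch of the reordered nfs_client_close(); field and
 * handler names approximate QEMU's block/nfs.c and may not match. */
static void nfs_client_close(NFSClient *client)
{
    if (client->context) {
        /* 2a) Stop QEMU's fd handler first: nfs_close() and nfs_umount()
         *     run libnfs' own sync poll loop (wait_for_nfs_reply) on the
         *     same fd, and the fd will soon be closed and may be
         *     recycled for an unrelated file. */
        aio_set_fd_handler(client->aio_context,
                           nfs_get_fd(client->context),
                           NULL, NULL, NULL, NULL /* ...varies by version */);

        if (client->fh) {
            /* 2b) Close the open file handle (sync call). */
            nfs_close(client->context, client->fh);
        }

        /* 2c) Tell the mount service we are gone; internally this
         *     reconnects to the portmapper and mountd (sync call). */
        nfs_umount(client->context);

        /* 2d) Only now tear down the context; after this point no
         *     nfs_process_read() must ever run for this client. */
        nfs_destroy_context(client->context);
        client->context = NULL;
    }
}
```

The key point, as agreed in the thread, is that the event handler is unregistered before any of the sync libnfs calls run, so QEMU's handler and libnfs' internal poll loop can never race on the same (or a recycled) fd.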