Martin,

I think we should continue this offline. Anyway, see my comments/answers inline.

Thanks,
Gonzalo.

Martin Uhl wrote:

The dirs blocking the mount are created at import/mount time.
How do you know that?

In the previous example I could reconstruct that using zfs mount.  Just look at 
the last post.
You said "...the dirs blocking the mount are created at import/mount time..", but your previous post suggests a different scenario: mount points created prior to the import and not cleared when doing a umount. Fixing the umount "problem" is expensive and will not resolve the problem you have. I tested this on my lab system by setting a breakpoint in zfs_umount(), which is called for each filesystem of a pool when you export it, but is not called for the filesystems of an imported pool (other than the root pool) when you stop the system via "/etc/reboot".
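For reference, you can also watch for this from userland with dtrace instead of a kernel breakpoint. A sketch (the fbt probe assumes the ZFS kernel module is named "zfs"):

# dtrace -n 'fbt:zfs:zfs_umount:entry{stack();}'

Run "zpool export tank" in another shell and the probe fires once per filesystem of the pool; reboot via "/etc/reboot" and it does not fire for the imported pool's filesystems.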

I doubt ZFS removes mount directories.
It does. With a simple dtrace script I saw that it is done in zpool_disable_datasets()->remove_mountpoint() when you export the pool:

# zfs list -o name,mountpoint,mounted,canmount -r tank
NAME              MOUNTPOINT    MOUNTED  CANMOUNT
tank              /tank         yes      on
tank/gongui       /gongui       no       on
tank/gongui/test  /gongui/test  yes      on
# zfs mount tank/gongui
cannot mount '/gongui': directory is not empty
# dtrace -q -n 'syscall::rmdir:entry{printf("Mountpoint deleted: %s\n",stringof(copyinstr(arg0)));ustack();}' -c "zpool export tank"
Mountpoint deleted: /tank

             libc.so.1`rmdir+0x7
             libzfs.so.1`zpool_disable_datasets+0x319
             zpool`zpool_do_export+0x10f
             zpool`main+0x158
             zpool`_start+0x7d
Mountpoint deleted: /gongui/test

             libc.so.1`rmdir+0x7
             libzfs.so.1`zpool_disable_datasets+0x32c
             zpool`zpool_do_export+0x10f
             zpool`main+0x158
             zpool`_start+0x7d
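
As a side note, if a stale directory is blocking a mount, the manual cleanup is simple. A sketch using the example datasets above, assuming nothing else is using them (rmdir refuses to remove non-empty directories, so real data is safe):

# zfs unmount tank/gongui/test
# rmdir /gongui/test
# zfs mount tank/gongui
# zfs mount tank/gongui/test

The child filesystem has to be unmounted first because it was mounted before its parent; that ordering is exactly what leaves the "directory is not empty" error behind.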

If you're correct, you should be able to reproduce the problem by doing a "clean" shutdown (or an export/import). Can you reproduce it this way?

The server is in a production environment and we cannot afford the necessary 
downtime for that.
Unfortunately the server has lots of datasets which cause import/export times 
of 45 mins.

We import the pool with the -R parameter; might that contribute to the problem?
Perhaps a zfs mount -a bug in combination with the -R parameter?

See above. If you export the pool you shouldn't have problems. We should study whether we can lower the import/export time for the pools, but my recommendation is to do a proper shutdown.
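
For what it's worth, -R only sets the pool's altroot property, which is prepended to every mountpoint while the pool is imported. A quick way to see this (a sketch, assuming you can export/import a test pool):

# zpool import -R /a tank
# zpool get altroot tank
# zfs list -o name,mountpoint -r tank

Every mountpoint should show up prefixed with /a. I'm not aware of -R changing the mount ordering used by zfs mount -a, but it would be worth checking on a test pool.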

Thanks,
Gonzalo.

Greetings, Martin

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
