On Thu, Sep 26, 2013 at 06:35:14PM -0000, Paul Boven wrote:
> Running into the same issue here, I can't get glusterfs to mount at
> boot.
> System: Ubuntu 13.04
>
> Output from mountall --verbose:
>
> mount /export/brick0 [2508] exited normally
> Usage: glusterfs [OPTION...] --volfile-server=SERVER [MOUNT-POINT]
>   or:  glusterfs [OPTION...] --volfile=VOLFILE [MOUNT-POINT]
> Try `glusterfs --help' or `glusterfs --usage' for more information.
> mount /gluster [2509] exited normally
>
> The 'exited normally' seems wrong, too.

These are all bugs in the glusterfs mount helper, not in mountall. It
does not implement the expected interface required for mount.$fstype
helpers.

> Output from mountall --verbose --debug:
>
> run_mount: mtab /gluster
> spawn: mount -f -t fuse.glusterfs -o defaults localhost:/gv0 /gluster
> spawn: mount /gluster [2599]
> Usage: glusterfs [OPTION...] --volfile-server=SERVER [MOUNT-POINT]
>   or:  glusterfs [OPTION...] --volfile=VOLFILE [MOUNT-POINT]
> Try `glusterfs --help' or `glusterfs --usage' for more information.
> mount /gluster [2599] exited normally
>
> My fstab entry:
>
> localhost:/gv0 /gluster glusterfs defaults,nobootwait 0 0
>
> It seems that mountall tries to emit "mount -f -t fuse.glusterfs",
> which fails.

No, that command almost certainly succeeds, because all it is doing is
updating /etc/mtab to match the list of filesystems that are already
mounted. Something else has mounted /gluster for you, according to
/proc/mounts, and mountall is making sure /etc/mtab is kept up to
date.

The failing call is the *second* one, where, again, mountall is trying
to mount /gluster according to the fstab. You can reproduce this
failure by running the following from the command line:

  mount -t glusterfs -o defaults localhost:/gv0 /gluster

** Package changed: mountall (Ubuntu) => glusterfs (Ubuntu)

--
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to glusterfs in Ubuntu.
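For reference, mount(8) invokes an external helper as
mount.<fstype> <spec> <mountpoint> [-o <options>], while the glusterfs
binary wants --volfile-server style flags, which is why the helper
prints its Usage message. The sketch below illustrates the argument
translation a working mount.fuse.glusterfs helper would have to
perform; it only echoes the command it would run, and the
--volfile-id flag name is an assumption on my part, not taken from
the logs above:

```shell
# Sketch of the argument translation a hypothetical
# /sbin/mount.fuse.glusterfs helper would need.  mount(8) calls
# external helpers as: mount.<fstype> <spec> <mountpoint> [-o <opts>]
# but the glusterfs binary expects --volfile-server style flags.

translate_mount_args() {
    spec=$1; mountpoint=$2
    server=${spec%%:*}     # "localhost:/gv0" -> "localhost"
    volume=${spec#*:}      # "localhost:/gv0" -> "/gv0"
    # --volfile-id selects the volume on the server (flag name is an
    # assumption; it does not appear in the Usage output above).
    echo "glusterfs --volfile-server=$server --volfile-id=$volume $mountpoint"
}

# Echo what would be executed for the fstab entry from this report:
translate_mount_args localhost:/gv0 /gluster
```

Running the sketch prints the glusterfs invocation instead of executing it, so it is safe to experiment with on any machine.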
https://bugs.launchpad.net/bugs/1205075

Title:
  mountall doesn't treat glusterfs correctly

Status in “glusterfs” package in Ubuntu:
  Confirmed

Bug description:
  I have two servers replicating data with Gluster; both servers are
  clients as well. The problem is that the glusterfs volume is never
  mounted at boot, because mountall doesn't call the correct mount
  command and doesn't treat the gluster filesystem as a remote
  filesystem.

  Some information (command output translated from French):

  # lsb_release -rd
  Description: Ubuntu 13.04
  Release:     13.04

  # apt-cache policy mountall
  mountall:
    Installed: 2.48build1
    Candidate: 2.48build1
    Version table:
   *** 2.48build1 0
          500 http://fr.archive.ubuntu.com/ubuntu/ raring/main amd64 Packages
          100 /var/lib/dpkg/status

  # mountall --version
  mountall 2.44

  Entry in the fstab:

  192.162.0.1:test-vol /srv/conf glusterfs defaults,_netdev 0 0

  After boot, this entry wasn't mounted. But if I try:

  # mount -t glusterfs 192.162.0.1:test-vol /srv/test-vol

  it works perfectly.

  I put a '--verbose' into the /etc/init/mountall.conf file to see
  what happened. I obtained this:

  / is local
  /proc is virtual
  /sys is virtual
  /sys/fs/cgroup is virtual
  /sys/fs/fuse/connections is virtual
  /sys/kernel/debug is virtual
  /sys/kernel/security is virtual
  /dev is virtual
  /dev/pts is virtual
  /tmp is local
  /run is virtual
  /run/lock is virtual
  /run/shm is virtual
  /run/user is virtual
  /srv is local
  /var is local
  UUID=ac001a41-9a55-49d6-86c8-1e9e8df41054 is swap
  /mnt/shared-conf is local
  /srv/conf is nowait            <== Why not remote?
  mounting event sent for /sys/fs/cgroup
  mounting event sent for /sys/fs/fuse/connections
  mounting event sent for /sys/kernel/debug
  mounting event sent for /sys/kernel/security
  mounting event sent for /run/lock
  mounting event sent for /run/shm
  mounting event sent for /run/user
  ...
  mounting event handled for /srv/conf
  mounting /srv/conf
  mounted event handled for /srv/conf
  local 5/5 remote 0/0 virtual 12/12 swap 1/1

  ^ And no /srv/conf mounted...
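The classification the reporter expects ("remote" rather than
"nowait") boils down to two signals: a host:path device spec, or
_netdev in the mount options. mountall's real logic lives in its C
source; the following is only an illustrative sketch of that rule,
applied to the fstab entry from this report:

```shell
# Illustrative sketch (not mountall's actual code) of the rule the
# reporter expects: treat an fstab entry as remote if its device is
# host:path syntax or its options include _netdev.

is_remote_entry() {
    device=$1; options=$2
    case "$device" in
        *:*) return 0 ;;          # host:path syntax, e.g. NFS/gluster
    esac
    case ",$options," in
        *,_netdev,*) return 0 ;;  # explicitly marked as a network device
    esac
    return 1                      # otherwise: a local filesystem
}

# The entry from this bug report should classify as remote either way:
if is_remote_entry "192.162.0.1:test-vol" "defaults,_netdev"; then
    echo "/srv/conf is remote"
else
    echo "/srv/conf is nowait"
fi
```

Under this rule the entry would be mounted only after the network is up, which is exactly the ordering the reporter is asking for.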
  With this command I found more information:

  # mountall --verbose
  ...
  Usage: glusterfs [OPTION...] --volfile-server=SERVER [MOUNT-POINT]
    or:  glusterfs [OPTION...] --volfile=VOLFILE [MOUNT-POINT]
  Try `glusterfs --help' or `glusterfs --usage' for more information.
  mount /srv/conf [11156] exited normally
  ...

  We can see that the mount call was not correct (see the Usage
  message). Going deeper:

  # mountall --verbose --debug
  ...
  run_mount: mtab /srv/conf
  spawn: mount -f -t fuse.glusterfs -o defaults,_netdev 10.130.163.253:volume-conf /srv/conf
  spawn: mount /srv/conf [11339]
  Usage: glusterfs [OPTION...] --volfile-server=SERVER [MOUNT-POINT]
    or:  glusterfs [OPTION...] --volfile=VOLFILE [MOUNT-POINT]
  Try `glusterfs --help' or `glusterfs --usage' for more information.
  mount /srv/conf [11339] exited normally
  ...

  When I run the mount command shown above by hand I get the same
  behaviour (the Usage message is printed), because fuse.glusterfs is
  not recognized. If I use glusterfs instead, the mount works as
  expected.

  Moreover, since mountall doesn't treat glusterfs as a remote
  filesystem, it tries to mount it even if local filesystems aren't
  mounted yet. For example, if /var wasn't cleanly unmounted, it will
  be checked at boot and mountall will try to mount glusterfs during
  the check. This causes some trouble because glusterfsd needs access
  to /var/log, which is not mounted yet...

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/1205075/+subscriptions

_______________________________________________
Mailing list: https://launchpad.net/~ubuntu-ha
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp

