Hi all,

I am running a GlusterFS 6.3 server on three Ubuntu 18.04 nodes, installed from the https://launchpad.net/~gluster PPA.

I tried upgrading to 6.5 today and ran into an issue on the first (and so far only) node I upgraded. When I rebooted the node, the underlying brick filesystems failed to mount because a `pvscan` process timed out during boot.

I did some experimenting and the issue seems to be that on reboot the glusterfsd processes (which, as far as I understand, are what expose the bricks) are not shut down, which leaves the underlying filesystems showing up as busy and prevents them from being unmounted cleanly.
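
To illustrate what I mean (the brick mount point below is just a placeholder, not my real layout):

    # stopping the management daemon leaves the brick processes running
    systemctl stop glusterd.service
    pgrep -af glusterfsd

    # the brick filesystem is still held open by glusterfsd,
    # so this fails with "target is busy"
    umount /data/brick1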

Then I found out that `systemctl stop glusterd.service` doesn't stop the brick processes by design, and it also seems that on Fedora/RHEL this has been worked around with a separate `glusterfsd.service` unit that only acts on shutdown.
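
If I understand the idea correctly, such a unit would look roughly like the sketch below (this is just my reading of the concept, not the actual file shipped by the Fedora/RHEL packages):

    # /etc/systemd/system/glusterfsd.service (rough sketch)
    [Unit]
    Description=Stop GlusterFS brick processes at shutdown
    After=glusterd.service

    [Service]
    Type=oneshot
    # do nothing at start, but stay "active" so ExecStop runs at shutdown
    ExecStart=/bin/true
    RemainAfterExit=yes
    # terminate the brick processes so the brick filesystems are no longer
    # busy and can be unmounted ("|| /bin/true" so the stop doesn't fail
    # when no bricks are running)
    ExecStop=/bin/sh -c '/usr/bin/killall --wait glusterfsd || /bin/true'

    [Install]
    WantedBy=multi-user.target

The `RemainAfterExit=yes` part is what makes this work: the unit does nothing at boot, but because it stays active its ExecStop should still be executed on shutdown, before the local filesystems are unmounted (at least that is how I understand the default ordering).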

This, however, does not seem to be the case on Ubuntu, and I can't figure out what the expected flow is there.

So I guess my question is: is this the normal/expected behaviour on Ubuntu? How is one supposed to set things up so that the bricks get properly unmounted on shutdown/reboot and properly mounted again at startup?

I am also considering migrating from Ubuntu to CentOS, as the upstream support seems much better there. If I decide to switch, can I re-use the existing bricks, or do I need to spin up a clean node, join it to the cluster and let the data sync over to it?

Thanks!

Best regards,
--
alexander iliev
