I thought I'd share an update in case it helps others. Your ideas
inspired me to try a different approach.
We support 4 main distros (and two variants of some). We try not to
provide our own versions of distro-supported packages like CTDB where
possible. So a concern
On Tue, Nov 05, 2019 at 05:05:08AM +0200, Strahil wrote:
> Sure,
>
> Here is the setup:
Thank you! You're very kind to send me this. I will verify it with my
setup soon, hoping to rid myself of these dependency problems. Thank you!!!
Erik
Sure,
Here is the setup:
[root@ovirt1 ~]# systemctl cat var-run-gluster-shared_storage.mount --no-pager
# /run/systemd/generator/var-run-gluster-shared_storage.mount
# Automatically generated by systemd-fstab-generator
[Unit]
SourcePath=/etc/fstab
Documentation=man:fstab(5) man:systemd-fstab-generator(8)
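(The snippet above is truncated. For reference, the rest of a mount unit
generated from an fstab entry like this typically looks roughly as
follows. This is a sketch of standard systemd-fstab-generator output,
not Strahil's exact unit, and details vary by systemd version.)
After=glusterd.service
Requires=glusterd.service
[Mount]
Where=/var/run/gluster/shared_storage
What=gluster1:/gluster_shared_storage
Type=glusterfs
Options=defaults,x-systemd.requires=glusterd.service,x-systemd.automount
Because of x-systemd.automount, the generator also emits a matching
var-run-gluster-shared_storage.automount unit, and the mount itself is
only triggered on first access to the path.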
Thank you! I am very interested. I hadn't considered the automounter
idea.
Also, your fstab takes a different dependency approach from mine as well.
If you happen to have the examples handy, I'll give them a shot here.
I'm looking forward to emerging from this dark place of dependencies.
Hi Erik,
I took another approach.
1. I got a systemd mount unit for my ctdb lock volume's brick:
[root@ovirt1 system]# grep var /etc/fstab
gluster1:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults,x-systemd.requires=glusterd.service,x-systemd.automount 0 0
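To double-check that systemd picked the entry up after editing fstab,
something like this should work (the unit name is derived from the mount
path, matching the systemctl cat output above):
[root@ovirt1 ~]# systemctl daemon-reload
[root@ovirt1 ~]# systemctl status var-run-gluster-shared_storage.automount --no-pager
[root@ovirt1 ~]# ls /var/run/gluster/shared_storage/   # first access triggers the mount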
So, I have a solution, which I have written about in the past, that is
based on gluster with CTDB for IP failover and a level of redundancy.
It's been working fine except for a few quirks I need to work out on
giant clusters when I get access.
I have a 3x9 gluster volume; each node is also an NFS server, using
gluster NFS.
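With a setup like this, the CTDB side usually just needs its recovery
lock pointed at a path on the shared gluster mount. A minimal sketch
with example values (not the actual config from this cluster; CTDB from
Samba 4.9+ reads /etc/ctdb/ctdb.conf, while older versions set
CTDB_RECOVERY_LOCK in /etc/sysconfig/ctdb):
# /etc/ctdb/ctdb.conf
[cluster]
    recovery lock = /var/run/gluster/shared_storage/ctdb/.ctdb.lock
# /etc/ctdb/nodes -- one internal cluster IP per node
192.168.1.11
192.168.1.12
192.168.1.13
# /etc/ctdb/public_addresses -- floating IPs CTDB moves between nodes
10.10.10.100/24 eth0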