Robert,

I'll verify your patch from comment #6, since the nfs-kernel-server dependency tree:

(k)inaddy@ctdbserver01:/lib/systemd$ systemctl list-dependencies nfs-kernel-server.service
nfs-kernel-server.service
● ├─auth-rpcgss-module.service
● ├─nfs-config.service
● ├─nfs-idmapd.service
● ├─nfs-mountd.service
● ├─proc-fs-nfsd.mount
● ├─rpc-svcgssd.service
● ├─rpcbind.socket
● ├─system.slice
● └─network.target

might be enough to guarantee that rpc.statd starts/stops together with
nfsd when CTDB starts/stops the services on the nodes.
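
As a quick sanity check (just a sketch of what I plan to run; the
rpc-statd.service unit name is the one nfs-utils ships), the dependency
graph can also be looked at from the other direction:

$ systemctl list-dependencies --reverse rpc-statd.service
$ systemctl show -p Wants,Requires nfs-kernel-server.service

If rpc-statd.service shows up there under nfs-server/nfs-mountd, starting
nfs-kernel-server via systemctl should be enough to bring statd along.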

I also have to check for missing (or deactivated) environment variables
(NEED_GSSD or NEED_SVCGSSD, for example) coming from /etc/default/nfs-*.
Nowadays, with systemd, those files are read by the script
/usr/lib/systemd/scripts/nfs-utils_env.sh, which generates an environment
file under /run and is executed by nfs-config.service as a "oneshot"
service whenever NFS is restarted (I will verify whether any of those
would need changing for CTDB, for example). This will cover your initial
comment:

" ... ctdb is able to control NFS daemons, but it never looks for
/etc/init.d/nfs-kernel-server for startup or /etc/default/nfs-kernel-
server for settings ... ".
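
The kind of check I have in mind here is roughly this (a sketch; the grep
pattern is just an example, and /run/sysconfig/nfs-utils is the environment
file mentioned further below):

$ grep -E 'NEED_GSSD|NEED_SVCGSSD' /etc/default/nfs-common /etc/default/nfs-kernel-server
$ systemctl restart nfs-config.service
$ cat /run/sysconfig/nfs-utils   # environment file generated by nfs-utils_env.sh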

I'll have to think of something for the RPC ports :\. You said in
comment #10 that this:

NFS_HOSTNAME="servername"

RQUOTAD_PORT=598
LOCKD_UDPPORT=599
LOCKD_TCPPORT=599
STATD_PORT=595
STATD_OUTGOING_PORT=596
STATD_HOSTNAME="$NFS_HOSTNAME"

needed to be included in /etc/default/nfs-kernel-server, possibly because
CTDB needs those environment variables (apart from the need for the same
ports on all nodes). If CTDB uses systemctl to start/stop the services,
then I would have to make sure those are parsed by the nfs-config.service
logic (nfs-config -> nfs-utils_env.sh -> /run/sysconfig/nfs-utils
environment file).

That would take care of the needed NFS_HOSTNAME environment variable. For
the ports, I could create systemd overrides to pin all RPC services to
fixed ports, or ship those overrides commented out and tell the user to
uncomment them (since touching that during package installation does not
seem appropriate to me).
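
For rpc.statd, for example, such an override could look roughly like this
(only a sketch: it assumes the stock unit's ExecStart uses --no-notify, and
it takes the port numbers from your comment #10; the same drop-in would
have to be present on every node, followed by systemctl daemon-reload):

# /etc/systemd/system/rpc-statd.service.d/ctdb-ports.conf
[Service]
ExecStart=
ExecStart=/sbin/rpc.statd --no-notify --port 595 --outgoing-port 596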

We have 2 other things:

/etc/modprobe.d/lockd.conf -> that is most likely a no-go, like the
LOCKD_UDPPORT=599 approach. I mean, at least not automatically (possibly
shipped commented out, telling the user to enable it identically on all
CTDB nodes).
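
Something along these lines is what I have in mind for lockd (a sketch; the
nlm_udpport/nlm_tcpport module parameters are the standard lockd ones, and
the port value again comes from comment #10):

# /etc/modprobe.d/lockd.conf
# Uncomment (the same way on every CTDB node) to pin lockd to fixed ports:
#options lockd nlm_udpport=599 nlm_tcpport=599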

And:

Disabling NFSv4 (and rpc.idmapd) could follow the same approach: provide
information to the end user about not having it enabled (especially
because installing CTDB would require all nfs-* services to be disabled
anyway, so the CTDB postinst definitely can't activate systemd services by
default, for example).
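
The note to the user would point at something like the following (again
only a sketch; the unit names are the ones from the dependency list above,
and disabling NFSv4 itself would additionally mean starting rpc.nfsd with
--no-nfs-version 4):

$ sudo systemctl disable --now nfs-kernel-server.service   # CTDB takes over start/stop
$ sudo systemctl mask nfs-idmapd.service                    # keep rpc.idmapd out of the picture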

That is all I can think of to check/implement in Ubuntu Eoan. Please let
me know if anything else comes to mind.

I'm testing all this in the following scenario:

2 gluster servers providing 1 volume to 3 ctdb servers in 1 network, and
those 3 ctdb servers providing an NFS export, with an LVS network IP, to 2
NFS clients.

It is very likely that CTDB will need similar work to make sure Samba, for
example, is supported out-of-the-box by the Ubuntu CTDB package, like what
we are doing here.

I'll get back to you soon.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/722201

Title:
  CTDB port is not aware of Ubuntu-specific NFS Settings

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ctdb/+bug/722201/+subscriptions
