Re: 15.04 and systemd
On Sun, Dec 7, 2014 at 2:11 AM, Martin Pitt martin.p...@ubuntu.com wrote:
> Tom H [2014-12-05 8:03 -0500]:
>> | $ grep ifup /lib/udev/rules.d/99-systemd.rules
>> | SUBSYSTEM=="net", KERNEL!="lo", TAG+="systemd", ENV{SYSTEMD_ALIAS}+="/sys/subsystem/net/devices/$name", ENV{SYSTEMD_WANTS}+="ifup@$name.service"
>>
>> I noticed this after sending my original email. I'm now using NM (I had to log on to a WEP network!) and I'd meant to check whether unmasking ifup@.service would result in the same errors, because this rule doesn't check whether something other than ifupdown is bringing up the network (if that's even possible in a udev rule).
>
> The rule doesn't have to. ifup will know by itself (through /etc/network/interfaces) whether it's responsible for a particular interface; if not, this is a no-op.
>
>>> This will handle hotplugged interfaces which are covered by ifupdown, i.e. /etc/network/interfaces.
>>
>> Except that my interfaces file's empty.
>
> Right, then the rule and ifup@.service are irrelevant for your system. This is also why the disabled /etc/init.d/networking init script did not cause actual damage on your system. systemd brings up lo on its own, so you don't even need ifupdown for lo.
>
>> By hotplugged, do you mean when using Debian's allow-hotplug?
>
> I meant adding the hardware while the computer/user session is running. allow-hotplug is Debian's ifupdown declaration for this (but it's not supported directly under Ubuntu).
>
>> I hadn't seen the ifup udev rule when I wrote the above, so I thought that systemd was using the sysvinit networking script to trigger ifup@.service.
>
> The sysvinit script is called at boot to bring up the non-hotplugged interfaces (lo, builtin ethernet or wifi cards), if they are tagged as "auto".
>
>> I'll set up a VM to try to reproduce this. Do you mean upgrade trusty-to-utopic or utopic-to-vivid?
> Well, finding out the upgrade path that causes /etc/init.d/networking to be disabled is exactly the exercise :-) It might just be a clean utopic install and an upgrade to vivid, but it might be more complicated than that.

Many thanks for your answers. I'll only answer your last paragraph, because we're misunderstanding one another and I don't want to add yet another layer of misunderstanding!

I'd installed 15.04 from a daily/nightly ISO and disabled networking manually. There was no automatic/hidden disabling.

I set up a utopic VM and upgraded it to vivid. I didn't disable networking or mask ifup@ for either, and I didn't get the errors that I'd reported in my initial email.

I've also unmasked ifup@ on my laptop and I'm no longer getting these errors. Since I was getting these errors at every boot with v216, I have to assume that the problem's been fixed by the upgrade to v217.

Apologies for the noise.

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss
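An editorial aside on the masking and unmasking discussed in this message: masking a unit is implemented as a symlink to /dev/null in /etc/systemd/system, which makes the unit unloadable. A minimal sketch of that mechanism in a scratch directory (`SCRATCH` is a stand-in for /etc/systemd/system; do not run this against a live system path):

```shell
#!/bin/sh
# Sketch of what "systemctl mask ifup@.service" effectively does: link the
# unit name to /dev/null so systemd refuses to load it. Demonstrated in a
# throwaway directory rather than the real /etc/systemd/system.
SCRATCH=$(mktemp -d)
ln -s /dev/null "$SCRATCH/ifup@.service"

# The unit now resolves to /dev/null; "systemctl unmask" would simply
# remove this symlink again.
readlink "$SCRATCH/ifup@.service"   # prints: /dev/null
```

This is why a masked ifup@.service silences the udev-triggered starts: the template unit itself can no longer be loaded, regardless of how many interfaces appear.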
Re: 15.04 and systemd
Hey Tom,

Tom H [2014-12-05 8:03 -0500]:
> | $ grep ifup /lib/udev/rules.d/99-systemd.rules
> | SUBSYSTEM=="net", KERNEL!="lo", TAG+="systemd", ENV{SYSTEMD_ALIAS}+="/sys/subsystem/net/devices/$name", ENV{SYSTEMD_WANTS}+="ifup@$name.service"
>
> I noticed this after sending my original email. I'm now using NM (I had to log on to a WEP network!) and I'd meant to check whether unmasking ifup@.service would result in the same errors, because this rule doesn't check whether something other than ifupdown is bringing up the network (if that's even possible in a udev rule).

The rule doesn't have to. ifup will know by itself (through /etc/network/interfaces) whether it's responsible for a particular interface; if not, this is a no-op.

>> This will handle hotplugged interfaces which are covered by ifupdown, i.e. /etc/network/interfaces.
>
> Except that my interfaces file's empty.

Right, then the rule and ifup@.service are irrelevant for your system. This is also why the disabled /etc/init.d/networking init script did not cause actual damage on your system. systemd brings up lo on its own, so you don't even need ifupdown for lo.

> By hotplugged, do you mean when using Debian's allow-hotplug?

I meant adding the hardware while the computer/user session is running. allow-hotplug is Debian's ifupdown declaration for this (but it's not supported directly under Ubuntu).

> I hadn't seen the ifup udev rule when I wrote the above, so I thought that systemd was using the sysvinit networking script to trigger ifup@.service.

The sysvinit script is called at boot to bring up the non-hotplugged interfaces (lo, builtin ethernet or wifi cards), if they are tagged as "auto".

> I'll set up a VM to try to reproduce this. Do you mean upgrade trusty-to-utopic or utopic-to-vivid?

Well, finding out the upgrade path that causes /etc/init.d/networking to be disabled is exactly the exercise :-) It might just be a clean utopic install and an upgrade to vivid, but it might be more complicated than that.

Thanks!
Martin

-- 
Martin Pitt | http://www.piware.de
Ubuntu Developer (www.ubuntu.com) | Debian Developer (www.debian.org)
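Martin's point above, that ifup is a no-op for interfaces not declared in /etc/network/interfaces, can be illustrated with a small emulation. This is a sketch against a made-up sample file, not ifup's actual implementation; `is_managed` is a hypothetical helper:

```shell
#!/bin/sh
# Emulation of the described no-op behaviour: ifup(8) only acts on
# interfaces that have an "iface" stanza in /etc/network/interfaces.
# The sample file and the is_managed helper are illustrative only.
IFACES_FILE=$(mktemp)
cat > "$IFACES_FILE" <<'EOF'
auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp
EOF

is_managed() {
    # "Managed" here means: the interface has an iface stanza.
    grep -q "^iface $1 " "$IFACES_FILE"
}

for dev in eth0 wlan0; do
    if is_managed "$dev"; then
        echo "$dev: managed by ifupdown"
    else
        echo "$dev: no-op, ifupdown is not responsible"
    fi
done
```

With an empty interfaces file, as in Tom's case, every interface falls into the no-op branch, which is why the udev rule and ifup@.service are harmless there.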
Re: 15.04 and systemd
Hey Tom, sorry for the late answer!

Tom H [2014-11-03 15:06 -0500]:
>   Cannot add dependency job for unit systemd-vconsole-setup.service, ignoring: Unit systemd-vconsole-setup.service failed to load: No such file or directory.
>
> is in the output of journalctl until I remove systemd-vconsole-setup.service from Wants= and After=:

This is tracked as https://launchpad.net/bugs/1392970; I'll look into it soon. (It should be mostly cosmetic.)

> I'm using systemd-networkd.service (and libvirt) and ifup@.service isn't enabled:

ifup@.service is for ifupdown; it's entirely unrelated to networkd. It's not supposed to be enabled (there's no [Install] section). It gets triggered through udev rules:

| $ grep ifup /lib/udev/rules.d/99-systemd.rules
| SUBSYSTEM=="net", KERNEL!="lo", TAG+="systemd", ENV{SYSTEMD_ALIAS}+="/sys/subsystem/net/devices/$name", ENV{SYSTEMD_WANTS}+="ifup@$name.service"

This will handle hotplugged interfaces which are covered by ifupdown, i.e. /etc/network/interfaces.

> /etc/init.d/networking is disabled:
>
> # find /etc/rc?.d -name *networking | sort
> /etc/rc0.d/K07networking
> /etc/rc6.d/K07networking
> /etc/rcS.d/K09networking

You are the third person to report this after Didier Roche and Sebastien Bacher, so it isn't pilot error any more. Would you mind filing a bug about it (against ifupdown for now) and describing how you installed/upgraded your system? I'd like to have a reproducer and see where things go wrong. Are you (or someone else) able to reproduce this somehow? Like installing trusty into a schroot/container/VM and dist-upgrading?

> 3) friendly-recovery.service
> https://bugs.launchpad.net/ubuntu/+source/friendly-recovery/+bug/1354937

Fixed in vivid.

> 4) nfs-common, nfs-kernel-server, rpcbind
>
> NFS is broken with systemd as pid 1 because nfs-common only has upstart jobs.

That's https://launchpad.net/bugs/1312976, and indeed you already posted your proposed patches there, thanks!

Thanks!
Martin
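To spell out the trigger mechanism Martin describes: the rule tags the device so systemd tracks it as a .device unit, and SYSTEMD_WANTS attaches a Wants= dependency to that device unit, so the interface appearing pulls in the corresponding ifup@ instance. An annotated restatement of the rule (comments added for this writeup; the rule text itself is as quoted in the message):

```
# /lib/udev/rules.d/99-systemd.rules (excerpt, annotated)
#
#  TAG+="systemd"     - systemd tracks the interface as a .device unit
#  SYSTEMD_ALIAS      - gives that device unit a /sys-based alias name
#  SYSTEMD_WANTS      - adds a Wants= dependency, so the device appearing
#                       starts ifup@<interface>.service
SUBSYSTEM=="net", KERNEL!="lo", TAG+="systemd", ENV{SYSTEMD_ALIAS}+="/sys/subsystem/net/devices/$name", ENV{SYSTEMD_WANTS}+="ifup@$name.service"
```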
15.04 and systemd
I'm running 15.04 with systemd as pid 1 and I want to share some observations.

1) systemd-vconsole-setup.service

  Cannot add dependency job for unit systemd-vconsole-setup.service, ignoring: Unit systemd-vconsole-setup.service failed to load: No such file or directory.

is in the output of journalctl until I remove systemd-vconsole-setup.service from Wants= and After=:

# diff /etc/systemd/system/plymouth-start.service /lib/systemd/system/plymouth-start.service
4,5c4,5
< Wants=systemd-ask-password-plymouth.path
< After=systemd-udev-trigger.service systemd-udevd.service
---
> Wants=systemd-ask-password-plymouth.path systemd-vconsole-setup.service
> After=systemd-vconsole-setup.service systemd-udev-trigger.service systemd-udevd.service

2) ifup@.service

I'm using systemd-networkd.service (and libvirt) and ifup@.service isn't enabled:

# find /{etc,lib}/systemd/system -name *networkd*
/etc/systemd/system/multi-user.target.wants/systemd-networkd.service
/lib/systemd/system/systemd-networkd.service
/lib/systemd/system/systemd-networkd-wait-online.service

# find /{etc,lib}/systemd/system -name *ifup*
/lib/systemd/system/ifup@.service

/etc/init.d/networking is disabled:

# find /etc/rc?.d -name *networking | sort
/etc/rc0.d/K07networking
/etc/rc6.d/K07networking
/etc/rcS.d/K09networking

And yet, unless I run "systemctl mask ifup@.service", I have the following in my logs:

# journalctl _COMM=systemd | grep ifup
Nov 03 13:41:52 yoga systemd[1]: Starting system-ifup.slice.
Nov 03 13:41:52 yoga systemd[1]: Created slice system-ifup.slice.
Nov 03 13:42:02 yoga systemd[1]: Starting ifup for wlan0...
Nov 03 13:42:02 yoga systemd[1]: ifup@wlan0.service: control process exited, code=exited status=1
Nov 03 13:42:02 yoga systemd[1]: ifup@wlan0.service: main process exited, code=exited, status=1/FAILURE
Nov 03 13:42:02 yoga systemd[1]: ifup@wlan0.service: control process exited, code=exited status=1
Nov 03 13:42:02 yoga systemd[1]: Failed to start ifup for wlan0.
Nov 03 13:42:02 yoga systemd[1]: Unit ifup@wlan0.service entered failed state.
Nov 03 13:42:03 yoga systemd[1]: Starting ifup for virbr0...
Nov 03 13:42:03 yoga systemd[1]: ifup@virbr0.service: main process exited, code=exited, status=1/FAILURE
Nov 03 13:42:03 yoga systemd[1]: ifup@virbr0.service: control process exited, code=exited status=1
Nov 03 13:42:03 yoga systemd[1]: ifup@virbr0.service: control process exited, code=exited status=1
Nov 03 13:42:03 yoga systemd[1]: Failed to start ifup for virbr0.
Nov 03 13:42:03 yoga systemd[1]: Unit ifup@virbr0.service entered failed state.
Nov 03 13:42:03 yoga systemd[1]: Starting ifup for virbr0/nic...
Nov 03 13:42:03 yoga systemd[1]: ifup@virbr0-nic.service: main process exited, code=exited, status=1/FAILURE
Nov 03 13:42:03 yoga systemd[1]: ifup@virbr0-nic.service: control process exited, code=exited status=1
Nov 03 13:42:03 yoga systemd[1]: ifup@virbr0-nic.service: control process exited, code=exited status=1
Nov 03 13:42:03 yoga systemd[1]: Failed to start ifup for virbr0/nic.
Nov 03 13:42:03 yoga systemd[1]: Unit ifup@virbr0-nic.service entered failed state.

3) friendly-recovery.service

https://bugs.launchpad.net/ubuntu/+source/friendly-recovery/+bug/1354937 (with apologies for the typo!)

4) nfs-common, nfs-kernel-server, rpcbind

NFS is broken with systemd as pid 1 because nfs-common only has upstart jobs. I've installed and adapted the upstream nfs-utils units and created rpcbind ones.
# diff /etc/systemd/system/nfs-config.service /lib/systemd/system/nfs-config.service
7c7
< ExecStart=/lib/systemd/scripts/nfs-utils_env.sh
---
> ExecStart=/usr/lib/systemd/scripts/nfs-utils_env.sh

# diff /etc/systemd/system/rpc-statd-notify.service /lib/systemd/system/rpc-statd-notify.service
19c19
< ExecStart=-/sbin/sm-notify -d $SMNOTIFYARGS
---
> ExecStart=-/usr/sbin/sm-notify -d $SMNOTIFYARGS

# diff /etc/systemd/system/rpc-statd.service /lib/systemd/system/rpc-statd.service
16,17c16,17
< PIDFile=/run/rpc.statd.pid
< ExecStart=/sbin/rpc.statd --no-notify $STATDARGS
---
> PIDFile=/var/run/rpc.statd.pid
> ExecStart=/usr/sbin/rpc.statd --no-notify $STATDARGS

# cat /lib/systemd/scripts/nfs-utils_env.sh
#!/bin/sh
NFS_CONFIG=/run/sysconfig/nfs-utils

if [ -e $NFS_CONFIG ]; then
    rm -f $NFS_CONFIG
fi

if [ -r /etc/default/nfs-common ]; then
    . /etc/default/nfs-common
fi

if [ -r /etc/default/nfs-kernel-server ]; then
    . /etc/default/nfs-kernel-server
fi

RPCNFSDARGStmp="$RPCNFSDCOUNT $RPCNFSDOPTS"

mkdir -p /run/sysconfig
echo GSSDARGS=\"\" > $NFS_CONFIG
echo RPCIDMAPDARGS=\"$RPCIDMAPDARGS\" >> $NFS_CONFIG
echo SMNOTIFYARGS=\"\" >> $NFS_CONFIG
echo STATDARGS=\"$STATDOPTS\" >> $NFS_CONFIG
echo RPCMOUNTDARGS=\"$RPCMOUNTDOPTS\" >> $NFS_CONFIG
echo RPCNFSDARGS=\"$RPCNFSDARGStmp\" >> $NFS_CONFIG
echo SVCGSSDARGS=\"$RPCSVCGSSDOPTS\" >> $NFS_CONFIG

[Note on RPCIDMAPDARGS: I set it to -p /var/lib/nfs/rpc_pipefs in /etc/default/nfs-common because nfs-idmapd.service was failing without it. The path to rpc_pipefs was defaulting to /run/rpc_pipefs even though
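An editorial note on the generator script quoted above: it writes a KEY="value" environment file that the units then read back via EnvironmentFile=. One way to sanity-check such a script is to write to a scratch path and source the result. A minimal sketch of that round trip (the scratch path and the $STATDOPTS value are made up for illustration):

```shell
#!/bin/sh
# Sketch: write KEY="value" lines the way nfs-utils_env.sh does, then
# source the file back and confirm the values round-trip. Uses a temp
# file as a stand-in for /run/sysconfig/nfs-utils.
NFS_CONFIG=$(mktemp)
STATDOPTS="--port 32765"        # made-up example value

echo STATDARGS=\"$STATDOPTS\" > $NFS_CONFIG
echo SMNOTIFYARGS=\"\" >> $NFS_CONFIG

. "$NFS_CONFIG"                 # as systemd's EnvironmentFile= would read it
echo "STATDARGS=$STATDARGS"     # prints: STATDARGS=--port 32765
```

The escaped quotes in the echo lines are what keep multi-word values like "--port 32765" intact as a single variable when the file is read back.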