Re: [systemd-devel] LLDP from Zyxel – Operation not supported
On Mon, 12 Jan 2015 13:23:40 +0530, Mantas Mikulėnas graw...@gmail.com wrote:

> I enabled LLDP receive for eth* in networkd. It recognizes outgoing packets sent by lldpd (on the computer itself) and by ladvd (on pfSense), but chokes on incoming packets sent by a Zyxel switch:
>
> LLDP: Receive frame failed: Operation not supported

The Zyxel switch sends its port ID with "Port Id Subtype: Locally assigned (7)". The currently supported port ID subtypes are:

LLDP_PORT_SUBTYPE_PORT_COMPONENT
LLDP_PORT_SUBTYPE_INTERFACE_ALIAS
LLDP_PORT_SUBTYPE_INTERFACE_NAME
LLDP_PORT_SUBTYPE_MAC_ADDRESS

We need to add LLDP_PORT_SUBTYPE_LOCALLY_ASSIGNED = 7. Attaching the actual packet.

(By the way, `networkctl lldp` is a bit boring – it'd be more useful to show the SysName instead of the TTL...)

Susant

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel
Re: [systemd-devel] [PATCH] perl-Net-DBus + new interactive authorization
Angelo Naselli wrote on 11/01/15 17:15:
> On 08/01/2015 15:01, Daniel P. Berrange wrote:
> > I've attached the WIP patch FYI, but feel free to ignore it until a properly git-formatted patch is available. FWIW, this would also address this issue: https://rt.cpan.org/Public/Bug/Display.html?id=101369
> Thanks, the patch looks broadly good to me - I only see the need for the conditional compilation, which you already mention.
> FWIW I rebuilt it on Mageia 4 against libdbus1_3-1.6.18-1.8.mga4 and I haven't had any issues, of course. Using it as a user for StartUnit/StopUnit, for instance, I only got a different exception (org.freedesktop.DBus.Error.AccessDenied: Rejected send message), while using it as root worked as before, even when I used the new API.

This is expected unless you have also backported cauldron systemd to MGA4. I think we discussed before that the interactive authorisation stuff was only added in more recent versions of systemd, so this is entirely what I'd expect here.

Col
--
Colin Guthrie colin(at)mageia.org http://colin.guthr.ie/
Day Job: Tribalogic Limited http://www.tribalogic.net/
Open Source: Mageia Contributor http://www.mageia.org/
PulseAudio Hacker http://www.pulseaudio.org/
Trac Hacker http://trac.edgewall.org/
[systemd-devel] cdrom_id and 60-cdrom_id.rules behavior
Hello, a while back, around 2011, cdrom_id gained --eject-media, --lock-media and --unlock-media without much explanation. Recently some people noticed that this might actually be a problem. Reference: http://bugzilla.opensuse.org/show_bug.cgi?id=909418

Here is a scenario:

1. Insert a CD/DVD into the drive.
2. Mount it: mount /dev/sr0 /mnt/my_cd (ensure you don't use the Gnome/KDE auto-mounting, or reproduce this in a server setup).
3. Eject the media (using the hardware button) and insert a new one (a different disc).
4. ls /mnt/my_cd (the output will be empty, or will show the previous media).

Is this expected? Also, I remember a while back (a long time ago) that once you added media to the drive and it was properly mounted, you couldn't eject the media until you unmounted it.

NOTE: This works somewhat OK in a desktop setup, probably due to udisks (using Gnome/KDE), but in the console not really.

--
Robert Milasan L3 Support Engineer SUSE Linux (http://www.suse.com) email: rmila...@suse.com
GPG fingerprint: B6FE F4A8 0FA3 3040 3402 6FE7 2F64 167C 1909 6D1A
Re: [systemd-devel] [PATCH] Break JobNew signal dbus signature by adding JobType.
On 11 January 2015 at 18:08, Dimitri John Ledkov dimitri.j.led...@intel.com wrote:
> At the moment the JobNew and JobRemoved signals are not useful for tracking streams of events. JobType is missing from both of them, and thus one can only track that something is happening and to which units (but not whether something is about to happen, has finished, failed, got aborted, etc.). To get the JobType, one needs to query a property from the job; however, this works only for slow jobs - typically the job is already gone on the systemd side, and thus the subscriber has no chance of querying the job type.

Whilst I still believe this is a valid bug in this signal, there is no urgency in resolving it. In the particular use case that I wanted it for, I think I can use an alternative solution involving generators / templated instance targets.

--
Regards, Dimitri.
Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ.
Re: [systemd-devel] Running system services required for certain filesystems
Colin Guthrie wrote on 12/01/15 10:37:
> So, overall is remote-fs-pre.target sufficient here, or should we look into supporting this in a more hotplug/JIT friendly way?

Digging into this further, I actually noticed a problem with remote-fs-pre.target, at least on my system. It seems that it's no longer activated here, despite me having remote-fs.target enabled, and, as remote-fs-pre.target is static, I cannot specifically enable it. It's my understanding that it should be automatically started.

Also, in an unrelated issue, remote-fs.target seems to be reached *before* my actual remote filesystems are mounted. This all seems a little wrong (although correct in the sense that I am using the nofail option, which is what triggers this lack of waiting).

After a fresh boot here (note specifically how remote-fs-pre.target is not active and that rhome.mount is reached about 40 seconds after remote-fs.target is reached):

[colin@jimmy systemd (master)]$ systemctl status rhome.mount remote-fs.target remote-fs-pre.target network.target nfs-lock.service
● rhome.mount - /rhome
   Loaded: loaded (/etc/fstab)
   Active: active (mounted) since Mon 2015-01-12 10:44:24 GMT; 19min ago
    Where: /rhome
     What: nfs:/home/
     Docs: man:fstab(5)
           man:systemd-fstab-generator(8)
  Process: 19065 ExecMount=/bin/mount -n nfs:/home/ /rhome -t nfs -o _netdev,nfsvers=3,nofail,user,tcp,rsize=8192,wsize=8192,soft (code=exited, status=0/SUCCESS)

● remote-fs.target - Remote File Systems
   Loaded: loaded (/usr/lib/systemd/system/remote-fs.target; enabled)
   Active: active since Mon 2015-01-12 10:43:44 GMT; 19min ago
     Docs: man:systemd.special(7)

● remote-fs-pre.target - Remote File Systems (Pre)
   Loaded: loaded (/usr/lib/systemd/system/remote-fs-pre.target; static)
   Active: inactive (dead)
     Docs: man:systemd.special(7)

● network.target - Network
   Loaded: loaded (/usr/lib/systemd/system/network.target; static)
   Active: active since Mon 2015-01-12 10:44:24 GMT; 19min ago
     Docs: man:systemd.special(7)
           http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget

● nfs-lock.service - NFS file locking service.
   Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; static)
   Active: active (running) since Mon 2015-01-12 10:44:24 GMT; 19min ago
  Process: 19099 ExecStart=/sbin/rpc.statd $STATDARGS (code=exited, status=0/SUCCESS)
  Process: 19074 ExecStartPre=/usr/libexec/nfs-utils/scripts/nfs-lock.preconfig (code=exited, status=0/SUCCESS)
 Main PID: 19101 (rpc.statd)
   CGroup: /system.slice/nfs-lock.service
           └─19101 /sbin/rpc.statd

On my system I think nfs-lock.service was probably started by the (now fixed) horrible /usr/sbin/start-statd callout script implemented in mount.nfs. I do have the nfs-lock unit specified with Before=remote-fs-pre.target, so in the past it was started before any mounts were attempted, but now that it doesn't start, it might not be activated until later.

I'm not sure, but is this just a bug in fstab-generator? i.e. shouldn't it put a Requires=+After=remote-fs-pre.target in its generated .mount units for those mounts that are determined to be remote? (Neither of which is present here.) Or am I missing something? What else is meant to pull in remote-fs-pre.target? The only mention in the code is in src/core/mount.c, but that's just for an After= dep, not a Requires= one. Perhaps the fstab-generator just needs a Requires= dep and mount.c takes care of the After= bit? I don't see any specific commits to fstab-generator since v217, which is what I'm using.

This is my generated unit:

[colin@jimmy systemd (master)]$ cat /run/systemd/generator/rhome.mount
# Automatically generated by systemd-fstab-generator
[Unit]
SourcePath=/etc/fstab
Documentation=man:fstab(5) man:systemd-fstab-generator(8)

[Mount]
What=nfs:/home/
Where=/rhome
Type=nfs
Options=_netdev,nfsvers=3,nofail,user,tcp,rsize=8192,wsize=8192,soft

And the other deps that are added in memory only (not sure why they are not just encoded into the generated .mount unit):
[colin@jimmy systemd (master)]$ systemctl show rhome.mount | grep -Ei 'before|after|want|require'
Requires=-.mount
Wants=network-online.target system.slice
WantedBy=remote-fs.target
Before=umount.target
After=systemd-journald.socket remote-fs-pre.target network.target network-online.target system.slice -.mount
RequiresMountsFor=/

Going back to the fact that remote-fs.target starts before the mounts due to my use of nofail, this is the offending line in fstab-generator:

if (post && !noauto && !nofail && !automount)
        fprintf(f, "Before=%s\n", post);

In an ideal world, I'd still like remote-fs.target (aka post) to wait, but I can see why local-fs.target should not wait (when post contains that value). e.g. I'd like remote-fs.target to be reached eventually, but only after the mount jobs have timed out.

Would a condition of:

if (post && !noauto && (!nofail || streq(post, SPECIAL_REMOTE_FS_TARGET)) && !automount)
Re: [systemd-devel] cdrom_id and 60-cdrom_id.rules behavior
On Mon, Jan 12, 2015 at 02:02:50PM +0100, Oliver Neukum wrote:
> On Mon, 2015-01-12 at 04:46 -0800, Greg KH wrote:
> > > Let me rephrase. Is this desirable?
> > Probably not. But with some hardware, as you have seen, you need to run some type of userspace daemon to poll the device to handle media removal issues when the hardware itself does not report media removal.
> Well, we have been polling in kernel space since 3.13.

Ok, then it seems I don't know what's up in this area at all, so I'm totally out of the loop and am not the one to answer this.

sorry,

greg k-h

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel
Re: [systemd-devel] cdrom_id and 60-cdrom_id.rules behavior
On Mon, 2015-01-12 at 04:46 -0800, Greg KH wrote:
> > Let me rephrase. Is this desirable?
> Probably not. But with some hardware, as you have seen, you need to run some type of userspace daemon to poll the device to handle media removal issues when the hardware itself does not report media removal.

Well, we have been polling in kernel space since 3.13.

> > It doesn't tell us whether the hardware button should eject the medium in the first place.
> Some hardware doesn't allow you to lock the door, so this question comes down to what to do if you try to lock it but it does not happen.

No, currently udev doesn't care. And the choice is pretty clear anyway. We are not going to refuse to use the device, are we? So we'll use it anyway. The question is: do we lock it if we can?

> > We need to be able to handle surprise removal. That doesn't mean we should in effect never lock the door if we can do so.
> I think we support locking the door, if userspace asks to, right?

Yes, and that is not the problem.

> > To do what? The kernel on its own and udev on its own behave differently. What is the correct way?
> The kernel on its own does not dictate this type of policy, it's up to userspace to determine if you want to lock the door when you mount

I am afraid I am forced to strongly disagree with your statement. The kernel does have a default policy. It locks the door upon open() (which is implied in mount()). If you press the button during such a lock, an event is passed to user space. The question is why udev contains a rule that overrides the kernel and, in user space, implements an effective policy of no door locking.

> something, and to poll the device to detect media removal if the device is broken and does not report such things. That is why systems use udisks today, to handle all of these issues. Why not just use it?

Could you elaborate on how to use udisks if udev has a rule that unconditionally ejects media when the button is pressed? And what is wrong with the kernel's default?
Regards
Oliver
Re: [systemd-devel] [PATCH] Add usernames as arguments to tmpfiles ignore directives.
On Thursday 2015-01-08 21:29, Zbigniew Jędrzejewski-Szmek wrote:
> On Thu, Jan 08, 2015 at 01:37:57PM +0100, Thomas Blume wrote:
> > Currently, systemd can only ignore files specified by their path during tmpdir cleanup. This patch adds the feature to give usernames as an argument. During cleanup the file ownership is checked, and files that match the specified usernames are ignored. For example, you could give: X /tmp/* - - - - testuser3,testuser2
> I think the patch is useful, but the syntax is wrong. We already have a field for the user name - it is the 4th column. The advantage is that it would be naturally possible to extend it to groups.

I was looking at the UID column, but it seems that only one username can be passed that way. For a list of usernames, I'd have to tweak the get_user_creds function, which seemed too intrusive to me. In addition, i->uid_set is set when a UID is present, and I didn't want to have some undesired side effects from this.

Regards
Thomas Blume
--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg) Maxfeldstr. 5 / D-90409 Nürnberg / Phone: +49-911-740 53 - 0 / VOIP: 3919
GPG 2048R/2CD4D3E8 9A50 048F 1C73 59AA 4D2E 424E B3C6 3FD9 2CD4 D3E8
Re: [systemd-devel] [PATCH] perl-Net-DBus + new interactive authorization
Daniel P. Berrange wrote on 12/01/15 11:40:
> On Mon, Jan 12, 2015 at 11:37:12AM +, Colin Guthrie wrote:
> > Angelo Naselli wrote on 12/01/15 10:30:
> > > On 12/01/2015 10:16, Colin Guthrie wrote:
> > > > Angelo Naselli wrote on 11/01/15 17:15:
> > > > > FWIW I rebuilt it on Mageia 4 against libdbus1_3-1.6.18-1.8.mga4 and I haven't had any issues, of course. Using it as a user for StartUnit/StopUnit, for instance, I only got a different exception (org.freedesktop.DBus.Error.AccessDenied: Rejected send message), while using it as root worked as before, even when I used the new API.
> > > > This is expected unless you have also backported cauldron systemd to MGA4. I think we discussed before that the interactive authorisation stuff was only added in more recent versions of systemd, so this is entirely what I'd expect here.
> > > eh eh eh, mine was only to say that even if I haven't disabled anything and compiled everything against the old library, I didn't see any crash or regression, just a different exception for the same -not working- thing. But I haven't tested anything, of course :)
> > Oh, right! Gotcha, so this is just about the comment regarding the conditional compilation against older libdbus. Sorry for misinterpreting, and thanks for testing that.
> > OK, I'll have a further look at it to see if anything special is needed; perhaps the perl binding stuff just works without too much faff here for non-present APIs (I'm certainly not an expert with this stuff!).
> I think you'll just need some #ifdef magic in the DBus.xs file to deal with the new APIs being missing. Perhaps just write a stub function in the DBus.xs that just raises a suitable perl error (see the _croak_error source in DBus.xs for an example of raising errors).

Perhaps, but I think in this case it would be better to simply silently ignore the error, as this is more of a nice additional feature rather than a core part.
I think if someone wrote some perl code that took advantage of this, they would prefer it to just work as expected rather than have any need to push conditional checks up into the calling perl code.

Col
--
Colin Guthrie colin(at)mageia.org http://colin.guthr.ie/
Day Job: Tribalogic Limited http://www.tribalogic.net/
Open Source: Mageia Contributor http://www.mageia.org/
PulseAudio Hacker http://www.pulseaudio.org/
Trac Hacker http://trac.edgewall.org/
Re: [systemd-devel] cdrom_id and 60-cdrom_id.rules behavior
On Mon, Jan 12, 2015 at 09:39:30AM +0100, Robert Milasan wrote:
> Hello, a while back, around 2011, cdrom_id gained --eject-media, --lock-media and --unlock-media without much explanation. Recently some people noticed that this might actually be a problem. Reference: http://bugzilla.opensuse.org/show_bug.cgi?id=909418
> Here is a scenario:
> 1. Insert a CD/DVD into the drive.
> 2. Mount it: mount /dev/sr0 /mnt/my_cd (ensure you don't use the Gnome/KDE auto-mounting, or reproduce this in a server setup).
> 3. Eject the media (using the hardware button) and insert a new one (a different disc).
> 4. ls /mnt/my_cd (the output will be empty, or will show the previous media).
> Is this expected?

Yes, because you didn't unmount the media and tell the kernel that the filesystem is now gone.

> Also, I remember a while back (a long time ago) that once you added media to the drive and it was properly mounted, you couldn't eject the media until you unmounted it.

It depends on the hardware; some devices support this, others don't.

> NOTE: This works somewhat OK in a desktop setup, probably due to udisks (using Gnome/KDE), but in the console not really.

Then use udisks :)

thanks,

greg k-h
Re: [systemd-devel] cdrom_id and 60-cdrom_id.rules behavior
On Mon, 2015-01-12 at 04:15 -0800, Greg KH wrote:
> On Mon, Jan 12, 2015 at 09:39:30AM +0100, Robert Milasan wrote:
> > Hello, a while back, around 2011, cdrom_id gained --eject-media, --lock-media and --unlock-media without much explanation. Recently some people noticed that this might actually be a problem. Reference: http://bugzilla.opensuse.org/show_bug.cgi?id=909418
> > Here is a scenario:
> > 1. Insert a CD/DVD into the drive.
> > 2. Mount it: mount /dev/sr0 /mnt/my_cd (ensure you don't use the Gnome/KDE auto-mounting, or reproduce this in a server setup).
> > 3. Eject the media (using the hardware button) and insert a new one (a different disc).
> > 4. ls /mnt/my_cd (the output will be empty, or will show the previous media).
> > Is this expected?
> Yes, because you didn't unmount the media and tell the kernel that the filesystem is now gone.

You are correct, but not really helpful :)

Let me rephrase. Is this desirable? It doesn't tell us whether the hardware button should eject the medium in the first place.

> > Also, I remember a while back (a long time ago) that once you added media to the drive and it was properly mounted, you couldn't eject the media until you unmounted it.
> It depends on the hardware; some devices support this, others don't.

If the device does not support locking the door, the question is moot. We need to be able to handle surprise removal. That doesn't mean we should in effect never lock the door if we can do so.

> > NOTE: This works somewhat OK in a desktop setup, probably due to udisks (using Gnome/KDE), but in the console not really.
> Then use udisks :)

To do what? The kernel on its own and udev on its own behave differently. What is the correct way?

Regards
Oliver
Re: [systemd-devel] [PATCH] perl-Net-DBus + new interactive authorization
Angelo Naselli wrote on 12/01/15 10:30:
> On 12/01/2015 10:16, Colin Guthrie wrote:
> > Angelo Naselli wrote on 11/01/15 17:15:
> > > FWIW I rebuilt it on Mageia 4 against libdbus1_3-1.6.18-1.8.mga4 and I haven't had any issues, of course. Using it as a user for StartUnit/StopUnit, for instance, I only got a different exception (org.freedesktop.DBus.Error.AccessDenied: Rejected send message), while using it as root worked as before, even when I used the new API.
> > This is expected unless you have also backported cauldron systemd to MGA4. I think we discussed before that the interactive authorisation stuff was only added in more recent versions of systemd, so this is entirely what I'd expect here.
> eh eh eh, mine was only to say that even if I haven't disabled anything and compiled everything against the old library, I didn't see any crash or regression, just a different exception for the same -not working- thing. But I haven't tested anything, of course :)

Oh, right! Gotcha, so this is just about the comment regarding the conditional compilation against older libdbus. Sorry for misinterpreting, and thanks for testing that.

OK, I'll have a further look at it to see if anything special is needed; perhaps the perl binding stuff just works without too much faff here for non-present APIs (I'm certainly not an expert with this stuff!).

Cheers!

Col
--
Colin Guthrie colin(at)mageia.org http://colin.guthr.ie/
Day Job: Tribalogic Limited http://www.tribalogic.net/
Open Source: Mageia Contributor http://www.mageia.org/
PulseAudio Hacker http://www.pulseaudio.org/
Trac Hacker http://trac.edgewall.org/
[systemd-devel] systemctl status not showing still running processes in inactive .mount unit cgroups (NFS specifically)
Hi, Looking into a thoroughly broken nfs-utils package here, I noticed a quirk in systemctl status and in umount behaviour.

In the latest nfs-utils there is a helper binary shipped upstream called /usr/sbin/start-statd (I'll send a separate mail talking about this infrastructure with the subject: "Running system services required for certain filesystems"). It sets the PATH to /sbin:/usr/sbin and then tries to run systemctl (something that is already broken here, as systemctl is in bin, not sbin) to start statd.service (again, this seems to be broken, as the unit appears to be called nfs-statd.service upstream... go figure). Either way, we call the service nfs-lock.service here (for legacy reasons). If this command fails (which it does for us, for two reasons), it runs rpc.statd --no-notify directly. This binary then runs in the context of the .mount unit and thus in the .mount cgroup.

That seems to work OK from a practical perspective (things worked and I got my mount) but is obviously suboptimal, especially when the mount point is unmounted.
In my case, I called umount but the rpc.statd process was still running:

[root@jimmy nfs-utils]$ pscg | grep 3256
3256 rpcuser 4:devices:/system.slice/mnt-media-scratch.mount,1:name=systemd:/system.slice/mnt-media-scratch.mount rpc.statd --no-notify
[root@jimmy nfs-utils]$ systemctl status mnt-media-scratch.mount
● mnt-media-scratch.mount - /mnt/media/scratch
   Loaded: loaded (/etc/fstab)
   Active: inactive (dead) since Mon 2015-01-12 09:58:52 GMT; 1min 12s ago
    Where: /mnt/media/scratch
     What: marley.rasta.guthr.ie:/mnt/media/scratch
     Docs: man:fstab(5)
           man:systemd-fstab-generator(8)

Jan 07 14:55:13 jimmy mount[3216]: /usr/sbin/start-statd: line 8: systemctl: command not found
Jan 07 14:55:14 jimmy rpc.statd[3256]: Version 1.3.0 starting
Jan 07 14:55:14 jimmy rpc.statd[3256]: Flags: TI-RPC
[root@jimmy nfs-utils]$

As you can see, the mount is dead but the process is still running, and the systemctl status output does not correctly show the status of binaries running in the cgroup. When the mount is active, the process does actually exist in this unit's context (provided systemd is used to do the mount - if you call the mount /path command separately, the rpc.statd process can end up in weird cgroups - such as your user session!).

Anyway, assuming the process is in the .mount unit cgroup, should systemd detect the umount and kill the processes accordingly? And if not, should calling systemctl status on .mount units show processes even when the unit is in an inactive state?

This is with 217 with a few cherry-picks on top, so it might have been addressed by now.

Cheers
Col
--
Colin Guthrie colin(at)mageia.org http://colin.guthr.ie/
Day Job: Tribalogic Limited http://www.tribalogic.net/
Open Source: Mageia Contributor http://www.mageia.org/
PulseAudio Hacker http://www.pulseaudio.org/
Trac Hacker http://trac.edgewall.org/
Re: [systemd-devel] [PATCH] perl-Net-DBus + new interactive authorization
On Mon, Jan 12, 2015 at 11:37:12AM +, Colin Guthrie wrote:
> Angelo Naselli wrote on 12/01/15 10:30:
> > On 12/01/2015 10:16, Colin Guthrie wrote:
> > > Angelo Naselli wrote on 11/01/15 17:15:
> > > > FWIW I rebuilt it on Mageia 4 against libdbus1_3-1.6.18-1.8.mga4 and I haven't had any issues, of course. Using it as a user for StartUnit/StopUnit, for instance, I only got a different exception (org.freedesktop.DBus.Error.AccessDenied: Rejected send message), while using it as root worked as before, even when I used the new API.
> > > This is expected unless you have also backported cauldron systemd to MGA4. I think we discussed before that the interactive authorisation stuff was only added in more recent versions of systemd, so this is entirely what I'd expect here.
> > eh eh eh, mine was only to say that even if I haven't disabled anything and compiled everything against the old library, I didn't see any crash or regression, just a different exception for the same -not working- thing. But I haven't tested anything, of course :)
> Oh, right! Gotcha, so this is just about the comment regarding the conditional compilation against older libdbus. Sorry for misinterpreting, and thanks for testing that.
> OK, I'll have a further look at it to see if anything special is needed; perhaps the perl binding stuff just works without too much faff here for non-present APIs (I'm certainly not an expert with this stuff!).

I think you'll just need some #ifdef magic in the DBus.xs file to deal with the new APIs being missing.
Perhaps just write a stub function in the DBus.xs that just raises a suitable perl error (see the _croak_error source in DBus.xs for an example of raising errors).

Regards, Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Re: [systemd-devel] cdrom_id and 60-cdrom_id.rules behavior
On Mon, Jan 12, 2015 at 01:33:46PM +0100, Oliver Neukum wrote:
> On Mon, 2015-01-12 at 04:15 -0800, Greg KH wrote:
> > On Mon, Jan 12, 2015 at 09:39:30AM +0100, Robert Milasan wrote:
> > > Hello, a while back, around 2011, cdrom_id gained --eject-media, --lock-media and --unlock-media without much explanation. Recently some people noticed that this might actually be a problem. Reference: http://bugzilla.opensuse.org/show_bug.cgi?id=909418
> > > Here is a scenario:
> > > 1. Insert a CD/DVD into the drive.
> > > 2. Mount it: mount /dev/sr0 /mnt/my_cd (ensure you don't use the Gnome/KDE auto-mounting, or reproduce this in a server setup).
> > > 3. Eject the media (using the hardware button) and insert a new one (a different disc).
> > > 4. ls /mnt/my_cd (the output will be empty, or will show the previous media).
> > > Is this expected?
> > Yes, because you didn't unmount the media and tell the kernel that the filesystem is now gone.
> You are correct, but not really helpful :)

Sorry, it's early in the morning for me :)

> Let me rephrase. Is this desirable?

Probably not. But with some hardware, as you have seen, you need to run some type of userspace daemon to poll the device to handle media removal issues when the hardware itself does not report media removal.

> It doesn't tell us whether the hardware button should eject the medium in the first place.

Some hardware doesn't allow you to lock the door, so this question comes down to what to do if you try to lock it but it does not happen.

> > > Also, I remember a while back (a long time ago) that once you added media to the drive and it was properly mounted, you couldn't eject the media until you unmounted it.
> > It depends on the hardware; some devices support this, others don't.
> If the device does not support locking the door, the question is moot.

Agreed.

> We need to be able to handle surprise removal. That doesn't mean we should in effect never lock the door if we can do so.

I think we support locking the door, if userspace asks to, right?
> > > NOTE: This works somewhat OK in a desktop setup, probably due to udisks (using Gnome/KDE), but in the console not really.
> > Then use udisks :)
> To do what? The kernel on its own and udev on its own behave differently. What is the correct way?

The kernel on its own does not dictate this type of policy; it's up to userspace to determine if you want to lock the door when you mount something, and to poll the device to detect media removal if the device is broken and does not report such things. That is why systems use udisks today, to handle all of these issues. Why not just use it?

thanks,

greg k-h
[systemd-devel] Running system services required for certain filesystems
Hi, On a related note to my previous message (subject: "systemctl status not showing still running processes in inactive .mount unit cgroups (NFS specifically)"), when mount.nfs runs to mount NFS filesystems, it shells out to /usr/sbin/start-statd, which in turn calls systemctl to start the rpc.statd service. This feels ugly. We have a sync point for this in the form of remote-fs-pre.target, but for some reason the upstream nfs-utils people still deem /usr/sbin/start-statd a required component.

But it did get me thinking about how clean remote-fs-pre.target really is. We do need to make sure rpc.statd is running before any NFS filesystems are mounted, and relying on the blunt instrument of remote-fs-pre.target seems kinda wrong. It should be more on demand, e.g. when I start an NFS mount, it should be able to specify that the rpc.statd service is a prerequisite.

So my question is: is there a cleaner way to have dependencies like this specified for particular FS types? With the goal being that before systemd will try and mount any NFS filesystems, it will make sure that nfs-lock.service (or statd.service or nfs-statd.service or whatever its name really should be) is running.

I kinda want Requires=nfs-lock.service and After=nfs-lock.service definitions to go into all my *.mount units for any NFS filesystem, but in a way that means I don't have to actually specify this manually in my fstab. Something like a pseudo service - systemd-fstype@nfs.service with Type=oneshot + RemainAfterExit=true + Exec=/usr/bin/true - that is run by systemd before it does its mounting, to act as a sync point (thus allowing nfs-lock.service to just put RequiredBy=systemd-fstype@nfs.service + Before=systemd-fstype@nfs.service, and all is well). There shouldn't really be a strong need for any actual changes to systemd-fstype@.service (or any systemd-fstype@nfs.service.d dropins) here, as it can all be specified the other way around in nfs-lock.service.
But that said, using a .service unit as a sync point is fugly. That's what .targets are for, but we don't support (AFAIK) templated targets.

So, overall is remote-fs-pre.target sufficient here, or should we look into supporting this in a more hotplug/JIT friendly way?

(FWIW, I know we could extend the fstab-generator to include this in the generated .mount units, but baking such deps logic in there seems wrong anyway, as it wouldn't apply to manual .mount units outside of fstab, and it's not really where the dep info should live anyway.)

Thoughts?

Col
--
Colin Guthrie gmane(at)colin.guthr.ie http://colin.guthr.ie/
Day Job: Tribalogic Limited http://www.tribalogic.net/
Open Source: Mageia Contributor http://www.mageia.org/
PulseAudio Hacker http://www.pulseaudio.org/
Trac Hacker http://trac.edgewall.org/
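To make the pseudo-service idea concrete, the units sketched in the message might look roughly like this. These are entirely hypothetical unit names and contents, reconstructed from the prose above; nothing like this ships in systemd:

```ini
# /usr/lib/systemd/system/systemd-fstype@.service (hypothetical)
[Unit]
Description=Sync point for %i filesystem mounts

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/bin/true
```

with nfs-lock.service hooking itself in "the other way around", noting that RequiredBy= is an [Install] directive rather than a [Unit] one:

```ini
# Additions to nfs-lock.service (hypothetical)
[Unit]
Before=systemd-fstype@nfs.service

[Install]
RequiredBy=systemd-fstype@nfs.service
```

Each NFS .mount unit would then only need Requires= and After= on systemd-fstype@nfs.service, which is exactly the role a templated target would fill more cleanly if systemd supported one.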
Re: [systemd-devel] [PATCH] perl-Net-DBus + new interactive authorization
On Mon, Jan 12, 2015 at 12:04:42PM +, Colin Guthrie wrote: Daniel P. Berrange wrote on 12/01/15 11:40: On Mon, Jan 12, 2015 at 11:37:12AM +, Colin Guthrie wrote: Angelo Naselli wrote on 12/01/15 10:30: Il 12/01/2015 10:16, Colin Guthrie ha scritto: Angelo Naselli wrote on 11/01/15 17:15: FWIW i rebuilt it in mageia 4 and libdbus1_3-1.6.18-1.8.mga4 I haven't any issues of course. Using it as user for StartUnit/StopUnit for instance i got a only a different exception org.freedesktop.DBus.Error.AccessDenied: Rejected send message while using as root worked as before even if i used new api. This is expected unless you have also backported cauldron systemd to MGA4. I think we discussed before that the interactive authorisation stuff was only added in more recent versions of systemd, so this is entirely what I'd expect here. eh eh eh, mine was only to say that even if i haven't disabled anything and compiled all against the old library i didn't see any crash or regression, just a different exception for the same -not working- thing. But i haven't test anything of course :) Oh, right! Gotcha, so this is just about the comment regarding the conditional compilation against older libdbus. Sorry for misinterpreting and thanks for testing that. OK, I'll have a further look at it to see if anything special is needed, perhaps the perl binding stuff just works without too much faff here for non-present APIs (I'm certainly not an expert with this stuff!). I think you'll just need some #ifdef magic in the DBus.xs file to deal with the new APIs being missing. Perhaps just write a stub function in the DBus.xs that just raises a suitable perl error (see _croak_error source in DBus.xs for example on raising errors) Perhaps, but I think in this case it would be better to simply silently ignore the error as this is more of a nice additional feature rather than a core part. 
I think if someone wrote some perl code that took advantage of this, they would prefer it would just work as expected rather than have any need to push up conditional checks into the calling perl code. Sure, if it is semantically reasonable from an app's POV for it to be a no-op on old DBus, that's fine too. Daniel -- |: http://berrange.com -o-http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| ___ systemd-devel mailing list systemd-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/systemd-devel
Re: [systemd-devel] [systemd-commits] src/core
On Mon, Jan 12, 2015 at 05:03:51AM -0800, Daniel Mack wrote: src/core/mount.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) New commits: commit 0c47569ac9eb365ebeb9342f47fb98d52bcc4704 Author: Daniel Mack dan...@zonque.org Date: Mon Jan 12 13:46:39 2015 +0100 core/mount: use isempty() to check for empty strings Thanks! Zbyszek ___ systemd-devel mailing list systemd-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/systemd-devel
[systemd-devel] [PATCH] random-seed: Avoid errors (and unit failure) when we cannot write random-seed file.
When we call 'systemd-random-seed load' with a read-only /var/lib/systemd, the cleanup code (which rewrites the random-seed file) will fail and exit. Arguably, if the filesystem is read-only and the random-seed file exists, this may well be quite bad for entropy on subsequent reboots, but it should still not make the unit fail.
---
 src/random-seed/random-seed.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/src/random-seed/random-seed.c b/src/random-seed/random-seed.c
index 06c1239..99497d6 100644
--- a/src/random-seed/random-seed.c
+++ b/src/random-seed/random-seed.c
@@ -38,6 +38,7 @@ int main(int argc, char *argv[]) {
         ssize_t k;
         int r;
         FILE *f;
+        bool cleanup_seed_file = true;

         if (argc != 2) {
                 log_error("This program requires one argument.");
@@ -90,6 +91,7 @@ int main(int argc, char *argv[]) {
                         r = -errno;
                         goto finish;
                 }
+                cleanup_seed_file = false;
         }

         random_fd = open("/dev/urandom", O_RDWR|O_CLOEXEC|O_NOCTTY, 0600);
@@ -143,17 +145,19 @@ int main(int argc, char *argv[]) {
         /* This is just a safety measure. Given that we are root and
          * most likely created the file ourselves the mode and owner
          * should be correct anyway. */
-        fchmod(seed_fd, 0600);
-        fchown(seed_fd, 0, 0);
+        if (cleanup_seed_file) {
+                fchmod(seed_fd, 0600);
+                fchown(seed_fd, 0, 0);

-        k = loop_read(random_fd, buf, buf_size, false);
-        if (k <= 0) {
-                log_error("Failed to read new seed from /dev/urandom: %s", r < 0 ? strerror(-r) : "EOF");
-                r = k == 0 ? -EIO : (int) k;
-        } else {
-                r = loop_write(seed_fd, buf, (size_t) k, false);
-                if (r < 0)
-                        log_error_errno(r, "Failed to write new random seed file: %m");
+                k = loop_read(random_fd, buf, buf_size, false);
+                if (k <= 0) {
+                        log_error("Failed to read new seed from /dev/urandom: %s", r < 0 ? strerror(-r) : "EOF");
+                        r = k == 0 ? -EIO : (int) k;
+                } else {
+                        r = loop_write(seed_fd, buf, (size_t) k, false);
+                        if (r < 0)
+                                log_error_errno(r, "Failed to write new random seed file: %m");
+                }
         }

 finish:
--
2.2.1
___ systemd-devel mailing list systemd-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/systemd-devel
Re: [systemd-devel] systemctl status not showing still running processes in inactive .mount unit cgroups (NFS specifically)
Hello

On 01/12/2015 05:34 AM, Colin Guthrie wrote: Hi, Looking into a thoroughly broken nfs-utils package here, I noticed a quirk in systemctl status and in umount behaviour. In the latest nfs-utils there is a helper binary shipped upstream called /usr/sbin/start-statd (I'll send a separate mail talking about this infrastructure with subject: Running system services required for certain filesystems). It sets the PATH to /sbin:/usr/sbin then tries to run systemctl (something that is already broken here, as systemctl is in bin, not sbin) to start statd.service (again, this seems to be broken, as the unit appears to be called nfs-statd.service upstream... go figure).

The PATH problem has been fixed in the latest nfs-utils.

Either way we call the service nfs-lock.service here (for legacy reasons).

With the latest nfs-utils, rpc-statd.service is now called from start-statd. But yes, I did symlink nfs-lock.service to rpc-statd.service when I moved to the upstream systemd scripts.

If this command fails (which it does for us for two reasons) it runs rpc.statd --no-notify directly. This binary then runs in the context of the .mount unit and thus in the .mount cgroup.

What are the two reasons rpc.statd --no-notify fails?

That seems to work OK (from a practical perspective things worked OK and I got my mount) but is obviously suboptimal, especially when the mount point is unmounted. In my case, I called umount but the rpc.statd process was still running:

What is the expectation? That the umount should bring down rpc.statd?
[root@jimmy nfs-utils]$ pscg | grep 3256
3256 rpcuser 4:devices:/system.slice/mnt-media-scratch.mount,1:name=systemd:/system.slice/mnt-media-scratch.mount rpc.statd --no-notify
[root@jimmy nfs-utils]$ systemctl status mnt-media-scratch.mount
● mnt-media-scratch.mount - /mnt/media/scratch
   Loaded: loaded (/etc/fstab)
   Active: inactive (dead) since Mon 2015-01-12 09:58:52 GMT; 1min 12s ago
    Where: /mnt/media/scratch
     What: marley.rasta.guthr.ie:/mnt/media/scratch
     Docs: man:fstab(5) man:systemd-fstab-generator(8)
Jan 07 14:55:13 jimmy mount[3216]: /usr/sbin/start-statd: line 8: systemctl: command not found
Jan 07 14:55:14 jimmy rpc.statd[3256]: Version 1.3.0 starting
Jan 07 14:55:14 jimmy rpc.statd[3256]: Flags: TI-RPC
[root@jimmy nfs-utils]$

Again, this is fixed with the latest nfs-utils...

Question: why are you using v3 mounts? With v4 all this goes away. steved.

As you can see, the mount is dead but the process is still running, and the systemctl status output does not correctly show the status of binaries running in the cgroup. When the mount is active the process does actually exist in this unit's context (provided systemd is used to do the mount - if you call the mount /path command separately, the rpc.statd process can end up in weird cgroups - such as your user session!).

Anyway, assuming the process is in the .mount unit cgroup, should systemd detect the umount and kill the processes accordingly, and if not, should calling systemctl status on .mount units show processes even if the unit is in an inactive state?

This is with 217 with a few cherry-picks on top, so it might have been addressed by now.

Cheers

Col ___ systemd-devel mailing list systemd-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/systemd-devel
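The `pscg` command in the transcript above appears to be a local helper or alias, not a standard tool; the same information can be read straight from /proc on any Linux system with cgroups. A sketch:

```shell
#!/bin/sh
# Show which cgroup(s) a PID belongs to, similar in spirit to the
# `pscg` helper used above (assumed to be a local alias, not a
# standard tool). Defaults to the current shell's PID.
pid="${1:-$$}"
if [ -r "/proc/$pid/cgroup" ]; then
    # /proc/PID/cgroup has one line per hierarchy: id:controllers:path
    cat "/proc/$pid/cgroup"
else
    echo "no cgroup info for PID $pid" >&2
    exit 1
fi
```

Running it against the rpc.statd PID would show the stale /system.slice/mnt-media-scratch.mount membership even after the mount unit has gone inactive.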
[systemd-devel] Wierd Segfault in sd_rtnl_message_unref (libnss_myhostname.so.2 by sshd )
Hi.

On Arch x86_64 using 218-1 (the first packaging of 218) I have run into the following weird problem. When trying to connect over IPv6 to an ssh server running dual-stack (both IPv4 and IPv6), ssh segfaults when I have loaded the full IPv4 BGP routing table (~500k+ routes). IPv4 connections work for some reason, and IPv6 recovers if I kill the routing daemon (bird).

The stack trace of the core file starts with:

Stack trace of thread 515:
#0 0x7f48334a3dd5 _int_free (libc.so.6)
#1 0x7f4834a1e62a sd_rtnl_message_unref (libnss_myhostname.so.2)
#2 0x7f4834a1e657 sd_rtnl_message_unref (libnss_myhostname.so.2)

and continues with that line (#1 and #2) until frame 63.

I have looked in src/libsystemd/sd-rtnl/rtnl-message.c and have two observations (my C is very rusty, so feel free to correct me). Line 589: shouldn't the line

if (m && REFCNT_DEC(m->n_ref) <= 0) {

be

if (m && REFCNT_DEC(m->n_ref) >= 0) {

(i.e. greater-than-or-equal instead of less-than-or-equal)? Also, perhaps add a test of whether m->next is equal to m on line 597.

Thank you in advance

Svenne ___ systemd-devel mailing list systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel
Re: [systemd-devel] Wierd Segfault in sd_rtnl_message_unref (libnss_myhostname.so.2 by sshd )
On Mon, Jan 12, 2015 at 10:08:30PM +0100, Svenne Krap wrote:

Hi. On Arch x86_64 using 218-1 (the first packaging of 218) I have run into the following weird problem. When trying to connect over IPv6 to an ssh server running dual-stack (both IPv4 and IPv6), ssh segfaults when I have loaded the full IPv4 BGP routing table (~500k+ routes). IPv4 connections work for some reason, and IPv6 recovers if I kill the routing daemon (bird). The stack trace of the core file starts with: Stack trace of thread 515: #0 0x7f48334a3dd5 _int_free (libc.so.6) #1 0x7f4834a1e62a sd_rtnl_message_unref (libnss_myhostname.so.2) #2 0x7f4834a1e657 sd_rtnl_message_unref (libnss_myhostname.so.2)

The reference counting might be broken. It is in other places, unfortunately.

And continues with that line (#1 and #2) until frame 63. I have looked in src/libsystemd/sd-rtnl/rtnl-message.c and have two observations (my C is very rusty, so feel free to correct me). Line 589, shouldn't the line if (m && REFCNT_DEC(m->n_ref) <= 0) {

No, it's supposed to do the freeing when it reaches 0. It is spelled as <= 0, but that is either simply misleading, or a workaround for a bug.

Zbyszek ___ systemd-devel mailing list systemd-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/systemd-devel
Re: [systemd-devel] systemctl status not showing still running processes in inactive .mount unit cgroups (NFS specifically)
2015-01-12 22:43 GMT+01:00 Colin Guthrie gm...@colin.guthr.ie: But FWIW, your check for whether systemctl is installed via calling systemctl --help is IMO not very neat. If you're using bash here anyway, you might as well just do an if [ -d /sys/fs/cgroup/systemd ]; then type check, or if you want to be super sure you could do: if mountpoint -q /sys/fs/cgroup/systemd; then

The canonical way to check if systemd is the active PID 1 is [1]: test -d /run/systemd/system

[1] http://www.freedesktop.org/software/systemd/man/sd_booted.html ___ systemd-devel mailing list systemd-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/systemd-devel
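The three checks mentioned in this exchange can be put side by side; a sketch, from loosest to the canonical sd_booted(3) test:

```shell
#!/bin/sh
# Three ways (discussed above) to detect systemd at runtime.

# 1. Loose: the named cgroup hierarchy directory merely exists.
if [ -d /sys/fs/cgroup/systemd ]; then
    echo "cgroup hierarchy directory exists"
fi

# 2. Stricter: the hierarchy is actually mounted.
if mountpoint -q /sys/fs/cgroup/systemd 2>/dev/null; then
    echo "cgroup hierarchy is a mountpoint"
fi

# 3. Canonical, per sd_booted(3): systemd is the active PID 1
#    if and only if /run/systemd/system/ exists.
if [ -d /run/systemd/system ]; then
    echo "systemd is PID 1 (sd_booted check)"
else
    echo "systemd is not PID 1"
fi
```

Only the third check is documented as authoritative; the first two can misfire on systems where the cgroup tree is set up by something other than systemd.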
Re: [systemd-devel] PartOf= Question
Hi Steve,

I think I maybe (coincidentally) partly answered some of this question (or at least outlined the problem) in my other reply on a different thread made just a moment ago.

Steve Dickson wrote on 12/01/15 19:58: Hello, The nfs-server service starts both the rpc-mountd service and the rpc-idmapd service when the server is started. But it only brings down the rpc-mountd service when the NFS server is stopped. I want the nfs-server service to bring down both services when the server is stopped.

Do you mean nfs-mountd/idmapd.service here? That's the name I see in git, but perhaps you've renamed them locally to match their binary names? (I'm forever mixing up the rpc- vs. nfs- prefixes too, FWIW!)

Question: are idmapd and mountd *only* required for the server? I thought that idmapd was at least needed for the client too (but this could easily be a problem with my understanding, so feel free to correct me). I'll assume I'm wrong for the sake of argument, as you seem to want this behaviour! :)

Looking at the difference between the rpc-mountd and rpc-idmapd services, I noted the rpc-mountd service had: PartOf=nfs-server.service PartOf=nfs-utils.service

I would strongly discourage the use of multiple PartOf= directives. Note that, as the man page describes, PartOf= is a one-way propagation. That is, if nfs-server is stopped, started or restarted, it will propagate to rpc-mountd. But likewise, if nfs-utils is stopped, started or restarted, it too will propagate, and this might then go out of sync with nfs-server.

In these units, as nfs-mountd is required by nfs-server.service, if nfs-utils is restarted, then (I think this is correct) nfs-server will have to go down because its requirement is no longer true (during the window when nfs-mountd.service is restarting), but there is nothing that will then start nfs-server again after things are back up. So with two PartOf= directives here, issuing systemctl restart nfs-utils while nfs-server is started will result in nfs-server being stopped.
Now, in this particular case, nfs-server is not really a daemon, so things will physically work, but the state will be really confusing to a sysadmin!

If rpc-mountd and rpc-idmapd are essentially bound to nfs-server.service's state, then I would remove both PartOf= lines and simply add a BindsTo=nfs-server.service line. Forget nfs-utils.service (which I think should be done generally anyway). This binds both units' state to that of nfs-server.service: if it is started, they will start; if it is stopped, they will stop. If they are individually stopped, so will be nfs-server (it Requires= them). They should thus continue to not have any [Install] section.

PartOf= is looser than BindsTo=. It was introduced to allow targets to be restarted and have all their units restart automatically (often this would be templated units), but also to allow the individual units that are part of that target to be restarted individually without affecting other units in the same target. Perhaps this is actually what you want here, e.g. to be able to restart idmapd on its own without having this propagate to the nfs-server.service too? If so, I believe you should use Wants= in nfs-server.service rather than Requires=, as that way the individual units can be restarted without actually affecting the state of the nfs-server itself, but you do have to ensure they are enabled in some other way (as Wants= will not pull them in automatically).

and the rpc-idmapd service did not. So I changed rpc-idmapd.service to:

[Unit]
Description=NFSv4 ID-name mapping service
Requires=var-lib-nfs-rpc_pipefs.mount
After=var-lib-nfs-rpc_pipefs.mount
After=network.target
PartOf=nfs-server.service
PartOf=nfs-utils.service
Wants=nfs-config.service
After=nfs-config.service

[Service]
EnvironmentFile=-/run/sysconfig/nfs-utils
Type=forking
ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS

which does not work. rpc.idmapd is still running when the nfs-server service is stopped.
The man page clearly says: When systemd stops or restarts the units listed here, the action is propagated to this unit. What am I missing about the PartOf option?

I suspect it's required by something else, perhaps? Does its PID change (i.e. is it restarted)? Or did you just modify the units on disk but forget to run systemctl daemon-reload to re-read them?

Col -- Colin Guthrie gmane(at)colin.guthr.ie http://colin.guthr.ie/ Day Job: Tribalogic Limited http://www.tribalogic.net/ Open Source: Mageia Contributor http://www.mageia.org/ PulseAudio Hacker http://www.pulseaudio.org/ Trac Hacker http://trac.edgewall.org/ ___ systemd-devel mailing list systemd-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/systemd-devel
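For concreteness, here is a sketch of the BindsTo= variant Colin suggests earlier in this message, applied to the rpc-idmapd.service shown in the thread. This is my reading of the advice, not a tested unit; the only change is replacing the two PartOf= lines with a single BindsTo=:

```ini
# rpc-idmapd.service (sketch: PartOf= lines replaced by BindsTo=, untested)
[Unit]
Description=NFSv4 ID-name mapping service
Requires=var-lib-nfs-rpc_pipefs.mount
After=var-lib-nfs-rpc_pipefs.mount
After=network.target
BindsTo=nfs-server.service
Wants=nfs-config.service
After=nfs-config.service

[Service]
EnvironmentFile=-/run/sysconfig/nfs-utils
Type=forking
ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS
```

With BindsTo=, rpc-idmapd's state follows nfs-server.service in both directions of its lifecycle (started with it, stopped with it), which is the behaviour Steve was trying to get from PartOf=.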
Re: [systemd-devel] Running system services required for certain filesystems
Steve Dickson wrote on 12/01/15 20:31: Hello On 01/12/2015 05:37 AM, Colin Guthrie wrote: Hi, On a related note to my previous message (subject systemctl status not showing still running processes in inactive .mount unit cgroups (NFS specifically)), when mount.nfs runs to mount NFS filesystems, it shells out to /usr/sbin/start-statd, which in turn calls systemctl to start the rpc.statd service. This feels ugly.

Why? This is why rpc.statd does not need to be started on the client by default any more.

Yes, but it requires shelling out to a bash script to do some modification of a pre-calculated set of transactions, dynamically adjusting the systemd jobs. It feels very un-systemd to use systemctl during the initial transaction of start jobs to modify things.

Generally speaking, you also have to be really, really careful doing such things, as they can be called at unexpected times and result in deadlocks (protected by a timeout, thankfully) due to ordering cycles. E.g. say something in early boot that has Before=rpc-statd.service is run, and that somehow triggers, say, an automount, which in turn calls mount.nfs, which in turn calls systemctl start rpc-statd.service; then that systemctl job will block, because the job it creates is waiting for the job with Before=rpc-statd.service in it to complete. So calling systemctl during the initial transaction is really something to strongly discourage IMO. Ideally all information would be available after all the generators are run, so the initial transaction can be calculated right at the beginning without any dynamic modification in the middle.

We have a sync point for this in the form of remote-fs-pre.target, but for some reason upstream nfs-utils people still deem that /usr/sbin/start-statd is a required component.

I'm not seeing how remote-fs-pre.target is a sync point. It's only used by the nfs-client.target...
Well, its original intention was as a sync point, but it doesn't seem to be getting used that way now (and there are some good reasons, which I'll cover in a reply to Andrei).

But it did get me thinking about how clean remote-fs-pre.target really is. We do need to make sure rpc.statd is running before any NFS filesystems are mounted, and relying on the blunt instrument of remote-fs-pre.target seems kinda wrong. It should be more on demand, e.g. when I start an NFS mount, it should be able to specify that the rpc.statd service is a prerequisite. So my question is, is there a cleaner way to have dependencies like this specified for particular FS types? With the goal being that before systemd will try and mount any NFS filesystems, it will make sure that nfs-lock.service (or statd.service or nfs-statd.service or whatever its name really should be) is running? I kinda want Requires=nfs-lock.service and After=nfs-lock.service definitions to go into all my *.mount units for any NFS filesystem, but in a way that means I don't have to actually specify this manually in my fstab.

Why spread out the pain? I think the sync point we have right now - mount.nfs calling start-statd - works and keeps everything in one place.

Shelling out to start-statd definitely isn't a sync point, and as I've outlined above, calling systemctl mid-transaction is really something we should avoid. I do like that it solves the case of calling the mount /mountpoint command manually as a sysadmin, and that it will start the necessary service, but I still think it's ugly if called via systemctl start /mountpoint - we should be able to handle this kind of dep without such shelling out.
Something like a pseudo service - systemd-fstype@nfs.service with Type=oneshot, RemainAfterExit=true and ExecStart=/usr/bin/true - that is run by systemd before it does its mounting, to act as a sync point (thus allowing nfs-lock.service to just put RequiredBy=systemd-fstype@nfs.service and Before=systemd-fstype@nfs.service in place, and all is well) - there shouldn't really be a strong need for any actual changes to systemd-fstype@.service (or any systemd-fstype@nfs.service.d drop-ins) here, as it can all be specified the other way around in nfs-lock.service.

WOW.. Granted I'm no systemd expert... what did you say?? :-) My apologies, but I'm unable to parse the above paragraph at all!

In the end, I'm all for making things go smoother, but I've never been a fan of fixing something that's not broken...

To be fair, I could probably word it better, and (being totally fair) I'm suggesting a similar abuse of a .service unit to what the current nfs-utils.service does (which we really shouldn't do!)

But ultimately, what the above would do is allow all the deps for the initial transaction to be pre-calculated right at the start, without the need to shell out to something that calls systemctl start rpc-statd.service. Sadly, we'd still need a way to know that this was happening (i.e. being called from within systemd, not via mount ... directly), as a problem scenario would be that the machine was booted without any NFS
[systemd-devel] make install busted in git
Happens with top-of-line 720e0be0f00f4a7fee808d1cf60db43970900588. == Summary == + make install DESTDIR=/home/abuild/rpmbuild/BUILDROOT/systemd-218a-0.x86_64 make --no-print-directory install-recursive Making install in . [...] XSLT man/busctl.1 [...] /usr/bin/mkdir -p '/home/abuild/rpmbuild/BUILDROOT/systemd-218a-0.x86_64/usr/share/man/man1' /usr/bin/install -c -m 644 ./man/busctl.1 [...] '/home/abuild/rpmbuild/BUILDROOT/systemd-218a-0.x86_64/usr/share/man/man1' /usr/bin/install: cannot stat './man/busctl.1': No such file or directory The issue is that the manpage is really located at man/man1/busctl.1. As this does not happen with systemd-218, I would suspect that something inside the systemd tree has changed. What is different between v218 and 720e0be0 is that I am adding the extra step of running autogen.sh with the git snapshot - it seems unlikely to cause the installation problem, but maybe it is? == Full output == + make install DESTDIR=/home/abuild/rpmbuild/BUILDROOT/systemd-218a-0.x86_64 make --no-print-directory install-recursive Making install in . 
XSLT man/bootup.7 XSLT man/busctl.1 XSLT man/daemon.7 XSLT man/file-hierarchy.7 XSLT man/halt.8 XSLT man/hostname.5 XSLT man/journalctl.1 XSLT man/journald.conf.5 XSLT man/kernel-command-line.7 XSLT man/kernel-install.8 XSLT man/locale.conf.5 XSLT man/localtime.5 XSLT man/machine-id.5 XSLT man/machine-info.5 XSLT man/os-release.5 XSLT man/sd-daemon.3 XSLT man/sd-id128.3 XSLT man/sd-journal.3 XSLT man/sd_booted.3 XSLT man/sd_id128_get_machine.3 XSLT man/sd_id128_randomize.3 XSLT man/sd_id128_to_string.3 XSLT man/sd_is_fifo.3 XSLT man/sd_journal_add_match.3 XSLT man/sd_journal_get_catalog.3 XSLT man/sd_journal_get_cursor.3 XSLT man/sd_journal_get_cutoff_realtime_usec.3 XSLT man/sd_journal_get_data.3 XSLT man/sd_journal_get_fd.3 XSLT man/sd_journal_get_realtime_usec.3 XSLT man/sd_journal_get_usage.3 XSLT man/sd_journal_next.3 XSLT man/sd_journal_open.3 XSLT man/sd_journal_print.3 XSLT man/sd_journal_query_unique.3 XSLT man/sd_journal_seek_head.3 XSLT man/sd_journal_stream_fd.3 XSLT man/sd_listen_fds.3 XSLT man/sd_machine_get_class.3 XSLT man/sd_notify.3 XSLT man/sd_watchdog_enabled.3 XSLT man/shutdown.8 XSLT man/sysctl.d.5 XSLT man/systemctl.1 XSLT man/systemd-activate.8 XSLT man/systemd-analyze.1 XSLT man/systemd-ask-password-console.service.8 XSLT man/systemd-ask-password.1 XSLT man/systemd-cat.1 XSLT man/systemd-cgls.1 XSLT man/systemd-cgtop.1 XSLT man/systemd-debug-generator.8 XSLT man/systemd-delta.1 XSLT man/systemd-detect-virt.1 XSLT man/systemd-efi-boot-generator.8 XSLT man/systemd-escape.1 XSLT man/systemd-firstboot.1 XSLT man/systemd-fsck@.service.8 XSLT man/systemd-fstab-generator.8 XSLT man/systemd-getty-generator.8 XSLT man/systemd-gpt-auto-generator.8 XSLT man/systemd-halt.service.8 XSLT man/systemd-hibernate-resume-generator.8 XSLT man/systemd-hibernate-resume@.service.8 XSLT man/systemd-inhibit.1 XSLT man/systemd-initctl.service.8 XSLT man/systemd-journald.service.8 XSLT man/systemd-machine-id-commit.1 XSLT man/systemd-machine-id-commit.service.8 XSLT 
man/systemd-machine-id-setup.1 XSLT man/systemd-notify.1 XSLT man/systemd-nspawn.1 XSLT man/systemd-path.1 XSLT man/systemd-remount-fs.service.8 XSLT man/systemd-run.1 XSLT man/systemd-shutdownd.service.8 XSLT man/systemd-sleep.conf.5 XSLT man/systemd-socket-proxyd.8 XSLT man/systemd-suspend.service.8 XSLT man/systemd-sysctl.service.8 XSLT man/systemd-system-update-generator.8 XSLT man/systemd-system.conf.5 XSLT man/systemd-sysusers.8 XSLT man/systemd-tmpfiles.8 XSLT man/systemd-tty-ask-password-agent.1 XSLT man/systemd-udevd.service.8 XSLT man/systemd-update-done.service.8 XSLT man/systemd.1 XSLT man/systemd.automount.5 XSLT man/systemd.device.5 XSLT man/systemd.exec.5 XSLT man/systemd.journal-fields.7 XSLT man/systemd.kill.5 XSLT man/systemd.link.5 XSLT man/systemd.mount.5 XSLT man/systemd.path.5 XSLT man/systemd.preset.5 XSLT man/systemd.resource-control.5 XSLT man/systemd.scope.5 XSLT man/systemd.service.5 XSLT man/systemd.slice.5 XSLT man/systemd.snapshot.5 XSLT man/systemd.socket.5 XSLT man/systemd.special.7 XSLT man/systemd.swap.5 XSLT man/systemd.target.5 XSLT man/systemd.time.7 XSLT man/systemd.timer.5 XSLT man/systemd.unit.5 XSLT man/sysusers.d.5 XSLT man/telinit.8 XSLT man/tmpfiles.d.5 XSLT
Re: [systemd-devel] networkd-218 seems to ignore .link files
On Monday 2015-01-12 18:29, Tom Gundersen wrote: In systemd-218, I have configured the following testcase: /etc/systemd/network# ls -al total 20 drwxr-xr-x 2 root root 4096 Jan 11 18:14 . drwxr-xr-x 5 root root 4096 Jan 11 16:23 .. -rw-r--r-- 1 root root 96 Jan 11 18:14 99a-ether.link Hm, isn't this just a problem of 99a-ether.link being ordered after 99-default.link? Well, the manpage states: All link files are collectively sorted and processed in lexical order, with that, I would assume that 99a, being processed after 99, would override 99. It works for me when naming it 98-ether.link instead. Not in my case. I have a feeling networkd won't touch [08:00:27:0a:c5:b2]'s interface name because it has already been named by udev to enp0s3 before networkd got a chance to run. Could that be it? ___ systemd-devel mailing list systemd-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/systemd-devel
Re: [systemd-devel] Running system services required for certain filesystems
Andrei Borzenkov wrote on 12/01/15 18:31: On Mon, 12 Jan 2015 11:34:37 +, Colin Guthrie gm...@colin.guthr.ie wrote: It seems that it's not activated here any longer, despite me having remote-fs.target enabled, and, as remote-fs-pre.target is static, I cannot specifically enable it. It's my understanding that it should be automatically started. ... What else is meant to pull in remote-fs-pre.target? The only mention in the code is in src/core/mount.c, but that's just for an After= dep, not a Requires= one.

remote-fs-pre.target is intended to be pulled in by a consumer; as per the man page, a unit that wants to be ordered before all remote mounts pulls it in via a Wants= type dependency. Maybe some other service has changed and lost this dependency?

Ahh indeed, I must have missed that change back when it was tweaked around a bit a year or two back :s

Although there is probably no harm in always starting it.

Hmmm, yeah, but I now see a problem. I have fstab entries with nofail. This means that they do not prevent remote-fs.target from starting before they are mounted. This then allows systemd-user-sessions to start before they are mounted (it has After=remote-fs.target). If we enable remote-fs-pre.target, remote-fs.target is delayed until after it (it has After=remote-fs-pre.target). Now, remote-fs-pre.target won't start until the network is up, so the net result is that even with only nofail NFS fstab definitions, I don't get my graphical login until after the network is up!

So darn! This breaks a few of my generally desired behaviours and requirements! I'm wondering if the systemd-user-sessions After=remote-fs.target directive should be dropped and perhaps moved over to the fstab generator instead? That would probably keep me happy :D One for the discussion board at the sprint on the 30th, maybe.
Col -- Colin Guthrie gmane(at)colin.guthr.ie http://colin.guthr.ie/ Day Job: Tribalogic Limited http://www.tribalogic.net/ Open Source: Mageia Contributor http://www.mageia.org/ PulseAudio Hacker http://www.pulseaudio.org/ Trac Hacker http://trac.edgewall.org/ ___ systemd-devel mailing list systemd-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/systemd-devel
Re: [systemd-devel] networkd-218 seems to ignore .link files
On Mon, Jan 12, 2015 at 11:46 PM, Jan Engelhardt jeng...@inai.de wrote: On Monday 2015-01-12 18:29, Tom Gundersen wrote: In systemd-218, I have configured the following testcase: /etc/systemd/network# ls -al total 20 drwxr-xr-x 2 root root 4096 Jan 11 18:14 . drwxr-xr-x 5 root root 4096 Jan 11 16:23 .. -rw-r--r-- 1 root root 96 Jan 11 18:14 99a-ether.link Hm, isn't this just a problem of 99a-ether.link being ordered after 99-default.link? Well, the manpage states: All link files are collectively sorted and processed in lexical order, with that, I would assume that 99a, being processed after 99, would override 99. So the ordering is correct, but not its semantics. Later in the same manpage it is explained: The first (in lexical order) of the link files that matches a given device is applied. In other words, since '99-' is ordered before '99a' only '99-' is applied and '99a' is ignored. It works for me when naming it 98-ether.link instead. Not in my case. I have a feeling networkd won't touch [08:00:27:0a:c5:b2]'s interface name because it has already been named by udev to enp0s3 before networkd got a chance to run. Could that be it? Ah, link files are applied by udev (and udev only). They are meant as a way to set sane network-agnostic values when the device first appears. If you think some of the documentation is unclear on this point, please let me know. Some of the .link settings may make sense to tweak also per-network, in which case we support changing them in .network files. This does not apply to interface names though, as changing that at run-time tends to cause lots of problems. We therefore make sure that the interface name is only ever changed by udev, and only before the presence of the device is announced via libudev to the rest of userspace. Cheers, Tom ___ systemd-devel mailing list systemd-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/systemd-devel
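Putting Tom's first-match rule into practice: for Jan's rename to take effect, the file simply needs a name that sorts before 99-default.link. A sketch reusing the settings from this thread (the 98- prefix is the naming fix mentioned earlier in the thread):

```ini
# /etc/systemd/network/98-ether.link
# Must sort before 99-default.link: udev applies only the FIRST
# .link file (in lexical order) that matches a given device.
[Match]
MACAddress=08:00:27:0a:c5:b2

[Link]
Description=ethernet_link
Alias=ether0
Name=ether0
```

Because only the first match is applied, a file sorting after 99-default.link (such as 99a-ether.link) is never consulted for a device that 99-default.link already matches.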
Re: [systemd-devel] systemctl status not showing still running processes in inactive .mount unit cgroups (NFS specifically)
On Mon, 12 Jan 2015 10:34:07 +, Colin Guthrie co...@mageia.org wrote:
> Anyway, assuming the process is in the .mount unit cgroup, should systemd detect the umount and kill the processes accordingly, and if

It does not do it currently. It only starts killing if (u)mount times out. Otherwise, if umount is successful, it goes to the stopped state immediately. Although it probably should, even for the sake of user-space helpers.

> not, should calling systemctl status on .mount units show processes even if it's in an inactive state?

I believe something very similar (not only for mount units) was reported recently, but I do not have a reference handy. I mean, processes belonging to a stopped unit (e.g. with KillMode=none) are not displayed.

> This is with 217 with a few cherry-picks on top so might have been addressed by now.
>
> Cheers
>
> Col
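Until `systemctl status` shows such leftover processes, the unit's cgroup can be inspected directly. A minimal sketch, assuming the cgroup v1 "systemd" hierarchy of that era; the mount unit name and cgroupfs path are illustrative, not taken from the thread:

```shell
# Sketch: list processes still sitting in a (possibly inactive) mount
# unit's cgroup, bypassing `systemctl status`.

list_unit_pids() {
    # Print the PIDs recorded in a cgroup.procs file, one per line.
    # Prints nothing (and returns non-zero) if the file is unreadable.
    procs_file=$1
    [ -r "$procs_file" ] && cat "$procs_file"
}

# Hypothetical unit and path; adjust for your system.
cg=/sys/fs/cgroup/systemd/system.slice/mnt-nfs.mount

list_unit_pids "$cg/cgroup.procs" | while read -r pid; do
    ps -o pid=,comm= -p "$pid"
done
```

On a machine without that cgroup the script simply prints nothing, so it is safe to run speculatively.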
Re: [systemd-devel] networkd-218 seems to ignore .link files
Hi Jan,

On Mon, Jan 12, 2015 at 12:20 AM, Jan Engelhardt jeng...@inai.de wrote:
> In systemd-218, I have configured the following testcase:
>
> /etc/systemd/network# ls -al
> total 20
> drwxr-xr-x 2 root root 4096 Jan 11 18:14 .
> drwxr-xr-x 5 root root 4096 Jan 11 16:23 ..
> -rw-r--r-- 1 root root 96 Jan 11 18:14 99a-ether.link

Hm, isn't this just a problem of 99a-ether.link being ordered after 99-default.link? It works for me when naming it 98-ether.link instead.

> -rw-r--r-- 1 root root 241 Jan 11 18:12 brd0.network
> -rw-r--r-- 1 root root 56 Jan 11 18:12 brg0.netdev
>
> # cat 99a-ether.link
> [Match]
> MACAddress=08:00:27:0a:c5:b2
> [Link]
> Description=ethernet_link
> Alias=ether0
> Name=ether0
>
> # systemctl status -l systemd-networkd
> ● systemd-networkd.service - Network Service
>    Loaded: loaded (/usr/lib/systemd/system/systemd-networkd.service; enabled; vendor preset: enabled)
>    Active: active (running) since Sun 2015-01-11 18:14:59 EST; 39s ago
>      Docs: man:systemd-networkd.service(8)
>  Main PID: 417 (systemd-network)
>    Status: "Processing requests..."
>    CGroup: /system.slice/systemd-networkd.service
>            └─417 /usr/lib/systemd/systemd-networkd
>
> Jan 11 18:14:59 jng-sfac systemd-networkd[417]: brg0: netdev ready
> Jan 11 18:14:59 jng-sfac systemd[1]: Started Network Service.
>
> Why would it be ignoring the link definition file for ether0?
>
> If I invoke `rmmod e1000; modprobe e1000`, systemctl status has one extra line to say:
> Jan 11 18:17:52 jng-sfac systemd-networkd[417]: eth0: renamed to enp0s3
>
> The L2 address is certainly correct:
> 2: enp0s3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
>     link/ether 08:00:27:0a:c5:b2 brd ff:ff:ff:ff:ff:ff

I'm not able to reproduce this with current git.
Re: [systemd-devel] [PATCH] Add usernames as arguments to tmpfiles ignore directives.
On Mon, Jan 12, 2015 at 03:11:08PM +0100, Thomas Blume wrote:
> On Thursday 2015-01-08 21:29, Zbigniew Jędrzejewski-Szmek wrote:
>> On Thu, Jan 08, 2015 at 01:37:57PM +0100, Thomas Blume wrote:
>>> Currently, systemd can only ignore files specified by their path during tmpdir cleanup. This patch adds the feature to give usernames as an argument. During cleanup the file ownership is checked, and files that match the specified usernames are ignored. For example, you could give:
>>>
>>> X /tmp/* - - - - testuser3,testuser2
>> I think the patch is useful, but the syntax is wrong. We already have a field for the user name - it is the 4th column. The advantage is that it would be naturally possible to extend it to groups.
> I was looking at the UID column, but it seems that only one username can be passed that way. For a list of usernames, I'd have to tweak the get_user_creds function, which seemed too intrusive to me. In addition, i->uid_set is set when a UID is present, and I didn't want to have some undesired side effects from this.

I started refactoring the code because I want to add ACL-setting functionality. I tried to add the new functionality to the current code, but it was very messy. I'm maybe halfway done, so you can expect an update to this code within a week.

One of the changes I'm doing is to allow multiple Items for the same path. This should make it very easy to support multiple UIDs (and GIDs) by simply parsing multiple lines, each specifying a single UID.

Zbyszek
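To make the two syntaxes concrete: the patch packs several users into the trailing argument field, while Zbigniew's suggestion reuses the existing user column with one line per user. The second form is hypothetical — it depends on the multiple-Items-per-path refactoring he describes and is not valid in any released systemd:

```
# tmpfiles.d columns: Type  Path  Mode  User  Group  Age  Argument

# Proposed in the patch: owner list as the (new) trailing argument.
X /tmp/* - - - - testuser3,testuser2

# Suggested alternative (hypothetical, pending multiple Items per path):
# one line per user, using the existing 4th (user) column.
X /tmp/* - testuser2 - -
X /tmp/* - testuser3 - -
```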