Re: [systemd-devel] timesyncd on systems without battery
Hallo, thank you for your advice. Option 1 seems to work without problems now.

DT

On Monday, 4 January 2021 16:36:16 CET, Lennart Poettering wrote:
> On So, 03.01.21 10:39, Dan Tihelka (dtihe...@gmail.com) wrote:
> > Hello,
> > I run systemd on a NAS without an internal clock-holding battery, so I
> > think that systemd-timesyncd sets the time to the last known value
> > after restart and syncs it from the network when online. Is that right?
>
> Yes.
>
> > Now, I have a shutdown timer unit which powers the NAS off at the given
> > time. However, sometimes (in about 50% of cases), when the device is
> > powered on, it switches off immediately. When switched on again, it
> > boots as expected. Why does it do so? Because the clock is incorrectly
> > set?
> >
> > The question is: is there a way of fixing the issue? I have tried to
> > add a sleep to skip to the next minute, but unsuccessfully.
>
> Two options:
>
> 1. Consider explicitly ordering your timer unit after
>    "time-set.target". (This will be done implicitly starting with the
>    upcoming release of systemd, see
>    fe934b42e480473afba8a29a4a0d3d0e789543ac.) This effectively means
>    timesyncd has to finish initialization before the timer is started,
>    and that in turn means the clock is roughly monotonic for all
>    calculations of the calendar time of the timer unit.
>
> 2. Consider enabling "systemd-time-wait-sync.service", which is a
>    small service that blocks until the clock is synchronized.
>    Calendar-based timer units are implicitly ordered after it, once
>    enabled. This means the timers will only start once the clock is
>    synced to some network reference clock. This is a much stronger
>    option, but means the boot process will be delayed based on network
>    availability.
>
> Coincidentally we documented this recently in more detail, see git
> commit b149d230ea23c14bac2dfe79c47e58782623200f which also will be in
> the next release.
>
> Lennart
>
> --
> Lennart Poettering, Berlin

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel
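Option 1 from the reply above can be expressed as a drop-in for the timer unit. This is a sketch only: the thread never names the poster's timer unit, so "poweroff.timer" is an assumed name for illustration.

```ini
# /etc/systemd/system/poweroff.timer.d/time-set.conf
# ("poweroff.timer" is an assumed unit name; adjust to the real one.)
# Delay the timer until timesyncd has restored/initialized the clock.
[Unit]
After=time-set.target
```

After creating the drop-in, a `systemctl daemon-reload` is needed for systemd to pick it up.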
[systemd-devel] timesyncd on systems without battery
Hello,
I run systemd on a NAS without an internal clock-holding battery, so I think that systemd-timesyncd sets the time to the last known value after restart and syncs it from the network when online. Is that right?

Now, I have a shutdown timer unit which powers the NAS off at the given time. However, sometimes (in about 50% of cases), when the device is powered on, it switches off immediately. When switched on again, it boots as expected.

The question is: is there a way of fixing the issue? I have tried to add a sleep to skip to the next minute, but unsuccessfully.

Thank you, DT

Here are the unit files.

timer:

[Unit]
Description=Poweroff the system on scheduled time

[Timer]
# Power off at the given time
OnCalendar=Mon,Tue,Wed,Thu,Fri *-*-* 01:10:00
OnCalendar=Sat,Sun *-*-* 01:55:00

[Install]
WantedBy=timers.target

---

service:

[Unit]
Description=Poweroff the system

[Service]
Type=oneshot
ExecStart=/bin/sleep 80
ExecStart=/usr/bin/systemctl poweroff
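The stronger option 2 (waiting for actual network synchronization) can be sketched as a drop-in as well. "poweroff.timer" is an assumed unit name, and the service itself must be enabled separately:

```ini
# /etc/systemd/system/poweroff.timer.d/time-sync.conf
# ("poweroff.timer" is an assumed unit name for illustration.)
# time-sync.target is only reached once systemd-time-wait-sync.service
# has confirmed the clock is synchronized to a network reference.
# Requires: systemctl enable systemd-time-wait-sync.service
[Unit]
After=time-sync.target
```

The trade-off, as noted in the reply, is that the timer is then delayed until the network is actually reachable.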
Re: [systemd-devel] systemd-tmpfiles for the user instance of systemd
On Tuesday 28 of July 2015 03:31:08 Lennart Poettering wrote:
> On Sat, 04.07.15 13:23, Tomasz Torcz (to...@pipebreaker.pl) wrote:
> > On Fri, Jul 03, 2015 at 08:31:42PM +0200, Lennart Poettering wrote:
> > > On Wed, 01.07.15 12:35, Daniel Tihelka (dtihe...@gmail.com) wrote:
> > > > Hello,
> > > > does anyone have experience with using systemd-tmpfiles for the
> > > > user instance of systemd?
> > >
> > > This is currently not nicely supported. And I am not sure it should
> > > be. Note that much of what tmpfiles supports is only necessary for:
> > > - aging (automatic time-based clean-up of files). Doesn't really
> > >   apply to user sessions, since /tmp and /var/tmp are already
> > >   cleaned up by the system instance of tmpfiles
> >
> > /var and /tmp are not the only files being aged. I'm using tmpfiles
> > for removing
> > – files in ~/Downloads/* older than 1 year
> > – emails in ~/Mail/.spam/cur/* older than 1 month
> > Out of necessity I have cleanup configured in the system instance for
> > my specific user only.
>
> My recommendation would be to clean ~/Downloads up from the root
> tmpfiles instance.

Well, it does not work for encfs (FUSE-based) encrypted directories (root does not have to have access to them). An option is to link Downloads (or any other directory) to an unencrypted dir, but that somehow violates the purpose of encryption ... And of course, one must have root access to configure it.

> The simple issue is that aging for dirs cannot work, since iterating
> through them will bump the atime, which defeats the aging. You hence
> break the aging by aging... This can only work if you have root privs,
> since then we can turn off the atime bumping...

Yes, it is documented. But the aging still works for files, doesn't it? Could it be solved by (optional) deletion of those dirs which stay empty after file cleaning?

Dan

> Lennart
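The system-instance cleanup Tomasz describes can be sketched as a tmpfiles.d fragment. The file name and the user's home path are hypothetical; the ages mirror the ones mentioned above (tmpfiles.d ages use suffixes like d and w, so "1 year" becomes 52w):

```ini
# /etc/tmpfiles.d/home-cleanup.conf  (hypothetical file name/paths)
# Type "e" ages the contents of existing directories without creating
# them. Run by the system instance, so the atime caveat for
# subdirectories mentioned above can be worked around by root.
e /home/tomasz/Downloads      - - - 52w
e /home/tomasz/Mail/.spam/cur - - - 4w
```

As the reply notes, this only works where root can actually read the directories, which is exactly what fails for FUSE-encrypted homes.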
[systemd-devel] Mounting cifs per-user
Hello all,
I have a rather generic idea/question which is probably not solvable yet. I, as an ordinary user, would like to mount a cifs share (but it can generally be extended to any other dynamic media) on demand at a given path (preferably /run/mount/UID/mycifs/). Currently, mount -t cifs ... must be called as root, although I can specify uid=,gid= to set the ownership of the mountpoint. Therefore, the mount+automount units must belong to the system instance, not to the user instance of systemd. Is that correct?

I am able to create a mount unit mnt-storage_user.mount:

[Unit]
Wants=network.target
After=network.target

[Mount]
What=//addr/user
Where=/mnt/storage_user
Type=cifs

and mnt-shared.automount:

[Install]
WantedBy=multi-user.target

Now, I would like to make the mount unit generic, something like:

[Mount]
What=//addr/%u
Where=/run/mount/%U/storage_%u

however, I am not sure if %* specifiers are allowed here, and of course I do not know how to name the unit file (run-mount-%U-storage_%u.mount does not seem appropriate). Well, even when I replaced all %* specifiers with the defined names, it still refused to mount the directory:

-- Subject: Unit run-mount-1000-storage_dtihelka.mount has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/catalog/be02cf6855d2428ba40df7e9d022f03d
--
-- Unit run-mount-1000-storage_dtihelka.mount has failed.

BTW, the documentation link was invalid ... Did I miss something?

On a more hypothetical note, my vision of working with pluggable or network storage media was the following:
* in my DM (KDE, Gnome, ...) I can define (somehow, not important here) that I want a FOO share mounted at a BAR directory (with the guarantee that no other user will be able to read the data there).
* this requirement will be passed to a mounting manager, whether that is systemd itself or any other daemon.
* when I access the BAR mountpoint, I will be asked for the password (if required) and FOO will be mounted. The password could, of course, be managed by a password manager (e.g. kwallet in KDE).

It seems easy, but surely it is not that easy to implement, since it expects the cooperation of many components. But I would like to ask whether the systemd infrastructure allows something like this to be hooked into, or whether it would have to be solved by an independent (system-level?) daemon (maybe feeding systemd with the mount requests).

I hope I haven't got it totally wrong. Please correct me if I have overlooked something ...

Thank you very much, best regards
Dan T.
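A system-instance version of the generic mount could look like the sketch below. Everything here is inferred from the mail, not confirmed by it: //addr/... and the uid/gid values are the poster's placeholders, and the credentials file path is an assumption. Note that a mount unit's file name must be the escaped form of its Where= path (slashes become dashes), which is why run-mount-1000-storage_dtihelka.mount is in fact the expected name:

```ini
# /etc/systemd/system/run-mount-1000-storage_dtihelka.mount  (sketch)
[Unit]
Wants=network-online.target
After=network-online.target

[Mount]
What=//addr/dtihelka
Where=/run/mount/1000/storage_dtihelka
Type=cifs
# uid=/gid= give the mounting user ownership; credentials= keeps the
# password out of the unit file (path is an assumption).
Options=uid=1000,gid=1000,credentials=/etc/cifs-credentials

[Install]
WantedBy=multi-user.target
```

Mount units cannot be templated, so one such unit per user/share would be needed; the failure in the journal excerpt above would have to be diagnosed from the full log.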
[systemd-devel] systemd + ssh-agent
Hello,
from time to time I must connect through ssh to my work desktop and do some synchronization, which requires ssh key-based authentication (so ssh from home to the work desktop, followed by ssh from the desktop to another server, e.g. svn). What I was faced with is the inability to use ssh-agent when logged in through ssh, since SSH_AUTH_SOCK is not defined, and thus I must re-type the password several times, which is really annoying ...

The problem is that ssh-agent must be started in the user session at a place where the SSH_AUTH_SOCK env variable is exported to all child processes (in the Arch case it is the /etc/kde/env/ssh-agent-startup.sh script called somewhere during KDE start). And of course, the ssh daemon is not started as part of this startup, so the env variable is not defined ...

So I have made the following hack:

1) define SSH_AUTH_SOCK DEFAULT=\${HOME}/.ssh/ssh-agent in the /etc/security/pam_env.conf file

2) modify /etc/kde/startup/agent-startup.sh to start ssh-agent like:

if [ -x /usr/bin/ssh-agent ]; then
    /usr/bin/ssh-agent -s -a $(eval echo $SSH_AUTH_SOCK)
fi

3) modify ~/.bashrc to resolve the variable:

export SSH_AUTH_SOCK=$(eval echo $SSH_AUTH_SOCK)

In this way, ssh-agent is started during KDE startup, listening on the required socket, and the socket is defined for ssh sessions as well, thanks to the pam_env module. I think that now it should be rather easy to create a user-specific systemd service joined with socket-based activation to start ssh-agent automatically on demand, no matter whether invoked from an ssh session or from the desktop itself (however, I did not try it yet ...).

BUT. Although this solution is working, it is not very easy to configure. Also, there is a danger that if the ssh-agent startup fails, the SSH_AUTH_SOCK env variable will be defined anyway - this is also not very nice ... And of course, the same (or something similar) is valid for gpg-agent ...

So I would like to ask whether this situation has been thought about and whether there is an easier and more straightforward way to achieve it. I think the main problem is in passing env variables (or any other kind of settings?) through various services of the same user which do not share the same parent (except init itself). It would be great to hear that you have an idea for making this work in an easy-to-configure and reliable way, while allowing use of all the cool systemd abilities :-).

Thank you very much,
Dan T.
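The user-service idea hinted at above can be sketched as follows. This is not from the mail itself; it assumes a systemd user instance is running and that the shell exports SSH_AUTH_SOCK to the matching path:

```ini
# ~/.config/systemd/user/ssh-agent.service  (sketch)
[Unit]
Description=OpenSSH key agent

[Service]
Type=simple
# %t expands to $XDG_RUNTIME_DIR, so the socket lives in a per-user,
# tmpfs-backed directory instead of ~/.ssh.
Environment=SSH_AUTH_SOCK=%t/ssh-agent.socket
# -D keeps the agent in the foreground; -a binds it to a fixed socket.
ExecStart=/usr/bin/ssh-agent -D -a $SSH_AUTH_SOCK

[Install]
WantedBy=default.target
```

Login shells (including ssh sessions) still need `export SSH_AUTH_SOCK="$XDG_RUNTIME_DIR/ssh-agent.socket"`, e.g. from ~/.bashrc, which plays the role of the pam_env step in the hack above.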
Re: [systemd-devel] Starting/stopping service on net connection
Hello,
thank you for such a long and detailed answer. I do agree with you in general.

Of course, there is an intersection of two worlds. On one side there is an init daemon responsible for starting/stopping services. On the other side there is a network management system/daemon handling the net connection. And just as the init daemon does not have to handle network status (as you explained), a network manager does not have to start services ...

The way (well, you may have a better solution, probably ...) is in communication. I can imagine that the network manager would emit dbus broadcast messages about network up/down, no matter what that particularly means (there may even be several different kinds of signals). If the init daemon can handle the dbus signals, users or admins can write services as reactions to them. And here I don't mean any standardized set of signals which both systems have to implement. It is enough to allow StartOnDBus=user-write-what-require, StopOnDBus=user-write-what-require and similar stuff, similarly to socket activation or the automounting capability.

Do you think that this is the way, or am I completely mistaken?

Thank you,
Dan

P.S. I think it could possibly be achieved through files as well, using systemd.path (although that is not as elegant). I am not sure about stopping the service, however ...

On Monday 15 of October 2012 17:51:18 Lennart Poettering wrote:
> On Sun, 14.10.12 16:15, Dan Tihelka (dtihe...@gmail.com) wrote:
> > Hallo,
> > just a short question, sorry if it seems stupid, I am just curious.
> > And google does not provide satisfactory answers (or I didn't ask the
> > right question). So: is there a robust and reliable way of
> > starting/stopping a service when a net connection is
> > established/lost? And I mean also the case where a cable is
> > disconnected (or wifi signal lost), which should be detected by
> > NetworkManager (or an alternative daemon).
>
> We do not offer any special support for something like this in systemd
> itself.
>
> This has various reasons:
>
> a) In general we can only recommend updating networking daemons that
>    assume a fixed and continuous network link to exist, so that they
>    handle changing networks better. Today's networks aren't like they
>    used to be. Network links break and come back online, and
>    configuration changes all the time. This is true for client
>    systems, but also very much for servers. Software that assumes
>    networking was static doesn't reflect how things work these days.
>    The kernel has offered APIs for subscribing to network
>    configuration changes for a long time (rtnetlink), and network
>    services really should make use of this, and many already do (for
>    example bind has been subscribing to netlink changes for a long
>    long time).
>
> b) There are varying definitions of what "network is up" means. Some
>    interfaces might be required, others not. Some people probably
>    assume that "up" means the iface is around, others believe "up"
>    means there was a link beat, others assume it means an IPv4 address
>    is configured, others assume it means an IPv6 address is
>    configured, even others IPv4 AND IPv6 are configured, even others
>    IPv4 OR IPv6 is configured, even others "a certain host is
>    reachable", and so on. In systemd we'd really like to avoid this
>    discussion for as long as we can, especially as we really want to
>    stay away from turning systemd into a monitoring system.
>
> c) We generally believe that system boot-up should not be affected by
>    whether the network is around or not. Your system should not fail
>    to boot or become slow at boot just because somebody tripped over a
>    network cable. Systems should be robust and simple, and make the
>    best of what is available, instead of requiring any resources to be
>    strictly available.
>
> So, for all of this and more, we currently offer no high-level hooks
> in systemd itself for waiting for the network. That said, we expect
> that admins/other software add meaning to network.target the way they
> want. network.target however is very simplistic and static, as it
> cannot react to network configuration changes. It is hence primarily
> useful for systems which are static enough, or which are file system
> clients of a specific network.
>
> Other than that, the various network management services tend to have
> call-out dirs that can be used to start/restart/stop services on
> certain events. It is relatively easy to stick scripts into those.
> However, be aware of the ordering cycles this might create: i.e. if a
> service foobar.service wants to be started after network.target, but a
> service that implements networking actually wants to start
> foobar.service at initialization and waits for this to complete, this
> will result in a deadlock (simply because foobar.service is both
> supposed to finish starting before AND after network.target that way,
> which is contradictory). A way to avoid such cycles is via --no-block
> on the systemctl cmdline.
>
> Or in short: you probably want to rely on hooks
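Besides the call-out dirs mentioned in the reply, one pattern that systemd itself does support (a sketch, with hypothetical service and interface names, and not something proposed in this thread) is binding a service to a network device unit, so the service is stopped when the device disappears:

```ini
# /etc/systemd/system/mydaemon.service.d/bind-eth0.conf  (sketch)
# "mydaemon.service" and "eth0" are hypothetical. BindsTo= stops the
# service when the .device unit goes away (cable pulled, iface removed);
# the device unit exists once udev has processed the interface.
[Unit]
BindsTo=sys-subsystem-net-devices-eth0.device
After=sys-subsystem-net-devices-eth0.device
```

This covers link presence only; it says nothing about addresses being configured or hosts being reachable, which is exactly the ambiguity point (b) above describes.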
[systemd-devel] Starting/stopping service on net connection
Hallo,
just a short question, sorry if it seems stupid, I am just curious. And google does not provide satisfactory answers (or I didn't ask the right question). So: is there a robust and reliable way of starting/stopping a service when a net connection is established/lost? And I mean also the case where a cable is disconnected (or wifi signal lost), which should be detected by NetworkManager (or an alternative daemon).

I have looked at the systemd.special man page and there are basically two targets, neither of them doing what I want:

network.target - in LSB, $network refers to low-level networking (ethernet card; may imply PCMCIA running)
nss-lookup.target - this I understand as simply "a network-managing daemon is started"; LSB also seems to refer to it as "daemons which may provide hostname resolution (if present) are running"

Another option is to wait for a dbus signal broadcast from a network management daemon, if there is such a signal. Although that is rather dependent on the network management system, it could be OK as well. Also, there is the possibility of checking for the existence of an IP address in a loop, but I am not sure if it would work *). Socket-based activation is also not very useful for services that are clients connecting to a remote server.

Well, to make it clear, I do not need this for any particular program now, I just want to know whether it is possible with systemd or not. Because there are two worlds meeting - a network management service responsible for a working network connection, and init, responsible for starting/stopping services under various conditions.

Thank you for your answers
Dan

*) I mean a loop doing a regular check; if an address is assigned to an interface, a service could be started by systemctl. Or more generally, a dbus signal could be broadcast, allowing more services to be started by it. Similarly, another signal could be sent when the address disappears, or a reset request when an interface is changed ... But it may require some amount of (non-trivial?) shell scripting ...
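The loop-plus-systemctl idea in the footnote can also be expressed with a path unit, which the follow-up mail in this thread mentions as a possibility. This is a sketch under assumptions: a network hook (NetworkManager dispatcher script or similar) is presumed to create/remove the flag file, and "mydaemon.service" is a hypothetical name:

```ini
# /etc/systemd/system/network-up.path  (sketch)
# Assumes some network hook touches /run/network-up when the link comes
# up. The path unit then starts mydaemon.service; stopping it again on
# link loss would still need the hook to call systemctl itself.
[Unit]
Description=Start mydaemon when the network-up flag appears

[Path]
PathExists=/run/network-up
Unit=mydaemon.service

[Install]
WantedBy=multi-user.target
```

As the footnote anticipates, this handles the "start" direction cleanly but not the "stop" direction.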
[systemd-devel] Best way of configuring service
Dear all,
I am porting the start scripts of the cruisecontrol system to a native systemd service configuration. It goes quite well; the only trouble I have is with the options configuration. Since cruisecontrol is written in Java, there are two ways of configuring the daemon (and CC uses them both): -D java properties, and classic command line options passed to the main jar.

I want to have the whole service highly configurable, so I have decided to define an Environment= item for each particular option. The result looks like:

Environment=cc.prop.item1=-Dcc.XXX=value1
Environment=cc.prop.item2=-Dcc.XXX=value1
Environment=cc.prop.item3=-Dcc.XXX=value1
Environment=cc.prop.item4=
Environment=cc.prop.item5=
Environment=cc.opt.item1=-opt1 val1
Environment=cc.opt.item2=-opt2 val2
Environment=cc.opt.item3=-opt3 val3
Environment=cc.opt.item4=
Environment=cc.opt.item5=
Environment=cc.install.dir=/usr/shared/cruisecontrol

Those left undefined allow the user to override the configuration, since they MUST appear in the ExecStart= item, which looks like:

ExecStart=/usr/bin/java -Dcc.install.dir=${cc.install.dir} -cp ${cc.install.dir}/dist $cc.prop.item1 $cc.prop.item2 ... $cc.prop.item5 -jar cruisecontrol-launcher.jar $cc.opt.item1 $cc.opt.item2 ... $cc.opt.item5

In this way, the user should be able to customize everything in the /etc/systemd/system/ configuration, simply by re-defining the appropriate Environment= definitions. However, there are two issues I would like to clarify:

1) As you can see, one variable is used in two places, particularly:

-Dcc.install.dir=${cc.install.dir} -cp ${cc.install.dir}/dist

I have observed that the env variable must be used in the form ${XXX} when it is part of a larger string (despite the form ${XXX} having its special meaning). So the question is: am I doing this right? Is the concept I have chosen correct?

2) The second issue is that (if I understand it right) everything I define by Environment= will appear in the ENV variables of the started process. Is that true? What if I don't want to export those variables to the service? I would guess that this is the reason why -D properties are used instead of raw ENV variables (which could otherwise simply be used as well). Also, I have found the inability to resolve env definitions within other env definitions quite limiting (although I suppose some others are complaining about this as well :-)) ).

So, I would like to ask you for your advice - what is the best way of handling those two issues? What do you recommend to daemon upstream developers? It seems to me that the purpose of ENV variables is slightly different from use as constant definitions for ExecStart= command simplification (even the shell does not export them unless explicitly told, does it?). Although I may be completely wrong.

Thank you, and I wish you many happy system admins (anyway, I see systemd as the best init system with the highest potential).

Best regards,
Dan
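One conventional alternative (a sketch, not something the thread prescribes; the file path and variable names are hypothetical) is to collect the options into an EnvironmentFile= that admins can edit without touching the unit. Note this does not address question 2: these variables still end up in the process environment.

```ini
# /etc/systemd/system/cruisecontrol.service.d/options.conf  (sketch)
# The leading "-" makes the environment file optional; the empty
# ExecStart= resets the original before redefining it in a drop-in.
[Service]
EnvironmentFile=-/etc/default/cruisecontrol
ExecStart=
ExecStart=/usr/bin/java $CC_JAVA_OPTS -jar cruisecontrol-launcher.jar $CC_OPTS
```

/etc/default/cruisecontrol would then hold lines like `CC_JAVA_OPTS=-Dcc.XXX=value1`. This also touches question 1: in ExecStart=, `$VAR` expands with word splitting (useful for option lists like the above), while `${VAR}` is substituted into a single argument, which is why only the latter works inside a larger string.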