Re: [systemd-devel] Should automount units for network filesystems be Before=local-fs.target?

2017-04-29 Thread Michael Chapman

On Sun, 30 Apr 2017, Lennart Poettering wrote:

On Sat, 29.04.17 22:04, Michael Chapman (m...@very.puzzling.org) wrote:


We can't really do that in the generic case, sorry. The distinction
between local-fs.target and remote-fs.target mostly exists because the
latter may rely on network management services which aren't available
in early boot. Specifically, NetworkManager traditionally runs during
late boot, not during early boot. Now, if we'd permit that an autofs
mount is already established in early boot but can only be fulfilled
in late boot, then we'd open the doors to a variety of deadlocks
triggered by early-boot services accessing these mounts at a point in
time where the backing mounts cannot be established yet.


That would imply these early boot services accessing paths that are going to
be over-mounted by the network filesystem later on... which is a tad
strange, but admittedly quite possible.


Well, I assumed that your services are of this kind. Because if they
aren't, and they do not access the autofs mounts, then you can simply
mount them in late boot without ill effect. Am I missing something?


You were talking about early-boot services. The services I'm dealing with 
are perfectly ordinary late-boot services. The problem is that the network 
filesystem automounts are perfectly ordinary late-boot units too, so 
there's no implicit ordering there: the services _sometimes_ (but not 
always!) start before the automounts, which means they sometimes see the 
directory underlying the mount point.


I thought that maybe having these automounts done earlier would solve this 
kind of problem in the general case, but I now understand the new problems 
that could arise with such a change.


Thanks for your help!


[systemd-devel] iwd and systemd-networkd

2017-04-29 Thread Christian Rebischke
Hello everybody,
I have read in a Phoronix article that iwd will be integrated into
systemd-networkd.[1] Is this already the case with the newest systemd
version? If not, are there any plans to integrate it into
systemd-networkd? I am really interested in this topic, because
currently I use systemd-networkd with wpa_supplicant@.service.
Do you already have any ideas or plans in this direction?

Best regards,

Chris


[1] 
https://www.phoronix.com/scan.php?page=news_item&px=New-Linux-Wireless-Daemon




Re: [systemd-devel] Ordering (apt) timer services to not run at the same time

2017-04-29 Thread Julian Andres Klode
On Sat, Apr 29, 2017 at 11:40:44AM +0200, Lennart Poettering wrote:
> That said, there are limits to this: this will only work correctly if
> the start jobs for both units are either enqueued at the same time or
> in the order they are supposed to be run in. If however, the job for
> the unit that is supposed to be run second is enqueued first, it will
> be immediately dispatched (as at that moment ordering deps won't have
> any effect as the other unit isn't enqueue), and if the job for the
> first unit is then enqueued then both will run at the same time. This
> is simply as the queue dispatching order only matters at the moment a
> service is actually triggered, afterwards it doesn't matter anymore.
[...]
> So, I am not sure what I can recommend to you; systemd's dependency logic
> currently cannot express what you want to do, but I sympathize with
> the problem. I am not entirely sure what a good and natural way would
> be though to extend systemd's logic for what you want to do.
> 
> Ideas?

This might sound crazy, but I think it might make sense to do one
of these:

(1) just wait regardless of the direction of the ordering relationship
- AKA: if the ordering is "broken" when queuing, reverse the order. Could
be a flag (NeverRunAtSameTime=yes).

(2) A weak ordering type like AfterIfActive=active.service

I'm not sure how this all works out with non-oneshot services,
but that's basically the optimal thing to do here.

-- 
Debian Developer - deb.li/jak | jak-linux.org - free software dev
  |  Ubuntu Core Developer |
When replying, only quote what is necessary, and write each reply
directly below the part(s) it pertains to ('inline').  Thank you.


Re: [systemd-devel] Running a set of group isolated from other services?

2017-04-29 Thread Benno Fünfstück
Great, thanks!

Lennart Poettering wrote on Sat, 29 Apr 2017, 19:32:

> On Wed, 26.04.17 11:08, Benno Fünfstück (benno.fuenfstu...@gmail.com)
> wrote:
>
> > > I have the problem that I want to run a set of services that are
> isolated
> > > from the other services. In particular, I'd like to:
> > >
> > > * share some environment variables between these services, that aren't
> > > available for services outside the group
> > > * be able to stop all the services in the group and wait for proper
> > > shutdown
> > > * (would be nice) load services for the group from a different
> directory
> > > than the default one
> > > * (would be nice) be able to add transient services to the group with
> > > systemd-run
> > >
> > > Is such a thing possible with systemd? If not, is it feasible to
> implement
> > > something like this (even if it doesn't match exactly what I want)?
> > >
> > > Regards,
> > > Benno
> > >
> >
> > Just to add if that wasn't clear: I'd like to run this group for multiple
> > different sets of environment variables, and be able to "start" the group
> > for some assignment of environment variables (these variables will not
> > change during the lifetime of the group though)
>
> If you want multiple instantiation you can use systemd's instance
> logic, i.e. you could have:
>
>mygroup@.target
>
> This target unit could then have Wants= deps on your various services,
> always passing along the instance identifier:
>
> [Unit]
> Wants=myservice1@%i.service myservice2@%i.service
>
> Then, inside the service units you'd add a PartOf= dep back:
>
> [Unit]
> PartOf=mygroup@%i.target
>
> And then pull in the environment file based on the instance:
>
> [Service]
> EnvironmentFile=/path/to/my/env-file-%i.conf
>
> I hope that makes sense,
>
> Lennart
>
> --
> Lennart Poettering, Red Hat
>


[systemd-devel] sd_bus_add_match after sd_bus_get_fd

2017-04-29 Thread Federico Di Pierro
Hi!
I'm struggling to understand whether this can be done: suppose I start
polling on sd_bus_get_fd() without any match; then, after some event
happens, I add a match on the same bus.
Will I receive events on my fd just by continuing to poll?
I.e. will this work?
fd = sd_bus_get_fd();
event_happened = 0;
poll(fd);
if (event_happened)
    sd_bus_process();
else
    sd_bus_process(); sd_bus_add_match();

It seems it is not working, but I cannot understand whether my idea is
completely wrong or something else is happening.
I mean, the fd is obviously still the same; I am only telling sd-bus that
I'm now interested in certain messages, by hooking a callback. Am I mistaken?

Thank you very much for your support,
Federico
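
A minimal sketch of the pattern in question, assuming libsystemd's sd-bus on
the system bus; the match string and the lack of error handling are purely
illustrative and not from the original mail. The key detail is that
sd_bus_process() must be called until it returns 0 after every wakeup,
including right after adding the match:

#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <systemd/sd-bus.h>

static int on_message(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        printf("matched: %s.%s\n",
               sd_bus_message_get_interface(m), sd_bus_message_get_member(m));
        return 0;
}

int main(void) {
        sd_bus *bus = NULL;
        sd_bus_slot *slot = NULL;

        if (sd_bus_open_system(&bus) < 0)
                return 1;

        for (;;) {
                uint64_t usec = UINT64_MAX;
                struct pollfd p = {
                        /* The fd stays valid for the lifetime of the connection. */
                        .fd = sd_bus_get_fd(bus),
                        .events = (short) sd_bus_get_events(bus),
                };

                sd_bus_get_timeout(bus, &usec);
                poll(&p, 1, usec == UINT64_MAX ? -1 : (int) (usec / 1000));

                /* Drain the connection: keep calling until it returns 0. */
                while (sd_bus_process(bus, NULL) > 0)
                        ;

                if (!slot)
                        /* Add the match only now, i.e. after the first wakeup;
                         * matching messages still arrive on the very same fd,
                         * because sd_bus_add_match() installs the match on the
                         * bus broker. */
                        sd_bus_add_match(bus, &slot,
                                         "type='signal',interface='org.freedesktop.DBus.Properties'",
                                         on_message, NULL);
        }
}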


Re: [systemd-devel] Running a set of group isolated from other services?

2017-04-29 Thread Lennart Poettering
On Wed, 26.04.17 11:08, Benno Fünfstück (benno.fuenfstu...@gmail.com) wrote:

> > I have the problem that I want to run a set of services that are isolated
> > from the other services. In particular, I'd like to:
> >
> > * share some environment variables between these services, that aren't
> > available for services outside the group
> > * be able to stop all the services in the group and wait for proper
> > shutdown
> > * (would be nice) load services for the group from a different directory
> > than the default one
> > * (would be nice) be able to add transient services to the group with
> > systemd-run
> >
> > Is such a thing possible with systemd? If not, is it feasible to implement
> > something like this (even if it doesn't match exactly what I want)?
> >
> > Regards,
> > Benno
> >
> 
> Just to add if that wasn't clear: I'd like to run this group for multiple
> different sets of environment variables, and be able to "start" the group
> for some assignment of environment variables (these variables will not
> change during the lifetime of the group though)

If you want multiple instantiation you can use systemd's instance
logic, i.e. you could have:

   mygroup@.target

This target unit could then have Wants= deps on your various services,
always passing along the instance identifier:

[Unit]
Wants=myservice1@%i.service myservice2@%i.service

Then, inside the service units you'd add a PartOf= dep back:

[Unit]
PartOf=mygroup@%i.target

And then pull in the environment file based on the instance:

[Service]
EnvironmentFile=/path/to/my/env-file-%i.conf
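
For completeness, starting and stopping the whole group for one set of
variables would then be (the instance name "prod" is only an example):

systemctl start mygroup@prod.target
systemctl stop mygroup@prod.target

Because of the PartOf= dependency, stopping or restarting the target
propagates to all services belonging to that instance.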

I hope that makes sense,

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Running a set of group isolated from other services?

2017-04-29 Thread Lennart Poettering
On Wed, 26.04.17 11:05, Benno Fünfstück (benno.fuenfstu...@gmail.com) wrote:

> Hi,
> 
> I have the problem that I want to run a set of services that are isolated
> from the other services. In particular, I'd like to:
> 
> * share some environment variables between these services, that aren't
> available for services outside the group

You can share a common environment file between them and use EnvironmentFile=-

> * be able to stop all the services in the group and wait for proper
>   shutdown

Usually you'd create a target unit for that and use PartOf=.

> * (would be nice) load services for the group from a different directory
> than the default one

This is not supported. ATM you cannot extend the unit file search path
locally. It's built into the binary. (Well, to be entirely honest you
can override it with an env var, but it's hard to do for PID 1 and
mostly relevant for debugging)

> * (would be nice) be able to add transient services to the group with
> systemd-run

If you set up such a target unit, it doesn't matter whether the
services that are linked to it are transient or persistent.
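
For instance, assuming a mygroup@.target as sketched in the other reply, a
transient unit could be attached to it roughly like this (unit names are
illustrative, and this assumes the running systemd accepts dependency
properties for transient units):

systemd-run --unit=myjob@prod.service -p PartOf=mygroup@prod.target /usr/bin/mytool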

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] how to correctly specify dependency on dbus

2017-04-29 Thread Lennart Poettering
On Wed, 26.04.17 10:09, prashantkumar dhotre (prashantkumardho...@gmail.com) 
wrote:

> Hi
> For my service,  I have:
> 
> # cat my.service
> [Unit]
> Description=My Service
> After=dbus.service
> Requires=dbus.service
> ...
> ...
> 
> Sometimes I see that my service fails to get a D-Bus connection
> (dbus_bus_get_private() fails without any error msg).
> One possibility I can think of is that dbus is not fully initialized.
> From the above service file config, I understand that when I start my
> service, the dbus service is started first and then my service.
> But I am not sure whether my service's start is delayed until dbus is
> fully up, initialized, running and ready to accept connections.
> Is there a way to specify this in my service file?

Regular system services do not have to declare any explicit dependency
on D-Bus, as D-Bus is always and unconditionally available in the later
boot phase (where regular services are started) and during runtime.

If your service runs during the early boot phase however (i.e. before
basic.target is reached, meaning your service has
DefaultDependencies=no set), then you do need an explicit dependency,
but should only specify it as After=dbus.socket +
Requires=dbus.socket (i.e. on the socket rather than the service).
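
For an early-boot service, the relevant stanza might look roughly like this
(a sketch only; everything except the dbus.socket lines depends on the
actual service):

[Unit]
Description=My early-boot service
DefaultDependencies=no
Requires=dbus.socket
After=dbus.socket

[Service]
ExecStart=/usr/bin/my-daemon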

If you don't know whether your service is an early or late boot
service then it's almost certainly a late boot service.

> For example, for systemd-networkd.service, I see it specifies something like:
> 
> # On kdbus systems we pull in the busname explicitly, because it
> # carries policy that allows the daemon to acquire its name.
> Wants=org.freedesktop.network1.busname
> After=org.freedesktop.network1.busname
> 
> 
> Can I use the same (specifying After=/Wants= on a D-Bus name)?

Please ignore anything related to kdbus in the source; it is no longer
supported, and does not apply to regular D-Bus.



Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Should automount units for network filesystems be Before=local-fs.target?

2017-04-29 Thread Lennart Poettering
On Sat, 29.04.17 22:04, Michael Chapman (m...@very.puzzling.org) wrote:

> > We can't really do that in the generic case, sorry. The distinction
> > between local-fs.target and remote-fs.target mostly exists because the
> > latter may rely on network management services which aren't available
> > in early boot. Specifically, NetworkManager traditionally runs during
> > late boot, not during early boot. Now, if we'd permit that an autofs
> > mount is already established in early boot but can only be fulfilled
> > in late boot, then we'd open the doors to a variety of deadlocks
> > triggered by early-boot services accessing these mounts at a point in
> > time where the backing mounts cannot be established yet.
> 
> That would imply these early boot services accessing paths that are going to
> be over-mounted by the network filesystem later on... which is a tad
> strange, but admittedly quite possible.

Well, I assumed that your services are of this kind. Because if they
aren't, and they do not access the autofs mounts, then you can simply
mount them in late boot without ill effect. Am I missing something?

> Yeah, I don't really want to pull the whole .mount in early too, since I am
> relying on normal late-boot configuration of the network.
> 
> I'm going to have to just add After=remote-fs.target drop-ins for a whole
> _lot_ of services that could _possibly_ access data on network
> filesystems.

You can also add a drop-in to remote-fs.target and add a single line
listing the relevant services in the opposite direction.
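
That is, something like this single drop-in, with the Before= line listing
the affected services (the service names are placeholders):

# /etc/systemd/system/remote-fs.target.d/50-order.conf
[Unit]
Before=foo.service bar.service baz.service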

> Unfortunately I don't really have control over what these services do, and I
> don't necessarily know which ones will actually need the network
> filesystems, so I'm going to just have to punt and add the drop-in on them
> all.
> 
> It's a pity that After=remote-fs.target isn't in a service's default
> dependencies (obviously, services that set up networking would then need
> DefaultDependencies=no, but there's far fewer of those). People generally
> expect most services to be started after mounting network filesystems, since
> pre-systemd Linux releases tended to do that. But it's too late to change
> the way default dependencies work now.

There are some exceptions: systemd-user-sessions.service does order
itself after remote-fs.target, which means that anything involved with
human user logins will run with the mounts applied. Also, if you have
/var or /var/tmp on the network we'll pull that in explicitly in early
boot, as we don't support that being mounted in late boot.

On SysV where NM was used for network management, the only way you
could get network mounts established before your service was by
ordering your service after NM too. It's not too different now,
except that instead of ordering yourself explicitly after NM we ask
you to order yourself after the more specific "remote-fs.target".

I would recommend that packagers and downstreams of software that is
frequently used with data located on NFS shares ship
RequiresMountsFor= for the relevant directories. For example, I think
stuff like MySQL would be good to ship with
RequiresMountsFor=/var/lib/mysql, just to make it nicer for those who
want to place that dir on NFS.
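
Until packages ship that, an admin can add the same thing locally with a
drop-in (the unit name mariadb.service is only an example):

# /etc/systemd/system/mariadb.service.d/nfs.conf
[Unit]
RequiresMountsFor=/var/lib/mysql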

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Upgrade 232 -> 233: user@XXX.service: Failed at step PAM spawning...

2017-04-29 Thread Lennart Poettering
On Sat, 29.04.17 16:59, Vlad (vo...@vovan.nl) wrote:

> Thanks for the answer. I'd then rephrase my original question: I'd like
> to know what has been changed in the systemd (pam_systemd?) version 233,
> that now it fails to start user@xxx.service? If I downgrade to the
> version 232, then systemd gives the same error, but still starts
> user@xxx.service successfully (pam configuration is exactly the same for
> both systemd versions).

Here's an educated guess: maybe it's not pam_systemd that fails but
pam_keyring, due to the recent keyring changes? (every service now
gets its own fresh keyring set up, maybe the way you invoke
pam_keyring clashes with that?)

Anyway, please figure out which PAM module precisely fails, using PAM
debugging. For that stuff please consult the PAM community or
documentation.

Thanks,

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Upgrade 232 -> 233: user@XXX.service: Failed at step PAM spawning...

2017-04-29 Thread Vlad
Thanks for the answer. I'd then rephrase my original question: I'd like
to know what has been changed in the systemd (pam_systemd?) version 233,
that now it fails to start user@xxx.service? If I downgrade to the
version 232, then systemd gives the same error, but still starts
user@xxx.service successfully (pam configuration is exactly the same for
both systemd versions).

Regards,
Vlad.

On 29/04/17 13:29, Lennart Poettering wrote:
> On Sat, 29.04.17 13:25, Vlad (vo...@vovan.nl) wrote:
>
>> Lennart,
>>
>> I've just tried your suggestion as well, but it doesn't change behavior.
>> I'm just wondering how it would be possible to investigate the error.
>> The message "user@xxx.service: Failed at step PAM spawning
>> /usr/lib/systemd/systemd: Operation not permitted" isn't very
>> descriptive. I enabled debug for pam_systemd, but it doesn't give useful
>> information in my case.
> Well, I figure you should look for PAM level debugging, not
> systemd level debugging for this. Contact the PAM community for help
> on that.
>
> But do note again that distros vary greatly on PAM, and while "-" is
> typically used on Fedora-based distros, IIRC other distros don't know
> or use that concept. Please ask your distro for help.
>
> Lennart
>



Re: [systemd-devel] Ordering (apt) timer services to not run at the same time

2017-04-29 Thread Dan Nicholson
On Apr 27, 2017 4:31 PM, "Julian Andres Klode"  wrote:

Hi systemd folks,

(service and timer files being discussed at the bottom)

we are currently reworking the way automatic updates and upgrades work
on Ubuntu and Debian systems. We basically have two persistent timers
with associated services:

1. apt-daily - Downloads new lists and packages
2. apt-daily-upgrade - Performs upgrades

The first job should run spread out through the day (we run it twice
due to some other reasons), the latter in the morning between 6 and 7,
and at boot, daily-upgrade should be resumed after daily (so we added
After ordering relations to apt-daily-upgrade timer and service).

Now, we seem to be missing one bit: If daily-upgrade is already
running, and daily is about to start, daily should wait for
daily-upgrade to finish. I had hoped that maybe that works
automatically given that there is some ordering relation between the
two, but that did not work out. I tried adding Conflicts, but systemd
then said "loop to fast" and became unresponsive (not sure if caused
by this, but maybe).


It seems to me that this could be easily worked around by adding
Wants=apt-daily.service to the upgrade unit. That will guarantee that
systemd puts the update job in the queue before the upgrade job. I think
this is what you want, anyway. You want to make sure that you have the
latest lists before starting the upgrade. If you get the timers lined up,
then maybe you won't even get any additional update jobs run.
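
In unit-file terms that suggestion amounts to something like this in
apt-daily-upgrade.service or a drop-in for it (a sketch; the After= line is
presumably already there):

[Unit]
Wants=apt-daily.service
After=apt-daily.service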

Dan


Re: [systemd-devel] Should automount units for network filesystems be Before=local-fs.target?

2017-04-29 Thread Michael Chapman

On Sat, 29 Apr 2017, Lennart Poettering wrote:

On Thu, 27.04.17 15:53, Michael Chapman (m...@very.puzzling.org) wrote:


Hello all,

At present, when systemd-fstab-generator creates an automount unit for an
fstab entry, it applies the dependencies that would have been put into the
mount unit into the automount unit instead.

For a local filesystem, this automount unit would be Before=local-fs.target.
For a network filesystem, it gets Before=remote-fs.target. If the mount is
not noauto, it also gets a corresponding WantedBy= or RequiredBy=
dependency.

Would it make more sense for the automount unit to be ordered before (and,
if not noauto, be pulled in by) local-fs.target, even for network
filesystems?


We can't really do that in the generic case, sorry. The distinction
between local-fs.target and remote-fs.target mostly exists because the
latter may rely on network management services which aren't available
in early boot. Specifically, NetworkManager traditionally runs during
late boot, not during early boot. Now, if we'd permit that an autofs
mount is already established in early boot but can only be fulfilled
in late boot, then we'd open the doors to a variety of deadlocks
triggered by early-boot services accessing these mounts at a point in
time where the backing mounts cannot be established yet.


That would imply these early boot services accessing paths that are going 
to be over-mounted by the network filesystem later on... which is a tad 
strange, but admittedly quite possible.



You can do what you want locally though if you are reasonably sure
that such a deadlock cannot be triggered (for example, because you
know that your networking management solution already runs in early
boot). One way to do this is by adding
"x-systemd.before=local-fs.target" as mount option to the relevant
mounts in fstab. (Note that x-systemd.before= is a relatively
recent addition to systemd.)

Lennart


Yeah, I don't really want to pull the whole .mount in early too, since I 
am relying on normal late-boot configuration of the network.


I'm going to have to just add After=remote-fs.target drop-ins for a whole 
_lot_ of services that could _possibly_ access data on network 
filesystems. Unfortunately I don't really have control over what these 
services do, and I don't necessarily know which ones will actually need 
the network filesystems, so I'm going to just have to punt and add the 
drop-in on them all.


It's a pity that After=remote-fs.target isn't in a service's default 
dependencies (obviously, services that set up networking would then need 
DefaultDependencies=no, but there's far fewer of those). People generally 
expect most services to be started after mounting network filesystems, 
since pre-systemd Linux releases tended to do that. But it's too late to 
change the way default dependencies work now.



Re: [systemd-devel] monitoring systemd unit flapping

2017-04-29 Thread Lennart Poettering
On Tue, 25.04.17 16:05, Jeremy Eder (je...@redhat.com) wrote:

> Sorry, I did not explain myself clearly.  systemd is doing nothing wrong.
> What I'd like to do is find an optimal way to notify our monitoring system
> (zabbix) that a service is flapping.  We can probably script something.
> Just looking to see if there's a more elegant way.  Looking also at
> OnFailure
> 
> https://serverfault.com/questions/694818/get-notification-when-systemd-monitored-service-enters-failed-state
> 
> At the same time, trying to avoid false positives in the monitoring system,
> so one failure is OK but when it hits startburstlimit, things are bad, even
> if the service doesn't immediately crash.  That's the thing; it might take
> a few seconds/minutes to fail.  I realize this could be considered an edge
> case...perhaps an equivalent of OnFailure could be
> OnStartBurstLimit= ?

You can already implement this with OnFailure=, all you need to do is
then check via "systemctl show -p Result" what the precise error
reason was you got called for...

Or you use ExecStop=, as suggested in that other mail.
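
A sketch of that approach; the notify-zabbix@.service unit and the helper
script are hypothetical, not part of systemd:

# drop-in for the monitored service
[Unit]
OnFailure=notify-zabbix@%n.service

# notify-zabbix@.service
[Unit]
Description=Report failure of %i to monitoring

[Service]
Type=oneshot
ExecStart=/usr/local/bin/report-failure %i

The helper can then run "systemctl show -p Result" on the failed unit and
only raise an alert when the result indicates the start rate limit was hit
rather than a single crash.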

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] monitoring systemd unit flapping

2017-04-29 Thread Lennart Poettering
On Tue, 25.04.17 11:30, Jeremy Eder (je...@redhat.com) wrote:

> If we have a service that is flapping because it's crashing after
> startup...what's the right way to monitor for that condition?  Eventually
> it triggers startburstlimit, was thinking that if we hit startburstlimit
> that the service could set some special bit that we could look for.
> 
> Like ... systemctl is-flapping myservice --quiet
> 
> Any other possibilities?

The unit will be placed in "failed" state if the start burst limit is
hit. It will be placed in that state in other conditions too, hence to
determine the precise reason why your service entered that state check
the "Result" property on it.

# systemctl show -p Result my.service

Yes, this is a bit underdocumented.

On recent systemd versions you can also add a drop-in to your service
that defines an ExecStop= stanza. The program you specify there will
receive an env var $SERVICE_RESULT with the same value.
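
A sketch of that drop-in plus handler; the script path is made up, and the
exact Result string for the rate-limit case may differ between versions
(e.g. "start-limit-hit"):

# /etc/systemd/system/my.service.d/flap-check.conf
[Service]
ExecStop=/usr/local/bin/flap-check

# /usr/local/bin/flap-check
#!/bin/sh
case "$SERVICE_RESULT" in
    start-limit*) logger -t flap-check "my.service hit its start rate limit" ;;
    *) : ;;
esac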

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Best way to configure longer start timeout for .device units?

2017-04-29 Thread Lennart Poettering
On Fri, 28.04.17 09:36, Michal Sekletar (msekl...@redhat.com) wrote:

> Hi,
> 
> On big setups (read: a lot of multipathed disks), probing and
> assembling storage may take significant amount of time. However, by
> default systemd waits only 90s (DefaultTimeoutStartSec) for
> "top-level" device unit to show up, i.e. one that is referenced in
> /etc/fstab.
> 
> One possible solution is to change JobTimeout for device unit by
> adding x-systemd.device-timeout= option to fstab entries. This is
> kinda ugly.
> 
> Another option is to bump value of DefaultTimeoutStartSec, since that
> is what systemd uses as default timeout for device's unit start job.
> However, this has possible negative effect on all other units as well,
> e.g. service Exec* timeouts will be affected by this change.
> 
> I am looking for elegant solution that doesn't involve rewriting
> automation scripts that manage /etc/fstab.
> 
> Is there any other way how to configure the timeout? Can't we
> introduce new timeout value specifically for device units?
> 
> Any advice is much appreciated, thanks.

Note that x-systemd.device-timeout= is implemented by simply writing
out drop-in snippets for the .device unit. Hence, if you know the
device unit names ahead of time you can write this out from any tool you like.
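
For reference, such a snippet might look like this; the device unit name is
only an example (the systemd-escape of the device path), and whether the
right knob is JobTimeoutSec= or a newer per-job setting depends on the
systemd version:

# /etc/systemd/system/dev-mapper-mpatha.device.d/50-timeout.conf
[Unit]
JobTimeoutSec=30min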

I am not overly keen on adding global but per-unit-type options for
this, but then again I do see the usefulness, hence such a
DefaultDeviceTimeoutStartSec= setting might be OK to add...

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Plan for DHCP Option 119?

2017-04-29 Thread Lennart Poettering
On Tue, 25.04.17 11:28, Daniel Wang (wonder...@google.com) wrote:

> Hi all,
> 
> First of all this is my first email to this list so apologies if it's not
> worded perfectly.
> 
> I am wondering if there's any plan to support Domain Search List option in
> networkd. Some cloud providers like GCE, advertise multiple search domains
> through option 119 and they just get ignored in today's networkd.
> 
> The bug https://github.com/systemd/systemd/issues/2710 doesn't seem to get
> any attention. What's the complexities behind it? Does anyone on this list
> have any reference implementation?

If you want this functionality to be added soon, please provide a
patch, that's usually the best way!

Other than that, we all have a lot to do, and it will only get done if
someone finds the time to.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Should automount units for network filesystems be Before=local-fs.target?

2017-04-29 Thread Lennart Poettering
On Thu, 27.04.17 15:53, Michael Chapman (m...@very.puzzling.org) wrote:

> Hello all,
> 
> At present, when systemd-fstab-generator creates an automount unit for an
> fstab entry, it applies the dependencies that would have been put into the
> mount unit into the automount unit instead.
> 
> For a local filesystem, this automount unit would be Before=local-fs.target.
> For a network filesystem, it gets Before=remote-fs.target. If the mount is
> not noauto, it also gets a corresponding WantedBy= or RequiredBy=
> dependency.
> 
> Would it make more sense for the automount unit to be ordered before (and,
> if not noauto, be pulled in by) local-fs.target, even for network
> filesystems?

We can't really do that in the generic case, sorry. The distinction
between local-fs.target and remote-fs.target mostly exists because the
latter may rely on network management services which aren't available
in early boot. Specifically, NetworkManager traditionally runs during
late boot, not during early boot. Now, if we'd permit that an autofs
mount is already established in early boot but can only be fulfilled
in late boot, then we'd open the doors to a variety of deadlocks
triggered by early-boot services accessing these mounts at a point in
time where the backing mounts cannot be established yet.

You can do what you want locally though if you are reasonably sure
that such a deadlock cannot be triggered (for example, because you
know that your networking management solution already runs in early
boot). One way to do this is by adding
"x-systemd.before=local-fs.target" as mount option to the relevant
mounts in fstab. (Note that x-systemd.before= is a relatively
recent addition to systemd.)
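
In fstab that could look roughly like this (server, paths and filesystem
type are placeholders):

nfssrv:/export/data  /srv/data  nfs4  x-systemd.automount,x-systemd.before=local-fs.target  0  0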

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Upgrade 232 -> 233: user@XXX.service: Failed at step PAM spawning...

2017-04-29 Thread Lennart Poettering
On Sat, 29.04.17 13:25, Vlad (vo...@vovan.nl) wrote:

> Lennart,
> 
> I've just tried your suggestion as well, but it doesn't change behavior.
> I'm just wondering how it would be possible to investigate the error.
> The message "user@xxx.service: Failed at step PAM spawning
> /usr/lib/systemd/systemd: Operation not permitted" isn't very
> descriptive. I enabled debug for pam_systemd, but it doesn't give useful
> information in my case.

Well, I figure you should look for PAM level debugging, not
systemd level debugging for this. Contact the PAM community for help
on that.

But do note again that distros vary greatly on PAM, and while "-" is
typically used on Fedora-based distros, IIRC other distros don't know
or use that concept. Please ask your distro for help.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Upgrade 232 -> 233: user@XXX.service: Failed at step PAM spawning...

2017-04-29 Thread Vlad
Lennart,

I've just tried your suggestion as well, but it doesn't change behavior.
I'm just wondering how it would be possible to investigate the error.
The message "user@xxx.service: Failed at step PAM spawning
/usr/lib/systemd/systemd: Operation not permitted" isn't very
descriptive. I enabled debug for pam_systemd, but it doesn't give useful
information in my case.

Regards,
Vlad.

On 29/04/17 12:21, Lennart Poettering wrote:
> On Sat, 29.04.17 11:13, Vlad (vo...@vovan.nl) wrote:
>
>> Hello,
>>
>> I've recently updated systemd and now user session is failing to start:
>> Apr 29 11:04:02 xxx systemd[550]: user@xxx.service: Failed at step PAM
>> spawning /usr/lib/systemd/systemd: Operation not permitted
>> Apr 29 11:04:02 xxx systemd[1]: Failed to start User Manager for UID xxx.
>> Apr 29 11:04:02 xxx lightdm[535]: pam_systemd(lightdm:session): Failed
>> to create session: Start job for unit user@xxx.service failed with 'failed'
>>
>> Apparently the previous version gives similar error as well, but doesn't
>> fail to start user session:
>> Apr 29 11:09:37 xxx systemd[565]: user@xxx.service: Failed at step PAM
>> spawning /usr/lib/systemd/systemd: Operation not permitted
>> Apr 29 11:09:37 xxx systemd[1]: Started User Manager for UID xxx.
>>
>> I'd appreciate any thoughts about this issue.
> Maybe your PAM snippet for your app changed the pam_systemd invocation
> from "ignore all errors" to "do not ignore errors"?
>
> PAM varies between distros, on Fedora-based distros lines that ignore
> failures in PAM configuration are usually prefixed with a single dash
> character. Maybe this was altered for you?
>
> Lennart
>



Re: [systemd-devel] Upgrade 232 -> 233: user@XXX.service: Failed at step PAM spawning...

2017-04-29 Thread Vlad
Lennart,

As I can see pam_systemd is "optional" everywhere in pam.d
configuration. Is that what you meant?
grep pam_systemd *
system-auth:session   optional   pam_systemd.so debug
systemd-user:session optional pam_systemd.so

Regards,
Vlad.

On 29/04/17 12:21, Lennart Poettering wrote:
> On Sat, 29.04.17 11:13, Vlad (vo...@vovan.nl) wrote:
>
>> Hello,
>>
>> I've recently updated systemd and now user session is failing to start:
>> Apr 29 11:04:02 xxx systemd[550]: user@xxx.service: Failed at step PAM
>> spawning /usr/lib/systemd/systemd: Operation not permitted
>> Apr 29 11:04:02 xxx systemd[1]: Failed to start User Manager for UID xxx.
>> Apr 29 11:04:02 xxx lightdm[535]: pam_systemd(lightdm:session): Failed
>> to create session: Start job for unit user@xxx.service failed with 'failed'
>>
>> Apparently the previous version gives similar error as well, but doesn't
>> fail to start user session:
>> Apr 29 11:09:37 xxx systemd[565]: user@xxx.service: Failed at step PAM
>> spawning /usr/lib/systemd/systemd: Operation not permitted
>> Apr 29 11:09:37 xxx systemd[1]: Started User Manager for UID xxx.
>>
>> I'd appreciate any thoughts about this issue.
> Maybe your PAM snippet for your app changed the pam_systemd invocation
> from "ignore all errors" to "do not ignore errors"?
>
> PAM varies between distros, on Fedora-based distros lines that ignore
> failures in PAM configuration are usually prefixed with a single dash
> character. Maybe this was altered for you?
>
> Lennart
>



Re: [systemd-devel] fully volatile running from ram

2017-04-29 Thread Lennart Poettering
On Thu, 27.04.17 12:52, jr (darwinsker...@gmail.com) wrote:

> On Wed, Apr 26, 2017 at 04:08:21PM +0200, Lennart Poettering wrote:
> > On Tue, 25.04.17 13:13, jr (darwinsker...@gmail.com) wrote:
> > 
> > > hello,
> > > 
> > > in a fully-volatile boot scenario /usr from a physical disk gets mounted 
> > > on
> > > top of an instance of a tmpfs. my first question is why is that necessary?
> > > (the tmpfs part i mean)
> > 
> > I am not sure I grok what you are trying to say? tmpfs is how on Linux
> > you can easily have a volatile file system, as it lives entirely in
> > memory and never hits the disk (admittedly modulo swapping).
> > 
> 
> but once you mount over that tmpfs from disk, the overlaying fs will hide
> the underlying tmpfs, no? Barring the fs caching done in the kernel
> (readahead?), the disk still has to be touched at least once for anything
> that needs to be loaded into memory for executing or otherwise reading,
> no? This is the part I'm trying to understand: if the overlaying fs
> is mounted from a physical disk, does mounting it over a tmpfs cause the
> kernel to cache that fs entirely into memory?
> 
> this looks a lot like initramfs where it is an instance of tmpfs but that
> also gets *shadowed* once "real_root" is switched to, no?

I still don't grok what you are trying to do, but do note that open
files (regardless of whether they are referenced via an open fd or via an
mmap() mapping, for example because an ELF binary is mapped into memory)
will always continue to reference the original files on the original
file systems they were opened from, regardless of whether the file system
is now obstructed and the paths originally used to open them would refer
to a different file or file system now.

> > > my second question is, would it be possible to do the same but rather
> > > than mounting the /usr, *populate* the said tmpfs with the OS tree from said
> > > physical disk, preferably in a block-cached or fs-cached setup (dm-cache or
> > > bcachefs). I realise that this can be done easily in the initrd, or the
> > > initramfs can even hold the /usr, but the problem there is when we boot
> > > "development" and not "production", in which case we want updates to be
> > > written to disk.
> > 
> > I am not grokking this question either, but keeping the whole OS in
> > memory of course means you need a lot of memory.  Moreover depending on
> > disk speeds it means your boot will be quite slow, as you copy stuff over
> > every single time before you can boot.  If you copy things around like
> > that during boot the best thing would be to do it in the initrd, as
> > otherwise you are already running the system that you are about to
> > prepare for running, and dropping open references to the old files is
> > hard (though not entirely impossible, after all there is "systemctl
> > daemon-reexec" and things).
> 
> No, no, I'm thinking of systemd as rdinit rather than init; i.e. the
> initramfs is the real_root. One way of doing it is to pack /usr into an
> initramfs archive and either build it into the kernel or pass it via the
> bootloader (never worked for me)? Then you boot the system into RAM and
> enjoy its blazing fast responsiveness, but then along comes some update
> that one would like to apply, and it turns out to be a really great
> update. But how do you make it stick then? In fact, if /var is not
> volatile and your package manager keeps its records there (in my case
> portage does), on the next boot the system is confused because it thinks
> the updates are going to be there. This has a number of solutions:
> bcachefs or dm-cache or even --overlay.
> --overlay is cool since it stages the upgrade; the caching solutions are
> for performance though. I just don't know whether the hooks are there
> (kernel and/or systemd) to boot the system this way, i.e. populate the
> initramfs from the current or next or one-after gold master which resides
> on disk; start working on the initramfs; associate this initramfs with the
> original or another block device or subvolume of an fs on the disk; and
> let our chosen caching system take care of mirroring our working tree with
> said *backup*. On the next reboot we should have the option of rolling
> back or continuing with our work, and so on.
> 
> Please, please let me know if I'm still making no sense :) English is not
> my strong suit and on top of that I'm horrible at explaining things.

Still not grokking what you are trying to do, but do note that an
initrd is ultimately just a tmpfs preinitialized from a cpio archive.

Also note that if you have a directory tree on disk there's little
reason to copy it into tmpfs performance-wise. The kernel buffer cache
will cache the data on disk into RAM anyway as it is being accessed.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Upgrade 232 -> 233: user@XXX.service: Failed at step PAM spawning...

2017-04-29 Thread Lennart Poettering
On Sat, 29.04.17 11:13, Vlad (vo...@vovan.nl) wrote:

> Hello,
> 
> I've recently updated systemd and now user session is failing to start:
> Apr 29 11:04:02 xxx systemd[550]: user@xxx.service: Failed at step PAM
> spawning /usr/lib/systemd/systemd: Operation not permitted
> Apr 29 11:04:02 xxx systemd[1]: Failed to start User Manager for UID xxx.
> Apr 29 11:04:02 xxx lightdm[535]: pam_systemd(lightdm:session): Failed
> to create session: Start job for unit user@xxx.service failed with 'failed'
> 
> Apparently the previous version gives similar error as well, but doesn't
> fail to start user session:
> Apr 29 11:09:37 xxx systemd[565]: user@xxx.service: Failed at step PAM
> spawning /usr/lib/systemd/systemd: Operation not permitted
> Apr 29 11:09:37 xxx systemd[1]: Started User Manager for UID xxx.
> 
> I'd appreciate any thoughts about this issue.

Maybe your PAM snippet for your app changed the pam_systemd invocation
from "ignore all errors" to "do not ignore errors"?

PAM varies between distros, on Fedora-based distros lines that ignore
failures in PAM configuration are usually prefixed with a single dash
character. Maybe this was altered for you?
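
For illustration, on Fedora-derived setups the line in question typically
looks like this; the leading dash tells PAM not to complain if the module is
missing, and "optional" means its failures do not affect the overall result:

-session   optional   pam_systemd.so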

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Ordering (apt) timer services to not run at the same time

2017-04-29 Thread Lennart Poettering
On Thu, 27.04.17 23:30, Julian Andres Klode (j...@debian.org) wrote:

> Hi systemd folks,
> 
> (service and timer files being discussed at the bottom)
> 
> we are currently reworking the way automatic updates and upgrades work
> on Ubuntu and Debian systems. We basically have two persistent timers
> with associated services:
> 
> 1. apt-daily - Downloads new lists and packages
> 2. apt-daily-upgrade - Performs upgrades
> 
> The first job should run spread out through the day (we run it twice
> due to some other reasons), the latter in the morning between 6 and 7,
> and at boot, daily-upgrade should be resumed after daily (so we added
> After ordering relations to apt-daily-upgrade timer and service).
> 
> Now, we seem to be missing one bit: If daily-upgrade is already
> running, and daily is about to start, daily should wait for
> daily-upgrade to finish. I had hoped that maybe that works
> automatically given that there is some ordering relation between the
> two, but that did not work out.

Hmm, this should just work (with restrictions), as long as both
services use Type=oneshot and there's an After=/Before= dep between
them. In that case, if both services are enqueued at the same time they
will be dispatched in the right order, and the second one only after
the service completed.

That said, there are limits to this: this will only work correctly if
the start jobs for both units are either enqueued at the same time or
in the order they are supposed to be run in. If however, the job for
the unit that is supposed to be run second is enqueued first, it will
be immediately dispatched (as at that moment ordering deps won't have
any effect as the other unit isn't enqueued), and if the job for the
first unit is then enqueued then both will run at the same time. This
is simply because the queue dispatching order only matters at the moment a
service is actually triggered; afterwards it doesn't matter anymore.

> I tried adding Conflicts, but systemd
> then said "loop to fast" and became unresponsive (not sure if caused
> by this, but maybe).

Conflicts is not what you want: it means that any queued jobs of the
other service are removed when you start a conflicting service, in
addition to the service being stopped itself.
"loop to fast" usually indicates that either something in PID 1 is
borked, or you have some kind of feedback loop, where systemd keeps
doing stuff based on a flood of client requests.

So, I am not sure what I can recommend to you; systemd's dependency logic
currently cannot express what you want to do, but I sympathize with
the problem. I am not entirely sure what a good and natural way would
be though to extend systemd's logic for what you want to do.

Ideas?

Lennart

-- 
Lennart Poettering, Red Hat


[systemd-devel] Upgrade 232 -> 233: user@XXX.service: Failed at step PAM spawning...

2017-04-29 Thread Vlad
Hello,

I've recently updated systemd and now user session is failing to start:
Apr 29 11:04:02 xxx systemd[550]: user@xxx.service: Failed at step PAM
spawning /usr/lib/systemd/systemd: Operation not permitted
Apr 29 11:04:02 xxx systemd[1]: Failed to start User Manager for UID xxx.
Apr 29 11:04:02 xxx lightdm[535]: pam_systemd(lightdm:session): Failed
to create session: Start job for unit user@xxx.service failed with 'failed'

Apparently the previous version gives similar error as well, but doesn't
fail to start user session:
Apr 29 11:09:37 xxx systemd[565]: user@xxx.service: Failed at step PAM
spawning /usr/lib/systemd/systemd: Operation not permitted
Apr 29 11:09:37 xxx systemd[1]: Started User Manager for UID xxx.

I'd appreciate any thoughts about this issue.

Regards,
Vlad.




Re: [systemd-devel] Ordering (apt) timer services to not run at the same time

2017-04-29 Thread Andrei Borzenkov
On 28.04.2017 12:05, Julian Andres Klode wrote:
> On Fri, Apr 28, 2017 at 08:46:45AM +0200, Michal Sekletar wrote:
>> On Thu, Apr 27, 2017 at 11:30 PM, Julian Andres Klode  
>> wrote:
>>
>>> Now, we seem to be missing one bit: If daily-upgrade is already
>>> running, and daily is about to start, daily should wait for
>>> daily-upgrade to finish. I had hoped that maybe that works
>>> automatically given that there is some ordering relation between the
>>> two, but that did not work out. I tried adding Conflicts, but systemd
>>> then said "loop to fast" and became unresponsive (not sure if caused
>>> by this, but maybe).
>>
>> After/Before dependencies ensure ordering between respective jobs in a
>> transaction (actually between both jobs in a single transaction and
>> between jobs that are already in run queue). However, ordering doesn't
>> affect jobs that we've already dispatched, since they are already
>> running we can't do much about them.
> 
> From my testing, if B has After=A, and A is already started, the
> startup of B is delayed until A has completed - do you mean that
> with run queue, or is that merely by accident somehow?
> 

Works as designed. You have Type=oneshot units; for this type "unit
started" means "ExecStart command has finished". So B waits until A is
started, i.e. until A's ExecStart command completes.

But in the opposite case, when B is already being started, there is nothing
that can be done about A; even worse, the strict solution would be to abort
B, start A, then restart B.

This really needs another type of dependency. An asymmetrical After= (one
that does not imply a reverse Before=) is one possibility. A reverse
Requisite= is not exactly the right one, as you want to wait, not fail the
start request immediately.

>> Indeed, seems like lockfile + condition in other unit is the simplest way 
>> out.
> 
> How unfortunate.
> 
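
A sketch of that lockfile approach using util-linux's flock(1); the lock
path and timeout are illustrative, and the apt.systemd.daily command lines
are quoted from memory and may not match the real units exactly:

# /etc/systemd/system/apt-daily.service.d/lock.conf
[Service]
ExecStart=
ExecStart=/usr/bin/flock -w 3600 /run/lock/apt-daily.lock /usr/lib/apt/apt.systemd.daily update

# /etc/systemd/system/apt-daily-upgrade.service.d/lock.conf
[Service]
ExecStart=
ExecStart=/usr/bin/flock -w 3600 /run/lock/apt-daily.lock /usr/lib/apt/apt.systemd.daily install

Whichever service grabs the lock first runs to completion; the other blocks
on the same lock (up to the -w timeout) instead of running concurrently.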
