[systemd-devel] getaddrinfo() API with systemd

2015-04-24 Thread Nanda Kumar
Hi,

I am facing a problem when querying DNS via the getaddrinfo() API from a
process started by systemd. Despite a nameserver entry in
/etc/resolv.conf, the query fails to resolve. After tracing a few system
calls, I found that the problem is due to systemd's resolver: for a
process started by systemd, the getaddrinfo() DNS query is routed via
systemd, while in stand-alone mode (i.e. spawned from a shell) the query
works normally. I edited /etc/systemd/resolved.conf to add my DNS
server address and restarted systemd-resolved. Now the DNS query works
properly.

Is there any way to bypass systemd for getaddrinfo() (e.g. by passing
extra flags in the hints) and have the lookup done the usual way?


In my case /etc/resolv.conf is a symlink to systemd-resolved's runtime
resolv.conf. Would unlinking it and keeping /etc/resolv.conf independent
solve the problem?
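
For reference, the call being discussed looks roughly like this. This is a minimal sketch, not from the original mail; as far as the standard hints flags go (AI_NUMERICHOST, AI_ADDRCONFIG, etc.), there is no flag that bypasses the resolver backends configured via nsswitch.conf, which is why none is shown:

```c
/* Minimal getaddrinfo() usage sketch (illustrative only). Resolution
 * follows nsswitch.conf; no hints flag selects or bypasses a
 * particular backend such as systemd-resolved. */
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int resolve(const char *host) {
        struct addrinfo hints, *res = NULL;
        int r;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        r = getaddrinfo(host, NULL, &hints, &res);
        if (r != 0) {
                fprintf(stderr, "getaddrinfo(%s): %s\n", host, gai_strerror(r));
                return -1;
        }

        freeaddrinfo(res);
        return 0;
}

int main(void) {
        /* "localhost" resolves via /etc/hosts on virtually all systems */
        return resolve("localhost") == 0 ? 0 : 1;
}
```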


Regards,

Nandakumar
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Ivan Shapovalov
On 2015-04-25 at 04:00 +0300, Ivan Shapovalov wrote:
> On 2015-04-24 at 16:04 +0200, Lennart Poettering wrote:
> > [...]
> > 
> > Actually, it really is about the UNIT_TRIGGERS dependencies only,
> > since we don't do the retroactive deps stuff at all when we are
> > coldplugging, it's conditionalized in m->n_reloading <= 0.
> 
> So, I think I understand the problem. We should do this not only for
> UNIT_TRIGGERS, but also for any dependencies which may matter
> when activating that unit. That is, anything which is referenced by
> transaction_add_job_and_dependencies()... recursively.

Here is what I have in mind. Don't know whether this is correct, but
it fixes the problem for me.

From 515d878e526e52fc154874e93a4c97555ebd8cff Mon Sep 17 00:00:00 2001
From: Ivan Shapovalov 
Date: Sat, 25 Apr 2015 04:57:59 +0300
Subject: [PATCH] core: coldplug all units which participate in jobs

This is yet another attempt to fix the coldplugging order (more
specifically, the problem which happens when a job is created during
coldplugging and references a not-yet-coldplugged unit).

Now we forcibly coldplug all units which participate in jobs. This
is a superset of previously implemented handling of the UNIT_TRIGGERS
dependencies, so that handling is removed.
---
 src/core/transaction.c | 6 ++++++
 src/core/unit.c        | 8 --------
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/src/core/transaction.c b/src/core/transaction.c
index 5974b1e..a02c02c 100644
--- a/src/core/transaction.c
+++ b/src/core/transaction.c
@@ -848,6 +848,12 @@ int transaction_add_job_and_dependencies(
 assert(type < _JOB_TYPE_MAX_IN_TRANSACTION);
 assert(unit);
 
+/* Before adding jobs for this unit, let's ensure that its state has been loaded.
+ * This matters when jobs are spawned as part of coldplugging itself (see e.g. path_coldplug()).
+ * This way, we "recursively" coldplug units, ensuring that we do not look at the state of
+ * not-yet-coldplugged units. */
+unit_coldplug(unit);
+
+
 /* log_debug("Pulling in %s/%s from %s/%s", */
 /*   unit->id, job_type_to_string(type), */
 /*   by ? by->unit->id : "NA", */
diff --git a/src/core/unit.c b/src/core/unit.c
index 2b356e2..996b648 100644
--- a/src/core/unit.c
+++ b/src/core/unit.c
@@ -2889,14 +2889,6 @@ int unit_coldplug(Unit *u) {
 
 u->coldplugged = true;
 
-/* Make sure everything that we might pull in through
- * triggering is coldplugged before us */
-SET_FOREACH(other, u->dependencies[UNIT_TRIGGERS], i) {
-r = unit_coldplug(other);
-if (r < 0)
-return r;
-}
-
 if (UNIT_VTABLE(u)->coldplug) {
 r = UNIT_VTABLE(u)->coldplug(u);
 if (r < 0)
-- 
2.3.6

-- 
Ivan Shapovalov / intelfx /




Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Ivan Shapovalov
On 2015-04-24 at 16:04 +0200, Lennart Poettering wrote:
> On Fri, 24.04.15 15:52, Lennart Poettering (lenn...@poettering.net) 
> wrote:
> 
> > before we coldplug a unit, we should coldplug all units it might
> > trigger, which are those with a listed UNIT_TRIGGERS dependency, as
> > well as all those that retroactively_start_dependencies() and
> > retroactively_stop_dependencies() operates on. Of course, we should
> > also avoid running in loops here, but that should be easy by 
> > keeping a
> > per-unit coldplug boolean around.
> 
> Actually, it really is about the UNIT_TRIGGERS dependencies only,
> since we don't do the retroactive deps stuff at all when we are
> coldplugging, it's conditionalized in m->n_reloading <= 0.

So, I think I understand the problem. We should do this not only for
UNIT_TRIGGERS, but also for any dependencies which may matter
when activating that unit. That is, anything which is referenced by
transaction_add_job_and_dependencies()... recursively.

To illustrate:

- A.path triggers A.service
- A.service requires basic.target
- we begin coldplugging
- we coldplug A.path
- by your patch, we first coldplug A.service
  -> A.service is now active
- we continue coldplugging A.path
  -> NB: basic.target is not coldplugged yet!
- A.path enters "running" and starts A.service
- transaction_add_job_and_dependencies() adds jobs for
  all dependencies of A.service
- at this point we're fucked up:
  basic.target is not coldplugged, but a job is added for it

-- 
Ivan Shapovalov / intelfx /




Re: [systemd-devel] systemd-nspawn trouble

2015-04-24 Thread Tobias Hunger
By the way: is there a way to get the journal from an --ephemeral container?

I had expected --link-journal=host to work, but --link-journal does not
seem to be allowed at all.

On Sat, Apr 25, 2015 at 12:14 AM, Tobias Hunger  wrote:
> Hello,
>
> sorry (again) for the delay. Unfortunately I cannot look into this
> as often as I would like :-(
>
> Lennart: Thank you for that patch, that does indeed fix my issue with
> read-only machine images.
>
> The networking issue works better when iptables is used. All I
> needed to do was to make sure that packets from the VM are not
> getting dropped in the forwarding chain. Is there a way for me to do
> that automatically as interfaces to containers are created? I do not
> want to just accept every machine talking to everything else.
> Paranoia :-)
>
> What I noticed, though, is that the VM has the Google nameservers set
> up. That came as a bit of a surprise: I had expected either the host
> to be the only DNS server registered (providing a DNS proxy), or at
> least that the nameservers of the host would also be set in the VM.
> Is that a known issue, or are my expectations wrong?
>
> Best Regards,
> Tobias
>
>
> On Wed, Apr 22, 2015 at 5:00 PM, Lennart Poettering
>  wrote:
>> On Wed, 22.04.15 16:31, Tobias Hunger (tobias.hun...@gmail.com) wrote:
>>
>>> On Wed, Apr 22, 2015 at 4:04 PM, Lennart Poettering
>>>  wrote:
>>> > Well, if that's what it says, then yes. We can certainly add support
>>> > for manipulating nft too, but so far the APIs for that appeared much
>>> > less convincing to me, and quite a bit more exotic.
>>>
>>> The user space tools for nft are much nicer than iptables, so I think
>>> they do provide a significant benefit. I would appreciate not having
>>> to go back to iptables:-)
>>>
>>> The exact command line I am running is this (straight out of systemctl
>>> cat systemd-nspawn@vm.service, *THANKS* to whoever implemented that!):
>>>
>>> ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --ephemeral \
>>> --machine=vm \
>>> --network-veth \
>>> --bind=/mnt/raid0/data/ftp:/mnt/ftp
>>>
>>> /var/lib/machines is a normal read-write btrfs snapshot. vm is a
>>> read-only snapshot.
>>>
>>> It starts fine when vm is read-write.
>>
>> OK, I think I fixed this now, please check:
>>
>> http://cgit.freedesktop.org/systemd/systemd/commit/?id=aee327b8169670986f6a48acbd5ffe1355bfcf27
>>
>> Lennart
>>
>> --
>> Lennart Poettering, Red Hat


Re: [systemd-devel] systemd-nspawn trouble

2015-04-24 Thread Tobias Hunger
Hello,

sorry (again) for the delay. Unfortunately I cannot look into this
as often as I would like :-(

Lennart: Thank you for that patch, that does indeed fix my issue with
read-only machine images.

The networking issue works better when iptables is used. All I
needed to do was to make sure that packets from the VM are not
getting dropped in the forwarding chain. Is there a way for me to do
that automatically as interfaces to containers are created? I do not
want to just accept every machine talking to everything else.
Paranoia :-)

What I noticed, though, is that the VM has the Google nameservers set
up. That came as a bit of a surprise: I had expected either the host
to be the only DNS server registered (providing a DNS proxy), or at
least that the nameservers of the host would also be set in the VM.
Is that a known issue, or are my expectations wrong?

Best Regards,
Tobias


On Wed, Apr 22, 2015 at 5:00 PM, Lennart Poettering
 wrote:
> On Wed, 22.04.15 16:31, Tobias Hunger (tobias.hun...@gmail.com) wrote:
>
>> On Wed, Apr 22, 2015 at 4:04 PM, Lennart Poettering
>>  wrote:
>> > Well, if that's what it says, then yes. We can certainly add support
>> > for manipulating nft too, but so far the APIs for that appeared much
>> > less convincing to me, and quite a bit more exotic.
>>
>> The user space tools for nft are much nicer than iptables, so I think
>> they do provide a significant benefit. I would appreciate not having
>> to go back to iptables:-)
>>
>> The exact command line I am running is this (straight out of systemctl
>> cat systemd-nspawn@vm.service, *THANKS* to whoever implemented that!):
>>
>> ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --ephemeral \
>> --machine=vm \
>> --network-veth \
>> --bind=/mnt/raid0/data/ftp:/mnt/ftp
>>
>> /var/lib/machines is a normal read-write btrfs snapshot. vm is a
>> read-only snapshot.
>>
>> It starts fine when vm is read-write.
>
> OK, I think I fixed this now, please check:
>
> http://cgit.freedesktop.org/systemd/systemd/commit/?id=aee327b8169670986f6a48acbd5ffe1355bfcf27
>
> Lennart
>
> --
> Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH] unit: When stopping due to BindsTo=, log which unit caused it

2015-04-24 Thread Alban Crequy
On Fri, Apr 24, 2015 at 5:34 PM, Lennart Poettering
 wrote:
> On Fri, 24.04.15 17:10, Alban Crequy (al...@endocode.com) wrote:
>
>> On Fri, Apr 24, 2015 at 12:45 PM, Lennart Poettering
>>  wrote:
>> > On Wed, 22.04.15 16:55, Alban Crequy (al...@endocode.com) wrote:
>> >
>> >> Thanks for the commits. They don't seem related to containers.
>> >>
>> >> I can reproduce my issue on git-master:
>> >>
>> >> sudo ~/git/systemd/systemd-nspawn --register=false --bind
>> >> $HOME/tmp/vol -D debian-tree -b
>> >>
>> >> Then, in the container, make sure /bin/umount does NOT exist.
>> >> Then halt the container with kill -37 1 (SIGRTMIN+3)
>> >
>> > We require /bin/mount and /bin/umount to exist. We do not support
>> > systems where you remove those. We also don't support systems without
>> > glibc either, ... ;-)
>>
>> Fair enough about the dependency on umount/mount :)
>>
>> I added /bin/mount and /bin/umount in the container for my test and
>> now systemd in the container says:
>>
>> Unit opt-stage2-sha512(...)-rootfs-dir1.mount is bound to inactive
>> unit 
>> dev-disk-by\x2duuid-25ea81c8\x2d20d8\x2d4ab1\x2d862c\x2d882a04478837.device.
>> Stopping, too.
>
> I figure we shouldn't bother with adding bindsto dependencies for
> .device units in containers, given that .device units are not
> supported there anyway.
>
> Fix:
>
> http://cgit.freedesktop.org/systemd/systemd/commit/?id=47bc12e1ba35d38edda737dae232088d6d3ae688
>
> Please verify,

Thanks for the fix, it works for me! I tested the fix both from git
master and cherry-picked on v219.

Cheers,
Alban


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Ivan Shapovalov
On 2015-04-24 at 20:19 +0200, Lennart Poettering wrote:
> On Fri, 24.04.15 20:46, Ivan Shapovalov (intelfx...@gmail.com) wrote:
> 
> > On 2015-04-24 at 19:13 +0200, Lennart Poettering wrote:
> > > On Fri, 24.04.15 20:06, Ivan Shapovalov (intelfx...@gmail.com) 
> > > wrote:
> > > 
> > > > With this patch applied, on `systemctl daemon-reload` I get the
> > > > following:
> > > 
> > > Any chance you can do the same with debugging on? "systemd
> > > -analyze
> > > set-log-level debug" right before the daemon-reload?
> > > 
> > > That should show the transaction being queued in.
> > 
> > Sure, I've run it (log attached), but well... it did not show
> > any new jobs being enqueued. But alsa-restore.service _did_ run and
> > did reset my ALSA volume to the bootup value.
> > 
> > Pretty confused,
> 
> Note that starting services is recursive: if a service is enqueued,
> then we add all its dependencies to the transaction, verify that the
> transaction is without cycles and can be applied, and then actually
> apply it.
> 
> This means that starting a service foo.service, which requires
> bar.target, which in turn requires waldo.service, will mean that
> waldo.service is also started, even if bar.target is already started anyway.

Judging by current master, this is not the case. I've created a pair
of throw-away services and a target in the described configuration,
and dependencies of an already started target are not started again. I
think the status quo is correct because activating an already
activated target is a no-op.

Anyway, this is orthogonal. The issue at hand is that the core looks
at the state of not-yet-coldplugged units...

-- 
Ivan Shapovalov / intelfx /




Re: [systemd-devel] [PATCH] [RFC] umount: reduce verbosity

2015-04-24 Thread Jonathan Boulle
Naive question, perhaps, but why does systemd even need to umount when
being run in a mount namespace? Can't we let the kernel tear them down when
it exits?

> >
> > When rkt is started with --debug, the systemd logs are printed. When rkt
> > is started without --debug, systemd is started with --log-target=null in
> > order to mute the logs.
>
> That generally sounds a bit extreme...

do you have another suggestion? :-)


Re: [systemd-devel] KillUserProcesses timeout

2015-04-24 Thread Mikhail Morfikov
On Fri, 24 Apr 2015 19:04:53 +0200
Lennart Poettering  wrote:

> On Tue, 27.01.15 04:28, Mikhail Morfikov (mmorfi...@gmail.com) wrote:
> 
> Sorry for the really late reply, still trying to work through piles of
> mail.
> > 
> > > Hmm, not sure I follow. 
> > > 
> > 
> > It only happens if I'm logged in as root in tmux. 
> > 
> > > The session is shown as closing, that's good. Can you check what
> > > "systemctl status" reports on the scope unit if this hang happens?
> > > 
> > > Lennart
> > > 
> > 
> > I'm not sure if I did the right thing, but there it is.
> > 
> > After logout:
> > 
> > ● user-1000.slice
> >Loaded: loaded
> >Active: active since Tue 2015-01-27 04:13:31 CET; 8min ago
> >CGroup: /user.slice/user-1000.slice
> >├─session-7.scope
> >│ ├─32562 gpg-agent -s --enable-ssh-support --daemon
> > --write-env-file /home/morfik/.gpg-agent-info │ ├─32692 tmux
> > attach-session -t logi │ ├─32696 bash -c cat /dev/logi | ccze -m
> > ansi -p syslog -C │ ├─32697 -bash
> >│ ├─32698 newsbeuter
> >│ ├─32702 cat /dev/logi
> >│ ├─32703 ccze -m ansi -p syslog -C
> >│ ├─34376 su -
> >│ └─34393 -su
> 
> This here is probably the issue: you opened a su session from your
> session, and that keeps things referenced and open.
> 
> Lennart
> 
Yep, that's the problem, but after 10-20 seconds (I don't remember
exactly) the session does get closed. The question was: is there a way
to make this faster, i.e. without the delay, so that the session is
closed right after the user logs off?




Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Andrei Borzenkov
On Fri, 24 Apr 2015 20:19:33 +0200, Lennart Poettering wrote:

> On Fri, 24.04.15 20:46, Ivan Shapovalov (intelfx...@gmail.com) wrote:
> 
> > On 2015-04-24 at 19:13 +0200, Lennart Poettering wrote:
> > > On Fri, 24.04.15 20:06, Ivan Shapovalov (intelfx...@gmail.com) wrote:
> > > 
> > > > With this patch applied, on `systemctl daemon-reload` I get the
> > > > following:
> > > 
> > > Any chance you can do the same with debugging on? "systemd-analyze
> > > set-log-level debug" right before the daemon-reload?
> > > 
> > > That should show the transaction being queued in.
> > 
> > Sure, I've run it (log attached), but well... it did not show
> > any new jobs being enqueued. But alsa-restore.service _did_ run and
> > did reset my ALSA volume to the bootup value.
> > 
> > Pretty confused,
> 
> Note that starting services is recursive: if a service is enqueued,
> then we add all its dependencies to the transaction, verify that the
> transaction is without cycles and can be applied, and then actually
> apply it.
> 
> This means that starting a service foo.service, which requires
> bar.target, which in turn requires waldo.service, will mean that
> waldo.service is also started, even if bar.target is already started anyway.
> 

I was sure that on reload systemd simply restores the previous state of
services. Why does it attempt to start anything in the first place?

It makes reload potentially dangerous: what if a service was stopped on
purpose and should remain that way?


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 20:46, Ivan Shapovalov (intelfx...@gmail.com) wrote:

> On 2015-04-24 at 19:13 +0200, Lennart Poettering wrote:
> > On Fri, 24.04.15 20:06, Ivan Shapovalov (intelfx...@gmail.com) wrote:
> > 
> > > With this patch applied, on `systemctl daemon-reload` I get the
> > > following:
> > 
> > Any chance you can do the same with debugging on? "systemd-analyze
> > set-log-level debug" right before the daemon-reload?
> > 
> > That should show the transaction being queued in.
> 
> Sure, I've run it (log attached), but well... it did not show
> any new jobs being enqueued. But alsa-restore.service _did_ run and
> did reset my ALSA volume to the bootup value.
> 
> Pretty confused,

Note that starting services is recursive: if a service is enqueued,
then we add all its dependencies to the transaction, verify that the
transaction is without cycles and can be applied, and then actually
apply it.

This means that starting a service foo.service, which requires
bar.target, which in turn requires waldo.service, will mean that
waldo.service is also started, even if bar.target is already started anyway.

Hence: your ALSA service should really use RemainAfterExit=yes, so that
it is not started over and over again.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH] [PATCH v2] PrivateDevices: fix /dev mount when a service is chrooted

2015-04-24 Thread Lennart Poettering
On Fri, 20.02.15 13:59, Alban Crequy (alban.cre...@gmail.com) wrote:

Sorry for the late review, so much is still queued up!

> From: Alban Crequy 
> 
> When a service is chrooted with the option RootDirectory=/opt/..., then
> the option PrivateDevices=true must mount the private /dev in
> $RootDirectory/dev instead of /dev.

We should probably fix this comprehensively, and make everything that
setup_namespace() does aware of the chroot root directory. Moreover,
if we do full namespacing, we should rearrange the whole namespace
towards the new root, and not just rely on chroot() anymore.

Hence, please add a new parameter for the root directory to
setup_namespace(), and then prepend it to every path that we use
there, not just the one for /dev.

Then, in exec_child() please *either* invoke setup_namespace() *or*
chroot(). That syscall should then only be called if we do no
namespacing at all, if you follow what I mean.

With this change RootDirectory= will be a normal chroot() when used
alone, but will gain super namespace powers if it is combined with
PrivateTmp=, PrivateDev= and the others...

>  
>  char *tmp = NULL, *var = NULL;
> +char *private_dev_dir = NULL;
>  
>  /* The runtime struct only contains the parent
>   * of the private /tmp, which is
> @@ -1585,6 +1586,12 @@ static int exec_child(
>  var = strjoina(runtime->var_tmp_dir, "/tmp");
>  }
>  
> +if (params->apply_chroot && context->root_directory) {
> +size_t sz = strlen("/dev") + strlen(context->root_directory) + 1;
> +private_dev_dir = alloca0(sz);
> +snprintf(private_dev_dir, sz, "%s/dev", context->root_directory);

Concatenating strings like this is best done with strjoina()...

Hope this makes sense,

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 20:06, Ivan Shapovalov (intelfx...@gmail.com) wrote:

> With this patch applied, on `systemctl daemon-reload` I get the
> following:

Any chance you can do the same with debugging on? "systemd-analyze
set-log-level debug" right before the daemon-reload?

That should show the transaction being queued in.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Ivan Shapovalov
On 2015-04-24 at 16:20 +0200, Lennart Poettering wrote:

> On Fri, 24.04.15 16:04, Lennart Poettering (lenn...@poettering.net)
> wrote:
> 

> > On Fri, 24.04.15 15:52, Lennart Poettering (lenn...@poettering.net
> > ) wrote:
> > 

> > > before we coldplug a unit, we should coldplug all units it might
> > > trigger, which are those with a listed UNIT_TRIGGERS dependency,
> > > as
> > > well as all those that retroactively_start_dependencies() and
> > > retroactively_stop_dependencies() operates on. Of course, we
> > > should
> > > also avoid running in loops here, but that should be easy by
> > > keeping a
> > > per-unit coldplug boolean around.
> > 
> > Actually, it really is about the UNIT_TRIGGERS dependencies only,
> > since we don't do the retroactive deps stuff at all when we are
> > coldplugging, it's conditionalized in m->n_reloading <= 0.
> 
> I have implemented this now in git:
> 
> http://cgit.freedesktop.org/systemd/systemd/commit/?id=f78f265f405a61387c6c12a879ac0d6b6dc958db
> 
> Ivan, any chance you can check if this fixes your issue? (Not sure it
> does, because I must admit I am not entirely sure I really understood
> it fully...)

Seems like it didn't help.
I use the following patch to alter the coldplugging order slightly
(units are stored in a hashmap, so the order is arbitrary anyway, and
this alteration is therefore valid):

 cut patch here 
diff --git a/src/core/manager.c b/src/core/manager.c
index f13dad5..542dd4f 100644
--- a/src/core/manager.c
+++ b/src/core/manager.c
@@ -975,6 +975,10 @@ int manager_enumerate(Manager *m) {
 return r;
 }
 
+static bool coldplug_first(Unit *u) {
+return !(endswith(u->id, ".service") || endswith(u->id, ".target"));
+}
+
 static void manager_coldplug(Manager *m) {
 Iterator i;
 Unit *u;
@@ -990,6 +994,26 @@ static void manager_coldplug(Manager *m) {
 if (u->id != k)
 continue;
 
+/* we need a reproducer */
+if (!coldplug_first(u))
+continue;
+
+r = unit_coldplug(u);
+if (r < 0)
log_warning_errno(r, "We couldn't coldplug %s, proceeding anyway: %m", u->id);
+}
+
+/* Process remaining units. */
+HASHMAP_FOREACH_KEY(u, k, m->units, i) {
+
+/* ignore aliases */
+if (u->id != k)
+continue;
+
+/* skip already processed units */
+if (coldplug_first(u))
+continue;
+
 r = unit_coldplug(u);
 if (r < 0)
log_warning_errno(r, "We couldn't coldplug %s, proceeding anyway: %m", u->id);
 cut patch here 

With this patch applied, on `systemctl daemon-reload` I get the following:

 cut log here 
2015-04-24T19:42:05+0300 intelfx-laptop sudo[15870]: intelfx : TTY=pts/3 ; PWD=/home/intelfx/tmp/build/systemd ; USER=root ; COMMAND=/usr/bin/systemctl daemon-reload
2015-04-24T19:42:05+0300 intelfx-laptop sudo[15870]: pam_unix(sudo:session): session opened for user root by intelfx(uid=0)
2015-04-24T19:42:05+0300 intelfx-laptop polkitd[8629]: Registered Authentication Agent for unix-process:15871:1490725 (system bus name :1.239 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale ru_RU.utf8)
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Reloading.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Listening on /dev/initctl Compatibility Named Pipe.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Found device LITEONIT_LSS-16L6G EFI.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Listening on fsck to fsckd communication Socket.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Set up automount var-lib-pacman-sync.automount.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Started Daily Cleanup of Temporary Directories.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Listening on RPCbind Server Activation Socket.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Found device WDC_WD10JPVX-08JC3T5 datastore0.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Found device WDC_WD10JPVX-08JC3T5 linux-build.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Found device WDC_WD10JPVX-08JC3T5 datastore0.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Found device LITEONIT_LSS-16L6G EFI.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Activated swap Swap Partition.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Found device WDC_WD10JPVX-08JC3T5 datastore0.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Found device WDC_WD10JPVX-08JC3T5 linux-build.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Mounted POSIX Message Queue File System.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Created slice System Slice.
2015-04-24T19:42:05+0300 intelfx-laptop systemd[1]: Mounted /home.
2015-04-24T1

Re: [systemd-devel] KillUserProcesses timeout

2015-04-24 Thread Lennart Poettering
On Tue, 27.01.15 04:28, Mikhail Morfikov (mmorfi...@gmail.com) wrote:

Sorry for the really late reply, still trying to work through piles of
mail.
> 
> > Hmm, not sure I follow. 
> > 
> 
> It only happens if I'm logged in as root in tmux. 
> 
> > The session is shown as closing, that's good. Can you check what
> > "systemctl status" reports on the scope unit if this hang happens?
> > 
> > Lennart
> > 
> 
> I'm not sure if I did the right thing, but there it is.
> 
> After logout:
> 
> ● user-1000.slice
>Loaded: loaded
>Active: active since Tue 2015-01-27 04:13:31 CET; 8min ago
>CGroup: /user.slice/user-1000.slice
>├─session-7.scope
>│ ├─32562 gpg-agent -s --enable-ssh-support --daemon --write-env-file /home/morfik/.gpg-agent-info
>│ ├─32692 tmux attach-session -t logi
>│ ├─32696 bash -c cat /dev/logi | ccze -m ansi -p syslog -C
>│ ├─32697 -bash
>│ ├─32698 newsbeuter
>│ ├─32702 cat /dev/logi
>│ ├─32703 ccze -m ansi -p syslog -C
>│ ├─34376 su -
>│ └─34393 -su

This here is probably the issue: you opened a su session from your
session, and that keeps things referenced and open.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] machinectl: Support for cache cleaning

2015-04-24 Thread Lennart Poettering
On Mon, 23.02.15 10:54, Peter Paule (systemd-de...@fedux.org) wrote:

> Hi Lennart,
> 
> I asked myself how I can get rid of those broken "temporary" subvolumes, to
> re-pull the image:
> 
>   drwxr-xr-x  1 root root  158 Feb 20 18:46
> .dkr-00b2b6c6a2f93b2dde1d46b06cff32de82dabfd3b5ac6a8f27c5064f429e3e7a
>   drwxr-xr-x  1 root root  158 Feb 20 18:46
> .dkr-052665c23d7f38d475095f383196c5bf0b13dafe8b7fd02e3a4926767f839e95
>   drwxr-xr-x  1 root root  158 Feb 20 18:46
> .dkr-0a6d917a8308476a069be3411d5aefddd34a9d4b3342e5deee5922b9a3abfa14
>   drwxr-xr-x  1 root root  158 Feb 20 18:46
> .dkr-0a9465a17f988e749d3c217ecfd935a093789e7489a3516a7eedd17492b556d9
> 
> Do you plan to add a "cache" cleansing command to machinectl? I think now
> only "btrfs
> subvolume delete" will do the trick, correct?

Well, you can also use "machinectl remove", which does the same thing.

> 
> Can I delete those subvolumes safely without losing any data? 

Yes.

> Now I've got only one container running, so I can be pretty
> sure to delete the correct volumes, but the situation will get a
> little more complex when I add more and more images/containers. And
> I'm a little bit concerned about running out of storage, when not
> cleaning up the "temporary" subvolumes regularly.

Yeah, I think it makes sense to add a command to remove all hidden
containers (i.e. those whose names begin with a dot) that are read-only
and are no longer ancestors of any "live" container. Added to TODO list.

Note that as long as you have a "live" container running, removing the
subvolumes of its ancestors will not free up much space, since the
"live" container keeps most of their data referenced.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH] units: add SecureBits

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 16:42, Topi Miettinen (toiwo...@gmail.com) wrote:

> > I think all long-running ones that reasonably can already do. I mean,
> > things like logind simple need too many caps, it's really not worth
> > trying to make them run under a different uid, because they have so
> > much privs otherwise...
> >
> > Which daemons do you precisely have in mind?
> 
> Nothing in particular. Privilege separation could help even in cases
> where some caps need to be retained.

Sure! Note that networkd and timesyncd both setuid() to an
unprivileged user, but do keep CAP_NET_ADMIN/CAP_SYS_TIME. In those
cases that's relatively easy to do, because they only require those two
caps and nothing else. But for stuff like logind it's quite different;
it needs a lot of caps...

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Removing image from /var/lib/machines

2015-04-24 Thread Lennart Poettering
On Sun, 22.02.15 09:09, Peter Paule (systemd-de...@fedux.org) wrote:

> Does it make sense to avoid copying /etc/resolv.conf to a container if
> the filesystem is read-only?
> 
>   sudo /usr/bin/systemd-nspawn --read-only -M docker-centos-nginx
>   --read-only /usr/sbin/nginx
> 
> Failed to copy /etc/resolv.conf to
> /var/lib/machines/docker-centos-nginx/etc/resolv.conf: Read-only
> file system

Makes sense, also added to the TODO list.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Removing image from /var/lib/machines

2015-04-24 Thread Lennart Poettering
On Sun, 22.02.15 07:54, Peter Paule (systemd-de...@fedux.org) wrote:

> Excerpts from Peter Paule's message of 2015-02-21 19:42:49 +0100:
> > I tried 219 on a different machine as well. I got some "Permission
> > denied errors" for importd as well. I "fixed" them by running importd
> > from console as root. The errors occured when I tried to download a
> > docker image from index.docker.io.
> 
> If I try to bind-mount a non-existing directory (on the host) to a
> container I will get the following error message:
> 
>   Container nginx2 failed with error code 1.
> 
> Is it possible / Does it make sense to output a more meaningful error
> message?

Makes sense. Added to the TODO list.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Removing image from /var/lib/machines

2015-04-24 Thread Lennart Poettering
On Fri, 20.02.15 14:23, Peter Paule (systemd-de...@fedux.org) wrote:

> 
> Zitat von Lennart Poettering :
> 
> >On Thu, 19.02.15 19:46, Peter Paule (systemd-de...@fedux.org) wrote:
> >
> >>BTW:
> >>
> >>--dkr-index-url cannot handle http redirects
> >>
> >>By accident I tried "http://index.docker.io"; and this will be redirected to
> >>"https://index.docker.io"; but importd cannot handle this.
> >
> >I think this is actually a problem on the setup of the servers, not
> >the client side.
> 
> Look at this:
> 
>   % curl http://index.docker.io -I
>   HTTP/1.1 302 Found
>   Server: nginx/1.6.2
>   Date: Fri, 20 Feb 2015 13:12:50 GMT
>   Content-Type: text/html
>   Content-Length: 160
>   Location: https://index.docker.io/
>   Connection: keep-alive
> 
> If you try to get things from http://index.docker.io it will tell you, that
> you
> need to use https://index.docker.io/ instead. It might be questionable if 302
> really the best status code for this - maybe they should better use 301 for
> this. So, yes looking at 301 and 302 it is a server problem somehow, but not
> following 302 is kind of a client problem as well I think.

Hmm, I looked into this today, but this just worked for me now with
git. Is this still broken for you?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Updating existing docker image via machinectl

2015-04-24 Thread Lennart Poettering
On Fri, 20.02.15 14:11, Peter Paule (systemd-de...@fedux.org) wrote:

heya!

> Here's a small patch for changing the documentation.

Sorry for the late review!

I think this patch is a bit misleading, since "--force" actually drops
the old instance, and that's hardly "updating", that's "replacing with
something newer". We need to clarify that this really drops the old
stuff which is then lost afterwards!

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH] units: add SecureBits

2015-04-24 Thread Topi Miettinen
On 04/24/15 14:52, Lennart Poettering wrote:
> On Sat, 14.02.15 12:32, Topi Miettinen (toiwo...@gmail.com) wrote:
> 
> Sorry for the late response, still going through piles of mail.
> 
>> No setuid programs are expected to be executed, so add
>> SecureBits=no-setuid-fixup no-setuid-fixup-locked
>> to unit files.
>
> So, hmm, after reading the man page again: what's the rationale for
> precisely these bits?
>
> I mean no-setuid-fixup seems to be something that applies to setuid(),
> setresuid() calls and suchlike, which seems pretty uninteresting. Much
> more interesting is SECBIT_NOROOT, which disables suid binary
> handling...

 Yes, noroot noroot-locked was actually my intention, sorry. I'll update
 the patch.

 Maybe all of "noroot noroot-locked no-setuid-fixup
 no-setuid-fixup-locked" would be OK, but that probably needs another
 look at the programs if they switch UIDs.
>>>
>>> I'd be careful with more than noroot, since the other flags alter
>>> bbehaviour across setuid() and similar calls, and much of our code
>>> makes assumptions that will likely not hold if you set those bits...
>>
>> Going back to no-setuid-fixup no-setuid-fixup-locked: First, I looked at
>> the kernel code if it matches the description on the manual page
>> capabilities(7) to prevent more embarrassment. In this case it does,
>> NO_SETUID_FIXUP prevents capability changes when the task calls
>> set[res]*uid().
> 
> Any kind of changes? Both dropping and acquiring them? I mean, I think
> we should actually allow dropping them unless we explicitly say no
> during one transition.

Both ways IIRC.

> I have the suspicion that the SECBIT_NOROOT thing is the only
> really interesting one...

I think they are all pretty much useless. SECURE_NOROOT could be
improved by splitting it into two bits, one controlling setuid execution
and the other controlling whether capabilities are dropped at exec(). At
least the manual page should be in sync with reality.

Perhaps MAC systems would be better suited to enforce capability limits
throughout the whole system. Unfortunately my favourite, TOMOYO, does
not manage capabilities.

>> There's of course the question whether no-setuid-fixup
>> no-setuid-fixup-locked is useful. For daemons running as root, it would
>> not help anything (or could even make things worse e.g. in the library
>> case). But when the daemon runs as a different user, the flags could
>> make the life of attacker a tiny bit more difficult. This leaves only:
>> systemd-journal-gatewayd.service
>> systemd-journal-remote.service
>> systemd-journal-upload.service
>>
>> I can make a patch for those if you agree, or the original patch can be
>> applied selectively.
>>
>> Maybe more daemons should run as unprivileged user.
> 
> I think all long-running ones that reasonably can, already do. I mean,
> things like logind simply need too many caps, it's really not worth
> trying to make them run under a different uid, because they have so
> much privs otherwise...
>
> Which daemons do you precisely have in mind?

Nothing in particular. Privilege separation could help even in cases
where some caps need to be retained.

-Topi

> 
> Lennart
> 



Re: [systemd-devel] Cgroup limits for user processes

2015-04-24 Thread Lennart Poettering
On Wed, 18.02.15 12:48, Mikhail Morfikov (mmorfi...@gmail.com) wrote:

Sorry for the late reply, still working on keeping up with the piles
of mail that queued up.

> What is the best way to set cgroup limits for user processes? I mean the
> individual processes. I know that you can set limits for user.slice, but
> how to set limits for, let's say, firefox?

We simply do not support this right now. Unprivileged users do not get
access to the cgroup properties of the various controllers right
now, simply because this is unsafe. 

We can open this up one day, bit by bit, but this requires some kernel
work, and an OK from Tejun that this is safe.

> BTW, one more thing. Is there a way to set a mark for network packets
> using unit services? I really need this feature, but I couldn't find
> any useful information on this subject.

Daniel is working on adding native support for this via the net_cls
cgroup controller, but in the process he noticed that the kernel
support for this is actually quite broken, and there's work now going
on to fix the kernel first.

Lennart

-- 
Lennart Poettering, Red Hat


[systemd-devel] [HEADSUP] Removal of shutdownd

2015-04-24 Thread Daniel Mack
The only purpose of the little helper daemon called shutdownd is to keep
track of and execute a scheduled shutdown. It prints utmp wall
notifications to TTYs in periodic intervals, makes sure to create the
/run/nologin and /run/systemd/shutdown/scheduled files, and eventually,
once the timeout is reached, it executes systemctl again to really shut
down / reboot / halt the machine. It is socket activated and
communicates with systemctl via a proprietary protocol.

The current procedure for a scheduled shutdown looks like this:

 * shutdown (aka systemctl) is executed with a time argument
   (for instance, 'shutdown -h 18:00').

 * As an execution time for the command was given, systemctl
   communicates with shutdownd via its private socket,
   possibly activating it that way.

 * shutdownd waits for the timer to elapse

 * shutdownd executes 'shutdown' (aka systemctl)

 * This time, the action is immediate, hence systemctl
   directly communicates with PID1 in order to start the
   appropriate shutdown unit.


I have now reworked all this and moved the code from shutdownd into
logind, which already has the logic for inhibitors and other timers.

For this, logind learned two new methods on its DBus-interface:

  .ScheduleShutdown()
  .CancelScheduledShutdown()

and three more properties:

  .ScheduledShutdown()  [r]
  .EnableWallMessages() [rw]
  .WallMessage  [rw]


systemctl now talks to logind instead of shutdownd, and the procedure
looks like this:

 * shutdown (aka systemctl) is executed with a time argument
   (for instance, 'shutdown -h 18:00').

 * As an execution time for the command was given, systemctl
   communicates with logind via DBus in order to schedule the
   shutdown.

 * logind waits for the timer to elapse and, given that there
   are no inhibitors preventing it, directly communicates with
   PID1 in order to start the appropriate shutdown unit.

shutdownd was killed entirely. As a result, we have one less daemon
lurking around, a nice DBus-API for something that used to be
proprietary, and even less code:

 20 files changed, 727 insertions(+), 905 deletions(-)

However, this means that direct users of the shutdownd socket have to
migrate to the DBus interface. We are only aware of one such user, which
is Cockpit, and Stef Walter (Cc) already signaled his agreement on this
change.

The patches are now pushed. They have been reviewed by Lennart before
and I tested it for a while, but as always, we appreciate more testers
for such a rework :)


Thanks,
Daniel


Re: [systemd-devel] [PATCH] unit: When stopping due to BindsTo=, log which unit caused it

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 17:10, Alban Crequy (al...@endocode.com) wrote:

> On Fri, Apr 24, 2015 at 12:45 PM, Lennart Poettering
>  wrote:
> > On Wed, 22.04.15 16:55, Alban Crequy (al...@endocode.com) wrote:
> >
> >> Thanks for the commits. They don't seem related to containers.
> >>
> >> I can reproduce my issue on git-master:
> >>
> >> sudo ~/git/systemd/systemd-nspawn --register=false --bind
> >> $HOME/tmp/vol -D debian-tree -b
> >>
> >> Then, in the container, make sure /bin/umount does NOT exist.
> >> Then halt the container with kill -37 1 (SIGRTMIN+3)
> >
> > We require /bin/mount and /bin/umount to exist. We do not support
> > systems where you remove those. We also don't support systems without
> > glibc either, ... ;-)
> 
> Fair enough about the dependency on umount/mount :)
> 
> I added /bin/mount and /bin/umount in the container for my test and
> now systemd in the container says:
> 
> Unit opt-stage2-sha512(...)-rootfs-dir1.mount is bound to inactive
> unit dev-disk-by\x2duuid-25ea81c8\x2d20d8\x2d4ab1\x2d862c\x2d882a04478837.device.
> Stopping, too.

I figure we shouldn't bother with adding BindsTo= dependencies for
.device units in containers, given that .device units are not
supported there anyway.

Fix:

http://cgit.freedesktop.org/systemd/systemd/commit/?id=47bc12e1ba35d38edda737dae232088d6d3ae688

Please verify,

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Fail to reset-failed as user

2015-04-24 Thread Lennart Poettering
On Sat, 14.02.15 19:37, Olivier Brunel (j...@jjacky.com) wrote:

Heya!

Sorry for responding so late again, but I think we can close this now:

> #0  bus_message_enter_struct (m=0x7f5fb0cb88b0, c=0x7f5fb0cb8250,
> contents=0x7f5faef0d152 "bba{ss}", item_size=0x7fffcebd48e8,
> offsets=0x7fffcebd48d8,
> n_offsets=0x7fffcebd48e0) at src/libsystemd/sd-bus/bus-message.c:3865
> #1  0x7f5faee80136 in sd_bus_message_enter_container
> (m=0x7f5fb0cb88b0, type=114 'r',
> contents=0x7f5faef0d152 "bba{ss}") at
> src/libsystemd/sd-bus/bus-message.c:4012
> #2  0x7f5faee8e00d in bus_verify_polkit_async (call=0x7f5fb0ca59a0,
> capability=21,
> action=0x7f5faeef05f8 "org.freedesktop.systemd1.manage-units",
> interactive=false,
> registry=0x7f5fb0c0a890, error=0x7fffcebd4ad0) at
> src/libsystemd/sd-bus/bus-util.c:374
> #3  0x7f5faee0aa00 in bus_verify_manage_unit_async
> (m=0x7f5fb0c0a460, call=0x7f5fb0ca59a0,
> error=0x7fffcebd4ad0) at src/core/dbus.c:1196
> #4  0x7f5faee12feb in bus_unit_method_reset_failed (bus=0x7f5fb0ca32f0,
> message=0x7f5fb0ca59a0, userdata=0x7f5fb0cc7ff0, error=0x7fffcebd4ad0)
> at src/core/dbus-unit.c:496
> #5  0x7f5faee0c8aa in method_reset_failed_unit (bus=0x7f5fb0ca32f0,
> message=0x7f5fb0ca59a0,
> userdata=0x7f5fb0c0a460, error=0x7fffcebd4ad0) at
> src/core/dbus-manager.c:588
> 
> (gdb) p *c
> $40 = {enclosing = 114 'r', need_offsets = true, index = 2, saved_index
> = 2,
>   signature = 0x7f5fb0ca3ec0 "bba{ss}", before = 0, begin = 0, end =
> 133, array_size = 0x0,
>   offsets = 0x0, n_offsets = 0, offsets_allocated = 8391685410159683651,
> offset_index = 0,
>   item_size = 0, peeked_signature = 0x0}
> (gdb) p contents
> $41 = 0x7f5faef0d152 "bba{ss}"
> 
> And this will fail on:
> if (c->signature[c->index] != SD_BUS_TYPE_STRUCT_BEGIN ||
> and return -ENXIO.
> 
> 
> Hope this can be helpful,

Yes it was!

I am pretty sure this was fixed with
1d22e9068c52c1cf935bcdff70b9b9654e3c939e. Can you check if this fixes
the issue for you?

(This was simply that we checked the PK auth twice, unnecessarily. And
the second time the read ptr into the PK message was already at the
end of the message which meant parsing it failed. But with the change
pointed out above this is fixed, we should authenticate only once
now.)

Thanks for gdb'ing this!

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH] unit: When stopping due to BindsTo=, log which unit caused it

2015-04-24 Thread Alban Crequy
On Fri, Apr 24, 2015 at 12:45 PM, Lennart Poettering
 wrote:
> On Wed, 22.04.15 16:55, Alban Crequy (al...@endocode.com) wrote:
>
>> Thanks for the commits. They don't seem related to containers.
>>
>> I can reproduce my issue on git-master:
>>
>> sudo ~/git/systemd/systemd-nspawn --register=false --bind
>> $HOME/tmp/vol -D debian-tree -b
>>
>> Then, in the container, make sure /bin/umount does NOT exist.
>> Then halt the container with kill -37 1 (SIGRTMIN+3)
>
> We require /bin/mount and /bin/umount to exist. We do not support
> systems where you remove those. We also don't support systems without
> glibc either, ... ;-)

Fair enough about the dependency on umount/mount :)

I added /bin/mount and /bin/umount in the container for my test and
now systemd in the container says:

Unit opt-stage2-sha512(...)-rootfs-dir1.mount is bound to inactive
unit dev-disk-by\x2duuid-25ea81c8\x2d20d8\x2d4ab1\x2d862c\x2d882a04478837.device.
Stopping, too.

The directory /opt/stage2/sha512xxx/rootfs/dir1 is the bind mount
specified on the "systemd-nspawn --bind" command line. How can I tell
systemd in the nspawn container *not* to umount the volumes prepared
by nspawn?

Note that systemd is also trying to umount other bind-mounted
directories but it fails because the processes in the container are
using it:
umount: /opt/stage2/sha512-ba93cedc478ed21c03d690b5f026205f/rootfs:
target is busy

And systemd keeps trying to umount them in a busy loop.

How does systemd detect that dev-disk-by...device is "inactive"?

Cheers,
Alban


Re: [systemd-devel] [PATCH] units: add SecureBits

2015-04-24 Thread Lennart Poettering
On Sat, 14.02.15 12:32, Topi Miettinen (toiwo...@gmail.com) wrote:

Sorry for the late response, still going through piles of mail.

>  No setuid programs are expected to be executed, so add
>  SecureBits=no-setuid-fixup no-setuid-fixup-locked
>  to unit files.
> >>>
> >>> So, hmm, after reading the man page again: what's the rationale for
> >>> precisely these bits?
> >>>
> >>> I mean no-setuid-fixup seems to be something that applies to setuid(),
> >>> setresuid() calls and suchlike, which seems pretty uninteresting. Much
> >>> more interesting is SECBIT_NOROOT, which disables suid binary
> >>> handling...
> >>
> >> Yes, noroot noroot-locked was actually my intention, sorry. I'll update
> >> the patch.
> >>
> >> Maybe all of "noroot noroot-locked no-setuid-fixup
> >> no-setuid-fixup-locked" would be OK, but that probably needs another
> >> look at the programs if they switch UIDs.
> > 
> > I'd be careful with more than noroot, since the other flags alter
> > bbehaviour across setuid() and similar calls, and much of our code
> > makes assumptions that will likely not hold if you set those bits...
> 
> Going back to no-setuid-fixup no-setuid-fixup-locked: First, I looked at
> the kernel code if it matches the description on the manual page
> capabilities(7) to prevent more embarrassment. In this case it does,
> NO_SETUID_FIXUP prevents capability changes when the task calls
> set[res]*uid().

Any kind of changes? Both dropping and acquiring them? I mean, I think
we should actually allow dropping them unless we explicitly say no
during one transition.

I have the suspicion that the SECBIT_NOROOT thing is the only
really interesting one...

> There's of course the question whether no-setuid-fixup
> no-setuid-fixup-locked is useful. For daemons running as root, it would
> not help anything (or could even make things worse e.g. in the library
> case). But when the daemon runs as a different user, the flags could
> make the life of attacker a tiny bit more difficult. This leaves only:
> systemd-journal-gatewayd.service
> systemd-journal-remote.service
> systemd-journal-upload.service
> 
> I can make a patch for those if you agree, or the original patch can be
> applied selectively.
> 
> Maybe more daemons should run as unprivileged user.

I think all long-running ones that reasonably can, already do. I mean,
things like logind simply need too many caps, it's really not worth
trying to make them run under a different uid, because they have so
much privs otherwise...

Which daemons do you precisely have in mind?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 17:23, Ivan Shapovalov (intelfx...@gmail.com) wrote:

> I think I agree with this idea. I just didn't know how to handle
> potentially unbounded recursion. Maybe we can do something along these
> lines (pseudocode):
> 
> while (any units left to coldplug)
> for (unit in hashmap)
> if (not yet marked)
> if (all deps of unit are coldplugged)
> coldplug_and_mark(unit);
> 
> That is, we will repeatedly loop over hashmap, coldplugging only those
> units whose UNIT_TRIGGERS are already coldplugged, and leaving other
> units for next outer iteration.

Well, I just made the recursion a real recursion:

in unit_coldplug(u):
    if (u.coldplugged)
        return;
    u.coldplugged = true;
    foreach (x in u.triggers):
        unit_coldplug(x);

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 17:33, Mantas Mikulėnas (graw...@gmail.com) wrote:

> >> > Any idea what the precise syscall is that triggers that? i.e. what
> >> > strace says?
> >>
> >> It kind of makes sense when stdout is a socket, since
> >> open(/dev/stdout) or open(/proc/self/fd/*) doesn't just dup that fd,
> >> it tries to open the file anew (including permission checks and
> >> everything). A bit annoying.
> >
> > Well, but it's not a socket here, is it? Peter?
> 
> Hmm, I'm pretty sure the default StandardOutput=journal means stdout
> will be a socket connection to journald, doesn't it?

Ah, true!

> (And since it's a process-specific thing, "echo "asdf" > /dev/stdout"
> from an interactive shell will merely test the shell's stdout (which
> is a tty), not nginx's stdout...)

Indeed.

I figure /dev/stderr is simply not compatible with sockets, regardless
of whether nspawn is in the mix or not... Which actually came up before,
and I think is something to accept...

People should really use the shell construct "2>" instead of ">
/dev/stderr" if they want the redirect to work always.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Possible bug when a dummy service declares After= and/or Conflicts= a .mount unit?

2015-04-24 Thread Lennart Poettering
On Fri, 06.03.15 16:17, Michael Biebl (mbi...@gmail.com) wrote:

> 2015-03-06 11:20 GMT+01:00 Didier Roche :
> > It seems like tmp.mount unit was skipped as nothing declared any explicit
> > dependency against it. What seems to confirm this is that if I add any
> > enabled foo.service which declares After=tmp.mount, or if I add the After=
> > statement to systemd-timesync.service, then I get tmp.mount reliably to
> > start (and it was installed as the journal shows up). Does it make sense?
> 
> I do have several units which have PrivateTmp=true (cups.service,
> timesyncd) which *are* started during boot, yet tmp.mount is not being
> activated. Inspecting the units via systemctl shows e.g.
> 
> $ systemctl show cups.service -p After -p Requires
> Requires=basic.target cups.socket -.mount tmp.mount
> After=cups.socket -.mount system.slice tmp.mount basic.target
> cups.path systemd-journald.socket
> 
> Why is tmp.mount then not reliably activated during boot here?

To track this down it would be good seeing a debug boot log for this
case. Also, it would be good to know what "systemctl status" shows for
tmp.mount right after boot.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-24 Thread Mantas Mikulėnas
On Fri, Apr 24, 2015 at 4:53 PM, Lennart Poettering
 wrote:
> On Fri, 24.04.15 16:51, Mantas Mikulėnas (graw...@gmail.com) wrote:
>
>> On Fri, Apr 24, 2015 at 4:24 PM, Lennart Poettering
>>  wrote:
>> > On Fri, 24.04.15 12:06, Peter Paule (systemd-de...@fedux.org) wrote:
>> >
>> >> Hi,
>> >>
>> >> I run nginx in a CentOS 7.0 container via systemd-nspawn. nginx logs to
>> >> stderr/stdout via configuration to capture logs via journald.
>> >>
>> >> nginx.conf
>> >>
>> >>   error_log  /dev/stderr warn;
>> >>
>> >>
>> >> If I use systemd 219-1 (-1 is the package number of Arch Linux) which seems
>> >> to be a non-patched systemd 219, everything is fine. If I upgrade to systemd
>> >> 219-6, nginx cannot be started via systemd-nspawn. systemd 219-6 includes
>> >> this patch 
>> >> "https://projects.archlinux.org/svntogit/packages.git/tree/repos/core-x86_64/0001-nspawn-when-connected-to-pipes-for-stdin-stdout-pass.patch?h=packages/systemd";.
>> >> BTW: I see the same error if I use systemd-git-HEAD.
>> >>
>> >> I see the following errors in journal - I tried both "stderr" and "stdout".
>> >>
>> >>   Apr 24 04:48:12 server systemd-nspawn[421]: nginx: [emerg] open()
>> >> "/dev/stdout" failed (6: No such device or address)
>> >>   Apr 24 04:48:45 server systemd-nspawn[496]: nginx: [emerg] open()
>> >> "/dev/stderr" failed (6: No such device or address)
>> >
>> > Any idea what the precise syscall is that triggers that? i.e. what
>> > strace says?
>>
>> It kind of makes sense when stdout is a socket, since
>> open(/dev/stdout) or open(/proc/self/fd/*) doesn't just dup that fd,
>> it tries to open the file anew (including permission checks and
>> everything). A bit annoying.
>
> Well, but it's not a socket here, is it? Peter?

Hmm, I'm pretty sure the default StandardOutput=journal means stdout
will be a socket connection to journald, doesn't it?

(And since it's a process-specific thing, "echo "asdf" > /dev/stdout"
from an interactive shell will merely test the shell's stdout (which
is a tty), not nginx's stdout...)

-- 
Mantas Mikulėnas 


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Ivan Shapovalov
On 2015-04-24 at 15:52 +0200, Lennart Poettering wrote:
> On Wed, 25.02.15 21:40, Ivan Shapovalov (intelfx...@gmail.com) wrote:
> 
> Ivan,
> 
> > Because the order of coldplugging is not defined, we can reference a
> > not-yet-coldplugged unit and read its state while it has not yet been
> > set to a meaningful value.
> > 
> > This way, already active units may get started again.
> 
> > We fix this by deferring such actions until all units have been at least
> > somehow coldplugged.
> > 
> > Fixes https://bugs.freedesktop.org/show_bug.cgi?id=88401
> 
> Hmm, so firstly, in this case, do those two alsa services
> have RemainAfterExit=yes set? I mean, if they have not, they really
> should. If they have, then queuing jobs for them a second time is not
> really an issue, because the services are already running and they will
> be eaten up eventually.

They do not, but this is actually irrelevant to the root issue.
Setting RemainAfterExit=yes will simply hide it. Actually, in this bug
the basic.target is started a second time.

IOW, the point is: no matter what is the configuration of units, none
of them should be re-run on reload (given no configuration changes).

> 
> But regarding the patch:
> 
> I am sorry, but we really should find a different way to fix this, if
> there really is an issue to fix...
> 
> I really don't like the hashmap that maps units to functions. I mean,
> the whole concept of unit vtables exists to encapsulate the
> unit-type-specific functions, and we should not add different place
> for configuring those.

I agree, this is not the cleanest solution. But at least it gets the
semantics right, and I've waited for comments/suggestions for ~1month
before actually sending this patch... Revert as you see fit, let's
just make sure we eventually come up with a solution.

> 
> Also, and more importantly, the whole coldplug function exists only to
> separate the unit loading and initial state changing into two separate
> steps, so that we know that all units are fully loaded before we start
> making state changes.
> 
> Now, I see that this falls short of the issue at hand here,

Yes, exactly. The problem is that during coldplug we may accidentally
look at the state of not-yet-coldplugged units.

> but I
> think the right fix is really to alter the order in which we
> coldplug. More specifically, I think we need to make the coldplugging
> order dependency aware:
> 
> before we coldplug a unit, we should coldplug all units it might
> trigger, which are those with a listed UNIT_TRIGGERS dependency, as
> well as all those that retroactively_start_dependencies() and
> retroactively_stop_dependencies() operates on. Of course, we should
> also avoid running in loops here, but that should be easy by keeping a
> per-unit coldplug boolean around.
> 
> Anyway, I reverted the patch for now, this really needs more thinking.

I think I agree with this idea. I just didn't know how to handle
potentially unbounded recursion. Maybe we can do something along these
lines (pseudocode):

while (any units left to coldplug)
for (unit in hashmap)
if (not yet marked)
if (all deps of unit are coldplugged)
coldplug_and_mark(unit);

That is, we will repeatedly loop over hashmap, coldplugging only those
units whose UNIT_TRIGGERS are already coldplugged, and leaving other
units for next outer iteration.

Makes sense?

-- 
Ivan Shapovalov / intelfx /




Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 16:04, Lennart Poettering (lenn...@poettering.net) wrote:

> On Fri, 24.04.15 15:52, Lennart Poettering (lenn...@poettering.net) wrote:
> 
> > before we coldplug a unit, we should coldplug all units it might
> > trigger, which are those with a listed UNIT_TRIGGERS dependency, as
> > well as all those that retroactively_start_dependencies() and
> > retroactively_stop_dependencies() operates on. Of course, we should
> > also avoid running in loops here, but that should be easy by keeping a
> > per-unit coldplug boolean around.
> 
> Actually, it really is about the UNIT_TRIGGERS dependencies only,
> since we don't do the retroactive deps stuff at all when we are
> coldplugging, it's conditionalized in m->n_reloading <= 0.

I have implemented this now in git:

http://cgit.freedesktop.org/systemd/systemd/commit/?id=f78f265f405a61387c6c12a879ac0d6b6dc958db

Ivan, any chance you can check if this fixes your issue? (Not sure it
does, because I must admit I am not entirely sure I really understood
it fully...)

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 15:52, Lennart Poettering (lenn...@poettering.net) wrote:

> before we coldplug a unit, we should coldplug all units it might
> trigger, which are those with a listed UNIT_TRIGGERS dependency, as
> well as all those that retroactively_start_dependencies() and
> retroactively_stop_dependencies() operates on. Of course, we should
> also avoid running in loops here, but that should be easy by keeping a
> per-unit coldplug boolean around.

Actually, it really is about the UNIT_TRIGGERS dependencies only,
since we don't do the retroactive deps stuff at all when we are
coldplugging, it's conditionalized in m->n_reloading <= 0.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 15:52, Lennart Poettering (lenn...@poettering.net) wrote:

> On Wed, 25.02.15 21:40, Ivan Shapovalov (intelfx...@gmail.com) wrote:
> 
> Ivan,
> 
> > Because the order of coldplugging is not defined, we can reference a
> > not-yet-coldplugged unit and read its state while it has not yet been
> > set to a meaningful value.
> > 
> > This way, already active units may get started again.
> 
> > We fix this by deferring such actions until all units have been at least
> > somehow coldplugged.
> > 
> > Fixes https://bugs.freedesktop.org/show_bug.cgi?id=88401
> 
> Hmm, so firstly, in this case, do those two alsa services
> have RemainAfterExit=yes set? I mean, if they have not, they really
> should. If they have, then queuing jobs for them a second time is not
> really an issue, because the services are already running and the jobs
> will be eaten up eventually.

Oh, is there a simple reproducer for the issue btw?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 16:51, Mantas Mikulėnas (graw...@gmail.com) wrote:

> On Fri, Apr 24, 2015 at 4:24 PM, Lennart Poettering
>  wrote:
> > On Fri, 24.04.15 12:06, Peter Paule (systemd-de...@fedux.org) wrote:
> >
> >> Hi,
> >>
> >> I run nginx in a CentOS 7.0 container via systemd-nspawn. nginx logs to
> >> stderr/stdout via configuration to capture logs via journald.
> >>
> >> nginx.conf
> >>
> >>   error_log  /dev/stderr warn;
> >>
> >>
> >> If I use systemd 219-1 (-1 is the package number of Arch Linux) which seems
> >> to be a non-patched systemd 219, everything is fine. If I upgrade to 
> >> systemd
> >> 219-6, nginx cannot be started via systemd-nspawn. systemd 219-6 includes
> >> this patch 
> >> "https://projects.archlinux.org/svntogit/packages.git/tree/repos/core-x86_64/0001-nspawn-when-connected-to-pipes-for-stdin-stdout-pass.patch?h=packages/systemd";.
> >> BTW: I see the same error if I use systemd-git-HEAD.
> >>
>> I see the following errors in journal - I tried both "stderr" and "stdout".
> >>
> >>   Apr 24 04:48:12 server systemd-nspawn[421]: nginx: [emerg] open()
> >> "/dev/stdout" failed (6: No such device or address)
> >>   Apr 24 04:48:45 server systemd-nspawn[496]: nginx: [emerg] open()
> >> "/dev/stderr" failed (6: No such device or address)
> >
> > Any idea what the precise syscall is that triggers that? i.e. what
> > strace says?
> 
> It kind of makes sense when stdout is a socket, since
> open(/dev/stdout) or open(/proc/self/fd/*) doesn't just dup that fd,
> it tries to open the file anew (including permission checks and
> everything). A bit annoying.

Well, but it's not a socket here, is it? Peter?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-24 Thread Lennart Poettering
On Wed, 25.02.15 21:40, Ivan Shapovalov (intelfx...@gmail.com) wrote:

Ivan,

> Because the order of coldplugging is not defined, we can reference a
> not-yet-coldplugged unit and read its state while it has not yet been
> set to a meaningful value.
> 
> This way, already active units may get started again.

> We fix this by deferring such actions until all units have been at least
> somehow coldplugged.
> 
> Fixes https://bugs.freedesktop.org/show_bug.cgi?id=88401

Hmm, so firstly, in this case, do those two alsa services
have RemainAfterExit=yes set? I mean, if they have not, they really
should. If they have, then queuing jobs for them a second time is not
really an issue, because the services are already running and the jobs
will be eaten up eventually.

But regarding the patch:

I am sorry, but we really should find a different way to fix this, if
there really is an issue to fix...

I really don't like the hashmap that maps units to functions. I mean,
the whole concept of unit vtables exists to encapsulate the
unit-type-specific functions, and we should not add a different place
for configuring those.

Also, and more importantly, the whole coldplug function exists only to
separate the unit loading and initial state changing into two separate
steps, so that we know that all units are fully loaded before we start
making state changes.

Now, I see that this falls short of the issue at hand here, but I
think the right fix is really to alter the order in which we
coldplug. More specifically, I think we need to make the coldplugging
order dependency aware:

before we coldplug a unit, we should coldplug all units it might
trigger, which are those with a listed UNIT_TRIGGERS dependency, as
well as all those that retroactively_start_dependencies() and
retroactively_stop_dependencies() operate on. Of course, we should
also avoid running in loops here, but that should be easy by keeping a
per-unit coldplug boolean around.
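Modeled as a graph walk, the scheme described above amounts to something like the following. This is a hedged Python sketch of the ordering idea only, not systemd's actual code; the unit names and the `triggers` mapping are made up for illustration:

```python
# Sketch: dependency-aware coldplug order. Each unit is coldplugged only
# after the units it may trigger, and a per-unit "done" flag keeps cycles
# from recursing forever.

def coldplug(unit, triggers, done, order):
    if unit in done:                     # per-unit coldplug boolean: no loops
        return
    done.add(unit)                       # mark before recursing so cycles stop
    for dep in triggers.get(unit, ()):   # the UNIT_TRIGGERS dependencies
        coldplug(dep, triggers, done, order)
    order.append(unit)                   # all triggered units are coldplugged now

# a.timer triggers a.service; the back-edge exercises the loop protection
triggers = {"a.timer": ["a.service"], "a.service": ["a.timer"]}
done, order = set(), []
coldplug("a.timer", triggers, done, order)
print(order)  # ['a.service', 'a.timer'] -- the triggered unit comes first
```

With that order, by the time a triggering unit reads the state of a unit it triggers, that state is already meaningful.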

Anyway, I reverted the patch for now, this really needs more thinking.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-24 Thread Mantas Mikulėnas
On Fri, Apr 24, 2015 at 4:24 PM, Lennart Poettering
 wrote:
> On Fri, 24.04.15 12:06, Peter Paule (systemd-de...@fedux.org) wrote:
>
>> Hi,
>>
>> I run nginx in a CentOS 7.0 container via systemd-nspawn. nginx logs to
>> stderr/stdout via configuration to capture logs via journald.
>>
>> nginx.conf
>>
>>   error_log  /dev/stderr warn;
>>
>>
>> If I use systemd 219-1 (-1 is the package number of Arch Linux) which seems
>> to be a non-patched systemd 219, everything is fine. If I upgrade to systemd
>> 219-6, nginx cannot be started via systemd-nspawn. systemd 219-6 includes
>> this patch 
>> "https://projects.archlinux.org/svntogit/packages.git/tree/repos/core-x86_64/0001-nspawn-when-connected-to-pipes-for-stdin-stdout-pass.patch?h=packages/systemd";.
>> BTW: I see the same error if I use systemd-git-HEAD.
>>
>> I see the following errors in journal - I tried both "stderr" and "stdout".
>>
>>   Apr 24 04:48:12 server systemd-nspawn[421]: nginx: [emerg] open()
>> "/dev/stdout" failed (6: No such device or address)
>>   Apr 24 04:48:45 server systemd-nspawn[496]: nginx: [emerg] open()
>> "/dev/stderr" failed (6: No such device or address)
>
> Any idea what the precise syscall is that triggers that? i.e. what
> strace says?

It kind of makes sense when stdout is a socket, since
open(/dev/stdout) or open(/proc/self/fd/*) doesn't just dup that fd,
it tries to open the file anew (including permission checks and
everything). A bit annoying.
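The failure is easy to reproduce outside of nspawn. A quick Python sketch (an illustration assuming a Linux /proc, not nspawn's code): point fd 1 at a socket and then re-open it via /proc/self/fd/1, which is what the /dev/stdout symlink resolves to. The open() fails with exactly ENXIO, errno 6, the error nginx reports:

```python
import errno, os, socket

a, b = socket.socketpair()      # stand-in for stdout being wired to a socket
saved = os.dup(1)
os.dup2(a.fileno(), 1)          # fd 1 now refers to a socket
try:
    # /dev/stdout -> /proc/self/fd/1; open() does not dup the fd, it tries
    # to open the underlying inode anew -- impossible for a socket
    os.open("/proc/self/fd/1", os.O_WRONLY)
    err = None
except OSError as e:
    err = e.errno
finally:
    os.dup2(saved, 1)           # restore the real stdout
print(errno.errorcode[err])     # ENXIO (6: No such device or address)
```

With a pipe or a regular file on fd 1, the same open() succeeds, which is why the nspawn patch that switched stdio to sockets exposed this.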

-- 
Mantas Mikulėnas 


Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 12:06, Peter Paule (systemd-de...@fedux.org) wrote:

> Hi,
> 
> I run nginx in a CentOS 7.0 container via systemd-nspawn. nginx logs to
> stderr/stdout via configuration to capture logs via journald.
> 
> nginx.conf
> 
>   error_log  /dev/stderr warn;
> 
> 
> If I use systemd 219-1 (-1 is the package number of Arch Linux) which seems
> to be a non-patched systemd 219, everything is fine. If I upgrade to systemd
> 219-6, nginx cannot be started via systemd-nspawn. systemd 219-6 includes
> this patch 
> "https://projects.archlinux.org/svntogit/packages.git/tree/repos/core-x86_64/0001-nspawn-when-connected-to-pipes-for-stdin-stdout-pass.patch?h=packages/systemd";.
> BTW: I see the same error if I use systemd-git-HEAD.
> 
> I see the following errors in journal - I tried both "stderr" and "stdout".
> 
>   Apr 24 04:48:12 server systemd-nspawn[421]: nginx: [emerg] open()
> "/dev/stdout" failed (6: No such device or address)
>   Apr 24 04:48:45 server systemd-nspawn[496]: nginx: [emerg] open()
> "/dev/stderr" failed (6: No such device or address)

Any idea what the precise syscall is that triggers that? i.e. what
strace says?

> If I run the container with
> 
>   sudo /usr/bin/systemd-nspawn --register=no -M docker-centos-nginx

What happens if you use "nsenter" instead to join all namespaces of
the running nginx container and invoke a shell there, and then try to
access /dev/stderr? Does this also work?

What happens if you use "dd" to write to /dev/stdout? Does that work,
too? (I think that bash handles /dev/stderr specially when you use it
with redirection, that's why I am asking.)

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH] cryptsetup-generator: support rd.luks.key=keyfile:keyfile_device

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 13:37, Dimitri John Ledkov (dimitri.j.led...@intel.com) wrote:

> >> the exact name of the option and semantics to specify it to
> >> initramfs-tools is different from dracut's (but that's typical) but
> >> said equivalent feature does exist in the major other initramfs
> >> implementation.
> >
> > What's the syntax of Debian's initrd for this?
> >
> > I mean, if their syntax makes more sense, we might standardise on theirs...
> >
> 
> So in debian this is done via keyscript argument in /etc/crypttab, and
> there is passdev keyscript provided by the package that is typically
> used.
> initramfs-tools hooks copy all of those into initramfs.
> 
> The passdev is a C binary that takes a single argument -
> <device>:<filepath>[:<timeout>]
> The binary waits for the device to appear (indefinitely, or up to an
> optionally specified time-out), mounts it read-only, and reads the
> filepath to attempt a LUKS unlock of the device specified in the
> crypttab.
> 
> See docs and sources at:
> 
> https://sources.debian.net/src/cryptsetup/2:1.6.6-5/debian/README.Debian/#L139
> 
> https://sources.debian.net/src/cryptsetup/2:1.6.6-5/debian/passdev.c/
> 
> I am indifferent about the configuration being done via the keyscript=
> parameter in crypttab, and about the quality of the passdev
> implementation. But the <device>:<filepath>[:<timeout>] argument format
> is imho a simple & sensible one for this.
> 
> There are a bunch of other keyscript= binaries provided for remote
> unlocking over ssh, smartcards, yubikeys and so on, because debian
> supports arbitrary things there. A lot of the times people write their
> own keyscript by hand for their usecases. =) as usual debian prefers
> arbitrary code execution instead of something rigidly declarative ;-)

I am very, very unenthusiastic about what the keyscript does. I don't want
to see this upstream. For yubikeys/smartcards I am pretty sure proper
support should be added to the C tools, not via some script glue.

I think Debian's argument for device before file path certainly makes
more sense, but ultimately I don't really care about the order. I would
still be willing to merge a patch that adds support for such external keys.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH] core: don't change removed devices to state "tentative" [was: Re: [PATCH] unit: When stopping due to BindsTo=, log which unit caused it]

2015-04-24 Thread Martin Pitt
Hey Lennart,

Lennart Poettering [2015-04-24 12:37 +0200]:
> I only gave this light testing, I'd really appreciate if you could
> test this, if this still does the right thing!

Done (in QEMU), still works fine. I. e. it properly cleans up stale
mounts. Thanks for cleaning this up, this looks nice!

Martin
-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)


Re: [systemd-devel] [PATCH] cryptsetup-generator: support rd.luks.key=keyfile:keyfile_device

2015-04-24 Thread Dimitri John Ledkov
On 24 April 2015 at 10:06, Lennart Poettering  wrote:
> On Thu, 23.04.15 21:04, Dimitri John Ledkov (dimitri.j.led...@intel.com) 
> wrote:
>
>> On 23 April 2015 at 13:08, Lennart Poettering  wrote:
>> > On Thu, 23.04.15 19:33, Andrei Borzenkov (arvidj...@gmail.com) wrote:
>> >
>> >> > > > What does this actually do? Is the specified key file read from the
>> >> > > > specified device?
>> >> > >
>> >> > > It reads keyfile from filesystem on device identified by 
>> >> > > keyfile_device.
>> >> > >
>> >> > > >  The order of keyfile:device sounds weird, no?
>> >> > > > Shouldn't it be the other way round?
>> >> > > >
>> >> > >
>> >> > > keyfile is mandatory, keyfile_device is optional and can be omitted. I
>> >> > > believe dracut looked at all existing devices then. This order makes
>> >> > > it easier to omit optional parameter(s).
>> >> >
>> >> > Well, whether it is [device:]file or file[:device] is hardly any
>> >> > difference for the parser...
>> >>
>> >> Does it really matter?
>> >
>> > Well, we might as well implement this in the most obvious way if it is
>> > not a completely standard feature yet. To me it appears that only one
>> > initrd supported it, and it lost it a while back without too much
>> > complaining...
>> >
>> > But anyway, I don't mind too much. The
>> >
>>
>> debian's initramfs-tools, but not ubuntu's, support keyfile on
>> usb-disk for unlocking luks volumes.
>>
>> the exact name of the option and semantics to specify it to
>> initramfs-tools is different from dracut's (but that's typical) but
>> said equivalent feature does exist in the major other initramfs
>> implementation.
>
> What's the syntax of Debian's initrd for this?
>
> I mean, if their syntax makes more sense, we might standardise on theirs...
>

So in debian this is done via keyscript argument in /etc/crypttab, and
there is passdev keyscript provided by the package that is typically
used.
initramfs-tools hooks copy all of those into initramfs.

The passdev is a C binary that takes a single argument -
<device>:<filepath>[:<timeout>]
The binary waits for the device to appear (indefinitely, or up to an
optionally specified time-out), mounts it read-only, and reads the
filepath to attempt a LUKS unlock of the device specified in the
crypttab.
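For illustration, splitting that argument is straightforward. A hedged Python sketch, not passdev's actual parser; it assumes the device spec itself contains no colons:

```python
def parse_passdev_arg(arg):
    # <device>:<filepath>[:<timeout>] -- a missing timeout means wait forever
    parts = arg.split(":")
    if len(parts) == 2:
        device, filepath, timeout = parts[0], parts[1], None
    elif len(parts) == 3:
        device, filepath, timeout = parts[0], parts[1], int(parts[2])
    else:
        raise ValueError("expected <device>:<filepath>[:<timeout>]")
    return device, filepath, timeout

print(parse_passdev_arg("/dev/sdb1:/keys/luks.key:30"))
# ('/dev/sdb1', '/keys/luks.key', 30)
```

A real implementation would also have to cope with device specs like UUID=... that may themselves contain no path separators but still need careful splitting.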

See docs and sources at:

https://sources.debian.net/src/cryptsetup/2:1.6.6-5/debian/README.Debian/#L139

https://sources.debian.net/src/cryptsetup/2:1.6.6-5/debian/passdev.c/

I am indifferent about the configuration being done via the keyscript=
parameter in crypttab, and about the quality of the passdev
implementation. But the <device>:<filepath>[:<timeout>] argument format
is imho a simple & sensible one for this.

There are a bunch of other keyscript= binaries provided for remote
unlocking over ssh, smartcards, yubikeys and so on, because debian
supports arbitrary things there. A lot of the times people write their
own keyscript by hand for their usecases. =) as usual debian prefers
arbitrary code execution instead of something rigidly declarative ;-)

-- 
Regards,

Dimitri.
Pura Vida!

https://clearlinux.org
Open Source Technology Center
Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ.


Re: [systemd-devel] [PATCH] unit: When stopping due to BindsTo=, log which unit caused it

2015-04-24 Thread Lennart Poettering
On Wed, 22.04.15 16:55, Alban Crequy (al...@endocode.com) wrote:

> Thanks for the commits. They don't seem related to containers.
> 
> I can reproduce my issue on git-master:
> 
> sudo ~/git/systemd/systemd-nspawn --register=false --bind
> $HOME/tmp/vol -D debian-tree -b
> 
> Then, in the container, make sure /bin/umount does NOT exist.
> Then halt the container with kill -37 1 (SIGRTMIN+3)

We require /bin/mount and /bin/umount to exist. We do not support
systems where you remove those. We also don't support systems without
glibc either, ... ;-)

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH] core: don't change removed devices to state "tentative" [was: Re: [PATCH] unit: When stopping due to BindsTo=, log which unit caused it]

2015-04-24 Thread Lennart Poettering
On Fri, 13.03.15 08:30, Martin Pitt (martin.p...@ubuntu.com) wrote:

> From 05ffa415fa4f75f2e71830d47179b6f4a67c7215 Mon Sep 17 00:00:00 2001
> From: Martin Pitt 
> Date: Fri, 13 Mar 2015 08:23:02 +0100
> Subject: [PATCH] core: don't change removed devices to state "tentative"
> 
> Commit 628c89c introduced the "tentative" device state, which caused
> devices to go from "plugged" to "tentative" on a remove uevent. This
> breaks the cleanup of stale mounts (see commit 3b48ce4), as that only
> applies to "dead" devices.
> 
> The "tentative" state only really makes sense on adding a device when we don't
> know where it was coming from (i. e. not from udev). But when we get a device
> removal from udev we definitively know that it's gone, so change the device
> state back to "dead" as before 628c89c.
Hmm, so this patch doesn't look right. The "add" boolean you are
checking does not encode what the previous state of the device was,
but simply whether to add or remove a flag bit from the found
variable.

The right way to handle this is to explicitly check if the device was
seen by udev before. I have now fixed that in git.
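The distinction can be sketched like this. A hypothetical Python model of the bitmask logic, not the actual device.c code: `add` only selects which way a bit moves, while knowing whether udev ever saw the device requires remembering the previous `found` value separately.

```python
DEVICE_FOUND_UDEV, DEVICE_FOUND_MOUNT = 1, 2

def update_found(found, add, mask):
    # `add` says set-vs-clear for the bits in `mask`; it does not say
    # whether the device had previously been seen by udev
    return (found | mask) if add else (found & ~mask)

def pick_state(found):
    if found & DEVICE_FOUND_UDEV:
        return "plugged"
    if found:
        return "tentative"
    return "dead"

# udev "remove" uevent: clear the udev bit; a mount unit still references
# the device, so the remaining bit keeps it "tentative", not "dead"
found = update_found(DEVICE_FOUND_UDEV | DEVICE_FOUND_MOUNT,
                     False, DEVICE_FOUND_UDEV)
print(pick_state(found))  # tentative
```

That is why the proposed `add &&` test was not the right predicate: the correct fix keys off whether udev had seen the device before, which this mask update alone cannot tell you.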

Also, we need to save/restore the state across daemon reloads, so that
after coming back we can also know whether the device has been
previously seen by udev. I have also added a fix for this now.

I only gave this light testing, I'd really appreciate if you could
test this, if this still does the right thing!

Thanks!

> ---
>  src/core/device.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/core/device.c b/src/core/device.c
> index 6b489a4..098a000 100644
> --- a/src/core/device.c
> +++ b/src/core/device.c
> @@ -419,7 +419,7 @@ static void device_update_found_one(Device *d, bool add, DeviceFound found, bool
>  if (now) {
>  if (d->found & DEVICE_FOUND_UDEV)
>  device_set_state(d, DEVICE_PLUGGED);
> -else if (d->found != DEVICE_NOT_FOUND)
> +else if (add && d->found != DEVICE_NOT_FOUND)
>  device_set_state(d, DEVICE_TENTATIVE);
>  else
>  device_set_state(d, DEVICE_DEAD);
> -- 
> 2.1.4
> 





Lennart

-- 
Lennart Poettering, Red Hat


[systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-24 Thread Peter Paule

Hi,

I run nginx in a CentOS 7.0 container via systemd-nspawn. nginx logs  
to stderr/stdout via configuration to capture logs via journald.


nginx.conf

  error_log  /dev/stderr warn;


If I use systemd 219-1 (-1 is the package number of Arch Linux) which  
seems to be a non-patched systemd 219, everything is fine. If I  
upgrade to systemd 219-6, nginx cannot be started via systemd-nspawn.  
systemd 219-6 includes this patch  
"https://projects.archlinux.org/svntogit/packages.git/tree/repos/core-x86_64/0001-nspawn-when-connected-to-pipes-for-stdin-stdout-pass.patch?h=packages/systemd";. BTW: I see the same error if I use  
systemd-git-HEAD.


I see the following errors in journal - I tried both "stderr" and "stdout".

  Apr 24 04:48:12 server systemd-nspawn[421]: nginx: [emerg] open()  
"/dev/stdout" failed (6: No such device or address)
  Apr 24 04:48:45 server systemd-nspawn[496]: nginx: [emerg] open()  
"/dev/stderr" failed (6: No such device or address)


If I run the container with

  sudo /usr/bin/systemd-nspawn --register=no -M docker-centos-nginx

And check if the device exists, everything looks fine:

  [root@docker-centos-nginx ~]# ls -al /dev/stderr
  lrwxrwxrwx 1 root root 15 Apr 24 09:51 /dev/stderr -> /proc/self/fd/2
  [root@docker-centos-nginx ~]# ls -al /dev/stdout
  lrwxrwxrwx 1 root root 15 Apr 24 09:51 /dev/stdout -> /proc/self/fd/1
  [root@docker-centos-nginx ~]# echo "asdf" > /dev/stdout
  asdf
  [root@docker-centos-nginx ~]# echo "asdf" > /dev/stderr
  asdf

Journal:

  Apr 24 00:00:05 server systemd[1]: Stopping Webservice for server...
  Apr 24 00:00:05 server systemd-nspawn[23539]: Container  
docker-centos-nginx terminated by signal KILL.
  Apr 24 00:00:05 server systemd[1]: nginx@server.service: main  
process exited, code=exited, status=1/FAILURE
  Apr 24 00:00:05 server systemd[1]: Unit nginx@server.service  
entered failed state.

  Apr 24 00:00:05 server systemd[1]: nginx@server.service failed.
  Apr 24 00:00:06 server systemd[1]: Started Webservice for server.
  Apr 24 00:00:06 server systemd[1]: Starting Webservice for server...
  Apr 24 00:00:06 server systemd[1]: Stopping Webservice for server...
  Apr 24 00:00:06 server systemd[1]: Started Webservice for server.
  Apr 24 00:00:06 server systemd[1]: Starting Webservice for server...
  Apr 24 00:00:07 server systemd-nspawn[11016]: Spawning container  
docker-centos-nginx on  
/var/lib/machines/.#docker-centos-nginxb8dda432a4303288.
  Apr 24 00:00:07 server systemd-nspawn[11016]: Press ^] three times  
within 1s to kill container.

  Apr 24 04:43:31 server systemd[1]: Stopping Webservice for server...
  Apr 24 04:43:31 server systemd-nspawn[11016]: Container  
docker-centos-nginx terminated by signal KILL.
  Apr 24 04:43:31 server systemd[1]: nginx@server.service: main  
process exited, code=exited, status=1/FAILURE

  Apr 24 04:43:31 server systemd[1]: Stopped Webservice for server.
  Apr 24 04:43:31 server systemd[1]: Unit nginx@server.service  
entered failed state.

  Apr 24 04:43:31 server systemd[1]: nginx@server.service failed.
  -- Reboot --
  Apr 24 04:47:07 server systemd-nspawn[238]: nginx: [emerg] open()  
"/dev/stdout" failed (6: No such device or address)
  Apr 24 04:48:08 server systemd-nspawn[392]: nginx: [emerg] open()  
"/dev/stdout" failed (6: No such device or address)
  Apr 24 04:48:12 server systemd-nspawn[421]: nginx: [emerg] open()  
"/dev/stdout" failed (6: No such device or address)
  Apr 24 04:48:45 server systemd-nspawn[496]: nginx: [emerg] open()  
"/dev/stderr" failed (6: No such device or address)


Any idea how to solve this issue?

/pp



Re: [systemd-devel] [PATCH v2] network: Implement fallback DHCPv6 prefix handling for older kernels

2015-04-24 Thread Patrik Flykt

Hi,

On Fri, 2015-04-10 at 14:03 +0300, Patrik Flykt wrote:

> Version 2 attempts to resolve IPv6 address assignment issues at run time,
> first by adding IFA_FLAGS, then without.
> 
> Please test with kernels < 3.14 and >= 3.14.

This may be a case of too few people actually using DHCPv6 these days.
The patch works at least for me, so if I hear no comments I'll just push
this today. And I'm prepared to stay around to fix it should it not
work...

Cheers,

Patrik



Re: [systemd-devel] Supporting ExecStartPre= and friends in `systemctl set-property` or `systemd-run -p`

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 04:07, Ivan Shapovalov (intelfx...@gmail.com) wrote:

> - do `systemd-run` twice and somehow set up the dependencies between
>   two transient units

I'd be happy to take a patch that allows configuring deps for
transient units when constructing them.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH] cryptsetup-generator: support rd.luks.key=keyfile:keyfile_device

2015-04-24 Thread Lennart Poettering
On Fri, 24.04.15 09:05, Jan Synacek (jsyna...@redhat.com) wrote:

> Lennart Poettering  writes:
> 
> > On Fri, 20.02.15 10:56, Jan Synacek (jsyna...@redhat.com) wrote:
> >
> > Sorry for the late review.
> >
> > What's the precise background of this? Can you elaborate? Is there
> > some feature request for this?
> 
> Hi,
> 
> I can see that Andrei already answered most of your questions.
> 
> Some time after writing this patch, I realized that I should just fix
> dracut, so I did [1], and I forgot to leave a mention in this thread.
> I'm not sure what happened to the dracut patch since then, though.
> 
> [1] http://thread.gmane.org/gmane.linux.kernel.initramfs/4072

I think the basic feature set of dracut that we can neatly support
directly in systemd's generators we should support there. But of
course, the question is whether this is a "basic feature" or not.

Anyway, the offer still stands, for merging something like this, see
other mail.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH] cryptsetup-generator: support rd.luks.key=keyfile:keyfile_device

2015-04-24 Thread Lennart Poettering
On Thu, 23.04.15 21:04, Dimitri John Ledkov (dimitri.j.led...@intel.com) wrote:

> On 23 April 2015 at 13:08, Lennart Poettering  wrote:
> > On Thu, 23.04.15 19:33, Andrei Borzenkov (arvidj...@gmail.com) wrote:
> >
> >> > > > What does this actually do? Is the specified key file read from the
> >> > > > specified device?
> >> > >
> >> > > It reads keyfile from filesystem on device identified by keyfile_device.
> >> > >
> >> > > >  The order of keyfile:device sounds weird, no?
> >> > > > Shouldn't it be the other way round?
> >> > > >
> >> > >
> >> > > keyfile is mandatory, keyfile_device is optional and can be omitted. I
> >> > > believe dracut looked at all existing devices then. This order makes
> >> > > it easier to omit optional parameter(s).
> >> >
> >> > Well, whether it is [device:]file or file[:device] is hardly any
> >> > difference for the parser...
> >>
> >> Does it really matter?
> >
> > Well, we might as well implement this in the most obvious way if it is
> > not a completely standard feature yet. To me it appears that only one
> > initrd supported it, and it lost it a while back without too much
> > complaining...
> >
> > But anyway, I don't mind too much. The
> >
> 
> debian's initramfs-tools, but not ubuntu's, support keyfile on
> usb-disk for unlocking luks volumes.
> 
> the exact name of the option and semantics to specify it to
> initramfs-tools is different from dracut's (but that's typical) but
> said equivalent feature does exist in the major other initramfs
> implementation.

What's the syntax of Debian's initrd for this?

I mean, if their syntax makes more sense, we might standardise on theirs...

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH] cryptsetup-generator: support rd.luks.key=keyfile:keyfile_device

2015-04-24 Thread Jan Synacek
Lennart Poettering  writes:

> On Fri, 20.02.15 10:56, Jan Synacek (jsyna...@redhat.com) wrote:
>
> Sorry for the late review.
>
> What's the precise background of this? Can you elaborate? Is there
> some feature request for this?

Hi,

I can see that Andrei already answered most of your questions.

Some time after writing this patch, I realized that I should just fix
dracut, so I did [1], and I forgot to leave a mention in this thread.
I'm not sure what happened to the dracut patch since then, though.

[1] http://thread.gmane.org/gmane.linux.kernel.initramfs/4072

Cheers,
-- 
Jan Synacek
Software Engineer, Red Hat

