Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-27 Thread Peter Paule
> Hmm, but that already lists a native config keyword for "stderr"?
> 

Yes, I saw that too late. I copied the default configuration of the Arch Linux
nginx package and used that.


Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Kai Krakow
Lennart Poettering wrote:

> On Mon, 27.04.15 20:08, Kai Krakow (hurikha...@gmail.com) wrote:
> 
>> > Or in other words: ipv6 setup needs some manual networking setup on
>> > the host.
>> 
>> Or there... Any pointers?
> 
> Not really. You have to set up ipv6 masquerading with ip6tables. And
> ensure the containers get ipv6 addresses that are stable enough that
> you can refer to them from the ip6tables rules...

Somehow I thought I would be smart by adding this ExecStartPost= script (OTOH 
it's probably just time for bed):

#!/bin/bash
IFNAME=${1:0:14} # %I is passed here
if [ -n "$IFNAME" ]; then
    IP=$(ip -6 addr show dev $IFNAME scope global | awk '/inet6/ { print $2 }')
    /sbin/sysctl net.ipv6.conf.$IFNAME.forwarding=1
    [ -z "$IP" ] || /sbin/ip6tables -t nat -I POSTROUTING --source $IP --dest ::/0
fi
exit 0

and adding Address=::0/126 to the [Network] section of ve-* devices...
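For reference, a minimal sketch of what that ve-* network file would then
contain (this is an illustration only, not necessarily my exact file):

[Match]
Name=ve-*

[Network]
Address=::0/126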

But somehow it does not work. If I run it manually after starting the 
container, it does its job. Of course, inside the container, the counterpart 
address won't be assigned (that only works with DHCPv4).

If I modify the script to use scope link instead of global, it also works - 
but link-local addresses won't route anyway.

I suppose that when ExecStartPost= runs, the link is just not ready yet. An 
IP address fc00::... will be added to the interface, though. So at least that 
works.

-- 
Replies to list only preferred.



[systemd-devel] users and per user limits (tmpfs)

2015-04-27 Thread Michał Zegan

Hello.

I have discovered how to add resource limits for a user, like how
much memory or CPU time the user can use.
Here is the problem: /tmp seems to be a way for the user to circumvent these
restrictions. Is there a way to protect it too?
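For context, the kind of per-user limit meant here is a control group property
on the user's slice - a sketch only; the exact property names depend on the
systemd version:

    systemctl set-property user-1000.slice MemoryLimit=1G CPUQuota=50%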


[systemd-devel] man systemd.network question

2015-04-27 Thread Kai Krakow
Hello!

The man page reads:

[MATCH] SECTION OPTIONS
   The network file contains a "[Match]" section, which determines if a
   given network file may be applied to a given device; and a
   "[Network]" section specifying how the device should be configured.
   The first (in lexical order) of the network files that matches a
   given device is applied.

What exactly does this mean? Will it process further files, or stop 
processing files after a match?

Usually, my experience with Unix says that when files are processed in 
lexical order, settings from earlier files are overridden by settings from 
later files - as in, e.g., /etc/env.d.

In that sense, it can only mean that processing stops at the first 
matching file. Otherwise the order of overriding would be reversed from 
expectations.
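To illustrate the first-match reading: given the two files below (names
invented for this example), enp4s0 would be configured by the first file
only, even though both [Match] sections apply to it:

/etc/systemd/network/10-wired.network
[Match]
Name=en*
[Network]
DHCP=yes

/etc/systemd/network/20-catchall.network
[Match]
Name=*
[Network]
DHCP=no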

I think this should be made clearer in the man page, e.g. by adding "The 
processing of files stops at the first match." This would follow the example 
of how other projects document this behaviour.

-- 
Replies to list only preferred.



Re: [systemd-devel] systemd-nspawn trouble

2015-04-27 Thread Tobias Hunger
Yes, I was referring to a container when using the name "vm". Sorry if
I caused confusion with this; I used to run lots of real VMs and then
moved those over to containers, and I still think of those services as
virtual machines.

On Mon, Apr 27, 2015 at 5:01 PM, Lennart Poettering
 wrote:
> I figure we should teach journalctl -m to actually watch running
> containers and accessing their journals directly, without requiring a
> symlink in /var/log/journal. For ephemeral containers (which exist
> purely during runtime), this sounds like the best option since we
> shouldn't litter persistant file systems with objects that should not
> persist.
>
> Added to TODO list.

That would be super awesome! And you could get rid of some of those
--link-journal options.

PS: Networking works more like I had expected now, but I am not sure
what I changed. Maybe it was an issue with the Arch packages or
something. I did reinstall both the server and all the containers it
runs a couple of times in the meantime.

Best Regards,
Tobias


Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Kai Krakow
Lennart Poettering wrote:

> On Mon, 27.04.15 20:17, Kai Krakow (hurikha...@gmail.com) wrote:
> 
>> Tomasz Torcz wrote:
>> 
>> >> Well, would that enable automatic, correcting routing between the
>> >> container and the host's external network? That's kinda what this all
>> >> is about...
>> > 
>> > If you have radvd running, it should.  By the way, speaking of NAT
>> > in context of IPv6 is a heresy.
>> 
>> Why? It's purpose here is not saving some addresses (we have many in
>> IPv6), it's purpose is to have security and containment. The services
>> provided by the container - at least in my project - are meant to be seen
>> as a service of the host (as Lennart pointed out as a possible
>> application in another post). I don't want the containers being
>> addressable/routable from outside in. And putting a firewall in place to
>> counterfeit this is just security by obscurity: Have one configuration
>> problem and your firewall is gone and the container publicly available.
>> 
>> The whole story would be different if I'd setup port forwarding
>> afterwards to make services from the containers available - but that
>> won't be the case.
> 
> Sidenote: systemd-nspawn already covers that for ipv4: use the --port=
> switch (or -p).

Yes, I know... And I will certainly find a use-case for that. :-)

But the general design of my project is to put containers behind a reverse 
proxy like nginx or varnish, set up some caching and WAF rules, and 
dynamically point incoming web requests to the right container servicing the 
right environment. :-)

I will probably pull performance data through such a port forwarding. But 
for now the testbed is only my desktop system; some months will pass before 
deploying this on a broader basis; it will certainly not start with IPv6 
support (but that will be kept in mind); and I still have a lot of ideas to 
try out.

I won't even need to have IPv6 pass into the host from external networks 
because a proxy will sit in between. But it would be nice if containers could 
use IPv6 from the inside without having to worry about packets passing in 
through a public routing rule. I don't like pulling up a firewall before 
everything is settled, tested, and secured. A firewall is only the last-resort 
barrier. The same holds true for stuff like fail2ban or denyhosts.

For the time being, I should simply turn off IPv6 inside the container. 
However, I haven't figured out how to prevent systemd-networkd inside the 
container from configuring it.
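One way to do that without networkd's help - a sketch, assuming the container
is allowed to set its own sysctls - is to disable IPv6 via the kernel inside
the container, e.g. from /etc/sysctl.d:

    # /etc/sysctl.d/40-no-ipv6.conf (file name invented)
    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1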

-- 
Replies to list only preferred.



Re: [systemd-devel] [PATCHv3] core: coldplug all units which participate in jobs during coldplugging

2015-04-27 Thread Lennart Poettering
On Mon, 27.04.15 21:19, Ivan Shapovalov (intelfx...@gmail.com) wrote:

> This is yet another attempt to fix coldplugging order (more especially,
> the problem which happens when one creates a job during coldplugging and
> it references a not-yet-coldplugged unit).
> 
> Now we forcibly coldplug all units which participate in jobs. This
> is a superset of previously implemented handling of the UNIT_TRIGGERS
> dependencies, so that handling is removed.
> 
> http://lists.freedesktop.org/archives/systemd-devel/2015-April/031212.html
> https://bugs.freedesktop.org/show_bug.cgi?id=88401 (once again)

Looks good! Applied!

Thanks,

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] initrd mount inactive

2015-04-27 Thread Lennart Poettering
On Mon, 27.04.15 11:47, aaron_wri...@selinc.com (aaron_wri...@selinc.com) wrote:

> I applied commit 628c89cc68ab96fce2de7ebba5933725d147aecc - core: rework 
> device state logic, but now I'm left with a random chance to boot or not.
> 
> Some boots it comes up with "/var mounted" and lots of nice colored "[ OK 
> ]"s.
> 
> Some boots it comes up with "Unit var.mount is bound to inactive unit 
> /dev/mapper/. Stopping, too." and no colored "[ OK ]"s and about 
> half the logs; only the "systemd[1]" messages, and it just hangs at some 
> point; it never reaches the default target.
> 
> I create /dev/mapper/ in initrd with cryptsetup, and then mount it 
> to /newroot/var before switching root to /newroot and running systemd. I 
> don't use systemd in initrd.

Make sure to apply 496068a8288084ab3ecf8b179a8403ecff1a6be8
and f62009410a72f5a89bfb8fdd7e48d9d472a6887b.

Also make sure you have LVM/DM compiled with proper udev support.
 

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Lennart Poettering
On Mon, 27.04.15 20:17, Kai Krakow (hurikha...@gmail.com) wrote:

> Tomasz Torcz wrote:
> 
> >> Well, would that enable automatic, correcting routing between the
> >> container and the host's external network? That's kinda what this all
> >> is about...
> > 
> > If you have radvd running, it should.  By the way, speaking of NAT
> > in context of IPv6 is a heresy.
> 
> Why? It's purpose here is not saving some addresses (we have many in IPv6), 
> it's purpose is to have security and containment. The services provided by 
> the container - at least in my project - are meant to be seen as a service 
> of the host (as Lennart pointed out as a possible application in another 
> post). I don't want the containers being addressable/routable from outside 
> in. And putting a firewall in place to counterfeit this is just security by 
> obscurity: Have one configuration problem and your firewall is gone and the 
> container publicly available.
> 
> The whole story would be different if I'd setup port forwarding afterwards 
> to make services from the containers available - but that won't be
> the case.

Sidenote: systemd-nspawn already covers that for ipv4: use the --port=
switch (or -p).
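Usage sketch (machine name and port numbers invented) - expose container
port 80 on host port 8080:

    systemd-nspawn --machine=web1 --network-veth --port=8080:80 --boot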

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-27 Thread Lennart Poettering
On Mon, 27.04.15 20:11, Peter Paule (systemd-de...@fedux.org) wrote:

> > I'm guessing from the error message that it's not a shell script but nginx
> > itself configured to use "/dev/stderr" as its log file, so there's no >&
> > that could be used...
> 
> Correct - see http://nginx.org/en/docs/ngx_core_module.html
> 
>   Syntax:  error_log file | stderr | syslog:server=address[,parameter=value] |
>            memory:size [debug | info | notice | warn | error | crit | alert | emerg];
>   Default: error_log logs/error.log error;
>   Context: main, http, stream, server, location

What precisely is the setting you picked?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Lennart Poettering
On Mon, 27.04.15 20:08, Kai Krakow (hurikha...@gmail.com) wrote:

> > Or in other words: ipv6 setup needs some manual networking setup on
> > the host.
> 
> Or there... Any pointers?

Not really. You have to set up ipv6 masquerading with ip6tables. And
ensure the containers get ipv6 addresses that are stable enough that
you can refer to them from the ip6tables rules...
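A sketch of that setup (the ULA prefix and uplink interface name are
invented; IPv6 NAT also needs a reasonably recent kernel):

    ip6tables -t nat -A POSTROUTING -s fd00:c0ff:ee::/64 -o enp4s0 -j MASQUERADE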

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH 1/3] zsh-completion: add missing completions for machinectl

2015-04-27 Thread Lukas Rusak
I feel like this is already accomplished. The completion function
"_sd_machines" already lists the running containers.

Otherwise, "_available_machines" is currently only used for "machinectl
start". See:
  list*|cancel-transfer|pull-tar|pull-raw|pull-dkr)
    msg="no options" ;;
  start)
    _available_machines ;;
  *)
    _sd_machines

All other functions (other than list-images, list-transfers, list,
cancel-transfer, pull-tar, pull-raw, and pull-dkr) use the already
implemented "_sd_machines" function to list currently running machines.

So, if you would like, I can change "_available_machines" to "_available_images"
and "__get_available_machines" to "__get_available_images", if that makes
more sense.

On Thu, Apr 23, 2015 at 7:53 AM, Lennart Poettering 
wrote:

> On Wed, 22.04.15 15:52, Lukas Rusak (loru...@gmail.com) wrote:
>
> > Appologies, I'm still getting used to this mailing list thing and using
> git send-email
> >
> > ---
> >  shell-completion/zsh/_machinectl | 84 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------------
> >  1 file changed, 70 insertions(+), 14 deletions(-)
> >
> > diff --git a/shell-completion/zsh/_machinectl b/shell-completion/zsh/_machinectl
> > index c666b7e..a81c5c7 100644
> > --- a/shell-completion/zsh/_machinectl
> > +++ b/shell-completion/zsh/_machinectl
> > @@ -1,5 +1,20 @@
> >  #compdef machinectl
> >
> > +__get_available_machines () {
> > +machinectl --no-legend list-images | awk '{print $1}' | {while read -r a b; do echo $a; done;}
> > +}
> > +
> > +_available_machines() {
> > +local -a _machines
> > +_machines=("${(fo)$(__get_available_machines)}")
> > +typeset -U _machines
> > +if [[ -n "$_machines" ]]; then
> > +_describe 'machines' _machines
> > +else
> > + _message 'no machines'
> > +fi
> > +}
>
> For this to be fully correct, you need to distinguish "images" and
> "machines".
>
> Basically, "machines" are runtime objects, instances of containers
> currently running. "images" are files or directories on disk. You can
> run multiple machines off the same image (by using --read-only or
> --ephemeral).
>
> Other container/VM managers like libvirt-lxc also register their
> running containers with machined as machines, even though the backing
> images of those machines might not be visible to machined.
>
> Usually you run a machine under the same name as the image it runs
> from, but that's not a requirement really.
>
> Some of machinectl's commands operate on images, others on running
> containers...
>
> Lennart
>
> --
> Lennart Poettering, Red Hat
>


Re: [systemd-devel] initrd mount inactive

2015-04-27 Thread Aaron_Wright
I applied commit 628c89cc68ab96fce2de7ebba5933725d147aecc - core: rework 
device state logic, but now I'm left with a random chance to boot or not.

Some boots it comes up with "/var mounted" and lots of nice colored "[ OK 
]"s.

Some boots it comes up with "Unit var.mount is bound to inactive unit 
/dev/mapper/. Stopping, too." and no colored "[ OK ]"s and about 
half the logs; only the "systemd[1]" messages, and it just hangs at some 
point; it never reaches the default target.

I create /dev/mapper/ in initrd with cryptsetup, and then mount it 
to /newroot/var before switching root to /newroot and running systemd. I 
don't use systemd in initrd.

Am I going about this wrong? What am I doing wrong here? What is the best 
way to mount /var in initrd and make systemd happy?


PS - I also added a commit to log what the inactive unit was.

"Aaron Wright"  wrote on 03/12/2015 08:42:15 AM:
> Andrei Borzenkov  wrote on 03/11/2015 08:44:28 PM:
> > aaron_wri...@selinc.com пишет:
> > 
> > > I'm working with an embedded device that mounts / and /var in initrd. It
> > > then switches root and fires up systemd. Early in the boot, after paths
> > > target, /var gets unmounted. I want systemd to not do that, but I can't
> > > figure out how to stop it.
> > > I would like systemd to leave /var mounted, but still unmount it during
> > > shutdown. I would rather not move the mounting of /var out of initrd. Is
> > > this possible?
> > > I'm trying to use a very stripped down systemd. As minimal as possible.
> > 
> > Do you use udev in initrd?
> > 
> 
> No. initrd is a custom script I wrote, and it mounts devtmpfs for its devices.
> 
> > > I'm using systemd-219. The logs say that var.mount is bound to an inactive
> > > unit, and it is stopping too. I assume that is why /var gets unmounted,
> > > but I don't know what to do to stop it. There is no /etc/fstab file. There
> > > is no var.mount file.
> > > I assume I'm either missing something simple, or it is not possible.
> > 
> > Did you try this commit?
> > 
> > 
> > commit 628c89cc68ab96fce2de7ebba5933725d147aecc
> > ...snip...
> > 
> 
> I was finally able to get /var to stay mounted when I included the 
> local-fs.target and local-fs-pre.target units on the device. 
> Apparently they are used magically by systemd. I'm not sure why or 
> how, but it does finally work, so I'm happy. This leads to my other 
> question about what units are required. I'll continue that 
> discussion on that thread. 




Re: [systemd-devel] systemd-networkd and systemd-nspawn: missing host-side network

2015-04-27 Thread Kai Krakow
Kai Krakow wrote:

Amended below...

> Hello!
> 
> I've created a container with systemd-nspawn, "machinectl enable"d it,
> then added machines.target to my default target (systemctl enable
> machines.target) so that containers will be autostarted on boot. That
> works so far.
> 
> But I discovered that systemd-networkd no longer configures my normal
> ethernet device during boot (it's configured as dhcp client). It just
> configures the ve-* device and that's it. After I manually restart
> networkd, all links are configured.
> 
> Steps to reproduce:
> 
> $ cat /etc/systemd/network/80-dhcp.network
> [Match]
> Name=en*
> [Network]
> DHCP=yes
> [DHCP]
> UseDomains=true
> 
> $ cat /etc/systemd/network/90-veth.network
> # This was added because otherwise after reboot, ve- is stuck in
> # mode "configuring" when looking at networkctl, it changes nothing
> # for the following behaviour, tho...
> [Match]
> Name=ve-*
> [Network]
> DHCP=no
> 
> $ machinectl enable test-machine
> $ systemctl enable machines.target
> $ systemctl reboot
> ...[rebooting]...
> 
> $ networkctl
> IDX LINK TYPE   OPERATIONAL SETUP
>   1 lo   loopback   n/a n/a
>   2 enp4s0   ether  n/a n/a
>   3 sit0 sitn/a n/a
>   4 ve-  ether  routableconfigured
> 
> $ ifconfig
> # shows only lo and ve-
> 
> $ systemctl restart systemd-networkd
> $ networkctl
> IDX LINK TYPE   OPERATIONAL SETUP
>   1 lo   loopback   carrier unmanaged
>   2 enp4s0   ether  routableconfigured
>   3 sit0 sitoff unmanaged
>   4 ve-  ether  routableconfigured

I just discovered that I also need to restart the container at this point; 
otherwise I cannot ssh into the container - the connection just times out.

-- 
Replies to list only preferred.



Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Kai Krakow
Lennart Poettering wrote:

> On Mon, 27.04.15 15:44, Dimitri John Ledkov (dimitri.j.led...@intel.com)
> wrote:
> 
>> > Well, networkd on the host automatically sets up IPv4 masquerading for
>> > each container. We simply don't do anything equivalent for IPv6
>> > currently.
>> >
>> > Ideally we wouldn't have to do NAT for IPv6 to make this work, and
>> > instead would pass on some ipv6 subnet we acquired from uplink without
>> > NAT to each container, but we currently don't have infrastructure for
>> > that in networkd, and I am not even sure how this could really work,
>> > my ipv6-fu is a bit too limited...
>> >
>> > or maybe we should do ipv6 nat after all, under the logic that
>> > containers are just an implementation detail of the local host rather
>> > than something to be made visible to the outside world. however code
>> > for this exists neither.
>> >
>> > Or in other words: ipv6 setup needs some manual networking setup on
>> > the host.
>> 
>> One should roll the dice and generate unique local address /48 prefix
>> and use that to setup local addressing, ideally with
>> autoconfigurations (e.g. derive a fake mac from container uuid and
>> using the "hosts's" ULA prefix auto-assign ipv6 address)
> 
> Well, would that enable automatic, correcting routing between the
> container and the host's external network? That's kinda what this all
> is about...

My IPv6-fu is in apprentice mode, too. But my first guess would be: no. 
Local addressing is not routed AFAIK. So I either need a global-scope address 
(and for my use-case I don't want that) or it has to go through NAT.

You said you don't set up IPv6 masquerading yet. My first guess was that I may 
have forgotten to enable IPv6 NAT support in the kernel. I'll check that. 
Along with that, I'm eager to read about a proper, official solution within 
systemd-nspawn here.

-- 
Replies to list only preferred.



Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-27 Thread Mantas Mikulėnas
On Mon, Apr 27, 2015 at 9:11 PM, Peter Paule 
wrote:

> > I'm guessing from the error message that it's not a shell script but
> nginx
> > itself configured to use "/dev/stderr" as its log file, so there's no >&
> > that could be used...
>
> Correct - see http://nginx.org/en/docs/ngx_core_module.html
>
>   Syntax:   error_log file | stderr | syslog:server=address[,parameter=value] |
>             memory:size [debug | info | notice | warn | error | crit | alert | emerg];
>   Default:  error_log logs/error.log error;
>   Context:  main, http, stream, server, location
>

Hmm, but that already lists a native config keyword for "stderr"?

-- 
Mantas Mikulėnas 


Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Kai Krakow
Lennart Poettering wrote:

> On Sun, 26.04.15 16:50, Kai Krakow (hurikha...@gmail.com) wrote:
> 
>> Hello!
>> 
>> I've successfully created a Gentoo container on top of a Gentoo host. I
>> can start the container with machinectl. I can also login using SSH. So
>> mission almost accomblished (it should become a template for easy vserver
>> cloning).
>> 
>> But from within the IPv6-capable container I cannot access the IPv6
>> outside world. Name resolution via IPv6 fails, as does pinging to IPv6.
>> It looks like systemd-nspawn does only setup IPv4 routes to access
>> outside my gateway boundary. IPv6 does not work.
> 
> Well, networkd on the host automatically sets up IPv4 masquerading for
> each container. We simply don't do anything equivalent for IPv6
> currently.

So it was a good idea to ask before poking around... ;-)

> Ideally we wouldn't have to do NAT for IPv6 to make this work, and
> instead would pass on some ipv6 subnet we acquired from uplink without
> NAT to each container, but we currently don't have infrastructure for
> that in networkd, and I am not even sure how this could really work,
> my ipv6-fu is a bit too limited...
> 
> or maybe we should do ipv6 nat after all, under the logic that
> containers are just an implementation detail of the local host rather
> than something to be made visible to the outside world. however code
> for this exists neither.

Well, my expectation would be to have NAT for IPv6 here. Why should IPv4 
private addresses be NATed by default, but not IPv6 private addresses?

The obvious behaviour would be that "it just works." If I wanted routable 
IPv4, I'd configure that. If I wanted routable IPv6, I'd do that, too. But 
it'd be pretty surprising to have IPv4 NAT but publicly reachable IPv6 if 
radvd propagated a routable address. This could also become a surprise 
security problem.

So I suggest, by default both protocols should behave the same.

For my project, IPv6 is currently not a requirement, but it is a planned 
future improvement. I just wanted to test it out. So currently I could resort 
to switching off IPv6 in the container, though it's also not obvious how to 
do that. It's probably done by means of putting some config in 
/etc/systemd/network within the container.

> Or in other words: ipv6 setup needs some manual networking setup on
> the host.

Or there... Any pointers?

Thanks,
Kai

-- 
Replies to list only preferred.



[systemd-devel] [PATCHv3] core: coldplug all units which participate in jobs during coldplugging

2015-04-27 Thread Ivan Shapovalov
This is yet another attempt to fix coldplugging order (more especially,
the problem which happens when one creates a job during coldplugging and
it references a not-yet-coldplugged unit).

Now we forcibly coldplug all units which participate in jobs. This
is a superset of previously implemented handling of the UNIT_TRIGGERS
dependencies, so that handling is removed.

http://lists.freedesktop.org/archives/systemd-devel/2015-April/031212.html
https://bugs.freedesktop.org/show_bug.cgi?id=88401 (once again)
---
 src/core/transaction.c | 7 +++++++
 src/core/unit.c        | 8 --------
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/src/core/transaction.c b/src/core/transaction.c
index 5974b1e..6e39809 100644
--- a/src/core/transaction.c
+++ b/src/core/transaction.c
@@ -848,6 +848,13 @@ int transaction_add_job_and_dependencies(
         assert(type < _JOB_TYPE_MAX_IN_TRANSACTION);
         assert(unit);
 
+        /* Before adding jobs for this unit, let's ensure that its state has been loaded.
+         * This matters when jobs are spawned as part of coldplugging itself (see e. g. path_coldplug()).
+         * This way, we "recursively" coldplug units, ensuring that we do not look at state of
+         * not-yet-coldplugged units. */
+        if (unit->manager->n_reloading > 0)
+                unit_coldplug(unit);
+
         /* log_debug("Pulling in %s/%s from %s/%s", */
         /*           unit->id, job_type_to_string(type), */
         /*           by ? by->unit->id : "NA", */
diff --git a/src/core/unit.c b/src/core/unit.c
index 2b356e2..996b648 100644
--- a/src/core/unit.c
+++ b/src/core/unit.c
@@ -2889,14 +2889,6 @@ int unit_coldplug(Unit *u) {
 
         u->coldplugged = true;
 
-        /* Make sure everything that we might pull in through
-         * triggering is coldplugged before us */
-        SET_FOREACH(other, u->dependencies[UNIT_TRIGGERS], i) {
-                r = unit_coldplug(other);
-                if (r < 0)
-                        return r;
-        }
-
         if (UNIT_VTABLE(u)->coldplug) {
                 r = UNIT_VTABLE(u)->coldplug(u);
                 if (r < 0)
-- 
2.3.6



Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Kai Krakow
Tomasz Torcz wrote:

>> Well, would that enable automatic, correcting routing between the
>> container and the host's external network? That's kinda what this all
>> is about...
> 
> If you have radvd running, it should.  By the way, speaking of NAT
> in context of IPv6 is a heresy.

Why? Its purpose here is not to save some addresses (we have many in IPv6); 
its purpose is security and containment. The services provided by the 
container - at least in my project - are meant to be seen as a service of the 
host (as Lennart pointed out as a possible application in another post). I 
don't want the containers to be addressable/routable from the outside in. And 
putting a firewall in place to counteract this is just security by obscurity: 
have one configuration problem and your firewall is gone and the container is 
publicly available.

The whole story would be different if I'd set up port forwarding afterwards 
to make services from the containers available - but that won't be the case.

Each container has to be in its own private network (or grouped into a 
private network with selected other containers). Only gateway services on 
the host system (like a web proxy) are allowed to talk to the containers.

-- 
Replies to list only preferred.



Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-27 Thread Peter Paule
> I'm guessing from the error message that it's not a shell script but nginx
> itself configured to use "/dev/stderr" as its log file, so there's no >&
> that could be used...

Correct - see http://nginx.org/en/docs/ngx_core_module.html

  Syntax:   error_log file | stderr | syslog:server=address[,parameter=value] |
            memory:size [debug | info | notice | warn | error | crit | alert | emerg];
  Default:  error_log logs/error.log error;
  Context:  main, http, stream, server, location


Re: [systemd-devel] Rebooting systemd-nspawn container results in shutdown

2015-04-27 Thread Kai Krakow
Lennart Poettering wrote:

> On Sun, 26.04.15 16:55, Kai Krakow (hurikha...@gmail.com) wrote:
> 
>> Hello!
>> 
>> I've successfully created a Gentoo container on top of a Gentoo host. I
>> can start the container with machinectl, as I can with "systemctl start
>> ...".
>> 
>> Inside the container (logged in via SSH), I could issue a reboot command.
>> But that just results in the container being shutdown. It never comes
>> back unless I restart the machine with systemctl or machinectl.
> 
> What systemd versions run on the host and in the container?

systemd-219 on the host, 218 in the container.

> if you strace the nspawn process, and then issue the reboot command,
> what are the last 20 lines this generates when nspawn exits? Please
> paste somewhere.

Sure: https://gist.github.com/kakra/d2ff59deec079e027d71

> Is the service in a failed state or so when this doesn't work?

# systemctl status systemd-nspawn@gentoo\\x2dcontainer\\x2dbase.service
● systemd-nspawn@gentoo\x2dcontainer\x2dbase.service - Container gentoo-container-base
   Loaded: loaded (/etc/systemd/system/systemd-nspawn@gentoo\x2dcontainer\x2dbase.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Mo 2015-04-27 19:54:36 CEST; 2min 30s ago
     Docs: man:systemd-nspawn(1)
  Process: 14721 ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest --network-veth --machine=%I --bind=/usr/portage --bind-ro=/usr/src (code=exited, status=133)
 Main PID: 14721 (code=exited, status=133)
   Status: "Terminating..."

Apr 27 19:54:36 jupiter systemd-nspawn[14721]: [  OK  ] Reached target Unmount All Filesystems.
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: [  OK  ] Stopped target Local File Systems (Pre).
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: Stopping Remount Root and Kernel File Systems...
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: [  OK  ] Stopped Remount Root and Kernel File Systems.
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: [  OK  ] Reached target Shutdown.
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: Sending SIGTERM to remaining processes...
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: Sending SIGKILL to remaining processes...
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: Rebooting.
Apr 27 19:54:36 jupiter systemd[1]: Stopping Container gentoo-container-base...
Apr 27 19:54:36 jupiter systemd[1]: Stopped Container gentoo-container-base.

> What is the log output of the service then?

Is what's included in the status above sufficient?

>> BTW: Is there a way to automatically bind-mount some directories instead
>> of "systemctl edit --full" the service file and add those?
> 
> Currently not, but there's a TODO item to add ".nspawn" files that may
> be placed next to container directories with additional options.

This is for an imagined use-case where I have multiple similar containers 
running which should all mount the same storage pool (e.g. web pages, where 
each container just runs a different PHP version).

-- 
Replies to list only preferred.



Re: [systemd-devel] Question about system-update.target

2015-04-27 Thread Richard Hughes
On 27 April 2015 at 17:53, Mantas Mikulėnas  wrote:
> In this case, failure doesn't make much sense, if you describe the task as
> "ensuring that the BIOS is up-to-date".

In this case, the task is "upload firmware blob foo.bin in /var/cache
to the flash chip"

Richard.


Re: [systemd-devel] KillUserProcesses timeout

2015-04-27 Thread Mikhail Morfikov
This is the log when my system works as usual:

(loginctl session-status)

1 - morfik (1000)
          Since: Sun 2015-04-26 23:19:01 CEST; 18h ago
         Leader: 1720 (lightdm)
           Seat: seat0; vc7
        Display: :0
        Service: lightdm; type x11; class user
          State: online
           Unit: session-1.scope
                 ├─ 1720 lightdm --session-child 12 19
                 ├─ 1764 /usr/bin/gnome-keyring-daemon --daemonize --login
                 ├─ 1766 /usr/bin/openbox --startup /usr/lib/x86_64-linux-gnu/openbox-autostart OPENBOX
                 ├─ 1808 /usr/bin/ssh-agent /usr/bin/gpg-agent --daemon --sh --write-env-file=/home/morfik/.gnupg/gpg-agent-info-morfikownia /usr/bin/dbus-launch --exit-with-session /usr/bin/openbox-session
                 ├─ 1809 /usr/bin/gpg-agent --daemon --sh --write-env-file=/home/morfik/.gnupg/gpg-agent-info-morfikownia /usr/bin/dbus-launch --exit-with-session /usr/bin/openbox-session
                 ├─ 1812 /usr/bin/dbus-launch --exit-with-session /usr/bin/openbox-session
                 ├─ 1813 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
                 ├─ 1829 compton --config /home/morfik/.config/compton.conf -b
                 ├─ 1844 /usr/bin/urxvtd -q -f -o
                 ├─ 1845 /usr/bin/urxvtd -q -f -o
                 ├─ 1848 tint2 -c /home/morfik/.config/tint2/tint2rc_up
                 ├─ 1849 tint2 -c /home/morfik/.config/tint2/tint2rc_down
                 ├─ 1880 sg p2p -c megasync
                 ├─ 1881 claws-mail
                 ├─ 1883 volumeicon
                 ├─ 1887 megasync
                 ├─ 1888 xfce4-volumed
                 ├─ 1890 /usr/lib/x86_64-linux-gnu/gconf/gconfd-2
                 ├─ 1911 /usr/lib/x86_64-linux-gnu/xfce4/xfconf/xfconfd
                 ├─ 1912 tmux attach-session -t system-logs
                 ├─ 1972 tmux attach-session -t system-logs
                 ├─ 2000 zsh -c journalctl -b --no-pager --since -10m | ccze -m ansi && systemctl --failed --no-pager | ccze -m ansi && journalctl -n 0 -f | ccze -m ansi
                 ├─ 2003 zsh -c cat /dev/log-lxc | ccze -m ansi -p syslog -C
                 ├─ 2004 newsbeuter
                 ├─ 2056 light-locker
                 ├─ 2129 cat /dev/log-lxc
                 ├─ 2131 ccze -m ansi -p syslog -C
                 ├─ 2177 /usr/lib/at-spi2-core/at-spi-bus-launcher
                 ├─ 2180 /usr/lib/dconf/dconf-service
                 ├─ 2184 /usr/bin/dbus-daemon --config-file=/etc/at-spi2/accessibility.conf --nofork --print-address 3
                 ├─ 2194 /usr/lib/at-spi2-core/at-spi2-registryd --use-gnome-session
                 ├─ 2546 /usr/bin/pulseaudio --start --log-target=syslog
                 ├─ 2637 journalctl -n 0 -f
                 ├─ 2638 ccze -m ansi
                 ├─ 2640 /usr/lib/pulseaudio/pulse/gconf-helper
                 ├─ 2691 tmux attach-session -t tmux
                 ├─ 2794 -zsh
                 ├─ 2881 su -
                 ├─ 2898 -su
                 ├─ 3557 -zsh
                 ├─15023 conky -c /home/morfik/.conky/.conkyrc_fs
                 ├─15060 conky -c /home/morfik/.conky/.conkyrc
                 ├─15899 conky -c /home/morfik/.conky/1c/.conkyrc_1c
                 └─15900 conky -c /home/morfik/.conky/1b1/.conkyrc_1b1_int

Apr 27 13:11:51 morfikownia su[68365]: pam_unix(su:session): session closed for user debian-security-support
Apr 27 13:11:51 morfikownia su[68388]: Successful su for debian-security-support by root
Apr 27 13:11:51 morfikownia su[68388]: + ??? root:debian-security-support
Apr 27 13:11:51 morfikownia su[68388]: pam_unix(su:session): session opened for user debian-security-support by (uid=0)
Apr 27 13:11:51 morfikownia su[68388]: pam_unix(su:session): session closed for user debian-security-support
Apr 27 13:12:10 morfikownia su[69569]: Successful su for morfik by root
Apr 27 13:12:10 morfikownia su[69569]: + ??? root:morfik
Apr 27 13:12:10 morfikownia su[69569]: pam_unix(su:session): session opened for user morfik by (uid=0)
Apr 27 13:12:10 morfikownia org.freedesktop.Notifications[1813]: (xfce4-notifyd:69577): Gtk-WARNING **: Failed to set text from markup due to error parsing markup: Unknown tag 'p' on line 1 char 20
Apr 27 13:12:12 morfikownia su[69569]: pam_unix(su:session): session closed for user morfik

21 - root (0)
   Since: Mon 2015-04-27 18:00:08 CEST; 6min ago
  Leader: 41244 (login)
Seat: seat0; vc1
 TTY: /dev/tty1
 Service: login; type tty; class user
   State: active
Unit: session-21.scope
  ├─12773 -zsh
  ├─15435 loginctl session-status 1 21 c1
  └─41244 /bin/login -- 

Apr 27 18:00:08 morfikownia systemd[1]: Started Session 21 of user root.
Apr 27 18:00:08 morfikownia systemd[1]: Starting Session 21 of user root.
Apr 27 18:00:08 morfikownia login[

Re: [systemd-devel] Question about system-update.target

2015-04-27 Thread Mantas Mikulėnas
On Mon, Apr 27, 2015 at 11:52 AM, Richard Hughes 
wrote:

> What return code am I supposed to return if we launch
> fwupd-offline-update.service and there are no BIOS updates to apply?
>

In this case, failure doesn't make much sense, if you describe the task as
"ensuring that the BIOS is up-to-date".

-- 
Mantas Mikulėnas 


Re: [systemd-devel] Deadlocks with reloading jobs which are part of current transaction [was: [PATCH] Avoid reloading services when shutting down]

2015-04-27 Thread Lennart Poettering
On Wed, 04.02.15 23:48, Uoti Urpala (uoti.urp...@pp1.inet.fi) wrote:

Sorry for the late reply,

> On Wed, 2015-02-04 at 21:57 +0100, Lennart Poettering wrote:
> > OK, let's try this again, with an example:
> > 
> > a) you have one service mydaemon.service
> > 
> > b) you have a preparation service called
> >mydaemon-convert-config.service that takes config from somewhere,
> >converts it into a suitable format for mydaemon.service's binary
> > 
> > Now, you change that config that is located somewhere, issue a restart
> > request for m-c-c.s, issue a reload request for mydaemon.service.
> > 
> > Now, something like this should always have the result that your
> > config change is applied to mydaemon.service. Regardless if
> > mydaemon.service's start was queued, is already started or is
> > currently being started. You are suggesting that the reload can
> > suppressed when a start is already enqueued, but that's really not the
> > case, because you first have to run m-c-c.s, before you can reload...
> 
> I do not see why that would cause any problems with removing the
> blocking.
> 
> If you mean literally running "systemctl restart
> mydaemon-convert-config.service; systemctl reload mydaemon.service" then
> this should still work fine - the first "restart" will block until the
> operation is complete and new config exists, and then the "reload"
> guarantees that no old config is in use. 

No, the commands you suggest above don't just enqueue the operations,
they enqueue them and then wait for them to finish. Of course, if you
synchronously wait for them to finish then all races are gone, but
this really should work without that, so that things can be enqueued
and work correctly.

> However, I don't see why you'd
> include the part about creating the new configuration via
> mydaemon-convert-config.service in this case - the new configuration
> already exists before any "reload" functionality is invoked, so why the
> irrelevant complication of creating that configuration via another
> service? It seems you are implicitly assuming some kind of parallel
> execution of the restart and the reload?

Well, this is an example of the way people do this, and yes, I was
talking about "enqueuing", and that really means just that: it won't
be the client anymore that controls execution order; it is solely
the queue and its semantics that enforce how things are run and what
guarantees are made.

> If you mean something like "systemctl restart --no-block
> mydaemon-convert-config.service; systemctl reload mydaemon.service", I
> don't see why you'd ever /expect/ this to work with any reload semantics
> - isn't this clear user error, and will be racy with current systemd
> code just as much as the proposed fix? 

Yupp, this is what I mean. (though I'd actually specify the --no-block
in the second command too, though this doesn't make much of a
difference...)

I am pretty sure that enqueueing these two commands should be sufficient
to get the config that was written out by the first service to be in
effect in the second service.

> There are no ordering constraints
> here, any more than there would be about starting two services and
> expecting the first request to be started first. 

Hmm? If you start two services, and they are ordered against each
other, then yes, the second service should only be started after the
first one has completed startup.

> And in any case I'd consider the semantics of reload to be "switch
> to configuration equal or newer than what existed *when the reload
> was requested*", without any guarantees that changes from operations
> queued but not finished before calling reload would be taken into
> account.

The queue is really a work queue, and the After= and Before= deps
dictate how the work can be parallelized or needs to be serialized. As
such, if I have 5 jobs enqueued that depend on each other, I need to
make sure they are executed in the order specified and can operate on
the results of the previous job.

I hope this makes sense...

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH v2] journalctl: Improve boot ID lookup

2015-04-27 Thread Lennart Poettering
On Sat, 25.04.15 15:51, Jan Janssen (medhe...@web.de) wrote:

> >Yeah, patches like these always do end up looking messy. It's much
> >easier to read after applying it.
> >
> >Well, it jumps from one boot to the next boot using _BOOT_ID matches. It
> >starts at the journal head to get the boot ID, makes a _BOOT_ID match
> >and then comes from the opposite journal direction (tail) to get the end
> >a boot. And then flushes the matches, and advances the journal from that
> >exact position one further (which gives us the start and ID of our next
> >boot). Rinse and repeat.
> >Note, v1 differs in that it assumes sd_journal_flush_matches() will also
> >reset the position we are in the journal at that moment. That version
> >went around that by using a cursor and seeking to it after flushing.
> >Hence why I wonder if this behavior of flush_matches is expected/desired
> >or not.
> >

Yes, _flush_matches() should exclusively flush the matches and not
reset the position. If it changed the position, that would be a
bug. 

Matches really only change how we look for the next entry, not how we
look at the current one.

> I gave this another look today. Since journalctl uses SD_JOURNAL_LOCAL_ONLY
> by default, the new algorithm cannot trip up on interleaving boot IDs (since
> they shouldn't be interleaving in that case, per the above assumption). Same
> goes for --machine mode. Now, --file, --directory and --merge mode on the
> other hand does confuse the new algorithm.

Yeah, I think using the seek to boot id logic only really makes sense
for local journals. I think we should refuse operation if you mix
--merge (or the related other switches) with it.

> But I think it might be worth it to go with my above suggestion if that'll
> be accepted. Alternatively, we could either refuse --boot and --list-boots
> in those cases, or ship the old algorithm along with the new one and use
> that one in those cases where the faster one gets confused.
> 
> Or we stick with status quo and don't improve on the algorithm altogether.
> I'd like to know the option to go with, to ease me mind...

I think your altered algorithm does make a ton of sense, but please
add code that explicitly fails if you combine --boot with --merge and
so on...

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Question about system-update.target

2015-04-27 Thread Richard Hughes
On 27 April 2015 at 16:42, Lennart Poettering  wrote:
> - ship a service packagekit-reboot.service that contains:
>
>   [Service]
>   ExecStart=/bin/systemctl reboot --no-block
>   Type=oneshot

If that file was shipped in systemd, fwupd could use the same method
without having to ship the extra duplicated files. e.g.
system-update-post-reboot.service

> Then, order both your update services
> "Before=packagegit-reboot.service packagekit-poweroff.service", so
> that they run before those services are started.

Makes sense so far.

> Finally, from packagekit, enqueue pretty early on, before you start
> installation either one or the other of the two services, depending on
> packagkit's configuration. This is easily done by issuing the
> StartUnit() bus call on the systemd service file passing either
> "packagekit-reboot.service" or "packagekit-poweroff.service" as first
> parameter, and "replace" as second.

Clever stuff.

> I hope that makes sense?

and remove OnFailure=reboot.target, right?

Richard.


Re: [systemd-devel] heads-up: chasing journal(?) related regression in 219 causing boot hang/fail

2015-04-27 Thread Christian Hesse
Martin Pitt  on Sat, 2015/04/11 10:38:
> Hello Tobias,
> 
> Tobias Hunger [2015-04-11  2:17 +0200]:
> > did you make any progress with this bug? Apparently the same issue is
> > blocking systemd-219 from getting into arch linux (
> > https://bugs.archlinux.org/task/44016 ), so this seems to be a
> > wide-spread issue. Is anyone taking a serious look into this issue?
> 
> Sorry, no, I was pretty busy with making systemd work well enough
> for the impending Debian and Ubuntu releases. A few weeks ago I mostly
> wanted to see whether this was specific to Debian/Ubuntu somehow, and
> I couldn't reproduce it in a VM with Fedora 21 plus dbus and systemd
> from rawhide. But in the meantime we got plenty of confirmations that
> it affects Fedora and now Arch, so I don't believe this is actually
> related to d-bus or something such.
> 
> As for the actual lockup, I'm afraid I don't understand at all
> what is happening (I'm anot familiar at all with how journald
> interacts with other services and D-Bus/logind).
> 
> So from my POV my best recommendation would be to revert commit
> 13790add4 upstream for now until this gets understood and fixed
> properly, especially if/when version 220 should be released. Breaking
> booting is much worse than not being able to restart journald.

Any news about this one?
Looks like everybody is waiting for a fix and nobody is working on it...

I do not know how to debug this. If I can help let me know.
-- 
main(a){char*c=/*Schoene Gruesse */"B?IJj;MEH"
"CX:;",b;for(a/*Chris   get my mail address:*/=0;b=c[a++];)
putchar(b-1/(/*   gcc -o sig sig.c && ./sig*/b/42*2-3)*42);}




Re: [systemd-devel] Question about system-update.target

2015-04-27 Thread Lennart Poettering
On Mon, 27.04.15 15:47, Richard Hughes (hughsi...@gmail.com) wrote:

> On 27 April 2015 at 15:18, Lennart Poettering  wrote:
> > Well, thinking about this, maybe OnFailure=reboot.target is missing
> > the point for these services. After all, the system should reboot
> > regardless if the update fails or not...
> 
> Not quite; PackageKit supports an update-offline-and-then-shutdown
> mode at the request of the GNOME designers. If we can configure that
> using systemd I'd gladly rip out the code in PackageKit and move it
> down to systemd.

Hmm, in that case I think this is best done as part of PackageKit really...

Something like this:

- ship a service packagekit-reboot.service that contains:

  [Service]
  ExecStart=/bin/systemctl reboot --no-block
  Type=oneshot

- ship a service packagekit-poweroff.service that contains:

  [Service]
  ExecStart=/bin/systemctl poweroff --no-block
  Type=oneshot

Then, order both your update services
"Before=packagegit-reboot.service packagekit-poweroff.service", so
that they run before those services are started.

Finally, from packagekit, enqueue pretty early on, before you start
installation either one or the other of the two services, depending on
packagkit's configuration. This is easily done by issuing the
StartUnit() bus call on the systemd service file passing either
"packagekit-reboot.service" or "packagekit-poweroff.service" as first
parameter, and "replace" as second.

That way you can keep the setting whether to reboot or power off in
packagekit. And make sure to enqueue the jobs really early on, so that
they are enqueued whatever happens; the deps will make sure that they
aren't run before your two update services actually finish running.
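For illustration, the same StartUnit() call made from a shell with busctl (a
sketch; signature "ss" is the unit name plus the job mode):

    busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 \
        org.freedesktop.systemd1.Manager StartUnit ss packagekit-reboot.service replace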

I hope that makes sense?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-27 Thread Lennart Poettering
On Mon, 27.04.15 18:28, Ivan Shapovalov (intelfx...@gmail.com) wrote:

> On 2015-04-27 at 17:14 +0200, Lennart Poettering wrote:
> > On Sat, 25.04.15 05:48, Ivan Shapovalov (intelfx...@gmail.com) wrote:
> > 
> > > On 2015-04-25 at 04:00 +0300, Ivan Shapovalov wrote:
> > > > On 2015-04-24 at 16:04 +0200, Lennart Poettering wrote:
> > > > > [...]
> > > > > 
> > > > > Actually, it really is about the UNIT_TRIGGERS dependencies 
> > > > > only,
> > > > > since we don't do the retroactive deps stuff at all when we are
> > > > > coldplugging, it's conditionalized in m->n_reloading <= 0.
> > > > 
> > > > So, I think I understand the problem. We should do this not only 
> > > > for
> > > > UNIT_TRIGGERS, but also for any dependencies which may matter
> > > > when activating that unit. That is, anything which is referenced 
> > > > by
> > > > transaction_add_job_and_dependencies()... recursively.
> > > 
> > > Here is what I have in mind. Don't know whether this is correct, 
> > > but
> > > it fixes the problem for me.
> > > 
> > > From 515d878e526e52fc154874e93a4c97555ebd8cff Mon Sep 17 00:00:00 2001
> > > From: Ivan Shapovalov 
> > > Date: Sat, 25 Apr 2015 04:57:59 +0300
> > > Subject: [PATCH] core: coldplug all units which participate in jobs
> > > 
> > > This is yet another attempt to fix coldplugging order (more especially,
> > > the problem which happens when one creates a job during coldplugging, and
> > > it references a not-yet-coldplugged unit).
> > > 
> > > Now we forcibly coldplug all units which participate in jobs. This
> > > is a superset of previously implemented handling of the UNIT_TRIGGERS
> > > dependencies, so that handling is removed.
> > > ---
> > >  src/core/transaction.c | 6 ++++++
> > >  src/core/unit.c        | 8 --------
> > >  2 files changed, 6 insertions(+), 8 deletions(-)
> > > 
> > > diff --git a/src/core/transaction.c b/src/core/transaction.c
> > > index 5974b1e..a02c02c 100644
> > > --- a/src/core/transaction.c
> > > +++ b/src/core/transaction.c
> > > @@ -848,6 +848,12 @@ int transaction_add_job_and_dependencies(
> > >          assert(type < _JOB_TYPE_MAX_IN_TRANSACTION);
> > >          assert(unit);
> > >  
> > > +        /* Before adding jobs for this unit, let's ensure that its state has been loaded.
> > > +         * This matters when jobs are spawned as part of coldplugging itself (see e. g. path_coldplug()).
> > > +         * This way, we "recursively" coldplug units, ensuring that we do not look at state of
> > > +         * not-yet-coldplugged units. */
> > > +        unit_coldplug(unit);
> > 
> > I like the simplicity of this patch actually, but it's unfortunately
> > too simple: coldplugging is to be applied only for services that are
> > around at the time we come back from a reload. If you start a service
> > during runtime, without any reloading anywhere around, we should not
> > coldplug at all.
> > 
> > I figure we need a "coldplugging" bool or so in Manager, which we set
> > while coldplugging and can then check here.
> 
> Yeah, right, I've fixed it locally but forgot to send a follow-up mail.
> Actually, isn't it "unit->manager->n_reloading > 0"?

Yes, indeed, that should suffice.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-27 Thread Ivan Shapovalov
On 2015-04-27 at 17:14 +0200, Lennart Poettering wrote:
> On Sat, 25.04.15 05:48, Ivan Shapovalov (intelfx...@gmail.com) wrote:
> 
> > On 2015-04-25 at 04:00 +0300, Ivan Shapovalov wrote:
> > > On 2015-04-24 at 16:04 +0200, Lennart Poettering wrote:
> > > > [...]
> > > > 
> > > > Actually, it really is about the UNIT_TRIGGERS dependencies 
> > > > only,
> > > > since we don't do the retroactive deps stuff at all when we are
> > > > coldplugging, it's conditionalized in m->n_reloading <= 0.
> > > 
> > > So, I think I understand the problem. We should do this not only 
> > > for
> > > UNIT_TRIGGERS, but also for any dependencies which may matter
> > > when activating that unit. That is, anything which is referenced 
> > > by
> > > transaction_add_job_and_dependencies()... recursively.
> > 
> > Here is what I have in mind. Don't know whether this is correct, 
> > but
> > it fixes the problem for me.
> > 
> > From 515d878e526e52fc154874e93a4c97555ebd8cff Mon Sep 17 00:00:00 2001
> > From: Ivan Shapovalov 
> > Date: Sat, 25 Apr 2015 04:57:59 +0300
> > Subject: [PATCH] core: coldplug all units which participate in jobs
> > 
> > This is yet another attempt to fix coldplugging order (more especially,
> > the problem which happens when one creates a job during coldplugging, and
> > it references a not-yet-coldplugged unit).
> > 
> > Now we forcibly coldplug all units which participate in jobs. This
> > is a superset of previously implemented handling of the UNIT_TRIGGERS
> > dependencies, so that handling is removed.
> > ---
> >  src/core/transaction.c | 6 ++++++
> >  src/core/unit.c        | 8 --------
> >  2 files changed, 6 insertions(+), 8 deletions(-)
> > 
> > diff --git a/src/core/transaction.c b/src/core/transaction.c
> > index 5974b1e..a02c02c 100644
> > --- a/src/core/transaction.c
> > +++ b/src/core/transaction.c
> > @@ -848,6 +848,12 @@ int transaction_add_job_and_dependencies(
> >  assert(type < _JOB_TYPE_MAX_IN_TRANSACTION);
> >  assert(unit);
> >  
> > +/* Before adding jobs for this unit, let's ensure that its state has been loaded.
> > + * This matters when jobs are spawned as part of coldplugging itself (see e.g. path_coldplug()).
> > + * This way, we "recursively" coldplug units, ensuring that we do not look at state of
> > + * not-yet-coldplugged units. */
> > +unit_coldplug(unit);
> 
> I like the simplicity of this patch actually, but it's unfortunately
> too simple: coldplugging is to be applied only for services that are
> around at the time we come back from a reload. If you start a service
> during runtime, without any reloading anywhere around, we should not
> coldplug at all.
> 
> I figure we need a "coldplugging" bool or so in Manager, which we set
> while coldplugging and can then check here.

Yeah, right, I've fixed it locally but forgot to send a follow-up mail.
Actually, isn't it "unit->manager->n_reloading > 0"?

-- 
Ivan Shapovalov / intelfx /


signature.asc
Description: This is a digitally signed message part
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] KillUserProcesses timeout

2015-04-27 Thread Lennart Poettering
On Fri, 24.04.15 20:54, Mikhail Morfikov (mmorfi...@gmail.com) wrote:

> On Fri, 24 Apr 2015 19:04:53 +0200
> Lennart Poettering  wrote:
> 
> > On Tue, 27.01.15 04:28, Mikhail Morfikov (mmorfi...@gmail.com) wrote:
> > 
> > Sorry for the really late reply, still trying to work through piles of
> > mail.
> > > 
> > > > Hmm, not sure I follow. 
> > > > 
> > > 
> > > It only happens if I'm logged in as root in tmux. 
> > > 
> > > > The session is shown as closing, that's good. Can you check what
> > > > "systemctl status" reports on the scope unit if this hang happens?
> > > > 
> > > > Lennart
> > > > 
> > > 
> > > I'm not sure if I did the right thing, but there it is.
> > > 
> > > After logout:
> > > 
> > > ● user-1000.slice
> > >Loaded: loaded
> > >Active: active since Tue 2015-01-27 04:13:31 CET; 8min ago
> > >CGroup: /user.slice/user-1000.slice
> > >├─session-7.scope
> > >│ ├─32562 gpg-agent -s --enable-ssh-support --daemon --write-env-file /home/morfik/.gpg-agent-info
> > >│ ├─32692 tmux attach-session -t logi
> > >│ ├─32696 bash -c cat /dev/logi | ccze -m ansi -p syslog -C
> > >│ ├─32697 -bash
> > >│ ├─32698 newsbeuter
> > >│ ├─32702 cat /dev/logi
> > >│ ├─32703 ccze -m ansi -p syslog -C
> > >│ ├─34376 su -
> > >│ └─34393 -su
> > 
> > This here is probably the issue: you opened a su session from your
> > session, and that keeps things referenced and open.
> > 
> > Lennart
> > 
> Yep, that's the problem, but after 10-20 secs (I don't remember exactly)
> the session will be closed. The question was: is there a way to make it
> faster, i.e. without the delay, so the session is closed right after the
> user logs off?

Hmm that's weird.

Can you reproduce this again and use "loginctl session-status" on all
sessions that remain after logout as well as "loginctl user-status" on
the user in question, and paste this here?
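
That is, something along these lines (session and user IDs taken from
the earlier paste):

    loginctl list-sessions
    loginctl session-status 7
    loginctl user-status 1000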

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-27 Thread Lennart Poettering
On Sat, 25.04.15 05:48, Ivan Shapovalov (intelfx...@gmail.com) wrote:

> On 2015-04-25 at 04:00 +0300, Ivan Shapovalov wrote:
> > On 2015-04-24 at 16:04 +0200, Lennart Poettering wrote:
> > > [...]
> > > 
> > > Actually, it really is about the UNIT_TRIGGERS dependencies only,
> > > since we don't do the retroactive deps stuff at all when we are
> > > coldplugging, it's conditionalized in m->n_reloading <= 0.
> > 
> > So, I think I understand the problem. We should do this not only for
> > UNIT_TRIGGERS, but also for any dependencies which may matter
> > when activating that unit. That is, anything which is referenced by
> > transaction_add_job_and_dependencies()... recursively.
> 
> Here is what I have in mind. Don't know whether this is correct, but
> it fixes the problem for me.
> 
> From 515d878e526e52fc154874e93a4c97555ebd8cff Mon Sep 17 00:00:00 2001
> From: Ivan Shapovalov 
> Date: Sat, 25 Apr 2015 04:57:59 +0300
> Subject: [PATCH] core: coldplug all units which participate in jobs
> 
> This is yet another attempt to fix coldplugging order (more especially,
> the problem which happens when one creates a job during coldplugging, and
> it references a not-yet-coldplugged unit).
> 
> Now we forcibly coldplug all units which participate in jobs. This
> is a superset of previously implemented handling of the UNIT_TRIGGERS
> dependencies, so that handling is removed.
> ---
>  src/core/transaction.c | 6 ++
>  src/core/unit.c| 8 
>  2 files changed, 6 insertions(+), 8 deletions(-)
> 
> diff --git a/src/core/transaction.c b/src/core/transaction.c
> index 5974b1e..a02c02c 100644
> --- a/src/core/transaction.c
> +++ b/src/core/transaction.c
> @@ -848,6 +848,12 @@ int transaction_add_job_and_dependencies(
>  assert(type < _JOB_TYPE_MAX_IN_TRANSACTION);
>  assert(unit);
>  
> +/* Before adding jobs for this unit, let's ensure that its state has been loaded.
> + * This matters when jobs are spawned as part of coldplugging itself (see e.g. path_coldplug()).
> + * This way, we "recursively" coldplug units, ensuring that we do not look at state of
> + * not-yet-coldplugged units. */
> +unit_coldplug(unit);

I like the simplicity of this patch actually, but it's unfortunately
too simple: coldplugging is to be applied only for services that are
around at the time we come back from a reload. If you start a service
during runtime, without any reloading anywhere around, we should not
coldplug at all.

I figure we need a "coldplugging" bool or so in Manager, which we set
while coldplugging and can then check here.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Dimitri John Ledkov
On 27 April 2015 at 15:56, Lennart Poettering  wrote:
> On Mon, 27.04.15 15:44, Dimitri John Ledkov (dimitri.j.led...@intel.com) 
> wrote:
>
>> > Well, networkd on the host automatically sets up IPv4 masquerading for
>> > each container. We simply don't do anything equivalent for IPv6
>> > currently.
>> >
>> > Ideally we wouldn't have to do NAT for IPv6 to make this work, and
>> > instead would pass on some ipv6 subnet we acquired from uplink without
>> > NAT to each container, but we currently don't have infrastructure for
>> > that in networkd, and I am not even sure how this could really work,
>> > my ipv6-fu is a bit too limited...
>> >
>> > or maybe we should do ipv6 nat after all, under the logic that
>> > containers are just an implementation detail of the local host rather
>> > than something to be made visible to the outside world. However, code
>> > for this does not exist either.
>> >
>> > Or in other words: ipv6 setup needs some manual networking setup on
>> > the host.
>>
>> One should roll the dice and generate unique local address /48 prefix
>> and use that to setup local addressing, ideally with
>> autoconfigurations (e.g. derive a fake mac from container uuid and
>> using the "hosts's" ULA prefix auto-assign ipv6 address)
>
> Well, would that enable automatic, correcting routing between the
> container and the host's external network? That's kinda what this all
> is about...

Yes... that is, the host needs to be assigned a subnet and IP from the /48,
and containers routed via that host IP.

Or "simply" (aka "expensively") run radvd on the host for the containers to
do all of that (route & ULA prefix advertisement, and therefore complete
auto-configuration).
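
A minimal radvd.conf along those lines might look like this (a sketch;
the interface name and ULA prefix are made up and would need to match
the actual setup):

    interface ve-mycontainer
    {
            AdvSendAdvert on;
            prefix fd00:1234:5678:1::/64
            {
                    AdvOnLink on;
                    AdvAutonomous on;
            };
    };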

-- 
Regards,

Dimitri.
Pura Vida!

https://clearlinux.org
Open Source Technology Center
Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn trouble

2015-04-27 Thread Lennart Poettering
On Sat, 25.04.15 01:44, Tobias Hunger (tobias.hun...@gmail.com) wrote:

> By the way: Is there a way to get the journal from a --ephemeral container?
> 
> I had expected --link-journal=host to work, but --link-journal seems
> to not be allowed in any way.

I figure we should teach journalctl -m to actually watch running
containers and access their journals directly, without requiring a
symlink in /var/log/journal. For ephemeral containers (which exist
purely during runtime), this sounds like the best option since we
shouldn't litter persistent file systems with objects that should not
persist.

Added to TODO list.
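
In the meantime, if systemd-journald runs inside the container and your
journalctl already has the -M/--machine= switch, the journal of a
running container can be read directly (a workaround, not the -m
integration described above; "mycontainer" is a placeholder name):

    journalctl -M mycontainer -b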

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Tomasz Torcz
On Mon, Apr 27, 2015 at 04:56:18PM +0200, Lennart Poettering wrote:
> On Mon, 27.04.15 15:44, Dimitri John Ledkov (dimitri.j.led...@intel.com) 
> wrote:
> 
> > > Well, networkd on the host automatically sets up IPv4 masquerading for
> > > each container. We simply don't do anything equivalent for IPv6
> > > currently.
> > >
> > > Ideally we wouldn't have to do NAT for IPv6 to make this work, and
> > > instead would pass on some ipv6 subnet we acquired from uplink without
> > > NAT to each container, but we currently don't have infrastructure for
> > > that in networkd, and I am not even sure how this could really work,
> > > my ipv6-fu is a bit too limited...
> > >
> > > or maybe we should do ipv6 nat after all, under the logic that
> > > containers are just an implementation detail of the local host rather
> > > than something to be made visible to the outside world. However, code
> > > for this does not exist either.
> > >
> > > Or in other words: ipv6 setup needs some manual networking setup on
> > > the host.
> > 
> > One should roll the dice and generate unique local address /48 prefix
> > and use that to setup local addressing, ideally with
> > autoconfigurations (e.g. derive a fake mac from container uuid and
> > using the "hosts's" ULA prefix auto-assign ipv6 address)
> 
> Well, would that enable automatic, correcting routing between the
> container and the host's external network? That's kinda what this all
> is about...

  If you have radvd running, it should.  By the way, speaking of NAT in
the context of IPv6 is heresy.

-- 
Tomasz Torcz "God, root, what's the difference?"
xmpp: zdzich...@chrome.pl "God is more forgiving."

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Lennart Poettering
On Mon, 27.04.15 15:44, Dimitri John Ledkov (dimitri.j.led...@intel.com) wrote:

> > Well, networkd on the host automatically sets up IPv4 masquerading for
> > each container. We simply don't do anything equivalent for IPv6
> > currently.
> >
> > Ideally we wouldn't have to do NAT for IPv6 to make this work, and
> > instead would pass on some ipv6 subnet we acquired from uplink without
> > NAT to each container, but we currently don't have infrastructure for
> > that in networkd, and I am not even sure how this could really work,
> > my ipv6-fu is a bit too limited...
> >
> > or maybe we should do ipv6 nat after all, under the logic that
> > containers are just an implementation detail of the local host rather
> > than something to be made visible to the outside world. However, code
> > for this does not exist either.
> >
> > Or in other words: ipv6 setup needs some manual networking setup on
> > the host.
> 
> One should roll the dice and generate unique local address /48 prefix
> and use that to setup local addressing, ideally with
> autoconfigurations (e.g. derive a fake mac from container uuid and
> using the "hosts's" ULA prefix auto-assign ipv6 address)

Well, would that enable automatic, correcting routing between the
container and the host's external network? That's kinda what this all
is about...

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Question about system-update.target

2015-04-27 Thread Richard Hughes
On 27 April 2015 at 15:18, Lennart Poettering  wrote:
> Well, thinking about this, maybe OnFailure=reboot.target is missing
> the point for these services. After all, the system should reboot
> regardless of whether the update fails or not...

Not quite; PackageKit supports an update-offline-and-then-shutdown
mode at the request of the GNOME designers. If we can configure that
using systemd I'd gladly rip out the code in PackageKit and move it
down to systemd.

> This would mean that any updating service would pull this in, and
> order itself before it. Since this new service is hence ordered after
> each updating service, it will only run after all of them have finished,
> regardless of whether they failed or succeeded.
> Does this make sense to you?

Sure, it does, modulo the feature above.

Richard.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Dimitri John Ledkov
On 27 April 2015 at 15:01, Lennart Poettering  wrote:
> On Sun, 26.04.15 16:50, Kai Krakow (hurikha...@gmail.com) wrote:
>
>> Hello!
>>
>> I've successfully created a Gentoo container on top of a Gentoo host. I can
>> start the container with machinectl. I can also log in using SSH. So mission
>> almost accomplished (it should become a template for easy vserver cloning).
>>
>> But from within the IPv6-capable container I cannot access the IPv6 outside
>> world. Name resolution via IPv6 fails, as does pinging to IPv6. It looks
>> like systemd-nspawn does only setup IPv4 routes to access outside my gateway
>> boundary. IPv6 does not work.
>
> Well, networkd on the host automatically sets up IPv4 masquerading for
> each container. We simply don't do anything equivalent for IPv6
> currently.
>
> Ideally we wouldn't have to do NAT for IPv6 to make this work, and
> instead would pass on some ipv6 subnet we acquired from uplink without
> NAT to each container, but we currently don't have infrastructure for
> that in networkd, and I am not even sure how this could really work,
> my ipv6-fu is a bit too limited...
>
> or maybe we should do ipv6 nat after all, under the logic that
> containers are just an implementation detail of the local host rather
> than something to be made visible to the outside world. However, code
> for this does not exist either.
>
> Or in other words: ipv6 setup needs some manual networking setup on
> the host.

One should roll the dice and generate unique local address /48 prefix
and use that to setup local addressing, ideally with
autoconfigurations (e.g. derive a fake mac from container uuid and
using the "hosts's" ULA prefix auto-assign ipv6 address)

For giggles see http://unique-local-ipv6.com/
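
For example, a random ULA /48 can be rolled on the shell (a sketch; RFC
4193 proper also mixes in a timestamp and an EUI-64, but random bits are
fine in practice):

    # fd00::/8 plus 40 random bits yields a fdXX:XXXX:XXXX::/48 prefix
    printf 'fd%s:%s%s:%s%s::/48\n' $(od -An -N5 -tx1 /dev/urandom)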

-- 
Regards,

Dimitri.
Pura Vida!

https://clearlinux.org
Open Source Technology Center
Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn trouble

2015-04-27 Thread Lennart Poettering
On Sat, 25.04.15 00:14, Tobias Hunger (tobias.hun...@gmail.com) wrote:

> Hello,
> 
> sorry (again) for the delay. I unfortunately cannot check in on this
> as often as I would like :-(
> 
> Lennart: Thank you for that patch, that does indeed fix my issue with
> read-only machine images.
> 
> The networking issue does work better when iptables are used. All I
> needed to do was to make sure that packets from the VM are not
> getting dropped in the forwarding chain. Is there a way for me to do
> that automatically as interfaces to containers are created? I do not
> want to just accept every machine talking to everything else.
> Paranoia :-)

This is currently not supported, but I figure we could add that. Added
to the TODO list.

> What I noticed though is that the VM has the google nameservers set
> up. That came as a bit of a surprise: I had expected either the host
> to be the only DNS server registered (providing a DNS proxy), or at
> least that the nameservers of the host would also be set in the VM.
> Is that a known issue or are my expectations wrong?

When you use the word "vm" you refer to "container"?

(So far i used the name "vm" for full machine virtualization such as
kvm or virtualbox, and "container" for same-kernel virtualization,
such as nspawn).

networkd does not proxy DNS. however, networkd does forward DNS
configuration it learnt via DHCP. Also, nspawn by default actually
copies /etc/resolv.conf from the host into the container at boot,
though we probably should stop doing that...

What does "networkctl status -a" say when run in the container?

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] getaddrinfo() API with systemd

2015-04-27 Thread Lennart Poettering
On Sat, 25.04.15 11:05, Nanda Kumar (nandakumar@gmail.com) wrote:

> Hi,
> 
> I am facing a problem while querying DNS using the getaddrinfo() API in a
> process started by systemd. Despite having a nameserver entry in
> /etc/resolv.conf, the query fails to resolve. After a few system call
> traces, I found that the problem is due to systemd's name resolution: for
> a process started by systemd, the getaddrinfo() DNS query is routed via
> systemd, while in stand-alone mode (i.e. spawned by a shell) the query
> happens normally. I changed /etc/systemd/resolved.conf to add my DNS
> address and restarted systemd-resolved. Now the DNS query works properly.
> 
> Is there any way to bypass systemd for getaddrinfo() [e.g. by passing
> extra flags in hints] and get the work done in the usual way?

Well, if you don't want to use resolved, then remove it from /etc/nsswitch.conf.
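
That is, drop the "resolve" entry from the hosts line. For example (a
sketch; the exact entries vary by distribution):

    # /etc/nsswitch.conf, before:
    hosts: files resolve dns
    # and after, bypassing resolved:
    hosts: files dns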

> In my case /etc/resolv.conf is a symlink to systemd's runtime resolv.conf.
> Will unlinking it and keeping /etc/resolv.conf independent solve the
> problem?

The way this works is this: resolved always writes out information
about all DNS servers it learnt to
/run/systemd/resolve/resolv.conf. If your /etc/resolv.conf points to
that file, then any app that bypasses NSS but reads that file will use
the same DNS servers as resolved, but talk directly to them. If
/etc/resolv.conf does not point there, then resolved will notice and
actually use it as a source of configuration to learn additional DNS
servers from.

In essence, if you use networkd for all your networking configuration,
then making /etc/resolv.conf a symlink to
/run/systemd/resolve/resolv.conf is the right thing. If you use
something else, then not.
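
For example (assuming resolved is running and should own the host's DNS
configuration):

    ln -snf /run/systemd/resolve/resolv.conf /etc/resolv.conf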

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-27 Thread Lennart Poettering
On Fri, 24.04.15 21:39, Andrei Borzenkov (arvidj...@gmail.com) wrote:

> On Fri, 24 Apr 2015 20:19:33 +0200,
> Lennart Poettering wrote:
> 
> > On Fri, 24.04.15 20:46, Ivan Shapovalov (intelfx...@gmail.com) wrote:
> > 
> > > On 2015-04-24 at 19:13 +0200, Lennart Poettering wrote:
> > > > On Fri, 24.04.15 20:06, Ivan Shapovalov (intelfx...@gmail.com) wrote:
> > > > 
> > > > > With this patch applied, on `systemctl daemon-reload` I get the
> > > > > following:
> > > > 
> > > > Any chance you can do the same with debugging on? "systemd-analyze
> > > > set-log-level debug" right before the daemon-reload?
> > > > 
> > > > That should show the transaction being queued in.
> > > 
> > > Sure, I've run it (log attached), but well... it did not show
> > > any new jobs being enqueued. But alsa-restore.service _did_ run and
> > > did reset my ALSA volume to the bootup value.
> > > 
> > > Pretty confused,
> > 
> > Note that starting services is recursive: if a service is enqueued,
> > then we add all its dependencies to the transaction, verify that the
> > transaction is without cycles and can be applied, and then actually
> > apply it.
> > 
> > This means, that starting a service foo.service, that requires
> > bar.target, that requires waldo.service, will mean that waldo.service
> > is also started, even if bar.target is already started anyway.
> 
> I was sure that on reload systemd simply restores the previous state of
> services. Why does it attempt to start anything in the first place?

Well, sure, it does restore it. But triggers might start something after
a reload.

> It makes reload potentially dangerous; what if a service was stopped on
> purpose and should remain that way?

Well, the next time you start something and that something declares it
needs something else, then we will start that something else before that
something, and that's the right thing to do.

I mean, don't misunderstand what I wrote earlier: if a service is
already started, and you do a reload, then its deps will not be pulled
in again. Only newly enqueued start jobs will have that effect.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Question about system-update.target

2015-04-27 Thread Lennart Poettering
On Mon, 27.04.15 09:52, Richard Hughes (hughsi...@gmail.com) wrote:

> At the moment the only user of system-update.target is PackageKit,
> which does package updates offline in the special system-update boot
> target. The other project that has just started using this mode is
> fwupd, which is using it to update BIOS-based firmware (not UEFI
> capsules) offline.
> 
> I've installed a symlink to system-update.target.wants so that the
> fwupd process gets run, but I'm a little worried about what happens
> when there are two optional services being run, both with
> OnFailure=reboot.target

Well, thinking about this, maybe OnFailure=reboot.target is missing
the point for these services. After all, the system should reboot
regardless of whether the update fails or not...

So maybe add a service system-update-post.service or so, that uses:

[Service]
Type=oneshot
ExecStart=/bin/systemctl reboot --no-block

Then make sure that all your update services are of Type=oneshot
themselves and use Before=system-update-post.service and
Wants=system-update-post.service?

This would mean that any updating service would pull this in, and
order itself before it. Since this new service is hence ordered after
each updating service, it will only run after all of them have finished,
regardless of whether they failed or succeeded.
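
An updating service following that scheme might hence look like this (a
sketch; the unit and binary names are hypothetical):

    [Unit]
    Description=Hypothetical offline updater
    Wants=system-update-post.service
    Before=system-update-post.service

    [Service]
    Type=oneshot
    ExecStart=/usr/libexec/my-offline-update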

Does this make sense to you?

If so, we could probably add system-update-post.service as a standard
service to systemd itself.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn -- bind multiple directories

2015-04-27 Thread arnaud gaboury
On Mon, Apr 27, 2015 at 3:44 PM, Lennart Poettering
 wrote:
> On Mon, 27.04.15 10:19, arnaud gaboury (arnaud.gabo...@gmail.com) wrote:
>
>> To install a Fedora container from the raw image in my host Archlinux,
>> I can do this:
>>
>> # systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw \
>>     --bind=/var/lib/machines/enl:/mnt
>>
>> Now for the use of two btrfs subvol, I would like to bind
>> /var/lib/machines/enl/{etc,var}
>>
>> Does the systemd bind option accept multiple directories to bind?
>> Something like this:
>>
>> # systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw \
>>     --bind=/var/lib/machines/enl:/mnt /var/lib/machines/enl/etc:/mnt/etc \
>>     /var/lib/machines/enl/var:/mnt/var
>
> You can specify --bind= multiple times in one command line to bind
> mount multiple directories. I have updated the man page now to
> explicitly mention this.
>
> The command line you are looking for is hence:
>
> # systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw \
>     --bind=/var/lib/machines/enl:/mnt \
>     --bind=/var/lib/machines/enl/etc:/mnt/etc \
>     --bind=/var/lib/machines/enl/var:/mnt/var

This feature solved my issue with my Btrfs setup of three non-nested
subvolumes: rootvol, etc and var. First boot the raw Fedora image and run
# mkdir -p /mnt/{etc,var}, log out, then boot again binding all three
subvolumes, as sketched below.
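
Spelled out as commands, the workflow is roughly this (a sketch reusing
the paths from this thread):

    # first boot: create the mount points, then log out
    systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw
    mkdir -p /mnt/{etc,var}    # run this inside the container

    # second boot: bind all three subvolumes
    systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw \
        --bind=/var/lib/machines/enl:/mnt \
        --bind=/var/lib/machines/enl/etc:/mnt/etc \
        --bind=/var/lib/machines/enl/var:/mnt/var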


>
> Lennart
>
> --
> Lennart Poettering, Red Hat



-- 

google.com/+arnaudgabourygabx
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-27 Thread Lennart Poettering
On Mon, 27.04.15 16:59, Mantas Mikulėnas (graw...@gmail.com) wrote:

> I'm guessing from the error message that it's not a shell script but nginx
> itself configured to use "/dev/stderr" as its log file, so there's no >&
> that could be used...

If this indeed is the case, try using /dev/console instead; this is
also forwarded to stderr by nspawn...
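
For nginx that would be a one-line change in nginx.conf (a sketch; the
log level is arbitrary):

    error_log /dev/console warn;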

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Rebooting systemd-nspawn container results in shutdown

2015-04-27 Thread Lennart Poettering
On Sun, 26.04.15 16:55, Kai Krakow (hurikha...@gmail.com) wrote:

> Hello!
> 
> I've successfully created a Gentoo container on top of a Gentoo host. I can 
> start the container with machinectl, as I can with "systemctl start ...".
> 
> Inside the container (logged in via SSH), I could issue a reboot command. 
> But that just results in the container being shut down. It never comes back 
> unless I restart the machine with systemctl or machinectl.

What systemd versions run on the host and in the container?

if you strace the nspawn process, and then issue the reboot command,
what are the last 20 lines this generates when nspawn exits? Please
paste somewhere.

Is the service in a failed state or so when this doesn't work?

What is the log output of the service then?

> BTW: Is there a way to automatically bind-mount some directories instead of
> using "systemctl edit --full" on the service file and adding those?

Currently not, but there's a TODO item to add ".nspawn" files that may
be placed next to container directories with additional options.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Lennart Poettering
On Sun, 26.04.15 16:50, Kai Krakow (hurikha...@gmail.com) wrote:

> Hello!
> 
> I've successfully created a Gentoo container on top of a Gentoo host. I can 
> start the container with machinectl. I can also log in using SSH. So mission 
> almost accomplished (it should become a template for easy vserver cloning).
> 
> But from within the IPv6-capable container I cannot access the IPv6 outside 
> world. Name resolution via IPv6 fails, as does pinging to IPv6. It looks 
> like systemd-nspawn only sets up IPv4 routes to access outside my gateway 
> boundary. IPv6 does not work.

Well, networkd on the host automatically sets up IPv4 masquerading for
each container. We simply don't do anything equivalent for IPv6
currently.

Ideally we wouldn't have to do NAT for IPv6 to make this work, and
instead would pass on some ipv6 subnet we acquired from uplink without
NAT to each container, but we currently don't have infrastructure for
that in networkd, and I am not even sure how this could really work,
my ipv6-fu is a bit too limited...

or maybe we should do ipv6 nat after all, under the logic that
containers are just an implementation detail of the local host rather
than something to be made visible to the outside world. However, code
for this does not exist either.

Or in other words: ipv6 setup needs some manual networking setup on
the host.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-27 Thread Mantas Mikulėnas
On Apr 27, 2015 16:39, "Lennart Poettering"  wrote:
>
> On Sun, 26.04.15 14:32, Peter Paule (systemd-de...@fedux.org) wrote:
>
> > BTW: I did the `echo "asdf" > /dev/stderr`-thing just to test if
> > `/dev/stderr` worked as expected.
>
> /dev/stderr does not work for socket fds, and that's a kernel
> > limitation, systemd can't do much about it.
>
> > What I do not understand is: what changed in systemd, given that the
> > service-unit thing worked in some earlier versions?
>
> We changed nspawn so that it may be included in shell
> pipelines. Effectively this meant passing through the original
> stdin/stdout that nspawn got all the way down to PID 1 inside the
> container. We do so now if we are invoked non-interactively, i.e. with
> stdin/stdout not being a tty.
>
> Previously, we would never pass through fds, but always create a pty
> inside the container and automatically forward all bytes of
> stdin/stdout from outside the container to it and back. However, that
> broke shell pipelines, since it ate up the independent EOF on stdin
> and stdout: ptys cannot signal those individually (there's only a
> hangup that terminates both directions at once), but that's a property
> you inherently need for any kind of pipelines.
>
> I am pretty sure that the new behaviour is a ton more correct though:
> with this you get the same behaviour if you start a process
> non-interactively as a service or inside an nspawn container, the same
> fds, and hence the same (broken) /dev/stderr semantics.
>
> > And what can I do to make it work again? There seems to be no other
> > logging target _today_ both for nginx and apache which makes them
> > compatible with journald.
>
> Do not use /dev/stderr. If you are in a shell script replace this:
>
>echo foobar > /dev/stderr
>
> with this
>
>echo foobar 1>&2
>
> The latter will just duplicate stderr onto stdout, the former will reopen
> stderr's target as stdout. Which is a difference, though a non-obvious one,
> further complicated by the fact that GNU bash (though not necessarily
> other shells) actually automatically does the second command if you pass
> it the first command. The first command does not work (in non-bash
> shells..) if stderr is a socket, the second command does.

I'm guessing from the error message that it's not a shell script but nginx
itself configured to use "/dev/stderr" as its log file, so there's no >&
that could be used...

>
> Lennart
>
> --
> Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn -- bind multiple directories

2015-04-27 Thread arnaud gaboury
On Mon, Apr 27, 2015 at 3:44 PM, Lennart Poettering
 wrote:
> On Mon, 27.04.15 10:19, arnaud gaboury (arnaud.gabo...@gmail.com) wrote:
>
>> To install a Fedora container from the raw image in my host Archlinux,
>> I can do this:
>>
>> # systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw \
>>     --bind=/var/lib/machines/enl:/mnt
>>
>> Now for the use of two btrfs subvol, I would like to bind
>> /var/lib/machines/enl/{etc,var}
>>
>> Does the systemd bind option accept multiple directories to bind?
>> Something like this:
>>
>> # systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw \
>>     --bind=/var/lib/machines/enl:/mnt /var/lib/machines/enl/etc:/mnt/etc \
>>     /var/lib/machines/enl/var:/mnt/var
>
> You can specify --bind= multiple times in one command line to bind
> mount multiple directories. I have updated the man page now to
> explicitly mention this.
>
> The command line you are looking for is hence:
>
> # systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw \
>     --bind=/var/lib/machines/enl:/mnt \
>     --bind=/var/lib/machines/enl/etc:/mnt/etc \
>     --bind=/var/lib/machines/enl/var:/mnt/var

Very good.
Thank you for the hard job

>
> Lennart
>
> --
> Lennart Poettering, Red Hat



-- 

google.com/+arnaudgabourygabx
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] importd assumes mkfs.btrfs is installed

2015-04-27 Thread Lennart Poettering
On Sat, 25.04.15 21:07, Peter Paule (systemd-de...@fedux.org) wrote:

> Hi Lennart,
> 
> I prepared a virtual machine to investigate the nginx-issue. This
> virtual machine is very very basic and had no mkfs.btrfs installed and
> no native btrfs-fs available.
> 
> When I tried to download a new dkr-image machine, I got the following
> error. This error disappeared after I installed the btrfs-progs.
> 
>   # machinectl pull-dkr --dkr-index https://index.hub.docker.com feduxorg/centos-nginx --verify=no
>   Failed transfer image: mkfs.btrfs died abnormally.
> 
> I think in the current implementation there's no check if mkfs.btrfs is
> really installed on the system.
> 
> Does it make sense to add a test and a helpful error message to importd?

Yupp, we can certainly improve the error message for this case. Added
to the TODO list, thanks!
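
Until such a check lands, a manual test along these lines avoids the
confusing failure (shell sketch):

    command -v mkfs.btrfs >/dev/null 2>&1 || {
            echo "mkfs.btrfs not found, install btrfs-progs first" >&2
            exit 1
    }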

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH 2/2] sysv-generator: remove NULL pointer dereference

2015-04-27 Thread Lennart Poettering
On Sun, 26.04.15 21:04, Thomas H.P. Andersen (pho...@gmail.com) wrote:

> On Sun, Apr 26, 2015 at 8:31 PM, Thomas H.P. Andersen  
> wrote:
> > On Sun, Apr 26, 2015 at 8:23 PM, Shawn Landden  
> > wrote:
> >> Actually you missed that free_sysvstub_hashmap does not tolerate NULL 
> >> pointers.
> > Indeed. I will commit that.
> 
> Wait. free_sysvstub_hashmapp does tolerate NULL pointers.
> 
> hashmap_steal_first will return NULL if the hashmap is NULL. And
> hashmap_free is fine with NULL too. Your patch makes it more obvious
> that free_sysvstub_hashmapp does tolerate NULL but destructors should
> tolerate NULL as per the coding style. So I guess it should just be
> assumed? I will leave it up to the others to decide what the best
> style is here.

Thomas, you are right.

The intention with hashmaps is that a NULL hashmap is considered
equivalent to an empty hashmap, thus saving us tons of explicit
allocations while keeping the code readable.

So yes, hashmap_steal_first() handles a NULL hashmap correctly, making
it a NOP that returns NULL, and so does hashmap_free().
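
So the destructor pattern under discussion boils down to roughly this (a
sketch; free_sysvstub() stands in for whatever frees a single entry):

    static void free_sysvstub_hashmapp(Hashmap **h) {
            SysvStub *stub;

            /* both calls below are NOPs on a NULL hashmap */
            while ((stub = hashmap_steal_first(*h)))
                    free_sysvstub(stub);

            hashmap_free(*h);
    }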

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn -- bind multiple directories

2015-04-27 Thread Lennart Poettering
On Mon, 27.04.15 10:19, arnaud gaboury (arnaud.gabo...@gmail.com) wrote:

> To install a Fedora container from the raw image in my host Archlinux,
> I can do this:
> 
> # systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw \
>     --bind=/var/lib/machines/enl:/mnt
> 
> Now for the use of two btrfs subvol, I would like to bind
> /var/lib/machines/enl/{etc,var}
> 
> Does the systemd bind option accept multiple directories to bind?
> Something like this:
> 
> # systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw \
>     --bind=/var/lib/machines/enl:/mnt /var/lib/machines/enl/etc:/mnt/etc \
>     /var/lib/machines/enl/var:/mnt/var

You can specify --bind= multiple times in one command line to bind
mount multiple directories. I have updated the man page now to
explicitly mention this.

The command line you are looking for is hence:

# systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw \
    --bind=/var/lib/machines/enl:/mnt \
    --bind=/var/lib/machines/enl/etc:/mnt/etc \
    --bind=/var/lib/machines/enl/var:/mnt/var

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-27 Thread Lennart Poettering
On Sun, 26.04.15 14:32, Peter Paule (systemd-de...@fedux.org) wrote:

> BTW: I did the `echo "asdf" > /dev/stderr`-thing just to test if
> `/dev/stderr` worked as expected.

/dev/stderr does not work for socket fds, and that's a kernel
limitation, systemd can't do much about it.

> What I do not understand is: what changed in systemd, given that the
> service-unit thing worked in some earlier versions?

We changed nspawn so that it may be included in shell
pipelines. Effectively this meant passing through the original
stdin/stdout that nspawn got all the way down to PID 1 inside the
container. We do so now if we are invoked non-interactively, i.e. with
stdin/stdout not being a tty.

Previously, we would never pass through fds, but always create a pty
inside the container and automatically forward all bytes of
stdin/stdout from outside the container to it and back. However, that
broke shell pipelines, since it ate up the independent EOF on stdin
and stdout: ptys cannot signal those individually (there's only a
hangup that terminates both directions at once), but that's a property
you inherently need for any kind of pipelines.

I am pretty sure that the new behaviour is a ton more correct though:
with this you get the same behaviour if you start a process
non-interactively as a service or inside an nspawn container, the same
fds, and hence the same (broken) /dev/stderr semantics.

> And what can I do to make it work again? There seems to be no other
> logging target _today_ both for nginx and apache which makes them
> compatible with journald.

Do not use /dev/stderr. If you are in a shell script replace this:

   echo foobar > /dev/stderr

with this

   echo foobar 1>&2

The latter will just duplicate stderr onto stdout, the former will reopen
stderr's target as stdout. Which is a difference, though a non-obvious one,
further complicated by the fact that GNU bash (though not necessarily
other shells) actually automatically does the second command if you pass
it the first command. The first command does not work (in non-bash
shells..) if stderr is a socket, the second command does.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [systemd-nspawn] nginx: [emerg] open() "/dev/stderr" failed (6: No such device or address)

2015-04-27 Thread Lennart Poettering
On Sun, 26.04.15 15:34, Peter Paule (systemd-de...@fedux.org) wrote:

> Maybe syslog will do the trick?

Well, the journal will do the trick, if you run systemd inside your
container. If you don't, then bind mounting the syslog socket might
suffice.

> 
> BTW:
> 
> Do I need a syslog daemon to receive messages on UDP 514, or is/will be
> systemd-journal-remote able to handle this? Didn't found a clue about
> that in the man-page.

No, journald does not cover that. Use rsyslog or syslog-ng if you care
about classic BSD syslog-over-UDP.
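
For rsyslog, the classic UDP input is two lines of configuration (a
sketch, using the legacy directive syntax):

    # /etc/rsyslog.conf: accept BSD syslog over UDP port 514
    $ModLoad imudp
    $UDPServerRun 514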

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] [RFC] umount: reduce verbosity

2015-04-27 Thread Lennart Poettering
On Fri, 24.04.15 12:37, Jonathan Boulle (jonathanbou...@gmail.com) wrote:

> Naive question, perhaps, but why does systemd even need to umount when
> being run in a mount namespace? Can't we let the kernel tear them down when
> it exits?

Well, so far our intention there was to ensure that the codepaths we
use inside a container and on bare-metal are as close as possible,
and only deviate where there's really no other sensible way.

I mean, let's not forget that nspawn originally was created as a
testing ground for our pid 1 code, so that we didn't have to
physically reboot all the time just to test it.

Hence: unless there's a really good reason to do something different
inside a container than on the host we will always opt for the same
codepaths. (Also, if there's some code that really needs to be
different inside a container, we'll try to avoid conditionalizing on
whether things are in a container or not, but much rather on
the actual feature that is missing/different in a container, for
example a missing capability or such)

> > > When rkt is started with --debug, the systemd logs are printed. When rkt
> > > is started without --debug, systemd is started with --log-target=null in
> > > order to mute the logs.
> >
> > That generally sounds a bit extreme...
> 
> do you have another suggestion? :-)

Log the output but redirect it somewhere useful so that the user can
look at it if he wants, but so that it isn't shown all the time?

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Question about system-update.target

2015-04-27 Thread AbH Belxjander Draconis Serechai
This seems to put the requirement as an if (a||b||c) == failure then
reboot.target,

where a, b and c are ALL required to complete before rebooting occurs.

I would think to specifically handle each tool as a process.

Would a specific script or tool already be available?

Is it possible to run the given services async as one-shots using
systemctl, then afterwards pull their status and log the individual
failures before restarting, as the desired behavior here?

Personally I would make a custom service the update target and
specifically have it fail over to reboot once ALL dependencies have been
run, as a one-shot fire-until-finished-with-exit strategy.

Hopefully I am understanding the issue with a possible execution strategy
for this specific case?

Is it even practical with current tools in some way?
On 27/04/2015 5:52 PM, "Richard Hughes"  wrote:

> At the moment the only user of system-update.target is PackageKit,
> which does package updates offline in the special system-update boot
> target. The other project that has just started using this mode is
> fwupd, which is using it to update BIOS-based firmware (not UEFI
> capsules) offline.
>
> I've installed a symlink to system-update.target.wants so that the
> fwupd process gets run, but I'm a little worried about what happens
> when there are two optional services being run, both with
> OnFailure=reboot.target
>
> What return code am I supposed to return if we launch
> fwupd-offline-update.service and there are no BIOS updates to apply?
> It seems to me that we want the behaviour of "OnFailure" to be "if
> none of the *-offline-update.service files returned success" rather
> than what happens now: we launch pk-offline-update with no
> package updates, which returns failure and reboots the machine
> before the fwupd offline update process even gets a chance to properly
> start.
>
> Ideas welcome, thanks.
>
> Richard
> ___
> systemd-devel mailing list
> systemd-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/systemd-devel
>
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Question about system-update.target

2015-04-27 Thread Richard Hughes
At the moment the only user of system-update.target is PackageKit,
which does package updates offline in the special system-update boot
target. The other project that has just started using this mode is
fwupd, which is using it to update BIOS-based firmware (not UEFI
capsules) offline.

I've installed a symlink to system-update.target.wants so that the
fwupd process gets run, but I'm a little worried about what happens
when there are two optional services being run, both with
OnFailure=reboot.target

What return code am I supposed to return if we launch
fwupd-offline-update.service and there are no BIOS updates to apply?
It seems to me that we want the behaviour of "OnFailure" to be "if
none of the *-offline-update.service files returned success" rather
than what happens now: we launch pk-offline-update with no
package updates, which returns failure and reboots the machine
before the fwupd offline update process even gets a chance to properly
start.

Ideas welcome, thanks.

Richard
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] systemd-nspawn -- bind multiple directories

2015-04-27 Thread arnaud gaboury
To install a Fedora container from the raw image in my host Archlinux,
I can do this:

# systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw \
    --bind=/var/lib/machines/enl:/mnt

Now for the use of two btrfs subvol, I would like to bind
/var/lib/machines/enl/{etc,var}

Does the systemd bind option accept multiple directories to bind?
Something like this:

# systemd-nspawn -M Fedora-Cloud-Base-22_Beta-20150415.x86_64.raw \
    --bind=/var/lib/machines/enl:/mnt /var/lib/machines/enl/etc:/mnt/etc \
    /var/lib/machines/enl/var:/mnt/var

Thank you for hints

-- 

google.com/+arnaudgabourygabx
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel