Re: [systemd-devel] Antw: Re: Unexplainable unit restart ("Start request repeated too quickly")

2019-06-04 Thread Michael Chapman
On Tue, 4 Jun 2019, Ulrich Windl wrote:
> >>> Michael Chapman wrote on 03.06.2019 at 13:14 in
> message :
> [...]
> > 
> > Um, OK. I don't think we're any closer to solving your problem though. :-)
> 
> Actually I am!
> The root of the problem is that any oneshot service without 
> RemainAfterExit=true is listed as "inactive (dead)" after being started. 
> I think the manual page should be more clear on that fact! Then if you 
> have a dependency like this (best viewed with a monospaced font):
>   B
>  / \
> A-C-E
>  \ /
>   D
> 
> where A is a target that wants B,C,D, and each of those in turn wants E 
> (which is the oneshot service), the following happens: When starting A, 
> E is NOT started ONCE, but it is started THREE TIMES, while my 
> expectation was it will be started once only. As I had set a burst limit 
> of 1, an unexpected failure was reported.
> 
> What I'm unsure about: does systemd wait for each "start of E" to terminate 
> before starting the next one (i.e. serialized), or are the starts being 
> attempted in parallel?
> 
> The obvious deficit in systemd is that it does not do a proper 2-pass 
> dependency analysis before executing: if it did a dependency 
> analysis for starting A, it would see that E has to be started ONCE 
> before B,C,D. Instead it seems to do a 1-pass analysis, firing the start 
> of E every time.

This doesn't seem right to me, as systemd _does_ have the concept of 
transactions to which multiple jobs can be attached, and the set of jobs 
does appear to be determined in one go based upon the configured 
requirement dependencies.

So I thought I'd set up a test case. I created a set of Type=oneshot units 
with Wants= dependencies as in your diagram. There were no ordering 
dependencies (Before= or After=). Each service just echoed its own name.
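
For the record, the units were all of this shape (a sketch; only the names
come from your diagram, the command is mine):

```ini
# E.service -- B, C and D look the same except that each of them also
# has Wants=E.service; A.service instead has
# Wants=B.service C.service D.service
[Unit]
Description=E

[Service]
Type=oneshot
ExecStart=/bin/echo E
```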

Then:

  $ systemctl --user daemon-reload
  $ systemctl --user start A.service

yielded:

  systemd[2312]: Reloading.
  systemd[2312]: Starting B...
  systemd[2312]: Starting D...
  echo[21672]: B
  systemd[2312]: Starting E...
  echo[21673]: D
  systemd[2312]: Starting A...
  echo[21674]: E
  systemd[2312]: Starting C...
  echo[21675]: A
  systemd[2312]: Started B.
  systemd[2312]: Started D.
  echo[21676]: C
  systemd[2312]: Started E.
  systemd[2312]: Started A.
  systemd[2312]: Started C.

As you can see, even E.service was only started once.

Are you sure you were actually doing everything in one transaction?
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

[systemd-devel] Dbus problem fix :) now news programs broken

2019-06-04 Thread Dorian ROSSE
Hello,


DBus is repaired since the Qt D-Bus updates, but I have a lot of broken 
programs, as follows:

Errors were encountered during execution:
systemd-coredump
openvpn
monotone-server
initramfs-tools
linux-image-unsigned-5.1.6-050106-generic
Error connecting: Error receiving data: Connection reset by peer

Thank you in advance for helping me repair these errors on my server 😊

Regards.


Dorian ROSSE.


[systemd-devel] Q: ConditionPathExists=

2019-06-04 Thread Ulrich Windl
Hi!

I have a question for ConditionPathExists:
If I specify two files like "ConditionPathExists=/etc/idredir.conf 
/etc/isredir.conf", I get a "start condition failed" even if both files exist.

There's also some confusion about where exactly a pipe symbol has to be placed:
   If multiple conditions are specified, the unit will be executed if
   all of them apply (i.e. a logical AND is applied). Condition checks
   can be prefixed with a pipe symbol (|) in which case a condition
   becomes a triggering condition. If at least one triggering

Do I have to write "|ConditionPathExists=..." or "ConditionPathExists=|..."?

Regards,
Ulrich Windl



[systemd-devel] Wtrlt: Re: Antw: Re: Unexplainable unit restart ("Start request repeated too quickly")

2019-06-04 Thread Ulrich Windl
(Forgot to reply to all)

--- Begin Message ---
>>> Michael Chapman wrote on 04.06.2019 at 11:04 in
message :
> On Tue, 4 Jun 2019, Ulrich Windl wrote:
>> >>> Michael Chapman wrote on 03.06.2019 at 13:14 in
>> message :
>> [...]
>> > 
>> > Um, OK. I don't think we're any closer to solving your problem though.
:‑)
>> 
>> Actually I am!
>> The root of the problem is that any oneshot service without 
>> RemainAfterExit=true is listed as "inactive (dead)" after being started. 
>> I think the manual page should be more clear on that fact! Then if you 
>> have a dependency like this (best viewed with a monospaced font):
>>   B
>>  / \
>> A‑C‑E
>>  \ /
>>   D
>> 
>> where A is a target that wants B,C,D, and each of those in turn wants E 
>> (which is the oneshot service), the following happens: When starting A, 
>> E is NOT started ONCE, but it is started THREE TIMES, while my 
>> expectation was it will be started once only. As I had set a burst limit 
>> of 1, an unexpected failure was reported.
>> 
>> What I'm unsure about: does systemd wait for each "start of E" to terminate 
>> before starting the next one (i.e. serialized), or are the starts being 
>> attempted in parallel?
>> 
>> The obvious deficit in systemd is that it does not do a proper 2‑pass 
>> dependency analysis before executing: if it did a dependency 
>> analysis for starting A, it would see that E has to be started ONCE 
>> before B,C,D. Instead it seems to do a 1‑pass analysis, firing the start 
>> of E every time.
> 
> This doesn't seem right to me, as systemd _does_ have the concept of 
> transactions to which multiple jobs can be attached, and the set of jobs 
> does appear to be determined in one go based upon the configured 
> requirement dependencies.
> 
> So I thought I'd set up a test case. I created a set of Type=oneshot units 
> with Wants= dependencies as in your diagram. There were no ordering 
> dependencies (Before= or After=). Each service just echoed its own name.
> 
> Then:
> 
>   $ systemctl ‑‑user daemon‑reload
>   $ systemctl ‑‑user start A.service
> 
> yielded:
> 
>   systemd[2312]: Reloading.
>   systemd[2312]: Starting B...
>   systemd[2312]: Starting D...
>   echo[21672]: B
>   systemd[2312]: Starting E...
>   echo[21673]: D
>   systemd[2312]: Starting A...
>   echo[21674]: E
>   systemd[2312]: Starting C...
>   echo[21675]: A
>   systemd[2312]: Started B.
>   systemd[2312]: Started D.
>   echo[21676]: C
>   systemd[2312]: Started E.
>   systemd[2312]: Started A.
>   systemd[2312]: Started C.
> 
> As you can see, even E.service was only started once.
> 
> Are you sure you were actually doing everything in one transaction?

I guess your version is significantly newer than 228. With 228 I see some odd
effects that are hard to explain (I'm still trying to learn).

As far as I can tell the following commands were issued (during package
upgrade):
reenable E
start E
reenable A
restart A

Regards,
Ulrich


--- End Message ---

Re: [systemd-devel] Q: ConditionPathExists=

2019-06-04 Thread Reindl Harald


On 04.06.19 at 13:32, Ulrich Windl wrote:
> Hi!
> 
> I have a question for ConditionPathExists:
> If I specify two files like "ConditionPathExists=/etc/idredir.conf 
> /etc/isredir.conf", I get a "start condition failed" even if both files exist.

why don't you just use

ConditionPathExists=/etc/idredir.conf
ConditionPathExists=/etc/isredir.conf

> There's also some confusion about where exactly a pipe symbol has to be placed:
>If multiple conditions are specified, the unit will be executed if
>all of them apply (i.e. a logical AND is applied). Condition checks
>can be prefixed with a pipe symbol (|) in which case a condition
>becomes a triggering condition. If at least one triggering
> 
> Do I have to write "|ConditionPathExists=..." or "ConditionPathExists=|..."

ConditionPathExists=|

did you stop reading where you stopped quoting: "If at least one
triggering condition is defined for a unit, then the unit will be
executed if at least one of the triggering conditions apply and all of
the non-triggering conditions"

it's the same logic as ReadOnlyPaths=-/whatever: avoid failing if the
path doesn't exist, and make a read-only namespace in case it exists

| or - before an option makes no sense in an INI-style config
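
for example:

```ini
[Unit]
# plain conditions are ANDed -- the unit runs only if BOTH files exist
ConditionPathExists=/etc/idredir.conf
ConditionPathExists=/etc/isredir.conf

# triggering conditions are ORed -- with the | prefix the unit runs if
# AT LEAST ONE of the files exists (use instead of the plain form above)
#ConditionPathExists=|/etc/idredir.conf
#ConditionPathExists=|/etc/isredir.conf
```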

Re: [systemd-devel] Q: ConditionPathExists=

2019-06-04 Thread Josef Moellers
On 04.06.19 13:32,  Ulrich Windl  wrote:
> Hi!
> 
> I have a question for ConditionPathExists:
> If I specify two files like "ConditionPathExists=/etc/idredir.conf 
> /etc/isredir.conf", I get a "start condition failed" even if both files exist.
> 
> There's also some confusion about where exactly a pipe symbol has to be placed:
>If multiple conditions are specified, the unit will be executed if
>all of them apply (i.e. a logical AND is applied). Condition checks
>can be prefixed with a pipe symbol (|) in which case a condition
>becomes a triggering condition. If at least one triggering
> 
> Do I have to write "|ConditionPathExists=..." or "ConditionPathExists=|..."?
The second.

Josef
-- 
SUSE Linux GmbH
Maxfeldstrasse 5
90409 Nuernberg
Germany
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

Re: [systemd-devel] Q: ConditionPathExists=

2019-06-04 Thread Reindl Harald


On 04.06.19 at 13:51, Reindl Harald wrote:
> 
> 
> On 04.06.19 at 13:32, Ulrich Windl wrote:
>> Hi!
>>
>> I have a question for ConditionPathExists:
>> If I specify two files like "ConditionPathExists=/etc/idredir.conf 
>> /etc/isredir.conf", I get a "start condition failed" even if both files 
>> exist.
> 
> why don't you just use
> 
> ConditionPathExists=/etc/idredir.conf
> ConditionPathExists=/etc/isredir.conf
> 
>> There's also some confusion about where exactly a pipe symbol has to be placed:
>>If multiple conditions are specified, the unit will be executed if
>>all of them apply (i.e. a logical AND is applied). Condition 
>> checks
>>can be prefixed with a pipe symbol (|) in which case a condition
>>becomes a triggering condition. If at least one triggering
>>
>> Do I have to write "|ConditionPathExists=..." or "ConditionPathExists=|..."
> 
> ConditionPathExists=|

BTW:

you could really make your life easier by looking at existing units

[root@srv-rhsoft:/usr/lib/systemd/system]$ cat *.service | grep
ConditionPathExists
ConditionPathExists=!/etc/alsa/state-daemon.conf
ConditionPathExists=/etc/alsa/state-daemon.conf
ConditionPathExists=/etc/ethers
ConditionPathExists=/etc/krb5.keytab
ConditionPathExists=/dev/tty0
ConditionPathExists=/usr/share/sounds/freedesktop/stereo/system-bootup.oga
ConditionPathExists=|/usr/share/sounds/freedesktop/stereo/system-shutdown.oga
ConditionPathExists=|/usr/share/sounds/freedesktop/stereo/system-shutdown-reboot.oga
ConditionPathExists=/usr/share/sounds/freedesktop/stereo/system-shutdown.oga
ConditionPathExists=/dev/console
ConditionPathExists=/dev/pts/%I
ConditionPathExists=/dev/tty9
ConditionPathExists=!/run/ostree-booted
ConditionPathExists=/usr/lib/initrd-release
ConditionPathExistsGlob=|/etc/cmdline.d/*.conf
ConditionPathExists=/usr/lib/initrd-release
ConditionPathExists=|/lib/dracut/need-initqueue
ConditionPathExists=/usr/lib/initrd-release
ConditionPathExists=/usr/lib/initrd-release
ConditionPathExists=/usr/lib/initrd-release
ConditionPathExists=|/dev/root
ConditionPathExists=|/dev/nfs
ConditionPathExists=/usr/lib/initrd-release
ConditionPathExists=/usr/lib/initrd-release
ConditionPathExistsGlob=|/etc/cmdline.d/*.conf
ConditionPathExists=!/run/initramfs/bin/sh
ConditionPathExists=/dev/tty0
ConditionPathExists=/etc/initrd-release
ConditionPathExists=/etc/initrd-release
ConditionPathExists=/etc/initrd-release
ConditionPathExists=/etc/initrd-release
ConditionPathExists=!/sys/devices/virtual/block/%i/md/sync_action
ConditionPathExists=/etc/mdadm.conf
ConditionPathExists=/etc/sysconfig/monitor-httpd
ConditionPathExists=/usr/bin/monitor-httpd.php
ConditionPathExists=/usr/bin/php
ConditionPathExists=/etc/krb5.keytab
ConditionPathExists=/etc/sysconfig/network
ConditionPathExists=/usr/sbin/quotaon
ConditionPathExists=/etc/krb5.keytab
ConditionPathExists=!/.autorelabel
ConditionPathExists=!/run/plymouth/pid
ConditionPathExists=/etc/initrd-release
ConditionPathExists=|!/usr/lib/udev/hwdb.bin
ConditionPathExists=|/etc/udev/hwdb.bin
ConditionPathExists=/usr/sbin/quotacheck
ConditionPathExists=/etc/fstab
ConditionPathExists=/dev/tty0

[systemd-devel] Antw: Re: Q: ConditionPathExists=

2019-06-04 Thread Ulrich Windl
>>> Reindl Harald wrote on 04.06.2019 at 13:51 in
message :

> 
> On 04.06.19 at 13:32, Ulrich Windl wrote:
>> Hi!
>> 
>> I have a question for ConditionPathExists:
>> If I specify two files like "ConditionPathExists=/etc/idredir.conf 
> /etc/isredir.conf", I get a "start condition failed" even if both files 
> exist.
> 
> why don't you just use
> 
> ConditionPathExists=/etc/idredir.conf
> ConditionPathExists=/etc/isredir.conf

because for other statements it's equivalent to write
Bla=Foo
Bla=Bar

and "Bla=Foo Bar"

> 
>> There's also some confusion about where exactly a pipe symbol has to be placed:
>>If multiple conditions are specified, the unit will be executed 
> if
>>all of them apply (i.e. a logical AND is applied). Condition 
> checks
>>can be prefixed with a pipe symbol (|) in which case a condition
>>becomes a triggering condition. If at least one triggering
>> 
>> Do I have to write "|ConditionPathExists=..." or "ConditionPathExists=|..."
> 
> ConditionPathExists=|
> 
> did you stop reading where you stopped quoting: "If at least one
> triggering condition is defined for a unit, then the unit will be
> executed if at least one of the triggering conditions apply and all of
> the non-triggering conditions"

I did read that, but that doesn't answer where to put the pipe.

> 
> it's the same logic as ReadOnlyPaths=-/whatever: avoid failing if the
> path doesn't exist, and make a read-only namespace in case it exists
> 
> | or - before an option makes no sense in an INI-style config

I don't know the parser.


[systemd-devel] Antw: Re: Q: ConditionPathExists=

2019-06-04 Thread Ulrich Windl
>>> Reindl Harald wrote on 04.06.2019 at 13:56 in
message :

[...]
> BTW:
> 
> you could really make your life easier by looking at existing units

I prefer specifications over examples, but you are right, I could have guessed 
what the manual means by looking at an example, assuming the example is correct.
Still I think the language in the manual could be somewhat more precise.

[...]
> ConditionPathExists=/dev/tty0

I can only guess that multiple file names are not allowed for 
ConditionPathExists to allow file names containing spaces, an odd concept for 
UNIX, BTW...

Regards,
Ulrich Windl



Re: [systemd-devel] Antw: Re: Q: ConditionPathExists=

2019-06-04 Thread Reindl Harald


On 04.06.19 at 14:17, Ulrich Windl wrote:
>> | or - before an option makes no sense in an INI-style config
> 
> I don't know the parser

any INI-style config is Key=Value, no matter the OS or software

BTW: can you please reply only to the list instead of using reply-all? Your
off-list copy typically arrives first, which causes mail servers configured
to kill such needless duplicates to automatically delete the list message
that arrives later, and that breaks threading and reply-to-list in sane
mail clients

Re: [systemd-devel] Antw: Re: Q: ConditionPathExists=

2019-06-04 Thread Reindl Harald


On 04.06.19 at 14:20, Ulrich Windl wrote:
 Reindl Harald wrote on 04.06.2019 at 13:56 in
> message :
> 
> [...]
>> BTW:
>>
>> you could really make your life easier by looking at existing units
> 
> I prefer specifications over examples, but you are right, I could have 
> guessed what the manual means if looking at an example, assuming the example 
> is correct.

you miss the point

when you are a beginner working with a tool that has so many
capabilities, where would you start reading all the manpages?

there are dozens of working examples, and unlike initscripts they are
in a human-readable format, ready to copy & paste

adopting them is way faster; then you can still file a bug report asking why
ConditionPathExists= behaves differently from most other options, which
accept more than one value, after making sure it wasn't fixed long ago and
that your pain doesn't come from the outdated version shipped by your
distribution
> Still I think the language in the manual could be somewhat more precise.

agreed

Re: [systemd-devel] Wtrlt: Re: Antw: Re: Unexplainable unit restart ("Start request repeated too quickly")

2019-06-04 Thread Michael Chapman
On Tue, 4 Jun 2019, Ulrich Windl wrote:
> >>> Michael Chapman wrote on 04.06.2019 at 11:04 in
> message :
[...]
> > As you can see, even E.service was only started once.
> > 
> > Are you sure you were actually doing everything in one transaction?
> 
> I guess your version is significantly newer than 228. With 228 I see some odd
> effects that are hard to explain (I'm still trying to learn).

I just tested it on systemd 239 (Fedora 29) and systemd 219 (RHEL 7). Same 
behaviour.
 
> As far as I can tell the following commands were issued (during package
> upgrade):
> reenable E
> start E
> reenable A
> restart A

At this point, if E has already completed and become inactive, then 
restarting A will start E again. This is to be expected.
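
If that is not what you want, the usual fix is on E itself (a sketch, using
your unit names):

```ini
# E.service: with RemainAfterExit= the unit stays "active (exited)"
# after its command completes, so a later transaction that pulls E in
# (e.g. "restart A") sees it as already started and does not run it again
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/echo E
```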

[systemd-devel] systemd and chroot()

2019-06-04 Thread Steve Dickson
Hello,

We are adding some new functionality to the NFS server that 
will make it a bit more container friendly... 

This new functionality needs to do a chroot(2) system call. 
This system call is failing with EPERM due to the
following AVC error:

AVC avc:  denied  { sys_chroot } for  pid=2919 comm="rpc.mountd" capability=18  
scontext=system_u:system_r:nfsd_t:s0 tcontext=system_u:system_r:nfsd_t:s0 
tclass=capability permissive=0

The entry in /var/log/audit.log:
type=AVC msg=audit(1559659652.217:250): avc:  denied  { sys_chroot } for  
pid=2412 comm="rpc.mountd" capability=18  scontext=system_u:system_r:nfsd_t:s0 
tcontext=system_u:system_r:nfsd_t:s0 tclass=capability permissive=0

It definitely is something with systemd, since I can
start the daemon by hand... 

It was suggested I make the following change to the service unit
# diff -u nfs-mountd.service.orig nfs-mountd.service
--- nfs-mountd.service.orig 2019-06-04 10:38:57.0 -0400
+++ nfs-mountd.service  2019-06-04 12:29:34.339621802 -0400
@@ -11,3 +11,4 @@
 [Service]
 Type=forking
 ExecStart=/usr/sbin/rpc.mountd
+AmbientCapabilities=CAP_SYS_CHROOT

which did not work. 

Any ideas on how to tell systemd it's OK for a daemon
to do a chroot(2) system call?

tia,

steved.

Re: [systemd-devel] systemd and chroot()

2019-06-04 Thread Matthew Garrett
On Tue, Jun 4, 2019 at 9:42 AM Steve Dickson  wrote:
> AVC avc:  denied  { sys_chroot } for  pid=2919 comm="rpc.mountd" 
> capability=18  scontext=system_u:system_r:nfsd_t:s0 
> tcontext=system_u:system_r:nfsd_t:s0 tclass=capability permissive=0

This is an SELinux policy violation, nothing to do with systemd.
You're probably not seeing it when you run the daemon by hand because
the SELinux policy doesn't specify a transition in that case, so the
daemon doesn't end up running in the confined context.
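
(If the daemon really does need to chroot() while running in the confined
context, the usual route is a local policy module. An untested sketch, with
the type name taken from the AVC message above:

```
# nfsd_chroot.te -- build and load with:
#   checkmodule -M -m -o nfsd_chroot.mod nfsd_chroot.te
#   semodule_package -o nfsd_chroot.pp -m nfsd_chroot.mod
#   semodule -i nfsd_chroot.pp
module nfsd_chroot 1.0;

require {
        type nfsd_t;
        class capability sys_chroot;
}

allow nfsd_t self:capability sys_chroot;
```

though the cleaner fix is to get the distribution's policy updated.)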

Re: [systemd-devel] systemd and chroot()

2019-06-04 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Jun 04, 2019 at 12:42:35PM -0400, Steve Dickson wrote:
> Hello,
> 
> We are adding some new functionality to the NFS server that 
> will make it a bit more container friendly... 
> 
> This new functionality needs to do a chroot(2) system call. 
> This system call is failing with EPERM due to the
> following AVC error:
> 
> AVC avc:  denied  { sys_chroot } for  pid=2919 comm="rpc.mountd" 
> capability=18  scontext=system_u:system_r:nfsd_t:s0 
> tcontext=system_u:system_r:nfsd_t:s0 tclass=capability permissive=0

It doesn't sound right to do any kind of chrooting yourself.
Why can't you use the systemd builtins for this?

Zbyszek

> The entry in /var/log/audit.log:
> type=AVC msg=audit(1559659652.217:250): avc:  denied  { sys_chroot } for  
> pid=2412 comm="rpc.mountd" capability=18  
> scontext=system_u:system_r:nfsd_t:s0 tcontext=system_u:system_r:nfsd_t:s0 
> tclass=capability permissive=0
> 
> It definitely is something with systemd, since I can
> start the daemon by hand... 
> 
> It was suggested I make the following change to the service unit
> # diff -u nfs-mountd.service.orig nfs-mountd.service
> --- nfs-mountd.service.orig   2019-06-04 10:38:57.0 -0400
> +++ nfs-mountd.service2019-06-04 12:29:34.339621802 -0400
> @@ -11,3 +11,4 @@
>  [Service]
>  Type=forking
>  ExecStart=/usr/sbin/rpc.mountd
> +AmbientCapabilities=CAP_SYS_CHROOT
> 
> which did not work. 
> 
> Any ideas on how to tell systemd it's OK for a daemon
> to do a chroot(2) system call?
> 
> tia,
> 
> steved.

Re: [systemd-devel] Dbus problem fix :) now news programs broken

2019-06-04 Thread Dorian ROSSE
DBus zombie process is back :’(

Sent from Mail for Windows 10


From: Dorian ROSSE
Sent: Tuesday, June 4, 2019 11:34:15 AM
To: systemd-devel@lists.freedesktop.org
Subject: Dbus problem fix :) now news programs broken

Hello,


DBus is repaired since the Qt D-Bus updates, but I have a lot of broken 
programs, as follows:

Errors were encountered during execution:
systemd-coredump
openvpn
monotone-server
initramfs-tools
linux-image-unsigned-5.1.6-050106-generic
Error connecting: Error receiving data: Connection reset by peer

Thank you in advance for helping me repair these errors on my server 😊

Regards.


Dorian ROSSE.


[systemd-devel] 5.2rc2, circular lock warning systemd-journal and btrfs_page_mkwrite

2019-06-04 Thread Chris Murphy
This is on Fedora Rawhide
systemd-242-3.git7a6d834.fc31.x86_64
kernel 5.2.0-0.rc2.git1.2.fc31.x86_64

Pretty and complete log:
https://drive.google.com/open?id=1vhnIki9lpiWK8T5Qsl81_RToQ8CFdnfU

Probably MUA wrapped, and excerpt only:

[7.816458] fmac.local systemd[1]: Starting Flush Journal to
Persistent Storage...
[7.833724] fmac.local systemd-journald[642]: Time spent on
flushing to /var is 83.426ms for 5804 entries.
[7.833724] fmac.local systemd-journald[642]: System journal
(/var/log/journal/6d5657355b064460b154f7f5da220b50) is 48.0M, max
4.0G, 3.9G free.
[7.886889] fmac.local kernel:
[7.886892] fmac.local kernel:
==
[7.886893] fmac.local kernel: WARNING: possible circular locking
dependency detected
[7.886895] fmac.local kernel: 5.2.0-0.rc2.git1.2.fc31.x86_64 #1 Not tainted
[7.886896] fmac.local kernel:
--
[7.886897] fmac.local kernel: systemd-journal/642 is trying to acquire lock:
[7.886899] fmac.local kernel: (ptrval)
(&fs_info->reloc_mutex){+.+.}, at:
btrfs_record_root_in_trans+0x44/0x70 [btrfs]
[7.886926] fmac.local kernel:
  but task is already holding lock:
[7.886926] fmac.local kernel: (ptrval)
(sb_pagefaults){.+.+}, at: btrfs_page_mkwrite+0x69/0x570 [btrfs]
[7.886943] fmac.local kernel:
  which lock already depends on the new lock.
[7.886944] fmac.local kernel:
  the existing dependency chain (in
reverse order) is:
[7.886945] fmac.local kernel:
  -> #6 (sb_pagefaults){.+.+}:
[7.886949] fmac.local kernel:__sb_start_write+0x12b/0x1b0
[7.886965] fmac.local kernel:btrfs_page_mkwrite+0x69/0x570 [btrfs]
[7.886970] fmac.local kernel:do_page_mkwrite+0x2f/0x100
[7.886973] fmac.local kernel:do_wp_page+0x306/0x570
[7.886975] fmac.local kernel:__handle_mm_fault+0xce8/0x1730
[7.886976] fmac.local kernel:handle_mm_fault+0x16e/0x370
[7.886978] fmac.local kernel:do_user_addr_fault+0x1f9/0x480
[7.886980] fmac.local kernel:do_page_fault+0x33/0x210
[7.886983] fmac.local kernel:page_fault+0x1e/0x30
[7.886984] fmac.local kernel:
  -> #5 (&mm->mmap_sem#2){}:
[7.886987] fmac.local kernel:__might_fault+0x60/0x80
[7.886989] fmac.local kernel:_copy_from_user+0x1e/0x90
[7.886992] fmac.local kernel:scsi_cmd_ioctl+0x218/0x440
[7.886994] fmac.local kernel:cdrom_ioctl+0x3c/0x1272
[7.886997] fmac.local kernel:sr_block_ioctl+0xa0/0xd0
[7.886998] fmac.local kernel:blkdev_ioctl+0x32b/0xad0
[7.887001] fmac.local kernel:block_ioctl+0x3f/0x50
[7.887003] fmac.local kernel:do_vfs_ioctl+0x400/0x740
[7.887004] fmac.local kernel:ksys_ioctl+0x5e/0x90
[7.887005] fmac.local kernel:__x64_sys_ioctl+0x16/0x20
[7.887008] fmac.local kernel:do_syscall_64+0x5c/0xa0
[7.887010] fmac.local kernel:
entry_SYSCALL_64_after_hwframe+0x49/0xbe
[7.887011] fmac.local kernel:
  -> #4 (sr_mutex){+.+.}:
[7.887014] fmac.local kernel:__mutex_lock+0x92/0x930
[7.887016] fmac.local kernel:sr_block_open+0x81/0x100
[7.887018] fmac.local kernel:__blkdev_get+0xed/0x590
[7.887019] fmac.local kernel:blkdev_get+0x4a/0x380
[7.887021] fmac.local kernel:do_dentry_open+0x14c/0x3c0
[7.887022] fmac.local kernel:path_openat+0x4e6/0xca0
[7.887024] fmac.local kernel:do_filp_open+0x91/0x100
[7.887025] fmac.local kernel:do_sys_open+0x184/0x220
[7.887027] fmac.local kernel:do_syscall_64+0x5c/0xa0
[7.887028] fmac.local kernel:
entry_SYSCALL_64_after_hwframe+0x49/0xbe
[7.887029] fmac.local kernel:
  -> #3 (&bdev->bd_mutex){+.+.}:
[7.887032] fmac.local kernel:__mutex_lock+0x92/0x930
[7.887033] fmac.local kernel:__blkdev_get+0x7a/0x590
[7.887034] fmac.local kernel:blkdev_get+0x214/0x380
[7.887036] fmac.local kernel:blkdev_get_by_path+0x46/0x80
[7.887053] fmac.local kernel:btrfs_get_bdev_and_sb+0x1b/0xb0 [btrfs]
[7.887069] fmac.local kernel:open_fs_devices+0x7a/0x2a0 [btrfs]
[7.887086] fmac.local kernel:btrfs_open_devices+0x92/0xa0 [btrfs]
[7.887097] fmac.local kernel:btrfs_mount_root+0x30b/0x690 [btrfs]
[7.887099] fmac.local kernel:legacy_get_tree+0x30/0x50
[7.887102] fmac.local kernel:vfs_get_tree+0x28/0xf0
[7.887104] fmac.local kernel:fc_mount+0xe/0x40
[7.887106] fmac.local kernel:vfs_kern_mount.part.0+0x71/0x90
[7.887117] fmac.local kernel:btrfs_mount+0x155/0x8b0 [btrf

Re: [systemd-devel] systemd and chroot()

2019-06-04 Thread Steve Dickson


On 6/4/19 12:45 PM, Matthew Garrett wrote:
> On Tue, Jun 4, 2019 at 9:42 AM Steve Dickson  wrote:
>> AVC avc:  denied  { sys_chroot } for  pid=2919 comm="rpc.mountd" 
>> capability=18  scontext=system_u:system_r:nfsd_t:s0 
>> tcontext=system_u:system_r:nfsd_t:s0 tclass=capability permissive=0
> 
> This is an SELinux policy violation, nothing to do with systemd.
Yeah... that's what I originally thought it was, but when
it was suggested to set AmbientCapabilities=CAP_SYS_CHROOT
in the service unit I figured I would run it by you guys.

> You're probably not seeing it when you run the daemon by hand because
> the SELinux policy doesn't specify a transition in that case, so the
> daemon doesn't end up running in the confined context.
> 
Makes sense... thanks!

steved.

Re: [systemd-devel] systemd and chroot()

2019-06-04 Thread Steve Dickson


On 6/4/19 1:14 PM, Zbigniew Jędrzejewski-Szmek wrote:
> On Tue, Jun 04, 2019 at 12:42:35PM -0400, Steve Dickson wrote:
>> Hello,
>>
>> We are adding some new functionality to the NFS server that 
>> will make it a bit more container friendly... 
>>
>> This new functionality needs to do a chroot(2) system call. 
>> This system call is failing with EPERM due to the
>> following AVC error:
>>
>> AVC avc:  denied  { sys_chroot } for  pid=2919 comm="rpc.mountd" 
>> capability=18  scontext=system_u:system_r:nfsd_t:s0 
>> tcontext=system_u:system_r:nfsd_t:s0 tclass=capability permissive=0
> 
> It doesn't sound right to do any kind of chrooting yourself.
> Why can't you use the systemd builtins for this?
The patch set is basically adding a pseudo to all the exports,
which should make things a bit more container friendly...
Here is the thread:
https://www.spinics.net/lists/linux-nfs/msg73006.html

steved.
> 
> Zbyszek
> 
>> The entry in /var/log/audit.log:
>> type=AVC msg=audit(1559659652.217:250): avc:  denied  { sys_chroot } for  
>> pid=2412 comm="rpc.mountd" capability=18  
>> scontext=system_u:system_r:nfsd_t:s0 tcontext=system_u:system_r:nfsd_t:s0 
>> tclass=capability permissive=0
>>
>> It definitely is something with systemd, since I can
>> start the daemon by hand... 
>>
>> It was suggested I make the following change to the service unit
>> # diff -u nfs-mountd.service.orig nfs-mountd.service
>> --- nfs-mountd.service.orig  2019-06-04 10:38:57.0 -0400
>> +++ nfs-mountd.service   2019-06-04 12:29:34.339621802 -0400
>> @@ -11,3 +11,4 @@
>>  [Service]
>>  Type=forking
>>  ExecStart=/usr/sbin/rpc.mountd
>> +AmbientCapabilities=CAP_SYS_CHROOT
>>
>> which did not work. 
>>
>> Any ideas on how to tell systemd it's OK for a daemon
>> to do a chroot(2) system call?
>>
>> tia,
>>
>> steved.

Re: [systemd-devel] [vfio-users] Fwd: Proper way for systemd service to wait mdev gvt device initialization

2019-06-04 Thread Erik Skultety
On Sun, May 26, 2019 at 09:28:36PM +0300, Alex Ivanov wrote:
> Could Intel fix that?

This is not tied only to the Intel driver. Like Alex said, mdev is not a
fundamental feature of the parent device. There was an attempt to fix this
behaviour in the mdev core driver last year [1], but was rejected upstream with
a recommendation that all applications should handle this on their own, so we
ended up introducing a wait in libvirt too. Personally, it has always felt
weird to me that fixing 1 thing at N places could be the right approach, but I
hope we'll get the change event at some point.

[1] https://lkml.org/lkml/2018/2/9/164

Regards,
Erik

>
>  Forwarded message 
> 20.05.2019, 15:18, "Andrei Borzenkov" :
>
> On Mon, May 20, 2019 at 10:08 AM Mantas Mikulėnas  wrote:
> >  On Sun, May 19, 2019 at 9:50 PM Alex Ivanov  wrote:
> >>  Hello.
> >>  What is the proper way to do that? I have a unit that creates gvt device 
> >> in the system
> >>
> >>  ExecStart = "sh -c 'echo a297db4a-f4c2-11e6-90f6-d3b88d6c9525 > 
> >> /sys/bus/pci/devices/:00:02.0/mdev_supported_types/i915-GVTg_V5_8/create'";
> >>  ExecStop = "sh -c 'echo 1 > 
> >> /sys/bus/pci/devices/:00:02.0/a297db4a-f4c2-11e6-90f6-d3b88d6c9525/remove'";
> >
> >  Personally, I would use an udev rule:
> >
> >  ACTION=="add", SUBSYSTEM=="pci", ENV{PCI_SLOT_NAME}==":00:02.0", 
> > ATTR{mdev_supported_types/i915-GVTg_V5_8/create}="a297db4a-f4c2-11e6-90f6-d3b88d6c9525"
>
> There is a race condition here: the driver creates
> .../mdev_supported_types after it has registered the device, so udev may
> process the event before the directory is available.
>
> >  Though on the other hand, a service is a good choice if you want to 
> > `systemctl stop` it later on.
> >
> >  ACTION=="add", SUBSYSTEM=="pci", ENV{PCI_SLOT_NAME}==":00:02.0", 
> > ENV{SYSTEMD_WANTS}+="create-gvt.service"
> >
> >>  Ideally I would to like to start this service when :00:02.0 device 
> >> appears in the system, but the problem is that 
> >> /sys/bus/pci/devices/:00:02.0/mdev_supported_types/ tree is populated 
> >> later, so my service will fail.
> >>
> >>  So the question what is the proper way to fix that.
> >
> >  If the driver doesn't populate its sysfs entries in time, maybe it at 
> > least generates 'change' uevents? (udevadm monitor)
>
> I would tentatively say this is a driver bug. This directory is created
> during initial device setup, not in response to some event later. From
> https://github.com/torvalds/linux/blob/master/Documentation/driver-model/device.txt:
>
> --><--
> As explained in Documentation/kobject.txt, device attributes must be
> created before the KOBJ_ADD uevent is generated.
> --><--
>
> Note that some drivers even disable KOBJ_ADD notification during
> device_register() and trigger it manually later, after sysfs layout is
> complete. I cannot evaluate whether this directory can be created and
> populated before device_register().
>
> >  If there are no uevents either, well, there's nothing you can do from 
> > systemd's side. (Other than making a script that loops repeatedly checking 
> > "is it there yet? is it there yet?")
>
> Should really be fixed on kernel side.
>  End of forwarded message 
>
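
In the meantime, the "is it there yet?" polling fallback mentioned above can
be sketched as a small shell helper; the sysfs path and UUID in the usage
comment are just the thread's example values (with an assumed 0000 domain
prefix), not something I've tested:

```shell
#!/bin/sh
# wait_for_dir: poll until a directory exists or a timeout expires.
# $1 = directory to wait for, $2 = number of 0.1s attempts (default 50)
wait_for_dir() {
    dir=$1
    tries=${2:-50}
    while [ "$tries" -gt 0 ]; do
        # succeed as soon as the directory shows up
        [ -d "$dir" ] && return 0
        tries=$((tries - 1))
        sleep 0.1
    done
    # timed out without seeing the directory
    return 1
}

# Example usage (hypothetical GVT device from the thread):
# wait_for_dir /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8 &&
#     echo a297db4a-f4c2-11e6-90f6-d3b88d6c9525 > \
#         /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/create
```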