Re: [systemd-devel] Dracut ifname= and systemd naming schemes

2024-09-16 Thread Thomas HUMMEL

On 9/16/24 2:31 PM, Lukáš Nykrýn wrote:

Hi!
I think this should be fine. ifname= uses udev to rename the device and
udev will not rename the device later again.

Lukas


Ok.

Thanks for your answer.


--
Thomas HUMMEL
HPC Group
Institut PASTEUR
Paris, FRANCE


[systemd-devel] Dracut ifname= and systemd naming schemes

2024-09-13 Thread Thomas HUMMEL

Hello,

Using systemd-udev-239-74 (RHEL 8.8), I wonder: if one sets the 
ethernet device name with ifname=foobar: passed to the 
initramfs (dracut), is this name at "risk" of being renamed in the real 
root fs according to the systemd-udevd.service naming schemes and link name 
policy, which on RHEL 8.8 is NamePolicy=kernel database onboard slot path ?


I'm not sure about the 'keep' policy and if or when it is applied; it is 
the only thing that makes me think the answer to my question would be 'no'.


The idea is to know whether one could drop a static network config 
referencing this device name (initially set with ifname=) into the real root fs.


Thanks for your help

--
Thomas HUMMEL
HPC Group
Institut PASTEUR
Paris, FRANCE



Re: [systemd-devel] Submitting a service activation to remote mounts success

2024-02-07 Thread Thomas HUMMEL




On 2/7/24 19:55, Andrei Borzenkov wrote:

You can add a drop-in to either unit (and add a generator to do it 
automatically), but I do not quite see what it is going to buy you.


Hello, thanks for your answer and sorry for the previous confusion on my part.

What do you call a generator here ? A custom script to generate the 
drop-in files, or some systemd mechanism (which I must admit I don't 
know about yet) ?


What I want (though I don't like to overuse systemd dependencies, as I 
instinctively think this may not be a good idea) is to prevent an HPC 
scheduler daemon (service unit) from accepting jobs if the remote mounts 
(mandatory for consistent use) are not all there (successfully mounted).


The initial idea, which avoided listing (or generating) every mount one 
by one, was to express dependencies relative to remote-fs.target by 
adding BindsTo=remote-fs.target to the service (After= comes for free).


But then, if some of those .mount units got unmounted, remote-fs.target's 
Requires= would not deactivate remote-fs.target, and my 
service would in turn not be deactivated.
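
Concretely, the drop-in I have in mind looks roughly like this (only a 
sketch of the idea, not something I have validated yet; the drop-in path 
and unit name are made up):

# /etc/systemd/system/hpc-scheduler.service.d/remote-fs.conf (hypothetical)
[Unit]
BindsTo=remote-fs.target
After=remote-fs.target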


Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] Submitting a service activation to remote mounts success

2024-02-07 Thread Thomas HUMMEL




On 2/7/24 11:50, Thomas HUMMEL wrote:

Still I cannot understand where the Requires= comes from in the 
remote-fs.target unit, as the doc for the special target only describes a Wants= 
dep added by systemd-fstab-generator in the case of auto mounts.


Well, forget about that Wants= dep, which points to the mount unit.

Basically my only remaining question is:

is there a way to have remote-fs.target use BindsTo= instead 
of Requires= only ?


Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] Submitting a service activation to remote mounts success

2024-02-07 Thread Thomas HUMMEL




On 2/6/24 17:06, Silvio Knizek wrote:


Hi Thomas,

RequiresMountsFor= should be your friend. It just takes a space-
separated list of paths and does all the other stuff by itself.


Hello, thanks for your reply.
Actually RequiresMountsFor= is not what I need, because I'd have to point 
it at some file *inside* the fs.
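
(For reference, this is how I understand the suggestion - just a sketch, 
the paths are placeholders:

[Unit]
# systemd pulls in and orders the unit after the .mount units backing
# these paths
RequiresMountsFor=/remote/data /remote/scratch

i.e. I would have to enumerate paths, which is what I'd rather avoid.)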


I mistakenly did my tests on a noauto mount, which made me draw false 
conclusions. In fact what I need is just a Requires= or 
BindsTo=remote-fs.target in my service unit file.


Still I cannot understand where the Requires= comes from in the 
remote-fs.target unit, as the doc for the special target only describes a Wants= 
dep added by systemd-fstab-generator in the case of auto mounts.


Thanks for your help

--
TH


[systemd-devel] Submitting a service activation to remote mounts success

2024-02-06 Thread Thomas HUMMEL

Hello,

I'm using systemd-239-74 on RHEL 8.8 EUS.

I was wondering if one can express the following :

start some service *if and only if/when* all remote mounts (e.g. NFS, 
some parallel fs) have *succeeded*, taking into account that it may take 
some time for some mounts to finish (some fs clients just curl | sh 
themselves at start !), which seems to exclude the use of 
AssertPathIsMountPoint= for instance, as it would not wait (or would it ?)


I have no auto option in the fstab for those fs and they use the _netdev 
option


Obviously I could statically list all the mount units as an ordering 
dependency, but this is not what I was looking for as there are many (and 
I'm not even sure - see below - that it would be enough).


Exploring this question I stumbled upon the following points :

my understanding is that:

1. remote-fs.target special target is pulled in by multi-user.target and 
is added by systemd-fstab-generator as a Before= ordering dep to all 
remote .mount units


-> I also see that remote-fs.target has a Requires= 
activation dep : I probably missed it in the doc, but I don't see this 
listed in either the implicit or the default deps : where does it come from ?
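
(For the record, this is how I looked at it - plain systemctl 
introspection, nothing custom:

# dependencies recorded on the target
systemctl show -p Requires -p Wants -p After remote-fs.target
# or the tree view
systemctl list-dependencies remote-fs.target
)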


2. Before=/After= refer, in the case of service units, to when the unit 
has "finished starting up", this being defined by "when it returns 
failed or success", which depends on the Type= of the service.


Is this understanding correct ?

But when the unit is a mount unit : what are the semantics of Before=/After= 
? (I don't think I saw it in the doc either)


What's the meaning/use of Type=none in a .mount unit ?

My experience is that the mount may fail and remote-fs.target will still 
be reached, even if one replaces Requires= with BindsTo=, correct ?


So success or failure of the mount process does not seem to be involved 
in the ordering dep, or does it ?


Thanks for your help

--
Thomas HUMMEL


[systemd-devel] Service units handling naive questions

2023-06-07 Thread Thomas HUMMEL

Hello,

I'm running systemd-239-74.el8_8.x86_64 on RHEL 8.8 and have some naive 
questions about services :


Note : I'm talking here only about service units

1) listing of inactive (dead) units


For instance, the following oneshot static service (as it came with the 
distro):


# systemctl cat nfs-utils.service | grep -vE '^#'
[Unit]
Description=NFS server and client services

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true

# systemctl status nfs-utils.service
● nfs-utils.service - NFS server and client services
   Loaded: loaded (/usr/lib/systemd/system/nfs-utils.service; static; 
vendor preset: disabled)

   Active: inactive (dead)

shows up in the output of :

# systemctl list-units --all --type=service --state=dead 
nfs-utils.service | grep -iE nfs-utils

nfs-utils.service loaded inactive dead NFS server and client services

-> why is it marked as inactive in spite of the RemainAfterExit=yes 
directive ? Shouldn't its general ACTIVE state be active ?


b) If I create a simple oneshot static service unit, and start it:

# systemctl cat foobar.service | grep -vE '^#'
[Unit]
Description=Simple service

[Service]
ExecStart=echo "Hello !"

# systemctl status foobar.service
● foobar.service - Simple service
   Loaded: loaded (/etc/systemd/system/foobar.service; static; vendor 
preset: disabled)

   Active: inactive (dead)

Jun 07 18:14:58 orbit systemd[1]: Started Simple service.
Jun 07 18:14:58 orbit echo[1896]: Hello !
Jun 07 18:14:58 orbit systemd[1]: foobar.service: Succeeded.

it ends up inactive (dead) but is not shown by the systemctl list-units 
--all --type=service --state=dead foobar.service command :


# systemctl list-units --all --type=service --state=dead foobar.service
0 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.

c) it does if I turn it into an [Install]-able service and enable it

Could you help me figure out what logic I am missing ? Does the 
command list only enabled units ?
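
For completeness, here is what I am comparing (standard commands, 
foobar.service being the toy unit above):

# units systemd currently has loaded in memory
systemctl list-units --all --type=service foobar.service
# unit files installed on disk, whether loaded or not
systemctl list-unit-files foobar.service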


2) Removing a template unit while instances are still running

a) I have a socket-activated sa-sshd@.service template unit (running 
sshd -i) which works fine, but I get many (a lot of) failed 
instances, which I have to systemctl reset-failed sa-sshd@'*' just for 
the sake of cleaning up, or just to have systemctl status completion 
work smoothly.


-> is there some way to do it in a smarter way (provided I don't care 
about investigating the failed instances) ?
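
(One thing I might try - assuming systemd 239 already honours it, this is 
only a sketch - is letting such instances be garbage-collected via 
CollectMode= in the template:

# drop-in for sa-sshd@.service
[Unit]
# treat failed instances like inactive ones so they are unloaded
# automatically instead of piling up in the failed state
CollectMode=inactive-or-failed
)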


b) I want to switch from a) to just proxying the socket with 
systemd-socket-proxyd to the standard, non-socket-activated sshd.service.


To migrate, as I see a couple of active sa-sshd@xxx.service instances, 
and assuming no new socket activation will be triggered while I am 
migrating, is it safe to remove the template unit (+ daemon-reload), 
hence turning the instances' LOADED state to not-found, or will it do 
something to the running instances ?
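
For b), the setup I have in mind is the pattern from 
systemd-socket-proxyd(8), roughly like this (unit names and the listening 
port are made up):

# proxy-to-sshd.socket
[Socket]
ListenStream=2222

[Install]
WantedBy=sockets.target

# proxy-to-sshd.service
[Unit]
Requires=sshd.service
After=sshd.service

[Service]
# forward accepted connections to the regular, non-socket-activated sshd
ExecStart=/usr/lib/systemd/systemd-socket-proxyd 127.0.0.1:22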


Thanks for your help

--
Thomas HUMMEL





Re: [systemd-devel] systemd-resolved/NetworkManager resolv.conf handling

2022-11-08 Thread Thomas HUMMEL

On 11/7/22 18:35, Barry Scott wrote:


I do not know enough about how that works.


I just tested something like this (as a proof of concept) :

f5.sh root:root/0700 in /etc/NetworkManager/dispatcher.d/

#!/bin/bash

usage()
{
    local EXIT_VALUE=$1

    echo "usage: $SCRIPT_NAME <interface> <action>"

    exit $EXIT_VALUE
}


SCRIPT_NAME="$0"

INTERFACE=$1
ACTION=$2

F5_INTERFACE="tun0"

F5_NAMESERVER_1="x.x.x.x"
F5_NAMESERVER_2="x.x.x.x"

[ $# -eq 0 ] && usage 0
[ $# -ne 2 ] && usage 1

 "$INTERFACE" != "$F5_INTERFACE" -o "$ACTION" != "up" ] && exit 0

[ "$INTERFACE" == "$F5_INTERFACE" -a "$ACTION" == "up" ] && echo 
"$SCRIPT_NAME: adding $F5_INTERFACE nameservers to systemd-resolved 
configuration"


/usr/bin/resolvectl dns $F5_INTERFACE $F5_NAMESERVER_1 $F5_NAMESERVER_2 
|| { echo "Pb running resolvectl" ; usage 1 ; }


exit 0

--
Thomas HUMMEL


Re: [systemd-devel] systemd-resolved/NetworkManager resolv.conf handling

2022-11-07 Thread Thomas HUMMEL




On 11/6/22 22:30, Barry wrote:



So a dirty hack is to replace /sbin/resolvconf with a script that 
does-the-right-thing.
Uses resolvectl on the correct interface etc. But only when called by F5.


Hello,

thanks for your answer.

I just do it manually for now. But maybe one could use the NM dispatcher 
mechanism to run resolvectl at the right time without touching 
resolvconf ? This would still be a workaround though.


Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] systemd-resolved/NetworkManager resolv.conf handling

2022-11-07 Thread Thomas HUMMEL




On 11/6/22 18:24, Petr Menšík wrote:
Oh, understood. Then it is specific problem to Fedora, because I think 
other distributions do not use systemd's implementation of resolvconf 
binary.


Hello,

thanks for your answer



I think original Debian resolvconf package does not use -a interface 
parameter for anything serious.


Yes, my previous system, an Ubuntu LTS, did use another implementation 
(a specific separate package) of resolvconf.


 It just uses the same interface

identifier to pair -a and -d for the same connection.


Which interface name then ? I don't remember how it worked...anyway 
that's out of scope here.



It would be worth filing:

https://support.f5.com/csp/bug-tracker


Yes, I'll do it.



it restores it when stopped by copying back /etc/resolv.conf.fp-saved
That is exactly what it should do for a VPN, unless it knows a more 
proper way to configure system DNS.


That's what resolvconf is for, isn't it ?
Or maybe it could just fill the ipv4.dns NM property.

1) how can systemd-resolved, when all the resolv.conf-as-a-file-by-NM conf 
has been removed (by me) and the symlink to the stub has been restored 
(by me), with *no trace* of the vpn nameservers in its own 
/run/systemd/resolve/resolv.conf nor seemingly anywhere else, still be 
aware of the vpn nameservers (as described in my initial post 
scenario) ?


-> is there a persistent systemd-resolved cache on disk somewhere?
I don't think any persistent cache were ever on disk or that it would be 
a good idea. Most dns caches are able to dump contents of cache 
somewhere on request, but I haven't found a way to do that with resolvectl.


So how systemd-resolved is able to remember them after "reverting" to my 
initial state, when I think I have removed all vpn nameserver references 
(and rebooted), is still to be answered...



-> Is there any other place where the specific ns <-> interface mapping is 
persisted or stored, or is this global update all there is ?
resolvconf might have some hacks to configure rules just for some 
subdomains. openresolv can do something similar. But usually resolvconf 
changes just global set of servers if the interface configured has 
higher priority than previous. Returns them back when such interface is 
stopped. resolvectl layer ignores -m parameter, but it pairs dns 
configuration with real interface. 


Oh, you mean the specific interface <-> nameserver mapping is handled in 
the global /run/systemd/resolve/resolv.conf, which is just swapped and restored ?



AFAIK none such information is
persistent and is lost when systemd-resolved is restarted. But Network 
Manager's plugins configures it from NM interfaces again. 


Well, again with a default NM config (i.e. no dns nor rc.manager directive, 
which on my system means use resolved), NM cannot know about the vpn 
nameservers as they're not provided in the profile (yes it is green but it 
lists no ipv4.dns property).


$ nmcli -f GENERAL.NM-MANAGED device show tun0
GENERAL.NM-MANAGED: yes
$ nmcli -f ipv4.dns connection show tun0
ipv4.dns:

Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] systemd-resolved/NetworkManager resolv.conf handling

2022-11-02 Thread Thomas HUMMEL

On 10/31/22 12:19, Petr Menšík wrote:

Hello, thank you and Barry as well for your answers


I would suggest using strace to find what exactly it does and what it 
tries to modify. I expect sources for that client are not available.


Well, digging a little deeper, here's what I've found out:

1) in the default case (described in my initial post), i.e.

/etc/resolv.conf symlinked to systemd-resolved 
/run/systemd/resolve/stub-resolv.conf

no dns nor rc.manager directives in NM config
no F5 client NM profile

The vpn client:

a) backs up /etc/resolv.conf to /etc/resolv.conf.fp-saved
b) readlinks the symlink
c) execve's /sbin/resolvconf, providing nameservers (thus trying to 
play along with systemd-resolved) but on the wrong interface on my 
Fedora (eth0.f5 instead of tun0) [and with a deprecated and unused 
arg (-m)]


execve("/sbin/resolvconf", ["/sbin/resolvconf", "-a", "eth0.f5", 
"-m 0"], 0x7ffd13bf8568 /* 30 vars */ 


d) sets up the tun0 interface and brings it up

-> hence we end up with:

a) /etc/resolv.conf.fp-saved as a regular file, a copy of 
/run/systemd/resolve/stub-resolv.conf
b) an NM-managed tun0 interface without any dns property in its 
profile nor any disk-persistent profile
c) an unchanged /etc/resolv.conf (still linked to 
/run/systemd/resolve/stub-resolv.conf)


so systemd-resolved does not know about the vpn nameservers, and vpn 
name resolution fails without a workaround (like adding the tun0 
nameservers with resolvectl dns, for instance).
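
Concretely, the workaround looks like this (the nameservers are the ones 
pushed by the vpn; the routing-domain line is optional and just an 
assumption on my side):

resolvectl dns tun0 10.33.1.2 10.33.1.3
resolvectl domain tun0 '~pasteur.fr'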


2) with NM handling /etc/resolv.conf as a regular file, i.e.

   /etc/resolv.conf symlink rm-ed
   dns=default
   rc.manager=file

the F5 client considers it a 'legacy' setting and overwrites (which seems 
wrong to me) the NM-managed /etc/resolv.conf regular file


it restores it when stopped by copying back /etc/resolv.conf.fp-saved

So, basically I'd say there are 2 bugs :

1) the legacy handling, which seems to assume a pre-NM-era legacy setup
2) the resolvconf call when systemd-resolved is used (at least on Fedora)

In any case, I don't understand why it does not set the NM profile 
ipv4.dns property, which would give NM and/or resolved much better 
chances to work.


Anyway, this leaves 2 unanswered questions, the first of which was my 
initial one:


1) how can systemd-resolved, when all the resolv.conf-as-a-file-by-NM conf 
has been removed (by me) and the symlink to the stub has been restored (by me), 
with *no trace* of the vpn nameservers in its own 
/run/systemd/resolve/resolv.conf nor seemingly anywhere else, still be aware 
of the vpn nameservers (as described in my initial post scenario) ?


-> is there a persistent systemd-resolved cache on disk somewhere ?

2) when running resolvconf by hand (resolvconf ), providing 
interface-specific nameservers on stdin, it seems to update the 
**global** /run/systemd/resolve/resolv.conf (hence making those 
nameservers available to all interfaces ?)
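
For the record, by "running resolvconf by hand" I mean something like 
this (interface and nameserver are placeholders):

echo "nameserver 10.33.1.2" | /sbin/resolvconf -a tun0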


-> Is there any other place where the specific ns <-> interface mapping is 
persisted or stored, or is this global update all there is ?


Thanks for your help

--
Thomas HUMMEL


[systemd-devel] systemd-resolved/NetworkManager resolv.conf handling

2022-10-26 Thread Thomas HUMMEL

Hello,

I'm not sure if this is a systemd-resolved or NetworkManager question 
but it involves both (I know Thomas HALLER is a member of this list too)


on

Fedora release 36 (Thirty Six) using the following kernel and packages

5.19.16-200.fc36.x86_64 #1 SMP PREEMPT_DYNAMIC

systemd-250.8-1.fc36.x86_64
systemd-resolved-250.8-1.fc36.x86_64
NetworkManager-1.38.4-1.fc36.x86_64

I'm using a proprietary vpn client which does not seem to work very well 
with systemd-resolved. As a matter of fact it seems to create a manual 
NM profile which does not include dns properties and it seems to (try 
to) set /etc/resolv.conf aside (F5 vpn linux client f5fpc for the record)


Making it work is not the question here. I'm trying to understand how 
the 2 nameservers it configures may end up in 
/run/systemd/resolve/resolv.conf (and the global systemd-resolved config as 
shown by resolvectl status) ONLY when I switch from a non-systemd-resolved 
config and then back to a systemd-resolved config.


Here's exactly what I'm doing/experiencing:

Starting from

a) default NetworkManager config:

# grep -iE 'dns|rc\.manager' NetworkManager.conf
# ls -l conf.d/
total 0

b) systemd-resolved stub-resolv.conf mode:

# ls -l /etc/resolv.conf
lrwxrwxrwx 1 root root 37 Oct 26 19:15 /etc/resolv.conf -> 
/run/systemd/resolve/stub-resolv.conf


and with /run/systemd/resolve/resolv.conf (not linked from /etc/resolv.conf) 
having the following content:

nameserver 192.168.1.1
nameserver 2a01:cb00:7e1:3300:aa6a:bbff:fe6e:190
search home

matching my auto wireless NM profile

1) I start the vpn client

obviously it does not work very well with systemd-resolved, as I don't 
get the corresponding nameservers (10.33.1.2, 10.33.1.3) anywhere and name 
resolution does not work for the corresponding zones


/run/systemd/resolve/resolv.conf content has not changed

2) I stop the vpn client, and switch to the following setup

# rm /etc/resolv.conf
rm: remove symbolic link '/etc/resolv.conf'? y

# cat <<EOF > /etc/NetworkManager/conf.d/foo.conf
> [main]
> dns=default
> rc.manager=file
> EOF

# reboot

-> after the reboot the /etc/resolv.conf link has been recreated : why ?

(/run/systemd/resolve/resolv.conf hasn't changed, which seems normal to me)

3) I remove it again and reboot

# rm /etc/resolv.conf
rm: remove symbolic link '/etc/resolv.conf'? y

# reboot

-> this time /etc/resolv.conf is, as expected, a regular file whose 
content is handled by NM:


$ ls -l /etc/resolv.conf
-rw-r--r-- 1 root root 114 Oct 26 20:22 /etc/resolv.conf
$ cat /etc/resolv.conf
# Generated by NetworkManager
search home
nameserver 192.168.1.1
nameserver 2a01:cb00:7e1:3300:aa6a:bbff:fe6e:190


4) I start the vpn client

it wrote to /etc/resolv.conf (which seems wrong to me but is out of 
scope here)


$ cat /etc/resolv.conf
#F5 Networks Inc. :File modified by VPN process
search pasteur.fr home
nameserver 10.33.1.2
nameserver 10.33.1.3

the 2 nameservers it provided do not appear in 
/run/systemd/resolve/resolv.conf


6) I stop the vpn client, switch back to my original config, and reboot

# rm /etc/NetworkManager/conf.d/foo.conf
rm: remove regular file '/etc/NetworkManager/conf.d/foo.conf'? y

# rm /etc/resolv.conf
rm: remove regular file '/etc/resolv.conf'? y

# ln -s /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf

# reboot

-> everything looks as expected

7) I start the vpn client

-> the nameservers it provides appear in /run/systemd/resolve/resolv.conf 
(and resolution of the related zones works)


-> why ? Where does the info come from ?

nameserver 10.33.1.2
nameserver 10.33.1.3
nameserver 192.168.1.1
# Too many DNS servers configured, the following entries may be ignored.
nameserver 2a01:cb00:7e1:3300:aa6a:bbff:fe6e:190
search pasteur.fr home

Can you help me figure out what's happening, or at least how the 
behavior can change after what seems to be a rollback to the initial state ?


Thanks for your help

--
Thomas HUMMEL



Re: [systemd-devel] Antw: Re: Re: [EXT] Re: Q: Querying units for "what provides" a target

2022-09-12 Thread Thomas HUMMEL

> On 9/9/22 18:09, Andrei Borzenkov wrote:

Hello,

maybe referring to 
https://lists.freedesktop.org/archives/systemd-devel/2022-January/047342.html 
would help clarify ?


--
TH


Re: [systemd-devel] Antw: Re: Antw: [EXT] Re: Q: Change a kernel setting

2022-07-29 Thread Thomas HUMMEL




On 29/07/2022 12:34, Ulrich Windl wrote:

Hello, thanks for your answer


Did you try ConditionPathExists= in the Unit?


No, but wouldn't the non-existence of the file make the start job fail 
? Besides (see my post), the error message the service logged was a 
permission denied.


I agree though that tmpfiles seems to be the most elegant way in general 
to perform such things.


Thanks.

--
Thomas HUMMEL


Re: [systemd-devel] Antw: [EXT] Re: Q: Change a kernel setting

2022-07-29 Thread Thomas HUMMEL




On 29/07/2022 11:41, Ulrich Windl wrote:


   You can use tmpfiles. In the manpage


Hello, well, it seems to depend on the subsystem. I tried the tmpfiles 
way but still encountered some unexplained race condition, as explained here:


https://lists.freedesktop.org/archives/systemd-devel/2022-July/048100.html

So I rolled back to a service unit, and even so I had to order it 
After= a late (custom) target.


None of this was satisfactory but I did not manage to find out what 
happened.


Thanks

--
Thomas HUMMEL


Re: [systemd-devel] Disabling cpufreq/boost at boot time sometimes fails

2022-07-13 Thread Thomas HUMMEL




On 13/07/2022 17:12, killermoe...@gmx.net wrote:

This must explain why my modprobe.d approach (for acpi_cpufreq) seems to 
always work, but not why the tmpfiles.d or .service unit ones fail:


Actually, your modprobe.d is much too late



Well, maybe we aren't talking about the same thing: I'm only interested in 
the /sys/devices/system/cpu/cpufreq/boost file.


I can echo 0 or 1 into this file anytime long after boot and verify 
(running 'stress' for instance) in /proc/cpuinfo that core frequencies 
are boosted or limited (lscpu does not seem to update the info though).


Besides, rmmod'ing the acpi_cpufreq module makes the 'boost' file vanish 
(which seems normal according to the documentation).



The driver absolutely loaded, as stated earlier. What I find interesting is the 
error message you get with

/sys/devices/system/cpu/cpufreq/boost: Permission denied


Agreed. Assuming the module is loaded (hence the file present), maybe 
there's a window during which it does not yet have the correct permissions ? 
However, a standard systemd service (with default dependencies) comes quite 
late in the boot process...
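
(As a sketch of what I could still try - assuming the attribute appears 
and gets its final permissions once the module is loaded, which is exactly 
what I have not proven - an explicit condition plus ordering in the unit:

[Unit]
Description=Disable CPU Turbo Boost
# skip (rather than fail) if the knob is not there yet
ConditionPathExists=/sys/devices/system/cpu/cpufreq/boost
# hypothetical extra ordering against static module loading
After=systemd-modules-load.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c 'echo 0 > /sys/devices/system/cpu/cpufreq/boost'
)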



Did it fail because the file didn't exist? Maybe the path you used is wrong?


I don't know. I initially thought about a race, thinking it was some 
systemd service (udevd.service ?) that created or chmoded the file. But 
(see above) this does not seem to be the case, or does it ?




In this case I think your best bet is to disable the boost option in the 
BIOS/UEFI. At least 
https://docs.kernel.org/admin-guide/pm/cpufreq.html#rationale-for-boost-control-knob 
is speaking of that.
If you don’t have such option in the BIOS/UEFI settings, you could try some 
udev rule reacting to the /sys entry. Something like

`/etc/udev/rules.d/20-disable-cpu-boost.rules`
```
KERNEL=="cpu", ATTR{cpufreq/boost}=="1", ATTR{cpufreq/boost}:="0"
```


Thanks, yes, I thought about it. I would still like to understand what's 
failing in my original naive setup (tmpfiles or service).


Thanks again anyway.

--
Thomas HUMMEL


Re: [systemd-devel] Disabling cpufreq/boost at boot time sometimes fails

2022-07-13 Thread Thomas HUMMEL




On 13/07/2022 00:35, Silvio Knizek wrote:

Am Dienstag, dem 12.07.2022 um 18:55 +0200 schrieb Thomas HUMMEL:




Hi,


Hello,

thanks for your answer



first of all, no need for /sys in /etc/fstab. /sys will _always_ be
mounted by systemd.


Ok. This must have been put there by our image-generating tool.


Second, this sounds really depending on your used driver (acpi, amd, or
intel). Check out the documentation at
https://docs.kernel.org/admin-guide/pm/cpufreq.html


Well, this states:

"During the initialization of the kernel, the CPUFreq core creates a 
sysfs directory (kobject) called cpufreq under /sys/devices/system/cpu/."


This must explain why my modprobe.d approach (for acpi_cpufreq) seems to 
always work, but not why the tmpfiles.d or .service unit ones fail:


As a matter of fact, I assume that since the /sys files seem to be 
created "at initialisation" (or, more precisely for the boost file, at 
driver load, as it is exposed by the kernel module), this should be done long 
before systemd-tmpfiles-setup.service or my custom service is run ?


The only reason I can think of for those two latter setups to fail is that 
the driver has not been loaded yet, hence the 
/sys/devices/system/cpu/cpufreq/boost file not existing yet, but I find 
this weird.




Question I have is: why do you want to disable boosting?


One reason is the rack density / input PDU power ratio.
Another might be performance consistency (at least for benchmarking).

Thanks for your help

--
Thomas HUMMEL


[systemd-devel] Disabling cpufreq/boost at boot time sometimes fails

2022-07-12 Thread Thomas HUMMEL

Hello,

I'm using systemd-239-45 on RHEL 8.4 x86_64 AMD nodes on which I disable 
Turbo Core/Turbo Boost by writing '0' into the following file:


/sys/devices/system/cpu/cpufreq/boost

I want it to be disabled automatically at boot. For that matter I tried 
3 different ways (only one at a time)


1) a service unit configured like this:

[Unit]
Description=Disable CPU Turbo Boost


[Service]
# using tee here to have an output in journalctl to understand when this
# service fails to start
ExecStart=/bin/sh -c "/usr/bin/echo 0 | /bin/tee /sys/devices/system/cpu/cpufreq/boost"
ExecStop=/bin/sh -c "/usr/bin/echo 1 | /bin/tee /sys/devices/system/cpu/cpufreq/boost"

RemainAfterExit=yes

[Install]
WantedBy=sysinit.target

2) adding an entry for systemd-tmpfiles-setup.service like this:

w /sys/devices/system/cpu/cpufreq/boost - - - - 0

3) using modprobe.d like this:

install acpi_cpufreq /sbin/modprobe --ignore-install acpi_cpufreq $CMDLINE_OPTS && echo 0 > /sys/devices/system/cpu/cpufreq/boost



I noticed that *sometimes* using 1) or 2) 
/sys/devices/system/cpu/cpufreq/boost ended up with '1' instead of '0'


I didn't see any error in the journal for 2) (the tmpfiles.d option), and 
for 1) (the systemd service) I saw:



Jul 04 15:06:25  systemd[1]: Started Disable CPU Turbo Boost.
Jul 04 15:06:25  sh[2788]: /bin/tee: 
/sys/devices/system/cpu/cpufreq/boost: Permission denied

Jul 04 15:06:25  sh[2788]: 0
Jul 04 15:06:25  systemd[1]: disable-cpu-turboboost.service: Main 
process exited, code=exited, status=1/FAILURE
Jul 04 15:06:25  systemd[1]: disable-cpu-turboboost.service: Failed 
with result 'exit-code'.


I did not manage to find out whether there was a race condition and, if so, 
what ordering dependencies should be stated.
I tried to compare a "working" and a "not working" systemd-analyze output, 
but I did not find anything obvious (at least to me).


Besides, /sys is mounted in the fstab (as expected)

sysfs   /sys sysfsdefaults   0 0

is there a corresponding transient .mount unit somewhere ?

Notes:

a) SELinux is disabled

b) I don't think any other service or process is touching the 
/sys/devices/system/cpu/cpufreq/boost file


c) in the event the system boots up with the wrong value, manually echoing 0 
into the file (which exists) always works


Can you help me figure out in what direction I should look, if it is 
systemd-related at all ?


Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] Passive vs Active targets

2022-02-15 Thread Thomas HUMMEL



On 15/02/2022 11:52, Lennart Poettering wrote:

Yes, rsyslog.service should definitely not pull in network.target. 



Thinking about it again after digesting what's been said in this thread, 
would it be correct to say that what's "wrong" with rsyslog *pulling in* the 
network.target passive target is that rsyslog is *not* the *provider* 
of the state represented by network.target (whereas NetworkManager is, 
for instance) ?


My current understanding now is that it's not a technical reason but 
more a design reason (and its possible side effects if such a design is 
not respected):


- it does make sense for a consumer to pull in an active target, because 
it wants something to be "done"
- it makes less or no sense for a consumer to pull in a passive 
target just because it wants to order itself relative to something which is 
"reached", which only the provider knows about ?


Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] Passive vs Active targets

2022-02-15 Thread Thomas HUMMEL




On 15/02/2022 18:13, Lennart Poettering wrote:

On Di, 15.02.22 17:30, Thomas HUMMEL (thomas.hum...@pasteur.fr) wrote:




A passive unit is a sync point that should be pulled in by the service
that actually needs it to operate correctly. hence: ask the question whether
networkd/NetworkManager will operate only correctly if nftables
finished start-up before it? I think that answer is a clear "no". But
the opposite holds, i.e. nftables only operates as a safe firewall if
it is run *before* networkd/NM start up. Thus it should be nftables
that pulls network-pre.target in, not networkd/NM, because it matters
to nftables, and it doesn't to networkd/NM.


Or maybe it is the other way around : by pulling it in *and* knowing that
network interfaces are configured After= it, nftables.service is guaranteed to
set up its firewall before any interface gets configured.


So yeah, passive units are mostly about synchronization, i.e. if they
are pulled in they should have units on both sides, otherwise they
make no sense.


Exactly: that's what I meant with my nftables/NetworkManager example above: 
not that I thought it made sense for NetworkManager to pull 
network-pre.target in; I meant it made no sense for nftables alone to 
order Before= something it "created".
Hence I kinda wrongly saw a passive target as a sync point only for other 
units than those which pull them in. But you're right: one side of the 
synchronization is actually the unit pulling in the passive target ! I 
just took that for granted/forgot it.


I kinda thought/implied it was more or less required (or the way to do 
it) to order Before= a passive target we were pulling in.


So, although I have not seen such a case : would it be legitimate to pull in a 
passive target and order After= it (I only saw Before= for the ones I 
checked, I think) ?


Thanks again for your help

--
Thomas HUMMEL


Re: [systemd-devel] Passive vs Active targets

2022-02-15 Thread Thomas HUMMEL

On 15/02/2022 11:52, Lennart Poettering wrote:


a) a passive target "does" nothing and serves only as an ordering checkpoint
b) an active target "does" actually something


Yes, you could see it that way.


Hello, thanks for your answer.


Yes, rsyslog.service should definitely not pull in network.target.


Ok, so this is what misguided me. Got it now.


Then rpcbind.target seems to auto pull itself so without the Before ordering
we see in the NetworkManager.service pulling network.target example



Can't parse this.


Sorry, my mistake, forget about this.


Also, it seems that there are more than one way to pull in a passive
dependency (or maybe several providers which can "publish" it). Like for
instance network-pre.target which is pulled in by both nftables.service
and/or rdma-ndd.service.


nftables.service should pull it in and order itself before it, if it
intends to set up the firewall before the first network iterface is
configured.


It makes sense, but I'm still a bit confused here : I thought that a unit 
which pulled a passive target in was conceptually "publishing" it for 
*other units* to sync After= or Before= it, but not to use it itself. 
What you're saying here seems to imply that nftables.service itself uses 
the passive target it "publishes".
Or maybe it is the other way around : by pulling it in *and* knowing that 
network interfaces are configured After= it, nftables.service is guaranteed to 
set up its firewall before any interface gets configured.
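
To check I read this right, here is how I now picture the two sides of 
the sync point (a sketch, not the units as actually shipped):

# provider side: nftables.service pulls the passive target in and runs
# before it
[Unit]
Wants=network-pre.target
Before=network-pre.target

# consumer side: the network manager only orders itself after it
[Unit]
After=network-pre.target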




not sure what rdma-ndd does, can't comment on that.


My point was more : is it legitimate for 2 supposedly different units to pull 
in the same passive target ?



Anyway, both points above seem to confirm that one cannot take for granted 
that some passive target will be pulled in, correct ? So before ordering 
against it, one should make sure some unit pulls in the checkpoint ?


Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] Passive vs Active targets

2022-02-15 Thread Thomas HUMMEL

Was my question that silly ? ;-)

--
Thomas HUMMEL


Re: [systemd-devel] Antw: [EXT] [systemd‑devel] Why is using fstab the preferred approach according to systemd.mount man page?

2022-02-07 Thread Thomas HUMMEL




On 10/01/2022 21:50, Zbigniew Jędrzejewski-Szmek wrote:


Pretty much. There isn't any big benefit to using mount units (since the
fstab syntax supports all the important use cases), and people are familiar
with fstab. More tools support it too.


Hello,

well, although I'm not currently using them, I can see one :

it may be easier to configure .mount units independently (like dropping 
a config file into a .d/ directory) than to edit one single file, when 
done with tools like Ansible for instance, where you have to regexp-match 
lines to edit or use blockinfile-like strategies ?


Thanks

--
Thomas HUMMEL


[systemd-devel] Passive vs Active targets

2022-01-31 Thread Thomas HUMMEL

Hello,

I'm successfully using systemd with some non-trivial (for me!) unit 
dependencies, including some performing:

  custom local disk formatting and mounting at boot
  additional NIC configuration by running postscripts fetched from 
the network

  Infiniband initialisation
  NFS remote mounts
  Infiniband remote mounts
  HPC scheduler and its side services activation

and I've read 
https://www.freedesktop.org/software/systemd/man/systemd.special.html


Still I do not fully (or at all ?) understand the concept of passive vs 
active targets and some related points:


The link above states :

"Note specifically that these passive target units are generally not 
pulled in by the consumer of a service, but by the provider of the 
service. This means: a consuming service should order itself after these 
targets (as appropriate), but not pull it in. A providing service should 
order itself before these targets (as appropriate) and pull it in (via a 
Wants= type dependency)."


and also :

"Note that these passive units cannot be started manually, i.e. 
"systemctl start time-sync.target" will fail with an error. They can 
only be pulled in by dependency."


Since my first look at a passive dependency was network.target, which I 
indeed saw pulled in by NetworkManager.service (which orders itself 
Before= it), and which I compared with the active network-online.target, 
which pulls in NetworkManager-wait-online.service, I first deduced 
the following:


a) a passive target "does" nothing and serves only as an ordering checkpoint
b) an active target "does" actually something

I thought that a passive target could be seen as "published" by the 
corresponding provider

But this does not seem as simple as that:

For one, I see on my system that rsyslog.service also pulls in 
network.target (but orders itself After= it, and thus does not seem to 
be the actual "publisher" of it, as opposed to NetworkManager.service).


Then rpcbind.target seems to auto pull itself so without the Before 
ordering we see in the NetworkManager.service pulling network.target example


Also, it seems that there is more than one way to pull in a passive 
dependency (or maybe several providers which can "publish" it), like for 
instance network-pre.target, which is pulled in by both nftables.service 
and rdma-ndd.service.


Finally, my understanding is that some passive targets are not to be taken 
for granted, i.e. they may not be pulled in at all, and it is up to the user 
to check whether that is actually the case before ordering a unit against 
them. I'm not talking here about obvious targets we don't have because they 
are out of our scope (like not having remote-mount-related targets if the 
system is purely local), but about some we could think we have but maybe 
don't. For instance, on my system I see remote-fs-pre.target pulled in by 
nfs-client.target, but would remote-fs-pre.target be pulled in (by whom?) 
if I had only Infiniband remote mounts ?


So my questions revolve around the above points.

Can you help me figure out the correct way to see these concepts ?

Thanks for your help

--
Thomas HUMMEL



Re: [systemd-devel] Crond session, pam_access and pam_systemd

2020-10-26 Thread Thomas HUMMEL

Hello,

[I was off for one week]

On 16/10/2020 15:45, Mantas Mikulėnas wrote:


If I remember correctly, it's so that the main process would still be 
able to have pid 1 as its parent, without introducing an intermediate 
step in the process tree.


My understanding after thinking about it would rather be :

using PAMName= means that the process the service will execute (let's 
call it the service process) is to be considered PAM-ified even if 
it's not, which means a PAM session will be created for it.

As such, an sd-executor-like process has to do the beginning of the PAM 
calls on its behalf (the service process may not do any of these calls). 
And since this executor is replaced (because of exec()) with the actual 
service process, there is no other choice than to fork/exec the sd-pam 
handler before that (and thus monitor the PAM session "from the outside").


If I'm correct, this would be the reason, more than the direct pid 1 
parenthood you mentioned. Otherwise, in the case of standard services (not 
using PAMName=), this would work only with Type=forking 
services, wouldn't it ?


Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] Crond session, pam_access and pam_systemd

2020-10-16 Thread Thomas HUMMEL
of my guesses are correct I still have to figure out the exact 
problem I had when the user (who had a crontab) was not allowed to 
access systemd-user pam service.


Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] Crond session, pam_access and pam_systemd

2020-10-16 Thread Thomas HUMMEL

Hello,

if I try to sum up all of your answers, I come to the following 
understanding :


- sessions are always created via the pam_systemd module
- which is, in my case (sshd, crond), called via the password-auth stack 
include

- so crond, through pam_systemd, will cause a session to be created
- such a session is created via the sd-pam helper, responsible for the 
pam_open_session() and pam_close_session() calls

- such a worker is started by a systemd --user instance
- so a user crontab will ultimately cause either the use of the already 
running systemd --user instance of the user (because he is logged in or is 
lingering) OR the creation of a systemd --user instance for the purpose 
of the crond session creation


What I still don't quite get is :

- is it sd-pam, systemd --user, or the user@.service holding them 
which uses the systemd-user pam service name ?


- my understanding was that the pam service name is passed to pam_start() : 
in the user crontab case, my guess is that crond does this call with the 
crond service name (so pam knows which module stacks to run).
So this would mean something like the user@.service (or sd-pam) 
would itself call pam_start(systemd-user, ...) when called by pam_systemd ?


So basically the pam_systemd module would trigger another service which 
itself would go through pam with the systemd-user service name ?


- again, why is a first ssh login session able to create the user 
session without the user having to be listed for systemd-user in 
access.conf, whereas crond seems to need it (given no systemd --user 
was previously running in both cases) ?


Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] Crond session, pam_access and pam_systemd

2020-10-15 Thread Thomas HUMMEL

On 10/14/20 8:13 PM, Andrei Borzenkov wrote:


And both sshd and crond include pam_access in their configuration?


Yes, crond has the same session include of password-auth.

Thanks

--
Thomas HUMMEL



Re: [systemd-devel] Crond session, pam_access and pam_systemd

2020-10-14 Thread Thomas HUMMEL




On 14/10/2020 13:24, Andrei Borzenkov wrote:

On Wed, Oct 14, 2020 at 11:42 AM Thomas HUMMEL  wrote:


Hello,

thanks for your answer. It's getting clearer.

Still : why would the user crond runs on behalf of needs to be allowed
in access.conf to access the systemd-user service ?
My understanding is that the user@.service creation needs this
service type (or just the systemd --user creation ?) such a rule in
access.conf is not needed for let's say a ssh login first session ?



Does PAM configuration for SSH include pam_systemd on your system?


Yes, via the password-auth include :


sshd:


session    include      password-auth

password-auth:


-session   optional     pam_systemd.so

Thanks

--
Thomas HUMMEL


Re: [systemd-devel] Crond session, pam_access and pam_systemd

2020-10-14 Thread Thomas HUMMEL

Hello,

thanks for your answer. It's getting clearer.

Still : why would the user crond runs on behalf of need to be allowed 
in access.conf to access the systemd-user service ?
My understanding is that the user@.service creation needs this 
service type (or just the systemd --user creation ?); yet such a rule in 
access.conf is not needed for, let's say, a first ssh login session ?


Thanks for your help

--
Thomas HUMMEL


On 13/10/2020 20:05, Simon McVittie wrote:

On Tue, 13 Oct 2020 at 13:09:43 +0200, Thomas HUMMEL wrote:

Ok, so for instance, on my debian, when I see:


user@1000.service

│   │ ├─gvfs-goa-volume-monitor.service
│   │ │ └─1480 /usr/lib/gvfs/gvfs-goa-volume-monitor
│   │ ├─gvfs-daemon.service
│   │ │ ├─1323 /usr/lib/gvfs/gvfsd
│   │ │ ├─1328 /usr/lib/gvfs/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes
│   │ │ └─1488 /usr/lib/gvfs/gvfsd-trash --spawner :1.19
/org/gtk/gvfs/exec_spaw
│   │ ├─gvfs-udisks2-volume-monitor.service
│   │ │ └─1453 /usr/lib/gvfs/gvfs-udisks2-volume-monitor
│   │ ├─xfce4-notifyd.service
│   │ │ └─1355 /usr/lib/x86_64-linux-gnu/xfce4/notifyd/xfce4-notifyd

those services jobs are started by the systemd --user in this user init
scope, correct  ?


Yes. In many cases they're started on-demand (for example because
something talks to them over D-Bus) rather than being started "eagerly".


My understanding now after your explanation is that crond, in the case of a
user crontab and pam_systemd in the crond stack, will create a session and
thus instanciate a systemd --user if not already present (like in the
lingered case)


Yes. If uid 1000 is already logged in or is flagged for lingering,
and a cron job for uid 1000 starts, the cron job will reuse their
pre-existing systemd --user. If uid 1000 does not already have a
systemd --user, crond's PAM stack will result in a systemd --user being
started before the cron job, and stopped after the cron job.


Do you confirm that, in the case of crond this systemd --user is useless ?


It might be useful, it might be useless. It depends what's in your
cron jobs.

For example, if you have a cron job that uses GLib to act on SMB shares or
trashed files or anything like that, then it will need gvfs-daemon.service
(just like the fragment of a process tree you quoted above) to be able
to access smb:// or trash:// locations.

 smcv




Re: [systemd-devel] Crond session, pam_access and pam_systemd

2020-10-13 Thread Thomas HUMMEL

Hello, thanks again for your answer (and for your patience ;-))

On 12/10/2020 19:48, Mantas Mikulėnas wrote:

Yes, but it is *not* a top level for *all* of the user's processes – 
just for those that are managed through systemctl --user.


Ok, so for instance, on my debian, when I see:


user@1000.service

│   │ ├─gvfs-goa-volume-monitor.service
│   │ │ └─1480 /usr/lib/gvfs/gvfs-goa-volume-monitor
│   │ ├─gvfs-daemon.service
│   │ │ ├─1323 /usr/lib/gvfs/gvfsd
│   │ │ ├─1328 /usr/lib/gvfs/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes
│   │ │ └─1488 /usr/lib/gvfs/gvfsd-trash --spawner :1.19 
/org/gtk/gvfs/exec_spaw

│   │ ├─gvfs-udisks2-volume-monitor.service
│   │ │ └─1453 /usr/lib/gvfs/gvfs-udisks2-volume-monitor
│   │ ├─xfce4-notifyd.service
│   │ │ └─1355 /usr/lib/x86_64-linux-gnu/xfce4/notifyd/xfce4-notifyd

those services' jobs are started by the systemd --user in this user's init 
scope, correct ?



So you mean that any service in this placeholder can and do use the
sd-pam helper to call pam_open_session() and pam_close_session instead
of doing it themselves, passing it the relevant PAMName ?


No, I'm talking about system (global) services.

user@.service, itself, is a system service.


Ok, it is a system service, but why would other system services use the 
sd-pam helper in the init scope inside a user service ?




I'm not sure I understood in which cases this PAM service name is used


It's used in only one case: when starting the "user@.service" unit.


But in a regular ssh session, this service gets started without the need 
for the user to have (in access.conf) access to the systemd-user pam service.


My understanding now, after your explanation, is that crond, in the case 
of a user crontab and with pam_systemd in the crond stack, will create a 
session and thus instantiate a systemd --user if not already present 
(like in the lingering case).


Do you confirm that, in the case of crond, this systemd --user is useless 
? Is it just created because it is the generic way a session (and the 
accompanying user@.service) is created ?


If correct, I still don't get why the user would need to be explicitly 
allowed (in access.conf) to access the systemd-user pam service while it's 
not needed if he had ssh'd in.





Yes, they're completely separate PAM instances.


Ok but again, the crond pam session has nothing to do with sd-pam or 
does it ?




Ok so it's this service (systemd --user) which uses the systemd-user
PAM
service name ? Passed to the generic sd-pam worker ? Correct ?


Yes.


You said above that it was only at the creation of this service ?

Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] Crond session, pam_access and pam_systemd

2020-10-12 Thread Thomas HUMMEL

Thanks for your answer. Still I'm quite confused.

On 12/10/2020 18:21, Mantas Mikulėnas wrote:


It's a worker process which calls pam_open_session() and 
pam_close_session() on behalf of the user@.service unit.


Well, I may be misunderstanding, but this user@.service seems like a 
top-level (for this user) placeholder for various other service units 
and/or scopes, among which the init.scope corresponding to the sd-pam and 
systemd --user processes.


So you mean that any service in this placeholder can and does use the 
sd-pam helper to call pam_open_session() and pam_close_session() instead 
of doing it itself, passing it the relevant PAMName= ?



So when you see sd-pam under user@.service, that means it's 
handling the "systemd-user" PAM service.


I'm not sure I understood in which cases this PAM service name is used


They're different but related. Systemd user sessions are always managed 
through PAM (the pam_systemd module), so whenever cron calls 
pam_open_session() it indirectly starts a systemd session as well.


You mean crond, running as the user who has his own crontab, does call 
pam_open_session(), which is defined in the pam_systemd module ?
If this is correct, this has indeed nothing to do with the sd-pam 
pam_open_session() mentioned above, or does it ?





- what does the first error message refers to and why does the
systemd-user pam service name get passed ? and by which systemd (system
or user) ?


Your systemd --user instance is run as a service


Yes, I understood that. But again, I'm not really sure what services or 
other units it is supposed to run if I haven't defined custom user 
services. Is it responsible for running things like the user's UI terminals, 
for instance ?



Because of that, the service needs to have its own PAM service name and 
makes its own PAM calls independently from crond or anything else.


Ok so it's this service (systemd --user) which uses the systemd-user PAM 
service name ? Passed to the generic sd-pam worker ? Correct ?




- what is the failing systemd job the second message refers to ? Does
this mean that the crond "session" gets created by the systemd --user
instance (as some gnome apps in other contexts for instance) ?


No, it's mostly the opposite – the starting of user@.service is 
triggered by crond opening its PAM session.


Sorry, I don't get it : what service exactly is started ? crond opening 
its PAM session does not cause a systemd --user to be instantiated, or 
does it ? I thought the only way to have a systemd --user was through 
its creation via pam_systemd notifying systemd-logind at a user's first 
login (and/or by lingering the user).


Thanks for your help

--
Thomas HUMMEL


[systemd-devel] Crond session, pam_access and pam_systemd

2020-10-12 Thread Thomas HUMMEL

Hello,

Using systemd-239 on CentOS 8.2 I'm trying to figure out what exactly 
happens when a cron "session" is created. In particular, what 
corresponds to the following error messages I get while running a user 
crontab :


2020-10-12T14:27:01.031334+02:00 maestro-orbit systemd: 
pam_access(systemd-user:account): access denied for user `toto' from 
`systemd-user'


2020-10-12T14:27:01.036959+02:00 maestro-orbit crond[135956]: 
pam_systemd(crond:session): Failed to create session: Start job for unit 
user@1000.service failed with 'failed'


- What I'm doing :

ssh to the host, sudo -u toto, crontab -e, exit

so when toto's crontab gets executed toto has no running sessions

- access.conf, for cron, has the line

+:ALL:cron crond

- If, I add

+:toto:systemd-user

the error messages do not occur anymore.

My understanding is that for a standard logged-in user, pam_systemd 
registers the user session with systemd-logind, and each logged-in user 
has a user slice holding all his sessions' scopes plus an init scope 
holding a user@.service which in turn holds at least a user 
instance of systemd (systemd --user) and "sd-pam".


So my questions are:

- what is sd-pam ?
- is a crond session different from a user session ?
- what pam service name does crond use ?
- what does the first error message refer to, and why does the 
systemd-user pam service name get passed ? and by which systemd (system 
or user) ?
- what is the failing systemd job the second message refers to ? Does 
this mean that the crond "session" gets created by the systemd --user 
instance (as some gnome apps in other contexts for instance) ?

- does the line I added to access.conf make sense at all ?

I also noticed that if the user is lingering there is no such error 
message (which makes me think the crond session is created 
through the systemd --user instance running a job).
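
(For reference, "lingered" here just means the standard loginctl 
mechanism:

# loginctl enable-linger toto
)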


Thanks for your help and sorry for the confusion

--
Thomas HUMMEL


Re: [systemd-devel] systemd.net-naming-scheme change after update

2020-08-14 Thread Thomas HUMMEL

Thanks for your answer

On 8/11/20 5:43 PM, Michal Sekletar wrote:
On Wed, Aug 5, 2020 at 4:12 PM Thomas HUMMEL 

On RHEL/CentOS 8 biosdevname naming is not used unless it is explicitly 
enabled on the kernel command line using biosdevname=1. 


Indeed I've read the udev rule too fast. No biosdevname involved.


In the case of an updated system net_id failed to generate a name based 
on an on board index provided by the firmware. Hence naming falls back 
to the next naming scheme which is based on PCI topology. I can't 
explain the difference in names between updated and newly provisioned 
system (provided they are exactly identical in terms of HW, firmware, 
...). 


Yes, this is the exact same host.
But as you said, it seems it could only be some kind of race condition, 
as in one case the firmware correctly provided the index, doesn't it ?



To prove this hypothesis you need to modify net_id so that it would log 
about missing attributes. Roughly here,

https://github.com/systemd-rhel/rhel-8/blob/master/src/udev/udev-builtin-net_id.c#L228

you need to call log_error() or something like that and only then return 
-ENOENT.



Unfortunately this host is not that often available to play with, so I'm 
not sure I can test this.




More details in commit message,

https://patchwork.kernel.org/patch/3733841/


Thanks. So basically, with this attribute the kernel can say that the 
name has been set by itself, and thus that it may need to be renamed if one 
wants predictable names ?


However I'm not sure I understand which way it works. Here's how I see 
a simple case like an onboard ethernet NIC :


a) kernel is the first to see the device
b) it sends the corresponding event to userspace udev
c) udev via its rules may rename the device
d) kernel sets the name_assign_type attribute accordingly

am I correct ?

If so,

- in a) : does it name the device or not ? with just an enumerated 
suffix (ethX) ?


- in c) could %k already be something different than ethX ?

- in a case where udev applies either onboard or physical path policy, 
would d) be _USER or _RENAMED ?



Thanks for your help

--
TH


[systemd-devel] systemd.net-naming-scheme change after update

2020-08-05 Thread Thomas HUMMEL

Hello,

I've read about consistent network device naming here :

- 
https://www.freedesktop.org/software/systemd/man/systemd.net-naming-scheme.html


- https://www.freedesktop.org/software/systemd/man/systemd.link.html#

and here

- 
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/configuring_and_managing_networking/index#consistent-network-interface-device-naming_configuring-and-managing-networking


But I still cannot explain how exactly it works on CentOS, nor a 
change I experienced in the onboard ethernet device's name when updating 
my system (running systemd-udev-239) from CentOS 8.1 to CentOS 8.2 (i386) 
with one method but not with another one :


Starting from CentOS 8.1 where the onboard ethernet device name was 
eno1np0, I tried :


a) to reinstall with a kickstart based mechanism (via the xCAT HPC 
provisioning software) pointing to the 8.2 repos


-> device name stayed the same once booted in CentOS 8.2

# udevadm info /sys/class/net/eno1np0 | grep ID_NET
E: ID_NET_DRIVER=bnxt_en
E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
E: ID_NET_NAME=eno1np0
E: ID_NET_NAME_MAC=enx3cecef4247de
E: ID_NET_NAME_ONBOARD=eno1np0
E: ID_NET_NAME_PATH=enp198s0f0np0

This is what I was expecting.

b) to update to 8.2 running yum update

-> device name changed to enp198s0f0np0

- I did not change any udev rules
- I did not change NamePolicy (NamePolicy=kernel database onboard slot path)
- I did not disable anything relative to consistent naming with neither 
net.ifnames=0 nor biosdevname=0


According to this policy order, it seems legit to me to end up with the
onboard naming scheme. But how could I, in the yum update case, have ended
up with a path naming scheme ?
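
As an aside, if the goal is just to keep the name stable across updates, my
reading of systemd.link(5) is that a .link file pinning the name (matching on
the MAC taken from the ID_NET_NAME_MAC value above) should sidestep the
NamePolicy question entirely. Untested sketch:

# /etc/systemd/network/70-eno1np0.link
[Match]
MACAddress=3c:ec:ef:42:47:de

[Link]
Name=eno1np0

The file name just has to sort before 99-default.link so that it matches first.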


More generally, I don't fully understand how the rename is supposed to be
done, and/or the link between NamePolicy and the udev renaming rules, as
the latter don't seem to apply to the ID_NET_NAME_* names.


For instance, on CentOS, you go through the following rules :

- 60-net.rules calling /lib/udev/rename_device

which I don't think matches my case as my config file has no HWADDR var set :

# Generated by parse-kickstart
TYPE="Ethernet"
DEVICE="eno1np0"
UUID="8482a953-65dc-4814-b6b4-a9d9c7edc4a1"
ONBOOT="yes"
BOOTPROTO="dhcp"
IPV6INIT="yes"

So I should go to

- 71-biosdevname.rules calling biosdevname

SUBSYSTEMS=="pci", PROGRAM="/sbin/biosdevname --smbios 2.6 --nopirq 
--policy physical -i %k", NAME="%c"  OPTIONS+="string_escape=replace" 



which should in my case set the NAME var to em1 as this is the output of 
the command when run manually


- 75-net-description.rules which calls IMPORT{builtin}="net_id"

-> is this what sets the ID_NET_NAME_* udev properties ?

Does this step somehow use the NAME var set after biosdevname ?

- 80-net-setup-link.rules

IMPORT{builtin}="net_setup_link"

NAME=="", ENV{ID_NET_NAME}!="", NAME="$env{ID_NET_NAME}"

What I understand here, in my case, is that NAME is not empty (because of
the biosdevname step), so I don't understand why I don't end up with em1
instead of the onboard-style name. This would mean ID_NET_NAME has been set
in a previous step ? What was the use of the biosdevname step then ?
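
By the way, one way to see exactly what the net_id builtin computes for a
given interface (and therefore what 75-net-description.rules imports),
without rebooting, seems to be:

# udevadm test-builtin net_id /sys/class/net/eno1np0

which should print the ID_NET_NAME_ONBOARD / ID_NET_NAME_PATH / ... properties.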



Finally, what does "If the kernel claims that the name it has set for a
device is predictable" mean
(https://www.freedesktop.org/software/systemd/man/systemd.link.html#) ?


And what is the kernel name (%k) : is it always ethX ?

Thanks for your help

--
Thomas HUMMEL






Re: [systemd-devel] systemd-journald, syslog.socket and service activation

2020-07-03 Thread Thomas HUMMEL




On 02/07/2020 20:48, Andrei Borzenkov wrote:



Once again - dependencies in systemd are between jobs, not between units.


Ok. I may have missed some docs, but I've read several man sections
(like systemd.service(5) and so on) as well as some 0pointer blog
articles, and I did experiment a lot.
I did not see this explained as clearly as you do here. On the contrary,
the documentation tends to focus on units (at least that's how I read it
the first time), hence the confusion ? In fact, when reading those for the
first time I was left wanting to know more about transactions and jobs
(which are mentioned, but only briefly). [Note that this is by no means a
criticism, just my feedback]. Watching debug logs gave me hints, but they
were not sufficient to come to the understanding you give me right now.



Rule 1: "B requires A" means "when starting B also submit start job for
A and if this job failed *before we start activating B* cancel
activation of B". If there is already start job for A for other reasons,
the first part does nothing.


Ok. What could these other reasons be, and do you mean that such a reason
would itself have already submitted the start job for A and would be
handling it itself ?




Rule 2: "B after A" means "if start job for A is present in job queue
wait for this job to complete before proceeding with start job for B".

In your case on boot you have general Before dependency syslog.socket -
sockets.target - basic.target - rsyslog.service and start request for
both syslog.socket and rsyslog.service are queued. Start job for
rsyslog.service is always delayed at least after basic.target (rule 2).
At this point systemd already tried and failed to start syslog.socket,
so rule 1 applies.


Ok. So I guess my test where, after reboot, I run 4 or 5 systemctl start
rsyslog.service commands and only the last one succeeds corresponds to the
"race condition" you described above ?


Thanks a lot for your explanations. Makes more sense now.

--
Thomas HUMMEL


Re: [systemd-devel] systemd-journald, syslog.socket and service activation

2020-07-02 Thread Thomas HUMMEL




On 02/07/2020 19:00, Andrei Borzenkov wrote:


After=syslog.socket will exist only if rsyslog.service is aliased to
syslog.service and your problem was when you removed this alias.


Correct (I did miss this simple thing) !



On boot activation of syslog.socket happens much earlier than activation
of rsyslog.service which gives systemd enough time to register failure
of syslog.socket.



Hmmm, but since, as you said above, the After= dependency was not taken
into account because there is no alias, registering the failure should not
prevent rsyslog from being activated (at boot, it ends up being dead) ?



When you start them manually both jobs are submitted
at the same time so activation of rsyslog.service has already happened
when activation of syslog.socket fails. It is already too late for "this
unit will not be started".


I hadn't indeed thought about these timing differences but, again, without
the After= having any effect, it should not matter, should it ?



So I'm still convinced I'm missing something obvious...

Thanks for your help

--
Thomas HUMMEL



Re: [systemd-devel] systemd-journald, syslog.socket and service activation

2020-07-02 Thread Thomas HUMMEL




On 02/07/2020 16:44, Andrei Borzenkov wrote:


This is common misunderstanding. Dependencies are between jobs, not
between units. Requires means systemd will submit additional job for
dependent unit - nothing more nothing less. Unless systemd is also told
to wait for result of this additional job, both are started in parallel
and failure of dependent job does not affect other unit in any way.


You're right. Sorry if I was not clear.
Note however that the systemd.unit doc talks about units, not jobs, for Requires=.

Anyway, it turns out that systemctl list-dependencies --after
rsyslog.service also shows an ordering dependency on sockets.target /
syslog.socket.


So, I might be wrong but shouldn't Requires= + After= make the rsyslog 
service fail if syslog.socket fails ?


# systemctl show -p Requires,After rsyslog.service
Requires=syslog.socket system.slice sysinit.target
After=system.slice sysinit.target syslog.socket network.target 
basic.target network-online.target


Doc says:
"If this unit gets activated, the units listed will be activated as 
well. If one of the other units fails to activate, and an ordering 
dependency After= on the failing unit is set, this unit will not be 
started. "


That's what I meant and, though it does seem to work that way at boot, I
seemed to experience the contrary when manually starting rsyslog.service...


So, as you said, I must be misunderstanding something...

Thanks for your help

--
Thomas HUMMEL



[systemd-devel] systemd-journald, syslog.socket and service activation

2020-07-02 Thread Thomas HUMMEL
Each of the first start attempts prints the "A dependency job for
rsyslog.service failed. See 'journalctl -xe' for details." message, but
ultimately, the last one will launch the rsyslog service without any message :


# systemctl start rsyslog
A dependency job for rsyslog.service failed. See 'journalctl -xe' for 
details.

# systemctl start rsyslog
A dependency job for rsyslog.service failed. See 'journalctl -xe' for 
details.

# systemctl start rsyslog
A dependency job for rsyslog.service failed. See 'journalctl -xe' for 
details.

# systemctl start rsyslog
A dependency job for rsyslog.service failed. See 'journalctl -xe' for 
details.

# systemctl start rsyslog
#

The rsyslog service will end up running, but there is still no journald
syslog socket (which seems normal considering the dependency, but weird as
I did not get any message about it from the last start command)


# ls -l /run/systemd/journal/syslog
ls: cannot access '/run/systemd/journal/syslog': No such file or directory


What am I missing ?

Note: rsyslog service is of Type notify and has a Restart value of no-fail
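
In case it's relevant, I suppose the state of the failing socket unit itself
can be inspected with:

# systemctl status syslog.socket
# journalctl -b -u syslog.socket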

Thanks for your help

--
Thomas HUMMEL





Re: [systemd-devel] hostnamectl reapplying the same hostname

2020-06-16 Thread Thomas HUMMEL




On 16/06/2020 10:08, Lennart Poettering wrote:

On Mo, 25.05.20 16:19, Thomas HUMMEL (thomas.hum...@pasteur.fr) wrote:


Hello,

the point below has been buried at the end one of a previous thread. So feel
free to ignore it if you find it irrelevant.

With systemd-239 on linux 4.18.0 (CentOS 8.1), why does hostnamectl --static
set-hostname <name> instantly set the transient hostname to <name> *only*
when <name> is not the current static hostname ?


There's a shortcut in place: if you change a hostname to what it is
already set to things are NOPs, and won't generate security incidents
and so on.


Ok, get it. Thanks.

--
Thomas HUMMEL


Re: [systemd-devel] Ordering a service before remote-fs-pre.target makes it quite longer

2020-05-25 Thread Thomas HUMMEL



On 16/05/2020 08:16, Andrei Borzenkov wrote:

15.05.2020 12:57, Thomas HUMMEL wrote:


In other words : is it a bad practice to order a home made service
before remote-fs-pre.target ?



Why would it be? The very reason remote-fs-pre.target was added is to
allow services to be reliably started before remote mounts.


Hello, just to let you know the "end of the movie":

That was exactly my reasoning (ordering before remote mounts). A
coworker helped me debug the faulty service: in fact this service uses
ssh to sync files to the node, in addition to setting up some NICs (like
IB, or putting a static IP address on the eth NIC). The explanation was that
the user slice depends on systemd-user-sessions.service, which is ordered
After= remote-fs.target.


So pam-systemd would fail to create a session (and ssh would fall back to
some other mechanism, I reckon), hence the long timeout delay.
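
For reference, that ordering can be checked on the node with:

# systemctl show -p After systemd-user-sessions.service

which indeed lists remote-fs.target among others.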


So, you were right of course : nothing special aside from a service which
was not designed to be run before user (even root) sessions.


Thanks

--
TH



[systemd-devel] hostnamectl reapplying the same hostname

2020-05-25 Thread Thomas HUMMEL

Hello,

the point below was buried at the end of a previous thread, so feel free
to ignore it if you find it irrelevant.


With systemd-239 on linux 4.18.0 (CentOS 8.1), why does hostnamectl
--static set-hostname <name> instantly set the transient hostname to
<name> *only* when <name> is not the current static hostname ?


Seems to me different from the caching mechanism which explained name
sync delays the other way around, no ?


Example:


# cat /proc/sys/kernel/hostname
foobar
# hostnamectl
   Static hostname: toto
Transient hostname: foobar
 Icon name: computer-server
   Chassis: server
Machine ID: 40c61e5c178b444598b68284b02d4148
   Boot ID: efd28246f1dd4a069db77e3f8f1399dc
  Operating System: CentOS Linux 8 (Core)
   CPE OS Name: cpe:/o:centos:centos:8
Kernel: Linux 4.18.0-147.5.1.el8_1.x86_64
  Architecture: x86-64
# hostnamectl --static set-hostname toto
# cat /proc/sys/kernel/hostname
foobar

whereas

# hostnamectl --static set-hostname titi
# cat /proc/sys/kernel/hostname
titi


Thanks

--
TH.


Re: [systemd-devel] Ordering a service before remote-fs-pre.target makes it quite longer

2020-05-15 Thread Thomas HUMMEL



On 14/05/2020 07:35, Andrei Borzenkov wrote:

It does not match your graphs. Your service is apparently ordered after
network-online.target (not after network.target) and startup is most
certainly initiated before rsyslog.service. Not that it explains anything,
but at least you need to provide accurate facts when you ask a question.



Hello,

Well, this is odd, as I didn't myself express such a dependency on
network-online.target.


The only one I can think of comes indirectly from the
Before=beegfs-client.service, on the runs I did with
beegfs-client.service enabled. Under those conditions, the service which
takes a long time has a Before=beegfs-client.service dependency, and
beegfs-client.service itself is ordered after network-online.target. But on
the graphs I sent I see no beegfs-client, so I do think I sent graphs with
beegfs-client disabled, thus no explicit network-online.target dependency...



it is really outside of systemd scope. Systemd has no control
over what your service does once ExecStart is spawned. You need to debug
your service to find out what happens.


Of course. This was in no way pointing a finger at systemd.

My initial question was whether I might be ordering things around
remote-fs-pre.target in a nonsensical manner.


In other words : is it a bad practice to order a home made service 
before remote-fs-pre.target ?



Thanks for your help.

--

TH




[systemd-devel] Ordering a service before remote-fs-pre.target makes it quite longer

2020-05-13 Thread Thomas HUMMEL

Hello,

I'm using the xCAT (xcat.org) software to provision stateless HPC CentOS
8.1 nodes. Via a systemd service called xcatpostinit1.service, it makes it
possible, at boot time, to run so-called postscripts to, for instance


- configure eth nic with a manual (vs dhcp) NetworkManager profile
- configure Infiniband nic
- sync files from the xcat server.

I'm using it exactly for the 3 above examples. File syncing is quite 
light as it consists in syncing pre-created ssh hostkeys.


By default this service has got the following ordering dependency:

After=network.target rsyslog.service

and doesn't pull any dependency

For my own need, I added the following:

Before=beegfs-client.service
Before=beegfs-helperd.service
After=sshd-keygen.target

This works fine.

As one of the postscripts this service runs adds a NetworkManager profile
(to switch automatically from a dhcp-originating boot profile to a manual
one with the same IP address), and since I NFS-mount some filesystems, I
thought I should order units so as to set up the network first and only
then mount the remote filesystems.


I thus did add:

Before=remote-fs-pre.target
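
For reference, a drop-in is one way to carry all of these additions (the
earlier Before=/After= lines and this one) without touching the packaged
unit file. Sketch, the file name is mine:

# /etc/systemd/system/xcatpostinit1.service.d/local-ordering.conf
[Unit]
Before=beegfs-client.service
Before=beegfs-helperd.service
Before=remote-fs-pre.target
After=sshd-keygen.target

followed by a systemctl daemon-reload.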

This has a funny result : it ultimately works, but xcatpostinit1.service
then takes 1min+ to finish vs 20sec when the latter dependency is not
stated.


Please find here two systemd-analyze plot SVGs reflecting the boot with
Before=remote-fs-pre.target (boot-dep.svg) and without it (boot-nodep.svg)


http://dl.pasteur.fr/fop/GCPbmpii/boot-dep.svg

http://dl.pasteur.fr/fop/AcfI7CSh/boot-nodep.svg

I did spend a lot of time trying to figure out why such a difference, 
all things being equal otherwise.


The part of the service which takes time is the syncing of files which

- occurs before the network reconfiguring
- consists in rsync'ing files from server to node
- triggered by a REST API call (http/80), from what I saw in the sources

I did not see any cycles, nor anything that caught my eye when turning on
systemd debug mode either.


As remote-fs-pre.target is a special target, I thought I might have
misused it.


Can you help me figure out why the difference ?

Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] systemd-hostnamed/hostnamectl and transient hostname change

2020-05-11 Thread Thomas HUMMEL

On 5/6/20 11:51 AM, Thomas HUMMEL wrote:

On 5/4/20 3:57 PM, Thomas HUMMEL wrote:

but

hostnamectl --static set-hostname 'static' where current static 
hostname is already 'static' then transient hostname is never set.



Hello,

am I wrong on this one ?


Hello, sorry to insist, I just wanted to know whether this was intended or
not and, if so, what the reasoning behind it was. Since changing the static
hostname also changes the transient one, which is fine, shouldn't
reapplying the same static hostname, when the transient is currently
different, also change the transient ?


Thanks

--
Thomas HUMMEL



Re: [systemd-devel] systemd-hostnamed/hostnamectl and transient hostname change

2020-05-06 Thread Thomas HUMMEL

On 5/4/20 3:57 PM, Thomas HUMMEL wrote:

but

hostnamectl --static set-hostname 'static' where current static hostname 
is already 'static' then transient hostname is never set.


What do you think about it ?


Hello,

am I wrong on this one ?

Thanks for your help

--
Thomas HUMMEL


Re: [systemd-devel] local-fs and remote-fs targets / passive active units

2020-05-05 Thread Thomas HUMMEL

On 5/5/20 7:41 PM, Andrei Borzenkov wrote:


a) Before= does not pull anything anywhere.


Yes I know sorry I did not use the correct term. I did not mean that.


b) as you already found, by default every service is ordered after
local-fs.target. You need DefalutDependencies=no if you want to start
your service that early.


Well, my first naive thought was: since the fstab-generated mount units get
Before=local-fs.target, let's add this to my service, which locally mounts
some filesystem it creates.
But it then created a cycle with systemd-tmpfiles-setup, which depends on
local-fs.target, and my service, which somehow got a dependency on
sysinit.target, which in turn depends on systemd-tmpfiles-setup...


I'll think about another way to do it.

Sorry for the useless bothering...and thanks again for your answer.

--
TH



Re: [systemd-devel] local-fs and remote-fs targets / passive active units

2020-05-05 Thread Thomas HUMMEL

On 5/5/20 5:27 PM, Thomas HUMMEL wrote:

On 5/5/20 5:15 PM, Thomas HUMMEL wrote:

-> this seems to be like an actual run and not only the queuing of a 
job into the transaction which would be discarded afterwards when the 
cycle is discovered ?


Ok, I figured out this one : I was confusing the
systemd-tmpfiles-setup.service from the initrd and the one from the actual
image.


Still my question about why the cycle


Sorry, I must have been blind : the cycle is obvious.

and the explicit pulling of 
Before=local-fs.target stands.


This leads me to the initial questions. Sorry for the confusion...

--
TH


Re: [systemd-devel] local-fs and remote-fs targets / passive active units

2020-05-05 Thread Thomas HUMMEL

On 5/5/20 5:15 PM, Thomas HUMMEL wrote:

-> this seems to be like an actual run and not only the queuing of a job 
into the transaction which would be discarded afterwards when the cycle 
is discovered ?


Ok, I figured out this one : I was confusing the
systemd-tmpfiles-setup.service from the initrd and the one from the actual
image.


Still my question about why the cycle and the explicit pulling of 
Before=local-fs.target stands.


Thanks

--
Thomas HUMMEL


Re: [systemd-devel] local-fs and remote-fs targets / passive active units

2020-05-05 Thread Thomas HUMMEL

On 4/28/20 5:36 PM, Thomas HUMMEL wrote:

3) regarding the local-fs and remote-fs targets : I'm not really sure if
any fits in either passive or active units.


Hello again,

regarding local-fs.target : is it legit for a custom service unit to 
pull it in with a Before=local-fs.target (no Wants or Requires) ?


For instance, I created a simple oneshot test service (dodo.service)
which just sleeps 20s and states Before=local-fs.target. I did that to
emulate another service I created which formats an SSD and locally mounts
it (but outside of fstab, so it would not get the automatic ordering
dependency that systemd-fstab-generator provides).
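
Roughly, dodo.service looks like this (a reconstructed sketch: Description=
and the [Install] section are approximations, the rest is as described above):

[Unit]
Description=dummy test service ordered before local-fs.target
Before=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/bin/sleep 20

[Install]
WantedBy=multi-user.target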


Note : this is on a stateless host, so fstab states

rootfs / tmpfs defaults,size=3500M 0 0

1) it seem to create lots of ordering cycles like this one (among many)

dodo.service: Job systemd-tmpfiles-setup.service/start deleted to break 
ordering cycle starting with dodo.service/start


I can't really see why, even though I can see that systemd-tmpfiles-setup
also has a Before=local-fs.target ordering dependency ?


2) given the job ultimately gets deleted because of this cycle, I cannot
understand why it seems to be executed anyway :



[root@maestro-1002 systemd]# journalctl  | grep -i 
systemd-tmpfiles-setup.service
May 05 15:45:56 localhost systemd[1]: systemd-tmpfiles-setup.service: 
Installed new job systemd-tmpfiles-setup.service/start as 12
May 05 15:45:56 localhost systemd[1]: systemd-tmpfiles-setup.service: 
Passing 0 fds to service
May 05 15:45:56 localhost systemd[1]: systemd-tmpfiles-setup.service: 
About to execute: /usr/bin/systemd-tmpfiles --create --remove --boot 
--exclude-prefix=/dev
May 05 15:45:56 localhost systemd[1]: systemd-tmpfiles-setup.service: 
Forked /usr/bin/systemd-tmpfiles as 863
May 05 15:45:56 localhost systemd[1]: systemd-tmpfiles-setup.service: 
Changed dead -> start
May 05 15:45:56 localhost systemd[863]: systemd-tmpfiles-setup.service: 
Executing: /usr/bin/systemd-tmpfiles --create --remove --boot 
--exclude-prefix=/dev
May 05 15:45:57 localhost systemd[1]: systemd-tmpfiles-setup.service: 
Child 863 belongs to systemd-tmpfiles-setup.service.
May 05 15:45:57 localhost systemd[1]: systemd-tmpfiles-setup.service: 
Main process exited, code=exited, status=0/SUCCESS
May 05 15:45:57 localhost systemd[1]: systemd-tmpfiles-setup.service: 
Changed start -> exited
May 05 15:45:57 localhost systemd[1]: systemd-tmpfiles-setup.service: 
Job systemd-tmpfiles-setup.service/start finished, result=done
May 05 15:45:57 localhost systemd[1]: Got cgroup empty notification for: 
/system.slice/systemd-tmpfiles-setup.service
May 05 15:46:31 maestro-1002.maestro.pasteur.fr systemd[1]: 
systemd-tmpfiles-setup.service: Changed dead -> exited



-> this looks like an actual run and not only the queuing of a job into
the transaction that would be discarded afterwards when the cycle is
discovered ?


However, once the host has booted, I can see that at least one tmpfile
(from a tmpfiles.d config file) has indeed not been created as it should
have been (and as it actually is when /usr/bin/systemd-tmpfiles --create
--remove --boot --exclude-prefix=/dev is run manually).


May 05 15:46:32 maestro-1002.maestro.pasteur.fr systemd[1]: 
systemd-tmpfiles-setup.service: Installed new job 
systemd-tmpfiles-setup.service/stop as 84
May 05 15:46:32 maestro-1002.maestro.pasteur.fr systemd[1]: 
systemd-tmpfiles-setup.service: Changed exited -> dead
May 05 15:46:32 maestro-1002.maestro.pasteur.fr systemd[1]: 
systemd-tmpfiles-setup.service: Job systemd-tmpfiles-setup.service/stop 
finished, result=done
May 05 15:46:34 maestro-1002.maestro.pasteur.fr systemd[1]: 
dodo.service: Found dependency on systemd-tmpfiles-setup.service/start
May 05 15:46:34 maestro-1002.maestro.pasteur.fr systemd[1]: 
dodo.service: Job systemd-tmpfiles-setup.service/start deleted to break 
ordering cycle starting with dodo.service/start
May 05 15:51:45 maestro-1002 systemd[1]: Preset files say disable 
systemd-tmpfiles-setup.service.


Can you help me figure out my misunderstanding ?

Thanks for your help

--
Thomas HUMMEL



Re: [systemd-devel] systemd-hostnamed/hostnamectl and transient hostname change

2020-05-04 Thread Thomas HUMMEL

On 4/30/20 10:43 AM, Zbigniew Jędrzejewski-Szmek wrote:


Lennart opened a PR to remove the caching:
https://github.com/systemd/systemd/pull/15624.


Great!



The documentation is wrong. The code in hostnamed sets the kernel
hostname when setting the static one. This was changed in
https://github.com/systemd/systemd/commit/c779a44222:


Well, in my experience, it does indeed set the transient hostname when the
static one is set (which makes sense to me given the semantics of what a
static hostname should be), but not when you set it to the same value as
the current one :


hostnamectl --static set-hostname 'static' when no static hostname is
set will set the transient as well (immediately, but with some delay for
hostnamed to see it)


hostnamectl --static set-hostname 'newstatic' when the current static
hostname is different than 'newstatic' : same as above


but

hostnamectl --static set-hostname 'static' where current static hostname 
is already 'static' then transient hostname is never set.


What do you think about it ?



(I'm assuming you're not unhappy, just
confused by the unexpected results...). Opinions?


I may be wrong, but I experienced what I think is a side effect of
hostnamed not immediately catching up with the transient hostname change :


As a matter of fact, NetworkManager uses systemd-hostnamed as a proxy
service to get/set the transient hostname. When this service is, for
some reason, not available, it falls back to figuring it out itself by
calling gethostname(3).


So, depending on

- the value of hostname-mode in NetworkManager.conf(5)
- the status of systemd-hostnamed service

one, as I myself did, could experience different transient hostname 
settings.
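
For context, the relevant NetworkManager.conf setting on my nodes is the
following (it lives in the [main] section, if I'm not mistaken):

[main]
hostname-mode=none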


I did try to discuss this in the networkmanager list here :

in particular starting from this post (the post above just shows how much
I was confused)


https://mail.gnome.org/archives/networkmanager-list/2020-April/msg00031.html

of this thread

https://mail.gnome.org/archives/networkmanager-list/2020-April/msg00022.html


Thanks for your help

--
Thomas HUMMEL



[systemd-devel] local-fs and remote-fs targets / passive active units

2020-04-28 Thread Thomas HUMMEL

Hello,

Reading systemd.special(7) and using systemctl show -p 
After,Before,Wants,Requires ..., I tried to figure out if my following 
understanding is true:


doc says:

- an active target is when the consumer pulls in the dependency (ex: 
network-online.target pulled in by nfs-mountd.service)


- a passive target is when the producer pulls in the dependency (ex:
network.target pulled in by NetworkManager.service), and no other unit is
supposed to pull the passive unit in.


1) would it be true to consider that an active target always pulls in 
some units, which is why it is ultimately called "active" : it "does" 
(pull) something ? So an active unit would provide something to the 
consumers and would be on the "requirement" side of dependency type.


2) would it be true to consider that a passive target never pulls in any
unit, which is why it is ultimately called "passive" as it just consists
of some provider "publishing" a checkpoint other units can order
themselves upon ? This would be on the "ordering" side of dependency type ?


3) regarding the local-fs and remote-fs targets : I'm not really sure if
any fits in either passive or active units.


I see that local-fs.target can be pulled in by sysinit.target and that 
dracut-pre-pivot.target can pull in remote-fs.target but to me those 2 
targets would rather fit the passive unit category ?
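
For what it's worth, adding WantedBy/RequiredBy to the properties I already
query makes the reverse pulls visible directly, e.g.:

# systemctl show -p WantedBy,RequiredBy,Wants,Requires,Before,After local-fs.target remote-fs.target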


Thanks for your help

--

Thomas HUMMEL


Re: [systemd-devel] systemd-hostnamed/hostnamectl and transient hostname change

2020-04-27 Thread Thomas HUMMEL

On 4/27/20 11:51 AM, Mantas Mikulėnas wrote:

Hello, thanks for your answer.

On Mon, Apr 20, 2020 at 6:17 PM Thomas HUMMEL wrote:


1. why does the transient hostname change while I stated --static only
while running hostnamectl ?

2. why does the change take some time to appear on dbus ?




Hostnamed does not implement receiving hostname change notifications 
from the kernel, so it always reports you the same hostname that it has 
seen on startup.


That was my understanding as well.

You're only seeing changes because hostnamed /exits when idle/ -- the 
next time you're actually talking to a brand new instance of hostnamed, 
which has seen the new hostname.


But this does not explain why the transient hostname is changed as I 
only changed the static one, does it ? Unless this new instance sets it 
from the static one when it starts ? I mean something has to call 
sethostname(2) to set the transient to the new static one, right ?





3. what is supposed to happen the other way around ? i.e. if I change
the transient hostname (hostname(1) command or writing to
/proc/sys/kernel/hostname) : is dbus/hostnamectl supposed to see the
change ? When and how ?


In theory, hostnamed should be waiting for poll() 
<https://git.kernel.org/linus/f1ecf06854a66ee663f4d4cf029c78cd62a15e04> 
on /proc/sys/kernel/hostname. If hostnamed is running when the change 
happens, it should receive an event through epoll and re-read the 
hostname. (If hostnamed is *not* running, it will simply re-read the 
hostname on startup and no special events are needed.)


Ok. I read about this poll() mechanism to catch a transient hostname 
change indeed.


In practice, hostnamed does not do that (although several other systemd 
daemons do). It was probably forgotten to implement.


Ok.



D-Bus doesn't care about hostnames; it's just a message bus.


Yes I didn't mean that, sorry.

Thanks for your help

--
Thomas HUMMEL




Re: [systemd-devel] systemd-hostnamed/hostnamectl and transient hostname change

2020-04-27 Thread Thomas HUMMEL

On 4/27/20 11:16 AM, Thomas HUMMEL wrote:

Actually, I noticed this is true when NetworkManager's hostname-mode 
setting is set to 'none'. If set to 'dhcp', the transient hostname 
aligns instantly to the new static one.


Sorry I may be wrong on this one as I can not reproduce it.

Thanks

--
TH


Re: [systemd-devel] systemd-hostnamed/hostnamectl and transient hostname change

2020-04-27 Thread Thomas HUMMEL

On 4/20/20 5:10 PM, Thomas HUMMEL wrote:

At this point, the transient hostname is unchanged (which is what I'd 
expect) as seen with hostnamectl above or directly asking dbus:


[root@maestro-1000 ~]# dbus-send --print-reply --system 
--dest=org.freedesktop.hostname1 /org/freedesktop/hostname1 
org.freedesktop.DBus.Properties.Get string:'org.freedesktop.hostname1' 
string:'Hostname'
method return time=1587394568.997699 sender=:1.80 -> destination=:1.85 
serial=9 reply_serial=2

    variant   string "maestro-1000"

but a moment later, it get sets to 'toto':


Actually, I noticed this is true when NetworkManager's hostname-mode 
setting is set to 'none'. If set to 'dhcp', the transient hostname 
aligns instantly to the new static one.


Thanks

--
TH




[systemd-devel] systemd-hostnamed/hostnamectl and transient hostname change

2020-04-20 Thread Thomas HUMMEL

Hello,

I hope I'm not on the wrong list to ask this. On CentOS 8.1 
x86_64/systemd-239-18.el8_1.4.x86_64 I'm experiencing the following:


Starting with no static hostname and a transient hostname set by 
dracut/initrd (this is a PXE booted stateless osimage and 
'hostname-mode' is set to 'none' in NetworkManager.conf) :


[root@maestro-1000 ~]# cat /proc/sys/kernel/hostname
maestro-1000

[root@maestro-1000 ~]# dbus-send --print-reply --system 
--dest=org.freedesktop.hostname1 /org/freedesktop/hostname1 
org.freedesktop.DBus.Properties.Get string:'org.freedesktop.hostname1' 
string:'Hostname'
method return time=1587394362.680900 sender=:1.73 -> destination=:1.72 
serial=3 reply_serial=2

   variant   string "maestro-1000"

[root@maestro-1000 ~]# dbus-send --print-reply --system 
--dest=org.freedesktop.hostname1 /org/freedesktop/hostname1 
org.freedesktop.DBus.Properties.Get string:'org.freedesktop.hostname1' 
string:'StaticHostname'
method return time=1587394371.482559 sender=:1.73 -> destination=:1.74 
serial=5 reply_serial=2

   variant   string ""

[root@maestro-1000 ~]# cat /etc/hostname
cat: /etc/hostname: No such file or directory

[root@maestro-1000 ~]# hostnamectl status
   Static hostname: n/a
Transient hostname: maestro-1000
 Icon name: computer-server
   Chassis: server
Machine ID: d1996816500e4d5ca3fe8b20af23cec1
   Boot ID: 4cc11d011b35465fa9a2ce2737719e78
  Operating System: CentOS Linux 8 (Core)
   CPE OS Name: cpe:/o:centos:centos:8
Kernel: Linux 4.18.0-147.5.1.el8_1.x86_64
  Architecture: x86-64

I set up a static hostname with the hostnamectl(1) command using only the
--static flag:


# hostnamectl --static set-hostname toto
[root@maestro-1000 ~]# cat /proc/sys/kernel/hostname
toto

As expected a /etc/hostname file is created:

[root@maestro-1000 ~]# cat /etc/hostname
toto

and the change is visible on dbus or via hostnamectl
[root@maestro-1000 ~]# dbus-send --print-reply --system 
--dest=org.freedesktop.hostname1 /org/freedesktop/hostname1 
org.freedesktop.DBus.Properties.Get string:'org.freedesktop.hostname1' 
string:'StaticHostname'
method return time=1587394555.550333 sender=:1.80 -> destination=:1.83 
serial=8 reply_serial=2

   variant   string "toto"

[root@maestro-1000 ~]# hostnamectl status
   Static hostname: toto
Transient hostname: maestro-1000
 Icon name: computer-server
   Chassis: server
Machine ID: d1996816500e4d5ca3fe8b20af23cec1
   Boot ID: 4cc11d011b35465fa9a2ce2737719e78
  Operating System: CentOS Linux 8 (Core)
   CPE OS Name: cpe:/o:centos:centos:8
Kernel: Linux 4.18.0-147.5.1.el8_1.x86_64
  Architecture: x86-64

At this point, the transient hostname is unchanged (which is what I'd 
expect) as seen with hostnamectl above or directly asking dbus:


[root@maestro-1000 ~]# dbus-send --print-reply --system 
--dest=org.freedesktop.hostname1 /org/freedesktop/hostname1 
org.freedesktop.DBus.Properties.Get string:'org.freedesktop.hostname1' 
string:'Hostname'
method return time=1587394568.997699 sender=:1.80 -> destination=:1.85 
serial=9 reply_serial=2

   variant   string "maestro-1000"

but a moment later, it gets set to 'toto':

[root@maestro-1000 ~]# dbus-send --print-reply --system 
--dest=org.freedesktop.hostname1 /org/freedesktop/hostname1 
org.freedesktop.DBus.Properties.Get string:'org.freedesktop.hostname1' 
string:'Hostname'
method return time=1587394788.123612 sender=:1.99 -> destination=:1.98 
serial=3 reply_serial=2

   variant   string "toto"

So my questions are

1. why does the transient hostname change while I stated --static only 
while running hostnamectl ?


2. why does the change take some time to appear on dbus ?

3. what is supposed to happen the other way around ? i.e. if I change 
the transient hostname (hostname(1) command or writing to 
/proc/sys/kernel/hostname) : is dbus/hostnamectl supposed to see the 
change ? When and how ?


Note : as this is a standard CentOS 8.1 install, I of course have the
systemd-hostnamed service "enabled" (actually static), but I did not run
it myself, and NetworkManager (hostname-mode=none) does not manage the
transient hostname. It only uses this service as a proxy to get the
'original' hostname.


Thanks for your help

--
Thomas HUMMEL




