Re: [systemd-devel] Antw: [EXT] Re: [systemd-devel] systemctl log verbosity

2021-08-18 Thread Ulrich Windl
>>> Michael Chapman wrote on 18.08.2021 at 08:38 in message:
> On Wed, 18 Aug 2021, Ulrich Windl wrote:
>> >>> Michael Chapman wrote on 17.08.2021 at 02:52 in
>> message <885331af-bb7-41d0-e8-26c92023b...@very.puzzling.org>:
>> > On Tue, 17 Aug 2021, Dave Close wrote:
>> >> I'm trying to run "systemctl show" in a cron script. It works but I get
>> >> a huge number of extra lines in my log for each run. Why? Can this be
>> >> suppressed? I don't want to overfill the log.
>> >> 
>> >> There is nothing in the man page (that I noticed) indicating that "show"
>> >> causes anything to be logged. But here's an example of what I see.
>> >> 
>> >> >Aug 16 16:10:01 svcs systemd[1]: Created slice User Slice of UID 0.
>> >> >Aug 16 16:10:01 svcs systemd[1]: Starting User Runtime Directory /run/user/0...
>> >> >Aug 16 16:10:01 svcs systemd[1]: Finished User Runtime Directory /run/user/0.
>> >> >Aug 16 16:10:01 svcs systemd[1]: Starting User Manager for UID 0...
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Queued start job for default target Main User Target.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Created slice User Application Slice.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Condition check resulted in Mark boot as successful after the user session has run 2 minutes being skipped.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Started Daily Cleanup of User's Temporary Directories.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Reached target Paths.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Reached target Timers.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Starting D-Bus User Message Bus Socket.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Condition check resulted in PipeWire PulseAudio being skipped.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Listening on Multimedia System.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Starting Create User's Volatile Files and Directories...
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Finished Create User's Volatile Files and Directories.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Listening on D-Bus User Message Bus Socket.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Reached target Sockets.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Reached target Basic System.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Reached target Main User Target.
>> >> >Aug 16 16:10:01 svcs systemd[80491]: Startup finished in 151ms.
>> >> >Aug 16 16:10:01 svcs systemd[1]: Started User Manager for UID 0.
>> >> >Aug 16 16:10:01 svcs systemd[1]: Started Session 72 of User root.
>> >> >Aug 16 16:10:01 svcs root[80504]: ## logger output from cron script ##
>> >> >Aug 16 16:10:01 svcs systemd[1]: session-72.scope: Deactivated successfully.
>> >> 
>> >> I see these additional 23 lines (plus the one-line script output) every
>> >> time the script runs. That seems excessively verbose to me.
>> >> 
>> >> The system is Fedora 34 x86_64.
>> > 
>> > Cron jobs are run with pam_systemd, so they are run within a logind 
>> > session. If there are no other sessions for root at that time, root's own 
>> > systemd manager is started when the Cron job launches, and is stopped when 
>> > the Cron job terminates. All of these log messages are related to this.
>> > 
>> > You may instead want to make root a lingering user:
>> > 
>> > loginctl enable-linger root
>> > 
>> > This setting is persistent. You can use disable-linger at a later time to 
>> > turn it off if necessary.
>> > 
>> > With root configured as a lingering user, its systemd manager remains 
>> > running all the time.
>> 
>> After reading the manual page I wonder: Is that setting persistent, i.e. 
>> where is that setting stored?
> 
> Yes, it is persistent.
> 
> Lingering users are just represented as files under 
> /var/lib/systemd/linger/ (though this is an implementation detail, of 
> course).

Of course, but the manual page of systemd-logind.service could state that the
setting is saved persistently "somewhere".
Currently it does not even mention "linger", even though the binary contains
the string "Failed to open /var/lib/systemd/linger/: %m".

Regards,
Ulrich

Re: [systemd-devel] Antw: [EXT] Re: [systemd-devel] systemctl log verbosity

2021-08-18 Thread Michael Chapman
On Wed, 18 Aug 2021, Ulrich Windl wrote:
> >>> Michael Chapman wrote on 18.08.2021 at 08:38 in message:
> [...]
> >> 
> >> After reading the manual page I wonder: Is that setting persistent, i.e. 
> >> where is that setting stored?
> > 
> > Yes, it is persistent.
> > 
> > Lingering users are just represented as files under 
> > /var/lib/systemd/linger/ (though this is an implementation detail, of 
> > course).
> 
> Of course, but the manual page of systemd-logind.service could state that the
> setting is saved persistently "somewhere".
> Currently it does not even mention "linger", even though the binary contains
> the string "Failed to open /var/lib/systemd/linger/: %m".

Well, the loginctl documentation says:

    If enabled for a specific user, a user manager is spawned for
    the user at boot and kept around after logouts.

Which kind of implies that there must be some persistent state somewhere 
-- how else would it do this "at boot"? The actual nature of this state 
isn't really that important.

Re: [systemd-devel] Upgraded multiple systems to systemd 249.3 and all had eth1 not started / configured

2021-08-18 Thread Arian van Putten
https://github.com/systemd/systemd-stable/pull/111 has not landed in a v249
point release yet. v249.3 was tagged 12 days ago, and that fix was only
merged 8 days ago.
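
One rough way to check, against a clone of systemd-stable (<merge-commit> is a
placeholder for whatever commit that pull request was merged as):

    git clone https://github.com/systemd/systemd-stable.git
    cd systemd-stable
    # list the tags (point releases) that already contain the fix
    git tag --contains <merge-commit>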

On Wed, Aug 18, 2021 at 8:56 AM Amish  wrote:

> Hello
>
> Further to my previous email:
>
> I see that there is already an *extremely similar issue* reported on July
> 12, 2021 and it has been fixed.
>
> https://github.com/systemd/systemd/issues/20203
>
> But I do not know if this fix exists in systemd v249.3 (Arch Linux).
>
> If it exists, that means the fix is breaking my system.
> And if it does not, I can expect it to fix my issue.
>
> Regards,
>
> Amish.
> On 18/08/21 11:42 am, Amish wrote:
>
> Hello,
>
> Thank you for your reply.
>
> I can understand that there can be a race.
>
> *But when I check the logs, there is no race happening.*
>
> *Let us see and analyze the logs.*
>
> Stage 1:
> System boots, and kernel assigns eth0, eth1 and eth2 as interface names.
>
> Aug 18 09:17:13 kk kernel: e1000e :00:1f.6 eth0: (PCI
> Express:2.5GT/s:Width x1) e0:d5:5e:8d:7f:2f
> Aug 18 09:17:13 kk kernel: e1000e :00:1f.6 eth0: Intel(R) PRO/1000
> Network Connection
> Aug 18 09:17:13 kk kernel: e1000e :00:1f.6 eth0: MAC: 13, PHY: 12, PBA
> No: FF-0FF
> Aug 18 09:17:13 kk kernel: 8139too :04:00.0 eth1: RealTek RTL8139 at
> 0x0e8fc9bb, 00:e0:4d:05:ee:a2, IRQ 19
> Aug 18 09:17:13 kk kernel: r8169 :02:00.0 eth2: RTL8168e/8111e,
> 50:3e:aa:05:2b:ca, XID 2c2, IRQ 129
> Aug 18 09:17:13 kk kernel: r8169 :02:00.0 eth2: jumbo features
> [frames: 9194 bytes, tx checksumming: ko]
>
> Stage 2:
> Now udev rules are triggered and the interfaces are renamed to tmpeth0,
> tmpeth2 and tmpeth1.
>
> Aug 18 09:17:13 kk kernel: 8139too :04:00.0 tmpeth2: renamed from eth1
> Aug 18 09:17:13 kk kernel: e1000e :00:1f.6 tmpeth0: renamed from eth0
> Aug 18 09:17:13 kk kernel: r8169 :02:00.0 tmpeth1: renamed from eth2
>
> Stage 3:
> Now my script is called and it renames interfaces to eth0, eth2 and eth1.
>
> Aug 18 09:17:13 kk kernel: e1000e :00:1f.6 eth0: renamed from tmpeth0
> Aug 18 09:17:14 kk kernel: r8169 :02:00.0 eth1: renamed from tmpeth1
> Aug 18 09:17:14 kk kernel: 8139too :04:00.0 eth2: renamed from tmpeth2
>
> Effectively, the original interfaces eth1 and eth2 are swapped, while eth0
> remains eth0.
>
> All of this happened before systemd-networkd started; the interface renaming
> was over by 09:17:14.
>
> Stage 4:
> Now systemd-networkd starts, 2 seconds after all interfaces have been
> assigned their final names.
>
> Aug 18 09:17:16 kk systemd[1]: Starting Network Configuration...
> Aug 18 09:17:17 kk systemd-networkd[426]: lo: Link UP
> Aug 18 09:17:17 kk systemd-networkd[426]: lo: Gained carrier
> Aug 18 09:17:17 kk systemd-networkd[426]: Enumeration completed
> Aug 18 09:17:17 kk systemd[1]: Started Network Configuration.
> Aug 18 09:17:17 kk systemd-networkd[426]: eth2: Interface name change
> detected, renamed to eth1.
> Aug 18 09:17:17 kk systemd-networkd[426]: Could not process link message:
> File exists
> Aug 18 09:17:17 kk systemd-networkd[426]: eth1: Failed
> Aug 18 09:17:17 kk systemd-networkd[426]: eth1: Interface name change
> detected, renamed to eth2.
> Aug 18 09:17:17 kk systemd-networkd[426]: eth1: Interface name change
> detected, renamed to tmpeth2.
> Aug 18 09:17:17 kk systemd-networkd[426]: eth0: Interface name change
> detected, renamed to tmpeth0.
> Aug 18 09:17:17 kk systemd-networkd[426]: eth2: Interface name change
> detected, renamed to tmpeth1.
> Aug 18 09:17:17 kk systemd-networkd[426]: tmpeth0: Interface name change
> detected, renamed to eth0.
> Aug 18 09:17:17 kk systemd-networkd[426]: tmpeth1: Interface name change
> detected, renamed to eth1.
> Aug 18 09:17:17 kk systemd-networkd[426]: tmpeth2: Interface name change
> detected, renamed to eth2.
> Aug 18 09:17:17 kk systemd-networkd[426]: eth1: Link UP
> Aug 18 09:17:17 kk systemd-networkd[426]: eth0: Link UP
> Aug 18 09:17:20 kk systemd-networkd[426]: eth0: Gained carrier
>
> This is the point where eth0 and eth1 are up and configured by
> systemd-networkd, but eth2 is down and not configured.
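>
> (As a side note, networkctl shows which .network file, if any, got applied
> to each link; a quick way to confirm what matched would be:
>
> networkctl list
> networkctl status eth2
>
> where the "Network File:" line in the status output is the interesting part.)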
>
>
>
> *None of the .network configuration files match by interface name. They
> all match just by MAC address.*
>
> # sample .network file
> [Match]
> MACAddress=e0:d5:5e:8d:7f:2f
> Type=ether
>
> [Network]
> IgnoreCarrierLoss=yes
> LinkLocalAddressing=no
> IPv6AcceptRA=no
> ConfigureWithoutCarrier=true
> Address=192.168.25.2/24
>
> The above error message "eth1: Failed" was not showing up with earlier
> versions of systemd.
>
> So the recent version of systemd-networkd is doing something different, and
> this is where something goes wrong.
>
> Stage 5: (my workaround for this issue)
> I wrote a new service file which restarts systemd-networkd after waiting
> for 10 seconds.
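>
> A minimal sketch of that unit (the file name, the 10 second delay and the
> exact commands here are only illustrative):
>
> # /etc/systemd/system/networkd-restart-workaround.service
> [Unit]
> Description=Restart systemd-networkd after interface renaming has settled
> After=systemd-networkd.service
>
> [Service]
> Type=oneshot
> ExecStartPre=/usr/bin/sleep 10
> ExecStart=/usr/bin/systemctl restart systemd-networkd.service
>
> [Install]
> WantedBy=multi-user.target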
>
> Aug 18 09:17:27 kk systemd[1]: Stopping Network Configuration...
> Aug 18 09:17:27 kk systemd[1]: systemd-networkd.service: Deactivated
> successfully.
> Aug 18 09:17

Re: [systemd-devel] [hostnamed] Why the service will automatically exit after 30 seconds

2021-08-18 Thread Michael Biebl
On Wed, 18 Aug 2021 at 03:35, 李成刚 wrote:
>
> How to configure this service so that it will not automatically exit

You can't. The exit-on-idle timeout of 30s is hard-coded.


[systemd-devel] Re: [hostnamed] Why the service will automatically exit after 30 seconds

2021-08-18 Thread 李成刚
Hi Colin Guthrie,

When the systemd-hostnamed service is started through policykit, the password
window will automatically exit, which seems to be a terrible experience.

Re: [systemd-devel] [hostnamed] Why the service will automatically exit after 30 seconds

2021-08-18 Thread Cristian Rodríguez
On Tue, Aug 17, 2021 at 9:35 PM 李成刚  wrote:
>
> How to configure this service so that it will not automatically exit


So what are you trying to accomplish with this? Why do you need yet
another service running when it is totally idle?

> When the systemd-hostnamed service is started through policykit, the password 
> window will automatically exit, which seems to be a terrible experience

DAFUQ? If polkit is really talking to hostnamed then it has to deal with
it either remaining alive or connecting again ...


Re: [systemd-devel] [hostnamed] Why the service will automatically exit after 30 seconds

2021-08-18 Thread Michael Chapman
On Thu, 19 Aug 2021, Cristian Rodríguez wrote:
> On Tue, Aug 17, 2021 at 9:35 PM 李成刚  wrote:
> >
> > How to configure this service so that it will not automatically exit
> 
> 
> So what are you trying to accomplish with this? Why do you need yet
> another service running when it is totally idle?
> 
> > When the systemd-hostnamed service is started through policykit, the 
> > password window will automatically exit, which seems to be a terrible 
> > experience
> 
> DAFUQ? If polkit is really talking to hostnamed then it has to deal with
> it either remaining alive or connecting again ...

It's the other way around. hostnamed uses polkit to authorise requests 
sent to it over D-Bus.

If hostnamed is exiting while in the middle of a polkit conversation, then 
yes I would say that is a bug in hostnamed. It really ought not to do 
that.

Re: [systemd-devel] [hostnamed] Why the service will automatically exit after 30 seconds

2021-08-18 Thread Michael Chapman
On Thu, 19 Aug 2021, Michael Chapman wrote:
> On Thu, 19 Aug 2021, Cristian Rodríguez wrote:
> > On Tue, Aug 17, 2021 at 9:35 PM 李成刚  wrote:
> > >
> > > How to configure this service so that it will not automatically exit
> > 
> > 
> > So what are you trying to accomplish with this? Why do you need yet
> > another service running when it is totally idle?
> > 
> > > When the systemd-hostnamed service is started through policykit, the 
> > > password window will automatically exit, which seems to be a terrible 
> > > experience
> > 
> > DAFUQ? If polkit is really talking to hostnamed then it has to deal with
> > it either remaining alive or connecting again ...
> 
> It's the other way around. hostnamed uses polkit to authorise requests 
> sent to it over D-Bus.
> 
> If hostnamed is exiting while in the middle of a polkit conversation, then 
> yes I would say that is a bug in hostnamed. It really ought not to do 
> that.

I've looked at this more closely now, and it's a bit more complicated than 
I would have liked.

While hostnamed's own idle timeout can easily be disabled while a 
polkit conversation is in progress, that won't necessarily help anybody 
using hostnamectl. hostnamectl uses sd-bus's default method call timeout, 
which is 25 seconds.

Perhaps this should be increased for method calls that are likely to 
result in using polkit? 25 seconds might be too short for some people to 
enter their password.
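
As a client-side stopgap, the same D-Bus call can be made with a longer
timeout via busctl; a sketch (the 120 second timeout and the hostname are
just examples):

    busctl call --timeout=120 --allow-interactive-authorization=yes \
        org.freedesktop.hostname1 /org/freedesktop/hostname1 \
        org.freedesktop.hostname1 SetStaticHostname sb myhost true

The trailing "true" is SetStaticHostname's interactive flag, which allows
polkit to ask for authentication.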

Is it possible for sd_bus_call to detect that the recipient of a call has 
dropped off the bus and is never going to return a response? If that were 
possible we could possibly rely on that rather than an explicit timeout. I 
think the answer to this question might be "no" though...