Re: [systemd-devel] Antw: Re: Mutually exclusive (timer-triggered) services

2019-10-17 Thread Reindl Harald


Am 17.10.19 um 08:02 schrieb Ulrich Windl:
>> Or did I miss something and the second flock somehow obtains the inode
>> number of the old lock?
> 
> I guess any new process arriving late cannot acquire the (same) lock once the
> first process has removed the name when the crowd has finished.
> But as the remove itself locks, it means that the remove will happen only
> after the lock has been released (requests being served FCFS: First Come,
> First Served).
> If there's one process late, a new lock will be created, and likewise any new
> arrivals will use that lock, until the lock is free.
> 
> Doesn't look like a 100% solution to me, but it might work.

it's about *multiple* services and hence might not work

it's the same as when you "tail -f" a logfile and in the meantime
logrotate runs: tail keeps following the old file, which gets no further
changes, its disk space is not freed because of the open handle, but the
rest of the world doesn't care and uses the new one
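The effect described above can be reproduced in a few lines of shell (a minimal sketch with throwaway mktemp paths; `stat -c %i`, which prints the inode number, is GNU coreutils):

```shell
#!/bin/sh
log=$(mktemp)
exec 3< "$log"              # stands in for tail -f keeping the old log open
old=$(stat -c %i "$log")    # inode number of the original file
mv "$log" "$log.1"          # what logrotate does with the old log
: > "$log"                  # logrotate then creates a fresh file under the name
new=$(stat -c %i "$log")
echo "old inode: $old, new inode: $new"   # different: fd 3 still sees the old one
exec 3<&-                   # only now can the old inode's space be freed
rm -f "$log" "$log.1"
```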
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] Antw: Re: Mutually exclusive (timer-triggered) services

2019-10-16 Thread Ulrich Windl
>>> Alexander Koch  wrote on 16.10.2019 at 16:14 in
message <9fb9c1a157e92baef1107ed3b66aa...@alexanderkoch.net>:
> * flock leaves the lock file behind so you'd need some type of
> cleanup in case you really want the jobs to be trace-free. This is
> not as trivial as it might seem, e.g. you cannot do it from the
> service units themselves in `ExecStartPost=` or similar.
 An
 
 ExecStartPost=-/usr/bin/flock -F /path/to/lock.file \
   /usr/bin/rm /path/to/lock.file
 
 should solve this issue.
>>> 
>>> So you can remove a file other processes are blocked lock-waiting on?
>>> Didn't
>>> expect this to work, thanks for the hint.
>> 
>> It's a common misconception (especially when grown up with Windows)
>> that "rm" removes a file: Actually it "unlinks" the name from the
>> inode. As long as the inode is opened by the kernel, the file (as seen
>> from the kernel's perspective) still exists.
> 
> I haven't really grown up with Windows ;P

OK!

> 
> Assuming `flock` (the binary) uses the flock() syscall it still needs to go
> through VFS to get a file descriptor. So if a second process calls `flock`
> after the first one has already unlinked the name from the inode, the lock
> file will not be found and thus be re-created, breaking the locking
> mechanism.
> 
> Or did I miss something and the second flock somehow obtains the inode
> number of the old lock?

I guess any new process arriving late cannot acquire the (same) lock once the
first process has removed the name when the crowd has finished.
But as the remove itself locks, it means that the remove will happen only
after the lock has been released (requests being served FCFS: First Come, First
Served).
If there's one process late, a new lock will be created, and likewise any new
arrivals will use that lock, until the lock is free.

Doesn't look like a 100% solution to me, but it might work. 
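The point that the remove itself queues behind the current lock holder can be checked with a quick sketch (throwaway mktemp path; `flock` here is the util-linux binary):

```shell
#!/bin/sh
lock=$(mktemp)
flock "$lock" sleep 2 &     # a holder keeps the lock for about two seconds
sleep 1                     # give the background holder time to win the lock
flock "$lock" rm -f "$lock" # the cleanup is served in turn: it blocks,
                            # then unlinks the name once the holder is done
wait
echo "lock file removed: $( [ -e "$lock" ] && echo no || echo yes )"
```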

> 
> 
> Best regards,
> 
> Alex




Re: [systemd-devel] Antw: Re: Mutually exclusive (timer-triggered) services

2019-10-16 Thread Alexander Koch

* flock leaves the lock file behind so you'd need some type of
cleanup in case you really want the jobs to be trace-free. This is
not as trivial as it might seem, e.g. you cannot do it from the
service units themselves in `ExecStartPost=` or similar.

An

ExecStartPost=-/usr/bin/flock -F /path/to/lock.file \
  /usr/bin/rm /path/to/lock.file

should solve this issue.


So you can remove a file other processes are blocked lock-waiting on?
Didn't expect this to work, thanks for the hint.


It's a common misconception (especially when grown up with Windows)
that "rm" removes a file: Actually it "unlinks" the name from the
inode. As long as the inode is opened by the kernel, the file (as seen
from the kernel's perspective) still exists.


I haven't really grown up with Windows ;P

Assuming `flock` (the binary) uses the flock() syscall it still needs to go
through VFS to get a file descriptor. So if a second process calls `flock`
after the first one has already unlinked the name from the inode, the lock
file will not be found and thus be re-created, breaking the locking
mechanism.

Or did I miss something and the second flock somehow obtains the inode
number of the old lock?
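A minimal sketch of exactly this race, assuming util-linux `flock` (which also accepts an already-open file descriptor number) and a throwaway mktemp path: after the name is unlinked, a latecomer re-creates the file and locks a different inode, so both "holders" succeed at once:

```shell
#!/bin/sh
lock=$(mktemp)
exec 3>"$lock"      # process 1 opens the lock file...
flock -n 3          # ...and locks the original inode via fd 3
rm "$lock"          # the name is unlinked; fd 3 still holds its lock
exec 4>"$lock"      # a late process 2 re-creates the file: a NEW inode
if flock -n 4; then broken=yes; else broken=no; fi
echo "second lock acquired while first is held: $broken"
```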


Best regards,

Alex


[systemd-devel] Antw: Re: Mutually exclusive (timer-triggered) services

2019-10-16 Thread Ulrich Windl
>>> Alexander Koch  wrote on 15.10.2019 at 21:48 in
message <39b05185c3bdf699f7f00c23e0a4a...@alexanderkoch.net>:
>>> * flock leaves the lock file behind so you'd need some type of
>>> cleanup in case you really want the jobs to be trace-free. This is
>>> not as trivial as it might seem, e.g. you cannot do it from the
>>> service units themselves in `ExecStartPost=` or similar.
>> An
>> 
>> ExecStartPost=-/usr/bin/flock -F /path/to/lock.file \
>>   /usr/bin/rm /path/to/lock.file
>> 
>> should solve this issue.
> 
> So you can remove a file other processes are blocked lock-waiting on? 
> Didn't
> expect this to work, thanks for the hint.

It's a common misconception (especially when grown up with Windows) that "rm" 
removes a file: Actually it "unlinks" the name from the inode. As long as the 
inode is opened by the kernel, the file (as seen from the kernel's perspective) 
still exists.
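In shell terms (throwaway file via mktemp):

```shell
#!/bin/sh
f=$(mktemp)
exec 3<> "$f"               # open the file and keep fd 3 around
rm "$f"                     # "rm" only unlinks the name...
[ ! -e "$f" ] && echo "name is gone"
echo "still writable" >&3   # ...the inode stays alive behind the open fd
exec 3>&-                   # closing the last fd finally frees the inode
```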

> 
>> If your units are actually dependent on each other, then maybe you
>> should think about your approach in general. But to be able to help you
>> with that we need more information about the actual dependencies of the
>> applications started by your units and at which interval they shall
>> run.
> 
> Okay I guess I should come up with the actual scenario, here we go:
> 
> On my Arch Linux workstation I've got three .timer triggered .service units
> that do package manager housekeeping (I don't know if you're familiar with
> Arch/Pacman so I'll annotate their purposes):
> 
> 1) Synchronize package database (equivalent of `apt-get update` on Debian)
> 
>  [Timer]
>  OnCalendar=8-17/2:00
>  Persistent=true
> 
>  [Service]
>  ExecStart=/usr/bin/pacman -Syq
> 
> 2) Update file database (equivalent of `apt-file update`)
> 
>  [Timer]
>  OnCalendar=weekly
>  Persistent=true
> 
>  [Service]
>  ExecStart=/usr/bin/pacman -Fyq
> 
> 3) Purge old packages from cache (something like `apt-get autoclean`)
> 
>  [Timer]
>  OnCalendar=daily
>  Persistent=true
> 
>  [Service]
>  ExecStart=/bin/sh -c 'paccache -r -k 2; paccache -r -k 0 -u'
> 
> As you can see, I'd like to have different execution intervals for all of
> these tasks so I'd like to keep them as separate services (which also seems
> the intuitive approach to me).
> 
> I must admit that I haven't tried, but I'm pretty sure that at least 1 and 2
> do lock the ALPM database so if you try to issue one of these Pacman calls
> while the other is running it will fail, complaining about a lock file being
> present.
> 
> My current workaround for this is using `RandomizedDelaySec=15m` in
> conjunction with `AccuracySec=1` in the .timer units to spread the triggers.
> 
> While this does work I'm really curious about the 'proper' way of modeling
> this. Is it such an academic problem to have the need of ensuring that two
> timers (or services) don't fire simultaneously? I had thought this to be
> really simple with such an elaborate service manager as systemd, with all
> its graph theory power and the like.
> 
> (If I were a heretic I'd say 'We can do DNS, DHCP and NTP with systemd
> without any third-party software but we need additional utilities to ensure
> that two things don't happen at the same time??' ;) )
> 
> I think there are plenty of other scenarios, e.g. ideally I'd like my backup
> service not to kick in while btrfs-scrub@home.service is running... or maybe
> it's just me seeing this need ;)
> 
> 
> Best regards,
> 
> Alex
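For completeness, the flock-based serialization the thread is circling around would look roughly like this as a drop-in for each of the three housekeeping services; the lock path, drop-in file name, and unit name are made up for illustration:

```ini
# Hypothetical /etc/systemd/system/pacman-sync.service.d/lock.conf
[Service]
ExecStart=
ExecStart=/usr/bin/flock /run/lock/pacman-housekeeping.lock /usr/bin/pacman -Syq
```

With all three services wrapped around the same lock file, their timers can fire whenever they like and the commands simply run one after another. The stale-inode caveats discussed above still apply if the lock file is ever removed.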



