[systemd-devel] limiting NFS activity

2022-10-17 Thread Weatherby,Gerard
We have a requirement to limit/throttle I/O activity to an NFS mount for a
particular system slice. I'm trying to do this with cgroups v2.

Does IODeviceLatencyTargetSec work for NFS mounts?

Does cgroups v2 support net_prio? Can I set it in a 
/etc/systemd/system/*.slice.d/*.conf file?
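For concreteness, a minimal sketch of the kind of slice drop-in being asked
about, using a hypothetical system-nmr.slice. Note that
IODeviceLatencyTargetSec= and the IO*BandwidthMax= settings are keyed to block
device nodes; whether they have any effect on NFS traffic is exactly the open
question here.

  # /etc/systemd/system/system-nmr.slice.d/10-io.conf  (hypothetical path)
  [Slice]
  # enable the cgroup v2 "io" controller for this slice
  IOAccounting=yes
  # both directives take a block device node plus a value, e.g.:
  IODeviceLatencyTargetSec=/dev/sda 100ms
  IOReadBandwidthMax=/dev/sda 50M

After creating or editing the drop-in, systemctl daemon-reload makes systemd
pick it up.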


Re: [systemd-devel] systemd-devel Digest, Vol 135, Issue 24

2021-07-27 Thread Weatherby,Gerard
Making the service reliable will require updating the clients of the service to 
retry the connection for some period of time while the service restarts.
--

Message: 4
Date: Tue, 27 Jul 2021 12:12:43 +0300
From: Mantas Mikulėnas
To: Francis Moreau 
Cc: SystemD Devel 
Subject: Re: [systemd-devel] How to restart my socket-activated service safely?

On Tue, Jul 27, 2021 at 10:10 AM Francis Moreau  wrote:

Hello,

During my application update, I want to restart my service, which is
socket-activated, and be sure that no request sent to my service is
missed. I also want to restart the socket so that systemd uses the
latest version of the socket unit file.

If I restart the socket when the service is still running then I get
an error message: "rotor.socket: Socket service rotor.service already
active, refusing."

If I stop the service first and restart the socket then there's a
short time frame where requests can be lost.

The old socket has to be unbound before a new one can be put in its
place. Trying to keep the service alive (holding the old listener fd)
would just result in systemd not being able to bind a new socket with
the same address... (And even if that was possible, the old service
wouldn't be able to handle requests arriving on the new socket
anyway.)

So whenever you restart a socket, there will *always* be a short time
frame where the old socket is closed but the new one is not yet
bound/listening. But as soon as the new one is listening, it'll start
queuing the requests even if the service isn't yet running (since it's
a socket-activated service after all) and the number of lost requests
should be minimal.

--
Mantas Mikulėnas
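
For illustration, a sketch of the update sequence this implies, using the
rotor.socket / rotor.service names from the question (a sketch, not an
official recipe):

  # install the new unit files and binaries, then:
  systemctl daemon-reload
  # stop the old service instance; rotor.socket stays active, so new
  # connections queue in the listen backlog instead of being refused
  systemctl stop rotor.service
  # restart the socket so the updated rotor.socket definition takes effect;
  # as noted above, there is an unavoidable brief window while the old
  # listener is closed and the new one is not yet bound
  systemctl restart rotor.socket
  # the next incoming connection socket-activates the new rotor.service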


[systemd-devel] automount behavior with multiple IPs

2021-01-24 Thread Weatherby,Gerard
When systemd-automount queries an NFS server with multiple IPs, does it try all 
of them (the default behavior of the similar autofs package), use just one, or 
do something else?

--
Gerard Weatherby | Application Architect
NMRbox | Department of Molecular Biology and Biophysics | UConn Health
263 Farmington Avenue, Farmington, CT 06030-6406
uchc.edu
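
For context, a sketch of the kind of systemd mount/automount pair the question
refers to, with hypothetical server and mountpoint names (the unit file names
must match the mountpoint, e.g. /data/nmr -> data-nmr.mount):

  # /etc/systemd/system/data-nmr.mount  (hypothetical)
  [Unit]
  Description=Example NFS share

  [Mount]
  # nfsserver.example.org may resolve to several addresses; which of them
  # get tried at mount time is the question above
  What=nfsserver.example.org:/export/nmr
  Where=/data/nmr
  Type=nfs
  Options=vers=4.2,_netdev

  # /etc/systemd/system/data-nmr.automount  (hypothetical)
  [Unit]
  Description=Automount for /data/nmr

  [Automount]
  Where=/data/nmr
  TimeoutIdleSec=600

  [Install]
  WantedBy=multi-user.target

Only the .automount unit is enabled; the .mount unit is started on demand when
the path is first accessed.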