Re: S6 Queries

2021-08-11 Thread Arjun D R
Thanks Laurent for the detailed explanations. We did a boot-up speed
comparison between S6 and systemd. S6 boots slightly faster than systemd:
the actual result is 4-4.5% faster, but we were expecting something
closer to 20%.
Ours is a fairly complex setup with more than 140 services (including a
lot of long-run services and a lot of dependencies). The main advantage
systemd has is that it starts many critical processes very quickly, since
they have no dependency on logging services; we collect the logs from
journalctl and store them in log files. In S6, by contrast, the start-up
of the critical services is delayed a bit, since they depend on logging
services, which in turn depend on other services (responsible for backing
up the previous logs).

Arjun

On Mon, Aug 2, 2021 at 1:57 PM Laurent Bercot 
wrote:

> >1. In systemd, the services are grouped as targets and each target depends
> >on another target as well. They start as targets. [ex: Reached
> >local-fs.target, Reached network.target, Reached UI target,...]. Is there
> >any way in S6 to start the init system based on bundles?
>
>   Yes, that is what bundles are for. In your stage 2 boot script
> (typically /etc/s6-linux-init/current/scripts/rc.init), you should
> invoke s6-rc as:
>    s6-rc change top
> if "top" is the name of your default bundle, i.e. the bundle that
> contains all the services you want to start at boot time. You can
> basically convert the contents of your systemd targets directly into
> contents of your s6-rc bundles; and you decide which one will be
> brought up at boot time via the s6-rc invocation in your stage 2
> init script.
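>
>   For illustration, here is a minimal sketch of a "top" bundle in the
> s6-rc source directory (the service names are hypothetical): a bundle
> is just a directory with a "type" file and a "contents" file.
>
>    mkdir -p /etc/s6-rc/source/top
>    echo bundle > /etc/s6-rc/source/top/type
>    printf '%s\n' local-fs network ui > /etc/s6-rc/source/top/contents
>
> Compile the database with s6-rc-compile (and switch a live system to it
> with s6-rc-update), and the s6-rc invocation above will bring up
> everything listed in the bundle, along with its dependencies.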
>
>
> >2. Are there any ways to have loosely coupled dependencies? In systemd,
> >we have After=. The After= option makes the current service start after
> >the mentioned service, and the current service will start anyway even
> >if the service mentioned in After= fails to start. Do we have such a
> >loosely coupled dependency facility in S6?
>
>   Not at the moment, no. The next version of s6-rc will allow more types
> of dependencies, with clearer semantics than the systemd ones (After=,
> Requires= and Wants= are not orthogonal, which is unintuitive and causes
> misuse); but it is still in early development.
>
>   For now, s6-rc only provides one type of dependency, which is the
> equivalent of Requires+After. I realize this is not flexible enough
> for a lot of real use cases, which is one of the reasons why another
> version is in development. :)
>
>
> >3. Is there any tool available in S6 to measure the time taken by each
> >service to start? We can manually measure it from the logs, but we are
> >still looking for a tool that can provide accurate data.
>
>   Honestly, if you use the -v2 option to your s6-rc invocation, as in
>    s6-rc -v2 change top
> and you ask the catch-all logger to timestamp its lines (which should
> be the default, but you can change the timestamp style via the -t
> option to s6-linux-init-maker)
> then the difference of timestamps between the lines:
>    s6-rc: info: service foo: starting
> and
>    s6-rc: info: service foo successfully started
> will give you a pretty accurate measurement of the time it took service
> foo to start. These lines are written by s6-rc exactly as the
> "starting" or "completed" event occurs, and they are timestamped by
> s6-log immediately; the code path is the same for both events, so the
> delays cancel out, and the only inaccuracy left is randomness due to
> scheduling, which should not be statistically significant.
>
>   At the moment, the s6-rc log is the easiest place to get this data
> from. You could probably hack something with the "time" shell command
> and s6-svwait, such as
>    s6-svwait -u /run/service/foo ; time s6-svwait -U /run/service/foo
> which would give you the time it took for foo to become ready; but
> I doubt it would be any more accurate than using the timestamps in the
> s6-rc logs, and it's really not convenient to set up.
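>
>   If you go the timestamp route, a rough sketch (assuming the default
> TAI64N timestamps and the default catch-all logger location used by
> s6-linux-init) to read the relevant lines in human-readable form:
>
>    grep 'service foo' /run/uncaught-logs/current | s6-tai64nlocal
>
> The difference between the two converted timestamps is the startup
> time of foo.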
>
>
> >4. Does the S6 init system provide better boot-up performance compared
> >to systemd? One of our main motives is to attain better boot-up
> >performance. Is our expectation correct?
>
>   The boot up performance should be more or less *similar* to systemd.
> The code paths used by the s6 ecosystem are much shorter than the ones
> used by systemd, so _in theory_ you should get faster boots with s6.
>
>   However, systemd cheats, by starting services before their dependencies
> are ready. For instance, it will start services before any kind of
> logging is ready, which is pretty dangerous for several reasons. As a
> part of its "socket activation" thing, it will also claim readiness
> on some sockets before even attempting to run the actual serving
> processes (which may fail, in which case readiness was a lie.)
> Because of that, when everything goes well, systemd cuts corners that
> s6 does not, and may gain some advantage.
>
>   So all in all, I expect that depending on your system, the difference

S6 Queries

2021-08-01 Thread Arjun D R
Hi Team,

We are trying to migrate from the systemd init system to S6. We have a
few queries; please help us with them.

1. In systemd, the services are grouped as targets and each target depends
on another target as well. They start as targets. [ex: Reached
local-fs.target, Reached network.target, Reached UI target,...]. Is there
any way in S6 to start the init system based on bundles?

2. Are there any ways to have loosely coupled dependencies? In systemd, we
have After=. The After= option makes the current service start after the
mentioned service, and the current service will start anyway even if the
service mentioned in After= fails to start. Do we have such a loosely
coupled dependency facility in S6?

3. Is there any tool available in S6 to measure the time taken by each
service to start? We can manually measure it from the logs, but we are
still looking for a tool that can provide accurate data.

4. Does the S6 init system provide better boot-up performance compared to
systemd? One of our main motives is to attain better boot-up performance.
Is our expectation correct?

Thanks in advance,
Arjun


Query on S6 system shutdown

2021-07-29 Thread Arjun D R
Hi Team,

I am facing an issue when shutting down the system. Whenever I reboot the
system, "s6-rc -v2 -bda change" is triggered and stops all the services,
but one service does not respond and hangs. This blocks the reboot.

ps -aef | grep s6-rc
root  2770  2707  0 08:24 ?      00:00:00 s6-ipcserverd -1 --
s6-ipcserver-access -v0 -E -l0 -i data/rules -- s6-sudod -t 3 --
/libexec/s6-rc-oneshot-run -l ../.. --
root  7880  2615  1 08:28 ?      00:00:00 s6-rc -v2 -bda change
root  8603  7880  0 08:28 ?      00:00:00 s6-svlisten1 -D --
/run/s6-rc/servicedirs/critical.service s6-svc -d --
/run/s6-rc/servicedirs/critical.service
root  8755  2620  0 08:28 ttyS0  00:00:00 grep s6-rc

I believe the finish script is not being called by s6-svc. When I run it
manually, the finish script runs, kills the process, and the graceful
shutdown happens as expected.

What may be the cause of the finish script of the critical service not
being triggered?

cat /run/service/critical.service/run.user
#!/bin/execlineb -P
foreground { echo "starting critical service" }
fdmove -c 2 1
envfile -i  /etc/default/critical-env
s6-envdir /etc/s6-rc/source/critical.service/env
<.. many importas commands..>
/usr/bin/critical -b -c /etc/critical/config.json

cat /run/service/critical.service/finish.user
#!/bin/execlineb -P
foreground { echo "Stopping critical service" }
fdmove -c 2 1
backtick -n pid_value { pidof critical }
importas -i pid_value pid_value
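# Signal the critical process's whole process group: the leading dash
# before ${pid_value} makes kill target the group, and -9 is SIGKILL.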
/bin/kill -9 -- -${pid_value}


Please help.

Thanks,
Arjun


Re: Query on s6-log and s6-supervise

2021-06-09 Thread Arjun D R
Thanks Laurent and Colin for the suggestions. I will try to build a fully
static s6 with a musl toolchain. Thanks for the detailed analysis
once again.
--
Arjun

On Wed, Jun 9, 2021 at 5:18 PM Laurent Bercot 
wrote:

> >I have checked the Private_Dirty memory in "smaps" of an s6-supervise
> >process and I don't see any single mapping consuming more than 8 kB.
> >Just posting it here for reference.
>
>   Indeed, each mapping is small, but you have *a lot* of them. The
> sum of all the Private_Dirty in your mappings, that should be shown
> in smaps_rollup, is 96 kB. 24 pages! That is _huge_.
>
>   In this list, the mappings that are really used by s6-supervise (i.e.
> the incompressible amount of unshareable memory) are the following:
>
>   - the /bin/s6-supervise section: this is static data, s6-supervise
> needs a little, but it should not take more than one page.
>
>   - the [heap] section: this is dynamically allocated memory, and for
> s6-supervise it should not be bigger than 4 kB. s6-supervise does not
> allocate dynamic memory itself, the presence of a heap section is due
> to opendir() which needs dynamic buffers; the size of the buffer is
> determined by the libc, and anything more than one page is wasteful.
>
> ( - anonymous mappings are really memory dynamically allocated for
> internal  libc purposes; they do not show up in [heap] because they're
> not obtained via malloc(). No function used by s6-supervise should
> ever need those; any anonymous mapping you see is libc shenanigans
> and counts as overhead. )
>
>   - the [stack] section: this is difficult to control because the
> amount of stack a process uses depends a lot on the compiler, the
> compilation flags, etc. When built with -O2, s6-supervise should not
> use more than 2-3 pages of stack. This includes a one-page buffer to
> read from notification-fd; I can probably reduce the size of this
> buffer and make sure the amount of needed stack pages never goes
> above 2.
>
>   So in total, the incompressible amount of private mappings is 4 to 5
> pages (16 to 20 kB). All the other mappings are libc overhead.
>
>   - the libpthread-2.31.so mapping uses 8 kB
>   - the librt-2.31.so mapping uses 8 kB
>   - the libc-2.31.so mapping uses 16 kB
>   - the libskarnet.so mapping uses 12 kB
>   - ld.so, the dynamic linker itself, uses 16 kB
>   - there are 16 kB of anonymous mappings
>
>   This is some serious waste; unfortunately, it's pretty much to be
> expected from glibc, which suffers from decades of misdesign and
> tunnel vision especially where dynamic linking is concerned. We are,
> unfortunately, experiencing the consequences of technical debt.
>
>   Linking against the static version of skalibs (--enable-allstatic)
> should save you at least 12 kB (and probably 16) per instance of
> s6-supervise. You should have noticed the improvement; your amount of
> private memory should have dropped by at least 1.5MB when you switched
> to --enable-allstatic.
>   But I understand it is not enough.
>
>   Unfortunately, once you have removed the libskarnet.so mappings,
> it's basically down to the libc, and to achieve further improvements
> I have no other suggestions than to change libcs.
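>
>   (For reference, with the skarnet.org build system and a musl
> toolchain installed, a plausible way to do that is to configure s6 and
> its dependencies with something like
>
>    ./configure --enable-allstatic --enable-static-libc
>
> so that the binaries are linked statically against skalibs and the
> libc, removing the shared library overhead entirely.)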
>
> >If possible, can you please share reference smaps and ps_mem data for
> >s6-supervise? That would really help.
>
>   I don't use ps_mem, but here are the details of a s6-supervise process
> on the skarnet.org server. s6 is linked statically against the musl
> libc, which means:
>   - the text segments are bigger (drawback of static linking)
>   - there are fewer mappings (advantage of static linking, but even when
> you're linking dynamically against musl it maps as little as it can)
>   - the mappings have little libc overhead (advantage of musl)
>
> # cat smaps_rollup
>
> 0040-7ffd53096000 ---p  00:00 0  [rollup]
> Rss:  64 kB
> Pss:  36 kB
> Pss_Anon: 20 kB
> Pss_File: 16 kB
> Pss_Shmem: 0 kB
> Shared_Clean: 40 kB
> Shared_Dirty:  0 kB
> Private_Clean: 8 kB
> Private_Dirty:16 kB
> Referenced:   64 kB
> Anonymous:20 kB
> LazyFree:  0 kB
> AnonHugePages: 0 kB
> ShmemPmdMapped:0 kB
> FilePmdMapped: 0 kB
> Shared_Hugetlb:0 kB
> Private_Hugetlb:   0 kB
> Swap:  0 kB
> SwapPss:   0 kB
> Locked:0 kB
>
>   You can see 40kB of shared, 16kB of Private_Dirty, and 8kB of
> Private_Clean - apparently there's one Private_Clean page of static
> data and one of stack; I have no idea what this corresponds to in the
> code, I will need to investigate and see if it can be trimmed down.
>
> # grep -E '[[:space:]](-|r)(-|w)(-|x)(-|p)[[:space:]]|^Private_Dirty:'
> smaps
>
> 0040-00409000 r-xp  ca:00 659178  /command/s6-supervise
> Private_Dirty: 0 kB
> 00609000-0060b000 rw-p 9000 ca:00 659178  /command/s6-supervise
> Private_Dirty: 4 kB
> 02462000-0

Re: Query on s6-log and s6-supervise

2021-06-08 Thread Arjun D R
Dewayne,
Thanks for the details. We already have such an implementation (multiple
producers with one consumer), but our number of s6-log instances is still
high. Many of our services require direct logger services. We could reduce
the direct logger services by creating a funnel and using regexes to
separate the logs, but that is indeed a risky and complicated process. I
just want to confirm the memory usage of the s6-log and s6-supervise
processes.

Thanks,
Arjun

On Wed, Jun 9, 2021 at 9:11 AM Dewayne Geraghty <
dewa...@heuristicsystems.com.au> wrote:

> Apologies, I'd implied that we have multiple s6-supervise processes
> running and their children pipe to one file which is read by one s6-log
> process.
>
> You can achieve this outcome by using s6-rc, where one consumer can
> receive multiple inputs from producers.
>
> There is a special (but not unique) case with a program such as apache,
> which has explicit log files (defined in apache's config file) to record
> web-page accesses and error logs on a per-server basis.  Because all the
> supervised apache instances can write to one error logfile, I instructed
> apache to write to a pipe.  Multiple supervised apache instances use the
> one pipe (aka funnel), which is read by one s6-log.  This way the number
> of (s6-log) processes is reduced.  I could do the same with the access
> logs and use the regex function of s6-log, but I tend to simplicity.
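>
> A minimal s6-rc source sketch of such a funnel (the service names here
> are only illustrative): each producer names its consumer, and the single
> consumer lists all of its producers.
>
>    /etc/s6-rc/source/web-a/producer-for    contains: web-log
>    /etc/s6-rc/source/web-b/producer-for    contains: web-log
>    /etc/s6-rc/source/web-log/type          contains: longrun
>    /etc/s6-rc/source/web-log/consumer-for  contains: web-a and web-b,
>                                            one name per line
>    /etc/s6-rc/source/web-log/run           is something like:
>        #!/bin/execlineb -P
>        s6-log -b T /var/log/web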
>


Re: Query on s6-log and s6-supervise

2021-06-08 Thread Arjun D R
Thanks Laurent for the brief detail. That really helps.

I have checked the Private_Dirty memory in "smaps" of an s6-supervise
process and I don't see any single mapping consuming more than 8 kB. Just
posting it here for reference.

grep Private_Dirty /proc/991/smaps
Private_Dirty: 0 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 8 kB
Private_Dirty: 0 kB
Private_Dirty: 0 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 0 kB
Private_Dirty: 0 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 0 kB
Private_Dirty: 0 kB
Private_Dirty: 8 kB
Private_Dirty: 8 kB
Private_Dirty: 8 kB
Private_Dirty: 0 kB
Private_Dirty: 0 kB
Private_Dirty: 8 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 0 kB
Private_Dirty: 8 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 0 kB
Private_Dirty: 0 kB
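
(To get the total, these values can simply be summed, e.g.

   awk '/^Private_Dirty:/ { s += $2 } END { print s " kB" }' /proc/991/smaps

which is roughly what smaps_rollup reports as Private_Dirty.)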

cat /proc/991/smaps
0001-00014000 r-xp  07:00 174/bin/s6-supervise

00023000-00024000 r--p 3000 07:00 174/bin/s6-supervise

00024000-00025000 rw-p 4000 07:00 174/bin/s6-supervise

00025000-00046000 rw-p  00:00 0  [heap]

b6e1c000-b6e2d000 r-xp  07:00 3652   /lib/libpthread-2.31.so

b6e2d000-b6e3c000 ---p 00011000 07:00 3652   /lib/libpthread-2.31.so

b6e3c000-b6e3d000 r--p 0001 07:00 3652   /lib/libpthread-2.31.so

b6e3d000-b6e3e000 rw-p 00011000 07:00 3652   /lib/libpthread-2.31.so

b6e3e000-b6e4 rw-p  00:00 0

b6e4-b6e45000 r-xp  07:00 3656   /lib/librt-2.31.so

b6e45000-b6e54000 ---p 5000 07:00 3656   /lib/librt-2.31.so

b6e54000-b6e55000 r--p 4000 07:00 3656   /lib/librt-2.31.so

b6e55000-b6e56000 rw-p 5000 07:00 3656   /lib/librt-2.31.so

b6e56000-b6f19000 r-xp  07:00 3613   /lib/libc-2.31.so

b6f19000-b6f28000 ---p 000c3000 07:00 3613   /lib/libc-2.31.so

b6f28000-b6f2a000 r--p 000c2000 07:00 3613   /lib/libc-2.31.so

b6f2a000-b6f2c000 rw-p 000c4000 07:00 3613   /lib/libc-2.31.so

b6f2c000-b6f2e000 rw-p  00:00 0

b6f2e000-b6f4d000 r-xp  07:00 3665   /lib/libskarnet.so.2.9.2.1

b6f4d000-b6f5c000 ---p 0001f000 07:00 3665   /lib/libskarnet.so.2.9.2.1

b6f5c000-b6f5e000 r--p 0001e000 07:00 3665   /lib/libskarnet.so.2.9.2.1

b6f5e000-b6f5f000 rw-p 0002 07:00 3665   /lib/libskarnet.so.2.9.2.1

b6f5f000-b6f6b000 rw-p  00:00 0

b6f6b000-b6f81000 r-xp  07:00 3605   /lib/ld-2.31.so

b6f87000-b6f89000 rw-p  00:00 0

b6f91000-b6f92000 r--p 00016000 07:00 3605   /lib/ld-2.31.so

b6f92000-b6f93000 rw-p 00017000 07:00 3605   /lib/ld-2.31.so

beaf8000-beb19000 rw-p  00:00 0  [stack]
Size:132 kB
Rss:   4 kB
Pss:   4 kB
Shared_Clean:  0 kB
Shared_Dirty:  0 kB
Private_Clean: 0 kB
Private_Dirty: 4 kB
Referenced:4 kB
Anonymous: 4 kB
AnonHugePages: 0 kB
Swap:  0 kB
KernelPageSize:4 kB
MMUPageSize:   4 kB
Locked:0 kB
VmFlags: rd wr mr mw me gd ac
becd5000-becd6000 r-xp  00:00 0  [sigpage]

-1000 r-xp  00:00 0  [vectors]

Sorry I am not able to post the whole data considering the mail size.

On my Linux system,
ps -axw -o pid,vsz,rss,time,comm | grep s6
1   1732  1128 00:00:06 s6-svscan
  900   1736   452 00:00:00 s6-supervise
  901   1736   480 00:00:00 s6-supervise
  902   1736   444 00:00:00 s6-supervise
  903   1736   444 00:00:00 s6-supervise
  907   1744   496 00:00:00 s6-log
.

And I don't think ps_mem is lying; I compared it with smem as well.
Data from ps_mem:

 Private  +   Shared  =  RAM used   Program

  4.8 MiB + 786.0 KiB =   5.5 MiB   s6-log (46)
 12.2 MiB +   2.1 MiB =  14.3 MiB   s6-supervise (129)

smem:

  PID User  Command                        Swap    USS    PSS    RSS
 1020 root  s6-supervise wpa_supplicant       0     96     98    996
 2001 root  s6-log -F wpa_supplicant.lo        0    104    106   1128

Almost the same amounts of PSS/RSS are used by the other s6-supervise and
s6-log processes.

I have tried the "--enable-allstatic" flag and unfortunately I don't see
any improvement. If you were referring to shared memory, then yes, we are
good there: it uses 2.1 MiB for 129 instances, but the private memory is
around 12.2 MiB. I am not sure whether this is a normal value or not.
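
(For 129 instances, 12.2 MiB of private memory works out to roughly
12.2 * 1024 / 129 ≈ 97 kB per s6-supervise process.)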

If possible, can you please share reference smaps and ps_mem data for
s6-supervise? That would really help.

Dewayne, even though we pipe it to a file, we will still have an
s6-supervise process for the log service. Maybe I didn't understand it
well. Sorry about it. Ple

Query on s6-log and s6-supervise

2021-06-08 Thread Arjun D R
Hi Team,



I have a few queries and would like to hear from you. Please help.



   1. Why do we need separate supervisors for producer and consumer
   long-run services? Is it possible to have one supervisor for both the
   producer and the consumer, because the consumer service does not need
   to run when the producer is down anyway?  I understand that an s6
   supervisor is meant to monitor only one service, but why not monitor a
   couple of services when it is logically valid, if I am not wrong?
   2. Is it possible to have a single supervisor for a bundle of services?
   Like, one supervisor for a bundle (consisting of a few services)?
   3. Generally, how many instances of s6-supervise can run? We are
   running into a problem where we have 129 instances of s6-supervise,
   which leads to higher memory consumption. We are migrating from systemd
   to the s6 init system because it is lightweight, but we have a lot of
   s6-log and s6-supervise instances, which results in higher memory usage
   compared to systemd.  Is it fine to have this many s6-supervise
   instances?

   ps_mem data: 5.5 MiB  s6-log (46),  14.3 MiB  s6-supervise (129)



Thanks,
Arjun