Re: restarting s6-svscan (as pid 1)

2023-11-18 Thread Laurent Bercot



> I believe (have not yet tested) that I can relatively simply create the
> maintenance system on the fly by copying a subset of the root fs into a
> ramdisk, so it doesn't take any space until it's needed.


 The problem with that approach is that your maintenance system now
depends on your production system: after a rootfs change, you don't
have the guarantee that your maintenance system will be identical to
the previous one. Granted, the subset you want in your maintenance fs
is likely to be reasonably stable, but you never know; imagine your
system is linked against glibc, you copy libc.so.6, but one day one
of the coreutils grows a dependency on libpthread.so, and voilà, your
maintenance fs doesn't work anymore.

 You probably think the risk is small, and you're probably correct.
I just have a preference for simple solutions, and I believe that a
small, stable, statically-linked maintenance squashfs would be worth
the space it takes on your drive. :)
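
 For illustration, assuming a statically linked busybox build and
made-up paths (none of this is prescribed by s6), such an image could
be assembled roughly like this:

    # assemble a minimal, fully static maintenance tree
    mkdir -p maint/bin maint/dev maint/proc maint/sys maint/oldroot
    cp /path/to/busybox-static maint/bin/busybox   # assumption: a static busybox
    for tool in sh init mount umount chroot; do
        ln -sf busybox "maint/bin/$tool"
    done
    # pack it into a squashfs image that lives permanently on the drive
    mksquashfs maint maintenance.squashfs -comp xz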

--
 Laurent



Re: restarting s6-svscan (as pid 1)

2023-11-18 Thread Daniel Barlow
"Laurent Bercot"  writes:

>   That said, I'm not sure your goal is as valuable as you think it is.
> If you have a running system, by definition, it's running. It has booted,
> and you have access to its rootfs and all the tools on it; there is
> nothing to gain by doing something fragile such as exec'ing into
> another pid 1 and pivot_rooting. Unless I've missed something, the
> amount of space you'll need for your maintenance system will be the
> exact same whether you switch to it from your production system or from
> cold booting.

I would agree with you generally, but in this case the running system
has a readonly squashfs filesystem which can't be updated except by
flashing a complete new filesystem image on top of it.  (I have tried
doing this on the running system just to see what would happen, but the
result was about as terminal as you might imagine it would be.) I
believe (have not yet tested) that I can relatively simply create the
maintenance system on the fly by copying a subset of the root fs into a
ramdisk, so it doesn't take any space until it's needed.
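
A rough sketch of that on-the-fly copy (the mount point and tool list
are only illustrative; ldd pulls in whatever shared libraries each
binary needs at the moment of the copy):

    # build the maintenance tree in a tmpfs so it costs nothing until used
    mkdir -p /run/maint
    mount -t tmpfs -o size=32m tmpfs /run/maint
    mkdir -p /run/maint/bin /run/maint/dev /run/maint/proc \
             /run/maint/sys /run/maint/oldroot
    for bin in /bin/sh /sbin/init /bin/mount /bin/umount /usr/sbin/chroot; do
        cp "$bin" /run/maint/bin/
        # copy each binary's current shared-library dependencies, keeping their paths
        ldd "$bin" |
        awk '$3 ~ /^\// { print $3 } $1 ~ /^\// { print $1 }' |
        sort -u | while read -r lib; do
            mkdir -p "/run/maint$(dirname "$lib")"
            cp -L "$lib" "/run/maint$lib"
        done
    done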


-dan


Re: restarting s6-svscan (as pid 1)

2023-11-17 Thread Laurent Bercot

> This may be a weird question, maybe: is there any way to persuade
> s6-svscan (as pid 1) to restart _without_ doing a full hardware reboot?
> The use case I have in mind is: starting from a regular running system,
> I want to create a small "recovery" system in a ramdisk and switch to it
> with pivot_root so that the real root filesystem can be unmounted and
> manipulated. (This is instead of "just" doing a full reboot into an
> initramfs: the device has limited storage space and keeping a couple
> of MB around permanently just for "maintenance mode" doesn't seem like a
> great use of it)


 As Adam said, this is exactly what .s6-svscan/finish is for. If your
setup somehow requires s6-svscan to exec into something else before
you shut the machine down, .s6-svscan/finish is the hook you have to
make it work.
 Don't let the naming stop you. The way to get strong supervision with
BSD init is to list your services in /etc/ttys. You can't get any
cheesier than that.
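
 A minimal sketch of that hook (the marker path and the shutdown script
name are assumptions, not something s6 mandates):

    #!/bin/sh
    # .s6-svscan/finish: s6-svscan execs into this when told to exit,
    # so whatever it execs into in turn becomes the new pid 1
    if [ -x /run/maint/bin/init ]; then
        # a maintenance ramfs was prepared beforehand: hand pid 1 over to it
        exec /run/maint/bin/init
    fi
    # normal case: carry on with whatever your stage 3 / shutdown script is
    exec /etc/rc.shutdown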

 That said, I'm not sure your goal is as valuable as you think it is.
If you have a running system, by definition, it's running. It has booted,
and you have access to its rootfs and all the tools on it; there is
nothing to gain by doing something fragile such as exec'ing into
another pid 1 and pivot_rooting. Unless I've missed something, the
amount of space you'll need for your maintenance system will be the
exact same whether you switch to it from your production system or from
cold booting.

--
 Laurent



Re: restarting s6-svscan (as pid 1)

2023-11-17 Thread Steve Litt
d...@telent.net said on Fri, 17 Nov 2023 22:20:32 +


>I was thinking I could use the .s6-svscan/finish script to check for the
>existence of the "maintenance mode" ramfs, remount it onto /
>and then `exec /bin/init` as its last action, though it seems a bit
>cheesy to have a file called `finish` that actually sometimes performs
>`single-user-mode` instead. Would that work?

I don't know if it would work. Why not try it? Also, you might need to
make changes to the stage 3 script.

As far as "cheesy", the only disadvantage is self-documentation, and
self-documentation could be preserved by adding a properly named script
and calling or exec'ing it.

SteveT

Steve Litt 

Autumn 2023 featured book: Rapid Learning for the 21st Century
http://www.troubleshooters.com/rl21


Re: restarting s6-svscan (as pid 1)

2023-11-17 Thread adam
Quoting d...@telent.net (2023-11-17 14:20:32)
> I was thinking I could use the .s6-svscan/finish script to check for the
> existence of the "maintenance mode" ramfs, remount it onto /
> and then `exec /bin/init` as its last action, though it seems a bit
> cheesy to have a file called `finish` that actually sometimes performs
> `single-user-mode` instead. Would that work?

I think your use case is *precisely* what .s6-svscan/finish is for -- it's how you
get s6-svscan to exec() into some other process.  That other process can be an
instance of itself.
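
For instance, a finish script along these lines (the scandir path is
whatever the setup actually uses) hands pid 1 straight back to
s6-svscan:

    #!/bin/sh
    # .s6-svscan/finish: re-exec into s6-svscan on the same scandir,
    # so pid 1 is replaced by a fresh copy of itself without a reboot
    exec /bin/s6-svscan /run/service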

The fact that systemd has a special self-exec() mechanism always seemed weird
and bizarre.  Just use the general mechanism.

> Perhaps a more general use case for re-execing pid 1 would be after OS
> upgrades as an alternative to rebooting

Sure, if you upgrade your libc or your compiler and you want s6-svscan to
use the new libc/compiler, this is an easy way to do it.

> though other than wanting to preserve uptime for bragging rights I can't see
> any real advantage...

You can pass arbitrarily large chunks of data to a re-exec()'ed pid 1.  It's not
always easy to do that across a reboot, since you have to pass the data to the
bootloader and back.  Also "the data" could be open file descriptors.
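
Illustrative only (the state file is invented); the point is that
descriptors opened before the exec are inherited unless they are marked
close-on-exec:

    #!/bin/sh
    # keep some state open on fd 7; it survives the exec below
    exec 7< /run/pid1-state
    exec /bin/init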

  - a


restarting s6-svscan (as pid 1)

2023-11-17 Thread dan


This may be a weird question, maybe: is there any way to persuade
s6-svscan (as pid 1) to restart _without_ doing a full hardware reboot?
The use case I have in mind is: starting from a regular running system,
I want to create a small "recovery" system in a ramdisk and switch to it
with pivot_root so that the real root filesystem can be unmounted and
manipulated. (This is instead of "just" doing a full reboot into an
initramfs: the device has limited storage space and keeping a couple
of MB around permanently just for "maintenance mode" doesn't seem like a
great use of it)

I was thinking I could use the .s6-svscan/finish script to check for the
existence of the "maintenance mode" ramfs, remount it onto /
and then `exec /bin/init` as its last action, though it seems a bit
cheesy to have a file called `finish` that actually sometimes performs
`single-user-mode` instead. Would that work?
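
A sketch of the switch itself, roughly following the example in
pivot_root(8) (paths are invented; the maintenance tmpfs is assumed to
be populated already, with a dev/console node and an empty oldroot
directory):

    #!/bin/sh
    # running as pid 1 (e.g. from .s6-svscan/finish) once the supervision
    # tree is down and nothing else holds the old root busy
    cd /run/maint                  # the populated maintenance tmpfs
    /sbin/pivot_root . oldroot     # the old root is now visible at ./oldroot
    # chroot and init both come from the maintenance tree (e.g. busybox);
    # once the maintenance init is running, the old rootfs can be
    # unmounted from /oldroot and reflashed
    exec chroot . /bin/init < dev/console > dev/console 2>&1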

Perhaps a more general use case for re-execing pid 1 would be after OS
upgrades as an alternative to rebooting - though other than wanting to
preserve uptime for bragging rights I can't see any real advantage...

Any thoughts?


Daniel