On 08/06/2015 16:00, Avery Payne wrote:
This is where I've resisted using sockets.  Not because they are bad
- they are not.  I've resisted because they are difficult to make
100% portable between environments.  Let me explain.

 I have trouble understanding several points of your message.

 - You've resisted using sockets. What does that mean ? A daemon
will, or will not, use a socket; as an integrator, you don't have
much say on the matter. You can decide where the socket will be,
what will open it, how the daemon will get it if it doesn't open
it itself, and other similar details; but you cannot, say, write
a run script for an X11 server if you don't want any Unix domain
sockets on the machine. :) So, can you clarify your resistance ?

 - What tools are available. What does that have to do with
daemons using sockets ? UCSPI tools will, or will not, be available,
but daemons will do as they please. If your scripts rely on UCSPI
tools to ease socket management, then add a package dependency -
your scripts need UCSPI tools installed, end of story. Dependencies
are not a bad thing per se, they just need to be controlled and
justified.
 "UCSPI sockets" does not make sense. You'll have Unix sockets and
INET sockets, and maybe one or two esoteric things such as netlink.
UCSPI is a framework that helps manipulate sockets with command-line
utilities. Use the tools or don't use them, but I don't understand
what your actual problem is.
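
 Purely as an illustration, here is a minimal sketch of a run script
that delegates socket management to a UCSPI tool (s6-ipcserver from
the s6 package); the socket path and the "mydaemond" program are
invented for the example:

  #!/bin/sh
  # Sketch only. s6-ipcserver creates and listens on the Unix socket
  # at the given path, then spawns one instance of the program per
  # client connection, with the connection on its stdin/stdout.
  # The only cost is a package dependency on the s6 UCSPI tools.
  exec s6-ipcserver /run/mydaemond/socket mydaemond

 Nothing here is mandatory; it just shows what "relying on UCSPI tools
to ease socket management" amounts to in practice.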


 So where do
the sockets live?  /var/run? /run?  /var/sockets?
/insert-my-own-flavor-here?

 How about the service directory of the daemon using the socket ?
That's what a service directory is for.
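
 To make that concrete, here is a sketch (the daemon name and its
option are invented) of a ./run script whose socket simply lives in
the service directory:

  #!/bin/sh
  # Sketch only. Supervisors execute ./run with the service directory
  # as the working directory, so the socket ends up right next to the
  # run script - no /run vs /var/run vs /var/sockets debate needed.
  exec mydaemond --socket ./mydaemond.sock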


* Make socket activation an admin-controlled feature that is disabled
by default.  If you want socket activation, you ask for it first.  The
admin gets control, I get more headaches, and almost everyone can be
happy.

 If all this fuss is about socket activation, then you can simply
forget it altogether. Jonathan was simply mentioning socket activation
as an alternative to real dependency management, as in "that's what
some people do". I don't think he implied it was a good idea. Only
Lennart says it's a good idea. Or people who blindly repeat what
Lennart says.


As a side note, I'm beginning to suspect that the desire for "true
parallel startup" is more of a "mirage caused by desire" than something
achieved by design.  What I'm saying is that it may be more of an ideal
we aspire to than a design that was thought through.  If you have
sequenced dependencies, can you truly gain a lot of time by
attempting parallel startup?  Is the gain for the effort really that
important?  Can we even speed things up when fsck is deemed mandatory
by the admin for a given situation?  Questions like these make me
wonder if this is really a feasible feature at all.

 It's feasible. Anopa does it. s6-rc does it too - there are still a lot
of things missing so I can't release it now, but the core functionality
is done and it works.

 At least, if by "parallel startup" you mean "start things as soon as
they can be started without risk, without needless waiting times".
Because if you want "parallel startup" as in "start all the things
and pray it works", you can already do it today, with any supervision
framework (services will restart until their dependencies are met)
or with systemd (socket activation: everything will be fine unless
something important crashes, in which case we will deny there is a
problem).
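
 To illustrate the "restart until dependencies are met" behaviour with
a sketch (the dependency path and daemon name are invented), a run
script can simply test for its dependency and bail out, letting the
supervisor try again:

  #!/bin/sh
  # Sketch only. If the dependency is not ready yet, exit nonzero;
  # the supervisor will restart this service shortly and the check
  # will be retried until it passes.
  if ! test -S /run/some-dependency/socket; then
    sleep 1   # avoid a tight restart loop
    exit 1
  fi
  exec mydaemond --foreground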

 What I don't think is feasible is having an easy way of accomplishing
parallel startup without a real service management tool specifically
thought out and designed to handle dependencies properly, and
especially to mix one-shot services and long-run services. Which is
exactly what anopa and s6-rc do.

--
 Laurent
