On 07/11/17 01:02 +0300, Andrei Borzenkov wrote:
> 06.11.2017 22:38, Valentin Vidic wrote:
>> On Fri, Oct 13, 2017 at 02:07:33PM +0100, Adam Spiers wrote:
>>> I think it depends on exactly what you mean by "synchronous" here.
>>> You can start up a daemon, or a process which is responsible for
>>> forking into a daemon, but how can you know for sure that a service
>>> is really up and running?  Even if the daemon ran for a few seconds,
>>> it might die soon after.  At what point do you draw the line and say
>>> "OK, start-up is now over, any failures after this are failures of a
>>> running service"?  In that light, "systemctl start" could return at
>>> a number of points in the startup process, but there's probably
>>> always an element of asynchronicity in there.  Interested to hear
>>> other opinions on this.
>>
>> systemd.service(5) describes a started (running) service depending
>> on the service type:
>>
>> simple  - systemd will immediately proceed starting follow-up units
>>           (after exec)
>> forking - systemd will proceed with starting follow-up units as soon
>>           as the parent process exits
>> oneshot - process has to exit before systemd starts follow-up units
>> dbus    - systemd will proceed with starting follow-up units after
>>           the D-Bus bus name has been acquired
>> notify  - systemd will proceed with starting follow-up units after
>>           this notification message has been sent
>>
>> Obviously notify is best here
>
> forking, dbus and notify all allow the daemon to signal to systemd
> that it is ready to service requests. Unfortunately ...
>
>> but not all daemons implement sending
>> sd_notify(READY=1) when they are ready to serve clients.
>>
>
> ... as well as not all daemons properly daemonize themselves or
> register on D-Bus only after they are ready.
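To make the notify contract from the quoted systemd.service(5)
excerpt concrete: the readiness indication amounts to a single
libsystemd call, issued only once the daemon can actually serve.
A minimal sketch, assuming Type=notify in the unit file;
init_listening_socket() is a made-up stand-in for whatever "fully
initialized" means for a particular daemon:

  /* build: cc notify-sketch.c $(pkg-config --cflags --libs libsystemd) */
  #include <systemd/sd-daemon.h>
  #include <unistd.h>

  /* stand-in for the daemon's real initialization */
  static int init_listening_socket(void) { return 0; }

  int main(void)
  {
      if (init_listening_socket() < 0)
          return 1;

      /* only now is "ready" truthful; with the other Type= values,
       * systemd would have proceeded with follow-up units earlier */
      sd_notify(0, "READY=1");

      for (;;)
          pause();  /* serve requests here */
  }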
I share the sentiment about the situation. It probably arises
primarily from daemon authors never having been pushed to indicate
full ability to provide service, precisely because 1/ that is not the
primary objective of init systems -- the only thing daemons
traditionally had to comply with to get started (as opposed to real
service-oriented supervisors, which is also the realm of HA, right?),
and 2/ even where such an indication would have been desirable, no
formalized interface (nor the system plumbing around it) ever became
widespread for that purpose.

On the other hand, sd_notify seems to reconcile that in my eyes (+1
to Valentin's qualifying it as the best of the above options, hence
the sketch above), since it imposes no other effect -- casting extra
interpretation on, say, a fork event makes readiness a possibly
unintended, or at least badly timed, side effect of the main,
intended effect.

To elaborate: historically it is customary for daemons to perform a
double fork so as to be as isolated from the controlling terminal and
whatnot as possible. But it may not be desirable to perform anything
security-sensitive prior to at least the first fork, hence with
"forking" you have already lost the preciseness of the "ready"
indication, unless there is some further synchronization between the
parent and its child processes (which I have yet to see in practice;
a sketch of what it could look like is in the P.S. below).

So I'd say that unless the daemon is specifically fine-tuned, both
the forking and dbus service types are bound to carry some amount of
asynchronicity, as mentioned -- to the distaste of said
service-oriented supervisors, which strive to maximize service
usefulness over a considerable timeframe, which is much more than
ticking the "should be running OK because I started it without any
early failure" checkbox.

The main issue (though sometimes workable) with the sd_notify
approach is that in a composite application you may not have a direct
"consider me ready" hook to propagate through the underlying stack,
and tying it to the processing of the first request is out of the
question, because its timing is not guaranteed (if it ever arrives at
all).

Sorry, I didn't add much to the discussion; getting rid of
asynchronicity is tough in a world that was never widely interested
in a poll-free, check-free "true ready" state.

-- 
Poki
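P.S. For the record, the parent/child synchronization I have yet to
see in practice could look roughly like this -- a sketch only, with
error handling trimmed, the second fork/umask/chdir details of proper
daemonization omitted, and initialize_service() a made-up placeholder:

  #include <stdlib.h>
  #include <unistd.h>

  /* stand-in for the daemon's real initialization */
  static int initialize_service(void) { return 0; }

  int main(void)
  {
      int pipefd[2];
      char status;

      if (pipe(pipefd) < 0)
          exit(1);

      pid_t pid = fork();
      if (pid < 0)
          exit(1);

      if (pid > 0) {
          /* parent: instead of exiting right away (which is what
           * Type=forking takes as "ready"), block until the child
           * reports over the pipe that initialization finished */
          close(pipefd[1]);
          if (read(pipefd[0], &status, 1) != 1 || status != '+')
              exit(1);  /* child failed before becoming ready */
          exit(0);      /* "ready" and "parent exited" now coincide */
      }

      /* child: detach from the controlling terminal, then do the
       * (possibly security-sensitive) initialization post-fork */
      close(pipefd[0]);
      setsid();

      if (initialize_service() < 0) {
          (void) write(pipefd[1], "-", 1);
          exit(1);
      }

      (void) write(pipefd[1], "+", 1);
      close(pipefd[1]);

      for (;;)
          pause();  /* serve requests here */
  }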
