That sounds possible, but it would make the architecture more complex.
After all, if someone wants to set up a new node (scheduler or poller)
and uses the standard directories and port, he doesn't need any
configuration file on the remote servers.

Maybe we can just create 5 init.d scripts: one for each daemon type.
That way administrators keep good control over this (if we say a daemon
will launch others, they may not like it). As a sysadmin, I'm always
wary of such behavior from daemons; I prefer to know exactly what I
start. After all, on Windows we create 5 services.
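The init.d idea could look like this minimal sketch; the daemon name,
paths and command-line flags are illustrative assumptions, not Shinken's
actual layout:

```shell
#!/bin/sh
# Hypothetical init script for a single Shinken daemon type (here: the
# poller). Paths, file names and CLI flags are assumptions for the sketch.

DAEMON=shinken-poller
DAEMON_PATH=/usr/local/shinken/bin/$DAEMON     # assumed install prefix
PIDFILE=/var/run/shinken/$DAEMON.pid           # assumed pid-file location

case "${1:-status}" in
  start)
    echo "Starting $DAEMON"
    "$DAEMON_PATH" --daemon --pidfile "$PIDFILE"   # assumed daemon flags
    ;;
  stop)
    echo "Stopping $DAEMON"
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")"
    ;;
  status)
    # kill -0 only checks that the pid exists, it sends no signal
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
      echo "$DAEMON is running"
    else
      echo "$DAEMON is not running"
    fi
    ;;
  *)
    echo "Usage: $0 {start|stop|status}"
    ;;
esac
```

One copy of that script per daemon type (arbiter, scheduler, poller,
reactionner, broker) gives the sysadmin one explicit, familiar service
per process.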

If a daemon dies, we already have spares that can handle it; we should
not add another way to manage it. I think the simplest way is the best
here.

It's true the monitor approach is more dynamic, but adding a new daemon
is not done every day, so taking 2 minutes instead of 30 seconds to do
it is acceptable :) (and it means less code for us :p ). And even for a
new VM in the cloud, if it already launches a scheduler + poller, we
should not need to touch it after launch (or maybe just to update the
plugins).

Maybe distribution developers will want to add an upstart/monitor
behavior, but they are better at that than we are, so let them do it
(and thank them, of course).


Jean


On Tue, Jul 13, 2010 at 9:52 PM, Gerhard Lausser
<[email protected]> wrote:
>
>
>> We should have a better (and more multiplatform...) way to launch daemons :)
>
> What about having one shinken-master-process per node which spawns and
> monitors the others (workers, broker...)
> This master could be started by the best init/upstart/whatever available on
> a platform.
>
> Why do I prefer this? Imagine:
>
> hostA and hostB are rebooted.
> Some init-mechanism starts the shinken-master-process on each of them.
> The processes are stupid: they don't know which worker-processes to start,
> because there are no local config files.
> So the master-processes just wait and listen on a well-known port.
>
> The arbiter on hostX knows from its shinken-specific.cfg that hostA and
> hostB are part of the "Shinken-cloud".
> The arbiter pings the shinken-master-processes on hostA and hostB (like it
> does already today with workers/schedulers/etc.)
> The pings succeed, and the arbiter learns that hostA/B have no idea what to
> do.
> The arbiter looks into shinken-specific.cfg and sends them parts of the
> configuration.
> Now the shinken-master-processes know that they have to spawn for example a
> poller on port 3333, a scheduler on port 4444 and so on.
>
> This way we don't need any local daemon config files on non-arbiter nodes.
> (ok, if it's not possible to use the well-known port for the master-process,
> then there will be _one_ config file which is read at boot time. The
> non-default port can also be found in the arbiter's shinken-specific)
>
>
>
> shinken-specific:
>
>
> define monitor {
>       monitor_name             moni-a
>       address                  hostA
>       port                     9999   # default
> }
> define monitor {
>       monitor_name             moni-b
>       address                  hostB
>       port                     10000   # non-default, hostB requires a local config file
> }
>
> define broker{
>       broker_name              broker-a
>       address                  hostA
>       port                     7772
>       spare                    0
> ...
>
>
> That's just my thoughts on how we can make a shinken installation even more
> dynamic.
>
> Putting worker processes under the control of init... hmmm, I don't know
> whether I should like this. Shinken has spare daemons which can be activated
> should an active process fail.
> What if, for example, a poller process is restarted over and over because it
> fails when executing a plugin, but the pyro-method ping still works? The
> arbiter would always get an answer, although something is severely broken.
>
>
> Gerhard
>
>
> ------------------------------------------------------------------------------
> This SF.net email is sponsored by Sprint
> What will you do first with EVO, the first 4G phone?
> Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first
> _______________________________________________
> Shinken-devel mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/shinken-devel
>
