Hi Michael,

On Fri, May 15, 2015 at 10:41 AM, Michael Friedrich <[email protected]> wrote:
> Hi,
>
> On 15.05.2015 at 10:20, Henti Smith wrote:
> > Good day all.
> >
> > I hope somebody can help me, since I'm not really succeeding in getting
> > what I want out of the icinga2 configs I have.
> >
> > The set-up I need is as follows. We have clusters of machines all over
> > the world. In each cluster I need to deploy a monitoring server.
> >
> > In my lab I've built the following:
> >
> > 1 x monitoring server
> > 3 x client servers
> >
> > I then used the
> > http://docs.icinga.org/icinga2/latest/doc/module/icinga2/toc#!/icinga2/latest/doc/module/icinga2/chapter/icinga2-client#icinga2-client-configuration-local
> > documentation to build a master and clients, and generated configuration
> > for the client services on the master. The reasoning behind this was
> > that we wanted to control the configs on the clients.
> >
> > This works perfectly while the client is connected to the master.
> > However, when it's not, the information on the master goes stale instead
> > of critical, so we can't act on it.
>
> Which is perfectly fine from the master's point of view, as there is no
> need to create a dummy check result enforcing a critical state. Consider
> the client reconnecting and replaying the check history - the master would
> consider the dummy critical result more recent and ignore the client's
> historical data. Which is why you *must not* change the state history or
> anything related.
>
> In terms of your client being disconnected, you should have a check in
> place (the host check_command when using 'node update-config') checking
> whether the cluster zone is connected or not. You may then act upon this
> state and notify your users based on it. Furthermore, you may organize
> further dependencies based on that connection information.
>
> If the client is gone, you'd have other problems than hosts/services
> becoming stale. If you really need a filter for stale checks, adapt the
> 'last_check' column filter in your dashboards.
Ok, I've checked in my lab and it's working as expected, or as close as I
need it to. So it seems my problem is in my pre-prod set-up, which I'll
rebuild.

> > Does this mean that the client-with-local-configuration set-up does not
> > allow the master to set services to unknown when the client is not
> > reachable, and is there a way to monitor the client and set it to
> > critical when unreachable?
>
> The master does not influence or modify checks corresponding to a client
> zone. That's by design, and works in most cases. I think it's a matter of
> understanding (and also throwing away the passive check result approach
> from Nagios/Icinga 1).

I understand this completely, and I would much rather move away from
passive checks. The above set-up works perfectly; the only thing I have to
figure out is how to add checks on the master for clients, for things like
ssh. Running check_ssh on the client means nothing if somebody changed a
firewall rule in the DMZ where the client sits, but I'll figure that out.

Thanks for the response.

Henti.
_______________________________________________ icinga-users mailing list [email protected] https://lists.icinga.org/mailman/listinfo/icinga-users
