I've set up an Icinga2 HA cluster with clients in independent checking
zones (every Icinga2 client sends its check results back to the Icinga
master directly). I followed these steps:

1. run node wizard on the primary master, turn on accept_config in the
   ApiListener configuration (api.conf)
2. copy the ca directory to the secondary master, run node wizard on the
   secondary master, turn on accept_config there as well
3. generate endpoint tickets on the primary master
4. start the icinga2 process on both master instances
5. run node wizard on the client endpoints with the ticket strings
   generated on the primary master
6. start the icinga2 process on all client endpoints
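For reference, here is roughly what I expect the steps above to produce in
zones.conf on the masters (the master1/master2/client1 hostnames are just
placeholders for my real ones):

```
// Hypothetical HA master zone with two endpoints
object Endpoint "master1.example.com" {
  host = "master1.example.com"
}
object Endpoint "master2.example.com" {
  host = "master2.example.com"
}
object Zone "master" {
  endpoints = [ "master1.example.com", "master2.example.com" ]
}

// Each client gets its own zone, with the master zone as parent,
// so it reports its check results back to the masters directly
object Endpoint "client1.example.com" {
  host = "client1.example.com"
}
object Zone "client1.example.com" {
  endpoints = [ "client1.example.com" ]
  parent = "master"
}
```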

My first question: is there a quick way to verify the cluster is set up
correctly before I configure a cluster health check and log in to
Icinga Web 2 to look up its status? For example, should I look under
/var/lib/icinga2/api/repository, or under /etc/icinga2/zones.d, to verify
the HA cluster is working as expected? And do I also need to check the
same directories on the client endpoints? This question might sound a bit
confusing, but basically I'm trying to find a quick way to verify that my
HA cluster is working properly and that I haven't missed any
configuration.
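For context, the only quick checks I know of so far (assuming the default
log location and API port) are validating the configuration and watching
the cluster log on each node:

```shell
# Validate the local configuration on each node
icinga2 daemon -C

# Confirm the api feature is enabled and listening (default port 5665)
icinga2 feature list
ss -tlnp | grep 5665

# Watch for endpoint connection messages in the cluster log
tail -f /var/log/icinga2/icinga2.log
```

But I'm not sure whether a clean config check and an open port actually
prove that the two masters are replicating to each other.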

My second question: if I run icinga2 node update-config on the primary
master in the master zone, will the configuration files be synced to
/etc/icinga2/repository.d/ on the secondary master, or do I need to run
the update-config command manually on the secondary master as well?
Conversely, if my primary master in the HA setup is down for some reason
and I run the update-config command on my secondary master, will the
configuration files under /etc/icinga2/repository.d be synced over to the
primary master when it comes back online?
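For completeness, the cluster health check I mentioned above would be
something along these lines, using the built-in "cluster" check command
(the hostname and intervals here are placeholders, not my real values):

```
// Hypothetical cluster health check applied on a master node
apply Service "cluster-health" {
  check_command = "cluster"
  check_interval = 1m
  retry_interval = 30s
  assign where host.name == "master1.example.com"
}
```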

Thanks,
Max
_______________________________________________
icinga-users mailing list
[email protected]
https://lists.icinga.org/mailman/listinfo/icinga-users
