Hello Tom,

OM internal clustering is described here
https://openmeetings.apache.org/Clustering.html
and seems to be much simpler than what you have configured :)

What would you like to achieve?
The clustering available out-of-the-box is made on top of Hazelcast
Which reports to openmeetings.log (on each node) if it was able to see
other nodes, is it works for you?

On Tue, 19 Oct 2021 at 21:04, Tom Meierjürgen <tomca...@web.de> wrote:

> Sorry, i forgot to add one Info:
> OM-Version:
> 6.1.0
> Revision c03148c
> Builddate 2021-07-17T05:52:00Z
>
>
>
> Am 19.10.2021 um 15:28 schrieb Tom Meierjürgen:
>
> Hi Mailinglist,
>
> i´ve got some small struggles with a 2 Node (active-active configuration
> due to actually lacking a third node for the beginning)
> Clusterconfiguration. In some combinations of config-Parameters i´ve got
> proper communication but only on the first node,but not on the second,on
> some no Communication with other clinets on both nodes and sometimes
> only betweeen selected clients (coming from the same homenetwork but
> different machines, third client from other lan will ony get avatars to
> see but no audio/video except his own stream).
>
>  My setting contains following parts:
>
> 1.) An IPSEC Tunnel including a GRE-Tunnel on its inside between the two
> nodes, prepared for later DMVPN Usage
>
> 2.) Using frr for routing BGP/NHRP/PIM for Mulitcast-Routing over the
> Tunnel
>
> 3.) A well working 2-Node Galera-Cluster
>
> 4.) GlusterFS Volume across the two nodes (to eleminate the single point
> of failure NFS-Server), working well mounting a directory of the volume
> to /opt/openmeetings/webapps/openmeetings/data
>
> 5.) Coturn Turnserver running on each node
>
> 6.) Docker on Each node, tried standalone KMS on each node, swarm on
> both nodes running kms as service in global Mode and replica Mode,same
> result in each mode, sometimes working on node 1 properly but no
> communications between the users on node2
>
> 7.) openmeetings configured to use .pem Certifcates instead of jks due
> to using letsencrypt on both nodes for webservices+mailservices
> (web+mailservices not clustered up to now)
>
> 8.) Hazelcast Interface set to the according tunnelinterfaces so that
> the multicast-communication follows the tunnel
>
> 9.) Kurento-ws interface set to several versions :
>
>        on both nodes
>
>       1.) localhost interface
>
>       2.) Dockers hostinterface
>
>       3.) Dockers gwbridge interface
>
>       tried also to mix like node 1 localhostinterface - node 2 bridge
> interface and so on (which mostly leads to not working at all)
>
> 10:) tried also different kurento User-ID on both nodes as well as same
> id,different secrets as well as same secret for kurento and turnserver
> on both nodes
>
> 11.) in turnserverconfig (Coturn) allowed-peer-ip is set to all local
> interfaces and reachable interfaces of the companion-node, relay-ip is
> not set at all as the config-docu stated taht then tuirnserver will
> figure it out by himself
>
> 12.) Node 1 is running with plesk and psa-firewall, node 2 pure ubuntu
> with ufw (both ubuntu versions are 20.04, the reason to use docker
> instead of pure kurento on the nodes as kurento seems to be not
> available for 20.04 up to now), iptables rules seem to be complete for
> running all required services
>
> Actually im really running out of ideas where to look for getting the
> second node running well same as node1...has anyone a clue where to look
> for reasons why node 1 works well but node 2 not? i also get sometimes a
> yellow reconnecting message inside om in rooms,but only sonetimes on
> node 1 and regulary on node 2...
>
> If its a conceptual error, where ist the error based? Is it perhaps
> impossible to get om in a 2 node cluster running due to design? Which
> config or log (om logs seem not to have real errors anymore, just
> warnings fr5om time to time which seem not to coreespond to the given
> problem) or statsuinfos should be postetd here to be helpful in finding
> the Caveeat?
>
> Thanks in advance for any constructive reply,
>
>  Tom
>
>

-- 
Best regards,
Maxim

Reply via email to