Using v2.4.3, I have a three-node cluster with the following settings for the
clusterer-related usrloc parameters:
modparam("usrloc", "db_url", "mysql://user:pass@localhost/opensips")
modparam("usrloc", "working_mode_preset", "full-sharing-cluster")
modparam("usrloc", "location_cluster", 1)

I have been testing restart persistency by stopping and restarting the nodes,
one at a time.
My expectation is that, immediately after any node restarts, it should pull
the usrloc data from the other nodes over the bin interface, so the existing
registrations should appear in "opensipsctl ul show" within a few seconds of
the restart.
This works as expected when I restart node2 or node3, but not when I restart
node1: after a restart, node1 shows nothing in the "ul show" list until a
device registers with it.
The only difference is that node1 is flagged as "seed" in the clusterer
database. I had hoped this only meant that the seed server must start first,
before any of the others. Once all three nodes are running I want them all to
be equals, so that I can add new nodes and remove existing ones at will
(horizontal scalability).
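For reference, the seed designation was made roughly like this (from memory;
as I understand the 2.4 schema, "seed" goes in the flags column of the
clusterer table):

mysql opensips -e "UPDATE clusterer SET flags = 'seed' WHERE cluster_id = 1 AND node_id = 1;"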

Does the seed node handle restart persistency differently? Is it, for
example, trying to read the registrations back from its local MySQL database?
If so, I have a problem, because my nodes run in containers and the local
database is rebuilt (effectively cleared) every time I restart a node.
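If it helps, the quickest confirmation I can think of is to compare the
in-memory view against the local table straight after restarting node1
(assuming the default location table name):

opensipsctl ul show                                   # in-memory usrloc view
mysql opensips -e "SELECT COUNT(*) FROM location;"    # local DB copy, 0 in my containers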

John Quick
Web: www.smartvox.co.uk


