Strahil,

Interesting...
Yet, this doesn't explain why a token of 30000 causes the nodes to never
assemble a cluster (waiting for half an hour, using wait_for_all=1), while
setting it to 29000 works like a charm.

Definitely.

Could you please provide a bit more info about your setup (config/logs/how many nodes the cluster has/...)? I've just briefly tested a two-node setup with a 30 sec token timeout and it was working perfectly fine.
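
For reference, here is a minimal sketch of the kind of config I tested with
(cluster name, node names and addresses are placeholders, not taken from your
setup):

    totem {
        version: 2
        cluster_name: test      # placeholder name
        transport: knet
        token: 30000
    }

    nodelist {
        node {
            ring0_addr: node1   # placeholder hostname/IP
            nodeid: 1
        }
        node {
            ring0_addr: node2   # placeholder hostname/IP
            nodeid: 2
        }
    }

    quorum {
        provider: corosync_votequorum
        two_node: 1
        wait_for_all: 1         # as in your setup
    }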


Thankfully we have an RH subscription, so RH devs will provide more detailed
output on the issue.

As Jehan correctly noted, if it really gets to RH devs it will probably get to me ;) But before that, GSS will take care of checking configs/hw/logs/... and they are really good at finding problems with setup/hw/...


I was hoping that I had missed something in the documentation about the maximum token size...

Nope.

No matter what, if you can send config/logs/... we can try to find the root of the problem here on the ML, or you can go through GSS; but as Jehan said, it would be nice if you could post the result so other people (me included) know what the main problem was.

Thanks and regards,
  Honza


Best Regards,
Strahil Nikolov






On Thursday, March 11, 2021, 19:12:58 GMT+2, Jan Friesse
<jfrie...@redhat.com> wrote:

Strahil,
Hello all,
I'm building a test cluster on RHEL8.2 and I have noticed that the cluster
fails to assemble (nodes stay inquorate, as if the network is not working) if
I set the token to 30000 or more (30s+).

Knet waits for enough pong replies from other nodes before it marks them
as alive and starts sending/receiving packets to/from them. By default it
needs to receive 2 pongs, and pings are sent 4 times per token timeout, so
that means 15 sec until a node is considered up with a 30 sec token timeout.
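
To put numbers on that (option names are from corosync.conf(5); the derived
defaults are my assumption based on the arithmetic above, so please verify
against the man page for your version):

    totem {
        token: 30000
        # Illustrative defaults, normally left unset:
        # knet_pong_count: 2        # pongs needed before a link is marked up
        # knet_ping_interval: 7500  # assumed default: token / (knet_pong_count * 2) ms
    }
    # => 2 pongs x 7500 ms between pings ~= 15 s before a node is seen as up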

What is the maximum token value with knet? On SLES12 (I think it was corosync
1), I used to set the token/consensus to far greater values on some of our
clusters.

I'm really not aware of any arbitrary limit.
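
If it helps, you can check the token value actually in effect at runtime with
corosync-cmapctl (key name to the best of my knowledge; the output below is
only illustrative):

    # corosync-cmapctl runtime.config.totem.token
    runtime.config.totem.token (u32) = 30000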


Best Regards,
Strahil Nikolov


Regards,

   Honza



_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/



