Hello team,
Just to add to the discussion: one may run "nodetool disablebinary",
followed by "nodetool disablethrift", followed by "nodetool drain".
Nodetool drain also does the work of "nodetool flush", plus declaring to
the cluster that this node is down and not accepting traffic.
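Sketched as a script, the sequence above (plus the 10-second gap suggested below) would look like this; it assumes nodetool and the cassandra service are available on the node being restarted, and the function is deliberately left uninvoked:

```shell
#!/bin/sh
# Sketch of the graceful stop sequence from this thread; assumes
# nodetool and the cassandra service exist on the node being restarted.
graceful_stop() {
  nodetool disablebinary   # stop accepting native-protocol (CQL) clients
  nodetool disablethrift   # stop accepting Thrift clients (relevant on 2.1.x)
  nodetool drain           # flush memtables; peers see this node as down
  sleep 10                 # brief gap before the stop, as suggested below
  service cassandra stop
}
# graceful_stop            # invoke on the node being restarted
```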
Thanks
On Mon, 25 Nov, 2019,
Before a Cassandra shutdown, "nodetool drain" should be executed first. As
soon as you run "nodetool drain", the other nodes will see this node as
down and no new traffic will come to it.
I generally leave a 10-second gap between "nodetool drain" and the Cassandra stop.
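To confirm from a peer that the drained node really is seen as down before stopping it, one can filter the "nodetool status" output for that node's state column; a sketch, where the address is a placeholder:

```shell
#!/bin/sh
# Reads "nodetool status" output on stdin and prints the state column
# (e.g. UN or DN) for the given node address (sketch).
peer_state() {
  awk -v ip="$1" '$2 == ip { print $1 }'
}
# On any other node, after the drain (10.0.0.1 is a placeholder address):
#   nodetool status | peer_state 10.0.0.1
```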
On Sun, Nov 24, 2019 at 9:52 AM Paul Mena
Thank you for the replies. I had made no changes to the config before the
rolling restart.
I can try another restart but was wondering if I should do it differently. I
had simply done "service cassandra stop" followed by "service cassandra start".
Since then I've seen some suggestions to proc
Did you change the datacenter name or make any other config changes before
the rolling restart?
On Sun, Nov 24, 2019 at 8:49 PM Paul Mena wrote:
> I am in the process of doing a rolling restart on a 4-node cluster running
> Cassandra 2.1.9. I stopped and started Cassandra on node 1 via "service
>
It sounds silly, but sometimes restarting again the node that is showing as
down from the other nodes fixes the issue. This looks like a gossip issue.
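In script form, that remedial restart is just the stop/start pair already used in the thread, followed by a re-check of the ring; a sketch, with the invocation left commented out:

```shell
#!/bin/sh
# Sketch: restart the node that peers report as down, then re-check the
# ring from another node. Assumes the same service name used in the thread.
restart_again() {
  service cassandra stop
  service cassandra start
  # then, from a different node, confirm the node moves back to UN:
  # nodetool status
}
# restart_again            # run on the affected node
```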
On Sun, Nov 24, 2019 at 7:19 AM Paul Mena wrote:
> I am in the process of doing a rolling restart on a 4-node cluster running
> Cassandra 2.1.9. I stopped
I am in the process of doing a rolling restart on a 4-node cluster running
Cassandra 2.1.9. I stopped and started Cassandra on node 1 via "service
cassandra stop/start", and noted nothing unusual in either system.log or
cassandra.log. Doing a "nodetool status" from node 1 shows all four nodes up