The solution below should work.
For each node in the cluster:
a: Stop the Cassandra service on the node.
b: Manually delete the data under the $data_directory/system/peers/ directory.
c: In cassandra-env.sh, add the line JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false".
d: Restart the service on the node.
e: Remove the line added in step c from cassandra-env.sh (JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false").
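The edits in steps c and e can be scripted. A minimal sketch, working on a temporary copy of cassandra-env.sh for illustration (in practice point ENV_FILE at the real file, e.g. /etc/cassandra/cassandra-env.sh on package installs; the path is an assumption):

```shell
#!/bin/sh
# Temp copy stands in for the real cassandra-env.sh in this sketch.
ENV_FILE=$(mktemp)
printf '%s\n' '# ... existing cassandra-env.sh contents ...' > "$ENV_FILE"

# Step c: append the flag so the node ignores its saved ring state on startup.
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"' >> "$ENV_FILE"
grep -c 'load_ring_state' "$ENV_FILE"   # prints 1

# (step d: restart the Cassandra service on the node here)

# Step e: remove the line again once the node has rejoined.
sed -i '/-Dcassandra.load_ring_state=false/d' "$ENV_FILE"
grep -c 'load_ring_state' "$ENV_FILE"   # prints 0

rm -f "$ENV_FILE"
```

Removing the line afterward matters: load_ring_state=false should only be a one-off for the next restart, not a permanent setting.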
Thanks,
Sai Potturi
On Thu, Oct 8, 2015 at 11:27 AM, Robert Wille wrote:
> We had some problems with a node, so we decided to rebootstrap it. My IT
> guy screwed up, and when he added -Dcassandra.replace_address to
> cassandra-env.sh, he forgot the closing quote. The node bootstrapped, and
> then refused to join the cluster. We shut it down, and then noticed that
> nodetool status no longer showed that node, and the “Owns” column had
> increased from ~10% per node to ~11% (we originally had 10 nodes). I don’t
> know why Cassandra decided to automatically remove the node from the
> cluster, but it did. We figured it would be best to make sure the node was
> completely forgotten, and then add it back into the cluster as a new node.
> Problem is, it won’t completely go away.
>
> nodetool status doesn’t list it, but it’s still in system.peers, and
> OpsCenter still shows it. When I run nodetool removenode, it says that it
> can’t find the node.
>
> How do I completely get rid of it?
>
> Thanks in advance
>
> Robert