Hi All - I have a 13-node cluster running Cassandra 4.0.1.  If I stop a node, edit cassandra.yaml to comment out the first drive in the data_file_directories list, and restart the node, it fails to start, complaining that a node with its IP address already exists in the cluster.
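
For reference, the edit is roughly this (paths are illustrative, not my actual mount points):

    # cassandra.yaml
    data_file_directories:
    #    - /data/disk1      # first drive commented out -> node won't rejoin
        - /data/disk2
        - /data/disk3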

If I put the drive back into the list, the node still fails to start with the same error.  At that point the node seems useless, and I think the only option is to remove all the data and re-bootstrap it?
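
Based on the cassandra.replace_address hint in the error below, I assume the recovery is the standard dead-node replacement, i.e. wipe the node's data, commitlog, saved caches, and hints, then restart with the node's own address as the replacement target (sketch for a package install, set in cassandra-env.sh):

    # after clearing the data, commitlog, saved_caches, and hints directories:
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=172.16.100.39"

That re-streams all of the node's data, which is effectively the full re-bootstrap I was hoping to avoid.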
---------

ERROR [main] 2022-01-07 15:50:09,155 CassandraDaemon.java:909 - Exception encountered during startup
java.lang.RuntimeException: A node with address /172.16.100.39:7000 already exists, cancelling join. Use cassandra.replace_address if you want to replace this node.
        at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:659)
        at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:934)
        at org.apache.cassandra.service.StorageService.initServer(StorageService.java:784)
        at org.apache.cassandra.service.StorageService.initServer(StorageService.java:729)
        at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:420)
        at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:763)
        at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:887)

-----------

If I remove a drive other than the first one, this problem doesn't occur.  Any other options?  It appears that if the first drive in the list goes bad, or is simply removed, the entire node must be replaced.
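
If it helps with diagnosis, I'm guessing the restarted node is coming up with a fresh host ID and then colliding with its own old gossip entry; comparing the ID before and after the yaml edit should show that (hypothetical check):

    nodetool info | grep '^ID'    # host ID should normally survive restarts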

-Joe
