Accidentally removed SSTables of unneeded data
Hi,

I am running Cassandra v2.1 on a 3 node cluster.

    # yum list installed | grep cassa
    cassandra21.noarch        2.1.12-1    @datastax
    cassandra21-tools.noarch  2.1.12-1    @datastax

Unfortunately, I accidentally removed (using rm) the SSTables older than 10 days of a table on all 3 nodes.

Running 'nodetool repair' on one of the 3 nodes returns an error, whereas it does not on another.

I don't need to recover the lost data, but I would like 'nodetool repair' to stop returning an error.

Thanks for any advice.

Simon
Re: Accidentally removed SSTables of unneeded data
Hi Shalom,

I've run refresh as Nitan suggested, without sstablescrub. Then I tried drain/restart on the 3 nodes. The repair is now OK.

Thanks for your help

Simon

On 02/05/2019 16:58, shalom sagges wrote:

Hi Simon,

If you haven't done that already, try to drain and restart the node you deleted the data from. Then run the repair again.

Regards,
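For anyone hitting this thread later, the sequence that resolved it can be sketched as a per-node procedure. This is a sketch, not the exact commands from the thread: the keyspace/table names (opush, my_table) and the service restart command are assumptions to substitute for your own, and it is written as a dry run that only prints each step.

```shell
# Dry-run helper: prints each step instead of executing it.
# To actually execute, replace the body with: run() { "$@"; }
run() { echo "+ $*"; }

# 1. Reload whatever SSTables remain on disk into the node
#    (keyspace/table names are placeholders)
run nodetool refresh opush my_table

# 2. Flush memtables and stop accepting writes, then restart the node
#    (restart command depends on the install; assumed here)
run nodetool drain
run sudo service cassandra restart

# 3. Once all nodes are back up, re-run the repair
run nodetool repair opush
```

Repeat on each of the 3 nodes before the final repair, as described above.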
Nodetool status load value
Hi,

Sorry if the question has already been answered.

When nodetool status is run on a 3 node cluster (replication factor: 3), the load between the different nodes is not equal.

    # nodetool status opush
    Datacenter: datacenter1
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address       Load     Tokens  Owns (effective)  Host ID                               Rack
    UN  192.168.11.3  9,14 GB  256     100,0%            989589e8-9fcf-4c2f-85e9-c0599ac872e5  rack1
    UN  192.168.11.2  8,54 GB  256     100,0%            42223dd0-1adf-433c-810d-8bc87f0d3af4  rack1
    UN  192.168.11.4  8,92 GB  256     100,0%            1cecacc3-c301-4ae9-a71e-1a1a944d5731  rack1

Is it OK to have differences between nodes?

Thanks for your answer.

Simon
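A few percent of spread between nodes is normal with vnodes (256 tokens) and RF 3, since token ranges and compaction state never line up exactly. The imbalance in the status output above can be quantified with a one-liner. A minimal sketch, assuming the UN rows are saved to a local file (the here-doc below just recreates the sample rows; the comma decimal separator comes from the locale of the original output):

```shell
# Recreate the three UN rows from the nodetool status output above
cat > status.txt <<'EOF'
UN  192.168.11.3  9,14 GB  256  100,0%  989589e8-9fcf-4c2f-85e9-c0599ac872e5  rack1
UN  192.168.11.2  8,54 GB  256  100,0%  42223dd0-1adf-433c-810d-8bc87f0d3af4  rack1
UN  192.168.11.4  8,92 GB  256  100,0%  1cecacc3-c301-4ae9-a71e-1a1a944d5731  rack1
EOF

# Average load, max load, and skew of max over average, in percent.
# $3 is the Load column; commas are converted to dots before summing.
awk '/^UN/ { gsub(",", ".", $3); sum += $3; n++; if ($3 > max) max = $3 }
     END  { printf "avg=%.2f GB, max=%.2f GB, skew=%.1f%%\n",
                   sum/n, max, (max/(sum/n) - 1) * 100 }' status.txt
# -> avg=8.87 GB, max=9.14 GB, skew=3.1%
```

Here the heaviest node is about 3% above the average, which is well within the range one would expect to fluctuate across compactions and repairs.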