Okay, so how big exactly is the data on disk? You said that removing and adding a new node left you with 20GB on disk; was that done via the '-Dcassandra.replace_address=...' parameter? If not, the new node will almost certainly have a different token range, and its size will not be directly comparable to the existing node's if you have uneven partitions or a small number of partitions in the table. Also, try a major compaction; it's a lot easier than replacing a node.
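
For reference, a major compaction on a single table can be kicked off with nodetool, and the replace_address option is a JVM flag set at node startup (e.g. via JVM_OPTS in cassandra-env.sh). The keyspace, table and IP address below are placeholders only:

    # Major compaction of one table (names are placeholders); with no
    # arguments, 'nodetool compact' compacts all keyspaces on the node
    nodetool compact my_keyspace my_table

    # When replacing a dead node, start the replacement with the old
    # node's address so it takes over the same token range
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.5"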

On 17/09/2021 12:28, Abdul Patel wrote:
Yes, I checked and cleared all snapshots. I also had incremental backups in the backup folder and removed those as well. It's purely data.
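
For reference, clearing snapshots and removing incremental backup files is typically done along these lines (the data directory path below is the default and is only an example):

    # List and then clear all snapshots on this node
    nodetool listsnapshots
    nodetool clearsnapshot --all    # on older versions: nodetool clearsnapshot

    # Incremental backup files are not removed by clearsnapshot; they sit
    # under each table's backups/ directory and have to be deleted manually
    find /var/lib/cassandra/data -type d -name backups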


On Friday, September 17, 2021, Bowen Song <bo...@bso.ng> wrote:

    Assuming your total disk space is a lot bigger than 50GB
    (accounting for disk space amplification, the commit log, logs, OS
    data, etc.), I would suspect the disk space is being used by
    something else. Have you checked that the space is actually being
    used by the Cassandra data directory? If so, have a look at the
    output of the 'nodetool listsnapshots' command as well.
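
    As a quick check, something along these lines (assuming the default
    data directory location, which may differ on your install) will show
    where the space is going:

        # Compare overall filesystem usage with the Cassandra data directory
        df -h /var/lib/cassandra
        du -sh /var/lib/cassandra/data

        # Snapshots are hard-linked SSTables and can hold on to a lot of space
        nodetool listsnapshots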


    On 17/09/2021 05:48, Abdul Patel wrote:

        Hello

        We have Cassandra with LeveledCompactionStrategy, and recently
        found the filesystem almost 90% full even though the data was
        only about 10 million records.
        Will a manual compaction work? I'm not sure it's recommended,
        and disk space is also a constraint. I tried removing and adding
        one node, and now the data is at 20GB, which looks appropriate.
        So is the only way to reclaim space to remove/add a node?
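
        For reference, a table's reported live vs. total space on disk
        can be checked like this ('tablestats' is 'cfstats' on older
        releases; the keyspace and table names are placeholders):

            nodetool tablestats my_keyspace.my_table | grep -i 'space used'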
