Thanks Jeff. Appreciate your reply. As you said, it looks like there were
entries in the commitlogs, and when Cassandra was brought up after deleting
the SSTables, the data from the commitlog was replayed. Maybe next time I
will let the replay happen after deleting the SSTables and then truncate the
table using CQL; that will ensure the table is empty. I could not truncate
from CQL in the first place because one of the nodes was not up.
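
For what it's worth, a rough sketch of the plan for next time, once all nodes
are back up (the keyspace/table names below are just placeholders for ours):

    -- run from cqlsh once every node is reachable (TRUNCATE requires that)
    TRUNCATE my_keyspace.my_table;

    # if auto_snapshot is enabled (the default), truncate leaves a snapshot
    # behind on each node; it can be listed and removed afterwards with
    nodetool listsnapshots
    nodetool clearsnapshot -t <snapshot_name> my_keyspace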

Regards,
Kunal

On Tue, Aug 11, 2020 at 8:45 AM Jeff Jirsa <jji...@gmail.com> wrote:

> The data probably came from either hints or commitlog replay.
>
> If you use `truncate` from CQL, it solves both of those concerns.
>
>
> On Tue, Aug 11, 2020 at 8:42 AM Kunal <kunal.v...@gmail.com> wrote:
>
>> HI,
>>
>> We have a 3-node Cassandra cluster, and one of the tables grew big, around
>> 2 GB, while it was supposed to be a few MBs. During nodetool repair, one of
>> the Cassandra nodes went down. Even after multiple restarts, that node kept
>> going down again after being up for a few minutes. We decided to truncate
>> the table by removing the corresponding SSTables from disk, since truncating
>> a table from cqlsh needs all the nodes to be up, which was not the case in
>> our env. After deleting the SSTables from disk on all 3 nodes, we brought up
>> Cassandra; all the nodes came up fine and we don't see any issue. But we
>> observed that the SSTable size is ~100 MB, which was a bit strange, and the
>> table still has old rows (around 20K) from a previous date; before the
>> removal it had around 500K rows. Not sure how the table still has old
>> records and the SSTable is ~100 MB even after removing the SSTables.
>> Any ideas? Any help understanding this would be appreciated.
>>
>> Regards,
>> Kunal
>>
>

-- 
Regards,
Kunal Vaid
