Unless you are using Spark or Hadoop (or tooling such as OpsCenter that may read it), nothing consumes the data in that table, so you're safe to truncate it, or to rm the sstables while the instance is offline. If you do use that table, you can run `nodetool refreshsizeestimates` afterwards to repopulate it, or just wait for it to regenerate automatically (every 5 minutes).
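The two cleanup options above might look like this in practice (a hedged sketch; the data directory path is an assumption for a default package install and may differ on your system):

```shell
# Option 1: node still running -- truncate the table via cqlsh.
cqlsh -e "TRUNCATE system.size_estimates;"

# Option 2: node stopped -- remove the sstables from disk directly.
# The path below assumes the default /var/lib/cassandra data directory;
# adjust it to match your cassandra.yaml data_file_directories setting.
sudo rm -rf /var/lib/cassandra/data/system/size_estimates-*/*

# After restarting the node, force an immediate rebuild instead of
# waiting for the periodic refresh:
nodetool refreshsizeestimates
```

Note that `nodetool refreshsizeestimates` is only needed if something (Spark, OpsCenter, etc.) actually reads the table; otherwise the periodic refresh will take care of it.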

Chris

> On Mar 5, 2018, at 1:02 AM, Kunal Gangakhedkar <kgangakhed...@gmail.com> 
> wrote:
> 
> Hi all,
> 
> I have a 2-node cluster running cassandra 2.1.18.
> One of the nodes has run out of disk space and died - almost all of it shows 
> up as occupied by size_estimates CF.
> Out of 296GiB, 288GiB shows up as consumed by size_estimates in 'du -sh' 
> output.
> 
> This is while the other node is chugging along - shows only 25MiB consumed by 
> size_estimates (du -sh output).
> 
> Any idea why this discrepancy?
> Is it safe to remove the size_estimates sstables from the affected node and 
> restart the service?
> 
> Thanks,
> Kunal


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org