Hello Martin,

How do you perform the repairs?

Are you using incremental repairs, or full repairs without subranges?
Alex described issues related to these repairs here:
http://thelastpickle.com/blog/2017/12/14/should-you-use-incremental-repair.html

tl;dr:

> The only way to perform repair without anticompaction in “modern” versions
> of Apache Cassandra is subrange repair, which fully skips anticompaction.
> To perform a subrange repair correctly, you have three options:
> - Compute valid token subranges yourself and script repairs accordingly
> - Use the Cassandra range repair script which performs subrange repair
> - Use Cassandra Reaper, which also performs subrange repair
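
For the first option, here is a minimal sketch of the idea (Python, assuming
the default Murmur3Partitioner; the keyspace/table names and the number of
slices are placeholders): split the token ring into equal subranges and run
one "nodetool repair -st/-et" per slice. In a real script you would split
each node's primary ranges (from "nodetool ring" or the system tables)
rather than the whole ring:

    #!/usr/bin/env python3
    # Sketch: split the Murmur3 token ring into equal subranges and emit
    # one "nodetool repair" command per subrange. Subrange repairs
    # (-st/-et) skip anticompaction entirely.

    MIN_TOKEN = -2**63       # Murmur3Partitioner lower bound
    MAX_TOKEN = 2**63 - 1    # Murmur3Partitioner upper bound

    def subranges(start, end, steps):
        """Yield `steps` contiguous (start, end) token pairs covering the ring."""
        width = (end - start) // steps
        cur = start
        for i in range(steps):
            nxt = end if i == steps - 1 else cur + width
            yield cur, nxt
            cur = nxt

    for st, et in subranges(MIN_TOKEN, MAX_TOKEN, 100):
        # -full forces a full (non-incremental) repair of just this slice.
        print(f"nodetool repair -full -st {st} -et {et} my_keyspace my_table")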


If you can prevent anticompaction, disk space growth should be more
predictable.
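
To illustrate why the space grows (a rough model, with your reported numbers
plugged in; the rewritten size is an assumption inferred from them):
anticompaction rewrites every SSTable it touches into repaired/unrepaired
sets, so the old and new copies coexist on disk until the originals are
deleted.

    # Rough model: peak disk usage during anticompaction is roughly the
    # steady-state size plus the size of the data being rewritten.
    steady_state_gb = 700    # TotalDiskSpaceUsed before the repair
    rewritten_gb = 480       # hypothetical size of the 518 anticompacted SSTables
    peak_gb = steady_state_gb + rewritten_gb
    print(f"expected peak: ~{peak_gb} GB")  # observed peak was ~1180 GB

Once anticompaction finishes and the originals are removed, usage drops back
to the steady-state size, which matches the sudden jump back to 700GB you
observed.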

There might be more solutions out there by now; C* should also soon ship
with a sidecar, which is being actively discussed. Finally, incremental
repairs will receive important fixes in Cassandra 4.0; Alex wrote about this
too (yes, this guy loves repairs ¯\_(ツ)_/¯):
http://thelastpickle.com/blog/2018/09/10/incremental-repair-improvements-in-cassandra-4.html

I believe (and hope) this information will help you fix this issue.

C*heers,
-----------------------
Alain Rodriguez - @arodream - al...@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

On Wed, 12 Sep 2018 at 10:14, Martin Mačura <m.mac...@gmail.com> wrote:

> Hi,
> we're on cassandra 3.11.2 . During an anticompaction after repair,
> TotalDiskSpaceUsed value of one table gradually went from 700GB to
> 1180GB, and then suddenly jumped back to 700GB.  This happened on all
> nodes involved in the repair. There was no change in PercentRepaired
> during or after this process. SSTable count is currently 857, with a
> peak of 2460 during the repair.
>
> Table is using TWCS with 1-day time window.  Most daily SSTables are
> around 1 GB but the oldest one is 156 GB - caused by a major
> compaction.
>
> system.log.6.zip:INFO  [CompactionExecutor:9923] 2018-09-10
> 15:29:54,238 CompactionManager.java:649 - [repair
> #88c36e30-b4cb-11e8-bebe-cd3efd73ed33] Starting anticompaction for ...
> on 519 [...]  SSTables
> ...
> system.log:INFO  [CompactionExecutor:9923] 2018-09-12 00:29:39,262
> CompactionManager.java:1524 - Anticompaction completed successfully,
> anticompacted from 0 to 518 sstable(s).
>
> What could be the cause of the temporary increase, and how can we
> prevent it?  We are concerned about running out of disk space soon.
>
> Thanks for any help
>
> Martin
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>
