We use SizeTieredCompactionStrategy. The nodes were about 67% full, and we were planning to add new nodes soon (doubling the cluster to 6). I've been watching disk usage, and the nodes were taking about an extra 100GB during compactions, so I thought we would be okay for another week. The other nodes still look like that; it's just this one node that's now using a lot more, and I'm worried about running out of disk space. I've gone ahead and added 2 new nodes and was hoping cleanup would buy some space back, but it looks like compaction still has to complete first, and it's just continuing to eat up space.
I guess, worst case scenario, I can remove that node and replace it, but it's really strange that this is happening on just the one node, and apparently adding the new nodes hasn't helped in the short term.

On Sat, Feb 13, 2016 at 4:37 PM, Jan Kesten <j.kes...@enercast.de> wrote:
> Hi,
>
> What compaction strategy do you use? What you are seeing is likely a
> compaction in progress - think of 4 sstables of 50GB each; compacting
> those can take up to 200GB extra while the new sstable is being
> rewritten. After that, the old ones are deleted and the space is freed
> again.
>
> If you use SizeTieredCompaction you can end up with very large sstables,
> as I do (>250GB each). In the worst case you could need twice the
> space - which is why I set my disk monitoring threshold at 45% usage.
>
> Just my 2 cents.
> Jan
>
> Sent from my iPhone
>
> > Am 13.02.2016 um 08:48 schrieb Branton Davis <branton.da...@spanning.com>:
> >
> > One of our clusters had a strange thing happen tonight. It's a 3-node
> > cluster running 2.1.10. The primary keyspace has RF 3 and vnodes with
> > 256 tokens.
> >
> > This evening, over the course of about 6 hours, disk usage increased
> > from around 700GB to around 900GB on only one node. I was at a loss as
> > to what was happening and, on a whim, decided to run nodetool cleanup
> > on the instance. I had no reason to believe it was necessary, as no
> > nodes had been added or tokens moved (not intentionally, anyhow). But
> > it immediately cleared up that extra space.
> >
> > I'm pretty lost as to what happened here. Any ideas where to look?
> >
> > Thanks!
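Jan's point about temporary space can be made concrete with a little arithmetic. Here's a minimal sketch; the only figure taken from the thread is the 4 x 50GB example, and the disk/usage numbers below are made up purely for illustration:

```python
def compaction_headroom_gb(input_sstable_sizes_gb):
    """Worst-case extra space a single compaction can need.

    While the merged sstable is being rewritten, both the inputs and the
    output exist on disk at once. If nothing can be dropped (no overwritten
    rows or expired tombstones), the output is as large as the sum of the
    inputs, so the peak temporary usage equals the total input size.
    """
    return sum(input_sstable_sizes_gb)


# Jan's example: compacting four 50GB sstables can transiently need ~200GB.
print(compaction_headroom_gb([50, 50, 50, 50]))  # 200

# Illustrative (made-up) numbers showing why a node that looks only ~2/3
# full can still be at risk: with large tiers, the required headroom can
# approach the live data size itself, which is the usual reasoning behind
# keeping SizeTieredCompactionStrategy disks under roughly 50% usage.
disk_gb = 1500
used_gb = 1000  # roughly 67% full, like the node in the thread
free_gb = disk_gb - used_gb
needed_gb = compaction_headroom_gb([250, 250, 250, 250])  # one big tier
print(free_gb >= needed_gb)  # False: 500GB free < 1000GB worst case
```

This is only the worst case; real compactions often reclaim space by merging overwrites and purging tombstones, so the output is frequently smaller than the inputs.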