There seem to be a lot of SSTables in a repaired state and a lot in an 
unrepaired state. For example, for this one table, the logs report

TRACE [main] 2017-08-15 23:50:30,732 LeveledManifest.java:473 - L0 contains 2 SSTables (176997267 bytes) in Manifest@1217144872
TRACE [main] 2017-08-15 23:50:30,732 LeveledManifest.java:473 - L1 contains 10 SSTables (2030691642 bytes) in Manifest@1217144872
TRACE [main] 2017-08-15 23:50:30,732 LeveledManifest.java:473 - L2 contains 94 SSTables (19352545435 bytes) in Manifest@1217144872

and 

TRACE [main] 2017-08-15 23:50:30,731 LeveledManifest.java:473 - L0 contains 1 SSTables (65038718 bytes) in Manifest@499561185
TRACE [main] 2017-08-15 23:50:30,731 LeveledManifest.java:473 - L2 contains 5 SSTables (117221111 bytes) in Manifest@499561185
TRACE [main] 2017-08-15 23:50:30,731 LeveledManifest.java:473 - L3 contains 39 SSTables (7377654173 bytes) in Manifest@499561185
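
As a sanity check on those numbers: as I understand LeveledManifest, a level's
score is roughly the bytes in the level divided by the maximum bytes allowed
for that level. Here's a back-of-the-envelope version (a sketch only: it
assumes the default 160 MB sstable size and a fanout of 10, maxBytesForLevel
is an approximation rather than the exact Cassandra code, and I can't tell
from the logs which manifest is the repaired set):

    // Rough per-level scores for the two manifests logged above (sketch only).
    public class LevelScores {
        static final long MAX_SSTABLE_BYTES = 160L * 1024 * 1024; // default LCS sstable size

        // Approximation of LeveledManifest.maxBytesForLevel for level >= 1.
        static long maxBytesForLevel(int level) {
            return (long) (Math.pow(10, level) * MAX_SSTABLE_BYTES);
        }

        public static void main(String[] args) {
            System.out.printf("Manifest@1217144872 L1: %.2f%n", 2_030_691_642d / maxBytesForLevel(1));  // ~1.21
            System.out.printf("Manifest@1217144872 L2: %.2f%n", 19_352_545_435d / maxBytesForLevel(2)); // ~1.15
            System.out.printf("Manifest@499561185 L2: %.2f%n", 117_221_111d / maxBytesForLevel(2));     // ~0.01
            System.out.printf("Manifest@499561185 L3: %.2f%n", 7_377_654_173d / maxBytesForLevel(3));   // ~0.04
        }
    }

If those assumptions hold, Manifest@1217144872 is over target on both L1 and
L2, while Manifest@499561185 is far under target everywhere, so one set always
has a candidate compaction and the other rarely does.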

With that many SSTables, is it possible that there's always a compaction to be
run on the "repaired" set, such that unrepaired compactions are essentially
"starved", considering that WrappingCompactionStrategy prioritizes the
"repaired" set?
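
The failure mode I'm imagining looks like the sketch below. To be clear, this
is not the actual WrappingCompactionStrategy code (which may weigh the two
sets differently); the names are illustrative:

    // Hedged sketch of the assumed "repaired first" prioritization.
    interface Strategy {
        Runnable nextBackgroundTask(); // null when this set has nothing to compact
    }

    class WrappingSketch {
        final Strategy repaired;
        final Strategy unrepaired;

        WrappingSketch(Strategy repaired, Strategy unrepaired) {
            this.repaired = repaired;
            this.unrepaired = unrepaired;
        }

        Runnable nextBackgroundTask() {
            Runnable task = repaired.nextBackgroundTask();
            if (task != null)
                return task; // taken every round while repaired levels stay over target
            return unrepaired.nextBackgroundTask(); // reached only when the repaired set is idle
        }
    }

If the repaired levels never drop below their targets, the second call is
never reached and the unrepaired set never gets compacted.

On Wednesday, August 2, 2017, 2:35:02 PM PDT, Sotirios Delimanolis
<sotodel...@yahoo.com.INVALID> wrote: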

It turns out there are already logs for this in Tracker.java. I enabled those
and clearly saw that the old files are being tracked.
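For completeness: I didn't need a rebuild to turn these on; nodetool can raise
a logger's level at runtime. The class qualifier below assumes the
org.apache.cassandra.db.lifecycle package layout of recent versions:

    nodetool setlogginglevel org.apache.cassandra.db.lifecycle.Tracker TRACE
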
What else can I look at for hints about whether these files are later 
invalidated/filtered out somehow?

On Tuesday, August 1, 2017, 3:29:38 PM PDT, Sotirios Delimanolis 
<sotodel...@yahoo.com.INVALID> wrote:

There aren't any ERROR logs for failures to load these files, and they do get
compacted away. I'll try to plug some DEBUG logs into a custom Cassandra
build.
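Something like this is what I have in mind (the class and call site are
hypothetical; only the slf4j pattern matches what Cassandra already uses):

    import java.util.Set;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Hypothetical probe around the sstable filtering paths, not real Cassandra code.
    class CandidateFilterProbe {
        private static final Logger logger = LoggerFactory.getLogger(CandidateFilterProbe.class);

        static void logFiltered(Set<String> dropped) {
            logger.debug("Filtered {} sstables out of compaction candidates: {}",
                         dropped.size(), dropped);
        }
    }

On Tuesday, August 1, 2017, 12:13:09 PM PDT, Jeff Jirsa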
<jji...@gmail.com> wrote:

I don't have time to dive deep into the code of your version, but it may be
https://issues.apache.org/jira/browse/CASSANDRA-13620, or it may be something
else.
I wouldn't expect compaction to touch them if they're invalid. The handle may
be a leftover from trying to load them.


On Tue, Aug 1, 2017 at 10:01 AM, Sotirios Delimanolis 
<sotodel...@yahoo.com.invalid> wrote:

@Jeff, why does compaction clear them, and why does Cassandra keep a handle to
them? Shouldn't they be ignored entirely? Is there an error log I can enable
to detect them?
@kurt, there are no such logs for any of these tables. We have a custom log in
our build of Cassandra that shows compactions are happening for that table,
but they only ever include the files from July.

On Tuesday, August 1, 2017, 12:55:53 AM PDT, kurt greaves 
<k...@instaclustr.com> wrote:

Seeing as there aren't even 100 SSTables in L2, LCS should be gradually trying
to compact L3 with L2. You could search the logs for "Adding high-level (L3)"
to check if this is happening.
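For example (the log path assumes a default install, and on newer versions the
message may land in debug.log rather than system.log):

    grep "Adding high-level (L3)" /var/log/cassandra/system.log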
