[ https://issues.apache.org/jira/browse/CASSANDRA-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13978319#comment-13978319 ]

T Jake Luciani commented on CASSANDRA-6916:
-------------------------------------------

This is a really interesting ticket!  I'm just looking at the final patch and
wondering how this works with leveled compaction.  Aren't the sstables being
merged from the previous level updated with the new level at the end of
compaction?

What level does the partially written sstable use for reads?  Are these
considered part of level 0 until they are fully written?

More generally, do we have a stress test that uses wide rows?  I think it's
important we stress test both cases, as leveled compaction is becoming more
widely used.



> Preemptive opening of compaction result
> ---------------------------------------
>
>                 Key: CASSANDRA-6916
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6916
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Benedict
>            Assignee: Benedict
>            Priority: Minor
>              Labels: performance
>             Fix For: 2.1 beta2
>
>         Attachments: 6916-stock2_1.mixed.cache_tweaks.tar.gz, 
> 6916-stock2_1.mixed.logs.tar.gz, 6916v3-preempive-open-compact.logs.gz, 
> 6916v3-preempive-open-compact.mixed.2.logs.tar.gz, 
> 6916v3-premptive-open-compact.mixed.cache_tweaks.2.tar.gz
>
>
> Related to CASSANDRA-6812, but a little simpler: when compacting, we mess 
> quite badly with the page cache. One thing we can do to mitigate this problem 
> is to start using the sstable we're writing before we've finished writing it, 
> and to drop the old sstables' regions from the page cache as soon as the new 
> sstable has them (even if they're only written to the page cache). This 
> should minimise page cache churn: the old sstables must be larger than the 
> new sstable, and since both will be in memory, dropping the old regions is 
> at least as good as dropping the new.
> The approach is quite straightforward. Every X MB written:
> # grab the flushed length of the index file;
> # grab the second-to-last index summary record, after excluding those that 
> point to positions after the flushed length;
> # open the index file, and check that our chosen record doesn't point outside 
> the flushed length of the data file (pretty unlikely);
> # open the sstable with the calculated upper bound.
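
(Illustrative only, not the actual patch: the bound calculation in steps 1-4
could look roughly like the sketch below, with a hypothetical IndexSummaryEntry
type standing in for Cassandra's real index summary classes.)

{code:java}
import java.util.List;

// Hypothetical stand-in for an index summary entry: where its index-file record
// starts, and where that record points into the data file.
final class IndexSummaryEntry
{
    final long indexFileOffset;
    final long dataFileOffset;

    IndexSummaryEntry(long indexFileOffset, long dataFileOffset)
    {
        this.indexFileOffset = indexFileOffset;
        this.dataFileOffset = dataFileOffset;
    }
}

final class EarlyOpenBound
{
    /**
     * Steps 1-4: exclude summary entries past the flushed length of the index
     * file, take the second-to-last survivor, and reject it if it points past
     * the flushed length of the data file. Returns null if no safe bound
     * exists yet.
     */
    static IndexSummaryEntry safeUpperBound(List<IndexSummaryEntry> summary,
                                            long flushedIndexLength,
                                            long flushedDataLength)
    {
        int lastWithinIndex = -1;
        for (int i = 0; i < summary.size(); i++)
        {
            if (summary.get(i).indexFileOffset >= flushedIndexLength)
                break;
            lastWithinIndex = i;
        }

        // need at least two entries within the flushed index to take the second-to-last
        if (lastWithinIndex < 1)
            return null;

        IndexSummaryEntry candidate = summary.get(lastWithinIndex - 1);

        // the (unlikely) case where the record points beyond the flushed data file
        if (candidate.dataFileOffset >= flushedDataLength)
            return null;

        // the caller can now open the partially written sstable with this upper bound
        return candidate;
    }
}
{code}
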
> Some complications:
> # we must keep a running copy of the compression metadata to reopen with;
> # we need to be able to replace an sstable with itself, but with a different 
> lower bound;
> # we need to drop the old page cache only when readers have finished.
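
(Also illustrative only: complication 3 amounts to reference counting the old
sstable. A minimal sketch of that pattern, with dropPageCache() standing in for
whatever fadvise-style call the real code makes, might look like this.)

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical handle for an sstable that has been superseded by the new,
// partially written one; none of these names are Cassandra's.
final class ObsoleteSSTableHandle
{
    // starts at 1: the reference held by the compaction/replacement itself
    private final AtomicInteger references = new AtomicInteger(1);

    /** A reader acquires the handle before use; returns false if it is already released. */
    boolean tryRef()
    {
        while (true)
        {
            int refs = references.get();
            if (refs == 0)
                return false;
            if (references.compareAndSet(refs, refs + 1))
                return true;
        }
    }

    /** Called by each reader when finished, and once by the replacement when the old sstable is no longer needed. */
    void unref()
    {
        if (references.decrementAndGet() == 0)
            dropPageCache();
    }

    private void dropPageCache()
    {
        // placeholder: here the real code would invalidate the old sstable's
        // cached regions (e.g. posix_fadvise DONTNEED), only now that no
        // reader can still be touching them
        System.out.println("dropping page cache for obsolete sstable");
    }
}
{code}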



