[ 
https://issues.apache.org/jira/browse/CASSANDRA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-1608:
--------------------------------------

    Attachment: 1608-v2.txt

Thanks, Ben. This is promising!

I pretty much concentrated on the Manifest, which I moved to a top-level 
class.  (Can you summarize what is different in LDBCompactionTask?)
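Roughly the shape I'm thinking of for the standalone Manifest -- just a sketch, 
the names and the String stand-in for SSTableReader are illustrative, not what 
the patch actually has:

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    public class Manifest
    {
        // generations.get(L) holds the sstables in level L; for L > 0 they
        // should be non-overlapping and roughly fixed-size
        private final List<List<String>> generations = new ArrayList<List<String>>();

        public synchronized void add(String sstable)
        {
            // new (and streamed) sstables start in level 0
        }

        public synchronized void promote(Collection<String> removed, Collection<String> added)
        {
            // replace the compaction inputs with its outputs, one level up
        }

        public synchronized Collection<String> getCompactionCandidates()
        {
            return new ArrayList<String>();
        }
    }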

I don't think trying to build levels out of non-leveled data is useful.  Even 
if you tried all permutations, the odds of ending up with something useful are 
infinitesimally small.  I'd suggest instead adding a startup hook to 
CompactionStrategy: if we start up w/ unleveled SSTables, we level them before 
doing anything else.  (This will take a while, but not as long as leveling 
everything naively would, since we can just do a single 
compaction-of-everything, spitting out non-overlapping sstables of the desired 
size, and set those to the appropriate level.)
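To make that concrete, the level the startup compaction's output lands in 
could be computed something like this (sketch only; maxSSTableSizeBytes and 
the fanout of 10 are assumptions carried over from the leveldb scheme):

    public class StartupLeveling
    {
        // Smallest level that can hold totalBytes of fixed-size sstables,
        // assuming level L holds up to 10^L sstables of the target size.
        static int levelFor(long totalBytes, long maxSSTableSizeBytes)
        {
            long sstables = (totalBytes + maxSSTableSizeBytes - 1) / maxSSTableSizeBytes;
            int level = 0;
            long capacity = 1;
            while (capacity < sstables)
            {
                level++;
                capacity *= 10;
            }
            return level;
        }
    }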

Updated DataTracker to add streamed sstables to level 0.  DataTracker public 
API probably needs a more thorough look though to see if we're missing 
anything. (Speaking of streaming, I think we do need to go by data size not 
sstable count b/c streamed sstables from repair can be arbitrarily large or 
small.)
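What I mean by going by data size, in sketch form (again assuming the 
10x-per-level sizing; the method and parameter names here are made up):

    import java.util.List;

    public class LevelScore
    {
        // How "full" a level is, measured in bytes rather than sstable count,
        // since streamed sstables from repair can be arbitrarily large or small.
        static double score(int level, List<Long> sstableSizes, long maxSSTableSizeBytes)
        {
            long bytes = 0;
            for (long size : sstableSizes)
                bytes += size;
            double maxBytes = Math.pow(10, level) * maxSSTableSizeBytes;
            return bytes / maxBytes;
        }
    }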

In promote, do we need to check for all the removed ones being on the same 
level?  I can't think of a scenario where we're not merging from multiple 
levels.  If that's the case, I'd change that check to an assert.  (In fact 
there should be exactly two levels involved, right?)
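i.e., something along these lines in promote -- illustrative only, using 
whatever mapping from sstable to level the manifest actually keeps:

    import java.util.Collection;
    import java.util.Map;
    import java.util.Set;
    import java.util.TreeSet;

    public class PromoteCheck
    {
        // The sstables being replaced by a compaction should come from at
        // most two (adjacent) levels; anything else indicates a bug.
        static void assertTwoLevels(Collection<String> removed, Map<String, Integer> levelOf)
        {
            Set<Integer> levels = new TreeSet<Integer>();
            for (String sstable : removed)
                levels.add(levelOf.get(sstable));
            assert levels.size() <= 2 : "expected at most two levels, saw " + levels;
        }
    }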

Did some surgery on getCompactionCandidates.  Generally renamed things to be 
more succinct.  Feels like getCompactionCandidates should do lower levels 
before doing higher levels?
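By "lower levels first" I mean something like this (sketchy, with the scoring 
itself elided):

    import java.util.List;

    public class CandidatePick
    {
        // Walk levels in ascending order and compact the lowest one that is
        // over its size threshold, so the L0 backlog gets drained before we
        // spend I/O on the bigger levels.
        static int pickLevel(List<Double> scoreByLevel)
        {
            for (int level = 0; level < scoreByLevel.size(); level++)
                if (scoreByLevel.get(level) > 1.0)
                    return level;
            return -1; // nothing over threshold
        }
    }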

We'll also need to think about which parts of the strategy/manifest need to be 
thread-safe. (All of them?)  Should definitely document this in AbstractCS.


> Redesigned Compaction
> ---------------------
>
>                 Key: CASSANDRA-1608
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1608
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Chris Goffinet
>         Attachments: 0001-leveldb-style-compaction.patch, 1608-v2.txt
>
>
> After seeing the I/O issues in CASSANDRA-1470, I've been doing some more 
> thinking on this subject that I wanted to lay out.
> I propose we redo the concept of how compaction works in Cassandra. At the 
> moment, compaction is kicked off based on a write access pattern, not read 
> access pattern. In most cases, you want the opposite. You want to be able to 
> track how well each SSTable is performing in the system. If we were to keep 
> statistics in-memory of each SSTable, prioritize them based on most accessed, 
> and bloom filter hit/miss ratios, we could intelligently group sstables that 
> are being read most often and schedule them for compaction. We could also 
> schedule lower priority maintenance on SSTables not often accessed.
> I also propose we limit the size of each SSTable to a fixed size, which gives 
> us the ability to better utilize our bloom filters in a predictable manner. 
> At the moment after a certain size, the bloom filters become less reliable. 
> This would also allow us to group data most accessed. Currently the size of 
> an SSTable can grow to a point where large portions of the data might not 
> actually be accessed as often.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
