[ https://issues.apache.org/jira/browse/CASSANDRA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12920353#action_12920353 ]

Stu Hood commented on CASSANDRA-1608:
-------------------------------------

One way to provide locality of reference for sstables would be to persist 
summaries of individual rows which 'supersede' the content in sstables written 
before them. For example, if you have five sstables containing key 'A', you 
would create a new sstable #6 containing all content for 'A', and marked as 
superseding for 'A'. Then, since you have a full copy of data for 'A', you no 
longer need to read from the other sstables.
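To make that concrete, here is a rough sketch of how a reader could use such
per-key supersede marks to skip older sstables. All of the class names are
hypothetical stand-ins for illustration, not Cassandra internals:

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical stand-in for an sstable: a generation number plus the
// per-key supersede marks described above.
class SSTable {
    final int generation;
    final Map<String, Set<Integer>> supersedes; // key -> generations superseded for that key

    SSTable(int generation, Map<String, Set<Integer>> supersedes) {
        this.generation = generation;
        this.supersedes = supersedes;
    }
}

class ReadPath {
    // If sstable #6 is marked as superseding #1..#5 for key 'A', a read
    // of 'A' only touches #6.
    static List<SSTable> sstablesToRead(String key, List<SSTable> all) {
        Set<Integer> skip = new HashSet<>();
        for (SSTable t : all)
            skip.addAll(t.supersedes.getOrDefault(key, Set.of()));
        List<SSTable> live = new ArrayList<>();
        for (SSTable t : all)
            if (!skip.contains(t.generation))
                live.add(t);
        return live;
    }
}
{code}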

But how would a reader know which sstables to check for a particular key? We 
have sstable generation numbers, but they are currently only used as unique 
ids. Three approaches:
# The 'superseding' mark for a particular key could indicate which generations 
it supersedes, so we could skip reads to the superseded sstables
# If we refactored the system to read memtables/sstables in generation order, 
we could stop looking for content for A when we reached an sstable that 
superseded older sstables
# When superseding a value, we could delete it from the bloom filters of the 
superseded sstables (see the counting-filter sketch after this list). While 
hella cool, this solution would be a big change:
** If an sstable's bloom filter reached a threshold of emptiness, we could 
perform a single-sstable compaction filtered down to the values that still match
** Requires a bloom filter that supports deletes (ours doesn't yet)
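A minimal sketch of the third option's prerequisite: a counting bloom filter, 
the standard technique for a filter that supports deletes. The hashing and 
sizing here are simplified assumptions, not Cassandra's actual filter:

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Counting bloom filter: one small counter per slot instead of one bit,
// so membership can be removed as well as added.
class CountingBloomFilter {
    private final int[] counts;   // real filters pack ~4-bit counters
    private final int hashCount;

    CountingBloomFilter(int slots, int hashCount) {
        this.counts = new int[slots];
        this.hashCount = hashCount;
    }

    void add(String key) {
        for (int i = 0; i < hashCount; i++)
            counts[slot(key, i)]++;
    }

    // Deleting a superseded key decrements its slots; other keys hashing
    // to the same slots keep their counts and are unaffected.
    void remove(String key) {
        for (int i = 0; i < hashCount; i++) {
            int s = slot(key, i);
            if (counts[s] > 0) counts[s]--;
        }
    }

    boolean mightContain(String key) {
        for (int i = 0; i < hashCount; i++)
            if (counts[slot(key, i)] == 0) return false;
        return true;
    }

    // The 'threshold of emptiness' above: the fraction of zeroed slots,
    // usable to decide when a single-sstable compaction would pay off.
    double emptiness() {
        int zero = 0;
        for (int c : counts) if (c == 0) zero++;
        return (double) zero / counts.length;
    }

    private int slot(String key, int i) {
        CRC32 crc = new CRC32();
        crc.update((i + ":" + key).getBytes(StandardCharsets.UTF_8));
        return (int) (crc.getValue() % counts.length);
    }
}
{code}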

> Redesigned Compaction
> ---------------------
>
>                 Key: CASSANDRA-1608
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1608
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Chris Goffinet
>             Fix For: 0.7.1
>
>
> After seeing the I/O issues in CASSANDRA-1470, I've been doing some more 
> thinking on this subject that I want to lay out.
> I propose we redo the concept of how compaction works in Cassandra. At the 
> moment, compaction is kicked off based on write access patterns, not read 
> access patterns. In most cases, you want the opposite: you want to be able 
> to track how well each SSTable is performing in the system. If we kept 
> in-memory statistics for each SSTable and prioritized them by access 
> frequency and bloom filter hit/miss ratios, we could intelligently group the 
> sstables that are read most often and schedule them for compaction. We could 
> also schedule lower-priority maintenance on SSTables that are rarely accessed.
> I also propose we limit each SSTable to a fixed size, which gives us the 
> ability to utilize our bloom filters in a predictable manner; at the moment, 
> past a certain size the bloom filters become less reliable. A size limit 
> would also let us group the most-accessed data. Currently an SSTable can 
> grow to the point where large portions of its data are rarely accessed.
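As a rough illustration of the per-SSTable read statistics and prioritization 
described in the quoted proposal above (the SSTableStats type and the scoring 
formula are illustrative assumptions, not the actual design):

{code:java}
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.Collectors;

// In-memory read statistics for one sstable.
class SSTableStats {
    final String name;
    final LongAdder reads = new LongAdder();               // read requests routed here
    final LongAdder bloomFalsePositives = new LongAdder(); // filter said yes, row absent

    SSTableStats(String name) { this.name = name; }

    // Hotter sstables, and sstables wasting I/O on bloom false positives,
    // score higher and so make better compaction candidates.
    double compactionScore() {
        long r = reads.sum();
        if (r == 0) return 0.0;
        return r * (1.0 + (double) bloomFalsePositives.sum() / r);
    }
}

class CompactionPrioritizer {
    // Group the hottest sstables for compaction; the rest can wait for
    // low-priority maintenance.
    static List<SSTableStats> hottest(List<SSTableStats> all, int n) {
        return all.stream()
                .sorted(Comparator.comparingDouble(SSTableStats::compactionScore).reversed())
                .limit(n)
                .collect(Collectors.toList());
    }
}
{code}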
