[ https://issues.apache.org/jira/browse/CASSANDRA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13071489#comment-13071489 ]

Jonathan Ellis commented on CASSANDRA-1608:
-------------------------------------------

bq. The interval tree does a good job here making sure that bloom filters are 
only queried for those SSTables that fall into the queried range

Is it even worth keeping bloom filters around with such a drastic reduction in 
worst-case number of sstables to check (for read path too)?
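
For context, a minimal sketch of the interval-tree lookup under discussion. 
The names here are illustrative, not Cassandra's actual IntervalTree API: each 
sstable is indexed by its token range, and a point query returns only the 
sstables whose ranges cover the requested key, so only those bloom filters are 
ever consulted.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical stand-in for an sstable's token coverage; names illustrative.
    class SSTableInterval {
        final long min, max;   // inclusive token bounds covered by the sstable
        final String name;     // which sstable on disk
        SSTableInterval(long min, long max, String name) {
            this.min = min; this.max = max; this.name = name;
        }
    }

    // Minimal centered interval tree: rebuilt when the sstable set changes
    // (flush/compaction), queried with the requested token on every read.
    class IntervalTree {
        private final long center;
        private final IntervalTree left, right;
        private final List<SSTableInterval> byMin, byMax; // intervals spanning center

        IntervalTree(List<SSTableInterval> intervals) {
            // Use the median endpoint as the center; at least one interval
            // always spans its own endpoint, so the recursion terminates.
            List<Long> points = new ArrayList<>();
            for (SSTableInterval i : intervals) { points.add(i.min); points.add(i.max); }
            points.sort(null);
            center = points.get(points.size() / 2);

            List<SSTableInterval> l = new ArrayList<>(), r = new ArrayList<>(), c = new ArrayList<>();
            for (SSTableInterval i : intervals) {
                if (i.max < center) l.add(i);
                else if (i.min > center) r.add(i);
                else c.add(i);
            }
            left = l.isEmpty() ? null : new IntervalTree(l);
            right = r.isEmpty() ? null : new IntervalTree(r);
            byMin = new ArrayList<>(c);
            byMin.sort(Comparator.comparingLong((SSTableInterval i) -> i.min));
            byMax = new ArrayList<>(c);
            byMax.sort((a, b) -> Long.compare(b.max, a.max));
        }

        // Collect every sstable whose token range contains the queried token.
        void search(long token, List<SSTableInterval> out) {
            if (token < center) {
                for (SSTableInterval i : byMin) { if (i.min > token) break; out.add(i); }
                if (left != null) left.search(token, out);
            } else {
                for (SSTableInterval i : byMax) { if (i.max < token) break; out.add(i); }
                if (right != null) right.search(token, out);
            }
        }

        public static void main(String[] args) {
            IntervalTree tree = new IntervalTree(Arrays.asList(
                new SSTableInterval(0, 100, "sstable-1"),
                new SSTableInterval(50, 150, "sstable-2"),
                new SSTableInterval(200, 300, "sstable-3")));
            List<SSTableInterval> hits = new ArrayList<>();
            tree.search(75, hits);
            // Only sstable-1 and sstable-2 cover token 75, so only their
            // bloom filters would ever be consulted for this read.
            for (SSTableInterval i : hits) System.out.println(i.name);
        }
    }

With non-overlapping sstables within a level, a point read intersects at most 
one sstable per level, which is what makes the question above worth asking.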

bq. Compactions do back up

Not a deal breaker for me -- it's not hard to get old-style compactions to back 
up under sustained writes, either.  Given a choice between "block writes until 
compactions catch up" or "let them back up and let the operator deal with it 
how he will," I'll take the latter.

bq. flush size to 64MB and the leveled SSTable size to anywhere between 5-10MB

I'd like to have a better understanding of what the tradeoff is between making 
these settings larger/smaller.  Can we make these one-size-fits-all?
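
As a back-of-the-envelope sketch of one axis of that tradeoff, assuming the 
leveldb-style layout where each level holds roughly 10x the previous one (the 
patch's actual growth factor and defaults may differ): smaller sstables mean 
more files and more bloom filters for the same data, while larger ones make 
each promotion rewrite more data at once.

    // Back-of-the-envelope arithmetic, not Cassandra code. Assumes level N
    // holds roughly 10x the data of level N-1; constants are illustrative.
    class LevelSizing {
        public static void main(String[] args) {
            long datasetMb = 100_000;  // e.g. 100 GB of live data
            for (long sstableMb : new long[] { 5, 10 }) {
                System.out.printf("sstable size = %d MB%n", sstableMb);
                long remainingMb = datasetMb;
                long levelCapacityMb = sstableMb * 10;  // L1 capacity
                for (int level = 1; remainingMb > 0; level++) {
                    long inLevelMb = Math.min(remainingMb, levelCapacityMb);
                    System.out.printf("  L%d: %,d files (%,d MB)%n",
                                      level, inLevelMb / sstableMb, inLevelMb);
                    remainingMb -= inLevelMb;
                    levelCapacityMb *= 10;
                }
            }
        }
    }

At 5MB per sstable, 100GB of data works out to roughly 20,000 files; at 10MB, 
about half that.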

bq. For datasets that frequently overwrite old data that has already been 
flushed to disk, there is the potential for substantial de-duplication of data

Yes, this is a big win.  Even people who will never fill up half their disk 
complain about the worst-case major compaction scenario for old-style 
compaction.


> Redesigned Compaction
> ---------------------
>
>                 Key: CASSANDRA-1608
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1608
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Chris Goffinet
>            Assignee: Benjamin Coverston
>         Attachments: 1608-v2.txt, 1608-v8.txt, 1609-v10.txt
>
>
> After seeing the I/O issues in CASSANDRA-1470, I've been doing some more 
> thinking on this subject that I wanted to lay out.
> I propose we redo the concept of how compaction works in Cassandra. At the 
> moment, compaction is kicked off based on a write access pattern, not a read 
> access pattern. In most cases, you want the opposite. You want to be able to 
> track how well each SSTable is performing in the system. If we were to keep 
> in-memory statistics for each SSTable and prioritize them by access frequency 
> and bloom filter hit/miss ratios, we could intelligently group the sstables 
> being read most often and schedule them for compaction. We could also 
> schedule lower-priority maintenance on SSTables that are not often accessed.
> I also propose we limit the size of each SSTable to a fixed size, which gives 
> us the ability to better utilize our bloom filters in a predictable manner. 
> At the moment, after a certain size, the bloom filters become less reliable. 
> This would also allow us to group the most accessed data together. Currently 
> the size of an SSTable can grow to a point where large portions of the data 
> might not actually be accessed as often.
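
A minimal sketch of the statistics-driven prioritization the description 
proposes. The names and the scoring formula are hypothetical (the ticket does 
not specify one): keep per-sstable read and bloom-filter-miss counters in 
memory, then rank compaction candidates so the hottest sstables with the least 
reliable filters come first.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative only: in-memory per-sstable read statistics, as the ticket
    // proposes keeping. Counters are atomic so read threads can bump them
    // without locking.
    class SSTableStats {
        final String name;
        final AtomicLong reads = new AtomicLong();
        final AtomicLong bloomFalsePositives = new AtomicLong();
        SSTableStats(String name) { this.name = name; }

        // Fraction of reads where the bloom filter said "maybe present" but
        // the key turned out to be absent.
        double falsePositiveRatio() {
            long r = reads.get();
            return r == 0 ? 0.0 : (double) bloomFalsePositives.get() / r;
        }
    }

    class CompactionPrioritizer {
        // Hypothetical score: read volume, weighted up when the sstable's
        // bloom filter is misleading us.
        static double score(SSTableStats s) {
            return s.reads.get() * (1.0 + s.falsePositiveRatio());
        }

        // Hottest, least-reliable sstables first; cold ones sink to the end.
        static List<SSTableStats> prioritize(List<SSTableStats> all) {
            List<SSTableStats> ranked = new ArrayList<>(all);
            ranked.sort(Comparator.comparingDouble(CompactionPrioritizer::score).reversed());
            return ranked;
        }
    }

Cold sstables naturally sink to the bottom of the ranking, which is where the 
lower-priority maintenance mentioned above would pick them up.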
