[ https://issues.apache.org/jira/browse/CASSANDRA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13051097#comment-13051097 ]

Jonathan Ellis commented on CASSANDRA-1608:
-------------------------------------------

I checked what leveldb actually does: 
http://www.google.com/codesearch#mHLldehqYMA/trunk/db/version_set.cc, methods 
Finalize and PickCompaction.

What it does is compute a score for each level, as the ratio of bytes in 
that level to the desired bytes for that level.  For level 0, it computes 
files / desired files instead.  (Apparently leveldb doesn't have row-level 
bloom filters, so merging on reads is extra painful.*)  The level with the 
highest score is compacted.
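
In Cassandra terms, that scoring would look something like the sketch 
below.  The trigger and per-level targets are placeholder numbers (leveldb 
uses 4 files and 10MB * 10^(level-1) respectively), not proposals, and 
SSTableReader stands in for o.a.c.io.sstable.SSTableReader:

    import java.util.List;

    class LevelScoring
    {
        static final int LEVEL0_FILE_COUNT_TRIGGER = 4;

        static long maxBytesForLevel(int level)
        {
            long bytes = 10L * 1024 * 1024;   // 10MB target for L1
            for (int i = 1; i < level; i++)
                bytes *= 10;                  // 10x per level after that
            return bytes;
        }

        static double score(int level, List<SSTableReader> sstables)
        {
            // L0 sstables overlap, so every read consults all of them;
            // score L0 by file count instead of bytes
            if (level == 0)
                return (double) sstables.size() / LEVEL0_FILE_COUNT_TRIGGER;

            long totalBytes = 0;
            for (SSTableReader sstable : sstables)
                totalBytes += sstable.length();
            return (double) totalBytes / maxBytesForLevel(level);
        }
    }

Whichever level scores highest is compacted; if every level scores under 
1.0 there is nothing to do.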

When compacting L0, the only special-casing leveldb does is that after 
picking the primary L0 file to compact, it also checks the other L0 files 
for overlap and pulls the overlapping ones into the compaction.  (Again, 
we can expect this to usually if not always be "all L0 files," but it's 
not much more code than an "always compact all L0 files" special case 
would be, so why not avoid some i/o if we can.)

*I'm pretty sure that (a) we don't need to special-case for this reason, 
and (b) we should standardize on bytes instead of file count.  File count 
is too subject to inaccuracy, both from streamed files as mentioned and, 
on the later levels, from the fact that compaction results are not going 
to be clean: if we merge one sstable of size S from L with two of size S 
from L+1, odds are poor that we'll end up with merged bytes divisible by S 
or even very close to it.  The overwhelming likelihood is that you end up 
with two sstables of size S and one of size strictly between 0 and S.  Do 
enough of these and using sstable count as an approximation for size gets 
pretty inaccurate.  Fortunately, a method to sum SSTableReader.length() 
would be easy enough to write instead.
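
For instance:

    // sum on-disk data bytes for a set of sstables, so levels can be
    // scored by size instead of file count
    static long totalBytes(Iterable<SSTableReader> sstables)
    {
        long sum = 0;
        for (SSTableReader sstable : sstables)
            sum += sstable.length();
        return sum;
    }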


> Redesigned Compaction
> ---------------------
>
>                 Key: CASSANDRA-1608
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1608
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Chris Goffinet
>         Attachments: 0001-leveldb-style-compaction.patch, 1608-v2.txt
>
>
> After seeing the I/O issues in CASSANDRA-1470, I've been doing some more 
> thinking on this subject that I wanted to lay out.
>
> I propose we redo the concept of how compaction works in Cassandra. At 
> the moment, compaction is kicked off based on the write access pattern, 
> not the read access pattern. In most cases, you want the opposite: you 
> want to be able to track how well each SSTable is performing in the 
> system. If we kept in-memory statistics for each SSTable and prioritized 
> them by access frequency and bloom filter hit/miss ratio, we could 
> intelligently group the sstables that are read most often and schedule 
> them for compaction (a rough sketch of such statistics follows below). We 
> could also schedule lower-priority maintenance on SSTables that are not 
> often accessed.
>
> I also propose we limit each SSTable to a fixed size, which gives us the 
> ability to better utilize our bloom filters in a predictable manner: 
> beyond a certain size, the bloom filters become less reliable. This would 
> also allow us to group the most-accessed data. Currently an SSTable can 
> grow to a point where large portions of its data might not actually be 
> accessed very often.
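
For illustration, the kind of per-sstable read statistics the quoted 
description proposes might look like the sketch below.  All names are 
hypothetical; none of this is existing Cassandra API:

    import java.util.concurrent.atomic.AtomicLong;

    // hypothetical per-sstable read statistics for prioritizing compaction
    class SSTableReadStats
    {
        private final AtomicLong reads = new AtomicLong();
        private final AtomicLong bloomTruePositives = new AtomicLong();
        private final AtomicLong bloomFalsePositives = new AtomicLong();

        void recordRead(boolean bloomSaidPresent, boolean rowWasPresent)
        {
            reads.incrementAndGet();
            if (bloomSaidPresent)
            {
                if (rowWasPresent)
                    bloomTruePositives.incrementAndGet();
                else
                    bloomFalsePositives.incrementAndGet();
            }
        }

        // hot sstables with unreliable bloom filters sort first
        double compactionPriority()
        {
            long positives = bloomTruePositives.get()
                           + bloomFalsePositives.get();
            double fpRate = positives == 0
                          ? 0
                          : (double) bloomFalsePositives.get() / positives;
            return reads.get() * (1 + fpRate);
        }
    }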
