[ https://issues.apache.org/jira/browse/HBASE-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15127052#comment-15127052 ]
Clara Xiong commented on HBASE-15181:
-------------------------------------

Not really. The algorithm depends on the max timestamp of each file. If a bulk load file has a time span significantly bigger than the window, scan performance will suffer from the extra data being scanned. But once the files are moved to larger windows on higher tiers, the penalty decreases and finally disappears.

> A simple implementation of date based tiered compaction
> -------------------------------------------------------
>
>                 Key: HBASE-15181
>                 URL: https://issues.apache.org/jira/browse/HBASE-15181
>             Project: HBase
>          Issue Type: New Feature
>          Components: Compaction
>            Reporter: Clara Xiong
>            Assignee: Clara Xiong
>             Fix For: 2.0.0
>
>         Attachments: HBASE-15181-v1.patch, HBASE-15181-v2.patch
>
>
> This is a simple implementation of date-based tiered compaction similar to
> Cassandra's, for the following benefits:
> 1. Improve date-range-based scans by structuring store files in a date-based
> tiered layout.
> 2. Reduce compaction overhead.
> 3. Improve TTL efficiency.
> It is a perfect fit for use cases that:
> 1. have mostly date-based data writes and scans, with a focus on the most
> recent data.
> 2. never or rarely delete data.
> Out-of-order writes are handled gracefully, so the data will still get to the
> right store file for time-range scans, and re-compaction with existing store
> files in the same time window is handled by ExploringCompactionPolicy.
> Time-range overlap among store files is tolerated and its performance
> impact is minimized.
> Configuration can be set in hbase-site.xml or overridden at the per-table or
> per-column-family level through the hbase shell.
> The design spec is at
> https://docs.google.com/document/d/1_AmlNb2N8Us1xICsTeGDLKIqL6T-oHoRLZ323MG_uy8/edit?usp=sharing
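For the per-table / per-column-family override mentioned in the description, a minimal sketch via the hbase shell might look like the following. The property names and the DateTieredCompactionPolicy class path are taken from the feature as later documented in the HBase reference guide, so the exact keys accepted by this patch revision may differ; the table name 't1', family 'cf1', and window values are purely illustrative.

    hbase> alter 't1', {NAME => 'cf1', CONFIGURATION => {
        'hbase.hstore.defaultengine.compactionpolicy.class' =>
            'org.apache.hadoop.hbase.regionserver.compactions.DateTieredCompactionPolicy',
        'hbase.hstore.compaction.date.tiered.base.window.millis' => '21600000',
        'hbase.hstore.compaction.date.tiered.windows.per.tier' => '4'}}

Here 21600000 ms is a 6-hour base window, and each higher tier covers 4 windows of the tier below. The same properties can be placed in hbase-site.xml to apply cluster-wide, but a per-table or per-column-family override keeps the policy limited to data with a date-based write and scan pattern.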