[ https://issues.apache.org/jira/browse/HBASE-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13617951#comment-13617951 ]

Sergey Shelukhin commented on HBASE-7667:
-----------------------------------------

I did a c1.xlarge test (with default, 3, 10 and 25 stripes, 2 times each). The
results for the different stripe configurations are very consistent across
both runs.
Compared to the m1.large test, the positive effect of increasing the number of
stripes on write speed is smaller.

For this load, the sweet spot appears to be around 10-12 stripes based on the
two tests. 3 stripes produce large compactions similar to the default (well,
not as large); 25 stripes produce too many small compactions, so the
select-compact loop cannot keep up with the number of files produced - in the
"Iteration 2" test described in the doc, at least some stripes in the
25-stripe case always have 6-8 small files (as they get compacted, other
stripes accumulate more files). This appears to be the limiting factor on
increasing the number of stripes.
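
To illustrate the mechanism, here's a back-of-envelope model of the
select-compact loop (a rough sketch; StripeBacklogModel and all its numbers
are invented for illustration, not taken from the tests). Each L0 compaction
fans one small file out into every stripe, so file arrivals scale with the
stripe count, while the serial loop pays a roughly fixed per-compaction
overhead however little data a small compaction moves:

  // All numbers below are assumptions for illustration only.
  public class StripeBacklogModel {
    public static void main(String[] args) {
      double l0CompactionsPerMin = 2.0; // assumed flush/L0-compaction rate
      double fixedOverheadSec = 5.0;    // assumed per-compaction setup cost
      double secPerMb = 0.05;           // assumed compaction throughput
      double mbPerL0File = 128.0;       // assumed flush size

      for (int stripes : new int[] {3, 10, 25}) {
        // Each L0 compaction writes one file into every stripe.
        double filesInPerMin = l0CompactionsPerMin * stripes;
        // A stripe file holds ~1/stripes of an L0 file; compact 4 at a time.
        double mbPerCompaction = 4 * mbPerL0File / stripes;
        double secPerCompaction = fixedOverheadSec + mbPerCompaction * secPerMb;
        double filesOutPerMin = 60.0 / secPerCompaction * 4;
        System.out.printf("%2d stripes: %.0f files/min in, %.0f files/min out%n",
            stripes, filesInPerMin, filesOutPerMin);
      }
    }
  }

With these made-up constants the loop keeps up at 3 and 10 stripes but falls
behind at 25, which qualitatively matches the 6-8 file pileup above.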
I think the main point is that, for the count-based scheme, there's perf
parity (writes are generally slightly slower, reads slightly faster), despite
existing and fixable write amplification; and there's a reduction in
variability, which was the goal. I will try to devise a more realistic read
workload, but I don't think it should change much given the above.
For sequential data, the size-based stripe scheme reduces compactions, as
expected, even despite L0.

Next steps:
1) On existing data I want to correlate read/write perf with compactions. It
is interesting that the stripe scheme has slower writes in general, as Jimmy
has noted - it touches the read path but nothing at all on the write path, so
the slowdown is probably I/O related, or stresses some interaction between
the existing write and compaction paths.
2) Run tests for more realistic read workloads (and parallel reads and
writes), possibly without using LoadTestTool. Optional-ish.
3) Clean up integration test patch in HBASE-8000.
4) Review and commit? :)

5) Get rid of L0?
                
> Support stripe compaction
> -------------------------
>
>                 Key: HBASE-7667
>                 URL: https://issues.apache.org/jira/browse/HBASE-7667
>             Project: HBase
>          Issue Type: New Feature
>          Components: Compaction
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>         Attachments: Stripe compaction perf evaluation.pdf, Stripe compaction 
> perf evaluation.pdf, Stripe compactions.pdf, Stripe compactions.pdf, Stripe 
> compactions.pdf
>
>
> So I was thinking about having many regions as the way to make compactions
> more manageable, and writing the LevelDB doc about how LevelDB range overlap
> and data mixing break seqNum sorting, and discussing it with Jimmy, Matteo
> and Ted, and thinking about how to avoid the LevelDB I/O multiplication
> factor.
> And I suggest the following idea; let's call it stripe compactions. It's a
> mix between LevelDB ideas and having many small regions.
> It gives us a subset of the benefits of many regions (wrt reads and
> compactions) without many of the drawbacks (management overhead and the
> current memstore/etc. limitations).
> It also doesn't break seqNum-based file sorting for any one key.
> It works like this.
> The region key space is separated into a configurable number of
> fixed-boundary stripes (determined the first time we stripe the data, see
> below).
> All the data from memstores is written to normal files with all keys present
> (not striped), similar to L0 in LevelDB, or to the current files.
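> As a minimal sketch of the bookkeeping this needs (illustrative names, plain
> Java, not actual patch code; Arrays.compare needs Java 9+): the sorted
> boundary keys locate a row's stripe by binary search, and a read then needs
> only that stripe's files plus L0:
>
>   import java.util.*;
>
>   class StripeIndex {
>     static final Comparator<byte[]> CMP = Arrays::compare;
>
>     private final List<byte[]> boundaries;        // sorted, stripeCount - 1
>     private final List<List<String>> stripeFiles; // files per stripe
>     private final List<String> l0Files;           // unstriped flush output
>
>     StripeIndex(List<byte[]> boundaries, List<List<String>> stripeFiles,
>         List<String> l0Files) {
>       this.boundaries = boundaries;
>       this.stripeFiles = stripeFiles;
>       this.l0Files = l0Files;
>     }
>
>     // A boundary key belongs to the stripe on its right.
>     int stripeFor(byte[] rowKey) {
>       int i = Collections.binarySearch(boundaries, rowKey, CMP);
>       return i >= 0 ? i + 1 : -(i + 1);
>     }
>
>     List<String> filesForGet(byte[] rowKey) {
>       List<String> files = new ArrayList<>(l0Files); // L0 spans all keys
>       files.addAll(stripeFiles.get(stripeFor(rowKey)));
>       return files;
>     }
>   }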
> The compaction policy does 3 types of compactions.
> First is the L0 compaction, which takes all L0 files and breaks them down by
> stripe. It may be optimized by adding more small files from different
> stripes, but the main logical outcome is that there are no more L0 files and
> all data is striped.
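> A minimal sketch of that rewrite (again illustrative, with in-memory lists
> standing in for the per-stripe HFile writers): do a sorted merge over the L0
> files and route each cell to the writer of whatever stripe its row falls
> into; since the merge is sorted, the target stripe only ever moves right:
>
>   import java.util.*;
>
>   class L0Splitter {
>     // mergedL0Scan: a sorted merge over all L0 files (row key -> value).
>     static List<List<Map.Entry<byte[], byte[]>>> splitByStripe(
>         Iterator<Map.Entry<byte[], byte[]>> mergedL0Scan,
>         List<byte[]> boundaries) {
>       List<List<Map.Entry<byte[], byte[]>>> writers = new ArrayList<>();
>       for (int i = 0; i <= boundaries.size(); i++) {
>         writers.add(new ArrayList<>());
>       }
>       int stripe = 0;
>       while (mergedL0Scan.hasNext()) {
>         Map.Entry<byte[], byte[]> cell = mergedL0Scan.next();
>         while (stripe < boundaries.size()
>             && Arrays.compare(cell.getKey(), boundaries.get(stripe)) >= 0) {
>           stripe++; // crossed a boundary: switch to the next stripe's writer
>         }
>         writers.get(stripe).add(cell);
>       }
>       return writers; // one output "file" per stripe; L0 inputs can be dropped
>     }
>   }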
> Second is exactly like the current compaction, but compacting one single
> stripe. In the future, nothing prevents us from applying compaction rules
> and compacting part of the stripe (e.g. similar to the current policy with
> ratios and stuff, tiers, whatever), but for the first cut I'd argue we let
> it "major compact" the entire stripe. Or just have the ratio and no more
> complexity.
> Finally, the third addresses the concern of the fixed boundaries causing
> stripes to be very unbalanced.
> It's exactly like the 2nd, except it takes 2+ adjacent stripes and writes
> the results out with different boundaries.
> There's a tradeoff here - if we always take 2 adjacent stripes, compactions
> will be smaller, but rebalancing will take a ridiculous amount of I/O.
> If we take many stripes, we are essentially getting into the
> epic-major-compaction problem again. Some heuristics will have to be in
> place.
> In general, if we initially let L0 grow before determining the stripes, we
> will get better boundaries.
> Also, unless the imbalance is really large, we don't really need to
> rebalance.
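> A minimal sketch of picking the new boundaries when rewriting adjacent
> stripes (illustrative; assumes we can approximate bytes per small key-range
> chunk, e.g. from block indexes): accumulate chunk sizes in key order and
> drop a boundary each time another 1/newStripes share of the combined bytes
> has been passed:
>
>   import java.util.Arrays;
>
>   class Rebalancer {
>     // Returns chunk positions after which to place the new boundaries.
>     static int[] pickBoundaryChunks(long[] chunkSizes, int newStripes) {
>       long total = Arrays.stream(chunkSizes).sum();
>       int[] boundaries = new int[newStripes - 1];
>       long running = 0;
>       int next = 0;
>       for (int i = 0; i < chunkSizes.length; i++) {
>         running += chunkSizes[i];
>         while (next < boundaries.length
>             && running * newStripes >= (long) (next + 1) * total) {
>           boundaries[next++] = i; // boundary falls right after chunk i
>         }
>       }
>       return boundaries;
>     }
>   }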
> Obviously this scheme (as well as Level) is not applicable to all scenarios;
> e.g. if timestamp is your key, it completely falls apart.
> The end result:
> - many small compactions that can be spread out in time.
> - reads still read from a small number of files (one stripe + L0).
> - region splits become marvelously simple (if we could move files between 
> regions, no references would be needed).
> The main advantage over Level (for HBase) is that the default store can
> still open the files and get correct results - there are no range-overlap
> shenanigans.
> It also needs no metadata, although we may record some for convenience.
> It also would appear to not cause as much I/O.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
