It is great to hear success stories from large-scale production
deployments!

On Mon, May 3, 2021 at 1:31 PM clara xiong <clarax98...@gmail.com> wrote:

> It is great to hear this is still in use in a large production
> deployment and getting improvements!
>
> On Tue, Apr 27, 2021 at 3:01 AM 张铎(Duo Zhang) <palomino...@gmail.com>
> wrote:
>
> > Stripe compaction is a feature which was implemented in HBase long
> > ago, but I've never seen extensive usage of it in the community. And
> > recently I found that a big company in China, Meituan, has made use of
> > stripe compaction in their production clusters. One of the team
> > members shared some information on a GitHub PR and she agreed that I
> > could share it on the mailing list.
> >
> > > Hi, @Apache9 <https://github.com/Apache9>, it's my pleasure to
> > > share this information.
> > > We use StripeCompactionPolicy in almost all of our production
> > > clusters. We let recent data in the memstore flush to L0, and limit
> > > the stripe size to about 10G. Most of our regions are larger than
> > > 50G, and there are even regions larger than 2T...
> > > StripeCompactionPolicy has no major compactions; it can limit a
> > > compaction to only L0 and the files of one stripe, and it can
> > > perform cell deletion within one stripe just like a major
> > > compaction. The pressure of compactions is broken down. Though the
> > > total file count in a region may be a little larger than with normal
> > > compactions, because the files are organized like mini-regions, it
> > > works well for most read requests.
> > > We also implemented a fast split-and-compact method based on
> > > StripeCompactionPolicy, deployed in all our production clusters;
> > > results show that split is very lightweight and there is no need to
> > > perform read+write compactions of the files right after a split.
> > > Details are in HBASE-25302
> > > <https://issues.apache.org/jira/browse/HBASE-25302>, hope you are
> > > interested...
> > > Thanks.
> >
> >
> > The original link:
> > https://github.com/apache/hbase/pull/3152#issuecomment-824166990
> >
>
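For anyone curious to experiment, stripe compaction can be enabled per
table by switching the store engine class. A minimal sketch in the HBase
shell, assuming a hypothetical table named 'orders'; the ~10G stripe
size mentioned above would map to something like:

```
# Switch the table's store engine to stripe compaction and cap stripe
# size at roughly 10G (10737418240 bytes) before a stripe splits.
alter 'orders', CONFIGURATION => {
  'hbase.hstore.engine.class' =>
      'org.apache.hadoop.hbase.regionserver.StripeStoreEngine',
  'hbase.store.stripe.sizeToSplit' => '10737418240'
}
```

The table name and the exact size value are illustrative; see the stripe
compaction section of the HBase Reference Guide for the full set of
tuning knobs (initial stripe count, L0 file thresholds, etc.).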