>> If I read this correctly, the size is 4.8G and the throttle is 2.5G, so it
should have been put into the Large compaction pool.

You answered your own question: a minor compaction queued into the same pool
(1 thread by default) will wait until the major is finished.
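
If the goal is to let minors keep draining while a long major is in flight, the
pool sizes themselves are configurable (both default to 1 thread, as far as I
remember). A rough hbase-site.xml sketch, with values that are only an example
and should be sized to your disks:

<property>
  <!-- illustrative values; both pools default to 1 thread -->
  <name>hbase.regionserver.thread.compaction.large</name>
  <value>2</value>
</property>
<property>
  <name>hbase.regionserver.thread.compaction.small</name>
  <value>3</value>
</property>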

-Vlad


On Tue, Oct 6, 2015 at 3:59 PM, Randy Fox <r...@connexity.com> wrote:

> 2015-10-06 14:50:35,644 INFO org.apache.hadoop.hbase.regionserver.HStore:
> Starting compaction of 4 file(s) in L of PROD_audience4,\x00
> \xB6\x0B\xA7,\x186,1443751119137.26f321d7b240c85a9350a95f6c288e49. into
> tmpdir=hdfs://woz/hbase/data/default/PROD_audience4/26f321d7b240c85a9350a95f6c288e49/.tmp,
> totalSize=4.8 G
> 2015-10-06 14:50:35,646 DEBUG
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction
> requested:
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext@7f236eb8;
> Because: User-triggered major compaction; compaction_queue=(0:0),
> split_queue=0, merge_queue=0
>
>
> If I read this correctly, the size is 4.8G and the throttle is 2.5G, so it
> should have been put into the Large compaction pool.
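>
> (Checking the arithmetic on the configured throttle: 2684354560 bytes /
> 1024^3 = 2.5 GiB, and the log reports totalSize=4.8 G, so by size alone this
> looks like it should have gone to the large pool.)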
>
>
>
>
>
> On 10/6/15, 3:07 PM, "Randy Fox" <r...@connexity.com> wrote:
>
> >We just made a big leap forward from 0.94 to 1.0. We run our own major
> compactions manually every night. One of the first things we noticed is that
> when a major compaction runs, no minors run. The compaction queue grows, and
> when the major compaction finishes, the minors then run. I have not found any
> new knobs we should be setting. Any ideas?
> >Our config is:
> >
> ><property>
> >  <name>hbase.hregion.majorcompaction</name>
> >  <value>0</value>
> >  <final>true</final>
> ></property>
> ><property>
> >    <name>hbase.hstore.compactionThreshold</name>
> >    <value>3</value>
> ></property>
> >
> ><property>
> >    <name>hbase.hstore.compaction.max</name>
> >    <value>9</value>
> ></property>
> ><property>
> >    <name>hbase.regionserver.thread.compaction.throttle</name>
> >    <value>2684354560</value>
> ></property>
> >
> >
> >Thanks in advance,
> >
> >Randy
>
