Re: How to design Temporal table in HBase

2016-06-01 Thread Mohammad Tariq
I don't think trying to 'mimic' an RDBMS table in HBase is a very good idea.
The two are completely different from each other, both in design and in
usage. For example, in order to get the latest value for a given column you
don't have to do anything extra. You just do a Get operation on the desired
rowkey, and by default HBase always gives you the latest value.

You could probably have a composite rowkey consisting of the fields you are
going to use most frequently during data retrieval. It's actually a bit
difficult to suggest something substantial without proper information on
your use case.
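To make that concrete, here is a minimal Java client sketch of both points: a composite rowkey (Col1 plus a reversed From_Date so newer intervals sort first) and a plain Get, which by default returns only the newest version of a cell. The "temporal" table name, the "d" column family, and the timestamp below are hypothetical placeholders, not part of Subhash's actual schema.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class LatestValueExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Hypothetical table name and column family.
    HTable table = new HTable(conf, "temporal");
    try {
      // Composite rowkey: col1 + reversed From_Date so newer intervals sort first.
      long fromDate = 1464739200000L;                       // example epoch millis
      byte[] rowkey = Bytes.add(Bytes.toBytes("col1-value"),
                                Bytes.toBytes(Long.MAX_VALUE - fromDate));
      Get get = new Get(rowkey);
      get.addColumn(Bytes.toBytes("d"), Bytes.toBytes("col2"));
      // A Get returns only the newest version of each cell by default,
      // so no extra work is needed to read the "latest" value.
      Result result = table.get(get);
      byte[] latest = result.getValue(Bytes.toBytes("d"), Bytes.toBytes("col2"));
      System.out.println(latest == null ? "no value" : Bytes.toString(latest));
    } finally {
      table.close();
    }
  }
}

With a key laid out this way, a Scan whose start/stop keys are built from the same col1 prefix plus reversed dates would roughly cover the date-range lookup described in the quoted question below.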




Tariq, Mohammad
about.me/mti



On Wed, Jun 1, 2016 at 12:03 AM, Subhash Pophale 
wrote:

> Hi
>
> It would be like:
> 1) What is the latest (Expired = '31-Dec-') value of "col2" for
> "col1" in date range between "From_Date" and "To_Date"?
>
>
> On 5/31/16, Mohammad Tariq  wrote:
> > Hi Subhash,
> >
> > What would be your query pattern like?
> >
> >
> >
> >
> > Tariq, Mohammad
> > about.me/mti
> > 
> >
> >
> > On Tue, May 31, 2016 at 11:48 PM, Subhash Pophale
> >  >> wrote:
> >
> >> Hi All,
> >>
> >> I am new to the HBase database. I am planning to create, or rather
> >> mimic, an RDBMS temporal table which has the columns below:
> >>
> >> Col1
> >> Col2
> >> From_Date
> >> To_Date
> >> Created
> >> Expired
> >>
> >> Primary Key is : {Col1,From_Date,Expired}
> >>
> >> Does anyone have an idea of the best HBase design for such a table?
> >>
> >> Many Thanks!
> >>
> >> Regards,
> >> Subhash Pophale
> >>
> >
>


Re: Major compaction cannot remove deleted rows until the region is split. Strange!

2016-06-01 Thread Stack
On Wed, Jun 1, 2016 at 10:56 AM, Tianying Chang  wrote:

> Hi, Stack
>
> After moving the region and issuing a major compact on that region, its
> size shrank from 99G down to 24G. So it looks like the region was in a bad
> state that it could not recover from; closing/opening it fixed the issue.
> And from the region size metric graph, we can see major compaction stopped
> working since March 31, so some bug caused the region to enter the bad
> state... Unfortunately, we don't have DEBUG enabled and that was the last
> region that had the issue, so it is hard to figure out what bug caused the
> bad state...
>
>
Interesting. So moving it to another RS makes it major-compactable? That
would seem to indicate some state kept in the RS memory is preventing the
major compaction from running. Is moving the region a workaround for you
until we figure out what it is, Tian-Ying?

St.
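For reference, the move-then-major-compact workaround discussed here can also be scripted against the 0.94-era Java client; below is a minimal sketch using HBaseAdmin, where the table name, encoded region name, and destination server are hypothetical placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveThenMajorCompact {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      // Placeholder names -- substitute the real encoded region name,
      // full region name, and destination "host,port,startcode".
      String encodedRegionName = "abcdef0123456789abcdef0123456789";
      String fullRegionName = "myTable,,1464700000000." + encodedRegionName + ".";
      String destServer = "rs-host.example.com,60020,1464700000000";

      // Re-open the region on another RS, dropping any in-memory state
      // the old server was holding for it.
      admin.move(Bytes.toBytes(encodedRegionName), Bytes.toBytes(destServer));

      // Then request a major compaction of that region (asynchronous).
      admin.majorCompact(fullRegionName);
    } finally {
      admin.close();
    }
  }
}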



> Thanks
> Tian-Ying
>
> On Tue, May 31, 2016 at 3:43 PM, Tianying Chang  wrote:
>
> > Hi, Stack
> >
> > Based on the log, the major compaction was run, and it took 5+ hours.
> > And I also manually ran major_compact from the hbase shell explicitly to
> > verify.
> >
> > I just moved the region to a different RS and issued a major_compact on
> > that region again; let me see if the major compaction can succeed, and I
> > will report back.
> >
> > Thanks
> > Tian-Ying
> >
> > On Sun, May 29, 2016 at 4:35 PM, Stack  wrote:
> >
> >> On Fri, May 27, 2016 at 3:17 PM, Tianying Chang 
> >> wrote:
> >>
> >> > Yes, it is 94.26. At a quick glance, I didn't see any put that is
> >> > older than the delete marker's TS, which could go back as far as a
> >> > couple of weeks, since it seems no major compaction has worked on it
> >> > for a long time.
> >> >
> >> > Also, it is really strange that if the region is split, then
> >> > everything seems to work as expected. Also we noticed that the same
> >> > region replicated on the slave side is totally normal, i.e. at 20+G
> >> >
> >> >
> >> If you move the region to another server, does that work?
> >>
> >> Looking in 0.94 codebase, I see this in Compactor#compact
> >>
> >>
> >>   // For major compactions calculate the earliest put timestamp
> >>   // of all involved storefiles. This is used to remove
> >>   // family delete marker during the compaction.
> >>   if (majorCompaction) {
> >>     tmp = fileInfo.get(StoreFile.EARLIEST_PUT_TS);
> >>     if (tmp == null) {
> >>       // There's a file with no information, must be an old one
> >>       // assume we have very old puts
> >>       earliestPutTs = HConstants.OLDEST_TIMESTAMP;
> >>     } else {
> >>       earliestPutTs = Math.min(earliestPutTs, Bytes.toLong(tmp));
> >>     }
> >>   }
> >>
> >>
> >> The above is followed by this log line:
> >>
> >>
> >>   if (LOG.isDebugEnabled()) {
> >>     LOG.debug("Compacting " + file +
> >>       ", keycount=" + keyCount +
> >>       ", bloomtype=" + r.getBloomFilterType().toString() +
> >>       ", size=" + StringUtils.humanReadableInt(r.length()) +
> >>       ", encoding=" + r.getHFileReader().getEncodingOnDisk() +
> >>       (majorCompaction? ", earliestPutTs=" + earliestPutTs: ""));
> >>   }
> >>
> >> This prints out earliestPutTs. You see that in the logs?  You running
> with
> >> DEBUG? Does the earliest put ts preclude our dropping delete family?
> >>
> >>
> >> Looking more in code, we retain deletes in following circumstances:
> >>
> >>
> >> this.retainDeletesInOutput =
> >>     scanType == ScanType.MINOR_COMPACT || scan.isRaw();
> >>
> >>
> >> So, for sure we are running major compaction?
> >>
> >> Otherwise, have to dig in a bit more here.. This stuff is a little
> >> involved.
> >> St.Ack
> >>
> >>
> >>
> >>
> >> > On Fri, May 27, 2016 at 3:13 PM, Stack  wrote:
> >> >
> >> > > On Fri, May 27, 2016 at 2:32 PM, Tianying Chang 
> >> > wrote:
> >> > >
> >> > > > Hi,
> >> > > >
> >> > > > We saw a very strange case in one of our production clusters. A
> >> > > > couple of regions cannot get their deleted rows or delete markers
> >> > > > removed even after major compaction. However, when the region
> >> > > > triggered a split (we set 100G for auto split), the deletion
> >> > > > worked. The 100G region becomes two 10G daughter regions, and all
> >> > > > the delete markers are gone.
> >> > > >
> >> > > > Also, the same region in the slave cluster (through replication)
> >> > > > has a normal size of about 20+G.
> >> > > >
> >> > > > BTW, the delete markers in the regions are mostly deleteFamily,
> >> > > > if it matters.
> >> > > >
> >> > > > This is really weird. Does anyone have any clue about this
> >> > > > strange behavior?
> >> > > >
> >> > > > Thanks
> >> > > > Tian-Ying
> >> > > >
> >> > > > These 0.94 Tian-Ying?
> >> > >
> >> > > It looks like 

Re: HBase Master is shutting down with error

2016-06-01 Thread Dima Spivak
Hey Pranavan,

You’ll likely have more luck on the user@hbase.apache.org mailing list.

Cheers,
  Dima

On Wed, Jun 1, 2016 at 9:39 AM, Pranavan Theivendiram <
pranavan...@cse.mrt.ac.lk> wrote:

> Hi Devs,
>
> I am Pranavan from Sri Lanka. I am doing a GSoC project for Apache
> Phoenix. Please help me with the following problem.
>
> I set up a cluster with Hadoop, HBase, and ZooKeeper. I am running a
> single node, but the HMaster is failing with the following error.
>
> 2016-06-01 22:01:27,273 INFO
>  [megala-Inspiron-N5110:6.activeMasterManager]
> master.ActiveMasterManager: Deleting ZNode for
> /hbase/backup-masters/megala-inspiron-n5110,6,1464798671030 from backup
> master directory
> 2016-06-01 22:01:27,632 INFO
>  [megala-Inspiron-N5110:6.activeMasterManager]
> master.ActiveMasterManager: Registered Active
> Master=megala-inspiron-n5110,6,1464798671030
> 2016-06-01 22:01:28,148 FATAL
> [megala-Inspiron-N5110:6.activeMasterManager] master.HMaster: Failed to
> become active master
> java.lang.IllegalStateException
>
> I attached the log file as well.
> Can anyone help me with this problem?
>
> The versions of the components are listed below
>
>1. hadoop 2.6.4
>2. hbase 1.2.1
>3. zookeeper 3.4.6
>
>
> Thanks
> *T. Pranavan*
> *Junior Consultant | Department of Computer Science & Engineering
> ,University of Moratuwa*
> *Mobile| *0775136836
>


Re: dfs.block.size recommendations for HBase

2016-06-01 Thread Vladimir Rodionov
>>  Does
>> the datanode need to read that entire block when HBase tries to fetch
data
>> from it?

No.

-Vlad
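To expand on the "No": HBase issues positional reads (preads) against HDFS, which request a specific offset and length, so the datanode serves only that slice of the (e.g. 128MB) block rather than streaming the whole thing. A minimal sketch of such a positional read, with a hypothetical file path and sizes:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PreadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Hypothetical HFile path; any large file on HDFS works for the demo.
    Path hfile = new Path("/hbase/data/default/t1/region/cf/hfile");
    FSDataInputStream in = fs.open(hfile);
    try {
      byte[] buf = new byte[64 * 1024];            // one 64KB HBase block
      long offset = 500L * 1024 * 1024;            // somewhere inside the file
      // Positional read (pread): only this offset/length is requested from
      // the datanode; the surrounding HDFS block is not read in full.
      int n = in.read(offset, buf, 0, buf.length);
      System.out.println("read " + n + " bytes");
    } finally {
      in.close();
    }
  }
}

Random Gets in HBase go through this pread path, which is why a larger dfs.block.size does not by itself force larger reads.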


On Wed, Jun 1, 2016 at 8:51 AM, Bryan Beaudreault 
wrote:

> Hello,
>
> There is very little information that I can find online with regards to
> recommended dfs.block.size setting for HBase. Often it gets conflated with
> the HBase blocksize, which we know should be smaller. Any chance we can get
> some recommendations for dfs.block.size?
>
> The default shipped with HDFS in later versions of CDH is 128MB. In highly
> random-read online database scenarios, should we be tuning that lower? Does
> the datanode need to read that entire block when HBase tries to fetch data
> from it? It seems hard to believe that the default block size would be good
> for HBase, considering how different it is from other hadoop workloads.
>
> Thanks!
>


Re: Major compaction cannot remove deleted rows until the region is split. Strange!

2016-06-01 Thread Tianying Chang
Hi, Stack

After moving the region and issuing a major compact on that region, its size
shrank from 99G down to 24G. So it looks like the region was in a bad state
that it could not recover from; closing/opening it fixed the issue. And from
the region size metric graph, we can see major compaction stopped working
since March 31, so some bug caused the region to enter the bad state...
Unfortunately, we don't have DEBUG enabled and that was the last region that
had the issue, so it is hard to figure out what bug caused the bad state...

Thanks
Tian-Ying

On Tue, May 31, 2016 at 3:43 PM, Tianying Chang  wrote:

> Hi, Stack
>
> Based on the log, the major compaction was run, and it took 5+ hours. And
> I also manually ran major_compact from the hbase shell explicitly to
> verify.
>
> I just moved the region to a different RS and issued a major_compact on
> that region again; let me see if the major compaction can succeed, and I
> will report back.
>
> Thanks
> Tian-Ying
>
> On Sun, May 29, 2016 at 4:35 PM, Stack  wrote:
>
>> On Fri, May 27, 2016 at 3:17 PM, Tianying Chang 
>> wrote:
>>
>> > Yes, it is 94.26. At a quick glance, I didn't see any put that is
>> > older than the delete marker's TS, which could go back as far as a
>> > couple of weeks, since it seems no major compaction has worked on it
>> > for a long time.
>> >
>> > Also, it is really strange that if the region is split, then
>> > everything seems to work as expected. Also we noticed that the same
>> > region replicated on the slave side is totally normal, i.e. at 20+G
>> >
>> >
>> If you move the region to another server, does that work?
>>
>> Looking in 0.94 codebase, I see this in Compactor#compact
>>
>>
>>   // For major compactions calculate the earliest put timestamp
>>   // of all involved storefiles. This is used to remove
>>   // family delete marker during the compaction.
>>   if (majorCompaction) {
>>     tmp = fileInfo.get(StoreFile.EARLIEST_PUT_TS);
>>     if (tmp == null) {
>>       // There's a file with no information, must be an old one
>>       // assume we have very old puts
>>       earliestPutTs = HConstants.OLDEST_TIMESTAMP;
>>     } else {
>>       earliestPutTs = Math.min(earliestPutTs, Bytes.toLong(tmp));
>>     }
>>   }
>>
>>
>> The above is followed by this log line:
>>
>>
>>   if (LOG.isDebugEnabled()) {
>>     LOG.debug("Compacting " + file +
>>       ", keycount=" + keyCount +
>>       ", bloomtype=" + r.getBloomFilterType().toString() +
>>       ", size=" + StringUtils.humanReadableInt(r.length()) +
>>       ", encoding=" + r.getHFileReader().getEncodingOnDisk() +
>>       (majorCompaction? ", earliestPutTs=" + earliestPutTs: ""));
>>   }
>>
>> This prints out earliestPutTs. You see that in the logs?  You running with
>> DEBUG? Does the earliest put ts preclude our dropping delete family?
>>
>>
>> Looking more in code, we retain deletes in following circumstances:
>>
>>
>> this.retainDeletesInOutput =
>>     scanType == ScanType.MINOR_COMPACT || scan.isRaw();
>>
>>
>> So, for sure we are running major compaction?
>>
>> Otherwise, have to dig in a bit more here.. This stuff is a little
>> involved.
>> St.Ack
>>
>>
>>
>>
>> > On Fri, May 27, 2016 at 3:13 PM, Stack  wrote:
>> >
>> > > On Fri, May 27, 2016 at 2:32 PM, Tianying Chang 
>> > wrote:
>> > >
>> > > > Hi,
>> > > >
>> > > > We saw a very strange case in one of our production clusters. A
>> > > > couple of regions cannot get their deleted rows or delete markers
>> > > > removed even after major compaction. However, when the region
>> > > > triggered a split (we set 100G for auto split), the deletion
>> > > > worked. The 100G region becomes two 10G daughter regions, and all
>> > > > the delete markers are gone.
>> > > >
>> > > > Also, the same region in the slave cluster (through replication)
>> > > > has a normal size of about 20+G.
>> > > >
>> > > > BTW, the delete markers in the regions are mostly deleteFamily,
>> > > > if it matters.
>> > > >
>> > > > This is really weird. Does anyone have any clue about this
>> > > > strange behavior?
>> > > >
>> > > > Thanks
>> > > > Tian-Ying
>> > > >
>> > > > These 0.94 Tian-Ying?
>> > >
>> > > It looks like the DeleteFamily is retained only; do you see instances
>> > > where there may have been versions older than the DeleteFamily that
>> > > are also retained post-major-compaction?
>> > >
>> > > St.Ack
>> > >
>> > >
>> > >
>> > > > A snippet of the HFile generated by the major compaction:
>> > > >
>> > > > : \xA0\x00\x00L\x1A@\x1CBe\x00\x00\x08m\x03\x1A@
>> > > > \x10\x00?PF/d:/1459808114380/DeleteFamily/vlen=0/ts=2292870047
>> > > > V:
>> > > > K: \xA0\x00\x00L\x1A@\x1CBe\x00\x00\x08m\x03\x1A@
>> > > > \x10\x00?PF/d:/1459808114011/DeleteFamily/vlen=0/ts=2292869794
>> > > > V:
>> > > > K: 

dfs.block.size recommendations for HBase

2016-06-01 Thread Bryan Beaudreault
Hello,

There is very little information that I can find online with regards to
recommended dfs.block.size setting for HBase. Often it gets conflated with
the HBase blocksize, which we know should be smaller. Any chance we can get
some recommendations for dfs.block.size?

The default shipped with HDFS in later versions of CDH is 128MB. In highly
random-read online database scenarios, should we be tuning that lower? Does
the datanode need to read that entire block when HBase tries to fetch data
from it? It seems hard to believe that the default block size would be good
for HBase, considering how different it is from other hadoop workloads.

Thanks!


Re: Memstore blocking

2016-06-01 Thread ????
hi Stack:
 
 
 1.Is region always on same machine or do you see this phenomenon on more than 
one machine?
Not always on the same machine, but always on the machine which holds
the first region of a table (the only table whose first region cannot
flush; when we restart the regionserver, the first region moves to another
machine)
   
  2.The RS is ABORTED? Because it can't flush? Is that what it says in the log? 
Can we see the log message around the ABORT?
Sorry, I did not express it clearly here. It is the MemStore of the first
region that can't flush, not the RS.
The RS Log is like this:
INFO  [regionserver60020.periodicFlusher] regionserver.HRegionServer: 
regionserver60020.periodicFlusher requesting flush for region 
qtrace,,1458012479440.dd8f92e3c161a8534b30ab17c28ae8be. after a delay of 9452
DEBUG [MemStoreFlusher.39] regionserver.HRegion: NOT flushing memstore 
for region qtrace,,1458012479440.dd8f92e3c161a8534b30ab17c28ae8be., 
flushing=true, writesEnabled=true
   
And the web UI shows:
Aborted flushing. (Not flushing since already flushing.) But the
flusher thread never finishes.

  3.100% disk only? The CPU does not go up too?  Can we see a thread dump? Do 
jstack -l  PID if you can
Only the disk usage (command: df -h) increases faster than on other
machines, not the IO usage. The CPU usage is very low.

 
 
  4.Any other special configurations going on on this install? Phoenix or use 
of Coprocessors?
NO, no phoenix. Only AccessController coprocessor.
 
 
  5.If you thread dump a few times, it is always stuck here?
Yes, always stuck here. Here is the jstack log. (In this log, it is
MemStoreFlusher.13 that can't flush.)

 
  

PS: As I see it, I think the problem is caused by the first region not being
able to flush. But I do not know why it can't flush, or why only the first
region of this one table has the problem.
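To illustrate the suspected failure mode, here is a simplified sketch (not the actual HRegion code) of how a flush-in-progress flag that is never cleared would keep every later flush request from running, matching the "Not flushing since already flushing" message above:

// Simplified illustration only -- not HBase's actual HRegion implementation.
public class RegionFlushGuard {
  private boolean flushing = false;   // analogous to a flush-in-progress flag

  public synchronized boolean requestFlush() {
    if (flushing) {
      // This is the state reported as "Not flushing since already flushing."
      System.out.println("NOT flushing: already flushing");
      return false;
    }
    flushing = true;
    return true;
  }

  public void flush() {
    if (!requestFlush()) {
      return;
    }
    try {
      doFlush();                       // hypothetical flush work
    } finally {
      // If the flusher thread never returns from doFlush() (e.g. it hangs),
      // this reset never runs, every later flush request is rejected, and
      // the memstore never drains -- matching the symptom in this thread.
      synchronized (this) {
        flushing = false;
      }
    }
  }

  private void doFlush() { /* write the memstore out to an HFile */ }
}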
 
 
 
   On Jun 1, 2016, at 3:10 AM, Stack wrote:
 
  On Mon, May 30, 2016 at 7:03 PM,  <175998...@qq.com> wrote:
 
 HI ALL:   Recently,I met a strange problem,  the first Region's
 Memstore of one table (the only one) often blocked when flushing.(Both
 Version:  hbase-0.98.6-cdh5.2.0  and 1.0.1.1, I updated 0.98 to
 1.0.1.1,hope to solve the problem,But failed)
 
  
 
 Is region always on same machine or do you see this phenomenon on more than
 one machine?
 
 
 
   On the web UI, I can see the status shows:  ABORTED(since XXsec
 ago), Not flushing since already flushing.
 
  
 
 The RS is ABORTED? Because it can't flush? Is that what it says in the log?
 Can we see the log message around the ABORT?
 
 
 
   But it will never flush successfully, and the disk usage will
 increase very high. Now other regionservers use just 30% of the disk
 capacity, but the problematic regionserver will increase to 100%, unless
 we kill the process.
 
  
 
 100% disk only? The CPU does not go up too?
 
 Can we see a thread dump? Do jstack -l  PID if you can.
 
 
 
 What's more, the region server process cannot be shut down
 normally; every time I have to use the KILL -9 command.
   I checked the log; the reason it cannot flush is that one of the
 MemstoreFlusher threads cannot exit.
   The log is like below:
   2016-05-29 19:54:11,982 INFO  [MemStoreFlusher.13]
 regionserver.MemStoreFlusher: MemStoreFlusher.13 exiting
 2016-05-29 19:54:13,016 INFO  [MemStoreFlusher.6]
 regionserver.MemStoreFlusher: MemStoreFlusher.6 exiting
 2016-05-29 19:54:13,260 INFO  [MemStoreFlusher.16]
 regionserver.MemStoreFlusher: MemStoreFlusher.16 exiting
 2016-05-29 19:54:16,032 INFO  [MemStoreFlusher.33]
 regionserver.MemStoreFlusher: MemStoreFlusher.33 exiting
 2016-05-29 19:54:16,341 INFO  [MemStoreFlusher.25]
 regionserver.MemStoreFlusher: MemStoreFlusher.25 exiting
 2016-05-29 19:54:16,620 INFO  [MemStoreFlusher.31]
 regionserver.MemStoreFlusher: MemStoreFlusher.31 exiting
 2016-05-29 19:54:16,621 INFO  [MemStoreFlusher.29]
 regionserver.MemStoreFlusher: MemStoreFlusher.29 exiting
 2016-05-29 19:54:16,621 INFO  [MemStoreFlusher.23]
 regionserver.MemStoreFlusher: MemStoreFlusher.23 exiting
 2016-05-29 19:54:16,621 INFO  [MemStoreFlusher.32]
 regionserver.MemStoreFlusher: MemStoreFlusher.32 exiting
 2016-05-29 19:54:16,621 INFO  [MemStoreFlusher.1]
 regionserver.MemStoreFlusher: MemStoreFlusher.1 exiting
 2016-05-29 19:54:16,621 INFO  [MemStoreFlusher.38]
 regionserver.MemStoreFlusher: MemStoreFlusher.38 exiting
 2016-05-29 19:54:16,621 INFO  [MemStoreFlusher.10]
 regionserver.MemStoreFlusher: MemStoreFlusher.10 exiting
 2016-05-29 19:54:16,620 INFO  [MemStoreFlusher.7]
 regionserver.MemStoreFlusher: MemStoreFlusher.7 exiting
 2016-05-29 19:54:16,620 INFO  [MemStoreFlusher.12]
 regionserver.MemStoreFlusher: MemStoreFlusher.12 exiting
 2016-05-29 19:54:16,620 INFO  [MemStoreFlusher.21]
 regionserver.MemStoreFlusher: MemStoreFlusher.21 exiting
 2016-05-29 

Re: Memstore blocking

2016-06-01 Thread 吴国泉wgq
hi all:

1.Is region always on same machine or do you see this phenomenon on more than 
one machine?
   Not always on the same machine, but always on the machine which holds the
first region of a table (the only table whose first region cannot flush; when
we restart the regionserver, the first region moves to another machine)

 2.The RS is ABORTED? Because it can't flush? Is that what it says in the log? 
Can we see the log message around the ABORT?
   Sorry, I did not express it clearly here. It is the MemStore of the first
region that can't flush, not the RS.
   The RS Log is like this:
   INFO  [regionserver60020.periodicFlusher] regionserver.HRegionServer: 
regionserver60020.periodicFlusher requesting flush for region 
qtrace,,1458012479440.dd8f92e3c161a8534b30ab17c28ae8be. after a delay of 9452
   DEBUG [MemStoreFlusher.39] regionserver.HRegion: NOT flushing memstore 
for region qtrace,,1458012479440.dd8f92e3c161a8534b30ab17c28ae8be., 
flushing=true, writesEnabled=true

   And the web UI shows:
   Aborted flushing. (Not flushing since already flushing.) But the
flusher thread never finishes.

 3.100% disk only? The CPU does not go up too?  Can we see a thread dump? Do 
jstack -l  PID if you can
   Only the disk usage (command: df -h) increases faster than on other
machines, not the IO usage. The CPU usage is very low.


 4.Any other special configurations going on on this install? Phoenix or use of 
Coprocessors?
   NO, no phoenix. Only AccessController coprocessor.

 5.If you thread dump a few times, it is always stuck here?
   Yes, always stuck here. Here is the jstack log. (In this log, it is
MemStoreFlusher.13 that can't flush.)


   PS: As I see it, I think the problem is caused by the first region not
being able to flush. But I do not know why it can't flush, or why only the
first region of this one table has the problem.


On Jun 1, 2016, at 3:10 AM, Stack wrote:

On Mon, May 30, 2016 at 7:03 PM, 聪聪 <175998...@qq.com> 
wrote:

HI ALL:   Recently,I met a strange problem,  the first Region’s
Memstore of one table (the only one) often blocked when flushing.(Both
Version:  hbase-0.98.6-cdh5.2.0  and 1.0.1.1, I updated 0.98 to
1.0.1.1,hope to solve the problem,But failed)



Is region always on same machine or do you see this phenomenon on more than
one machine?



  On the web UI, I can see the status shows:  ABORTED(since XXsec
ago), Not flushing since already flushing.



The RS is ABORTED? Because it can't flush? Is that what it says in the log?
Can we see the log message around the ABORT?



   But it will never flush successfully, and the disk usage will
increase very high. Now other regionservers use just 30% of the disk
capacity, but the problematic regionserver will increase to 100%, unless
we kill the process.



100% disk only? The CPU does not go up too?

Can we see a thread dump? Do jstack -l  PID if you can.



  What’s more, the region server process cannot be shut down
normally; every time I have to use the KILL -9 command.
  I checked the log; the reason it cannot flush is that one of the
MemstoreFlusher threads cannot exit.
  The log is like below:
  2016-05-29 19:54:11,982 INFO  [MemStoreFlusher.13]
regionserver.MemStoreFlusher: MemStoreFlusher.13 exiting
2016-05-29 19:54:13,016 INFO  [MemStoreFlusher.6]
regionserver.MemStoreFlusher: MemStoreFlusher.6 exiting
2016-05-29 19:54:13,260 INFO  [MemStoreFlusher.16]
regionserver.MemStoreFlusher: MemStoreFlusher.16 exiting
2016-05-29 19:54:16,032 INFO  [MemStoreFlusher.33]
regionserver.MemStoreFlusher: MemStoreFlusher.33 exiting
2016-05-29 19:54:16,341 INFO  [MemStoreFlusher.25]
regionserver.MemStoreFlusher: MemStoreFlusher.25 exiting
2016-05-29 19:54:16,620 INFO  [MemStoreFlusher.31]
regionserver.MemStoreFlusher: MemStoreFlusher.31 exiting
2016-05-29 19:54:16,621 INFO  [MemStoreFlusher.29]
regionserver.MemStoreFlusher: MemStoreFlusher.29 exiting
2016-05-29 19:54:16,621 INFO  [MemStoreFlusher.23]
regionserver.MemStoreFlusher: MemStoreFlusher.23 exiting
2016-05-29 19:54:16,621 INFO  [MemStoreFlusher.32]
regionserver.MemStoreFlusher: MemStoreFlusher.32 exiting
2016-05-29 19:54:16,621 INFO  [MemStoreFlusher.1]
regionserver.MemStoreFlusher: MemStoreFlusher.1 exiting
2016-05-29 19:54:16,621 INFO  [MemStoreFlusher.38]
regionserver.MemStoreFlusher: MemStoreFlusher.38 exiting
2016-05-29 19:54:16,621 INFO  [MemStoreFlusher.10]
regionserver.MemStoreFlusher: MemStoreFlusher.10 exiting
2016-05-29 19:54:16,620 INFO  [MemStoreFlusher.7]
regionserver.MemStoreFlusher: MemStoreFlusher.7 exiting
2016-05-29 19:54:16,620 INFO  [MemStoreFlusher.12]
regionserver.MemStoreFlusher: MemStoreFlusher.12 exiting
2016-05-29 19:54:16,620 INFO  [MemStoreFlusher.21]
regionserver.MemStoreFlusher: MemStoreFlusher.21 exiting
2016-05-29 19:54:16,622 INFO  [MemStoreFlusher.37]
regionserver.MemStoreFlusher: MemStoreFlusher.37 exiting
2016-05-29 19:54:16,622 INFO