may be a bug in ServerShutDownHandler leading to region in RIT

2016-10-30 Thread WangYQ
hbase version: 0.98.10
when an RS dies, the master uses ServerShutdownHandler to process the dead RS.
ServerShutdownHandler calls RegionStates.serverOffline to collect all
regions that need to be reassigned.


in method RegionStates.serverOffline, line 538:
LOG.warn("THIS SHOULD NOT HAPPEN..");


but there is a scenario that can make line 538 happen and leave the region stuck in RIT


steps to reproduce this problem:
1. hmaster asks rs1 to open region1
2. rs1 opens region1 successfully
3. hmaster uses the method "handleRegion" to process the RS_ZK_REGION_OPENED zk event;
the handling includes deleting the znode, updating RIT, marking the region online, and
updating lastAssignment
4. rs1 dies before hmaster processes the node-deleted event,
so hmaster skips the remaining steps that would bring the region online
5. hmaster submits ServerShutdownHandler to process dead rs1,
   which finds region1 in RIT in OPEN state, with the serverName in RIT equal to the
dead server


then, in method RegionStates.serverOffline, line 538,
LOG.warn("THIS SHOULD NOT HAPPEN..");   will fire


finally, region1 will stay in RIT unless we restart hmaster to refresh
hmaster's in-memory state
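For illustration, here is a minimal, self-contained toy model of the end state the
steps above produce (plain Java, not the actual 0.98.10 code; names such as
RitRaceToy, State, regionsInTransition, and lastAssignment are invented for this
sketch): an OPEN entry is left in RIT pointing at the dead server, and serverOffline
only warns about it instead of re-queuing the region for assignment.

import java.util.HashMap;
import java.util.Map;

/** Toy model of the race described above; not HBase code. */
public class RitRaceToy {
  enum State { PENDING_OPEN, OPEN }

  public static void main(String[] args) {
    Map<String, State> regionsInTransition = new HashMap<String, State>();
    Map<String, String> lastAssignment = new HashMap<String, String>();

    // steps 1-3: rs1 opens region1 and handleRegion marks it OPEN on rs1
    regionsInTransition.put("region1", State.OPEN);
    lastAssignment.put("region1", "rs1");

    // step 4: rs1 dies before the node-deleted event clears region1 from RIT
    String deadServer = "rs1";

    // step 5: serverOffline walks RIT, hits the "should not happen" branch,
    // and region1 is neither cleared from RIT nor re-queued for assignment
    for (Map.Entry<String, State> e : regionsInTransition.entrySet()) {
      if (e.getValue() == State.OPEN && deadServer.equals(lastAssignment.get(e.getKey()))) {
        System.out.println("THIS SHOULD NOT HAPPEN: " + e.getKey() + " is stuck in RIT");
      }
    }
  }
}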


 

Re: Re: HostAndWeight

2016-06-22 Thread WangYQ
to Ram:
I am not sure whether I misunderstand the code, so I want to confirm whether there is
any room for improvement

to Ted:
sorry, I made some mistakes in my previous description

I mean we could use a Guava Multimap:
private Multimap<String, Long> hostAndWeight

so we do not need to store the hostname many times
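For what it is worth, here is a minimal sketch of the redundancy being pointed out
(the HostAndWeight class below is a simplified stand-in, not the actual
HDFSBlockDistribution code): the map key is already the hostname, so keeping the
hostname again inside the value stores it twice, whereas a plain Map<String, Long>
keeps it only as the key.

import java.util.Map;
import java.util.TreeMap;

public class HostAndWeightSketch {
  // simplified stand-in for the real tuple: it repeats the hostname that is
  // already used as the map key
  static final class HostAndWeight {
    final String host;
    final long weight;
    HostAndWeight(String host, long weight) { this.host = host; this.weight = weight; }
  }

  public static void main(String[] args) {
    Map<String, HostAndWeight> current = new TreeMap<String, HostAndWeight>();
    current.put("host1", new HostAndWeight("host1", 128L));   // "host1" stored twice

    Map<String, Long> proposed = new TreeMap<String, Long>(); // the suggestion in this thread
    proposed.put("host1", 128L);                              // hostname kept only as the key

    System.out.println(current.get("host1").weight + " / " + proposed.get("host1"));
  }
}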




thanks



On 2016-06-23 12:50 , Ted Yu Wrote:

YQ:
The HostAndWeight is basically a tuple.
In getTopHosts(), hosts are retrieved.
In getWeight(String host), weight is retrieved.

Why do you think a single Long is enough ?

Cheers

On Wed, Jun 22, 2016 at 9:28 PM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:

> Hi WangYQ,
>
> For code related suggestions if you feel there is an improvement or bug it
> is preferrable to raise a JIRA and give a patch. Pls feel free to raise a
> JIRA with your suggestion and why you plan to change it.
>
> Regards
> Ram
>
> On Thu, Jun 23, 2016 at 9:36 AM, WangYQ <wangyongqiang0...@163.com> wrote:
>
> >
> > there is a class named "HDFSBlockDistribution",  use a tree map
> > "hostAndWeight" to store data
> > private Map<String, HostAndWeight> hostAndWeight
> >
> > I think we can use
> > private Map<String, Long> hostAndWeight
> > to store data
> >
> >
> > thanks
>


(No subject)

2016-06-22 Thread WangYQ

there is a class named "HDFSBlockDistribution" that uses a tree map "hostAndWeight"
to store data:
private Map<String, HostAndWeight> hostAndWeight

I think we can use
private Map<String, Long> hostAndWeight
to store the data


thanks

Re:Re: maybe waste on blockCache

2016-06-16 Thread WangYQ


I set blockCache on for all user tables, but set the IN_MEMORY setting to false
 







At 2016-06-16 18:18:44, "Heng Chen" <heng.chen.1...@gmail.com> wrote:
>bq. if we do not set any user tables IN_MEMORY to true, then the whole
>hbase just need to cache hbase:meta data to in_memory LruBlockCache.
>
>You set blockcache to be false for other tables?
>
>2016-06-16 16:21 GMT+08:00 WangYQ <wangyongqiang0...@163.com>:
>
>> in hbase 0.98.10, if we use LruBlockCache, and set regionServer's max heap
>> to 10G
>> in default:
>> the size of in_memory priority of LruBlockCache is :
>> 10G * 0.4 * 0.25 = 1G
>>
>>
>> 0.4: hfile.block.cache.size
>> 0.25: hbase.lru.blockcache.memory.percentage
>>
>>
>> if we do not set any user tables IN_MEMORY to true, then the whole hbase
>> just need to cache hbase:meta data to in_memory LruBlockCache.
>> hbase:meta does not split , so just need one regionServer to cache, so
>> there is some waste in blockCache
>>
>>
>> i think the regionServer open hbase:meta need to set  in_memory
>> LruBlockCache to a certain size
>> other regionServer set hbase.lru.blockcache.memory.percentage to 0, do not
>> need to allocate  in_memory LruBlockCache.


maybe waste on blockCache

2016-06-16 Thread WangYQ
in hbase 0.98.10, if we use LruBlockCache and set the regionServer's max heap to
10G, then by default
the size of the in_memory priority area of the LruBlockCache is:
10G * 0.4 * 0.25 = 1G


0.4: hfile.block.cache.size
0.25: hbase.lru.blockcache.memory.percentage
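As a minimal sketch of this arithmetic (plain Java, using only the default values
named above; the class name is made up for illustration):

public class InMemoryLruSizeSketch {
  public static void main(String[] args) {
    long maxHeapBytes = 10L * 1024 * 1024 * 1024;    // 10G region server heap
    double blockCacheFraction = 0.4;                 // hfile.block.cache.size
    double inMemoryFraction = 0.25;                  // hbase.lru.blockcache.memory.percentage

    long inMemoryLruBytes = (long) (maxHeapBytes * blockCacheFraction * inMemoryFraction);
    // prints 1.0, i.e. roughly 1G reserved for in-memory blocks on every regionServer
    System.out.println(inMemoryLruBytes / (1024.0 * 1024 * 1024) + " GB");
  }
}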


if we do not set IN_MEMORY to true on any user table, then the whole cluster only
needs the in_memory LruBlockCache area for hbase:meta data.
hbase:meta does not split, so only one regionServer needs to cache it, and there is
some waste in the blockCache on all the other regionServers


I think only the regionServer that opens hbase:meta needs to set the in_memory
LruBlockCache to a certain size;
the other regionServers could set hbase.lru.blockcache.memory.percentage to 0 and not
allocate an in_memory LruBlockCache at all.

on blockCache hitRatio

2016-06-15 Thread WangYQ
HBASE_HEAP_SIZE=10G, use LruBlockCache with 0.4 of HBASE_HEAP_SIZE
after HBase has run for 15 days, I find that on some RS there is 200M of free block
cache, but the hit ratio is 10%, which is too low


I think the hit ratio may be low because of the small block cache size (4G)


are there any suggestions for getting a higher hit ratio?


would using an off-heap bucket cache help? in the hbase doc, I see the bucket cache
is mainly used to reduce CMS GC pressure,
and when we use the combined cache (LruBlockCache + BucketCache), meta blocks are
stored in the LRU cache and data blocks are stored in the off-heap bucketCache; does
this hurt read performance?
is there any test data?



question on class CacheConfig

2016-06-13 Thread WangYQ
in hbase 0.98.10
class CacheConfig
in line 417, we compute lruCacheSize from the heap size and the cache percentage
in line 450, when we use combineWithLru, we compute a new lruCacheSize again, this
time from bucketCacheSize


lruCacheSize should have no relation to the bucketCache, so I think it is not proper
to derive lruCacheSize from bucketCacheSize
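To make the concern concrete, here is a simplified sketch of the two derivations
being contrasted (the constants, fractions, and class name are made up for
illustration; this is not the actual CacheConfig code): one size comes from the heap,
the other is re-derived from the bucket cache size.

public class CacheSizingSketch {
  public static void main(String[] args) {
    long heapBytes = 10L * 1024 * 1024 * 1024;
    float cachePercentage = 0.4f;                    // hfile.block.cache.size

    // roughly what line 417 does: size the LRU cache from the heap
    long lruCacheSizeFromHeap = (long) (heapBytes * cachePercentage);

    // roughly the pattern questioned at line 450: when combined with a bucket
    // cache, an LRU size is derived from the bucket cache size instead
    // (the 0.1f fraction here is purely illustrative, not a real constant)
    long bucketCacheBytes = 4L * 1024 * 1024 * 1024;
    long lruCacheSizeFromBucket = (long) (bucketCacheBytes * 0.1f);

    System.out.println(lruCacheSizeFromHeap + " vs " + lruCacheSizeFromBucket);
  }
}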

question on region split/flush

2016-06-11 Thread WangYQ
in hbase 0.98.10 doc, section 6.2  "on the number of column families"


" we currently does not do well with anything above two or three column 
families"


I am wondering whether there have been any new improvements on this



Re: Re: Re: pool MASTER_OPEN_REGION in hmaster is not used

2016-06-03 Thread WangYQ
in 0.98, many operations are executed concurrently by the zk workers, so some of
these pools are no longer used.
They can be removed to make the code cleaner and easier to read.


thanks



On 2016-06-03 21:17 , Ted Yu Wrote:

I checked head of 0.98 branch.

For the two 'new OpenedRegionHandler()' calls
in AssignmentManager.java, process() is invoked directly.

Looks like you're right.

On Thu, Jun 2, 2016 at 10:58 PM, WangYQ <wangyongqiang0...@163.com> wrote:

> RS_ZK_REGION_OPENED   (4, ExecutorType.MASTER_OPEN_REGION): there is a
> relation between the event type "RS_ZK_REGION_OPENED" and the pool
> "MASTER_OPEN_REGION", but in hbase 0.98.10 we do not use these pools
> anymore. An example: in class AssignmentManager, method handleRegion, in the case
> for RS_ZK_REGION_OPENED we construct an OpenedRegionHandler and call its process
> method directly, without submitting the handler to the pool (in class
> AssignmentManager, line 1078)
>
>
>
>
>
>
>
>
> At 2016-06-02 21:29:09, "Ted Yu" <yuzhih...@gmail.com> wrote:
> >Have you seen this line in EventType.java ?
> >
> >  RS_ZK_REGION_OPENED   (4, ExecutorType.MASTER_OPEN_REGION),
> >
> >If you follow RS_ZK_REGION_OPENED, you would see how the executor is used.
> >
> >On Thu, Jun 2, 2016 at 4:56 AM, WangYQ <wangyongqiang0...@163.com> wrote:
> >
> >> in hbase 0.98.10, class HMaster, method startServiceThread, hmaster
> open a
> >> thread poolwith type MASTER_OPEN_REGION, but this pool is not used in
> any
> >> place
> >> can remove this.
> >>
> >>
> >> thanks
>


Re: Re: RS open hbase:meta may throw NullPointerException

2016-06-03 Thread WangYQ
yes, the code below can be improved
    // See HBASE-5094. Cross check with hbase:meta if still this RS is owning
    // the region.
    Pair<HRegionInfo, ServerName> p = MetaReader.getRegion(
        this.catalogTracker, region.getRegionName());


if the regionserver's open of hbase:meta times out (1 minute), then the master will
retry by sending another open RPC to the region server.
If the RS finishes opening hbase:meta before receiving the second open request, a
NullPointerException will be thrown.

in the common case the RS can open hbase:meta within one minute, so I am not sure
whether this is a real problem



thanks



On 2016-06-03 21:25 , Ted Yu Wrote:

Were you referring to the following lines ?

    // See HBASE-5094. Cross check with hbase:meta if still this RS is owning
    // the region.
    Pair<HRegionInfo, ServerName> p = MetaReader.getRegion(
        this.catalogTracker, region.getRegionName());

The above is at line 3967 at head of 0.98 branch.

If you can come up with test which shows the problem for head of 0.98
branch, suggest opening a JIRA.

Cheers

On Fri, Jun 3, 2016 at 2:16 AM, WangYQ <wangyongqiang0...@163.com> wrote:

> in hbase 0.98.10,
> class HRegionServer
> method openRegion
> line 3827
>
>
> when RS open region, if this region is already opened by RS, we will check
> hbase:meta to see if hbase:meta is updated.
> but, if the RS is opening hbase:meta, then MetaReader.getRegion will
> return null(we do not store hbase:meta data in table hbase:meta), and lead
> to nullPointerException
>
>
> we can reproduce this problem as follows:
> 1. master assign hbase:meta
> 2. RS open hbase:meta slowly, master timeout(may because of high load or
> network problem),
> 3. master send open region request again(RS is still opening hbase:meta,
> and opened)
> 4. RS will throw NullPointerException and hmaster will retry forever
>
>


RS open hbase:meta may throw NullPointerException

2016-06-03 Thread WangYQ
in hbase 0.98.10, 
class HRegionServer
method openRegion
line 3827


when the RS opens a region, if the region is already opened by the RS, we check
hbase:meta to see whether hbase:meta has been updated.
But if the region being opened is hbase:meta itself, MetaReader.getRegion will return
null (hbase:meta's own row is not stored in the hbase:meta table), which leads to a
NullPointerException


we can reproduce this problem as follows:
1. master assigns hbase:meta
2. RS opens hbase:meta slowly and the master times out (maybe because of high load or
a network problem)
3. master sends the open region request again (the RS is still opening hbase:meta,
and then finishes opening it)
4. RS throws a NullPointerException and hmaster retries forever



open store and open storeFile use the same conf

2016-06-03 Thread WangYQ
in hbase 0.98.10, class HRegion, lines 1277 to 1286:
there are two methods, "getStoreOpenAndCloseThreadPool" and
"getStoreFileOpenAndCloseThreadPool". getStoreOpenAndCloseThreadPool provides the
thread pool size for opening/closing Stores, and getStoreFileOpenAndCloseThreadPool
provides the pool size for opening/closing storeFiles, but both use the same conf:
"HSTORE_OPEN_AND_CLOSE_THREADS_MAX".


the number of stores and the number of storeFiles should have no relation, so the
two methods should use different configuration settings
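As a rough sketch of the suggestion: the second property name below,
hbase.hstorefile.open.and.close.threads.max, is hypothetical and does not exist in
HBase; the first is the value of HSTORE_OPEN_AND_CLOSE_THREADS_MAX written from
memory, so verify it against your source tree. The sketch reads plain system
properties instead of an HBaseConfiguration purely to stay self-contained.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StorePoolSketch {
  public static void main(String[] args) {
    // existing key (as recalled), shared today by both pools
    int storeThreads = Integer.getInteger("hbase.hstore.open.and.close.threads.max", 1);
    // hypothetical, separate key for store files, as proposed above
    int storeFileThreads =
        Integer.getInteger("hbase.hstorefile.open.and.close.threads.max", storeThreads);

    ExecutorService storePool = Executors.newFixedThreadPool(Math.max(1, storeThreads));
    ExecutorService storeFilePool = Executors.newFixedThreadPool(Math.max(1, storeFileThreads));
    System.out.println("store pool: " + storeThreads + ", storefile pool: " + storeFileThreads);
    storePool.shutdown();
    storeFilePool.shutdown();
  }
}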



Re:Re: pool MASTER_OPEN_REGION in hmaster is not used

2016-06-02 Thread WangYQ
RS_ZK_REGION_OPENED   (4, ExecutorType.MASTER_OPEN_REGION): there is a
relation between the event type "RS_ZK_REGION_OPENED" and the pool
"MASTER_OPEN_REGION", but in hbase 0.98.10 we do not use these pools
anymore. An example: in class AssignmentManager, method handleRegion, in the case
for RS_ZK_REGION_OPENED we construct an OpenedRegionHandler and call its process
method directly, without submitting the handler to the pool (in class
AssignmentManager, line 1078)
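To illustrate the distinction with a self-contained toy (not the real
OpenedRegionHandler or the real executor wiring in HBase): calling process() on the
handler directly never touches the pool, so a pool created only for that purpose goes
unused.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HandlerDispatchSketch {
  // toy stand-in for OpenedRegionHandler
  static class OpenedHandlerToy implements Runnable {
    public void run() { process(); }
    void process() { System.out.println("marking region opened"); }
  }

  public static void main(String[] args) {
    OpenedHandlerToy handler = new OpenedHandlerToy();

    // what the 0.98 handleRegion code path is described as doing: a direct call
    handler.process();

    // what would actually exercise a MASTER_OPEN_REGION-style pool: submitting the handler
    ExecutorService masterOpenRegionPool = Executors.newFixedThreadPool(4);
    masterOpenRegionPool.submit(handler);
    masterOpenRegionPool.shutdown();
  }
}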








At 2016-06-02 21:29:09, "Ted Yu" <yuzhih...@gmail.com> wrote:
>Have you seen this line in EventType.java ?
>
>  RS_ZK_REGION_OPENED   (4, ExecutorType.MASTER_OPEN_REGION),
>
>If you follow RS_ZK_REGION_OPENED, you would see how the executor is used.
>
>On Thu, Jun 2, 2016 at 4:56 AM, WangYQ <wangyongqiang0...@163.com> wrote:
>
>> in hbase 0.98.10, class HMaster, method startServiceThread, hmaster open a
>> thread poolwith type MASTER_OPEN_REGION, but this pool is not used in any
>> place
>> can remove this.
>>
>>
>> thanks


pool MASTER_OPEN_REGION in hmaster is not used

2016-06-02 Thread WangYQ
in hbase 0.98.10, class HMaster, method startServiceThreads, hmaster opens a
thread pool of type MASTER_OPEN_REGION, but this pool is not used anywhere.
It can be removed.


thanks

Re: Re: how to make a safe use of hbase

2016-05-17 Thread WangYQ
I have 2 goals:
1. protect the 60010 web page
2. control access to hbase, such as reads and writes
for example, anyone who wants to access hbase must have the correct password


thanks



On 2016-05-17 22:04 , Ted Yu Wrote:


Is your goal to protect web page access ?


Take a look at HBASE-5291.



If I didn't understand your use case, please elaborate.


Use user@hbase in the future.


On Tue, May 17, 2016 at 4:02 AM, WangYQ <wangyongqiang0...@163.com> wrote:
in hbase, if we know the zookeeper address, we can write to and read from hbase
if we know the hmaster's address, we can see the 60010 page


how can we make hbase safe to use?
for example, to see the 60010 page or to write/read hbase, we must have the correct
password, like on linux



Re:Re: Re: question on "drain region servers"

2016-04-26 Thread WangYQ
yes, there is a tool, graceful_stop.sh, to gracefully stop a regionserver, and it can
move the regions back to the RS after the RS comes back.
but I cannot find the relation with drain region servers...


I think the drain region servers function is good, but I cannot think of a practical
use case








At 2016-04-26 16:01:55, "Dejan Menges" <dejan.men...@gmail.com> wrote:
>One of use cases we use it is graceful stop of regionserver - you unload
>regions from the server before you restart it. Of course, after restart you
>expect HBase to move regions back.
>
>Now I'm not really remembering correctly, but I kinda remember that one of
>the features was at least that it will move back regions which were already
>there, hence not destroy too much block locality.
>
>On Tue, Apr 26, 2016 at 8:15 AM WangYQ <wangyongqiang0...@163.com> wrote:
>
>> thanks
>> in hbase 0.99.0,  I find the rb file: draining_servers.rb
>>
>>
>> i have some suggestions on this tool:
>> 1. if I add rs hs1 to draining_servers, when hs1 restart, the zk node
>> still exists in zk, but hmaster will not treat hs1 as draining_servers
>> i think when we add a hs to draining_servers, we do not need to store
>> the start code in zk, just store the hostName and port
>> 2.  we add hs1 to draining_servers, but if hs1 always restart, we will
>> need to add hs1 several times
>>   when we need to delete the draining_servers info of hs1, we  will  need
>> to delete hs1 several times
>>
>>
>>
>> finally, what is the original motivation of this tool, some scenario
>> descriptions are good.
>>
>>
>>
>>
>>
>>
>> At 2016-04-26 11:33:10, "Ted Yu" <yuzhih...@gmail.com> wrote:
>> >Please take a look at:
>> >bin/draining_servers.rb
>> >
>> >On Mon, Apr 25, 2016 at 8:12 PM, WangYQ <wangyongqiang0...@163.com>
>> wrote:
>> >
>> >> in hbase,  I find there is a "drain regionServer" feature
>> >>
>> >>
>> >> if a rs is added to drain regionServer in ZK, then regions will not be
>> >> move to on these regionServers
>> >>
>> >>
>> >> but, how can a rs be add to  drain regionServer,   we add it handly or
>> rs
>> >> will add itself automaticly
>>


Re:Re: question on "drain region servers"

2016-04-26 Thread WangYQ
thanks
in hbase 0.99.0,  I find the rb file: draining_servers.rb


I have some suggestions on this tool:
1. if I add RS hs1 to draining_servers and hs1 restarts, the zk node still
exists in zk, but hmaster will no longer treat hs1 as a draining server.
   I think when we add an RS to draining_servers, we do not need to store the
start code in zk, just the hostname and port.
2. if we add hs1 to draining_servers but hs1 keeps restarting, we will need to
add hs1 several times,
   and when we need to delete the draining_servers info for hs1, we will also need to
delete hs1 several times.


finally, what was the original motivation for this tool? some scenario
descriptions would be good.






At 2016-04-26 11:33:10, "Ted Yu" <yuzhih...@gmail.com> wrote:
>Please take a look at:
>bin/draining_servers.rb
>
>On Mon, Apr 25, 2016 at 8:12 PM, WangYQ <wangyongqiang0...@163.com> wrote:
>
>> in hbase,  I find there is a "drain regionServer" feature
>>
>>
>> if a rs is added to drain regionServer in ZK, then regions will not be
>> move to on these regionServers
>>
>>
>> but, how can a rs be add to  drain regionServer,   we add it handly or rs
>> will add itself automaticly


question on "drain region servers"

2016-04-25 Thread WangYQ
in hbase,  I find there is a "drain regionServer" feature


if an RS is added as a draining regionServer in ZK, then regions will not be moved
onto that regionServer


but how can an RS be added as a draining regionServer? do we add it manually, or does
the RS add itself automatically?

Re: Re: Re: single thread threadpool for master_table_operation

2016-04-20 Thread WangYQ
cp means coprocessor

this step is carried out before upgrading hbase, so after removing the coprocessors
the old hbase will not be used anymore

so I want to increase the pool size in the old version to speed up the process of
removing the coprocessors



On 2016-04-20 22:01 , Ted Yu Wrote:

bq. all cp of all tables

What does cp mean ? Do you mean column family ?

bq. will never create/enable tables in the future

Then what would the cluster be used for ?

On Wed, Apr 20, 2016 at 6:16 AM, WangYQ <wangyongqiang0...@163.com> wrote:

> i want to remove all cp of all tables in hbase
> so i disable tables concurrently, and then modify tables
> for hundreds of tables, remove all cp costs 30 miniutes,  is not so fast
>
>
> so, i want to speed the whoe process. and will never create/enable tables
> in the future
>
>
> after examine the code, i want ot increase the  pool size for
> master_table_operation
>
>
> but not sure if there are any problems.
>
>
> thanks..
>
> At 2016-04-20 21:07:06, "Ted Yu" <yuzhih...@gmail.com> wrote:
> >Adding subject.
> >
> >Adding back user@hbase
> >
> >But the master wouldn't know what next action admin is going to perform,
> >right ?
> >
> >On Wed, Apr 20, 2016 at 5:59 AM, WangYQ <wangyongqiang0...@163.com>
> wrote:
> >
> >> if i just disable tables in concurrently,
> >> and will never enable, modify, create table
> >>
> >> i think is ok, right?
> >>
> >>
> >>
> >>
> >> On 2016-04-20 20:53, Ted Yu <yuzhih...@gmail.com> wrote:
> >>
> >>
> >> Have you seen the comment above that line ?
> >>
> >>// We depend on there being only one instance of this executor
> running
> >>// at a time.  To do concurrency, would need fencing of
> enable/disable
> >> of
> >>// tables.
> >>// Any time changing this maxThreads to > 1, pls see the comment at
> >>// AccessController#postCreateTableHandler
> >>
> >> BTW in the future, please send queries to user@hbase
> >>
> >> On Wed, Apr 20, 2016 at 5:50 AM, WangYQ <wangyongqiang0...@163.com>
> wrote:
> >>
> >>> hbase 0.98.10
> >>>
> >>> in class hmaster
> >>> line 1298, the threadpool size for master_table_operation is 1, can not
> >>> be set
> >>>
> >>> are there any problems if i disable tables in concurrently
> >>>
> >>> thanks
> >>
> >>
> >>
> >>
> >>
>


Re:Re: single thread threadpool for master_table_operation

2016-04-20 Thread WangYQ
I want to remove all coprocessors from all tables in hbase,
so I disable the tables concurrently and then modify them.
For hundreds of tables, removing all the coprocessors costs 30 minutes, which is not
so fast.


So I want to speed up the whole process, and I will never create/enable tables in
the future.
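For reference, a rough sketch of the client-side workflow described here, written
from memory against the 0.98-era client API; method names such as
HTableDescriptor.removeCoprocessor and HBaseAdmin.modifyTable should be checked
against your exact version, and note that even with a client-side thread pool the
disable/modify requests still funnel into the master's single
master_table_operation executor, which is exactly the bottleneck under discussion.

import java.util.ArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class RemoveCoprocessorsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // sharing one HBaseAdmin across threads is a simplification for this sketch
    final HBaseAdmin admin = new HBaseAdmin(conf);
    ExecutorService pool = Executors.newFixedThreadPool(8);

    for (final HTableDescriptor htd : admin.listTables()) {
      pool.submit(new Runnable() {
        public void run() {
          try {
            admin.disableTable(htd.getTableName());
            // copy the list first, then strip every coprocessor entry
            for (String cp : new ArrayList<String>(htd.getCoprocessors())) {
              htd.removeCoprocessor(cp);
            }
            admin.modifyTable(htd.getTableName(), htd);   // push the stripped descriptor
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
    admin.close();
  }
}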


After examining the code, I want to increase the pool size for
master_table_operation,


but I am not sure whether there are any problems.


thanks..

At 2016-04-20 21:07:06, "Ted Yu" <yuzhih...@gmail.com> wrote:
>Adding subject.
>
>Adding back user@hbase
>
>But the master wouldn't know what next action admin is going to perform,
>right ?
>
>On Wed, Apr 20, 2016 at 5:59 AM, WangYQ <wangyongqiang0...@163.com> wrote:
>
>> if i just disable tables in concurrently,
>> and will never enable, modify, create table
>>
>> i think is ok, right?
>>
>>
>>
>>
>> On 2016-04-20 20:53, Ted Yu <yuzhih...@gmail.com> wrote:
>>
>>
>> Have you seen the comment above that line ?
>>
>>// We depend on there being only one instance of this executor running
>>// at a time.  To do concurrency, would need fencing of enable/disable
>> of
>>// tables.
>>// Any time changing this maxThreads to > 1, pls see the comment at
>>// AccessController#postCreateTableHandler
>>
>> BTW in the future, please send queries to user@hbase
>>
>> On Wed, Apr 20, 2016 at 5:50 AM, WangYQ <wangyongqiang0...@163.com> wrote:
>>
>>> hbase 0.98.10
>>>
>>> in class hmaster
>>> line 1298, the threadpool size for master_table_operation is 1, can not
>>> be set
>>>
>>> are there any problems if i disable tables in concurrently
>>>
>>> thanks
>>
>>
>>
>>
>>


Re:completebulkload not mv or rename but copy and split many attempt times

2015-12-23 Thread WangYQ
this is because the table's regions have changed and no longer match the regions
that existed when you generated the HFiles.
If the bulkload process completes, the files should be moved into hbase.


I think it is better to delete all the HFiles and dirs when the bulkload is over.
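For context, a rough sketch of the programmatic equivalent of the completebulkload
command in the quoted mail (HBase 1.0-era client API written from memory; the HDFS
path and table name are taken from that mail). The loader only moves HFiles in place
when each file fits inside a single current region; if the table has split since the
HFiles were generated, the files are split and copied instead, which is the slowness
reported below.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
    HTable table = new HTable(conf, "data.md5_id2");
    try {
      // moves (or, if region boundaries changed, splits and copies) the HFiles
      loader.doBulkLoad(new Path("hdfs://tdhdfs/user/tongdun/id_hbase/1"), table);
    } finally {
      table.close();
    }
  }
}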








At 2015-12-23 16:35:10, "qihuang.zheng"  wrote:
>I Have a HFile generate by importtsv, the file is really large, from 100mb to 
>10G.
>I have changed hbase.hregion.max.filesize to 50GB(53687091200). also specify 
>src CanonicalServiceName same with hbase.
>hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles 
>hdfs://tdhdfs/user/tongdun/id_hbase/1 data.md5_id2 HADOOP_CLASSPATH=`hbase 
>classpath` hadoop jar hbase-1.0.2/lib/hbase-server-1.0.2.jar completebulkload 
>/user/tongdun/id_hbase/1 data.md5_id2
>But both completebulkload and LoadIncrementalHFiles did't just mv/rename hfile 
>expected. but instead copy and split hfile happening, which take long time.
>the logSplit occured while grouping HFiles, retry attempt XXXwill create child 
>_tmp dir one by one level.
>2015-12-23 15:52:04,909 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig: 
>CacheConfig:disabled 2015-12-23 15:52:05,006 INFO [LoadIncrementalHFiles-0] 
>mapreduce.LoadIncrementalHFiles: Trying to load 
>hfile=hdfs://tdhdfs/user/tongdun/id_hbase/1/id/01114a58782b4c369819673e4b3678ae
> first=f6eb30074a52ebb8c5f52ed1c85c2f0d last=f93061a29e9458fada2521ffe45ca385 
>2015-12-23 15:52:05,007 INFO [LoadIncrementalHFiles-0] 
>mapreduce.LoadIncrementalHFiles: HFile at 
>hdfs://tdhdfs/user/tongdun/id_hbase/1/id/01114a58782b4c369819673e4b3678ae no 
>longer fits inside a single region. Splitting... 2015-12-23 15:53:38,639 INFO 
>[LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles: Successfully split 
>into new HFiles 
>hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/9f6fe2d28ddc4f209be62757ace8611b.bottom
> and 
>hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/9f6fe2d28ddc4f209be62757ace8611b.top
> 2015-12-23 15:53:39,173 INFO [main] mapreduce.LoadIncrementalHFiles: Split 
>occured while grouping HFiles, retry attempt 1 with 2 files remaining to group 
>or split 2015-12-23 15:53:39,186 INFO [LoadIncrementalHFiles-1] 
>mapreduce.LoadIncrementalHFiles: Trying to load 
>hfile=hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/9f6fe2d28ddc4f209be62757ace8611b.bottom
> first=f6eb30074a52ebb8c5f52ed1c85c2f0d last=f733d2c504f22f71b191014d72e4d124 
>2015-12-23 15:53:39,188 INFO [LoadIncrementalHFiles-2] 
>mapreduce.LoadIncrementalHFiles: Trying to load 
>hfile=hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/9f6fe2d28ddc4f209be62757ace8611b.top
> first=f733d2c6407f5758e860195b6d2c10c1 last=f93061a29e9458fada2521ffe45ca385 
>2015-12-23 15:53:39,189 INFO [LoadIncrementalHFiles-2] 
>mapreduce.LoadIncrementalHFiles: HFile at 
>hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/9f6fe2d28ddc4f209be62757ace8611b.top
> no longer fits inside a single region. Splitting... 2015-12-23 15:54:27,722 
>INFO [LoadIncrementalHFiles-2] mapreduce.LoadIncrementalHFiles: Successfully 
>split into new HFiles 
>hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/_tmp/17ba0f42c4934f4c96218c784d3c3bb0.bottom
> and 
>hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/_tmp/17ba0f42c4934f4c96218c784d3c3bb0.top
> 2015-12-23 15:54:28,557 INFO [main] mapreduce.LoadIncrementalHFiles: Split 
>occured while grouping HFiles, retry attempt 2 with 2 files remaining to group 
>or split 2015-12-23 15:54:28,568 INFO [LoadIncrementalHFiles-4] 
>mapreduce.LoadIncrementalHFiles: Trying to load 
>hfile=hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/_tmp/17ba0f42c4934f4c96218c784d3c3bb0.bottom
> first=f733d2c6407f5758e860195b6d2c10c1 last=f77c7d357a76ff92bb16ec1ef79f31fb 
>2015-12-23 15:54:28,568 INFO [LoadIncrementalHFiles-5] 
>mapreduce.LoadIncrementalHFiles: Trying to load 
>hfile=hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/_tmp/17ba0f42c4934f4c96218c784d3c3bb0.top
> first=f77c7d3915c9a8b71c83c414aabd587d last=f93061a29e9458fada2521ffe45ca385 
>2015-12-23 15:54:28,568 INFO [LoadIncrementalHFiles-5] 
>mapreduce.LoadIncrementalHFiles: HFile at 
>hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/_tmp/17ba0f42c4934f4c96218c784d3c3bb0.top
> no longer fits inside a single region. Splitting... 2015-12-23 15:55:08,992 
>INFO [LoadIncrementalHFiles-5] mapreduce.LoadIncrementalHFiles: Successfully 
>split into new HFiles 
>hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/_tmp/_tmp/f7162cec4e404eabbea479b2a5446294.bottom
> and 
>hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/_tmp/_tmp/f7162cec4e404eabbea479b2a5446294.top
> 2015-12-23 15:55:09,424 INFO [main] mapreduce.LoadIncrementalHFiles: Split 
>occured while grouping HFiles, retry attempt 3 with 2 files remaining to group 
>or split 2015-12-23 15:55:09,431 INFO [LoadIncrementalHFiles-7] 
>mapreduce.LoadIncrementalHFiles: Trying to load 
>hfile=hdfs://tdhdfs/user/tongdun/id_hbase/1/id/_tmp/_tmp/_tmp/f7162cec4e404eabbea479b2a5446294.bottom
> first=f77c7d3915c9a8b71c83c414aabd587d