[ https://issues.apache.org/jira/browse/HBASE-3040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12914914#action_12914914 ]

jinglong.liujl commented on HBASE-3040:
---------------------------------------

We can reproduce this issue in our environment.
   Our environment is HBase 0.20.6 + Hadoop (CDH3b2): 5 region servers and 5 
datanodes, with 2193 regions and 48.95 TB of data in HDFS. Each client puts 5 
column families and 5 qualifiers in one row, and commits to HBase every 500 
rows. We start 8 clients per machine on 6 machines, so 48 clients in total. 
The block size is 64K, the region size is 256M, and each cell holds a 
100-byte random string. Each machine has 8 CPU cores, 48G of memory, and 
12 1TB disks.
   During a region split, the HMaster considers the daughter regions loaded, 
but the region server has not yet finished opening them.
   From the client, the daughter region cannot be seen. After several retries, 
an exception like the one below is raised:
    org.apache.hadoop.hbase.client.NoServerForRegionException: No server 
address listed in .META. for region 

> BlockIndex readIndex too slowly in heavy write scenario
> -------------------------------------------------------
>
>                 Key: HBASE-3040
>                 URL: https://issues.apache.org/jira/browse/HBASE-3040
>             Project: HBase
>          Issue Type: Improvement
>          Components: regionserver
>    Affects Versions: 0.20.6
>         Environment: 1 master, 7 region servers, 4 * 7 clients (all clients run 
> on the region server hosts), sequential put
>            Reporter: andychen
>
> region size is configured with 128M, block size is 64K, and the table has 5 
> column families
> at the beginning, when a region splits and the master assigns the daughters 
> to new region servers, the new region server opens the region, and readIndex 
> on this region's storefile (about 1000 blocks) takes 30~50ms; as the data 
> import goes on, the region server spends more and more time (sometimes up to 
> several seconds) loading the ~1000 block indices
> right now, we resolve this issue by getting all indices of one hfile 
> within one DFS read instead of ~1000 reads.
> is there any other better resolution?
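
For anyone else working around this, here is a minimal sketch of the one-read 
idea from the description above. The entry layout (offset, size, key length, 
key) and the class/method names are made up for illustration and are not the 
real HFile BlockIndex code or on-disk format; the point is replacing many 
small reads on the DFS stream with a single positional readFully over the 
whole index:

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.IOException;

    import org.apache.hadoop.fs.FSDataInputStream;

    // Illustrative only: not the real HFile BlockIndex format.
    public class BlockIndexSketch {

      public static class Entry {
        final long blockOffset;   // where the data block starts in the hfile
        final int blockSize;      // size of the data block
        final byte[] firstKey;    // first key stored in that block
        Entry(long blockOffset, int blockSize, byte[] firstKey) {
          this.blockOffset = blockOffset;
          this.blockSize = blockSize;
          this.firstKey = firstKey;
        }
      }

      // Slow path: every readLong/readInt/readFully is a separate small read
      // call against the DFS input stream, so ~1000 entries mean ~1000 reads.
      public static Entry[] readEntryByEntry(FSDataInputStream in, long indexOffset,
          int entryCount) throws IOException {
        Entry[] entries = new Entry[entryCount];
        in.seek(indexOffset);
        for (int i = 0; i < entryCount; i++) {
          long offset = in.readLong();
          int size = in.readInt();
          byte[] key = new byte[in.readInt()];
          in.readFully(key);
          entries[i] = new Entry(offset, size, key);
        }
        return entries;
      }

      // Fast path: fetch the whole index region with one positional read, then
      // parse the entries from the in-memory buffer.
      public static Entry[] readInOneShot(FSDataInputStream in, long indexOffset,
          int indexSizeBytes, int entryCount) throws IOException {
        byte[] buf = new byte[indexSizeBytes];
        in.readFully(indexOffset, buf);                 // single DFS read
        DataInputStream din = new DataInputStream(new ByteArrayInputStream(buf));
        Entry[] entries = new Entry[entryCount];
        for (int i = 0; i < entryCount; i++) {
          long offset = din.readLong();
          int size = din.readInt();
          byte[] key = new byte[din.readInt()];
          din.readFully(key);
          entries[i] = new Entry(offset, size, key);
        }
        return entries;
      }
    }

The one-shot variant trades a temporary in-memory buffer for the per-entry 
reads against the DFS stream, which is the same trade the workaround in the 
description makes.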

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
