Ted,

Thanks so much for that information. I now see why this split so often, but
what I am not sure of is how to fix it without blowing away the cluster. Add
more heap?
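For context, the threshold used by the default split policy Ted points to below can be sketched numerically. This is a hedged sketch of the 0.94-era IncreasingToUpperBoundRegionSplitPolicy behaviour as I understand it (check the javadoc for your exact release): a region splits once it exceeds min(max file size, flush size × R³), where R is the number of regions of that table on the same region server. Plugging in the values from the posted hbase-site.xml shows why splits fire long before regions reach 5 GB:

```python
# Sketch of the split threshold under IncreasingToUpperBoundRegionSplitPolicy
# (0.94-era behaviour, assumed; verify against the javadoc of your release).
FLUSH_SIZE = 134217728      # hbase.hregion.memstore.flush.size from the posted config
MAX_FILE_SIZE = 5073741824  # hbase.hregion.max.filesize from the posted config

def split_threshold(regions_on_server: int) -> int:
    """Size in bytes at which the next split triggers, given R regions
    of the table already hosted on the same region server."""
    return min(MAX_FILE_SIZE, FLUSH_SIZE * regions_on_server ** 3)

for r in range(1, 5):
    print(r, split_threshold(r))
# R=1 splits at 128 MB, R=2 at 1 GB, R=3 at ~3.4 GB, R=4 is capped at max.filesize
```

So early regions split at quite small sizes, which is consistent with ending up with many ~1 GB regions.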

Another symptom I have noticed is that load on the HBase master daemon has
been quite high (load average 4.0, whereas it used to be 1.0).
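For what it's worth, one mitigation that avoids rebuilding the cluster (an assumption on my part, not something confirmed in this thread) is to pin the split policy back to the constant-size one in hbase-site.xml, so regions split only when they reach hbase.hregion.max.filesize rather than at the increasing thresholds:

```xml
<!-- Hypothetical mitigation, not from this thread: split only at
     hbase.hregion.max.filesize instead of the increasing thresholds. -->
<property>
  <name>hbase.regionserver.region.split.policy</name>
  <value>org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy</value>
</property>
```

This would require a region server restart to take effect, and it does not merge the regions that already exist.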

Thanks,
Pere
 
On Nov 5, 2014, at 9:56 PM, Ted Yu <yuzhih...@gmail.com> wrote:

> IncreasingToUpperBoundRegionSplitPolicy is the default split policy.
> 
> You can read the javadoc of this class to see how it works.
> 
> Cheers
> 
> On Wed, Nov 5, 2014 at 9:39 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> 
>> Can you provide a bit more information (such as HBase release) ?
>> 
>> If you pastebin one of the region servers' log, that would help us
>> determine the cause.
>> 
>> Cheers
>> 
>> 
>> On Wed, Nov 5, 2014 at 9:29 PM, Pere Kyle <p...@whisper.sh> wrote:
>> 
>>> Hello,
>>> 
>>> Recently our cluster, which had been running fine for 2 weeks, split to
>>> 1024 regions at 1GB per region, and after this split the cluster is
>>> unusable. Using the performance benchmark I was getting a little better
>>> than 100 w/s, whereas before it was 5000 w/s. There are 15 nodes of
>>> m2.2xlarge with 8GB of heap reserved for HBase.
>>> 
>>> Any ideas? I am stumped.
>>> 
>>> Thanks,
>>> Pere
>>> 
>>> Here is the current hbase-site.xml:
>>> <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>> <configuration>
>>> <property><name>hbase.snapshot.enabled</name><value>true</value></property>
>>> 
>>> <property><name>fs.hdfs.impl</name><value>emr.hbase.fs.BlockableFileSystem</value></property>
>>> 
>>> <property><name>hbase.regionserver.handler.count</name><value>50</value></property>
>>> 
>>> <property><name>hbase.cluster.distributed</name><value>true</value></property>
>>> 
>>> <property><name>hbase.tmp.dir</name><value>/mnt/var/lib/hbase/tmp-data</value></property>
>>> 
>>> <property><name>hbase.master.wait.for.log.splitting</name><value>true</value></property>
>>> 
>>> <property><name>hbase.hregion.memstore.flush.size</name><value>134217728</value></property>
>>> <property><name>hbase.hregion.max.filesize</name><value>5073741824</value></property>
>>> 
>>> <property><name>zookeeper.session.timeout</name><value>60000</value></property>
>>> 
>>> <property><name>hbase.thrift.maxQueuedRequests</name><value>0</value></property>
>>> 
>>> <property><name>hbase.client.scanner.caching</name><value>1000</value></property>
>>> 
>>> <property><name>hbase.hregion.memstore.block.multiplier</name><value>4</value></property>
>>> </configuration>
>>> 
>>> hbase-env.sh
>>> # The maximum amount of heap to use, in MB. Default is 1000.
>>> export HBASE_HEAPSIZE=8000
>>> 
>>> # Extra Java runtime options.
>>> # Below are what we set by default.  May only work with SUN JVM.
>>> # For more on why as well as other possible settings,
>>> # see http://wiki.apache.org/hadoop/PerformanceTuning
>>> export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
>>> 
>> 
>> 
>> 
