Hi,

I am using a 34 GB heap and my use case is also read-oriented. I also do
writes with MR. :)
Yesterday, under decent load I got pauses of 30-40 secs. Still, the RS were
not using the full 34 GB. I am thinking of doing some more tuning as I
expect the read load to increase.

Here are my GC settings for JDK6: -XX:NewSize=200m -XX:MaxNewSize=400m
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=70 -verbose:gc
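
In case it helps, this is roughly how I have those flags wired up in
hbase-env.sh, plus the extra GC-logging flags I'm adding to chase the
30-40 sec pauses (the log path and the PrintGC* flags are illustrative
additions on my side, not part of the settings above):

```shell
# conf/hbase-env.sh -- sketch only; the log path and the PrintGC* flags
# are illustrative additions for diagnosing long pauses, the rest are
# the JDK6 settings quoted above.
export HBASE_OPTS="$HBASE_OPTS \
  -XX:NewSize=200m -XX:MaxNewSize=400m \
  -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/hbase/gc-regionserver.log"
```

With that in place, the worst pauses should show up in the log, e.g.:
grep -E 'concurrent mode failure|promotion failed' /var/log/hbase/gc-regionserver.log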

~Anil


On Tue, Jul 9, 2013 at 8:12 AM, Bryan Beaudreault
<bbeaudrea...@hubspot.com> wrote:

> I see from the blog post that it is java7.  The question still stands
> regarding using that with hbase, considering the open jira
> https://issues.apache.org/jira/browse/HBASE-5261
>
>
> On Tue, Jul 9, 2013 at 11:03 AM, Bryan Beaudreault
> <bbeaudrea...@hubspot.com> wrote:
>
> > @Otis, are you guys running G1GC with java6 or java7? From what I'm
> > reading it seems to be more stable with better performance in java7, but
> > I also believe java7 is not officially supported by apache hadoop or
> > hbase yet.  I'm wondering if many people are using java7 for hbase
> > without issue despite the lack of support.
> >
> >
> > On Tue, Jul 9, 2013 at 1:52 AM, Azuryy Yu <azury...@gmail.com> wrote:
> >
> >> These are my HBase GC options for CMS; they work well.
> >>
> >> -XX:+DisableExplicitGC -XX:+UseCompressedOops -XX:PermSize=160m
> >> -XX:MaxPermSize=160m -XX:GCTimeRatio=19 -XX:SoftRefLRUPolicyMSPerMB=0
> >> -XX:SurvivorRatio=2 -XX:MaxTenuringThreshold=1 -XX:+UseFastAccessorMethods
> >> -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled
> >> -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSCompactAtFullCollection
> >> -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSClassUnloadingEnabled
> >> -XX:CMSMaxAbortablePrecleanTime=300 -XX:+CMSScavengeBeforeRemark
> >>
> >>
> >>
> >> On Tue, Jul 9, 2013 at 1:12 PM, Otis Gospodnetic
> >> <otis.gospodne...@gmail.com> wrote:
> >>
> >> > Hi,
> >> >
> >> > Check http://blog.sematext.com/2013/06/24/g1-cms-java-garbage-collector/
> >> >
> >> > Those graphs show RegionServer before and after switch to G1.  The
> >> > dashboard screenshot further below shows CMS (top row) vs. G1 (bottom
> >> > row).  After those tests we ended up switching to G1 across the whole
> >> > cluster and haven't had issues or major pauses since.... knock on
> >> > keyboard.
> >> >
> >> > Otis
> >> > --
> >> > Solr & ElasticSearch Support -- http://sematext.com/
> >> > Performance Monitoring -- http://sematext.com/spm
> >> >
> >> >
> >> >
> >> > On Mon, Jul 8, 2013 at 2:56 PM, Stack <st...@duboce.net> wrote:
> >> > > On Mon, Jul 8, 2013 at 11:09 AM, Suraj Varma <svarma...@gmail.com>
> >> > > wrote:
> >> > >
> >> > >> Hello:
> >> > >> We have an HBase cluster with region servers running on 8GB heap size
> >> > >> with a 0.6 block cache (it is a read-heavy cluster, with bursty write
> >> > >> traffic via MR jobs). (version: hbase-0.94.6.1)
> >> > >>
> >> > >> During HBaseCon, while speaking to a few attendees, I heard some folks
> >> > >> were running region servers as high as 24GB and some others in the
> >> > >> 16GB range.
> >> > >>
> >> > >> So - question: Are there any special GC recommendations (tuning
> >> > >> parameters, flags, etc) that folks who run at these large heaps can
> >> > >> recommend while moving up from an 8GB heap? i.e. for 16GB and for
> >> > >> 24GB RS heaps ... ?
> >> > >>
> >> > >> I'm especially concerned about long pauses causing zk session timeouts
> >> > >> and consequent RS shutdowns. Our boxes do have a lot of RAM and we are
> >> > >> exploring how we can use more of it for the cluster while maintaining
> >> > >> overall stability.
> >> > >>
> >> > >> Also - if there are clusters running multiple region servers per host,
> >> > >> I'd be very interested to know what RS heap sizes those are being run
> >> > >> at ... and whether this was chosen as an alternative to running a
> >> > >> single RS with a large heap.
> >> > >>
> >> > >> (I know I'll have to test the GC stuff out on my cluster and for my
> >> > >> workloads anyway ... but just trying to get a feel of what sort of
> >> > >> tuning options had to be used to have a stable HBase cluster with
> >> > >> 16 or 24GB RS heaps).
> >> > >>
> >> > >
> >> > >
> >> > > You hit full GC in this 8G heap, Suraj?  Can you try running one
> >> > > server at 24G to see how it does (with GC logging enabled so you can
> >> > > watch it over time)?  On one hand, more heap may make it so you avoid
> >> > > full GC -- if you are hitting them now at 8G -- because the application
> >> > > has more head room.  On the other hand, yes, if a full GC hits, it
> >> > > will be gone for proportionally longer than for your 8G heap.
> >> > >
> >> > > St.Ack
> >> >
> >>
> >
> >
>



-- 
Thanks & Regards,
Anil Gupta
