How big is your cluster and what version of HBase are you on?
HBase version: 0.20.2, r834515
20 machines, 4 cores, 12 GB RAM each (1U servers)

I dispatched my job, a single process with many threads, to these 20
machines.
There is only one "ZooKeeper server connection successful" message on the
console, which means each process sets up just one connection to HBase,
because HConnectionManager is a singleton, right?
I'll try to test performance by running more processes (and therefore more
HBase connections).
Any suggestions?
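For reference, here is a rough sketch of the behaviour I'm describing. This is a simplified simulation, not the real HConnectionManager source: it just models a cache keyed by configuration object, which as far as I understand is why every HTable built from the same HBaseConfiguration in one JVM shares a single underlying ZooKeeper/HBase connection.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch (NOT the real HBase code) of how HConnectionManager
// caches connections: one connection per configuration instance, so all
// threads in a single process that share one HBaseConfiguration also share
// one ZooKeeper/HBase connection -- hence the single "connection" log line.
class ConnectionCache {
    // keyed by the configuration object, standing in for HBaseConfiguration
    private static final Map<Object, String> CONNECTIONS = new HashMap<>();
    private static int opened = 0;

    // synchronized so concurrent threads still open exactly one connection
    public static synchronized String getConnection(Object conf) {
        String conn = CONNECTIONS.get(conf);
        if (conn == null) {
            opened++;
            conn = "zk-connection-" + opened; // stands in for a real connection
            CONNECTIONS.put(conf, conn);
        }
        return conn;
    }

    public static synchronized int openedCount() {
        return opened;
    }

    public static void main(String[] args) {
        Object confA = new Object(); // stands in for one HBaseConfiguration
        Object confB = new Object(); // a second, distinct configuration

        // many clients sharing one conf reuse the single cached connection
        for (int i = 0; i < 600; i++) {
            getConnection(confA);
        }
        System.out.println("connections after 600 clients, one conf: "
                + openedCount());

        // a distinct configuration object forces a second connection, which
        // is how one process could hold more than one HBase connection
        getConnection(confB);
        System.out.println("connections after adding a second conf: "
                + openedCount());
    }
}
```

So, if I understand correctly, running more processes (as I plan to test) or constructing a separate configuration per logical client would each be ways to open more connections per machine.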


Fleming Chiu(邱宏明)
707-6128
[email protected]
週一無肉日吃素救地球(Meat Free Monday Taiwan)




                                                                                
                                                                      
"Jonathan Gray" <[email protected]>
To: <[email protected]>
cc: (bcc: Y_823910/TSMC)
Subject: RE: Split META manually
2010/03/12 06:02 PM
Please respond to hbase-user




Are these concurrent threads running in a single JVM?  Or is this MapReduce
with 600 tasks running at once?

How big is your cluster and what version of HBase are you on?

> -----Original Message-----
> From: [email protected] [mailto:[email protected]]
> Sent: Friday, March 12, 2010 1:42 AM
> To: [email protected]
> Subject: RE: Split META manually
>
>
> Are you seeing a META bottleneck even after clients are running for
> some time? Yes, I think that's right!
>
> My job runs 600 tasks in total. Each task reads HBase data directly,
> keeps it on the local machine for a join, then writes the join result
> back to HBase.
> If I start all 600 programs concurrently, the job takes 2329 sec
> (600 clients hit META concurrently and aggressively).
> If I instead start only 100 concurrent programs and reuse the cached
> META info for the subsequent 5 rounds of the job (so only 100
> concurrent programs hit META, and only in the first round), it takes
> 839 sec.
> So I think you are right that HBase can't sustain that many concurrent
> META requests.
>
>
>
> Fleming Chiu(邱宏明)
> 707-6128
> [email protected]
> 週一無肉日吃素救地球(Meat Free Monday Taiwan)
>
>
>
>
>
> "Jonathan Gray" <[email protected]>
> To: <[email protected]>
> cc: (bcc: Y_823910/TSMC)
> Subject: RE: Split META manually
> 2010/03/12 03:11 PM
> Please respond to hbase-user
>
>
>
>
>
>
> Fleming,
>
> We're looking at a few different ideas for this problem right now.
>
> One is to make an efficient method for warming up a client's META
> cache by issuing a META scan for a single table or all tables.  This
> will be significantly faster than lots of gets.
>
> The other bigger change is that META may move (at least partially) into
> ZooKeeper for 0.21.  This would be beneficial as ZK allows for reading
> from
> many replicas.
>
> Splitting META sounds like a good idea, but as of today it's really
> not well supported by HBase.  I think most of the team is working
> towards the above changes before making a split META work.  If you're
> really interested in it, coming up with some good unit tests is a good
> start and maybe we can work on that too.
>
> Are you seeing a META bottleneck even after clients are running for
> some time?  It sounds like you just did a heavy concurrency test of
> artificial META reads.  The clients aggressively cache META results,
> so they should not sustain their META requests after warming up.
>
> JG
>
> > -----Original Message-----
> > From: [email protected] [mailto:[email protected]]
> > Sent: Thursday, March 11, 2010 10:35 PM
> > To: [email protected]
> > Subject: Re: Split META manually
> >
> > It doesn't work!
> > I would like the META table to be splittable by table name.
> > If I had 10 tables, there would be 10 regions in META that could be
> > dispatched to different region servers, so many clients could
> > concurrently read different tables' data without META becoming a
> > bottleneck.
> > I started 2000 concurrent HBase Java clients fetching the META
> > entries of a single table (187 regions), which took 302 sec.
> > What I want to do is improve META performance by splitting META.
> >
> >
> >
> >
> >
> > Fleming Chiu(邱宏明)
> > 707-6128
> > [email protected]
> > 週一無肉日吃素救地球(Meat Free Monday Taiwan)
> >
> >
> >
> >
> >
> > [email protected]
> > Sent by: [email protected]
> > To: [email protected]
> > cc: (bcc: Y_823910/TSMC)
> > Subject: Re: Split META manually
> > 2010/03/12 02:01 PM
> > Please respond to hbase-user
> >
> >
> >
> >
> >
> >
> > Why split .META.?  I'm not sure it works properly, so I would advise
> > against it (we don't have tests in place for that; we've not been too
> > concerned about it up to now, since your install would have to be
> > massive for .META. to split).
> > St.Ack
> >
> > 2010/3/11  <[email protected]>:
> > > Hi there,
> > >
> > > I want to split the META table manually, but I wonder how to set
> > > the optional Region Key on the web page
> > > (using a value like BIG_TABLE,FRPFXRD_NF61904-01.001.Main.0,1268309701214).
> > >
> > > BIG_TABLE,FRPFXRD_NF61904-01.001.Main.0,1268309701214
> > >   column=info:regioninfo, timestamp=1268309711446,
> > >   value=REGION => {NAME => 'BIG_TABLE,FRPFXRD_NF61904-01.001.Main.0,1268309701214',
> > >   STARTKEY => 'FRPFXRD_NF61904-01.001.Main.0',
> > >   ENDKEY => 'FRPFXRD_NG50972-01.001.Main.1', ENCODED => 1864321617, T
> > >
> > > There is only one region in our META table now. If I want to split
> > > it by each table's start key, is that possible?  That could relieve
> > > the META bottleneck when multiple clients issue read requests.
> > > Any ideas?
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > Fleming Chiu(邱宏明)
> > > 707-6128
> > > [email protected]
> > > 週一無肉日吃素救地球(Meat Free Monday Taiwan)
> > >
> > >
> > >
> > > ---------------------------------------------------------------------------
> > >                              TSMC PROPERTY
> > > This email communication (and any attachments) is proprietary
> > > information for the sole use of its intended recipient. Any
> > > unauthorized review, use or distribution by anyone other than the
> > > intended recipient is strictly prohibited. If you are not the
> > > intended recipient, please notify the sender by replying to this
> > > email, and then delete this email and any copies of it immediately.
> > > Thank you.
> > > ---------------------------------------------------------------------------
> > >
> > >
> > >
> > >
> >
> >
> >
> >
> >
> >
>
>
>
>
>
>
>
>








