I agree. We have to make this easier - to configure and to understand. And the 
defaults should be better. It looks like they only make sense for long-running 
M/R jobs.
Let me check whether there's a JIRA already; if so I'll revive it, otherwise 
I'll create one and we can have further discussions there.
Thanks for pointing this issue out.

-- Lars
      From: mukund murrali <mukundmurra...@gmail.com>
 To: user@hbase.apache.org; lars hofhansl <la...@apache.org> 
 Sent: Tuesday, June 16, 2015 12:21 AM
 Subject: Re: How to make the client fast fail
   
We are using HBase 1.0.0. Yes, we have gone through this blog. But even after 
configuring these parameters, we were not able to find out exactly how long it 
takes to fail fast. I am really curious whether there could be a single 
configuration to ensure client-level failing. Also, it would be great if 
someone could suggest optimal values for those parameters.
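For context, the parameters being discussed are client-side settings in 
hbase-site.xml. A sketch of where they would go (values are illustrative only, 
not recommendations, and treating hbase.client.operation.timeout as a single 
overall cap is an assumption worth verifying against your exact 1.0.x release):

```xml
<!-- Client-side hbase-site.xml: illustrative fast-fail settings -->
<configuration>
  <!-- Overall cap on one client operation, in ms (the closest thing to a single knob) -->
  <property>
    <name>hbase.client.operation.timeout</name>
    <value>30000</value>
  </property>
  <!-- Timeout for each individual RPC, in ms -->
  <property>
    <name>hbase.rpc.timeout</name>
    <value>10000</value>
  </property>
  <!-- Number of retries, and the base pause (ms) between them -->
  <property>
    <name>hbase.client.retries.number</name>
    <value>3</value>
  </property>
  <property>
    <name>hbase.client.pause</name>
    <value>200</value>
  </property>
  <!-- ZooKeeper connection retries -->
  <property>
    <name>zookeeper.recovery.retry</name>
    <value>1</value>
  </property>
</configuration>
```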



On Tue, Jun 16, 2015 at 12:43 PM, lars hofhansl <la...@apache.org> wrote:

Please always tell us which version of HBase you are using. We have fixed a lot 
of issues in this area over time. Here's an _old_ blog post I wrote about this: 
http://hadoop-hbase.blogspot.com/2012/09/hbase-client-timeouts.html

Using yet more threads to monitor timeouts of another thread is a bad idea, 
especially when the timeout is configurable in the first place.

-- Lars
      From: mukund murrali <mukundmurra...@gmail.com>
 To: user@hbase.apache.org
 Sent: Sunday, June 14, 2015 10:22 PM
 Subject: Re: How to make the client fast fail

It would be great if there were a single timeout configuration on the
client end, with all the other parameters fine-tuned based on that one
parameter. We have simply modified them on a trial basis to suit our needs.
We are also not sure what side effects configuring those parameters would cause.



On Mon, Jun 15, 2015 at 10:38 AM, <hariharan_sethura...@dell.com> wrote:

> We are also interested in a solution for this. With
> hbase.client.retries.number = 7 and hbase.client.pause = 400ms, it came down
> to ~9 mins (from 20 mins). Now we are thinking that 9 mins is also a big number.
>
> Thanks,
> Hari
>
> -----Original Message-----
> From: PRANEESH KUMAR [mailto:praneesh.san...@gmail.com]
> Sent: Monday, June 15, 2015 10:33 AM
> To: user@hbase.apache.org
> Subject: Re: How to make the client fast fail
>
> Hi Michael,
>
> We could have a monitoring thread interrupt the HBase client thread after a
> timeout, but instead of doing this I want the timeout or some exception
> to be thrown from the HBase client itself.
>
> On Thu, Jun 11, 2015 at 5:16 AM, Michael Segel
> wrote:
>
> > Threads?
> >
> > So that regardless of your Hadoop settings, if you want something
> > faster, you can use one thread for a timer and the request in
> > another. If you hit your timeout before you get a response, you can
> > stop your thread.
> > (YMMV depending on side effects...)
> >
> > > On Jun 10, 2015, at 12:55 AM, PRANEESH KUMAR wrote:
> > >
> > > Hi,
> > >
> > > I have got the Connection object with the default configuration. If
> > > ZooKeeper or the HMaster or a region server is down, the client
> > > doesn't fail fast, and it took almost 20 mins to throw an error.
> > >
> > > What is the best configuration to make the client fail fast?
> > >
> > > Also, what is the significance of changing the following parameters?
> > >
> > > hbase.client.retries.number
> > > zookeeper.recovery.retry
> > > zookeeper.session.timeout
> > > zookeeper.recovery.retry.intervalmill
> > > hbase.rpc.timeout
> > >
> > > Regards,
> > > Praneesh
> >
> >
>
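The numbers Hari reports above fall out of the client's retry arithmetic. A 
rough sketch, assuming HBase's exponential backoff multipliers (the 
HConstants.RETRY_BACKOFF table) and that each attempt can block for the full 
hbase.rpc.timeout (60s by default); how attempts are counted is 
version-dependent, so treat this as an estimate, not a formula from the docs:

```python
# Rough worst-case blocking time for one client operation, assuming the
# backoff multipliers from HConstants.RETRY_BACKOFF (last entry repeats).
RETRY_BACKOFF = [1, 2, 3, 5, 10, 20, 40, 100]

def worst_case_seconds(retries, pause_ms, rpc_timeout_ms=60000):
    # The client sleeps pause * backoff[i] before retry i.
    sleep_ms = sum(RETRY_BACKOFF[min(i, len(RETRY_BACKOFF) - 1)]
                   for i in range(retries)) * pause_ms
    attempts = retries + 1  # initial try plus the retries (version-dependent)
    return (attempts * rpc_timeout_ms + sleep_ms) / 1000.0

# retries=7, pause=400ms, default 60s rpc timeout:
print(worst_case_seconds(7, 400))  # ~512 s, in the ballpark of the ~9 mins observed
```

On this model, lowering hbase.rpc.timeout together with the retry count is what 
actually shrinks the wall-clock time; the pause only dominates once the per-RPC 
timeout is small.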