We are also interested in a solution for this. With 
hbase.client.retries.number = 7 and hbase.client.pause = 400 ms, the time to fail 
came down to ~9 minutes (from 20 minutes). We now think 9 minutes is still too long.
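For reference, a minimal sketch of how these two settings can be applied on the 
client side before creating the Connection (property names per the HBase 1.x client; 
the values are the ones mentioned above, shown only for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FastFailConnection {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Fewer retries and a shorter pause shrink the total retry window.
            conf.setInt("hbase.client.retries.number", 7);
            conf.setLong("hbase.client.pause", 400); // base pause in ms; retry backoff multiplies it
            try (Connection connection = ConnectionFactory.createConnection(conf)) {
                // ... use the connection; failed operations now give up sooner ...
            }
        }
    }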

Thanks,
Hari

-----Original Message-----
From: PRANEESH KUMAR [mailto:praneesh.san...@gmail.com]
Sent: Monday, June 15, 2015 10:33 AM
To: user@hbase.apache.org
Subject: Re: How to make the client fast fail

Hi Michael,

We could have a monitoring thread interrupt the HBase client thread after a 
timeout, but instead of doing this I want the timeout (or some other exception) to 
be thrown by the HBase client itself.
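
For what it's worth, one way to get an exception from the client itself (rather than 
from an external watchdog thread) is to cap the per-RPC and per-operation time in the 
client Configuration. A sketch, assuming the hbase.rpc.timeout and 
hbase.client.operation.timeout properties of recent client versions; the values are 
illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ClientTimeouts {
        static Configuration fastFailConf() {
            Configuration conf = HBaseConfiguration.create();
            // Cap a single RPC to a region server.
            conf.setInt("hbase.rpc.timeout", 10000);              // 10 s
            // Cap a whole client operation across all of its retries; when this
            // expires the client throws instead of silently retrying for minutes.
            conf.setInt("hbase.client.operation.timeout", 30000); // 30 s
            return conf;
        }
    }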

On Thu, Jun 11, 2015 at 5:16 AM, Michael Segel wrote:

> Why not use two threads?
>
> So that regardless of your Hadoop settings, if you want something
> faster, you can use one thread as a timer and run the request in
> another. If you hit your timeout before you get a response, you can
> stop your thread.
> (YMMV depending on side effects... )
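
A rough sketch of that two-thread pattern, using a plain ExecutorService so the 
calling thread acts as the timer (the table name, row key, and 5-second limit below 
are only illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BoundedGet {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            ExecutorService pool = Executors.newSingleThreadExecutor();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("my_table"))) {
                // Run the request in a worker thread...
                Future<Result> future = pool.submit(() -> table.get(new Get(Bytes.toBytes("row1"))));
                try {
                    // ...while the calling thread acts as the timer.
                    Result result = future.get(5, TimeUnit.SECONDS);
                    System.out.println("Got: " + result);
                } catch (TimeoutException e) {
                    // Give up on our side; the underlying RPC may keep retrying internally.
                    future.cancel(true);
                    System.err.println("Request did not complete within 5 seconds");
                }
            } finally {
                pool.shutdownNow();
            }
        }
    }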
>
> > On Jun 10, 2015, at 12:55 AM, PRANEESH KUMAR wrote:
> >
> > Hi,
> >
> > I have created the Connection object with the default configuration. If
> > ZooKeeper, the HMaster, or a RegionServer is down, the client does not
> > fail fast; it took almost 20 minutes to throw an error.
> >
> > What is the best configuration to make the client fail fast?
> >
> > Also, what is the significance of changing the following parameters?
> >
> > hbase.client.retries.number
> > zookeeper.recovery.retry
> > zookeeper.session.timeout
> > zookeeper.recovery.retry.intervalmill
> > hbase.rpc.timeout
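
Roughly: hbase.client.retries.number and hbase.rpc.timeout bound the HBase retry 
loop itself, while the zookeeper.* settings bound how long the client keeps trying 
to reach ZooKeeper. A sketch of tightening all of them on the client Configuration 
(the values are arbitrary examples, not recommendations):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FastFailSettings {
        static Configuration create() {
            Configuration conf = HBaseConfiguration.create();
            conf.setInt("hbase.client.retries.number", 3);             // HBase operation retries
            conf.setInt("hbase.rpc.timeout", 10000);                   // single RPC cap, ms
            conf.setInt("zookeeper.recovery.retry", 1);                // retries of a failed ZooKeeper call
            conf.setInt("zookeeper.recovery.retry.intervalmill", 200); // pause between ZK retries, ms
            conf.setInt("zookeeper.session.timeout", 30000);           // ZK session timeout, ms
            return conf;
        }
    }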
> >
> > Regards,
> > Praneesh
>
>
