Be careful what you wish for. 

You want to fail fast, OK, but when you shorten the HBase timers, you can run 
into other problems. 
The simplest solution is to use a timer / timeout thread in your application. 

You want to do it this way because you are asking for an application-specific 
solution, while HBase is a shared resource. 

Failing fast and failing often is no way to run an HBase/Hadoop cluster.  ;-) 
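A minimal sketch of that timer/timeout-thread approach in plain Java: run the blocking call in a worker thread and bound the wait with Future.get(). The class name, the slowHBaseCall() stand-in, and the timing values are all hypothetical; a real version would wrap an actual Table.get() or scan instead of the sleep.

```java
import java.util.concurrent.*;

public class ClientTimeout {

    // Hypothetical stand-in for a blocking HBase client call.
    static String slowHBaseCall(long millis) throws InterruptedException {
        Thread.sleep(millis);
        return "row-data";
    }

    // Runs the call with an application-level deadline; abandons it on timeout.
    static String callWithDeadline(long callMillis, long deadlineMillis)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> future = pool.submit(() -> slowHBaseCall(callMillis));
        try {
            return future.get(deadlineMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupts the worker thread
            return null;         // or rethrow an application-level exception
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // Fast call completes within the deadline...
        System.out.println(callWithDeadline(50, 1000));  // row-data
        // ...slow call is abandoned once the deadline passes.
        System.out.println(callWithDeadline(5000, 200)); // null
    }
}
```

Note the side effect Michael alludes to below: cancel(true) only interrupts the worker; the underlying RPC may still be in flight on the cluster.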

> On Jun 14, 2015, at 10:03 PM, PRANEESH KUMAR <praneesh.san...@gmail.com> 
> wrote:
> 
> Hi Michael,
> 
> We can have a monitoring thread that interrupts the HBase client thread
> after a timeout, but instead of doing this I want the timeout or some
> exception to be thrown from the HBase client itself.
> 
> On Thu, Jun 11, 2015 at 5:16 AM, Michael Segel <michael_se...@hotmail.com>
> wrote:
> 
>> threads?
>> 
>> So that regardless of your Hadoop settings, if you want something faster,
>> you can use one thread for a timer and run the request in another. So
>> if you hit your timeout before you get a response, you can stop your thread.
>> (YMMV depending on side effects… )
>> 
>>> On Jun 10, 2015, at 12:55 AM, PRANEESH KUMAR <praneesh.san...@gmail.com>
>> wrote:
>>> 
>>> Hi,
>>> 
>>> I have got the Connection object with the default configuration; if
>>> ZooKeeper, the HMaster, or a RegionServer is down, the client doesn't fail
>>> fast and took almost 20 minutes to throw an error.
>>> 
>>> What is the best configuration to make the client fail fast?
>>> 
>>> Also, what is the significance of changing the following parameters?
>>> 
>>> hbase.client.retries.number
>>> zookeeper.recovery.retry
>>> zookeeper.session.timeout
>>> zookeeper.recovery.retry.intervalmill
>>> hbase.rpc.timeout
>>> 
>>> Regards,
>>> Praneesh
>> 
>> 
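For reference, the parameters Praneesh lists above are client-side settings that can be lowered in hbase-site.xml. The values below are purely illustrative, not recommendations; as noted above, overly aggressive values on a shared, loaded cluster trade the slow failure for spurious ones.

```xml
<!-- Illustrative overrides only; tune against your own cluster. -->
<property>
  <name>hbase.client.retries.number</name>
  <value>3</value>   <!-- fewer retries than the default = faster failure -->
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>10000</value> <!-- per-RPC timeout, in milliseconds -->
</property>
<property>
  <name>zookeeper.session.timeout</name>
  <value>30000</value> <!-- ms before ZooKeeper declares the session dead -->
</property>
```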
