[ https://issues.apache.org/jira/browse/HBASE-4177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nicolas Liochon resolved HBASE-4177.
------------------------------------

       Resolution: Fixed
    Fix Version/s: 0.95.1

To me, this was fixed when we made the recoverLease synchronous. Please reopen 
if I'm wrong.
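
For reference, a minimal sketch of the "synchronous" behaviour described
above, assuming we simply poll DistributedFileSystem#recoverLease until the
Namenode reports the lease as recovered (the helper name and retry interval
are illustrative, not the actual HBase code):

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class LeaseRecoverySketch {
      // Block until the NN reports the lease on the log file as recovered,
      // instead of sleeping a fixed 1s and hoping recovery has finished.
      static void recoverLeaseSynchronously(FileSystem fs, Path logPath)
          throws IOException, InterruptedException {
        if (!(fs instanceof DistributedFileSystem)) {
          return; // nothing to recover on a non-HDFS filesystem
        }
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        // recoverLease returns true once the file is closed and safe to read.
        while (!dfs.recoverLease(logPath)) {
          Thread.sleep(1000); // retry interval is an assumption for the sketch
        }
      }
    }

The point is that log splitting only starts once recoverLease has actually
succeeded, rather than after an arbitrary fixed delay.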
                
> Handling read failures during recovery - when HMaster calls Namenode 
> recovery, the recovery may fail, leading to read failures while splitting 
> logs
> ------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-4177
>                 URL: https://issues.apache.org/jira/browse/HBASE-4177
>             Project: HBase
>          Issue Type: Bug
>          Components: master
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>            Priority: Critical
>             Fix For: 0.95.1
>
>
> As per the mailing thread with the heading
> 'Handling read failures during recovery' we found this problem.
> As part of log splitting the HMaster calls Namenode recovery.  The recovery 
> is an asynchronous process. 
> In HDFS
> =======
> Even though the client gets the updated block info from the Namenode on the
> first read failure, it discards the new info and keeps using the old info
> to retrieve the data from the datanode, so all the read
> retries fail. [The method reassigns its parameter, which is not reflected in
> the caller - a small example after the quoted description illustrates this.] 
> In HBASE
> =======
> In the HMaster code we currently wait for 1 second.  But if the recovery 
> fails, log splitting may not happen, which can lead to data loss.
> So we may need to decide on the actual delay to introduce once the 
> HMaster calls NN recovery.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
