[ https://issues.apache.org/jira/browse/HDFS-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13437233#comment-13437233 ]

Todd Lipcon commented on HDFS-3731:
-----------------------------------

I'm on vacation for the next week or so (just doing a quick email check), but I 
can't think of any reason why the new recovery protocol wouldn't work properly 
with Colin's solution. So I'm +1 on the design (but I haven't looked at the 
patch).
                
> 2.0 release upgrade must handle blocks being written from 1.0
> -------------------------------------------------------------
>
>                 Key: HDFS-3731
>                 URL: https://issues.apache.org/jira/browse/HDFS-3731
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 2.0.0-alpha
>            Reporter: Suresh Srinivas
>            Assignee: Colin Patrick McCabe
>            Priority: Blocker
>         Attachments: HDFS-3731.002.patch, HDFS-3731.003.patch
>
>
> Release 2.0 upgrades must handle blocks-being-written (bbw) files from the 
> 1.0 release. Problem reported by Brahma Reddy.
> The {{DataNode}} will only have one block pool after upgrading from a 1.x 
> release. (This is because the 1.x releases had no block pools; equivalently, 
> everything was in the same block pool.) During the upgrade, we should 
> hardlink the block files from the {{blocksBeingWritten}} directory into the 
> {{rbw}} directory of this block pool. Similarly, on {{-finalize}}, we should 
> delete the {{blocksBeingWritten}} directory.
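For illustration only, here is a minimal sketch of the upgrade step described
above. The class and helper names (BbwUpgradeSketch, linkBbwIntoRbw,
finalizeUpgrade) are hypothetical and are not taken from the actual patch; the
sketch just shows the hardlink-then-delete idea using java.nio.file:

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class BbwUpgradeSketch {

      /**
       * During upgrade: hard-link every file under the 1.x blocksBeingWritten
       * directory into the single block pool's rbw directory.
       */
      static void linkBbwIntoRbw(Path bbwDir, Path rbwDir) throws IOException {
        Files.createDirectories(rbwDir);
        try (DirectoryStream<Path> files = Files.newDirectoryStream(bbwDir)) {
          for (Path file : files) {
            // Hard link rather than copy: no block data is duplicated, and
            // the old directory stays intact until the upgrade is finalized.
            Files.createLink(rbwDir.resolve(file.getFileName()), file);
          }
        }
      }

      /**
       * On -finalize: delete the old blocksBeingWritten directory. Only the
       * old names go away; the rbw hard links keep the block data alive.
       */
      static void finalizeUpgrade(Path bbwDir) throws IOException {
        try (DirectoryStream<Path> files = Files.newDirectoryStream(bbwDir)) {
          for (Path file : files) {
            Files.delete(file);
          }
        }
        Files.delete(bbwDir);
      }
    }

Because the {{rbw}} entries are hard links, deleting the
{{blocksBeingWritten}} names at finalize time does not remove the block data
itself, and leaving the old directory untouched until then keeps a rollback
before {{-finalize}} possible.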
