[ https://issues.apache.org/jira/browse/HDFS-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455244#comment-13455244 ]
Hadoop QA commented on HDFS-3731:
---------------------------------

-1 overall. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12545040/hdfs-3731.branch-023.patch.txt
  against trunk revision .

    -1 patch. The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3185//console

This message is automatically generated.

> 2.0 release upgrade must handle blocks being written from 1.0
> -------------------------------------------------------------
>
>                 Key: HDFS-3731
>                 URL: https://issues.apache.org/jira/browse/HDFS-3731
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 2.0.0-alpha
>            Reporter: Suresh Srinivas
>            Assignee: Kihwal Lee
>            Priority: Blocker
>             Fix For: 2.0.2-alpha
>
>         Attachments: hadoop1-bbw.tgz, HDFS-3731.002.patch, HDFS-3731.003.patch, hdfs-3731.branch-023.patch.txt
>
>
> Release 2.0 upgrades must handle blocks being written to (bbw) files from the 1.0 release. Problem reported by Brahma Reddy.
> The {{DataNode}} will only have one block pool after upgrading from a 1.x release. (This is because in the 1.x releases there were no block pools -- or equivalently, everything was in the same block pool.) During the upgrade, we should hardlink the block files from the {{blocksBeingWritten}} directory into the {{rbw}} directory of this block pool. Similarly, on {{-finalize}}, we should delete the {{blocksBeingWritten}} directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
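
[Editor's illustration] For readers unfamiliar with the upgrade step described above, here is a minimal sketch of the idea: hardlink every block file from the old {{blocksBeingWritten}} directory into the single block pool's {{rbw}} directory during upgrade, and delete {{blocksBeingWritten}} on {{-finalize}}. This is not the actual HDFS-3731 patch; the class name, method names, and directory layout below are assumptions for the example, and it uses plain {{java.nio.file}} calls rather than Hadoop's internal upgrade code.

{code:java}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

/**
 * Illustrative sketch only (hypothetical names and paths): link pre-upgrade
 * blocksBeingWritten files into the block pool's rbw directory, then remove
 * blocksBeingWritten when the upgrade is finalized.
 */
public class BbwUpgradeSketch {

  /** During upgrade: hardlink every file under blocksBeingWritten into rbw. */
  static void linkBlocksBeingWritten(Path bbwDir, Path rbwDir) throws IOException {
    Files.createDirectories(rbwDir);
    try (DirectoryStream<Path> blocks = Files.newDirectoryStream(bbwDir)) {
      for (Path block : blocks) {
        if (Files.isRegularFile(block)) {
          // Hardlink rather than copy: fast, no extra disk space, and the
          // previous layout remains usable if the upgrade is rolled back.
          Files.createLink(rbwDir.resolve(block.getFileName()), block);
        }
      }
    }
  }

  /** On -finalize: the old blocksBeingWritten directory is no longer needed. */
  static void deleteBlocksBeingWritten(Path bbwDir) throws IOException {
    try (Stream<Path> tree = Files.walk(bbwDir)) {
      // Delete children before parents.
      tree.sorted(Comparator.reverseOrder()).forEach(p -> {
        try {
          Files.delete(p);
        } catch (IOException e) {
          throw new RuntimeException("failed to delete " + p, e);
        }
      });
    }
  }

  public static void main(String[] args) throws IOException {
    // Hypothetical layout of a 1.x data directory being upgraded in place.
    Path bbw = Paths.get("data/current/blocksBeingWritten");
    Path rbw = Paths.get("data/current/BP-1/current/rbw");
    linkBlocksBeingWritten(bbw, rbw);
    // ... later, once -finalize is issued:
    deleteBlocksBeingWritten(bbw);
  }
}
{code}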