[ https://issues.apache.org/jira/browse/HADOOP-1752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Raghu Angadi resolved HADOOP-1752.
----------------------------------
Resolution: Won't Fix
This applies only to clusters upgrading from 0.13 or earlier; that upgrade path is
no longer supported in current HDFS.
> "dfsadmin -upgradeProgress force" should leave safe mode in order to push the
> upgrade forward.
> ----------------------------------------------------------------------------------------------
>
> Key: HADOOP-1752
> URL: https://issues.apache.org/jira/browse/HADOOP-1752
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.14.0
> Reporter: Konstantin Shvachko
> Assignee: Raghu Angadi
>
> I have a cluster (created before hadoop 0.14) on which 40% of data-node
> blocks were lost. I tried to upgrade it to 0.14.
> The distributed upgrade was scheduled correctly on the name-node and all
> data-nodes. But it never started, since
> there were not enough blocks for the name-node to leave safe mode.
> I first tried
> {code}
> bin/hadoop dfsadmin -safemode leave
> {code}
> But this is prohibited since the distributed upgrade is in progress. I tried
> {code}
> bin/hadoop dfsadmin -upgradeProgress force
> {code}
> But this did not work either, because the distributed upgrade does not start until
> the safe-mode conditions are met on the name-node.
> The solution would be to set the safe-mode threshold to 60%, provided of course
> that I knew exactly how many blocks were missing.
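> For illustration only, a minimal sketch of such a workaround as a name-node
> configuration change, assuming the 0.14-era property name
> dfs.safemode.threshold.pct (a fraction, so 0.6 corresponds to 60% of blocks
> reported) and that the name-node is restarted to pick it up:
> {code}
> <!-- conf/hadoop-site.xml (hypothetical workaround, not the proposed fix) -->
> <property>
>   <!-- fraction of blocks that must be reported before safe mode is left -->
>   <name>dfs.safemode.threshold.pct</name>
>   <value>0.6</value>
> </property>
> {code}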
> The "force" command was designed as a way for an administrator to get the
> upgrade going even if the cluster is not in perfect shape.
> This would let us save at least the data that is still available rather than losing
> everything.
> I propose to modify the force command so that it lets the cluster start the
> distributed upgrade even if safe mode is still on.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.