[ https://issues.apache.org/jira/browse/HDFS-1848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13022686#comment-13022686 ]
dhruba borthakur commented on HDFS-1848:
----------------------------------------

I too am not clear why the datanode process has to watch over "critical" disks. It would be nice if the datanode considered all disks the same.

> Datanodes should shutdown when a critical volume fails
> ------------------------------------------------------
>
>           Key: HDFS-1848
>           URL: https://issues.apache.org/jira/browse/HDFS-1848
>       Project: Hadoop HDFS
>    Issue Type: Improvement
>    Components: data-node
>      Reporter: Eli Collins
>       Fix For: 0.23.0
>
>
> A DN should shut down when a critical volume (e.g. the volume that hosts the OS, logs, pid, tmp dir, etc.) fails. The admin should be able to specify which volumes are critical; for example, they might specify the volume that lives on the boot disk. A failure in one of these volumes would not be subject to the threshold (HDFS-1161) or result in host decommissioning (HDFS-1847), as the decommissioning process would likely fail.
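To make the quoted proposal concrete, here is a minimal sketch of how a datanode might distinguish an admin-specified critical volume from an ordinary data volume when handling a disk failure. This is not the actual HDFS implementation: the configuration value, the CriticalVolumeChecker class, and the shutdown/threshold hooks are hypothetical names used only for illustration.

{code:java}
import java.io.File;
import java.util.HashSet;
import java.util.Set;

/**
 * Hypothetical sketch of the behavior proposed in HDFS-1848: volumes listed
 * by the admin as "critical" cause an immediate datanode shutdown on failure,
 * while ordinary data volumes remain subject to the tolerated-failure
 * threshold from HDFS-1161. None of these names exist in Hadoop itself.
 */
public class CriticalVolumeChecker {

  // Hypothetical config value, e.g. "/,/var/log/hadoop": a comma-separated
  // list of mount points the admin has marked as critical.
  private final Set<File> criticalVolumes = new HashSet<>();
  private final int toleratedDataVolumeFailures;
  private int dataVolumeFailures = 0;

  public CriticalVolumeChecker(String criticalVolumesConf,
                               int toleratedDataVolumeFailures) {
    for (String path : criticalVolumesConf.split(",")) {
      if (!path.trim().isEmpty()) {
        criticalVolumes.add(new File(path.trim()));
      }
    }
    this.toleratedDataVolumeFailures = toleratedDataVolumeFailures;
  }

  /**
   * Returns true if the datanode should shut down as a result of this
   * volume failure.
   */
  public boolean onVolumeFailure(File failedVolume) {
    if (criticalVolumes.contains(failedVolume)) {
      // Critical volume (OS, logs, pid, tmp): shut down immediately rather
      // than decommissioning, which would likely fail (HDFS-1847).
      return true;
    }
    // Ordinary data volume: only shut down once the configured
    // failure threshold is exceeded (HDFS-1161).
    dataVolumeFailures++;
    return dataVolumeFailures > toleratedDataVolumeFailures;
  }
}
{code}

Under this sketch, marking only "/" as critical would stop the datanode on a single root-volume failure, while failures of /data/1, /data/2, and so on would still be counted against the tolerated threshold.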