[ https://issues.apache.org/jira/browse/HDFS-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332481#comment-16332481 ]
Xiaoyu Yao commented on HDFS-13037:
-----------------------------------

This is already supported by HDFS-8983, which guards important directories against accidental deletion. Case 1 can be handled by adding:
{code}
fs.protected.directories=/apps/hive/warehouse/, …
{code}
Case 2 is also covered, since fs.protected.directories is a reconfigurable NameNode property as of HDFS-9349. You can modify fs.protected.directories and then run the following command to refresh the NameNode without restarting it:
{code}
hdfs dfsadmin -reconfig namenode <nn_addr>:<ipc_port> start
{code}

> Support protected path configuration
> ------------------------------------
>
>                 Key: HDFS-13037
>                 URL: https://issues.apache.org/jira/browse/HDFS-13037
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: namenode
>            Reporter: chuanjie.duan
>            Priority: Major
>
> Since Hadoop 2.7 the root path ("/") cannot be deleted under any circumstances. But paths like '/tmp', '/user', '/user/hive/warehouse', and so on, mostly should not be deleted either. So can we let users configure their own custom protected paths, just to guard against accidents?
> 1. add a configuration property to hdfs-site.xml
> 2. add a dfsadmin command for refreshing it
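For case 1 above, a minimal sketch of the corresponding hdfs-site.xml entry; the directory list is illustrative only, combining the paths mentioned in the comment and in the issue description rather than any recommended default:
{code:xml}
<!-- hdfs-site.xml: directories listed in fs.protected.directories cannot be
     deleted, even by the superuser, unless they are empty (HDFS-8983).
     The paths below are examples only, not recommended defaults. -->
<property>
  <name>fs.protected.directories</name>
  <value>/tmp,/user,/user/hive/warehouse,/apps/hive/warehouse</value>
</property>
{code}
After editing the file, the hdfs dfsadmin -reconfig namenode <nn_addr>:<ipc_port> start command shown in the comment picks up the change while the NameNode is running; the same command with status instead of start reports whether the reconfiguration task has completed.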