[ https://issues.apache.org/jira/browse/HDFS-15171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
zhuqi updated HDFS-15171:
-------------------------
Description: 
There are 30 storage dirs per datanode in our production cluster, so a restart can take a very long time, because sometimes the datanode does not shut down gracefully. Currently only the datanode's graceful-shutdown hook and the BlockPoolSlice shutdown invoke the saveDfsUsed function, so after an unclean shutdown the restarted datanode sometimes cannot reuse the dfsUsed cache. I propose adding a thread that calls the saveDfsUsed function periodically.

was:
There are 30 storage dirs in our production cluster, so a restart can take a very long time, because sometimes the datanode does not shut down gracefully. Currently only the datanode's graceful-shutdown hook and the BlockPoolSlice shutdown invoke the saveDfsUsed function, so after an unclean shutdown the restarted datanode sometimes cannot reuse the dfsUsed cache. I propose adding a thread that calls the saveDfsUsed function periodically.

> Add a thread to call saveDfsUsed periodically, to prevent an overly long
> datanode restart time.
> -------------------------------------------------------------------------------------------
>
>                 Key: HDFS-15171
>                 URL: https://issues.apache.org/jira/browse/HDFS-15171
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 3.2.0
>            Reporter: zhuqi
>            Assignee: zhuqi
>            Priority: Major
>
> There are 30 storage dirs per datanode in our production cluster, so a restart
> can take a very long time, because sometimes the datanode does not shut down
> gracefully. Currently only the datanode's graceful-shutdown hook and the
> BlockPoolSlice shutdown invoke the saveDfsUsed function, so after an unclean
> shutdown the restarted datanode sometimes cannot reuse the dfsUsed cache. I
> propose adding a thread that calls the saveDfsUsed function periodically.
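A minimal sketch of the proposed background saver, assuming the idea is a daemon thread on a fixed schedule. saveDfsUsed is the real method on BlockPoolSlice in HDFS, but here it is stood in for by a plain Runnable so only the scheduling idea is shown; the class name PeriodicDfsUsedSaver and the interval parameter are hypothetical, not part of the HDFS codebase.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: periodically persist dfsUsed so a crashed datanode
// can still reuse the cache on restart, instead of rescanning all dirs.
public class PeriodicDfsUsedSaver {
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "dfsUsedSaver");
            t.setDaemon(true); // must never block datanode shutdown
            return t;
        });

    /** Schedule saveDfsUsed (passed in as a Runnable) every intervalMs. */
    public void start(Runnable saveDfsUsed, long intervalMs) {
        // fixed *delay* so a slow disk write never causes overlapping runs
        scheduler.scheduleWithFixedDelay(
            saveDfsUsed, intervalMs, intervalMs, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

With this shape, the graceful-shutdown hook stays as a final save, and the periodic thread simply bounds how stale the on-disk dfsUsed cache can be after a crash.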
--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org