Hello Chris Nauroth,

Thank you for your advice. I just saw your email. I will check the remaining information in the logs, and I will consider upgrading the cluster in the near future. Thank you very much.

He Hao

Sent from Mail for Windows

From: Chris Nauroth

Hello,

I think broadly there could be two potential root-cause explanations:

1. Logs are routed to a volume that is too small to hold the expected logging. You can review the configuration settings in log4j.properties related to the rolling file appender. These determine how large logs can grow and how many of the old rolled files to retain. If the maximum would exceed the capacity of the volume holding these logs, then you either need to configure smaller retention or redirect the logs to a larger volume.

2. Some error condition caused abnormal log spam. If the log isn't there anymore, it's difficult to say what this could have been specifically. You could keep an eye on the logs for the next few days after the restart to see whether there are many unexpected errors.

On a separate note, version 2.7.2 is quite old, released in 2017. It's missing numerous bug fixes and security patches. I recommend looking into an upgrade to 2.10.2 in the short term, followed by a plan for getting onto a currently supported 3.x release.

I hope this helps.

Chris Nauroth

On Mon, Oct 24, 2022 at 11:31 PM hehaore...@gmail.com <hehaore...@gmail.com> wrote:
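For readers following Chris's first point, a rolling-file-appender retention cap in log4j 1.x (as shipped with Hadoop 2.x) might look like the sketch below. The appender name `RFA` matches the default Hadoop log4j.properties; the specific size and backup-count values are illustrative assumptions, not recommendations for any particular cluster:

```properties
# Sketch: bound log usage for the rolling file appender.
# With these assumed values, total usage on this volume is capped at
# roughly MaxFileSize * (MaxBackupIndex + 1), i.e. about 256 MB * 21.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=20
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

If the capped total would still exceed the free space on the volume, either lower these values or point `hadoop.log.dir` at a larger volume.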