To: ...@gmail.com
Cc: user@hadoop.apache.org
Subject: Re: HDFS DataNode unavailable
Hello,
I think there are broadly two potential root causes:

1. Logs are routed to a volume that is too small to hold the expected
logging. You can review the settings in log4j.properties related to the
rolling file appender; these determine how large each log file can grow and
how many rolled files are retained before the oldest are deleted.
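For reference, this is roughly what the relevant rolling-appender section of Hadoop's log4j.properties looks like (the size and backup count below are illustrative defaults, not values read from your cluster):

```properties
# RollingFileAppender caps the active log file and keeps numbered backups
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
# Roll over once the active file reaches this size
log4j.appender.RFA.MaxFileSize=256MB
# Keep at most this many rolled files (datanode.log.1 ... datanode.log.20)
log4j.appender.RFA.MaxBackupIndex=20
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

If the volume holding ${hadoop.log.dir} cannot accommodate MaxFileSize multiplied by MaxBackupIndex (plus the other daemons' logs), writes to the log can fail, so either shrink these values or move the log directory to a larger volume.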
I have an HDFS cluster, version 2.7.2, with two namenodes and three datanodes. While uploading a file, an exception is thrown: java.io.IOException: Got error, status message, ack with firstBadLink as X:50010. I noticed that datanode logging has stopped: there is only datanode.log.1, no datanode.log. But the re