[ https://issues.apache.org/jira/browse/HDFS-15391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17126730#comment-17126730 ]
Ayush Saxena edited comment on HDFS-15391 at 6/5/20, 12:31 PM:
---------------------------------------------------------------

Thanks. These are two different traces, correct? You tried restarting the NameNode twice, and it failed once on CLOSE_OP and the other time on TRUNCATE, correct? What was the exception during the write?

was (Author: ayushtkn):
Thanks. These are two different traces, correct? You tried restarting the NameNode twice, and it failed once on CLOSE_OP and the other time on TRUNCATE, correct?

> Due to edit log corruption, the Standby NameNode could not properly load the
> edit log, resulting in abnormal exit of the service and failure to restart
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-15391
>                 URL: https://issues.apache.org/jira/browse/HDFS-15391
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 3.2.0
>            Reporter: huhaiyang
>            Priority: Critical
>
> In the production environment, on cluster version 3.2.0,
> we found that due to edit log corruption, the Standby NameNode could not
> properly load the edit log, resulting in abnormal exit of the service and
> failure to restart.
> {noformat}
> The specific scenario is that Flink writes to HDFS (a replicated file), and
> when an exception occurs while writing the file, the following operations
> are performed:
> 1. close file
> 2. open file
> 3. truncate file
> 4. append file
> {noformat}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)