[ 
https://issues.apache.org/jira/browse/HDFS-15391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17133296#comment-17133296
 ] 

hemanthboyina commented on HDFS-15391:
--------------------------------------

{quote}The block size should be 108764672 in the first 
CloseOp (TXID=126060942290).
After truncate is used, the block size is 63154347.
Both CloseOps use the same block instance, which causes the first 
CloseOp to record the wrong block size.
When the second CloseOp (TXID=126060943585) is replayed, the file is not in 
the UnderConstruction state, and the SNN goes down.
{quote}
HDFS-15175 has reported a similar kind of issue.
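The aliasing described in the quote can be sketched in plain Java. This is a hypothetical illustration (the class names below are stand-ins, not the actual HDFS `Block`/`FSEditLogOp` types): if two CloseOp records hold a reference to the same mutable block object instead of a copy, a later truncate rewrites the length seen by both, so the first CloseOp's original size is lost.

```java
// Hypothetical sketch of the shared-mutable-block problem: two CloseOps
// reference one Block instance, so truncating the block in place also
// changes what the first CloseOp reports.
public class CloseOpAliasing {
    // Stand-in for a block with a mutable length field.
    static class Block {
        long numBytes;
        Block(long numBytes) { this.numBytes = numBytes; }
    }

    // Stand-in for a CloseOp edit-log record that keeps a reference to the
    // block rather than a defensive copy.
    static class CloseOp {
        final long txid;
        final Block block;
        CloseOp(long txid, Block block) { this.txid = txid; this.block = block; }
    }

    public static void main(String[] args) {
        Block shared = new Block(108764672L);
        CloseOp first = new CloseOp(126060942290L, shared);   // should log 108764672

        shared.numBytes = 63154347L;                          // truncate mutates in place
        CloseOp second = new CloseOp(126060943585L, shared);

        // Both ops now report the truncated size; the first op's original
        // length is gone, which is the wrong block size seen on replay.
        System.out.println(first.block.numBytes == second.block.numBytes); // prints "true"
        System.out.println(first.block.numBytes);                          // prints "63154347"
    }
}
```

A defensive copy of the block at CloseOp construction time would decouple the two records.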

 

> Standby NameNode loads a corrupted edit log, the service exits and cannot 
> be restarted
> ---------------------------------------------------------------------------------------------
>
>                 Key: HDFS-15391
>                 URL: https://issues.apache.org/jira/browse/HDFS-15391
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 3.2.0
>            Reporter: huhaiyang
>            Priority: Critical
>
> In a version 3.2.0 production cluster,
>  we found that, due to edit log corruption, the Standby NameNode could not 
> properly load the edit log, resulting in abnormal exit of the service and 
> failure to restart.
> {noformat}
> The specific scenario is that Flink writes to HDFS (a replicated file), and 
> when an exception occurs while writing the file, the following operations 
> are performed:
> 1. close file
> 2. open file
> 3. truncate file
> 4. append file
> {noformat}
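The fatal part of the replay can also be sketched with a small stand-in (this is not the real FSEditLogLoader code; the names and state model below are assumptions for illustration): when a second CloseOp arrives for a file that is no longer under construction, the loader has no valid state transition, and that replay failure is what takes the Standby NameNode down.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of edit-log replay: closing a file that is not in the
// UnderConstruction state is treated as fatal, mirroring the SNN exit.
public class ReplaySketch {
    enum State { UNDER_CONSTRUCTION, COMPLETE }

    static void replayCloseOp(Map<String, State> files, String path) {
        State s = files.get(path);
        if (s != State.UNDER_CONSTRUCTION) {
            // In the real loader this surfaces as an edit-log replay
            // failure and the Standby NameNode terminates.
            throw new IllegalStateException(
                "CloseOp on file not under construction: " + path);
        }
        files.put(path, State.COMPLETE);
    }

    public static void main(String[] args) {
        Map<String, State> files = new HashMap<>();
        files.put("/flink/part-0", State.UNDER_CONSTRUCTION);

        replayCloseOp(files, "/flink/part-0");      // first CloseOp succeeds
        try {
            replayCloseOp(files, "/flink/part-0");  // second CloseOp: already COMPLETE
        } catch (IllegalStateException e) {
            System.out.println("fatal: " + e.getMessage());
        }
    }
}
```

In the reported scenario the close/open/truncate/append sequence produces exactly such a second CloseOp during replay.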



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
