[ https://issues.apache.org/jira/browse/HDFS-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Vinod Kumar Vavilapalli updated HDFS-9516:
------------------------------------------
    Target Version/s: 2.8.0, 2.7.3

[~shv], unfortunately this came in too late for 2.7.2. That said, I don't see any reason why this shouldn't be in 2.8.0 and 2.7.3. I'm setting the target versions accordingly on JIRA. If you agree, I'd appreciate backport help to those branches (branch-2.8.0, branch-2.7).

> truncate file fails with data dirs on multiple disks
> ----------------------------------------------------
>
>                 Key: HDFS-9516
>                 URL: https://issues.apache.org/jira/browse/HDFS-9516
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.7.1
>            Reporter: Bogdan Raducanu
>            Assignee: Plamen Jeliazkov
>             Fix For: 2.9.0
>
>         Attachments: HDFS-9516_1.patch, HDFS-9516_2.patch, HDFS-9516_3.patch, HDFS-9516_testFailures.patch, Main.java, truncate.dn.log
>
>
> FileSystem.truncate returns false (no exception), but the file is never closed and is not writable afterwards.
> This appears to be caused by copy-on-truncate, which is used because the system is in an upgrade state. In that case a rename between devices is attempted. See the attached log and repro code.
> This probably also affects truncating a snapshotted file, where copy-on-truncate is likewise used.
> Possibly it affects not only truncate but any block recovery.
> I think the problem is in updateReplicaUnderRecovery:
> {code}
> ReplicaBeingWritten newReplicaInfo = new ReplicaBeingWritten(
>     newBlockId, recoveryId, rur.getVolume(),
>     blockFile.getParentFile(), newlength);
> {code}
> blockFile is created with copyReplicaWithNewBlockIdAndGS, which is allowed to choose any volume, so rur.getVolume() is not necessarily where the block is actually located.

-- 
This message was sent by Atlassian JIRA
(v6.3.4#6332)
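As an aside for readers hitting the same symptom: the "rename between devices" failure described above is a general JDK behavior, not something specific to Hadoop. The sketch below is a minimal, self-contained illustration (not HDFS code; the class and method names are hypothetical) of why an unchecked `java.io.File.renameTo` is dangerous — it returns {{false}} rather than throwing when the move cannot be performed, for example when source and destination sit on different devices, so the boolean result must always be checked.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class RenameDemo {
    // Hypothetical helper mirroring the failure mode in this report:
    // File.renameTo gives no exception on failure (e.g. a cross-device
    // move on many platforms) -- it just returns false, so a caller
    // that ignores the result silently loses the move.
    static boolean moveBlockFile(File src, File dst) {
        return src.renameTo(dst);
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("rename-demo").toFile();
        File src = new File(dir, "blk_1001");
        Files.write(src.toPath(), "replica data".getBytes());

        // Same directory, hence same device: the rename succeeds.
        File dst = new File(dir, "blk_1002");
        System.out.println(moveBlockFile(src, dst));
        System.out.println(dst.exists());
    }
}
```

When the two paths are on different volumes (as with DataNode data dirs on multiple disks), `renameTo` can return {{false}} and the file stays where it was; a robust caller would fall back to copy-and-delete, or use `java.nio.file.Files.move`, which throws an informative `IOException` instead of failing silently.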