[ https://issues.apache.org/jira/browse/HDFS-16496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Stephen O'Donnell updated HDFS-16496:
-------------------------------------
    Component/s: namenode

> Snapshot diff on snapshottable directory fails with not snapshottable error
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-16496
>                 URL: https://issues.apache.org/jira/browse/HDFS-16496
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Stephen O'Donnell
>            Assignee: Stephen O'Donnell
>            Priority: Major
>
> Running a snapshot diff against some snapshottable folders gives an error:
> {code}
> org.apache.hadoop.hdfs.protocol.SnapshotException: Directory is neither snapshottable nor under a snap root!
> 	at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.checkAndGetSnapshottableAncestorDir(SnapshotManager.java:395)
> 	at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.diff(SnapshotManager.java:744)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.getSnapshotDiffReportListing(FSDirSnapshotOp.java:200)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getSnapshotDiffReportListing(FSNamesystem.java:6983)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getSnapshotDiffReportListing(NameNodeRpcServer.java:1977)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getSnapshotDiffReportListing(ClientNamenodeProtocolServerSideTranslatorPB.java:1387)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894)
> {code}
> This is caused by HDFS-15483 (in-order snapshot delete), and the issue is in the following method in SnapshotManager:
> {code}
> public INodeDirectory getSnapshottableAncestorDir(final INodesInPath iip)
>     throws IOException {
>   final String path = iip.getPath();
>   final INode inode = iip.getLastINode();
>   final INodeDirectory dir;
>   if (inode instanceof INodeDirectory) { // THIS SHOULD BE TRUE - change to inode.isDirectory()
>     dir = INodeDirectory.valueOf(inode, path);
>   } else {
>     dir = INodeDirectory.valueOf(iip.getINode(-2), iip.getParentPath());
>   }
>   if (dir.isSnapshottable()) {
>     return dir;
>   }
>   for (INodeDirectory snapRoot : this.snapshottables.values()) {
>     if (dir.isAncestorDirectory(snapRoot)) {
>       return snapRoot;
>     }
>   }
>   return null;
> }
> {code}
> After adding some debug output, I found that the directory which is the snapshot root is not an instance of INodeDirectory, but is instead an "INodeReference$DstReference". I think the directory becomes an instance of this class if it is renamed and one of its children has been moved out of another snapshot.
> The fix is simple - just check `inode.isDirectory()` instead.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
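As a footnote to the report above: the `instanceof` pitfall it describes can be reproduced outside HDFS. The classes below are simplified stand-ins (not the real `INode`/`INodeReference` types): a reference wrapper that delegates `isDirectory()` to the inode it refers to answers the polymorphic check correctly, yet fails an `instanceof` test against the concrete directory class, which is exactly the misclassification the bug hits.

```java
// Simplified sketch of the INode hierarchy described in the report.
// These are hypothetical stand-ins, not the actual HDFS classes.
abstract class INode {
    // Polymorphic check: subclasses and wrappers answer for themselves.
    boolean isDirectory() {
        return false;
    }
}

class INodeDirectory extends INode {
    @Override
    boolean isDirectory() {
        return true;
    }
}

// Stand-in for INodeReference$DstReference: after a rename, the namenode
// wraps the directory in a reference node that forwards isDirectory()
// to the referred inode, but is itself NOT an INodeDirectory.
class DstReference extends INode {
    private final INode referred;

    DstReference(INode referred) {
        this.referred = referred;
    }

    @Override
    boolean isDirectory() {
        return referred.isDirectory();
    }
}

class InstanceOfVsIsDirectory {
    public static void main(String[] args) {
        INode plainDir = new INodeDirectory();
        INode renamedDir = new DstReference(new INodeDirectory());

        // Both nodes are directories as far as the filesystem is concerned.
        System.out.println(plainDir.isDirectory());   // true
        System.out.println(renamedDir.isDirectory()); // true

        // But only the plain one passes the instanceof test, so the wrapped
        // snapshot root falls into the "not a directory" branch of
        // getSnapshottableAncestorDir and the lookup fails.
        System.out.println(plainDir instanceof INodeDirectory);   // true
        System.out.println(renamedDir instanceof INodeDirectory); // false
    }
}
```

This is why the proposed one-line fix, replacing `inode instanceof INodeDirectory` with `inode.isDirectory()`, resolves the error: the delegating wrapper reports itself as a directory even though its concrete class differs.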