[ https://issues.apache.org/jira/browse/HDFS-4846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13669765#comment-13669765 ]

Hudson commented on HDFS-4846:
------------------------------

Integrated in Hadoop-trunk-Commit #3801 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3801/])
    HDFS-4846. Clean up snapshot CLI commands output stacktrace for invalid arguments. Contributed by Jing Zhao (Revision 1487647)

     Result = SUCCESS
brandonli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1487647
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/snapshot/LsSnapshottableDir.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/snapshot/SnapshotDiff.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestNestedSnapshots.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java

                
> Clean up snapshot CLI commands output stacktrace for invalid arguments
> ----------------------------------------------------------------------
>
>                 Key: HDFS-4846
>                 URL: https://issues.apache.org/jira/browse/HDFS-4846
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: snapshots
>    Affects Versions: 3.0.0
>            Reporter: Stephen Chu
>            Assignee: Jing Zhao
>            Priority: Minor
>              Labels: snapshot
>             Fix For: 3.0.0
>
>         Attachments: HDFS-4846.001.patch, HDFS-4846.002.patch, HDFS-4846.003.patch, HDFS-4846.004.patch
>
>
> It would be useful to clean up the stack traces printed by the snapshot CLI commands when they are used incorrectly. This will make the output more readable for operators and hopefully prevent confusion. A rough sketch of the intended cleanup follows the examples below.
> Allowing a snapshot on a directory that doesn't exist
> {code}
> schu-mbp:~ schu$ hdfs dfsadmin -allowSnapshot adfasdf
> 2013-05-23 15:46:46.052 java[24580:1203] Unable to load realm info from SCDynamicStore
> 2013-05-23 15:46:46,066 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> allowSnapshot: Directory does not exist: /user/schu/adfasdf
>       at org.apache.hadoop.hdfs.server.namenode.INodeDirectory.valueOf(INodeDirectory.java:52)
>       at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.setSnapshottable(SnapshotManager.java:106)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allowSnapshot(FSNamesystem.java:5861)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.allowSnapshot(NameNodeRpcServer.java:1121)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.allowSnapshot(ClientNamenodeProtocolServerSideTranslatorPB.java:932)
>       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48087)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1842)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1838)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1836)
> schu-mbp:~ schu$ 
> {code}
> Disallowing a snapshot on a directory that isn't snapshottable
> {code}
> schu-mbp:~ schu$ hdfs dfsadmin -disallowSnapshot /user
> 2013-05-23 15:49:07.251 java[24687:1203] Unable to load realm info from SCDynamicStore
> 2013-05-23 15:49:07,265 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> disallowSnapshot: Directory is not a snapshottable directory: /user
>       at org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable.valueOf(INodeDirectorySnapshottable.java:68)
>       at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.resetSnapshottable(SnapshotManager.java:151)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.disallowSnapshot(FSNamesystem.java:5889)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.disallowSnapshot(NameNodeRpcServer.java:1128)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.disallowSnapshot(ClientNamenodeProtocolServerSideTranslatorPB.java:943)
>       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48089)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1842)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1838)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1836)
> {code}
> Snapshot diffs with non-existent snapshot paths
> {code}
> chu-mbp:~ schu$ hdfs snapshotDiff / gibberish1 gibberish2
> 2013-05-23 15:53:32.986 java[24877:1203] Unable to load realm info from SCDynamicStore
> 2013-05-23 15:53:33,001 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Exception in thread "main" org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotException: Cannot find the snapshot of directory / with name gibberish1
>       at org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable.getSnapshotByName(INodeDirectorySnapshottable.java:389)
>       at org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable.computeDiff(INodeDirectorySnapshottable.java:363)
>       at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.diff(SnapshotManager.java:358)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getSnapshotDiffReport(FSNamesystem.java:6035)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getSnapshotDiffReport(NameNodeRpcServer.java:1153)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getSnapshotDiffReport(ClientNamenodeProtocolServerSideTranslatorPB.java:985)
>       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48095)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1842)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1838)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1836)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>       at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>       at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>       at org.apache.hadoop.hdfs.DFSClient.getSnapshotDiffReport(DFSClient.java:2161)
>       at org.apache.hadoop.hdfs.DistributedFileSystem.getSnapshotDiffReport(DistributedFileSystem.java:990)
>       at org.apache.hadoop.hdfs.tools.snapshot.SnapshotDiff.main(SnapshotDiff.java:85)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotException): Cannot find the snapshot of directory / with name gibberish1
>       at org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable.getSnapshotByName(INodeDirectorySnapshottable.java:389)
>       at org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable.computeDiff(INodeDirectorySnapshottable.java:363)
>       at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.diff(SnapshotManager.java:358)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getSnapshotDiffReport(FSNamesystem.java:6035)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getSnapshotDiffReport(NameNodeRpcServer.java:1153)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getSnapshotDiffReport(ClientNamenodeProtocolServerSideTranslatorPB.java:985)
>       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48095)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1842)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1838)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1836)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1303)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1255)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
>       at com.sun.proxy.$Proxy9.getSnapshotDiffReport(Unknown Source)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:163)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:82)
>       at com.sun.proxy.$Proxy9.getSnapshotDiffReport(Unknown Source)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getSnapshotDiffReport(ClientNamenodeProtocolTranslatorPB.java:975)
>       at org.apache.hadoop.hdfs.DFSClient.getSnapshotDiffReport(DFSClient.java:2158)
>       ... 2 more
> schu-mbp:~ schu$ 
> {code}
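> For reference, a minimal sketch of the kind of cleanup this is asking for, assuming the common pattern of catching the unwrapped exception at the CLI entry point and printing only its message. The class and helper names below are illustrative placeholders, not the committed patch:
> {code}
> // Illustrative sketch only (not the committed HDFS-4846 change): report
> // invalid arguments with a one-line message instead of a full stacktrace.
> import java.io.IOException;
>
> import org.apache.hadoop.ipc.RemoteException;
>
> public class SnapshotCliSketch {          // hypothetical class name
>   public static void main(String[] args) {
>     try {
>       run(args);                          // hypothetical helper doing the snapshot RPC
>     } catch (RemoteException re) {
>       // The RemoteException already carries the server-side message,
>       // e.g. "Directory does not exist: ..."; print it without the trace.
>       System.err.println("snapshotDiff: " + re.getLocalizedMessage());
>       System.exit(1);
>     } catch (IOException e) {
>       System.err.println("snapshotDiff: " + e.getLocalizedMessage());
>       System.exit(1);
>     }
>   }
>
>   // Placeholder standing in for the real client call, e.g.
>   // DistributedFileSystem.getSnapshotDiffReport(...).
>   private static void run(String[] args) throws IOException {
>     throw new RemoteException(
>         "org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotException",
>         "Cannot find the snapshot of directory / with name gibberish1");
>   }
> }
> {code}
> With that pattern, a bad argument would produce just the single error line shown in the examples above, with no stack trace.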
