[ https://issues.apache.org/jira/browse/HBASE-9509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765036#comment-13765036 ]
Hudson commented on HBASE-9509:
-------------------------------

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #723 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/723/])
HBASE-9509 Fix HFile V1 Detector to handle AccessControlException for non-existant files (stack: rev 1522051)
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HFileV1Detector.java

> Fix HFile V1 Detector to handle AccessControlException for non-existant files
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-9509
>                 URL: https://issues.apache.org/jira/browse/HBASE-9509
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.96.0
>            Reporter: Himanshu Vashishtha
>            Assignee: Himanshu Vashishtha
>             Fix For: 0.98.0, 0.96.0
>
>         Attachments: HBase-9509.patch
>
>
> On some Hadoop versions, fs.exists() throws an AccessControlException if there is a non-searchable inode in the file path. Other versions, such as 2.1.0-beta, just return false.
> This jira is to fix the HFile V1 detector tool to avoid making such calls.
> See the exception below, thrown when running the tool on one such Hadoop version:
> {code}
> ERROR util.HFileV1Detector: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=EXECUTE, inode="/hbase/.META./.tableinfo.0000000001":hbase:supergroup:-rw-r--r--
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:234)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:187)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:150)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5141)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5123)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5102)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3265)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:719)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:692)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59628)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
> {code}
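The attached patch itself is not reproduced in this message, so the sketch below is only an illustration of the defensive pattern the description implies: treating an AccessControlException from an existence check the way Hadoop 2.1.0-beta behaves (return false) rather than letting one unreadable inode abort the scan. The class name SafeExists and helper existsQuietly are hypothetical; this is not the actual HFileV1Detector change.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.AccessControlException;

public class SafeExists {
  // Hypothetical helper (not the HBASE-9509 patch): some Hadoop versions
  // throw AccessControlException from fs.exists() when a component of the
  // path is not traversable by the current user; map that to "not found"
  // so callers iterating over many paths are not aborted by one inode.
  static boolean existsQuietly(FileSystem fs, Path path) throws IOException {
    try {
      return fs.exists(path);
    } catch (AccessControlException ace) {
      // Hadoop 2.1.0-beta returns false here instead of throwing.
      return false;
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    // Example path taken from the stack trace above.
    Path p = new Path("/hbase/.META./.tableinfo.0000000001");
    System.out.println(p + " exists: " + existsQuietly(fs, p));
  }
}
{code}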