[ 
https://issues.apache.org/jira/browse/HDFS-15226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reopened HDFS-15226:
-----------------------------------------

> NPE when integrating Ranger with HDFS
> -------------------------------------
>
>                 Key: HDFS-15226
>                 URL: https://issues.apache.org/jira/browse/HDFS-15226
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 2.7.6
>         Environment: Apache Ranger 1.2 && Hadoop 2.7.6
>            Reporter: bianqi
>            Priority: Critical
>             Fix For: 3.2.0, 3.2.1
>
>
> When I integrated Ranger 1.2 with Hadoop 2.7.6, the following NPE occurred 
> when executing {{hdfs dfs -ls /}}.
>  However, when I integrated Ranger 1.2 with Hadoop 2.7.1, {{hdfs dfs -ls /}} 
> executed without any errors and the directory listing was displayed normally.
> {code}
> java.lang.NullPointerException
>   at java.lang.String.checkBounds(String.java:384)
>   at java.lang.String.<init>(String.java:425)
>   at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:337)
>   at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:319)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:238)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:183)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3832)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
>   at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
> DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020: responding to org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from xxxxxx:8502 Call#0 Retry#0
> {code}
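>
> The bottom of the trace, {{String.checkBounds}}, is the bounds check the {{String}} constructor runs on the byte array it is given; it dereferences that array, so a null path component fails there. A minimal standalone sketch of this failure mode (just a reproduction of the constructor behavior, not the actual DFSUtil source):
> {code:java}
> public class NpeRepro {
>   public static void main(String[] args) {
>     // DFSUtil.bytes2String ultimately builds a String from the raw
>     // path-component bytes; if the component is null, the constructor's
>     // bounds check throws the NPE seen at String.checkBounds above.
>     byte[] component = null;
>     String s = new String(component, 0, 0); // throws NullPointerException
>     System.out.println(s);
>   }
> }
> {code}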
> When I checked and debugged the HDFS source code, I found that 
> pathByNameArr[i] is null.
> {code:java}
> private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int pathIdx,
>     INode inode, int snapshotId) {
>   INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
>   if (getAttributesProvider() != null) {
>     String[] elements = new String[pathIdx + 1];
>     for (int i = 0; i < elements.length; i++) {
>       elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
>     }
>     inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
>   }
>   return inodeAttrs;
> }
> {code}
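>
> To make the trigger concrete, here is a hypothetical standalone simulation of that loop for {{hdfs dfs -ls /}}. The {{pathByNameArr}} contents follow the javadoc of the trunk fix quoted below, which says the root-only path "/" yields a single null component:
> {code:java}
> import java.nio.charset.StandardCharsets;
>
> public class BuggyLoopSimulation {
>   public static void main(String[] args) {
>     byte[][] pathByNameArr = new byte[][] { null }; // components of "/"
>     int pathIdx = 0;
>     String[] elements = new String[pathIdx + 1];
>     for (int i = 0; i < elements.length; i++) {
>       // stand-in for DFSUtil.bytes2String(pathByNameArr[i]):
>       // throws NullPointerException on the null root component
>       elements[i] = new String(pathByNameArr[i], StandardCharsets.UTF_8);
>     }
>   }
> }
> {code}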
> I found that this code has already been fixed on the trunk branch, but the 
> fix has not yet been merged into the latest 3.2.1 release.
> I hope this patch can be merged into the other branches as soon as 
> possible. Thank you very much!
>  
> {code:java}
> private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int pathIdx,
>     INode inode, int snapshotId) {
>   INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
>   if (getAttributesProvider() != null) {
>     String[] elements = new String[pathIdx + 1];
>     /**
>      * {@link INode#getPathComponents(String)} returns a null component
>      * for the root only path "/". Assign an empty string if so.
>      */
>     if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
>       elements[0] = "";
>     } else {
>       for (int i = 0; i < elements.length; i++) {
>         elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
>       }
>     }
>     inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
>   }
>   return inodeAttrs;
> }
> {code}
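>
> As a quick sanity check, applying the same guard in the standalone simulation above produces an empty root component instead of the NPE (again a hypothetical sketch, not a Hadoop test):
> {code:java}
> import java.nio.charset.StandardCharsets;
>
> public class FixedLoopSimulation {
>   public static void main(String[] args) {
>     byte[][] pathByNameArr = new byte[][] { null }; // components of "/"
>     String[] elements = new String[1];
>     if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
>       elements[0] = ""; // root path: assign an empty string, as in the fix
>     } else {
>       for (int i = 0; i < elements.length; i++) {
>         elements[i] = new String(pathByNameArr[i], StandardCharsets.UTF_8);
>       }
>     }
>     System.out.println("root component: '" + elements[0] + "'"); // prints ''
>   }
> }
> {code}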



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
