[ https://issues.apache.org/jira/browse/HDFS-16750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17597442#comment-17597442 ]
ASF GitHub Bot commented on HDFS-16750:
---------------------------------------

tomscut commented on code in PR #4821:
URL: https://github.com/apache/hadoop/pull/4821#discussion_r957913810


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:
##########

@@ -1439,7 +1440,7 @@ private void checkSuperuserPrivilege() throws IOException, AccessControlException
       return;
    }
    // Try to get the ugi in the RPC call.
-   UserGroupInformation callerUgi = ipcServer.getRemoteUser();
+   UserGroupInformation callerUgi = Server.getRemoteUser();

Review Comment:
   The changes here don't seem to work?


> NameNode should use NameNode.getRemoteUser() to log audit event to avoid possible NPE
> --------------------------------------------------------------------------------------
>
>                 Key: HDFS-16750
>                 URL: https://issues.apache.org/jira/browse/HDFS-16750
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Major
>              Labels: pull-request-available
>
> NameNode should use NameNode.getRemoteUser() to log audit events to avoid a possible NPE.
> The related code is:
> {code:java}
> private void logAuditEvent(boolean succeeded, String cmd, String src,
>     String dst, FileStatus stat) throws IOException {
>   if (isAuditEnabled() && isExternalInvocation()) {
>     logAuditEvent(succeeded, Server.getRemoteUser(), Server.getRemoteIp(),
>         cmd, src, dst, stat);
>   }
> }
>
> // the ugi may be null.
> private void logAuditEvent(boolean succeeded,
>     UserGroupInformation ugi, InetAddress addr, String cmd, String src,
>     String dst, FileStatus status) {
>   final String ugiStr = ugi.toString();
>   ...
> } {code}
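To illustrate the fix the issue is proposing: a null-safe lookup falls back to the current login user when the call did not arrive over RPC, so the audit path never dereferences a null UGI. The sketch below is illustrative rather than a copy of the Hadoop source; the wrapper class and the main() driver are hypothetical, while Server.getRemoteUser() and UserGroupInformation.getCurrentUser() are existing Hadoop APIs.

{code:java}
import java.io.IOException;

import org.apache.hadoop.ipc.Server;
import org.apache.hadoop.security.UserGroupInformation;

public final class NullSafeRemoteUser {

  // Null-safe lookup in the spirit of NameNode.getRemoteUser(): prefer the UGI
  // attached to the active RPC call, and fall back to the current login user
  // when there is no RPC context (Server.getRemoteUser() returns null there).
  static UserGroupInformation getRemoteUser() throws IOException {
    UserGroupInformation ugi = Server.getRemoteUser();
    return (ugi != null) ? ugi : UserGroupInformation.getCurrentUser();
  }

  public static void main(String[] args) throws IOException {
    // Outside an RPC handler, Server.getRemoteUser() is null, so calling
    // ugi.toString() directly (as logAuditEvent does) would throw an NPE.
    // The fallback keeps internal invocations working.
    System.out.println("effective caller: " + getRemoteUser().getUserName());
  }
}
{code}

With such a fallback in place, logAuditEvent can format the caller unconditionally, whether the operation originated from a remote client or from an internal NameNode thread.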