curie71 opened a new pull request, #5464:
URL: https://github.com/apache/hadoop/pull/5464

   HDFS-16944. In other components (such as the NameNode in HDFS or the ResourceManager in YARN), *debug logs and audit logs are recorded separately*; RouterAdminServer is the exception.
   
   There are many +simple+ debug logs that help *developers* who have access to the source code, and there are also audit logs that record +privileged operations+ with more +detailed+ information to help *system admins* understand what happened in a real run.
   
   For example, here is how the NameNode declares its audit logger and how YARN's ResourceManager records a failure to both logs:
   ```java
   public static final Log auditLog = LogFactory.getLog(
       FSNamesystem.class.getName() + ".audit");

   try {
     // Safety
     userUgi = UserGroupInformation.getCurrentUser();
     user = userUgi.getShortUserName();
   } catch (IOException ie) {
     LOG.warn("Unable to get the current user.", ie); // debug log
     RMAuditLogger.logFailure(user, AuditConstants.SUBMIT_APP_REQUEST,
         ie.getMessage(), "ClientRMService",
         "Exception in submitting application", applicationId, callerContext,
         submissionContext.getQueue()); // audit log
     throw RPCUtil.getRemoteException(ie);
   }
   ```
   So I suggest adding an audit log to *RouterAdminServer* so that privileged operations are recorded separately.
   The logger's name could follow either of these existing conventions:
   ```java
   // hadoop security
   public static final Logger AUDITLOG =
       LoggerFactory.getLogger(
           "SecurityLogger." + ServiceAuthorizationManager.class.getName());

   // namenode
   public static final Log auditLog = LogFactory.getLog(
       FSNamesystem.class.getName() + ".audit");
   ```
   I finally chose the `className + ".audit"` convention, and the privileged operations that call the permission-check function _checkSuperuserPrivilege_ now write to AUDITLOG instead of LOG.
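   
   Below is a minimal sketch (not the actual patch) of what this could look like in RouterAdminServer, assuming the class keeps its existing LOG and already performs a _checkSuperuserPrivilege_ check; the method name `someAdminOperation` is only illustrative:
   ```java
   import java.io.IOException;

   import org.apache.hadoop.security.UserGroupInformation;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   public class RouterAdminServer {
     // Existing debug/operational log.
     private static final Logger LOG =
         LoggerFactory.getLogger(RouterAdminServer.class);

     // Proposed audit log, named "<className>.audit" so it can be routed to
     // its own appender in the log4j configuration.
     public static final Logger AUDITLOG =
         LoggerFactory.getLogger(RouterAdminServer.class.getName() + ".audit");

     // Illustrative privileged operation; in the real class the audited
     // methods and the permission check already exist.
     public void someAdminOperation(String arg) throws IOException {
       checkSuperuserPrivilege(); // existing permission check
       String user = UserGroupInformation.getCurrentUser().getShortUserName();
       AUDITLOG.info("someAdminOperation requested by user={}, arg={}", user, arg);
       // ... perform the operation ...
     }

     private void checkSuperuserPrivilege() throws IOException {
       // Placeholder for the existing permission check in RouterAdminServer.
     }
   }
   ```
   With the `.audit` suffix, the audit stream can then be given its own appender in the log4j configuration, just like the NameNode's `FSNamesystem.audit` logger.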
    
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   

