[ 
https://issues.apache.org/jira/browse/HDFS-5730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13865273#comment-13865273
 ] 

Uma Maheswara Rao G commented on HDFS-5730:
-------------------------------------------

HDFS audit logging interface:
{code}
 /**
   * Same as
   * {@link #logAuditEvent(boolean, String, InetAddress, String, String, 
String, FileStatus)}
   * with additional parameters related to logging delegation token tracking
   * IDs.
   * 
   * @param succeeded Whether authorization succeeded.
   * @param userName Name of the user executing the request.
   * @param addr Remote address of the request.
   * @param cmd The requested command.
   * @param src Path of affected source file.
   * @param dst Path of affected destination file (if any).
   * @param stat File information for operations that change the file's metadata
   *          (permissions, owner, times, etc).
   * @param ugi UserGroupInformation of the current user, or null if not logging
   *          token tracking information
   * @param dtSecretManager The token secret manager, or null if not logging
   *          token tracking information
   */
  public abstract void logAuditEvent(boolean succeeded, String userName,
      InetAddress addr, String cmd, String src, String dst,
      FileStatus stat, UserGroupInformation ugi,
      DelegationTokenSecretManager dtSecretManager);
{code}
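The shape of this extended interface can be exercised with a self-contained stand-in. Note this is only an illustrative sketch: the stand-in below replaces InetAddress, FileStatus, UserGroupInformation, and DelegationTokenSecretManager with plain Strings so it compiles on its own, and the tab-separated output format and trackingId derivation are assumptions, not the real default logger's behavior.

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained sketch of an audit logger honoring the extended signature.
// Stand-in String types replace the real HDFS parameter types; names and
// output format are illustrative only.
public class SketchAuditLogger {
  static final List<String> lines = new ArrayList<>();

  // Mirrors the shape of the extended logAuditEvent: the trailing argument
  // stands in for the token tracking ID derived from ugi + dtSecretManager,
  // and may be null when tracking IDs are not being logged.
  static void logAuditEvent(boolean succeeded, String userName, String addr,
      String cmd, String src, String dst, String trackingId) {
    StringBuilder sb = new StringBuilder();
    sb.append("allowed=").append(succeeded)
      .append("\tugi=").append(userName)
      .append("\tip=").append(addr)
      .append("\tcmd=").append(cmd)
      .append("\tsrc=").append(src)
      .append("\tdst=").append(dst);
    if (trackingId != null) {
      sb.append("\ttrackingId=").append(trackingId);
    }
    lines.add(sb.toString());
  }

  public static void main(String[] args) {
    // Without and with a token tracking ID.
    logAuditEvent(true, "hdfs", "127.0.0.1", "create", "/a", null, null);
    logAuditEvent(true, "hdfs", "127.0.0.1", "open", "/a", null, "42");
    System.out.println(lines.get(1));
  }
}
```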

Here the succeeded parameter indicates whether the authorization check succeeded.

Recent APIs such as addCacheDirective, modifyCacheDirective, 
removeCacheDirective, etc. use that parameter to indicate whether the 
operation as a whole succeeded or not.

{code}
boolean success = false;
.......
writeLock();
try {
  checkOperation(OperationCategory.WRITE);
  if (isInSafeMode()) {
    throw new SafeModeException(
        "Cannot add cache directive", safeMode);
  }
  cacheManager.modifyDirective(directive, pc, flags);
  getEditLog().logModifyCacheDirectiveInfo(directive,
      cacheEntry != null);
  success = true;
} finally {
  writeUnlock();
  if (success) {
    getEditLog().logSync();
  }
  if (isAuditEnabled() && isExternalInvocation()) {
    logAuditEvent(success, "modifyCacheDirective", null, null, null);
  }
  RetryCache.setState(cacheEntry, success);
}
{code}

But the older APIs, such as startFile, handle AccessControlException 
explicitly and pass false as the first parameter on that failure. Other 
IOExceptions produce no audit log entry at all.
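That older pattern can be sketched as follows. This is a self-contained simplification, not the real FSNamesystem code: the AccessControlException class, startFile signature, and audit-line format here are stand-ins for illustration.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the HDFS exception type; illustration only.
class AccessControlException extends IOException {
  AccessControlException(String msg) { super(msg); }
}

public class OldAuditPattern {
  static final List<String> auditLog = new ArrayList<>();

  static void logAuditEvent(boolean succeeded, String cmd, String src) {
    auditLog.add("allowed=" + succeeded + " cmd=" + cmd + " src=" + src);
  }

  // Sketch of the older startFile-style handling: only an
  // AccessControlException produces a failure audit entry; any other
  // IOException propagates without being audited at all.
  static void startFile(String src, boolean denyAccess, boolean ioFailure)
      throws IOException {
    try {
      if (denyAccess) {
        throw new AccessControlException("Permission denied: " + src);
      }
      if (ioFailure) {
        throw new IOException("Disk full");  // NOT audited in this pattern
      }
      logAuditEvent(true, "create", src);
    } catch (AccessControlException e) {
      logAuditEvent(false, "create", src);
      throw e;
    }
  }

  public static void main(String[] args) {
    // Success, authorization failure, and an unrelated I/O failure:
    try { startFile("/a", false, false); } catch (IOException e) { }
    try { startFile("/b", true, false); } catch (IOException e) { }
    try { startFile("/c", false, true); } catch (IOException e) { }
    // Only two audit entries exist: the IOException on /c left no trace.
    System.out.println(auditLog.size());
  }
}
```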


Snapshot-related APIs follow yet another pattern: they log only on 
success.

{code}
String createSnapshot(String snapshotRoot, String snapshotName)
      throws SafeModeException, IOException {
      ..........
      .........
    getEditLog().logSync();
    
    if (auditLog.isInfoEnabled() && isExternalInvocation()) {
      logAuditEvent(true, "createSnapshot", snapshotRoot, snapshotPath, null);
    }
    return snapshotPath;
  }
{code}

So, we have to unify the audit logging across all of these APIs.
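One way such unification could look is sketched below. This is a hedged sketch under my own assumptions, not the committed fix: the AuditedOp interface and runAudited helper are hypothetical names, and the idea is simply that every path through an operation (success, access denial, or any other failure) emits exactly one audit entry with the correct succeeded flag.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class UnifiedAuditPattern {
  static final List<String> auditLog = new ArrayList<>();

  // Hypothetical helper interface wrapping the body of a namesystem op.
  interface AuditedOp { void run() throws IOException; }

  static void logAuditEvent(boolean succeeded, String cmd, String src) {
    auditLog.add("allowed=" + succeeded + " cmd=" + cmd + " src=" + src);
  }

  // Every code path logs exactly once, with the right succeeded flag:
  // success sets the flag before the finally block runs, and any thrown
  // exception (access control or otherwise) leaves it false.
  static void runAudited(String cmd, String src, AuditedOp op)
      throws IOException {
    boolean success = false;
    try {
      op.run();
      success = true;
    } finally {
      logAuditEvent(success, cmd, src);
    }
  }

  public static void main(String[] args) {
    try {
      runAudited("createSnapshot", "/dir", () -> { });
    } catch (IOException e) { }
    try {
      runAudited("createSnapshot", "/denied",
          () -> { throw new IOException("denied"); });
    } catch (IOException e) { }
  }
}
```

With this shape, the per-API try/catch blocks that only handle AccessControlException, and the success-only logging in the snapshot APIs, would all collapse into one consistent behavior.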

> Inconsistent Audit logging for HDFS APIs
> ----------------------------------------
>
>                 Key: HDFS-5730
>                 URL: https://issues.apache.org/jira/browse/HDFS-5730
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 3.0.0, 2.2.0
>            Reporter: Uma Maheswara Rao G
>            Assignee: Uma Maheswara Rao G
>
> When looking at the audit logs in HDFS, I am seeing some inconsistencies 
> between what was historically logged to the audit log and what has been 
> added recently.
> For more details please check the comments.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
