[ 
https://issues.apache.org/jira/browse/HDFS-15667?focusedWorklogId=508365&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-508365
 ]

ASF GitHub Bot logged work on HDFS-15667:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 06/Nov/20 06:36
            Start Date: 06/Nov/20 06:36
    Worklog Time Spent: 10m 
      Work Description: maobaolong commented on pull request #2437:
URL: https://github.com/apache/hadoop/pull/2437#issuecomment-722902960


   @ferhui @Hexiaoqiao Thanks for the simplification suggestion; it makes the 
code clearer.
   
   ```java
     private static boolean deleteAllowed(final INodesInPath iip) {
       if (iip.length() < 1 || iip.getLastINode() == null) {
         if (NameNode.stateChangeLog.isDebugEnabled()) {
           NameNode.stateChangeLog.debug(
               "DIR* FSDirectory.unprotectedDelete: failed to remove "
                   + iip.getPath() + " because it does not exist");
         }
         return false;
       } else if (iip.length() == 1) { // src is the root
         NameNode.stateChangeLog.warn(
             "DIR* FSDirectory.unprotectedDelete: failed to remove " +
                  iip.getPath() + " because the root is not allowed to be deleted");
         return false;
       }
       return true;
     }
   ```
   
   Since `deleteAllowed` returning `false` means this call is not allowed to 
delete, I think it makes sense to record `allowed = false` in the audit log.
   
   Either `iip.length() < 1 || iip.getLastINode() == null` or `iip.length() == 1` 
makes `deleteAllowed` return `false`, so with the current `deleteAllowed` 
implementation, `iip.getLastINode() == null` should also record 
`allowed = false` in the audit log.
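   
   The point can be sketched as a minimal, self-contained example (hypothetical 
class and method names, not the actual FSNamesystem/FSDirectory code): the audit 
`allowed` flag is derived from the same check that rejects the delete, so that 
`rm -r /` gets logged with `allowed=false`.
   
   ```java
   // Minimal sketch with hypothetical names; only the path length and the
   // existence of the last inode matter for the deleteAllowed-style check.
   public class AuditSketch {
     static boolean deleteAllowed(int pathLength, boolean lastInodeExists) {
       if (pathLength < 1 || !lastInodeExists) {
         return false;          // path does not exist
       } else if (pathLength == 1) {
         return false;          // src is the root, which cannot be deleted
       }
       return true;
     }
   
     // The audit entry records the real outcome of the check instead of
     // unconditionally logging allowed=true.
     static String auditLine(int pathLength, boolean lastInodeExists) {
       return "allowed=" + deleteAllowed(pathLength, lastInodeExists);
     }
   
     public static void main(String[] args) {
       System.out.println(auditLine(1, true));  // root delete -> allowed=false
       System.out.println(auditLine(3, true));  // normal delete -> allowed=true
     }
   }
   ```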
   
   After reviewing the delete flow, however, `iip.getLastINode()` might not 
actually be `null` here.
   
   I pushed a new commit to simplify the test code. Please take another look, 
thanks.
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 508365)
    Time Spent: 3h 20m  (was: 3h 10m)

> Audit log record the unexpected allowed result when delete called
> -----------------------------------------------------------------
>
>                 Key: HDFS-15667
>                 URL: https://issues.apache.org/jira/browse/HDFS-15667
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.2.1, 3.4.0
>            Reporter: Baolong Mao
>            Assignee: Baolong Mao
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: screenshot-1.png, screenshot-2.png
>
>          Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> I met this issue when removing the root directory. When removing a non-root, 
> non-empty directory, toRemovedBlocks isn't null and its toDeleteList size is 0.
>  !screenshot-1.png! 
> So when is null returned?
> Through this screenshot, we can see that if fileRemoved = -1, then 
> toRemovedBlocks = null.
>  !screenshot-2.png! 
> And when deleteAllowed(iip) returns false, fileRemoved can be -1:
> {code:java}
>  private static boolean deleteAllowed(final INodesInPath iip) {
>     if (iip.length() < 1 || iip.getLastINode() == null) {
>       if (NameNode.stateChangeLog.isDebugEnabled()) {
>         NameNode.stateChangeLog.debug(
>             "DIR* FSDirectory.unprotectedDelete: failed to remove "
>                 + iip.getPath() + " because it does not exist");
>       }
>       return false;
>     } else if (iip.length() == 1) { // src is the root
>       NameNode.stateChangeLog.warn(
>           "DIR* FSDirectory.unprotectedDelete: failed to remove " +
>               iip.getPath() + " because the root is not allowed to be deleted");
>       return false;
>     }
>     return true;
>   }
> {code}
> From the code of deleteAllowed, we can see that when src is the root, it 
> returns false.
> So without this PR, when I execute *bin/hdfs dfs -rm -r /*
> I see a confusing audit log line like the following:
> 2020-11-05 14:32:53,420 INFO  FSNamesystem.audit 
> (FSNamesystem.java:logAuditMessage(8102)) - allowed=true



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
