[ 
https://issues.apache.org/jira/browse/HDFS-15667?focusedWorklogId=508346&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-508346
 ]

ASF GitHub Bot logged work on HDFS-15667:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 06/Nov/20 03:39
            Start Date: 06/Nov/20 03:39
    Worklog Time Spent: 10m 
      Work Description: Hexiaoqiao commented on pull request #2437:
URL: https://github.com/apache/hadoop/pull/2437#issuecomment-722791873


   @maobaolong Thanks for your quick response. IIUC, the value of the allowed 
entry in the audit log is decided by whether the request has permission to 
operate, checked at the beginning. For this case, it makes sense to set 
allowed=false when deleting the root directory. I am not sure whether it is 
also reasonable to set allowed=false when `iip.getLastINode() == null`. Could 
the condition `iip.getLastINode() == null` occur inside the delete operation in 
some corner case? If not, the improvement is OK for me. Thanks.
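
For illustration, here is a minimal standalone sketch of the semantics under 
discussion (the class and method below are simplified stand-ins, not the 
actual Hadoop types):

{code:java}
// Hypothetical stand-in for the deleteAllowed() decision: both the
// missing-inode case and the root case return false, so both would
// surface as allowed=false under the proposed improvement.
import java.util.Arrays;
import java.util.List;

public class DeleteAllowedSketch {

  // Mirrors the shape of FSDirDeleteOp.deleteAllowed(iip): fewer than one
  // component or a null last inode means the path does not exist; exactly
  // one component means src is the root.
  static boolean deleteAllowed(List<String> components, boolean lastInodeExists) {
    if (components.size() < 1 || !lastInodeExists) {
      return false; // path does not exist
    } else if (components.size() == 1) {
      return false; // the root is not allowed to be deleted
    }
    return true;
  }

  public static void main(String[] args) {
    // Root delete, the case the PR targets: allowed=false.
    System.out.println("rm -r /     -> allowed="
        + deleteAllowed(Arrays.asList("/"), true));
    // Missing inode (iip.getLastINode() == null), the case asked about
    // above: the same condition also yields allowed=false.
    System.out.println("rm -r /nope -> allowed="
        + deleteAllowed(Arrays.asList("/", "nope"), false));
  }
}
{code}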


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 508346)
    Time Spent: 3h  (was: 2h 50m)

> Audit log record the unexpected allowed result when delete called
> -----------------------------------------------------------------
>
>                 Key: HDFS-15667
>                 URL: https://issues.apache.org/jira/browse/HDFS-15667
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.2.1, 3.4.0
>            Reporter: Baolong Mao
>            Assignee: Baolong Mao
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: screenshot-1.png, screenshot-2.png
>
>          Time Spent: 3h
>  Remaining Estimate: 0h
>
> I met this issue when removing the root directory; for removing a non-root, 
> non-empty directory, toRemovedBlocks isn't null and its toDeleteList size is 0.
>  !screenshot-1.png! 
> So when does it return null?
> This screenshot shows that if fileRemoved = -1, then toRemovedBlocks = null:
>  !screenshot-2.png! 
> And when deleteAllowed(iip) returns false, fileRemoved can be -1:
> {code:java}
> private static boolean deleteAllowed(final INodesInPath iip) {
>   if (iip.length() < 1 || iip.getLastINode() == null) {
>     if (NameNode.stateChangeLog.isDebugEnabled()) {
>       NameNode.stateChangeLog.debug(
>           "DIR* FSDirectory.unprotectedDelete: failed to remove "
>               + iip.getPath() + " because it does not exist");
>     }
>     return false;
>   } else if (iip.length() == 1) { // src is the root
>     NameNode.stateChangeLog.warn(
>         "DIR* FSDirectory.unprotectedDelete: failed to remove " +
>             iip.getPath() + " because the root is not allowed to be deleted");
>     return false;
>   }
>   return true;
> }
> {code}
> From the code of deleteAllowed, we can see that it returns false when src is 
> the root.
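> The chain is easier to see in a condensed, hypothetical form (the method 
> shapes are assumed from the screenshots, not copied from the source):
> {code:java}
> // Hypothetical condensed view of the chain in the screenshots
> // (simplified signatures; the real logic lives in FSDirDeleteOp):
> // deleteAllowed() == false -> fileRemoved == -1 -> toRemovedBlocks == null.
> long fileRemoved = deleteAllowed(iip)
>     ? unprotectedDelete(fsd, iip, context, mtime)
>     : -1;
> BlocksMapUpdateInfo toRemovedBlocks =
>     (fileRemoved >= 0) ? context.collectedBlocks() : null;
> {code}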
> So without this PR, when I execute *bin/hdfs dfs -rm -r /*,
> I get a confusing audit log line like the following:
> 2020-11-05 14:32:53,420 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditMessage(8102)) - allowed=true
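> A plausible sketch of the caller side that produces that line (the shape of 
> FSNamesystem#delete is assumed here, not quoted from the source):
> {code:java}
> // Assumed shape of the audit call in FSNamesystem#delete. Before the fix,
> // the flag passed to logAuditEvent() was not derived from the delete
> // result, so a rejected root delete still logged allowed=true.
> BlocksMapUpdateInfo toRemovedBlocks =
>     FSDirDeleteOp.delete(this, pc, src, recursive, logRetryCache);
> boolean ret = toRemovedBlocks != null;
> // pre-fix: logAuditEvent(true, operationName, src);
> logAuditEvent(ret, operationName, src); // with the PR: allowed=false for rm -r /
> {code}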



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
