[ 
https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=582408&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-582408
 ]

ASF GitHub Bot logged work on HDFS-15961:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/Apr/21 11:44
            Start Date: 14/Apr/21 11:44
    Worklog Time Spent: 10m 
      Work Description: bshashikant commented on pull request #2881:
URL: https://github.com/apache/hadoop/pull/2881#issuecomment-819454476


   > > I think we should hold this off; I think the code has issues, as I 
said. Earlier I thought there was some catch, but now I don't think so; it is 
simply misbehaving. Ideally such features should go into a branch first....
   > 
   > @ayushtkn , IMO it's not a misbehaviour here. Once the snapshotTrash 
feature is enabled, the .Trash directory has to be present for the feature to 
work. Since there can be pre-existing snapshottable directories, one way to 
create the .Trash inside them was to do it right after startup completes. The 
other solution was to explicitly provision the .Trash using a command-line 
option. The choice was made to do it automatically on restart, and to fail the 
Namenode in case any issues occur.
   > 
   > The other option is not to fail the namenode but to log a warning; in 
that case, any Trash operations later performed inside the snapshottable root 
will fail.
   > 
   > The discussion is here: [#2682 
(comment)](https://github.com/apache/hadoop/pull/2682#discussion_r570461526)
   
   Coming to think of it, if providing an external command for admins to 
create the Trash directory is feasible and makes sense, I think it's OK to 
remove the NN startup logic that creates Trash directories inside 
snapshottable roots.
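
   For context, here is a minimal sketch (not the actual HDFS-15961 change) of 
what the "log a warning instead of failing the Namenode" alternative described 
above could look like, written against the public DistributedFileSystem API. 
The class and method names, the logger, and the 1777 permission are 
assumptions made for illustration only.

{code:java}
// Sketch only (not the HDFS-15961 patch): best-effort provisioning of
// ".Trash" under every existing snapshottable root. A failure for one
// root is logged instead of terminating the NameNode; Trash operations
// under that root will keep failing until it is provisioned.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SnapshotTrashProvisioner { // hypothetical class name
  private static final Logger LOG =
      LoggerFactory.getLogger(SnapshotTrashProvisioner.class);

  public static void provisionSnapshotTrash(Configuration conf)
      throws IOException {
    // Assumes fs.defaultFS points at HDFS so the cast below is safe.
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    SnapshottableDirectoryStatus[] roots = dfs.getSnapshottableDirListing();
    if (roots == null) {
      return; // no snapshottable directories exist yet
    }
    for (SnapshottableDirectoryStatus root : roots) {
      // FileSystem.TRASH_PREFIX is ".Trash"
      Path trash = new Path(root.getFullPath(), FileSystem.TRASH_PREFIX);
      try {
        if (!dfs.exists(trash)) {
          // 1777 (world-writable + sticky bit) is assumed here, mirroring
          // typical trash-root permissions; the real feature may differ.
          dfs.mkdirs(trash, new FsPermission((short) 01777));
        }
      } catch (IOException e) {
        // Warn-and-continue alternative discussed above, instead of
        // ExitUtil.terminate(1, ...) which takes the whole NameNode down.
        LOG.warn("Could not provision {} for snapshottable root {}",
            trash, root.getFullPath(), e);
      }
    }
  }
}
{code}

   Usage would be a single call to provisionSnapshotTrash(conf) during or 
after startup; whether that runs inside the NameNode or behind an external 
admin command is exactly the design choice being discussed above.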


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 582408)
    Time Spent: 2h 20m  (was: 2h 10m)

> standby namenode failed to start when ordered snapshot deletion is enabled 
> while having snapshottable directories
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-15961
>                 URL: https://issues.apache.org/jira/browse/HDFS-15961
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: snapshots
>    Affects Versions: 3.4.0
>            Reporter: Nilotpal Nandi
>            Assignee: Shashikant Banerjee
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: Could not provision Trash directory for existing snapshottable directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO org.apache.ranger.audit.provider.AuditProviderFactory: ==> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: Signalling async audit cleanup to start.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
