jojochuang commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623513249



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
##########
@@ -1527,34 +1535,49 @@ public Response delete(
       @QueryParam(RecursiveParam.NAME) @DefaultValue(RecursiveParam.DEFAULT)
           final RecursiveParam recursive,
      @QueryParam(SnapshotNameParam.NAME) @DefaultValue(SnapshotNameParam.DEFAULT)
-          final SnapshotNameParam snapshotName
+          final SnapshotNameParam snapshotName,
+      @QueryParam(DeleteSkipTrashParam.NAME)
+      @DefaultValue(DeleteSkipTrashParam.DEFAULT)
+          final DeleteSkipTrashParam skiptrash
       ) throws IOException, InterruptedException {
 
-    init(ugi, delegation, username, doAsUser, path, op, recursive, snapshotName);
+    init(ugi, delegation, username, doAsUser, path, op, recursive,
+        snapshotName, skiptrash);
 
-    return doAs(ugi, new PrivilegedExceptionAction<Response>() {
-      @Override
-      public Response run() throws IOException {
-          return delete(ugi, delegation, username, doAsUser,
-              path.getAbsolutePath(), op, recursive, snapshotName);
-      }
-    });
+    return doAs(ugi, () -> delete(
+        path.getAbsolutePath(), op, recursive, snapshotName, skiptrash));
   }
 
   protected Response delete(
-      final UserGroupInformation ugi,
-      final DelegationParam delegation,
-      final UserParam username,
-      final DoAsParam doAsUser,
       final String fullpath,
       final DeleteOpParam op,
       final RecursiveParam recursive,
-      final SnapshotNameParam snapshotName
-      ) throws IOException {
+      final SnapshotNameParam snapshotName,
+      final DeleteSkipTrashParam skipTrash) throws IOException {
     final ClientProtocol cp = getRpcClientProtocol();
 
     switch(op.getValue()) {
     case DELETE: {
+      Configuration conf =
+          (Configuration) context.getAttribute(JspHelper.CURRENT_CONF);
+      long trashInterval =
+          conf.getLong(FS_TRASH_INTERVAL_KEY, FS_TRASH_INTERVAL_DEFAULT);
+      if (trashInterval > 0 && !skipTrash.getValue()) {
+        LOG.info("{} is {} , trying to archive {} instead of removing",
+            FS_TRASH_INTERVAL_KEY, trashInterval, fullpath);
+        org.apache.hadoop.fs.Path path =
+            new org.apache.hadoop.fs.Path(fullpath);
+        boolean movedToTrash = Trash.moveToAppropriateTrash(
+            FileSystem.get(conf), path, conf);

Review comment:
      This could lead to an OOM. We should not create a FileSystem object inside the NameNode.
      See https://issues.apache.org/jira/browse/HDFS-15052 for a similar problem.
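
      To make the concern concrete: FileSystem.get() caches instances keyed by
      (scheme, authority, UGI), so a NameNode serving many distinct callers
      would accumulate one cached client per UGI and never release it. A
      minimal standalone sketch of that cache behavior and one way to release
      the instances (illustrative only, not part of this PR; the class name and
      user names are made up):

          import java.security.PrivilegedExceptionAction;

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.security.UserGroupInformation;

          public class FsCacheGrowthSketch {
            public static void main(String[] args) throws Exception {
              final Configuration conf = new Configuration();
              for (int i = 0; i < 3; i++) {
                UserGroupInformation ugi =
                    UserGroupInformation.createRemoteUser("user" + i);
                // The UGI is part of the FileSystem cache key, so each distinct
                // caller gets (and pins) its own cached instance.
                FileSystem fs = ugi.doAs(
                    (PrivilegedExceptionAction<FileSystem>) () -> FileSystem.get(conf));
                System.out.println("cached instance: " + System.identityHashCode(fs));
                // If a per-UGI FileSystem is truly unavoidable, release it when done:
                FileSystem.closeAllForUGI(ugi);
              }
            }
          }

      Inside the NameNode, one alternative (if a FileSystem really is needed
      here) is to bypass the cache via fs.hdfs.impl.disable.cache and close the
      instance in a finally block; another is to perform the trash move through
      the ClientProtocol handle (cp) that this method already holds.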



