xinglin commented on code in PR #5071:
URL: https://github.com/apache/hadoop/pull/5071#discussion_r1003605800


##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##########
@@ -723,6 +723,49 @@ public boolean delete(String src, boolean recursive) throws IOException {
     }
   }
 
+  // Create missing user home dirs for trash paths.
+  // We assume the router is running with super-user privilege (can create
+  // user home dir in /user dir).
+  private void createUserHomeForTrashPath(List<RemoteLocation> locations)
+      throws IOException {
+    List<RemoteLocation> missingUserHomes = new ArrayList<>();
+
+    // Identify missing trash roots
+    for (RemoteLocation loc : locations) {
+
+      String path = loc.getDest();
+      // Continue if not a trash path
+      if (!MountTableResolver.isTrashPath(path)) {
+        continue;
+      }
+
+      // Check whether user home dir exists at the destination namespace
+      String trashRoot = MountTableResolver.getTrashRoot();
+      String userHome = new Path(trashRoot).getParent().toUri().getPath();
+      RemoteLocation userHomeLoc = new RemoteLocation(loc, userHome);
+      RemoteMethod method = new RemoteMethod("getFileInfo",
+          new Class<?>[] {String.class}, new RemoteParam());
+      HdfsFileStatus ret = rpcClient.invokeSingle(userHomeLoc, method,
+          HdfsFileStatus.class);
+      if (ret == null) {
+        missingUserHomes.add(userHomeLoc);
+      }
+    }
+
+    if (!missingUserHomes.isEmpty()) {

Review Comment:
   Hi @mkuchenbecker,
   
   When `missingUserHomes` is empty, all user home dirs already exist, so there 
is nothing left to create.
   
   There is a for loop inside `invokeSequentialInternal()` that iterates over 
the locations, and `invokeSequentialAsRouter()` calls 
`invokeSequentialInternal()`. I used `invokeSequential` because I believe a 
trash path will most likely live at a single namenode.
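
   As a self-contained sketch of the path derivation in the diff (the user 
home dir is taken as the parent of the trash root), using `java.nio` instead 
of Hadoop's `Path` so it runs without Hadoop on the classpath; the user name 
`alice` and the `/user/<name>/.Trash` layout are illustrative assumptions, not 
taken from this PR:
   
   ```java
   import java.nio.file.Paths;

   public class TrashRootParentDemo {
     public static void main(String[] args) {
       // Hypothetical trash root for user "alice" (assumed layout).
       String trashRoot = "/user/alice/.Trash";
       // Mirrors new Path(trashRoot).getParent().toUri().getPath() from the
       // diff: the user home dir is the parent directory of the trash root.
       String userHome = Paths.get(trashRoot).getParent().toString();
       System.out.println(userHome); // /user/alice
     }
   }
   ```
   
   This is why a missing home dir can be detected with a single `getFileInfo` 
on the derived parent path before any `mkdirs`-style call is attempted.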



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

