[GitHub] [hadoop] mkuchenbecker commented on a diff in pull request #4967: HDFS-16791 WIP - client protocol and Filesystem apis implemented and …

2023-01-25 Thread via GitHub


mkuchenbecker commented on code in PR #4967:
URL: https://github.com/apache/hadoop/pull/4967#discussion_r1086894151


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java:
##
@@ -1477,5 +1478,18 @@ public void setStoragePolicy(Path path, String policyName)
         throws IOException {
       throw readOnlyMountTable("setStoragePolicy", path);
     }
+
+    @Override
+    public Path getEnclosingRoot(Path path) throws IOException {
+      InodeTree.ResolveResult res;
+      try {
+        res = fsState.resolve((path.toString()), true);
+      } catch (FileNotFoundException ex) {
+        throw new NotInMountpointException(path, "getEnclosingRoot");
+      }
+      Path fullPath = new Path(res.resolvedPath);
+      Path enclosingPath = res.targetFileSystem.getEnclosingRoot(path);
+      return enclosingPath.depth() > fullPath.depth() ? enclosingPath : fullPath;

Review Comment:
   In what cases is the full path returned, and how would it differ from the enclosing path?
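
   A minimal sketch of that comparison, with hypothetical values (assuming "/data" is a mount point whose target filesystem reports "/" as its own enclosing root):

   import org.apache.hadoop.fs.Path;

   public class EnclosingRootDepthSketch {
     public static void main(String[] args) {
       Path fullPath = new Path("/data");    // hypothetical resolved mount-point path, depth() == 1
       Path enclosingPath = new Path("/");   // hypothetical root from the target fs, depth() == 0
       // Mirrors the ternary in the diff: the deeper of the two is returned.
       Path result = enclosingPath.depth() > fullPath.depth() ? enclosingPath : fullPath;
       System.out.println(result);           // prints /data
     }
   }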



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:
##
@@ -4917,7 +4917,6 @@ public CompletableFuture build() throws IOException {
    */
   @InterfaceAudience.Public
   @InterfaceStability.Unstable
-  // Should this throw RuntimeException (instead of IO), so we can throw NotInMountpointException from viewfs/rbf?

Review Comment:
   Why remove this context? 



##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetEnclosingRoot.java:
##
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.contract;
+
+import java.io.IOException;
+import java.security.PrivilegedExceptionAction;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+public abstract class AbstractContractGetEnclosingRoot extends AbstractFSContractTestBase {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(AbstractContractGetEnclosingRoot.class);
+
+  @Override
+  public void setup() throws Exception {
+    super.setup();
+  }
+
+  @Test
+  public void testEnclosingRootEquivalence() throws IOException {
+    FileSystem fs = getFileSystem();
+    Path root = path("/");
+    Path foobar = path("/foo/bar");
+
+    assertEquals(fs.getEnclosingRoot(foobar), root);
+    assertEquals(fs.getEnclosingRoot(fs.getEnclosingRoot(foobar)), root);
+    assertEquals(fs.getEnclosingRoot(foobar), fs.getEnclosingRoot(root));
+
+    assertEquals(fs.getEnclosingRoot(path(foobar.toString())), root);
+    assertEquals(fs.getEnclosingRoot(fs.getEnclosingRoot(path(foobar.toString()))), root);
+    assertEquals(fs.getEnclosingRoot(path(foobar.toString())), fs.getEnclosingRoot(root));
+  }
+
+  @Test
+  public void testEnclosingRootPathExists() throws Exception {

Review Comment:
   I'm not sure I understand the value of this test vs. the other tests. Can you add some details about what this is testing? i.e., in almost all cases we want "/foo/bar" (or whatever path) to return "/" as the enclosing root. I'm not seeing how this is different.



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java:
##
@@ -1477,5 +1478,18 @@ public void setStoragePolicy(Path path, String policyName)
         throws IOException {
       throw readOnlyMountTable("setStoragePolicy", path);
     }
+
+    @Override
+    public Path getEnclosingRoot(Path path) throws IOException {
+      InodeTree.ResolveResult res;
+      try {
+        res = fsState.resolve((path.toString()), true);
+      } catch (FileNotFoundException ex) {
+        throw new NotInMountpointException(path, "getEnclosingRoot");

Review Comment:
   You should propagate the original error as the cause of the thrown error. 
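   
   A minimal sketch of how that could look (assuming NotInMountpointException leaves its cause uninitialized, so Throwable#initCause can attach the original exception):
   
   @Override
   public Path getEnclosingRoot(Path path) throws IOException {
     InodeTree.ResolveResult res;
     try {
       res = fsState.resolve(path.toString(), true);
     } catch (FileNotFoundException ex) {
       NotInMountpointException e =
           new NotInMountpointException(path, "getEnclosingRoot");
       e.initCause(ex);  // preserve the original failure for callers and logs
       throw e;
     }
     Path fullPath = new Path(res.resolvedPath);
     Path enclosingPath = res.targetFileSystem.getEnclosingRoot(path);
     return enclosingPath.depth() > fullPath.depth() ? enclosingPath : fullPath;
   }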



##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEnclosingRoot.java:
##
@@ -92,11 +92,14 @@ protected void setProvider() {
 
   @After
   public void teardown() {
-if 

[GitHub] [hadoop] mkuchenbecker commented on a diff in pull request #4967: HDFS-16791 WIP - client protocol and Filesystem apis implemented and …

2022-10-05 Thread GitBox


mkuchenbecker commented on code in PR #4967:
URL: https://github.com/apache/hadoop/pull/4967#discussion_r988403730


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java:
##
@@ -3898,4 +3898,31 @@ public DatanodeInfo[] getSlowDatanodeStats() throws IOException {
     return dfs.slowDatanodeReport();
   }
 
+  /* HDFS only */
+  public Path getEnclosingRoot(final Path path) throws IOException {
+    statistics.incrementReadOps(1);
+    storageStatistics.incrementOpCounter(OpType.GET_ENCLOSING_ROOT);
+    Preconditions.checkNotNull(path);
+    Path absF = fixRelativePart(path);
+    return new FileSystemLinkResolver<Path>() {
+      @Override
+      public Path doCall(final Path p) throws IOException {
+        return dfs.getEnclosingRoot(getPathName(p));
+      }
+
+      @Override
+      public Path next(final FileSystem fs, final Path p)
+          throws IOException {
+        if (fs instanceof DistributedFileSystem) {

Review Comment:
   Nit, I would reverse the condition and say:
   if not instanceof then throw
   // Logic
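   
   A sketch of what that could look like (the exception type and message for the non-DFS case are assumptions, following the guard-clause style used elsewhere in this class):
   
   @Override
   public Path next(final FileSystem fs, final Path p) throws IOException {
     // Guard clause first: reject non-HDFS filesystems, then the main logic with no extra nesting.
     if (!(fs instanceof DistributedFileSystem)) {
       throw new UnsupportedOperationException(
           "Cannot call getEnclosingRoot on a non-DistributedFileSystem: " + p);
     }
     return ((DistributedFileSystem) fs).getEnclosingRoot(p);
   }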
   



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java:
##
@@ -1919,6 +1927,14 @@ public Collection getAllStoragePolicies()
       }
       return allPolicies;
     }
+
+    @Override
+    public Path getEnclosingRoot(Path path) throws IOException {

Review Comment:
   They both override; you could consider a static helper to DRY. 
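   
   For example, a small shared helper (name and placement are hypothetical) that both overrides could delegate to once they have the resolved mount-point path and the target filesystem's enclosing root:
   
   /** Return the deeper of the mount-point path and the target filesystem's enclosing root. */
   static Path chooseEnclosingRoot(Path mountPointPath, Path targetEnclosingRoot) {
     return targetEnclosingRoot.depth() > mountPointPath.depth()
         ? targetEnclosingRoot
         : mountPointPath;
   }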



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -1935,6 +1935,21 @@ public DatanodeInfo[] getSlowDatanodeReport() throws IOException {
     return rpcServer.getSlowDatanodeReport(true, 0);
   }
 
+  @Override
+  public String getEnclosingRoot(String src) throws IOException {
+    Path mountPath = new Path("/");
+    if (subclusterResolver instanceof MountTableResolver) {
+      MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
+      if (mountTable.getMountPoint(src) != null) {
+        // unclear if this is the correct thing to do, probably depends on default mount point / link fallback
+        mountPath = new Path(mountTable.getMountPoint(src).getSourcePath());

Review Comment:
   getMountPoint can return null. 
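   
   One way to make that explicit is to look the entry up once and null-check the result (sketch; the MountTable type of the local variable follows the resolver's return type):
   
   Path mountPath = new Path("/");
   if (subclusterResolver instanceof MountTableResolver) {
     MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
     // Single lookup: avoids a second getMountPoint call that could return null
     // if the mount table changes between the check and the use.
     MountTable entry = mountTable.getMountPoint(src);
     if (entry != null) {
       mountPath = new Path(entry.getSourcePath());
     }
   }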



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto:
##
@@ -428,6 +428,14 @@ message GetPreferredBlockSizeResponseProto {
 message GetSlowDatanodeReportRequestProto {
 }
 
+message GetEnclosingRootRequestProto {
+  required string filename = 1;

Review Comment:
   I'd use optional. Required has been deprecated in Proto3. 
   
   
https://stackoverflow.com/questions/31801257/why-required-and-optional-is-removed-in-protocol-buffers-3



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java:
##
@@ -1888,4 +1889,11 @@ BatchedEntries listOpenFiles(long prevId,
   @ReadOnly
   DatanodeInfo[] getSlowDatanodeReport() throws IOException;
 
+  /**
+   * Get the enclosing root for a path.
+   */
+  @Idempotent
+  @ReadOnly(isCoordinated = true)
+  String getEnclosingRoot(String src) throws IOException;

Review Comment:
   +1, the Path is getting implicitly converted to a String here. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
