[GitHub] [hadoop] taklwu commented on a diff in pull request #5553: HADOOP-18671 Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem

2023-04-20 Thread via GitHub


taklwu commented on code in PR #5553:
URL: https://github.com/apache/hadoop/pull/5553#discussion_r1173200395


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LeaseRecoverable.java:
##
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+
+/**
+ * Whether the given Path of the FileSystem has the capability to perform
+ * lease recovery.
+ */
+public interface LeaseRecoverable {
+
+  /**
+   * Start the lease recovery of a file.
+   *
+   * @param file path to a file
+   * @return true if the file is already closed, and it does not require lease recovery
+   * @throws IOException if an error occurs during lease recovery
+   */
+  default boolean recoverLease(Path file) throws IOException {
+    throw new IOException("This file system does not support lease recovery for Path " + file);

Review Comment:
   let's address your comment here as well, I will make the change soon
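
   For context, a minimal sketch of how a caller might use the new API, assuming the LeaseRecoverable interface above; the helper name, the Configuration parameter, and the fallback behaviour are illustrative only, not part of this PR:

       // Sketch: only filesystems that implement LeaseRecoverable
       // (e.g. DistributedFileSystem in this PR) can recover a lease.
       static boolean tryRecoverLease(Configuration conf, Path file) throws IOException {
         FileSystem fs = file.getFileSystem(conf);
         if (fs instanceof LeaseRecoverable) {
           // true means the file is already closed and needs no recovery
           return ((LeaseRecoverable) fs).recoverLease(file);
         }
         return false; // this filesystem does not support lease recovery
       }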




[GitHub] [hadoop] taklwu commented on a diff in pull request #5553: HADOOP-18671 Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem

2023-04-17 Thread via GitHub


taklwu commented on code in PR #5553:
URL: https://github.com/apache/hadoop/pull/5553#discussion_r1169365930


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileSystemContract.java:
##
@@ -137,4 +142,30 @@ public void testRenameNonExistentPath() throws Exception {
 () -> super.testRenameNonExistentPath());
 
   }
+
+  @Test
+  public void testFileSystemCapabilities() throws Exception {

Review Comment:
   Yeah, that's why I chose to use a `default` implementation when declaring the interface, so any filesystem that does not override it will throw UnsupportedOperationException.

   But we should still change those filesystem expectations; they do not extend or implement those interfaces for now. (We don't know whether anyone will want to build new features on top of them in the future, but it is very unlikely for s3a and abfs.)
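
   As a hedged illustration (not the PR's actual test), the changed s3a expectation could look roughly like this, assuming the LeaseRecoverable interface and LEASE_RECOVERABLE capability constant from this PR plus the fs field and path() helper of the contract test base class:

       @Test
       public void testLeaseRecoveryNotSupported() throws Exception {
         Path path = path("testLeaseRecoveryNotSupported");
         // for now s3a neither implements the interface nor reports the capability
         assertFalse("s3a should not implement LeaseRecoverable",
             fs instanceof LeaseRecoverable);
         assertFalse("s3a should not advertise " + LEASE_RECOVERABLE,
             fs.hasPathCapability(path, LEASE_RECOVERABLE));
       }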



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestFileSystemInitialization.java:
##
@@ -105,5 +108,27 @@ public void testFileSystemCapabilities() throws Throwable {
 ETAGS_PRESERVED_IN_RENAME, etagsAcrossRename,
 FS_ACLS, acls, fs)
 .isEqualTo(acls);
+
+    final boolean leaseRecovery = fs.hasPathCapability(p, LEASE_RECOVERABLE);

Review Comment:
   So if you prefer that I remove them, that would save a bit on the PR, but let me wait for your reply first.
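
   For reference, a hedged sketch of how that check might be asserted in the AssertJ style used earlier in this test; the expected value is illustrative, since whether abfs should report the capability is exactly what this thread is discussing:

       final boolean leaseRecovery = fs.hasPathCapability(p, LEASE_RECOVERABLE);
       assertThat(leaseRecovery)
           .describedAs("path capability %s of %s", LEASE_RECOVERABLE, fs)
           .isFalse();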



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java:
##
@@ -1633,6 +1634,53 @@ public DatanodeInfo[] getDataNodeStats(final DatanodeReportType type)
* @see org.apache.hadoop.hdfs.protocol.ClientProtocol#setSafeMode(
*HdfsConstants.SafeModeAction,boolean)
*/
+  @Override
+  public boolean setSafeMode(SafeModeAction action)
+      throws IOException {
+    return setSafeMode(action, false);
+  }
+
+  /**
+   * Enter, leave or get safe mode.
+   *
+   * @param action
+   *  One of SafeModeAction.ENTER, SafeModeAction.LEAVE and
+   *  SafeModeAction.GET
+   * @param isChecked
+   *  If true check only for Active NNs status, else check first NN's
+   *  status
+   */
+  @Override
+  public boolean setSafeMode(SafeModeAction action, boolean isChecked)
+      throws IOException {
+    return dfs.setSafeMode(convertToClientProtocolSafeModeAction(action), isChecked);
+  }
+
+  private HdfsConstants.SafeModeAction convertToClientProtocolSafeModeAction(

Review Comment:
   My understanding from reading the code is that webhdfs does not support entering or leaving safe mode, so we can keep this function here, but making it static would be good.
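
   A minimal sketch of what that static helper could look like; the fs.SafeModeAction constants (ENTER, LEAVE, GET) and the fallback behaviour are assumptions drawn from this diff, not the PR's final code:

       private static HdfsConstants.SafeModeAction convertToClientProtocolSafeModeAction(
           SafeModeAction action) {
         // map the new FileSystem-level enum onto the HDFS ClientProtocol enum
         switch (action) {
         case ENTER:
           return HdfsConstants.SafeModeAction.SAFEMODE_ENTER;
         case LEAVE:
           return HdfsConstants.SafeModeAction.SAFEMODE_LEAVE;
         case GET:
           return HdfsConstants.SafeModeAction.SAFEMODE_GET;
         default:
           throw new UnsupportedOperationException(
               "Unsupported safe mode action " + action);
         }
       }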


