[ https://issues.apache.org/jira/browse/HADOOP-18671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713340#comment-17713340 ]
ASF GitHub Bot commented on HADOOP-18671:
-----------------------------------------

taklwu commented on code in PR #5553:
URL: https://github.com/apache/hadoop/pull/5553#discussion_r1169365930

##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileSystemContract.java:
##########
@@ -137,4 +142,30 @@ public void testRenameNonExistentPath() throws Exception {
         () -> super.testRenameNonExistentPath());
   }
+
+  @Test
+  public void testFileSystemCapabilities() throws Exception {

Review Comment:
   Yeah, that's why I chose a `default` implementation when declaring the interface, so any filesystem that does not override it will throw UnsupportedOperationException. Still, we should change those filesystem expectations: those filesystems are not extending or implementing the interfaces for now. (We don't know whether anyone will want to build a new feature on top of them in the future, but it is unlikely for s3a and abfs.)

##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestFileSystemInitialization.java:
##########
@@ -105,5 +108,27 @@ public void testFileSystemCapabilities() throws Throwable {
         ETAGS_PRESERVED_IN_RENAME, etagsAcrossRename,
         FS_ACLS, acls, fs)
         .isEqualTo(acls);
+
+    final boolean leaseRecovery = fs.hasPathCapability(p, LEASE_RECOVERABLE);

Review Comment:
   So... if you prefer me to remove them, that would save a bit on the PR, but let me wait for your reply first.

##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java:
##########
@@ -1633,6 +1634,53 @@ public DatanodeInfo[] getDataNodeStats(final DatanodeReportType type)
    * @see org.apache.hadoop.hdfs.protocol.ClientProtocol#setSafeMode(
    * HdfsConstants.SafeModeAction,boolean)
    */
+  @Override
+  public boolean setSafeMode(SafeModeAction action)
+      throws IOException {
+    return setSafeMode(action, false);
+  }
+
+  /**
+   * Enter, leave or get safe mode.
+   *
+   * @param action
+   *          One of SafeModeAction.ENTER, SafeModeAction.LEAVE and
+   *          SafeModeAction.GET.
+   * @param isChecked
+   *          If true check only for Active NNs status, else check first NN's
+   *          status.
+   */
+  @Override
+  public boolean setSafeMode(SafeModeAction action, boolean isChecked)
+      throws IOException {
+    return dfs.setSafeMode(convertToClientProtocolSafeModeAction(action), isChecked);
+  }
+
+  private HdfsConstants.SafeModeAction convertToClientProtocolSafeModeAction(

Review Comment:
   From my reading of the code, webhdfs does not support entering or leaving safe mode, so we can keep this function here, but making it static would be good.

> Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem
> --------------------------------------------------------------------
>
>                 Key: HADOOP-18671
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18671
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs
>            Reporter: Wei-Chiu Chuang
>            Priority: Major
>              Labels: pull-request-available
>
> We are in the midst of enabling HBase and Solr to run on Ozone.
> An obstacle is that HBase relies heavily on HDFS APIs and semantics for its
> Write Ahead Log (WAL) file (similarly, for Solr's transaction log). We
> propose to push up these HDFS APIs, i.e. recoverLease(), setSafeMode(),
> isFileClosed(), to the FileSystem abstraction so that HBase and other
> applications do not need to take on an Ozone dependency at compile time.
> This work will (hopefully) enable HBase to run on other storage system
> implementations in the future.
>
> There are other HDFS features that HBase uses, including hedged read and
> favored nodes. Those are FS-specific optimizations and are not critical to
> enabling HBase on Ozone.
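The first review comment above describes declaring the new capability as an interface whose `default` methods throw `UnsupportedOperationException`, so that only filesystems that actually support the feature need to override anything. A minimal sketch of that pattern, using illustrative names (`LeaseRecoverableSketch`, `RecoveringFs`, `ObjectStoreFs` are hypothetical, not the exact Hadoop API):

```java
import java.io.IOException;

// Sketch of the "default method throws" pattern: filesystems that do not
// override recoverLease() inherit an implementation that rejects the call.
interface LeaseRecoverableSketch {
  default boolean recoverLease(String path) throws IOException {
    throw new UnsupportedOperationException(
        getClass().getName() + " does not support recoverLease");
  }
}

// A filesystem that supports lease recovery opts in by overriding.
class RecoveringFs implements LeaseRecoverableSketch {
  @Override
  public boolean recoverLease(String path) {
    return true; // pretend the lease was recovered
  }
}

// An object store (e.g. the s3a/abfs case discussed above) implements the
// interface without overriding and inherits the throwing default.
class ObjectStoreFs implements LeaseRecoverableSketch {
}

public class DefaultMethodDemo {
  public static void main(String[] args) throws IOException {
    System.out.println(new RecoveringFs().recoverLease("/wal/log1"));
    try {
      new ObjectStoreFs().recoverLease("/wal/log1");
    } catch (UnsupportedOperationException e) {
      System.out.println("unsupported");
    }
  }
}
```

This is why the comment argues the tests should not expect s3a or abfs to implement the interface: callers are instead expected to probe support (for example via `hasPathCapability`, as in the abfs test hunk) before invoking the operation.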
--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org