[hadoop] branch branch-3.1 updated: YARN-10363. TestRMAdminCLI.testHelp is failing in branch-2.10. Contributed by Bilwa S T.

2020-07-31 Thread ebadger
This is an automated email from the ASF dual-hosted git repository.

ebadger pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 87816e8  YARN-10363. TestRMAdminCLI.testHelp is failing in branch-2.10. Contributed by Bilwa S T.
87816e8 is described below

commit 87816e8a51e3f30916972aae23ac4b0fed7fbfff
Author: Eric Badger 
AuthorDate: Fri Jul 31 22:39:39 2020 +

YARN-10363. TestRMAdminCLI.testHelp is failing in branch-2.10. Contributed by Bilwa S T.
---
 .../java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java | 7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
index 1f4b493..91261f7 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
@@ -775,11 +775,6 @@ public class TestRMAdminCLI {
   "Usage: yarn rmadmin [-getServiceState <serviceId>]", dataErr, 0);
   testError(new String[] { "-help", "-checkHealth" },
   "Usage: yarn rmadmin [-checkHealth <serviceId>]", dataErr, 0);
-  testError(new String[] { "-help", "-failover" },
-  "Usage: yarn rmadmin " +
-  "[-failover [--forcefence] [--forceactive] " +
-  "<serviceId> <serviceId>]",
-  dataErr, 0);
 
   testError(new String[] { "-help", "-badParameter" },
   "Usage: yarn rmadmin", dataErr, 0);
@@ -1062,7 +1057,7 @@ public class TestRMAdminCLI {
 ByteArrayOutputStream errOutBytes = new ByteArrayOutputStream();
 rmAdminCLIWithHAEnabled.setErrOut(new PrintStream(errOutBytes));
 try {
-  String[] args = { "-failover" };
+  String[] args = { "-transitionToActive" };
   assertEquals(-1, rmAdminCLIWithHAEnabled.run(args));
   String errOut = new String(errOutBytes.toByteArray(), Charsets.UTF_8);
   errOutBytes.reset();
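The change above drops the `-failover` help assertion (RMAdminCLI no longer advertises `-failover`) and moves the HA error-path check to `-transitionToActive`. The test relies on redirecting the CLI's error stream into a buffer and asserting on the captured usage text. Below is a minimal, self-contained sketch of that capture pattern; the `Cli` class and its usage string are hypothetical stand-ins, not Hadoop's actual RMAdminCLI.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.nio.charset.StandardCharsets;

// Sketch of the stderr-capture pattern used by tests like TestRMAdminCLI:
// point the CLI's error stream at a byte buffer, run a command that fails,
// then assert on the captured usage message.
public class UsageCaptureDemo {
    // Hypothetical CLI: unknown/incomplete commands print usage and return -1.
    static class Cli {
        private PrintStream errOut = System.err;
        void setErrOut(PrintStream err) { this.errOut = err; }
        int run(String[] args) {
            errOut.println("Usage: yarn rmadmin [-transitionToActive <serviceId>]");
            return -1;
        }
    }

    public static String capturedUsage(String[] args) {
        ByteArrayOutputStream errBytes = new ByteArrayOutputStream();
        Cli cli = new Cli();
        cli.setErrOut(new PrintStream(errBytes));
        if (cli.run(args) != -1) {
            throw new AssertionError("expected failure exit code");
        }
        return new String(errBytes.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.print(capturedUsage(new String[] {"-transitionToActive"}));
    }
}
```

The buffer is reset between cases (as `errOutBytes.reset()` does above) so each assertion sees only its own command's output.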


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-2.10 updated: YARN-10363. TestRMAdminCLI.testHelp is failing in branch-2.10. Contributed by Bilwa S T.

2020-07-31 Thread ebadger

ebadger pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new c84ae8f  YARN-10363. TestRMAdminCLI.testHelp is failing in branch-2.10. Contributed by Bilwa S T.
c84ae8f is described below

commit c84ae8f3d9cfa2edd3a298636ba940f69a338e6a
Author: Eric Badger 
AuthorDate: Fri Jul 31 22:40:41 2020 +

YARN-10363. TestRMAdminCLI.testHelp is failing in branch-2.10. Contributed by Bilwa S T.

(cherry picked from commit 87816e8a51e3f30916972aae23ac4b0fed7fbfff)
---
 .../java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java | 7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
index 013c227..10f8a25 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
@@ -582,11 +582,6 @@ public class TestRMAdminCLI {
   "Usage: yarn rmadmin [-getServiceState <serviceId>]", dataErr, 0);
   testError(new String[] { "-help", "-checkHealth" },
   "Usage: yarn rmadmin [-checkHealth <serviceId>]", dataErr, 0);
-  testError(new String[] { "-help", "-failover" },
-  "Usage: yarn rmadmin " +
-  "[-failover [--forcefence] [--forceactive] " +
-  "<serviceId> <serviceId>]",
-  dataErr, 0);
 
   testError(new String[] { "-help", "-badParameter" },
   "Usage: yarn rmadmin", dataErr, 0);
@@ -867,7 +862,7 @@ public class TestRMAdminCLI {
 ByteArrayOutputStream errOutBytes = new ByteArrayOutputStream();
 rmAdminCLIWithHAEnabled.setErrOut(new PrintStream(errOutBytes));
 try {
-  String[] args = { "-failover" };
+  String[] args = { "-transitionToActive" };
   assertEquals(-1, rmAdminCLIWithHAEnabled.run(args));
   String errOut = new String(errOutBytes.toByteArray(), Charsets.UTF_8);
   errOutBytes.reset();





[hadoop] branch trunk updated: HADOOP-17137. ABFS: Makes the test cases in ITestAbfsNetworkStatistics agnostic

2020-07-31 Thread dazhou

dazhou pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a7fda2e  HADOOP-17137. ABFS: Makes the test cases in ITestAbfsNetworkStatistics agnostic
a7fda2e is described below

commit a7fda2e38f2a06e18c2929dff0be978d5e0ef9d5
Author: bilaharith <52483117+bilahar...@users.noreply.github.com>
AuthorDate: Sat Aug 1 00:57:57 2020 +0530

HADOOP-17137. ABFS: Makes the test cases in ITestAbfsNetworkStatistics agnostic

- Contributed by Bilahari T H
---
 .../fs/azurebfs/ITestAbfsNetworkStatistics.java| 63 +-
 1 file changed, 38 insertions(+), 25 deletions(-)

diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java
index b2e1301..9d76fb0 100644
--- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java
+++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java
@@ -32,6 +32,9 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream;
 import org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation;
 
+import static org.apache.hadoop.fs.azurebfs.AbfsStatistic.CONNECTIONS_MADE;
+import static org.apache.hadoop.fs.azurebfs.AbfsStatistic.SEND_REQUESTS;
+
 public class ITestAbfsNetworkStatistics extends AbstractAbfsIntegrationTest {
 
   private static final Logger LOG =
@@ -56,6 +59,11 @@ public class ITestAbfsNetworkStatistics extends AbstractAbfsIntegrationTest {
 String testNetworkStatsString = "http_send";
 long connectionsMade, requestsSent, bytesSent;
 
+metricMap = fs.getInstrumentationMap();
+long connectionsMadeBeforeTest = metricMap
+.get(CONNECTIONS_MADE.getStatName());
+long requestsMadeBeforeTest = metricMap.get(SEND_REQUESTS.getStatName());
+
 /*
  * Creating AbfsOutputStream will result in 1 connection made and 1 send
  * request.
@@ -75,27 +83,26 @@ public class ITestAbfsNetworkStatistics extends AbstractAbfsIntegrationTest {
   /*
* Testing the network stats with 1 write operation.
*
-   * connections_made : 3(getFileSystem()) + 1(AbfsOutputStream) + 2(flush).
+   * connections_made : (connections made above) + 2(flush).
*
-   * send_requests : 1(getFileSystem()) + 1(AbfsOutputStream) + 2(flush).
+   * send_requests : (requests sent above) + 2(flush).
*
* bytes_sent : bytes wrote in AbfsOutputStream.
*/
-  if (fs.getAbfsStore().isAppendBlobKey(fs.makeQualified(sendRequestPath).toString())) {
+  long extraCalls = 0;
+  if (!fs.getAbfsStore()
+  .isAppendBlobKey(fs.makeQualified(sendRequestPath).toString())) {
 // no network calls are made for hflush in case of appendblob
-connectionsMade = assertAbfsStatistics(AbfsStatistic.CONNECTIONS_MADE,
-5, metricMap);
-requestsSent = assertAbfsStatistics(AbfsStatistic.SEND_REQUESTS, 3,
-metricMap);
-  } else {
-connectionsMade = assertAbfsStatistics(AbfsStatistic.CONNECTIONS_MADE,
-6, metricMap);
-requestsSent = assertAbfsStatistics(AbfsStatistic.SEND_REQUESTS, 4,
-metricMap);
+extraCalls++;
   }
+  long expectedConnectionsMade = connectionsMadeBeforeTest + extraCalls + 2;
+  long expectedRequestsSent = requestsMadeBeforeTest + extraCalls + 2;
+  connectionsMade = assertAbfsStatistics(CONNECTIONS_MADE,
+  expectedConnectionsMade, metricMap);
+  requestsSent = assertAbfsStatistics(SEND_REQUESTS, expectedRequestsSent,
+  metricMap);
   bytesSent = assertAbfsStatistics(AbfsStatistic.BYTES_SENT,
   testNetworkStatsString.getBytes().length, metricMap);
-
 }
 
    // To close the AbfsOutputStream 1 connection is made and 1 request is sent.
@@ -135,14 +142,14 @@ public class ITestAbfsNetworkStatistics extends AbstractAbfsIntegrationTest {
*/
   if (fs.getAbfsStore().isAppendBlobKey(fs.makeQualified(sendRequestPath).toString())) {
 // no network calls are made for hflush in case of appendblob
-assertAbfsStatistics(AbfsStatistic.CONNECTIONS_MADE,
+assertAbfsStatistics(CONNECTIONS_MADE,
 connectionsMade + 1 + LARGE_OPERATIONS, metricMap);
-assertAbfsStatistics(AbfsStatistic.SEND_REQUESTS,
+assertAbfsStatistics(SEND_REQUESTS,
 requestsSent + 1 + LARGE_OPERATIONS, metricMap);
   } else {
-assertAbfsStatistics(AbfsStatistic.CONNECTIONS_MADE,
+assertAbfsStatistics(CONNECTIONS_MADE,
 connectionsMade + 1 + LARGE_OPERATIONS * 2, metricMap);
-
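The HADOOP-17137 diff above makes the statistics assertions "agnostic" by snapshotting the counters before the test body and asserting on deltas, instead of hard-coding absolute values (which broke whenever setup cost a different number of requests). A minimal sketch of that delta pattern, using a plain map in place of the real ABFS instrumentation (names here are illustrative, not the ABFS API):

```java
import java.util.HashMap;
import java.util.Map;

// Delta-based metric assertion: record a baseline, run the operation,
// then assert baseline + expected-delta rather than an absolute count.
public class MetricDeltaDemo {
    public static long expected(Map<String, Long> before, String stat,
            long delta) {
        return before.getOrDefault(stat, 0L) + delta;
    }

    public static void main(String[] args) {
        Map<String, Long> before = new HashMap<>();
        before.put("connections_made", 3L); // e.g. whatever setup cost
        Map<String, Long> after = new HashMap<>();
        after.put("connections_made", 5L);  // stream open + flush added 2

        // The assertion survives changes to setup cost because it is relative.
        long want = expected(before, "connections_made", 2);
        if (after.get("connections_made") != want) {
            throw new AssertionError("unexpected connection count");
        }
        System.out.println("delta assertion passed");
    }
}
```

This is why the patch reads `connectionsMadeBeforeTest` and `requestsMadeBeforeTest` up front, then adds `extraCalls + 2` instead of asserting the literal 5 or 6.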

[hadoop] branch branch-3.1 updated: HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs. (#2160). Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 1768618  HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs. (#2160). Contributed by Uma Maheswara Rao G.
1768618 is described below

commit 1768618ab948fbd0cfdfa481a2ece124e10e33ec
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jul 21 23:29:10 2020 -0700

HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs. (#2160). Contributed by Uma Maheswara Rao G.

(cherry picked from commit ac9a07b51aefd0fd3b4602adc844ab0f172835e3)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 27 +++---
 .../src/site/markdown/ViewFsOverloadScheme.md  |  2 ++
 3 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 484581c..b441483 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -292,7 +292,7 @@ public class ViewFileSystem extends FileSystem {
   myUri = new URI(getScheme(), authority, "/", null, null);
   boolean initingUriAsFallbackOnNoMounts =
   !FsConstants.VIEWFS_TYPE.equals(getType());
-  fsState = new InodeTree<FileSystem>(conf, tableName, theUri,
+  fsState = new InodeTree<FileSystem>(conf, tableName, myUri,
   initingUriAsFallbackOnNoMounts) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
index 300fdd8..7afc789 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
@@ -127,19 +127,30 @@ public class TestViewFsOverloadSchemeListStatus {
 
   /**
* Tests that ViewFSOverloadScheme should consider initialized fs as fallback
-   * if there are no mount links configured.
+   * if there are no mount links configured. It should add fallback with the
+   * chrootedFS at it's uri's root.
*/
   @Test(timeout = 3)
   public void testViewFSOverloadSchemeWithoutAnyMountLinks() throws Exception {
-try (FileSystem fs = FileSystem.get(TEST_DIR.toPath().toUri(), conf)) {
+Path initUri = new Path(TEST_DIR.toURI().toString(), "init");
+try (FileSystem fs = FileSystem.get(initUri.toUri(), conf)) {
   ViewFileSystemOverloadScheme vfs = (ViewFileSystemOverloadScheme) fs;
   assertEquals(0, vfs.getMountPoints().length);
-  Path testFallBack = new Path("test", FILE_NAME);
-  assertTrue(vfs.mkdirs(testFallBack));
-  FileStatus[] status = vfs.listStatus(testFallBack.getParent());
-  assertEquals(FILE_NAME, status[0].getPath().getName());
-  assertEquals(testFallBack.getName(),
-  vfs.getFileLinkStatus(testFallBack).getPath().getName());
+  Path testOnFallbackPath = new Path(TEST_DIR.toURI().toString(), "test");
+  assertTrue(vfs.mkdirs(testOnFallbackPath));
+  FileStatus[] status = vfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(Path.getPathWithoutSchemeAndAuthority(testOnFallbackPath),
+  Path.getPathWithoutSchemeAndAuthority(status[0].getPath()));
+  //Check directly on localFS. The fallBackFs(localFS) should be chrooted
+  //at it's root. So, after
+  FileSystem lfs = vfs.getRawFileSystem(testOnFallbackPath, conf);
+  FileStatus[] statusOnLocalFS =
+  lfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(testOnFallbackPath.getName(),
+  statusOnLocalFS[0].getPath().getName());
+  //initUri should not have exist in lfs, as it would have chrooted on it's
+  // root only.
+  assertFalse(lfs.exists(initUri));
 }
   }
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
index 564bc03..f3eb336 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
@@ -34,6 +34,8 @@ If a user wants to continue use the 

[hadoop] branch branch-3.1 updated: HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 544602e  HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.
544602e is described below

commit 544602e3d16a9a6e47c8851444f682d1fd4491d9
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 11 23:50:04 2020 -0700

HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

(cherry picked from commit 3e700066394fb9f516e23537d8abb4661409cae1)
---
 .../java/org/apache/hadoop/fs/FsConstants.java |  2 ++
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 22 +---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 13 +++-
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 12 +++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 16 +++--
 .../apache/hadoop/fs/viewfs/TestViewFsConfig.java  |  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 39 --
 .../src/site/markdown/ViewFsOverloadScheme.md  |  3 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 20 +++
 9 files changed, 102 insertions(+), 27 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
index 07c16b2..344048f 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
@@ -44,4 +44,6 @@ public interface FsConstants {
   public static final String VIEWFS_SCHEME = "viewfs";
   String FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN =
   "fs.viewfs.overload.scheme.target.%s.impl";
+  String VIEWFS_TYPE = "viewfs";
+  String VIEWFSOS_TYPE = "viewfsOverloadScheme";
 }
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 3d709b1..422e733 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -67,7 +68,7 @@ abstract class InodeTree<T> {
   // the root of the mount table
   private final INode root;
   // the fallback filesystem
-  private final INodeLink<T> rootFallbackLink;
+  private INodeLink<T> rootFallbackLink;
   // the homedir for this mount table
   private final String homedirPrefix;
  private List<MountPoint<T>> mountPoints = new ArrayList<MountPoint<T>>();
@@ -460,7 +461,8 @@ abstract class InodeTree {
* @throws FileAlreadyExistsException
* @throws IOException
*/
-  protected InodeTree(final Configuration config, final String viewName)
+  protected InodeTree(final Configuration config, final String viewName,
+  final URI theUri, boolean initingUriAsFallbackOnNoMounts)
   throws UnsupportedFileSystemException, URISyntaxException,
   FileAlreadyExistsException, IOException {
 String mountTableName = viewName;
@@ -596,9 +598,19 @@ abstract class InodeTree {
 }
 
 if (!gotMountTableEntry) {
-  throw new IOException(
-  "ViewFs: Cannot initialize: Empty Mount table in config for " +
-  "viewfs://" + mountTableName + "/");
+  if (!initingUriAsFallbackOnNoMounts) {
+throw new IOException(
+"ViewFs: Cannot initialize: Empty Mount table in config for "
++ "viewfs://" + mountTableName + "/");
+  }
+  StringBuilder msg =
+  new StringBuilder("Empty mount table detected for ").append(theUri)
+  .append(" and considering itself as a linkFallback.");
+  FileSystem.LOG.info(msg.toString());
+  rootFallbackLink =
+  new INodeLink<T>(mountTableName, ugi, getTargetFileSystem(theUri),
+  theUri);
+  getRootDir().addFallbackLink(rootFallbackLink);
 }
   }
 
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 
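The InodeTree hunk above is the core of HDFS-15464: when no mount links are configured, instead of always throwing "Empty Mount table", the tree can now treat the initialized URI itself as the fallback target (controlled by `initingUriAsFallbackOnNoMounts`). A compact, self-contained sketch of that decision, with illustrative names standing in for the InodeTree internals:

```java
import java.io.IOException;
import java.net.URI;
import java.util.Collections;
import java.util.Map;

// Sketch of the empty-mount-table branch added to InodeTree's constructor:
// plain viewfs keeps the fail-fast behavior, while the overload-scheme
// path falls back to the URI the filesystem was initialized with.
public class FallbackInitDemo {
    public static URI resolveFallback(Map<String, URI> mounts, URI initUri,
            boolean initUriAsFallbackOnNoMounts) throws IOException {
        if (!mounts.isEmpty()) {
            return null; // explicit mount links exist; no implicit fallback
        }
        if (!initUriAsFallbackOnNoMounts) {
            // plain viewfs:// keeps the old behavior and fails fast
            throw new IOException(
                "ViewFs: Cannot initialize: Empty Mount table in config");
        }
        // overload-scheme case: the initialized URI becomes the fallback
        return initUri;
    }

    public static void main(String[] args) throws IOException {
        URI init = URI.create("hdfs://ns1");
        System.out.println(resolveFallback(Collections.emptyMap(), init, true));
    }
}
```

In the real patch the chosen URI is wrapped in an `INodeLink` and registered via `getRootDir().addFallbackLink(...)`, so every path under the view root resolves against that target filesystem.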

[hadoop] branch branch-3.1 updated: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 7084b27  HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri. Contributed by Uma Maheswara Rao G.
7084b27 is described below

commit 7084b273aca575292ac6834ff2a5f4d7c1b41ba9
Author: Uma Maheswara Rao G 
AuthorDate: Mon Jul 6 18:50:03 2020 -0700

HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri. Contributed by Uma Maheswara Rao G.

(cherry picked from commit dc0626b5f2f2ba0bd3919650ea231cedd424f77a)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java | 13 +++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 10 -
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 13 ++-
 .../src/site/markdown/ViewFsOverloadScheme.md  |  8 +++-
 ...SystemOverloadSchemeHdfsFileSystemContract.java |  4 ++
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 45 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 17 
 ...ViewFileSystemOverloadSchemeWithFSCommands.java |  2 +-
 8 files changed, 97 insertions(+), 15 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 28ebf73..492cb87 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -104,4 +104,17 @@ public interface Constants {
   "fs.viewfs.mount.links.as.symlinks";
 
   boolean CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT = true;
+
+  /**
+   * When initializing the viewfs, authority will be used as the mount table
+   * name to find the mount link configurations. To make the mount table name
+   * unique, we may want to ignore port if initialized uri authority contains
+   * port number. By default, we will consider port number also in
+   * ViewFileSystem(This default value false, because to support existing
+   * deployments continue with the current behavior).
+   */
+  String CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME =
+  "fs.viewfs.ignore.port.in.mount.table.name";
+
+  boolean CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT = false;
 }
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index e192bfc..1ca1759 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.fs.viewfs;
 
 import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE;
 import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT;
+import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS;
 import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.PERMISSION_555;
@@ -272,9 +274,15 @@ public class ViewFileSystem extends FileSystem {
 final InnerCache innerCache = new InnerCache(fsGetter);
 // Now build  client side view (i.e. client side mount table) from config.
 final String authority = theUri.getAuthority();
+String tableName = authority;
+if (theUri.getPort() != -1 && config
+.getBoolean(CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME,
+CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT)) {
+  tableName = theUri.getHost();
+}
 try {
   myUri = new URI(getScheme(), authority, "/", null, null);
-  fsState = new InodeTree<FileSystem>(conf, authority) {
+  fsState = new InodeTree<FileSystem>(conf, tableName) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
   throws URISyntaxException, IOException {
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index 672022b..2f3359d 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
+++ 
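The ViewFileSystem hunk above shows how HDFS-15449 picks the mount-table name: by default the full URI authority (`host:port`) is used, and with the new ignore-port option only the host is used, so `hdfs://ns1:8020` and `hdfs://ns1` resolve to the same mount table. A small runnable sketch of just that selection logic (the flag and method names below are illustrative):

```java
import java.net.URI;

// Sketch of HDFS-15449's table-name selection in ViewFileSystem.initialize():
// full authority by default; host only when the ignore-port option is on
// and the initialized URI actually carries a port.
public class MountTableNameDemo {
    public static String tableName(URI uri, boolean ignorePort) {
        if (ignorePort && uri.getPort() != -1) {
            return uri.getHost();
        }
        return uri.getAuthority();
    }

    public static void main(String[] args) {
        URI uri = URI.create("hdfs://ns1:8020/");
        System.out.println(tableName(uri, true));  // ns1
        System.out.println(tableName(uri, false)); // ns1:8020
    }
}
```

The default is off (`false`) so existing deployments that keyed their mount-link configuration on `host:port` keep working unchanged.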

[hadoop] branch branch-3.1 updated: HDFS-15430. create should work when parent dir is internalDir and fallback configured. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 49a7f9f  HDFS-15430. create should work when parent dir is internalDir and fallback configured. Contributed by Uma Maheswara Rao G.
49a7f9f is described below

commit 49a7f9ff7b2fc73957512ffc7038c5103cf38137
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 00:12:10 2020 -0700

HDFS-15430. create should work when parent dir is internalDir and fallback configured. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 1f2a80b5e5024aeb7fb1f8c31b8fdd0fdb88bb66)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  37 -
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  37 +
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 148 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |  28 
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 154 +
 5 files changed, 375 insertions(+), 29 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index e87c145..e192bfc 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -40,6 +40,7 @@ import java.util.Map.Entry;
 import java.util.Objects;
 import java.util.Set;
 
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -1141,7 +1142,41 @@ public class ViewFileSystem extends FileSystem {
 public FSDataOutputStream create(final Path f,
 final FsPermission permission, final boolean overwrite,
 final int bufferSize, final short replication, final long blockSize,
-final Progressable progress) throws AccessControlException {
+final Progressable progress) throws IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsException(
+"/ is not a file. The directory / already exist at: "
++ theInternalDir.fullPath);
+  }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+
+if (theInternalDir.getChildren().containsKey(f.getName())) {
+  throw new FileAlreadyExistsException(
+  "A mount path(file/dir) already exist with the requested path: "
+  + theInternalDir.getChildren().get(f.getName()).fullPath);
+}
+
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leaf = f.getName();
+Path fileToCreate = new Path(parent, leaf);
+
+try {
+  return linkedFallbackFs
+  .create(fileToCreate, permission, overwrite, bufferSize,
+  replication, blockSize, progress);
+} catch (IOException e) {
+  StringBuilder msg =
+  new StringBuilder("Failed to create file:").append(fileToCreate)
+  .append(" at fallback : ").append(linkedFallbackFs.getUri());
+  LOG.error(msg.toString(), e);
+  throw e;
+}
+  }
   throw readOnlyMountTable("create", f);
 }
 
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index 770f43b..598a66d 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -33,6 +33,8 @@ import java.util.Map;
 import java.util.Map.Entry;
 
 import java.util.Set;
+
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -912,6 +914,41 @@ public class ViewFs extends AbstractFileSystem {
 FileAlreadyExistsException, FileNotFoundException,
 ParentNotDirectoryException, UnsupportedFileSystemException,
 UnresolvedLinkException, IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsException(
+"/ is not a file. The directory / already exist at: "
+ 
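The HDFS-15430 change above makes `create()` inside an internal (mount-point) directory succeed when a fallback filesystem is configured: the call fails if the name collides with an existing mount link, stays read-only if no fallback exists, and otherwise delegates to the fallback at the same relative path. A simplified, runnable sketch of that decision flow (the types are stand-ins for ViewFileSystem internals, not the real API):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sketch of the create()-with-fallback flow: reject mount-link collisions,
// reject when no fallback is configured, otherwise delegate the create to
// the fallback filesystem at the internal dir's relative path.
public class FallbackCreateDemo {
    public static String createTarget(Map<String, String> mountChildren,
            String fallbackRoot, String internalDirPath, String fileName)
            throws IOException {
        if (mountChildren.containsKey(fileName)) {
            throw new IOException(
                "A mount path(file/dir) already exists: " + fileName);
        }
        if (fallbackRoot == null) {
            // no fallback configured: internal dirs stay read-only
            throw new IOException("InternalDir of ViewFileSystem is readonly");
        }
        // Delegate: create the file at the same relative path on the fallback.
        return fallbackRoot + internalDirPath + "/" + fileName;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(createTarget(new HashMap<>(), "file:/tmp",
            "/internal", "report.txt"));
    }
}
```

The real implementation additionally special-cases `/` (the root always exists) and logs and rethrows any `IOException` from the fallback filesystem.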

[hadoop] branch branch-3.1 updated: HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new ab43b7b  HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. Contributed by Uma Maheswara Rao G.
ab43b7b is described below

commit ab43b7bcfb294d4da1089c3acb01044deb845895
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 13:45:49 2020 -0700

HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 55a2ae80dc9b45413febd33840b8a653e3e29440)
---
 .../hadoop/hdfs/server/namenode/NameNode.java  |  7 ++
 ...stNNStartupWhenViewFSOverloadSchemeEnabled.java | 88 ++
 2 files changed, 95 insertions(+)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index 4741c6c..c8cd8f7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.ha.ServiceFailedException;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.DFSUtilClient;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HAUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
@@ -365,6 +366,7 @@ public class NameNode extends ReconfigurableBase implements
*/
   @Deprecated
   public static final int DEFAULT_PORT = DFS_NAMENODE_RPC_PORT_DEFAULT;
+  public static final String FS_HDFS_IMPL_KEY = "fs.hdfs.impl";
   public static final Logger LOG =
   LoggerFactory.getLogger(NameNode.class.getName());
   public static final Logger stateChangeLog =
@@ -704,6 +706,11 @@ public class NameNode extends ReconfigurableBase implements
   intervals);
   }
 }
+// Currently NN uses FileSystem.get to initialize DFS in startTrashEmptier.
+// If fs.hdfs.impl was overridden by core-site.xml, we may get other
+// filesystem. To make sure we get DFS, we are setting fs.hdfs.impl to DFS.
+// HDFS-15450
+conf.set(FS_HDFS_IMPL_KEY, DistributedFileSystem.class.getName());
 
 UserGroupInformation.setConfiguration(conf);
 loginAsNameNodeUser(conf);
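The hunk above pins `fs.hdfs.impl` to `DistributedFileSystem` before the trash emptier starts, because `FileSystem.get()` resolves the implementation class from configuration. A standalone sketch of that lookup behavior, using plain Java maps rather than Hadoop's `Configuration`/`FileSystem` (the class-name strings are only stand-ins for the configured values):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch (no Hadoop dependency) of the "fs.<scheme>.impl" resolution
 * that FileSystem.get() performs: if core-site.xml points hdfs:// at
 * the overload scheme, the trash emptier would receive a non-DFS
 * filesystem unless the NameNode pins the key back to DFS first.
 */
public class FsImplLookupSketch {
  // Stand-in class names; only used as map values in this sketch.
  static final String DFS =
      "org.apache.hadoop.hdfs.DistributedFileSystem";
  static final String OVERLOAD =
      "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme";

  static String resolveImpl(Map<String, String> conf, String scheme) {
    // Mirrors the per-scheme impl lookup; DFS is the hdfs default.
    return conf.getOrDefault("fs." + scheme + ".impl", DFS);
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    // core-site.xml overrides hdfs:// to the overload scheme...
    conf.put("fs.hdfs.impl", OVERLOAD);
    System.out.println(resolveImpl(conf, "hdfs"));

    // ...so the NN re-pins the key to DFS before starting the emptier.
    conf.put("fs.hdfs.impl", DFS);
    System.out.println(resolveImpl(conf, "hdfs"));
  }
}
```

This is why the patch sets the key unconditionally: re-pinning is harmless when no override exists, and it restores DFS when one does.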
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
new file mode 100644
index 000..9d394c0
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
+import org.junit.After;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests that the NN startup is successful with ViewFSOverloadScheme.
+ */
+public class TestNNStartupWhenViewFSOverloadSchemeEnabled {
+  private MiniDFSCluster cluster;
+  private static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private static final Configuration CONF = new Configuration();
+
+  @BeforeClass
+  public static void setUp() {
+CONF.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
+CONF.setInt(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, 1);
+CONF.setInt(
+   

[hadoop] branch branch-3.2 updated: HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs. (#2160). Contributed by Uma Maheswara Rao

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new cd5efe9  HDFS-15478: When Empty mount points, we are assigning 
fallback link to self. But it should not use full URI for target fs. (#2160). 
Contributed by Uma Maheswara Rao G.
cd5efe9 is described below

commit cd5efe91d9dda4a67050f81aa18fa871e3e4ed8b
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jul 21 23:29:10 2020 -0700

HDFS-15478: When Empty mount points, we are assigning fallback link to 
self. But it should not use full URI for target fs. (#2160). Contributed by Uma 
Maheswara Rao G.

(cherry picked from commit ac9a07b51aefd0fd3b4602adc844ab0f172835e3)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 27 +++---
 .../src/site/markdown/ViewFsOverloadScheme.md  |  2 ++
 3 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 484581c..b441483 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -292,7 +292,7 @@ public class ViewFileSystem extends FileSystem {
   myUri = new URI(getScheme(), authority, "/", null, null);
   boolean initingUriAsFallbackOnNoMounts =
   !FsConstants.VIEWFS_TYPE.equals(getType());
-  fsState = new InodeTree<FileSystem>(conf, tableName, theUri,
+  fsState = new InodeTree<FileSystem>(conf, tableName, myUri,
   initingUriAsFallbackOnNoMounts) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
index 300fdd8..7afc789 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
@@ -127,19 +127,30 @@ public class TestViewFsOverloadSchemeListStatus {
 
   /**
* Tests that ViewFSOverloadScheme should consider initialized fs as fallback
-   * if there are no mount links configured.
+   * if there are no mount links configured. It should add fallback with the
+   * chrootedFS at it's uri's root.
*/
   @Test(timeout = 3)
   public void testViewFSOverloadSchemeWithoutAnyMountLinks() throws Exception {
-try (FileSystem fs = FileSystem.get(TEST_DIR.toPath().toUri(), conf)) {
+Path initUri = new Path(TEST_DIR.toURI().toString(), "init");
+try (FileSystem fs = FileSystem.get(initUri.toUri(), conf)) {
   ViewFileSystemOverloadScheme vfs = (ViewFileSystemOverloadScheme) fs;
   assertEquals(0, vfs.getMountPoints().length);
-  Path testFallBack = new Path("test", FILE_NAME);
-  assertTrue(vfs.mkdirs(testFallBack));
-  FileStatus[] status = vfs.listStatus(testFallBack.getParent());
-  assertEquals(FILE_NAME, status[0].getPath().getName());
-  assertEquals(testFallBack.getName(),
-  vfs.getFileLinkStatus(testFallBack).getPath().getName());
+  Path testOnFallbackPath = new Path(TEST_DIR.toURI().toString(), "test");
+  assertTrue(vfs.mkdirs(testOnFallbackPath));
+  FileStatus[] status = vfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(Path.getPathWithoutSchemeAndAuthority(testOnFallbackPath),
+  Path.getPathWithoutSchemeAndAuthority(status[0].getPath()));
+  //Check directly on localFS. The fallBackFs(localFS) should be chrooted
+  //at it's root. So, after
+  FileSystem lfs = vfs.getRawFileSystem(testOnFallbackPath, conf);
+  FileStatus[] statusOnLocalFS =
+  lfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(testOnFallbackPath.getName(),
+  statusOnLocalFS[0].getPath().getName());
+  //initUri should not have exist in lfs, as it would have chrooted on it's
+  // root only.
+  assertFalse(lfs.exists(initUri));
 }
   }
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
index 564bc03..f3eb336 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
@@ -34,6 +34,8 @@ If a user wants to continue use the 

[hadoop] branch branch-3.2 updated: HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 65778cd  HDFS-15464: ViewFsOverloadScheme should work when -fs option 
pointing to remote cluster without mount links (#2132). Contributed by Uma 
Maheswara Rao G.
65778cd is described below

commit 65778cdd474997b4cdeba7a3389bc4427f0e56d8
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 11 23:50:04 2020 -0700

HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to 
remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

(cherry picked from commit 3e700066394fb9f516e23537d8abb4661409cae1)
---
 .../java/org/apache/hadoop/fs/FsConstants.java |  2 ++
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 22 +---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 13 +++-
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 12 +++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 16 +++--
 .../apache/hadoop/fs/viewfs/TestViewFsConfig.java  |  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 39 --
 .../src/site/markdown/ViewFsOverloadScheme.md  |  3 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 20 +++
 9 files changed, 102 insertions(+), 27 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
index 07c16b2..344048f 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
@@ -44,4 +44,6 @@ public interface FsConstants {
   public static final String VIEWFS_SCHEME = "viewfs";
   String FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN =
   "fs.viewfs.overload.scheme.target.%s.impl";
+  String VIEWFS_TYPE = "viewfs";
+  String VIEWFSOS_TYPE = "viewfsOverloadScheme";
 }
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 3d709b1..422e733 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -67,7 +68,7 @@ abstract class InodeTree<T> {
  // the root of the mount table
  private final INode<T> root;
  // the fallback filesystem
-  private final INodeLink<T> rootFallbackLink;
+  private INodeLink<T> rootFallbackLink;
  // the homedir for this mount table
  private final String homedirPrefix;
  private List<MountPoint<T>> mountPoints = new ArrayList<MountPoint<T>>();
@@ -460,7 +461,8 @@ abstract class InodeTree {
* @throws FileAlreadyExistsException
* @throws IOException
*/
-  protected InodeTree(final Configuration config, final String viewName)
+  protected InodeTree(final Configuration config, final String viewName,
+  final URI theUri, boolean initingUriAsFallbackOnNoMounts)
   throws UnsupportedFileSystemException, URISyntaxException,
   FileAlreadyExistsException, IOException {
 String mountTableName = viewName;
@@ -596,9 +598,19 @@ abstract class InodeTree {
 }
 
 if (!gotMountTableEntry) {
-  throw new IOException(
-  "ViewFs: Cannot initialize: Empty Mount table in config for " +
-  "viewfs://" + mountTableName + "/");
+  if (!initingUriAsFallbackOnNoMounts) {
+throw new IOException(
+"ViewFs: Cannot initialize: Empty Mount table in config for "
++ "viewfs://" + mountTableName + "/");
+  }
+  StringBuilder msg =
+  new StringBuilder("Empty mount table detected for ").append(theUri)
+  .append(" and considering itself as a linkFallback.");
+  FileSystem.LOG.info(msg.toString());
+  rootFallbackLink =
+  new INodeLink<T>(mountTableName, ugi, getTargetFileSystem(theUri),
+  theUri);
+  getRootDir().addFallbackLink(rootFallbackLink);
 }
   }
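The hunk above makes an empty mount table a soft case for the overload scheme: instead of the old hard failure, the initialized URI itself becomes the root fallback link. A standalone sketch of just that decision (plain Java, no Hadoop types; names echo the patch but nothing here is the real API):

```java
import java.io.IOException;

/**
 * Sketch of the HDFS-15464 empty-mount-table decision in InodeTree:
 * classic viewfs:// still fails fast, while the overload scheme
 * "considers itself as a linkFallback" using the initialized URI.
 */
public class EmptyMountTableSketch {
  static String resolveFallback(boolean gotMountTableEntry,
      boolean initingUriAsFallbackOnNoMounts, String theUri)
      throws IOException {
    if (gotMountTableEntry) {
      return null; // mount links exist: no implicit fallback is added
    }
    if (!initingUriAsFallbackOnNoMounts) {
      // classic viewfs:// keeps the original hard failure
      throw new IOException(
          "ViewFs: Cannot initialize: Empty Mount table in config");
    }
    // overload scheme: the initialized URI becomes the fallback target
    return theUri;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(resolveFallback(false, true, "hdfs://ns1/"));
  }
}
```

The returned URI corresponds to the target handed to the new `INodeLink` in the patch; a `null` return corresponds to the normal mount-table path.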
 
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 

[hadoop] branch branch-3.2 updated: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 1369e41  HDFS-15449. Optionally ignore port number in mount-table name 
when picking from initialized uri. Contributed by Uma Maheswara Rao G.
1369e41 is described below

commit 1369e41c6525937fffe45a10272dd7547eac2e1f
Author: Uma Maheswara Rao G 
AuthorDate: Mon Jul 6 18:50:03 2020 -0700

HDFS-15449. Optionally ignore port number in mount-table name when picking 
from initialized uri. Contributed by Uma Maheswara Rao G.

(cherry picked from commit dc0626b5f2f2ba0bd3919650ea231cedd424f77a)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java | 13 +++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 10 -
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 13 ++-
 .../src/site/markdown/ViewFsOverloadScheme.md  |  8 +++-
 ...SystemOverloadSchemeHdfsFileSystemContract.java |  4 ++
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 45 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 17 
 ...ViewFileSystemOverloadSchemeWithFSCommands.java |  2 +-
 8 files changed, 97 insertions(+), 15 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 28ebf73..492cb87 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -104,4 +104,17 @@ public interface Constants {
   "fs.viewfs.mount.links.as.symlinks";
 
   boolean CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT = true;
+
+  /**
+   * When initializing the viewfs, authority will be used as the mount table
+   * name to find the mount link configurations. To make the mount table name
+   * unique, we may want to ignore port if initialized uri authority contains
+   * port number. By default, we will consider port number also in
+   * ViewFileSystem(This default value false, because to support existing
+   * deployments continue with the current behavior).
+   */
+  String CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME =
+  "fs.viewfs.ignore.port.in.mount.table.name";
+
+  boolean CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT = false;
 }
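The new constant above controls whether the port is dropped from the URI authority when deriving the mount-table name. A small self-contained sketch of that derivation using only `java.net.URI` (the flag is passed as a plain boolean here instead of being read from a Hadoop `Configuration`):

```java
import java.net.URI;

/**
 * Sketch of the HDFS-15449 mount-table-name derivation: by default the
 * full authority (host:port) names the mount table; with the
 * ignore-port flag set, only the host is used, so clusters reachable
 * on different ports can share one mount-table configuration.
 */
public class MountTableNameSketch {
  static String tableName(URI uri, boolean ignorePort) {
    // Drop the port only when the flag is set and a port is present,
    // mirroring the theUri.getPort() != -1 check in the patch.
    if (uri.getPort() != -1 && ignorePort) {
      return uri.getHost();
    }
    return uri.getAuthority();
  }

  public static void main(String[] args) {
    URI u = URI.create("hdfs://cluster:8020/");
    System.out.println(tableName(u, false)); // cluster:8020
    System.out.println(tableName(u, true));  // cluster
  }
}
```

As the javadoc in the patch notes, the flag defaults to false so existing deployments keep resolving mount tables by the full authority.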
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index e192bfc..1ca1759 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.fs.viewfs;
 
 import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE;
 import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT;
+import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS;
 import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.PERMISSION_555;
@@ -272,9 +274,15 @@ public class ViewFileSystem extends FileSystem {
 final InnerCache innerCache = new InnerCache(fsGetter);
 // Now build  client side view (i.e. client side mount table) from config.
 final String authority = theUri.getAuthority();
+String tableName = authority;
+if (theUri.getPort() != -1 && config
+.getBoolean(CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME,
+CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT)) {
+  tableName = theUri.getHost();
+}
 try {
   myUri = new URI(getScheme(), authority, "/", null, null);
-  fsState = new InodeTree<FileSystem>(conf, authority) {
+  fsState = new InodeTree<FileSystem>(conf, tableName) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
   throws URISyntaxException, IOException {
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index 672022b..2f3359d 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
+++ 

[hadoop] branch branch-3.2 updated: HDFS-15430. create should work when parent dir is internalDir and fallback configured. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 655b39c  HDFS-15430. create should work when parent dir is internalDir 
and fallback configured. Contributed by Uma Maheswara Rao G.
655b39c is described below

commit 655b39cc302acfca0b00e6ade92ebb20984a777e
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 00:12:10 2020 -0700

HDFS-15430. create should work when parent dir is internalDir and fallback 
configured. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 1f2a80b5e5024aeb7fb1f8c31b8fdd0fdb88bb66)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  37 -
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  37 +
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 148 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |  28 
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 154 +
 5 files changed, 375 insertions(+), 29 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index e87c145..e192bfc 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -40,6 +40,7 @@ import java.util.Map.Entry;
 import java.util.Objects;
 import java.util.Set;
 
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -1141,7 +1142,41 @@ public class ViewFileSystem extends FileSystem {
 public FSDataOutputStream create(final Path f,
 final FsPermission permission, final boolean overwrite,
 final int bufferSize, final short replication, final long blockSize,
-final Progressable progress) throws AccessControlException {
+final Progressable progress) throws IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsException(
+"/ is not a file. The directory / already exist at: "
++ theInternalDir.fullPath);
+  }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+
+if (theInternalDir.getChildren().containsKey(f.getName())) {
+  throw new FileAlreadyExistsException(
+  "A mount path(file/dir) already exist with the requested path: "
+  + theInternalDir.getChildren().get(f.getName()).fullPath);
+}
+
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leaf = f.getName();
+Path fileToCreate = new Path(parent, leaf);
+
+try {
+  return linkedFallbackFs
+  .create(fileToCreate, permission, overwrite, bufferSize,
+  replication, blockSize, progress);
+} catch (IOException e) {
+  StringBuilder msg =
+  new StringBuilder("Failed to create file:").append(fileToCreate)
+  .append(" at fallback : ").append(linkedFallbackFs.getUri());
+  LOG.error(msg.toString(), e);
+  throw e;
+}
+  }
   throw readOnlyMountTable("create", f);
 }
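The hunk above lets `create` on an internal dir succeed when a fallback is configured: reject `/`, reject names that collide with mount links, otherwise delegate to the fallback fs at `<internalDir>/<leaf>`. A standalone sketch of that control flow (plain Java with string paths and a map standing in for the internal dir's children; none of these names are the real Hadoop API):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the HDFS-15430 create-on-fallback decision in
 * ViewFileSystem's internal-dir handling. "children" stands in for
 * theInternalDir.getChildren(); the returned string is the path that
 * would be created on the fallback file system.
 */
public class FallbackCreateSketch {
  static String resolveCreateTarget(String leaf, Map<String, String> children,
      String internalDirFullPath, boolean hasFallback) throws IOException {
    if ("/".equals(leaf)) {
      throw new IOException("/ is not a file. The directory / already exists");
    }
    if (!hasFallback) {
      // no fallback link: internal dirs stay read-only, as before
      throw new IOException("InternalDir of ViewFileSystem is readonly");
    }
    if (children.containsKey(leaf)) {
      throw new IOException(
          "A mount path(file/dir) already exists: " + children.get(leaf));
    }
    // Delegate to the fallback fs at <internalDir>/<leaf>.
    return internalDirFullPath + "/" + leaf;
  }

  public static void main(String[] args) throws IOException {
    Map<String, String> children = new HashMap<>();
    System.out.println(
        resolveCreateTarget("newFile", children, "/internal", true));
  }
}
```

The same ordering of checks appears in both the ViewFileSystem and ViewFs hunks of this commit.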
 
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index 770f43b..598a66d 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -33,6 +33,8 @@ import java.util.Map;
 import java.util.Map.Entry;
 
 import java.util.Set;
+
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -912,6 +914,41 @@ public class ViewFs extends AbstractFileSystem {
 FileAlreadyExistsException, FileNotFoundException,
 ParentNotDirectoryException, UnsupportedFileSystemException,
 UnresolvedLinkException, IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsException(
+"/ is not a file. The directory / already exist at: "
+ 

[hadoop] branch branch-3.2 updated: HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 512d1d6  HDFS-15450. Fix NN trash emptier to work if 
ViewFSOveroadScheme enabled. Contributed by Uma Maheswara Rao G.
512d1d6 is described below

commit 512d1d6d272bb3f01b1e72f1de7908be87ac27de
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 13:45:49 2020 -0700

HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 55a2ae80dc9b45413febd33840b8a653e3e29440)
---
 .../hadoop/hdfs/server/namenode/NameNode.java  |  7 ++
 ...stNNStartupWhenViewFSOverloadSchemeEnabled.java | 88 ++
 2 files changed, 95 insertions(+)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index 0fff970..30bf4f85 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.ha.ServiceFailedException;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.DFSUtilClient;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HAUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
@@ -369,6 +370,7 @@ public class NameNode extends ReconfigurableBase implements
*/
   @Deprecated
   public static final int DEFAULT_PORT = DFS_NAMENODE_RPC_PORT_DEFAULT;
+  public static final String FS_HDFS_IMPL_KEY = "fs.hdfs.impl";
   public static final Logger LOG =
   LoggerFactory.getLogger(NameNode.class.getName());
   public static final Logger stateChangeLog =
@@ -708,6 +710,11 @@ public class NameNode extends ReconfigurableBase implements
   intervals);
   }
 }
+// Currently NN uses FileSystem.get to initialize DFS in startTrashEmptier.
+// If fs.hdfs.impl was overridden by core-site.xml, we may get other
+// filesystem. To make sure we get DFS, we are setting fs.hdfs.impl to DFS.
+// HDFS-15450
+conf.set(FS_HDFS_IMPL_KEY, DistributedFileSystem.class.getName());
 
 UserGroupInformation.setConfiguration(conf);
 loginAsNameNodeUser(conf);
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
new file mode 100644
index 000..9d394c0
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
+import org.junit.After;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests that the NN startup is successful with ViewFSOverloadScheme.
+ */
+public class TestNNStartupWhenViewFSOverloadSchemeEnabled {
+  private MiniDFSCluster cluster;
+  private static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private static final Configuration CONF = new Configuration();
+
+  @BeforeClass
+  public static void setUp() {
+CONF.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
+CONF.setInt(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, 1);
+CONF.setInt(
+  

[hadoop] branch branch-3.3 updated: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 10f8010  HDFS-15449. Optionally ignore port number in mount-table name 
when picking from initialized uri. Contributed by Uma Maheswara Rao G.
10f8010 is described below

commit 10f8010519d41119c282031ec00d86da7f3b0506
Author: Uma Maheswara Rao G 
AuthorDate: Mon Jul 6 18:50:03 2020 -0700

HDFS-15449. Optionally ignore port number in mount-table name when picking 
from initialized uri. Contributed by Uma Maheswara Rao G.

(cherry picked from commit dc0626b5f2f2ba0bd3919650ea231cedd424f77a)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java | 13 +++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 10 -
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 13 ++-
 .../src/site/markdown/ViewFsOverloadScheme.md  |  8 +++-
 ...SystemOverloadSchemeHdfsFileSystemContract.java |  4 ++
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 45 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 17 
 ...ViewFileSystemOverloadSchemeWithFSCommands.java |  2 +-
 8 files changed, 97 insertions(+), 15 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 28ebf73..492cb87 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -104,4 +104,17 @@ public interface Constants {
   "fs.viewfs.mount.links.as.symlinks";
 
   boolean CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT = true;
+
+  /**
+   * When initializing the viewfs, authority will be used as the mount table
+   * name to find the mount link configurations. To make the mount table name
+   * unique, we may want to ignore port if initialized uri authority contains
+   * port number. By default, we will consider port number also in
+   * ViewFileSystem(This default value false, because to support existing
+   * deployments continue with the current behavior).
+   */
+  String CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME =
+  "fs.viewfs.ignore.port.in.mount.table.name";
+
+  boolean CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT = false;
 }
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index cb36965..0beeda2 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -20,6 +20,8 @@ package org.apache.hadoop.fs.viewfs;
 import static org.apache.hadoop.fs.impl.PathCapabilitiesSupport.validatePathCapabilityArgs;
 import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE;
 import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT;
+import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS;
 import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.PERMISSION_555;
@@ -274,9 +276,15 @@ public class ViewFileSystem extends FileSystem {
 final InnerCache innerCache = new InnerCache(fsGetter);
 // Now build  client side view (i.e. client side mount table) from config.
 final String authority = theUri.getAuthority();
+String tableName = authority;
+if (theUri.getPort() != -1 && config
+.getBoolean(CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME,
+CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT)) {
+  tableName = theUri.getHost();
+}
 try {
   myUri = new URI(getScheme(), authority, "/", null, null);
-  fsState = new InodeTree<FileSystem>(conf, authority) {
+  fsState = new InodeTree<FileSystem>(conf, tableName) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
   throws URISyntaxException, IOException {
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index 672022b..2f3359d 100644
--- 

[hadoop] branch branch-3.3 updated: HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new ae8261c  HDFS-15464: ViewFsOverloadScheme should work when -fs option 
pointing to remote cluster without mount links (#2132). Contributed by Uma 
Maheswara Rao G.
ae8261c is described below

commit ae8261c6719008b89b886d533207a8cbcb22d36a
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 11 23:50:04 2020 -0700

HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to 
remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

(cherry picked from commit 3e700066394fb9f516e23537d8abb4661409cae1)
---
 .../java/org/apache/hadoop/fs/FsConstants.java |  2 ++
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 22 +---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 13 +++-
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 12 +++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 16 +++--
 .../apache/hadoop/fs/viewfs/TestViewFsConfig.java  |  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 39 --
 .../src/site/markdown/ViewFsOverloadScheme.md  |  3 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 20 +++
 9 files changed, 102 insertions(+), 27 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
index 07c16b2..344048f 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
@@ -44,4 +44,6 @@ public interface FsConstants {
   public static final String VIEWFS_SCHEME = "viewfs";
   String FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN =
   "fs.viewfs.overload.scheme.target.%s.impl";
+  String VIEWFS_TYPE = "viewfs";
+  String VIEWFSOS_TYPE = "viewfsOverloadScheme";
 }
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 3d709b1..422e733 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -67,7 +68,7 @@ abstract class InodeTree<T> {
   // the root of the mount table
   private final INode<T> root;
   // the fallback filesystem
-  private final INodeLink<T> rootFallbackLink;
+  private INodeLink<T> rootFallbackLink;
   // the homedir for this mount table
   private final String homedirPrefix;
   private List<MountPoint<T>> mountPoints = new ArrayList<MountPoint<T>>();
@@ -460,7 +461,8 @@ abstract class InodeTree {
* @throws FileAlreadyExistsException
* @throws IOException
*/
-  protected InodeTree(final Configuration config, final String viewName)
+  protected InodeTree(final Configuration config, final String viewName,
+  final URI theUri, boolean initingUriAsFallbackOnNoMounts)
   throws UnsupportedFileSystemException, URISyntaxException,
   FileAlreadyExistsException, IOException {
 String mountTableName = viewName;
@@ -596,9 +598,19 @@ abstract class InodeTree {
 }
 
 if (!gotMountTableEntry) {
-  throw new IOException(
-  "ViewFs: Cannot initialize: Empty Mount table in config for " +
-  "viewfs://" + mountTableName + "/");
+  if (!initingUriAsFallbackOnNoMounts) {
+throw new IOException(
+"ViewFs: Cannot initialize: Empty Mount table in config for "
++ "viewfs://" + mountTableName + "/");
+  }
+  StringBuilder msg =
+  new StringBuilder("Empty mount table detected for ").append(theUri)
+  .append(" and considering itself as a linkFallback.");
+  FileSystem.LOG.info(msg.toString());
+  rootFallbackLink =
+  new INodeLink<T>(mountTableName, ugi, getTargetFileSystem(theUri),
+  theUri);
+  getRootDir().addFallbackLink(rootFallbackLink);
 }
   }
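The new branch above either fails fast (plain viewfs) or registers the initializing URI itself as the root fallback link when no mount links are configured. A hypothetical, much-simplified model of that decision — not Hadoop's InodeTree API; the mount table is reduced to a plain `Map` and the checked `IOException` to an unchecked one:

```java
import java.util.Map;

public class EmptyMountTable {
    // Mirrors the branch above: with zero mount links, either fail fast
    // (plain viewfs) or use the initializing URI itself as the fallback.
    static String resolve(Map<String, String> mounts, String theUri,
            boolean initingUriAsFallbackOnNoMounts) {
        if (!mounts.isEmpty()) {
            return "mount table with " + mounts.size() + " link(s)";
        }
        if (!initingUriAsFallbackOnNoMounts) {
            // The real code throws IOException("ViewFs: Cannot initialize: ...").
            throw new IllegalStateException(
                    "Empty Mount table in config for viewfs://" + theUri + "/");
        }
        // The real code wraps theUri in an INodeLink and registers it as
        // the root fallback, so every path resolves against that target.
        return "fallback link -> " + theUri;
    }

    public static void main(String[] args) {
        System.out.println(resolve(Map.of(), "hdfs://ns1", true));
    }
}
```

The flag is what lets ViewFileSystemOverloadScheme point `-fs` at a remote cluster that has no mount links defined for it.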
 
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 

[hadoop] branch branch-3.3 updated: HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs. (#2160). Contributed by Uma Maheswara Rao

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 4fe491d  HDFS-15478: When Empty mount points, we are assigning 
fallback link to self. But it should not use full URI for target fs. (#2160). 
Contributed by Uma Maheswara Rao G.
4fe491d is described below

commit 4fe491d10edd5e4e91ccf7fd76131e4552ce79a2
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jul 21 23:29:10 2020 -0700

HDFS-15478: When Empty mount points, we are assigning fallback link to 
self. But it should not use full URI for target fs. (#2160). Contributed by Uma 
Maheswara Rao G.

(cherry picked from commit ac9a07b51aefd0fd3b4602adc844ab0f172835e3)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 27 +++---
 .../src/site/markdown/ViewFsOverloadScheme.md  |  2 ++
 3 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 1fc531e..baf0027 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -294,7 +294,7 @@ public class ViewFileSystem extends FileSystem {
   myUri = new URI(getScheme(), authority, "/", null, null);
   boolean initingUriAsFallbackOnNoMounts =
   !FsConstants.VIEWFS_TYPE.equals(getType());
-  fsState = new InodeTree<FileSystem>(conf, tableName, theUri,
+  fsState = new InodeTree<FileSystem>(conf, tableName, myUri,
   initingUriAsFallbackOnNoMounts) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
index 300fdd8..7afc789 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
@@ -127,19 +127,30 @@ public class TestViewFsOverloadSchemeListStatus {
 
   /**
* Tests that ViewFSOverloadScheme should consider initialized fs as fallback
-   * if there are no mount links configured.
+   * if there are no mount links configured. It should add fallback with the
+   * chrootedFS at it's uri's root.
*/
   @Test(timeout = 3)
   public void testViewFSOverloadSchemeWithoutAnyMountLinks() throws Exception {
-try (FileSystem fs = FileSystem.get(TEST_DIR.toPath().toUri(), conf)) {
+Path initUri = new Path(TEST_DIR.toURI().toString(), "init");
+try (FileSystem fs = FileSystem.get(initUri.toUri(), conf)) {
   ViewFileSystemOverloadScheme vfs = (ViewFileSystemOverloadScheme) fs;
   assertEquals(0, vfs.getMountPoints().length);
-  Path testFallBack = new Path("test", FILE_NAME);
-  assertTrue(vfs.mkdirs(testFallBack));
-  FileStatus[] status = vfs.listStatus(testFallBack.getParent());
-  assertEquals(FILE_NAME, status[0].getPath().getName());
-  assertEquals(testFallBack.getName(),
-  vfs.getFileLinkStatus(testFallBack).getPath().getName());
+  Path testOnFallbackPath = new Path(TEST_DIR.toURI().toString(), "test");
+  assertTrue(vfs.mkdirs(testOnFallbackPath));
+  FileStatus[] status = vfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(Path.getPathWithoutSchemeAndAuthority(testOnFallbackPath),
+  Path.getPathWithoutSchemeAndAuthority(status[0].getPath()));
+  //Check directly on localFS. The fallBackFs(localFS) should be chrooted
+  //at its root. So, after
+  FileSystem lfs = vfs.getRawFileSystem(testOnFallbackPath, conf);
+  FileStatus[] statusOnLocalFS =
+  lfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(testOnFallbackPath.getName(),
+  statusOnLocalFS[0].getPath().getName());
+  //initUri should not exist in lfs, as it would have chrooted on its
+  // root only.
+  assertFalse(lfs.exists(initUri));
 }
   }
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
index 564bc03..f3eb336 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
@@ -34,6 +34,8 @@ If a user wants to continue use the 

[hadoop] branch branch-3.3 updated: HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new aea1a8e  HDFS-15450. Fix NN trash emptier to work if 
ViewFSOveroadScheme enabled. Contributed by Uma Maheswara Rao G.
aea1a8e is described below

commit aea1a8e2bd780a2295bd1aa83640e733c3385a6a
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 13:45:49 2020 -0700

HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 55a2ae80dc9b45413febd33840b8a653e3e29440)
---
 .../hadoop/hdfs/server/namenode/NameNode.java  |  7 ++
 ...stNNStartupWhenViewFSOverloadSchemeEnabled.java | 88 ++
 2 files changed, 95 insertions(+)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index 74757e5..7c2026c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.ha.ServiceFailedException;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.DFSUtilClient;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HAUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
@@ -384,6 +385,7 @@ public class NameNode extends ReconfigurableBase implements
*/
   @Deprecated
   public static final int DEFAULT_PORT = DFS_NAMENODE_RPC_PORT_DEFAULT;
+  public static final String FS_HDFS_IMPL_KEY = "fs.hdfs.impl";
   public static final Logger LOG =
   LoggerFactory.getLogger(NameNode.class.getName());
   public static final Logger stateChangeLog =
@@ -725,6 +727,11 @@ public class NameNode extends ReconfigurableBase implements
   intervals);
   }
 }
+// Currently NN uses FileSystem.get to initialize DFS in startTrashEmptier.
+// If fs.hdfs.impl was overridden by core-site.xml, we may get other
+// filesystem. To make sure we get DFS, we are setting fs.hdfs.impl to DFS.
+// HDFS-15450
+conf.set(FS_HDFS_IMPL_KEY, DistributedFileSystem.class.getName());
 
 UserGroupInformation.setConfiguration(conf);
 loginAsNameNodeUser(conf);
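The `conf.set` above works because `FileSystem.get` resolves the implementation class from the `fs.<scheme>.impl` key. A sketch of that lookup with a plain `Map` standing in for Hadoop's `Configuration` (illustrative names, not Hadoop's API):

```java
import java.util.HashMap;
import java.util.Map;

public class FsImplOverride {
    // FileSystem.get picks the class to instantiate from "fs.<scheme>.impl".
    static String implFor(Map<String, String> conf, String scheme) {
        return conf.getOrDefault("fs." + scheme + ".impl",
                "org.apache.hadoop.hdfs.DistributedFileSystem");
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // core-site.xml overrode the hdfs scheme to the overload scheme:
        conf.put("fs.hdfs.impl",
                "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme");
        System.out.println(implFor(conf, "hdfs"));
        // The NameNode pins it back before starting the trash emptier:
        conf.put("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        System.out.println(implFor(conf, "hdfs"));
    }
}
```

Without the pin, the trash emptier inside the NameNode would be handed whatever class `fs.hdfs.impl` maps to instead of a `DistributedFileSystem`.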
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
new file mode 100644
index 000..9d394c0
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
+import org.junit.After;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests that the NN startup is successful with ViewFSOverloadScheme.
+ */
+public class TestNNStartupWhenViewFSOverloadSchemeEnabled {
+  private MiniDFSCluster cluster;
+  private static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private static final Configuration CONF = new Configuration();
+
+  @BeforeClass
+  public static void setUp() {
+CONF.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
+CONF.setInt(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, 1);
+CONF.setInt(
+   

[hadoop] branch branch-3.3 updated: HDFS-15430. create should work when parent dir is internalDir and fallback configured. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 35fe6fd  HDFS-15430. create should work when parent dir is internalDir 
and fallback configured. Contributed by Uma Maheswara Rao G.
35fe6fd is described below

commit 35fe6fd54fdc935ed73fa080925c812fe6f493a2
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 00:12:10 2020 -0700

HDFS-15430. create should work when parent dir is internalDir and fallback 
configured. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 1f2a80b5e5024aeb7fb1f8c31b8fdd0fdb88bb66)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  37 -
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  37 +
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 148 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |  28 
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 154 +
 5 files changed, 375 insertions(+), 29 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 39d78cf..cb36965 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -41,6 +41,7 @@ import java.util.Map.Entry;
 import java.util.Objects;
 import java.util.Set;
 
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -1180,7 +1181,41 @@ public class ViewFileSystem extends FileSystem {
 public FSDataOutputStream create(final Path f,
 final FsPermission permission, final boolean overwrite,
 final int bufferSize, final short replication, final long blockSize,
-final Progressable progress) throws AccessControlException {
+final Progressable progress) throws IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsException(
+"/ is not a file. The directory / already exist at: "
++ theInternalDir.fullPath);
+  }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+
+if (theInternalDir.getChildren().containsKey(f.getName())) {
+  throw new FileAlreadyExistsException(
+  "A mount path(file/dir) already exist with the requested path: "
+  + theInternalDir.getChildren().get(f.getName()).fullPath);
+}
+
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leaf = f.getName();
+Path fileToCreate = new Path(parent, leaf);
+
+try {
+  return linkedFallbackFs
+  .create(fileToCreate, permission, overwrite, bufferSize,
+  replication, blockSize, progress);
+} catch (IOException e) {
+  StringBuilder msg =
+  new StringBuilder("Failed to create file:").append(fileToCreate)
+  .append(" at fallback : ").append(linkedFallbackFs.getUri());
+  LOG.error(msg.toString(), e);
+  throw e;
+}
+  }
   throw readOnlyMountTable("create", f);
 }
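In the `create` override above, the new file lands on the fallback filesystem at the internal dir's full path joined with the requested file's leaf name. A simplified sketch of that path composition (hypothetical helper, not part of ViewFileSystem):

```java
public class FallbackCreatePath {
    // The create override above builds the target path from the internal
    // dir's full path plus the requested file's leaf name.
    static String fileToCreate(String internalDirFullPath, String leaf) {
        String parent = internalDirFullPath.endsWith("/")
                ? internalDirFullPath
                : internalDirFullPath + "/";
        return parent + leaf;
    }

    public static void main(String[] args) {
        // Creating /internalDir/newFile when /internalDir is an internal dir
        // with no matching mount link but a fallback is configured:
        System.out.println(fileToCreate("/internalDir", "newFile")); // /internalDir/newFile
    }
}
```

If the leaf name collides with an existing mount link under the internal dir, the override instead throws FileAlreadyExistsException, as the diff shows.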
 
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index c769003..a63960c 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -33,6 +33,8 @@ import java.util.Map;
 import java.util.Map.Entry;
 
 import java.util.Set;
+
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -919,6 +921,41 @@ public class ViewFs extends AbstractFileSystem {
 FileAlreadyExistsException, FileNotFoundException,
 ParentNotDirectoryException, UnsupportedFileSystemException,
 UnresolvedLinkException, IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsException(
+"/ is not a file. The directory / already exist at: "
+ 

[hadoop] branch branch-3.2 updated: HDFS-14950. fix missing libhdfspp lib in dist-package (#1947)

2020-07-31 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 737bbab  HDFS-14950. fix missing libhdfspp lib in dist-package (#1947)
737bbab is described below

commit 737bbab90aea03f8e75575010bf947416e2a0c64
Author: Yuan 
AuthorDate: Fri Jul 31 15:49:49 2020 +0800

HDFS-14950. fix missing libhdfspp lib in dist-package (#1947)

libhdfspp.{a,so} are missing from the dist-package.
This patch fixes that by copying these libs to the right directory

Signed-off-by: Yuan Zhou 
(cherry picked from commit e756fe3590906bfd8ffe4ab5cc8b9b24a9b2b4b2)
---
 .../hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt   | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
index 411320a..c17f9d3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
@@ -263,6 +263,7 @@ if (HADOOP_BUILD)
 ${CMAKE_THREAD_LIBS_INIT}
   )
   set_target_properties(hdfspp PROPERTIES SOVERSION ${LIBHDFSPP_VERSION})
+  hadoop_dual_output_directory(hdfspp ${OUT_DIR})
 else (HADOOP_BUILD)
   add_library(hdfspp_static STATIC ${EMPTY_FILE_CC} ${LIBHDFSPP_ALL_OBJECTS})
   target_link_libraries(hdfspp_static


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: HDFS-14950. fix missing libhdfspp lib in dist-package (#1947)

2020-07-31 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new e1ac16a  HDFS-14950. fix missing libhdfspp lib in dist-package (#1947)
e1ac16a is described below

commit e1ac16a31897a22fe80f94bec4dd20bbad70a026
Author: Yuan 
AuthorDate: Fri Jul 31 15:49:49 2020 +0800

HDFS-14950. fix missing libhdfspp lib in dist-package (#1947)

libhdfspp.{a,so} are missing from the dist-package.
This patch fixes that by copying these libs to the right directory

Signed-off-by: Yuan Zhou 
(cherry picked from commit e756fe3590906bfd8ffe4ab5cc8b9b24a9b2b4b2)
---
 .../hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt   | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
index 411320a..c17f9d3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
@@ -263,6 +263,7 @@ if (HADOOP_BUILD)
 ${CMAKE_THREAD_LIBS_INIT}
   )
   set_target_properties(hdfspp PROPERTIES SOVERSION ${LIBHDFSPP_VERSION})
+  hadoop_dual_output_directory(hdfspp ${OUT_DIR})
 else (HADOOP_BUILD)
   add_library(hdfspp_static STATIC ${EMPTY_FILE_CC} ${LIBHDFSPP_ALL_OBJECTS})
   target_link_libraries(hdfspp_static





[hadoop] branch trunk updated: HDFS-14950. fix missing libhdfspp lib in dist-package (#1947)

2020-07-31 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e756fe3  HDFS-14950. fix missing libhdfspp lib in dist-package (#1947)
e756fe3 is described below

commit e756fe3590906bfd8ffe4ab5cc8b9b24a9b2b4b2
Author: Yuan 
AuthorDate: Fri Jul 31 15:49:49 2020 +0800

HDFS-14950. fix missing libhdfspp lib in dist-package (#1947)

libhdfspp.{a,so} are missing from the dist-package.
This patch fixes that by copying these libs to the right directory

Signed-off-by: Yuan Zhou 
---
 .../hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt   | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
index 6a2f378..939747c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
@@ -265,6 +265,7 @@ if (HADOOP_BUILD)
 ${CMAKE_THREAD_LIBS_INIT}
   )
   set_target_properties(hdfspp PROPERTIES SOVERSION ${LIBHDFSPP_VERSION})
+  hadoop_dual_output_directory(hdfspp ${OUT_DIR})
 else (HADOOP_BUILD)
   add_library(hdfspp_static STATIC ${EMPTY_FILE_CC} ${LIBHDFSPP_ALL_OBJECTS})
   target_link_libraries(hdfspp_static

