[hadoop] branch trunk updated: HDFS-15573. Only log warning if considerLoad and considerStorageType are both true. Contributed by Stephen O'Donnell

2020-09-12 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f59f7f2  HDFS-15573. Only log warning if considerLoad and considerStorageType are both true. Contributed by Stephen O'Donnell
f59f7f2 is described below

commit f59f7f21758fb7a391f2f5198a4c3eaba445
Author: Mingliang Liu 
AuthorDate: Sat Sep 12 01:41:38 2020 -0700

HDFS-15573. Only log warning if considerLoad and considerStorageType are both true. Contributed by Stephen O'Donnell
---
 .../hdfs/server/blockmanagement/DatanodeManager.java   | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index fbe132a..1b474fd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -326,12 +326,14 @@ public class DatanodeManager {
 this.readConsiderStorageType = conf.getBoolean(
 DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY,
 DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_DEFAULT);
-LOG.warn(
-"{} and {} are incompatible and only one can be enabled. "
-+ "Both are currently enabled.",
-DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERLOAD_KEY,
-DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY);
-
+if (readConsiderLoad && readConsiderStorageType) {
+  LOG.warn(
+  "{} and {} are incompatible and only one can be enabled. "
+  + "Both are currently enabled. {} will be ignored.",
+  DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERLOAD_KEY,
+  DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY,
+  DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY);
+}
 this.avoidStaleDataNodesForWrite = conf.getBoolean(
 DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY,
 DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_DEFAULT);
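The hunk above can be summarized in a minimal, self-contained sketch (not the Hadoop source): the incompatibility warning is emitted only when both read-path flags are enabled, and it names the flag that will be ignored. The class name, the use of java.util.Properties, and the captured `lastWarning` field are illustrative stand-ins for DatanodeManager's Configuration and LOG.

```java
import java.util.Properties;

public class ReadPlacementConfig {
    // Stand-ins for DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERLOAD_KEY and
    // DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY.
    static final String CONSIDER_LOAD_KEY = "dfs.namenode.read.considerLoad";
    static final String CONSIDER_STORAGE_TYPE_KEY =
        "dfs.namenode.read.considerStorageType";

    final boolean readConsiderLoad;
    final boolean readConsiderStorageType;
    String lastWarning; // captured instead of logged so the sketch is testable

    ReadPlacementConfig(Properties conf) {
        readConsiderLoad =
            Boolean.parseBoolean(conf.getProperty(CONSIDER_LOAD_KEY, "false"));
        readConsiderStorageType =
            Boolean.parseBoolean(conf.getProperty(CONSIDER_STORAGE_TYPE_KEY, "false"));
        // The point of the patch: guard the warning instead of always emitting it.
        if (readConsiderLoad && readConsiderStorageType) {
            lastWarning = CONSIDER_LOAD_KEY + " and " + CONSIDER_STORAGE_TYPE_KEY
                + " are incompatible and only one can be enabled. Both are"
                + " currently enabled. " + CONSIDER_STORAGE_TYPE_KEY
                + " will be ignored.";
        }
    }
}
```

Before the patch, the warning fired unconditionally; with the guard, configurations that enable neither or only one flag stay quiet.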


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: HDFS-15573. Only log warning if considerLoad and considerStorageType are both true. Contributed by Stephen O'Donnell

2020-09-12 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 4eccdd9  HDFS-15573. Only log warning if considerLoad and considerStorageType are both true. Contributed by Stephen O'Donnell
4eccdd9 is described below

commit 4eccdd950fe9eed4909e8602ddd86b5dcecc06cd
Author: Mingliang Liu 
AuthorDate: Sat Sep 12 01:41:38 2020 -0700

HDFS-15573. Only log warning if considerLoad and considerStorageType are both true. Contributed by Stephen O'Donnell
---
 .../hdfs/server/blockmanagement/DatanodeManager.java   | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index fbe132a..1b474fd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -326,12 +326,14 @@ public class DatanodeManager {
 this.readConsiderStorageType = conf.getBoolean(
 DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY,
 DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_DEFAULT);
-LOG.warn(
-"{} and {} are incompatible and only one can be enabled. "
-+ "Both are currently enabled.",
-DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERLOAD_KEY,
-DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY);
-
+if (readConsiderLoad && readConsiderStorageType) {
+  LOG.warn(
+  "{} and {} are incompatible and only one can be enabled. "
+  + "Both are currently enabled. {} will be ignored.",
+  DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERLOAD_KEY,
+  DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY,
+  DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY);
+}
 this.avoidStaleDataNodesForWrite = conf.getBoolean(
 DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY,
 DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_DEFAULT);





[hadoop] branch trunk updated: HDFS-15532: listFiles on root/InternalDir will fail if fallback root has file. (#2298). Contributed by Uma Maheswara Rao G.

2020-09-12 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d2779de  HDFS-15532: listFiles on root/InternalDir will fail if fallback root has file. (#2298). Contributed by Uma Maheswara Rao G.
d2779de is described below

commit d2779de3f525f58790cbd6c9e3c265a9767d1d0c
Author: Uma Maheswara Rao G 
AuthorDate: Sat Sep 12 17:06:39 2020 -0700

HDFS-15532: listFiles on root/InternalDir will fail if fallback root has file. (#2298). Contributed by Uma Maheswara Rao G.
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 17 +++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 15 ++
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 24 ++
 .../TestViewDistributedFileSystemContract.java |  6 --
 4 files changed, 56 insertions(+), 6 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index c7ed15b..b906996 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1283,6 +1283,23 @@ public class ViewFileSystem extends FileSystem {
 public BlockLocation[] getFileBlockLocations(final FileStatus fs,
 final long start, final long len) throws
 FileNotFoundException, IOException {
+
+  // When application calls listFiles on internalDir, it would return
+  // RemoteIterator from InternalDirOfViewFs. If there is a fallBack, there
+  // is a chance of files exists under that internalDir in fallback.
+  // Iterator#next will call getFileBlockLocations with that files. So, we
+  // should return getFileBlockLocations on fallback. See HDFS-15532.
+  if (!InodeTree.SlashPath.equals(fs.getPath()) && this.fsState
+  .getRootFallbackLink() != null) {
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+Path pathToFallbackFs = new Path(parent, fs.getPath().getName());
+return linkedFallbackFs
+.getFileBlockLocations(pathToFallbackFs, start, len);
+  }
+
   checkPathIsSlash(fs.getPath());
   throw new FileNotFoundException("Path points to dir not a file");
 }
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index 95b596b..a6ce33a 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -981,6 +981,21 @@ public class ViewFs extends AbstractFileSystem {
 @Override
public BlockLocation[] getFileBlockLocations(final Path f, final long start,
 final long len) throws FileNotFoundException, IOException {
+  // When application calls listFiles on internalDir, it would return
+  // RemoteIterator from InternalDirOfViewFs. If there is a fallBack, there
+  // is a chance of files exists under that internalDir in fallback.
+  // Iterator#next will call getFileBlockLocations with that files. So, we
+  // should return getFileBlockLocations on fallback. See HDFS-15532.
+  if (!InodeTree.SlashPath.equals(f) && this.fsState
+  .getRootFallbackLink() != null) {
+AbstractFileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+Path pathToFallbackFs = new Path(parent, f.getName());
+return linkedFallbackFs
+.getFileBlockLocations(pathToFallbackFs, start, len);
+  }
   checkPathIsSlash(f);
   throw new FileNotFoundException("Path points to dir not a file");
 }
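The hunks above both perform the same path mapping: a file status handed out for a ViewFs internal directory is re-resolved against the fallback filesystem by joining the internal directory's path with the file name. In this sketch, java.nio.file.Path stands in for Hadoop's Path, and `resolveOnFallback` is a made-up helper; the real code then calls getFileBlockLocations on the fallback filesystem with the resulting path.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class FallbackResolve {
    // internalDirFullPath is assumed to already be free of scheme and
    // authority, mirroring Path.getPathWithoutSchemeAndAuthority in the patch.
    static Path resolveOnFallback(String internalDirFullPath, String fileName) {
        Path parent = Paths.get(internalDirFullPath);
        return parent.resolve(fileName); // "pathToFallbackFs" in the diff
    }
}
```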
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
index 04d26b9..dc2eb0e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.fs.viewfs;
 
 import static org.apache.hadoop.fs.CreateFlag.CREATE;
+import static org.junit.Asser

[hadoop] branch branch-3.3 updated: HDFS-15532: listFiles on root/InternalDir will fail if fallback root has file. (#2298). Contributed by Uma Maheswara Rao G.

2020-09-12 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new bfa145d  HDFS-15532: listFiles on root/InternalDir will fail if fallback root has file. (#2298). Contributed by Uma Maheswara Rao G.
bfa145d is described below

commit bfa145dd7ca567438e7dacc1dc92ef39ee674164
Author: Uma Maheswara Rao G 
AuthorDate: Sat Sep 12 17:06:39 2020 -0700

HDFS-15532: listFiles on root/InternalDir will fail if fallback root has file. (#2298). Contributed by Uma Maheswara Rao G.

(cherry picked from commit d2779de3f525f58790cbd6c9e3c265a9767d1d0c)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 17 +++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 15 ++
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 24 ++
 .../TestViewDistributedFileSystemContract.java |  6 --
 4 files changed, 56 insertions(+), 6 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index f981af8..0e190a3 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1277,6 +1277,23 @@ public class ViewFileSystem extends FileSystem {
 public BlockLocation[] getFileBlockLocations(final FileStatus fs,
 final long start, final long len) throws
 FileNotFoundException, IOException {
+
+  // When application calls listFiles on internalDir, it would return
+  // RemoteIterator from InternalDirOfViewFs. If there is a fallBack, there
+  // is a chance of files exists under that internalDir in fallback.
+  // Iterator#next will call getFileBlockLocations with that files. So, we
+  // should return getFileBlockLocations on fallback. See HDFS-15532.
+  if (!InodeTree.SlashPath.equals(fs.getPath()) && this.fsState
+  .getRootFallbackLink() != null) {
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+Path pathToFallbackFs = new Path(parent, fs.getPath().getName());
+return linkedFallbackFs
+.getFileBlockLocations(pathToFallbackFs, start, len);
+  }
+
   checkPathIsSlash(fs.getPath());
   throw new FileNotFoundException("Path points to dir not a file");
 }
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index 95b596b..a6ce33a 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -981,6 +981,21 @@ public class ViewFs extends AbstractFileSystem {
 @Override
public BlockLocation[] getFileBlockLocations(final Path f, final long start,
 final long len) throws FileNotFoundException, IOException {
+  // When application calls listFiles on internalDir, it would return
+  // RemoteIterator from InternalDirOfViewFs. If there is a fallBack, there
+  // is a chance of files exists under that internalDir in fallback.
+  // Iterator#next will call getFileBlockLocations with that files. So, we
+  // should return getFileBlockLocations on fallback. See HDFS-15532.
+  if (!InodeTree.SlashPath.equals(f) && this.fsState
+  .getRootFallbackLink() != null) {
+AbstractFileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+Path pathToFallbackFs = new Path(parent, f.getName());
+return linkedFallbackFs
+.getFileBlockLocations(pathToFallbackFs, start, len);
+  }
   checkPathIsSlash(f);
   throw new FileNotFoundException("Path points to dir not a file");
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
index 04d26b9..dc2eb0e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.fs.viewfs;

[hadoop] branch branch-3.3 updated: HADOOP-15891. provide Regex Based Mount Point In Inode Tree (#2185). Contributed by Zhenzhao Wang.

2020-09-12 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 2d5ca83  HADOOP-15891. provide Regex Based Mount Point In Inode Tree (#2185). Contributed by Zhenzhao Wang.
2d5ca83 is described below

commit 2d5ca830782016069ce17bd86a1018baa55de148
Author: zz <40777829+johnzzgit...@users.noreply.github.com>
AuthorDate: Thu Sep 10 21:20:32 2020 -0700

HADOOP-15891. provide Regex Based Mount Point In Inode Tree (#2185). Contributed by Zhenzhao Wang.

Co-authored-by: Zhenzhao Wang 
(cherry picked from commit 12a316cdf9994feaa36c3ff7d13e67d70398a9f3)
---
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java|  22 +
 .../org/apache/hadoop/fs/viewfs/Constants.java |   8 +
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 340 ++-
 .../apache/hadoop/fs/viewfs/RegexMountPoint.java   | 289 +
 .../fs/viewfs/RegexMountPointInterceptor.java  |  70 
 .../viewfs/RegexMountPointInterceptorFactory.java  |  67 +++
 .../fs/viewfs/RegexMountPointInterceptorType.java  |  53 +++
 ...ountPointResolvedDstPathReplaceInterceptor.java | 137 ++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  55 ++-
 .../hadoop/fs/viewfs/TestRegexMountPoint.java  | 160 +++
 .../TestRegexMountPointInterceptorFactory.java |  54 +++
 ...ountPointResolvedDstPathReplaceInterceptor.java | 101 +
 .../hadoop-hdfs/src/site/markdown/ViewFs.md|  63 +++
 .../fs/viewfs/TestViewFileSystemLinkRegex.java | 462 +
 14 files changed, 1765 insertions(+), 116 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
index 7d29b8f..09ec5d2 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
@@ -167,6 +167,28 @@ public class ConfigUtil {
   }
 
   /**
+   * Add a LinkRegex to the config for the specified mount table.
+   * @param conf - get mountable config from this conf
+   * @param mountTableName - the mountable name of the regex config item
* @param srcRegex - the src path regex expression that applies to this config
+   * @param targetStr - the string of target path
+   * @param interceptorSettings - the serialized interceptor string to be
+   *applied while resolving the mapping
+   */
+  public static void addLinkRegex(
+  Configuration conf, final String mountTableName, final String srcRegex,
+  final String targetStr, final String interceptorSettings) {
+String prefix = getConfigViewFsPrefix(mountTableName) + "."
++ Constants.CONFIG_VIEWFS_LINK_REGEX + ".";
+if ((interceptorSettings != null) && (!interceptorSettings.isEmpty())) {
+  prefix = prefix + interceptorSettings
+  + RegexMountPoint.SETTING_SRCREGEX_SEP;
+}
+String key = prefix + srcRegex;
+conf.set(key, targetStr);
+  }
+
+  /**
* Add config variable for homedir for default mount table
* @param conf - add to this conf
* @param homedir - the home dir path starting with slash
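The key that addLinkRegex writes can be modeled as below, following the diff above. The mount-table prefix and the srcRegex separator used here are assumptions ("fs.viewfs.mounttable" and "#"); consult ConfigUtil.getConfigViewFsPrefix and RegexMountPoint.SETTING_SRCREGEX_SEP for the authoritative values.

```java
public class LinkRegexKey {
    static final String VIEWFS_PREFIX = "fs.viewfs.mounttable"; // assumed
    static final String SETTING_SRCREGEX_SEP = "#";             // assumed

    static String buildKey(String mountTableName, String srcRegex,
            String interceptorSettings) {
        String prefix = VIEWFS_PREFIX + "." + mountTableName + ".linkRegex.";
        // Optional interceptor settings are serialized into the key itself,
        // separated from the source regex by SETTING_SRCREGEX_SEP.
        if (interceptorSettings != null && !interceptorSettings.isEmpty()) {
            prefix = prefix + interceptorSettings + SETTING_SRCREGEX_SEP;
        }
        return prefix + srcRegex; // conf.set(key, targetStr) in the patch
    }
}
```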
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 492cb87..bf9f7db 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -86,6 +86,14 @@ public interface Constants {
*/
   String CONFIG_VIEWFS_LINK_MERGE_SLASH = "linkMergeSlash";
 
+  /**
+   * Config variable for specifying a regex link which uses regular expressions
+   * as source and target could use group captured in src.
   * E.g. (^/(?<firstDir>\\w+), /prefix-${firstDir}) =>
+   *   (/path1/file1 => /prefix-path1/file1)
+   */
+  String CONFIG_VIEWFS_LINK_REGEX = "linkRegex";
+
   FsPermission PERMISSION_555 = new FsPermission((short) 0555);
 
   String CONFIG_VIEWFS_RENAME_STRATEGY = "fs.viewfs.rename.strategy";
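The resolution described in the CONFIG_VIEWFS_LINK_REGEX javadoc can be illustrated with plain java.util.regex: a named capture group in the source regex is substituted into the target template. This sketch is not Hadoop's RegexMountPoint; it only handles the single `${firstDir}` placeholder from the javadoc example.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexLinkDemo {
    static String resolve(String srcRegex, String target, String path) {
        Matcher m = Pattern.compile(srcRegex).matcher(path);
        if (!m.find()) {
            return path; // no mount-point match; leave the path alone
        }
        // Substitute the captured group, then append the unmatched remainder.
        return target.replace("${firstDir}", m.group("firstDir"))
            + path.substring(m.end());
    }
}
```

With the javadoc's example regex and target, `/path1/file1` resolves to `/prefix-path1/file1`.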
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 003694f..dbcd9b4 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -39,6 +39,8 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 import org.apa

[hadoop] branch branch-3.3 updated: HDFS-15529: getChildFilesystems should include fallback fs as well (#2234). Contributed by Uma Maheswara Rao G.

2020-09-12 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 1195dac  HDFS-15529: getChildFilesystems should include fallback fs as well (#2234). Contributed by Uma Maheswara Rao G.
1195dac is described below

commit 1195dac55e995eeea22cded88be602030c09cf2d
Author: Uma Maheswara Rao G 
AuthorDate: Thu Sep 3 11:06:20 2020 -0700

HDFS-15529: getChildFilesystems should include fallback fs as well (#2234). Contributed by Uma Maheswara Rao G.

(cherry picked from commit b3660d014708de3d0fb04c9c152934f6020a65ae)
---
 .../main/java/org/apache/hadoop/fs/viewfs/InodeTree.java |  9 +
 .../java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java |  6 ++
 .../TestViewFileSystemOverloadSchemeWithHdfsScheme.java  | 16 +---
 3 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index dbcd9b4..fceb73a 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -408,6 +408,15 @@ abstract class InodeTree {
 return rootFallbackLink != null;
   }
 
+  /**
+   * @return true if the root represented as internalDir. In LinkMergeSlash,
+   * there will be root to root mapping. So, root does not represent as
+   * internalDir.
+   */
+  protected boolean isRootInternalDir() {
+return root.isInternalDir();
+  }
+
   protected INodeLink getRootFallbackLink() {
 Preconditions.checkState(root.isInternalDir());
 return rootFallbackLink;
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 0e190a3..b906996 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -959,6 +959,12 @@ public class ViewFileSystem extends FileSystem {
   FileSystem targetFs = mountPoint.target.targetFileSystem;
   children.addAll(Arrays.asList(targetFs.getChildFileSystems()));
 }
+
+if (fsState.isRootInternalDir() && fsState.getRootFallbackLink() != null) {
+  children.addAll(Arrays.asList(
+  fsState.getRootFallbackLink().targetFileSystem
+  .getChildFileSystems()));
+}
 return children.toArray(new FileSystem[]{});
   }
   
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
index 31674f8..9a858e1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
@@ -476,10 +476,18 @@ public class TestViewFileSystemOverloadSchemeWithHdfsScheme {
 // 2. Two hdfs file systems should be there if no cache.
 conf.setBoolean(Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE, false);
 try (FileSystem vfs = FileSystem.get(conf)) {
-  Assert.assertEquals(2, vfs.getChildFileSystems().length);
+  Assert.assertEquals(isFallBackExist(conf) ? 3 : 2,
+  vfs.getChildFileSystems().length);
 }
   }
 
+  // HDFS-15529: if any extended tests added fallback, then getChildFileSystems
+  // will include fallback as well.
+  private boolean isFallBackExist(Configuration config) {
+return config.get(ConfigUtil.getConfigViewFsPrefix(defaultFSURI
+.getAuthority()) + "." + Constants.CONFIG_VIEWFS_LINK_FALLBACK) != null;
+  }
+
   /**
* Create mount links as follows
* hdfs://localhost:xxx/HDFSUser0 --> hdfs://localhost:xxx/HDFSUser/
@@ -501,7 +509,8 @@ public class TestViewFileSystemOverloadSchemeWithHdfsScheme {
 conf.setBoolean(Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE, false);
 // Two hdfs file systems should be there if no cache.
 try (FileSystem vfs = FileSystem.get(conf)) {
-  Assert.assertEquals(2, vfs.getChildFileSystems().length);
+  Assert.assertEquals(isFallBackExist(conf) ? 3 : 2,
+  vfs.getChildFileSystems().length);
 }
   }
 
@@ -528,7 +537,8 @@ public class TestViewFileSystemOverloadSchemeWithHdfsScheme {
 // cache should work.
 conf.setBoolean(Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE, false);
 try (Fi
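The HDFS-15529 hunks above can be reduced to a minimal model: the child filesystems are the mount-point targets plus, when the root internal dir has a fallback link, the fallback filesystem as well. The List-of-String representation here is purely illustrative.

```java
import java.util.ArrayList;
import java.util.List;

public class ChildFsModel {
    static List<String> childFileSystems(List<String> mountTargets,
            String rootFallback) {
        List<String> children = new ArrayList<>(mountTargets);
        if (rootFallback != null) { // fsState.getRootFallbackLink() != null
            children.add(rootFallback); // fallback now counted as a child fs
        }
        return children;
    }
}
```

This mirrors why the test's expected count becomes `isFallBackExist(conf) ? 3 : 2`: a configured fallback contributes one extra child filesystem.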

[hadoop] branch branch-2.10.1 created (now aeeb125)

2020-09-12 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a change to branch branch-2.10.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at aeeb125  Preparing for 2.10.1 release

This branch includes the following new commits:

 new aeeb125  Preparing for 2.10.1 release

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[hadoop] 01/01: Preparing for 2.10.1 release

2020-09-12 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch branch-2.10.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit aeeb12555fd3f935468d14bd00efce14b240baae
Author: Masatake Iwasaki 
AuthorDate: Sun Sep 13 14:30:17 2020 +0900

Preparing for 2.10.1 release
---
 hadoop-assemblies/pom.xml | 4 ++--
 hadoop-build-tools/pom.xml| 2 +-
 hadoop-client/pom.xml | 4 ++--
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml | 4 ++--
 hadoop-cloud-storage-project/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml| 4 ++--
 hadoop-common-project/hadoop-auth/pom.xml | 4 ++--
 hadoop-common-project/hadoop-common/pom.xml   | 4 ++--
 hadoop-common-project/hadoop-kms/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml  | 4 ++--
 hadoop-common-project/pom.xml | 4 ++--
 hadoop-dist/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml| 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml| 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml | 4 ++--
 hadoop-hdfs-project/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml| 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml| 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml| 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml   | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml  | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml| 4 ++--
 hadoop-mapreduce-project/pom.xml  | 4 ++--
 hadoop-maven-plugins/pom.xml  | 2 +-
 hadoop-minicluster/pom.xml| 4 ++--
 hadoop-project-dist/pom.xml   | 4 ++--
 hadoop-project/pom.xml| 4 ++--
 hadoop-tools/hadoop-aliyun/pom.xml| 2 +-
 hadoop-tools/hadoop-ant/pom.xml   | 4 ++--
 hadoop-tools/hadoop-archive-logs/pom.xml  | 4 ++--
 hadoop-tools/hadoop-archives/pom.xml  | 4 ++--
 hadoop-tools/hadoop-aws/pom.xml   | 4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml| 2 +-
 hadoop-tools/hadoop-azure/pom.xml | 2 +-
 hadoop-tools/hadoop-datajoin/pom.xml  | 4 ++--
 hadoop-tools/hadoop-distcp/pom.xml| 4 ++--
 hadoop-tools/hadoop-extras/pom.xml| 4 ++--
 hadoop-tools/hadoop-gridmix/pom.xml   | 4 ++--
 hadoop-tools/hadoop-openstack/pom.xml | 4 ++--
 hadoop-tools/hadoop-pipes/pom.xml | 4 ++--
 hadoop-tools/hadoop-resourceestimator/pom.xml | 4 ++--
 hadoop-tools/hadoop-rumen/pom.xml | 4 ++--
 hadoop-tools/hadoop-sls/pom.xml   | 4 ++--
 hadoop-tools/hadoop-streaming/pom.xml | 4 ++--
 hadoop-tools/hadoop-tools-dist/pom.xml| 4 ++--
 hadoop-tools/pom.xml  | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml   | 4 ++--
 .../hadoop-yarn-applications-distributedshell/pom.xml | 4 ++--
 .../hadoop-yarn-applications-unmanaged-am-launcher/pom.xml| 4 ++--
 hadoop-yarn-project/hadoop-yarn/

[hadoop] branch branch-2.10 updated: Preparing for 2.10.2 development

2020-09-12 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new f4e0c14  Preparing for 2.10.2 development
f4e0c14 is described below

commit f4e0c14fe92d5d811e66678e8399a494ea253c81
Author: Masatake Iwasaki 
AuthorDate: Sun Sep 13 14:33:36 2020 +0900

Preparing for 2.10.2 development
---
 hadoop-assemblies/pom.xml | 4 ++--
 hadoop-build-tools/pom.xml| 2 +-
 hadoop-client/pom.xml | 4 ++--
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml | 4 ++--
 hadoop-cloud-storage-project/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml| 4 ++--
 hadoop-common-project/hadoop-auth/pom.xml | 4 ++--
 hadoop-common-project/hadoop-common/pom.xml   | 4 ++--
 hadoop-common-project/hadoop-kms/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml  | 4 ++--
 hadoop-common-project/pom.xml | 4 ++--
 hadoop-dist/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml| 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml| 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml | 4 ++--
 hadoop-hdfs-project/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml| 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml| 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml| 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml   | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml  | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml| 4 ++--
 hadoop-mapreduce-project/pom.xml  | 4 ++--
 hadoop-maven-plugins/pom.xml  | 2 +-
 hadoop-minicluster/pom.xml| 4 ++--
 hadoop-project-dist/pom.xml   | 4 ++--
 hadoop-project/pom.xml| 4 ++--
 hadoop-tools/hadoop-aliyun/pom.xml| 2 +-
 hadoop-tools/hadoop-ant/pom.xml   | 4 ++--
 hadoop-tools/hadoop-archive-logs/pom.xml  | 4 ++--
 hadoop-tools/hadoop-archives/pom.xml  | 4 ++--
 hadoop-tools/hadoop-aws/pom.xml   | 4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml| 2 +-
 hadoop-tools/hadoop-azure/pom.xml | 2 +-
 hadoop-tools/hadoop-datajoin/pom.xml  | 4 ++--
 hadoop-tools/hadoop-distcp/pom.xml| 4 ++--
 hadoop-tools/hadoop-extras/pom.xml| 4 ++--
 hadoop-tools/hadoop-gridmix/pom.xml   | 4 ++--
 hadoop-tools/hadoop-openstack/pom.xml | 4 ++--
 hadoop-tools/hadoop-pipes/pom.xml | 4 ++--
 hadoop-tools/hadoop-resourceestimator/pom.xml | 4 ++--
 hadoop-tools/hadoop-rumen/pom.xml | 4 ++--
 hadoop-tools/hadoop-sls/pom.xml   | 4 ++--
 hadoop-tools/hadoop-streaming/pom.xml | 4 ++--
 hadoop-tools/hadoop-tools-dist/pom.xml| 4 ++--
 hadoop-tools/pom.xml  | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml   | 4 ++--
 .../hadoop-yarn-applications-distri