[hadoop] branch trunk updated: MAPREDUCE-6826. Job fails with InvalidStateTransitonException: Invalid event: JOB_TASK_COMPLETED at SUCCEEDED/COMMITTING. Contributed by Bilwa S T.

2020-05-18 Thread surendralilhore
This is an automated email from the ASF dual-hosted git repository.

surendralilhore pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d4e3640  MAPREDUCE-6826. Job fails with 
InvalidStateTransitonException: Invalid event: JOB_TASK_COMPLETED at 
SUCCEEDED/COMMITTING. Contributed by Bilwa S T.
d4e3640 is described below

commit d4e36409d40d9f0783234a3b98394962ae0da87e
Author: Surendra Singh Lilhore 
AuthorDate: Tue May 19 11:06:36 2020 +0530

MAPREDUCE-6826. Job fails with InvalidStateTransitonException: Invalid 
event: JOB_TASK_COMPLETED at SUCCEEDED/COMMITTING. Contributed by Bilwa S T.
---
 .../org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java |  6 --
 .../apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java | 12 +++-
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
index d2e2492..8ee097f 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
@@ -422,7 +422,8 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
   EnumSet.of(JobEventType.JOB_UPDATED_NODES,
   JobEventType.JOB_TASK_ATTEMPT_FETCH_FAILURE,
   JobEventType.JOB_TASK_ATTEMPT_COMPLETED,
-  JobEventType.JOB_MAP_TASK_RESCHEDULED))
+  JobEventType.JOB_MAP_TASK_RESCHEDULED,
+  JobEventType.JOB_TASK_COMPLETED))
 
   // Transitions from SUCCEEDED state
   .addTransition(JobStateInternal.SUCCEEDED, 
JobStateInternal.SUCCEEDED,
@@ -441,7 +442,8 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
   JobEventType.JOB_TASK_ATTEMPT_FETCH_FAILURE,
   JobEventType.JOB_AM_REBOOT,
   JobEventType.JOB_TASK_ATTEMPT_COMPLETED,
-  JobEventType.JOB_MAP_TASK_RESCHEDULED))
+  JobEventType.JOB_MAP_TASK_RESCHEDULED,
+  JobEventType.JOB_TASK_COMPLETED))
 
   // Transitions from FAIL_WAIT state
   .addTransition(JobStateInternal.FAIL_WAIT,
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
index 945b254..122fb9b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
@@ -203,7 +203,7 @@ public class TestJobImpl {
   public void testCheckJobCompleteSuccess() throws Exception {
 Configuration conf = new Configuration();
 conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir);
-AsyncDispatcher dispatcher = new AsyncDispatcher();
+DrainDispatcher dispatcher = new DrainDispatcher();
 dispatcher.init(conf);
 dispatcher.start();
 CyclicBarrier syncBarrier = new CyclicBarrier(2);
@@ -225,6 +225,11 @@ public class TestJobImpl {
 JobEventType.JOB_MAP_TASK_RESCHEDULED));
 assertJobState(job, JobStateInternal.COMMITTING);
 
+job.handle(new JobEvent(job.getID(),
+JobEventType.JOB_TASK_COMPLETED));
+dispatcher.await();
+assertJobState(job, JobStateInternal.COMMITTING);
+
 // let the committer complete and verify the job succeeds
 syncBarrier.await();
 assertJobState(job, JobStateInternal.SUCCEEDED);
@@ -236,6 +241,11 @@ public class TestJobImpl {
 job.handle(new JobEvent(job.getID(), 
 JobEventType.JOB_MAP_TASK_RESCHEDULED));
 assertJobState(job, JobStateInternal.SUCCEEDED);
+
+job.handle(new JobEvent(job.getID(),
+JobEventType.JOB_TASK_COMPLETED));
+dispatcher.await();
+assertJobState(job, JobStateInternal.SUCCEEDED);
 
 dispatcher.stop();
 commitHandler.stop();

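For readers following the state-machine change above: the fix registers JOB_TASK_COMPLETED as an ignorable event once the job has already reached COMMITTING or SUCCEEDED, so a straggling task-completion event no longer raises InvalidStateTransitonException. Below is a minimal, self-contained sketch of that "ignore the late event" pattern; it is illustrative only and does not use Hadoop's StateMachineFactory API (state and event names are simplified).

```java
import java.util.EnumSet;

// Sketch: a job state machine that ignores a late JOB_TASK_COMPLETED event
// once the job is already finishing, instead of treating it as an invalid
// transition. Names are illustrative, not the MapReduce AM implementation.
public class IgnorableEventSketch {
  enum State { RUNNING, COMMITTING, SUCCEEDED }
  enum Event { JOB_TASK_COMPLETED, JOB_COMPLETED }

  // Events that the finishing states simply swallow.
  private static final EnumSet<Event> IGNORED_WHEN_FINISHING =
      EnumSet.of(Event.JOB_TASK_COMPLETED);

  static State handle(State current, Event event) {
    if ((current == State.COMMITTING || current == State.SUCCEEDED)
        && IGNORED_WHEN_FINISHING.contains(event)) {
      return current;                     // stay in the same state, no exception
    }
    if (current == State.RUNNING) {
      return event == Event.JOB_COMPLETED ? State.COMMITTING : State.RUNNING;
    }
    throw new IllegalStateException(
        "Invalid event: " + event + " at " + current);
  }

  public static void main(String[] args) {
    State s = handle(State.RUNNING, Event.JOB_COMPLETED);   // -> COMMITTING
    s = handle(s, Event.JOB_TASK_COMPLETED);                // still COMMITTING
    System.out.println(s);
  }
}
```

The test change mirrors this: switching to DrainDispatcher and calling dispatcher.await() lets the test deliver the late event and then assert that the job remains in COMMITTING (and later SUCCEEDED) rather than failing.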




[hadoop] branch trunk updated: HADOOP-17024. ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root). Contributed by Abhishek Das.

2020-05-18 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ce4ec74  HADOOP-17024. ListStatus on ViewFS root (ls "/") should list 
the linkFallBack root (configured target root). Contributed by Abhishek Das.
ce4ec74 is described below

commit ce4ec7445345eb94c6741d416814a4eac319f0a6
Author: Abhishek Das 
AuthorDate: Mon May 18 22:27:12 2020 -0700

HADOOP-17024. ListStatus on ViewFS root (ls "/") should list the 
linkFallBack root (configured target root). Contributed by Abhishek Das.
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 13 +++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 49 ++-
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 51 ++-
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 98 ++
 4 files changed, 209 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 6992343..50c839b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -123,6 +123,7 @@ abstract class InodeTree {
 private final Map> children = new HashMap<>();
 private T internalDirFs =  null; //filesystem of this internal directory
 private boolean isRoot = false;
+private INodeLink fallbackLink = null;
 
 INodeDir(final String pathToNode, final UserGroupInformation aUgi) {
   super(pathToNode, aUgi);
@@ -149,6 +150,17 @@ abstract class InodeTree {
   return isRoot;
 }
 
+INodeLink getFallbackLink() {
+  return fallbackLink;
+}
+
+void addFallbackLink(INodeLink link) throws IOException {
+  if (!isRoot) {
+throw new IOException("Fallback link can only be added for root");
+  }
+  this.fallbackLink = link;
+}
+
 Map> getChildren() {
   return Collections.unmodifiableMap(children);
 }
@@ -580,6 +592,7 @@ abstract class InodeTree {
 }
   }
   rootFallbackLink = fallbackLink;
+  getRootDir().addFallbackLink(rootFallbackLink);
 }
 
 if (!gotMountTableEntry) {
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 115fc03..0cbcafc 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1200,10 +1200,19 @@ public class ViewFileSystem extends FileSystem {
 }
 
 
+/**
+ * {@inheritDoc}
+ *
+ * Note: listStatus on root("/") considers listing from fallbackLink if
+ * available. If the same directory name is present in configured mount
+ * path as well as in fallback link, then only the configured mount path
+ * will be listed in the returned result.
+ */
 @Override
 public FileStatus[] listStatus(Path f) throws AccessControlException,
 FileNotFoundException, IOException {
   checkPathIsSlash(f);
+  FileStatus[] fallbackStatuses = listStatusForFallbackLink();
   FileStatus[] result = new 
FileStatus[theInternalDir.getChildren().size()];
   int i = 0;
   for (Entry> iEntry :
@@ -1226,7 +1235,45 @@ public class ViewFileSystem extends FileSystem {
 myUri, null));
 }
   }
-  return result;
+  if (fallbackStatuses.length > 0) {
+return consolidateFileStatuses(fallbackStatuses, result);
+  } else {
+return result;
+  }
+}
+
+private FileStatus[] consolidateFileStatuses(FileStatus[] fallbackStatuses,
+FileStatus[] mountPointStatuses) {
+  ArrayList result = new ArrayList<>();
+  Set pathSet = new HashSet<>();
+  for (FileStatus status : mountPointStatuses) {
+result.add(status);
+pathSet.add(status.getPath().getName());
+  }
+  for (FileStatus status : fallbackStatuses) {
+if (!pathSet.contains(status.getPath().getName())) {
+  result.add(status);
+}
+  }
+  return result.toArray(new FileStatus[0]);
+}
+
+private FileStatus[] listStatusForFallbackLink() throws IOException {
+  if (theInternalDir.isRoot() &&
+  theInternalDir.getFallbackLink() != null) {
+FileSystem linkedFs =
+theInternalDir.getFallbackLink().getTargetFileSystem();
+// Fallback link is only applicable for root
+FileStatus[] statuses = linkedFs.listStatus(new Path("/"));
+for 
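(The remainder of this diff is truncated by the archive.) The merge rule described in the new javadoc — entries from configured mount points shadow same-named entries from the fallback link — amounts to a name-keyed de-duplication. A minimal sketch of that rule, using plain strings rather than FileStatus, with illustrative names only:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the root-listing merge: configured mount points win over
// entries from the fallback filesystem that share the same name.
public class RootListingMergeSketch {
  static List<String> merge(List<String> mountPointNames,
                            List<String> fallbackNames) {
    List<String> result = new ArrayList<>(mountPointNames);
    Set<String> seen = new HashSet<>(mountPointNames);
    for (String name : fallbackNames) {
      if (!seen.contains(name)) {   // only add fallback entries not shadowed
        result.add(name);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    // "data" exists both as a mount point and under the fallback root:
    // the mount point entry is listed, the fallback one is dropped.
    System.out.println(merge(List.of("data", "logs"), List.of("data", "tmp")));
    // prints [data, logs, tmp]
  }
}
```

In the actual change, consolidateFileStatuses applies the same idea keyed on status.getPath().getName().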

[hadoop] branch trunk updated: HADOOP-17004. ABFS: Improve the ABFS driver documentation

2020-05-18 Thread dazhou
This is an automated email from the ASF dual-hosted git repository.

dazhou pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new bdbd59c  HADOOP-17004. ABFS: Improve the ABFS driver documentation
bdbd59c is described below

commit bdbd59cfa0904860fc4ce7a2afef1e84f35b8b82
Author: bilaharith <52483117+bilahar...@users.noreply.github.com>
AuthorDate: Tue May 19 09:15:54 2020 +0530

HADOOP-17004. ABFS: Improve the ABFS driver documentation

Contributed by Bilahari T H.
---
 .../hadoop-azure/src/site/markdown/abfs.md | 133 -
 1 file changed, 130 insertions(+), 3 deletions(-)

diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md 
b/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
index 89f52e7..93141f1 100644
--- a/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
@@ -257,7 +257,8 @@ will have the URL 
`abfs://contain...@abfswales1.dfs.core.windows.net/`
 
 
 You can create a new container through the ABFS connector, by setting the 
option
- `fs.azure.createRemoteFileSystemDuringInitialization` to `true`.
+ `fs.azure.createRemoteFileSystemDuringInitialization` to `true`. Note that
+  this is not supported when the AuthType is SAS.
 
 If the container does not exist, an attempt to list it with `hadoop fs -ls`
 will fail
@@ -317,8 +318,13 @@ driven by them.
 
 What can be changed is what secrets/credentials are used to authenticate the 
caller.
 
-The authentication mechanism is set in `fs.azure.account.auth.type` (or the 
account specific variant),
-and, for the various OAuth options `fs.azure.account.oauth.provider.type`
+The authentication mechanism is set in `fs.azure.account.auth.type` (or the
+account-specific variant). The possible values are SharedKey, OAuth, Custom
+and SAS. For the various OAuth options use the config `fs.azure.account
+.oauth.provider.type`. The supported implementations are
+ClientCredsTokenProvider, UserPasswordTokenProvider, MsiTokenProvider and
+RefreshTokenBasedTokenProvider. An IllegalArgumentException is thrown if
+the specified provider type is not one of the supported implementations.
 
 All secrets can be stored in JCEKS files. These are encrypted and password
 protected —use them or a compatible Hadoop Key Management Store wherever
@@ -350,6 +356,15 @@ the password, "key", retrieved from the XML/JCECKs 
configuration files.
 *Note*: The source of the account key can be changed through a custom key 
provider;
 one exists to execute a shell script to retrieve it.
 
+A custom key provider class can be provided with the config
+`fs.azure.account.keyprovider`. If a key provider class is specified, it
+will be used to obtain the account key. Otherwise the Simple key provider is
+used, which reads the key from the config `fs.azure.account.key`.
+
+To retrieve the key using a shell script, specify the path to the script in
+the config `fs.azure.shellkeyprovider.script`. The ShellDecryptionKeyProvider
+class uses the specified script to retrieve the key.
+
 ###  OAuth 2.0 Client Credentials
 
 OAuth 2.0 credentials of (client id, client secret, endpoint) are provided in 
the configuration/JCEKS file.
@@ -466,6 +481,13 @@ With an existing Oauth 2.0 token, make a request of the 
Active Directory endpoin
   
 
 
+  fs.azure.account.oauth2.refresh.endpoint
+  
+  
+  Refresh token endpoint
+  
+
+
   fs.azure.account.oauth2.client.id
   
   
@@ -507,6 +529,13 @@ The Azure Portal/CLI is used to create the service 
identity.
   
 
 
+  fs.azure.account.oauth2.msi.endpoint
+  
+  
+   MSI endpoint
+  
+
+
   fs.azure.account.oauth2.client.id
   
   
@@ -542,6 +571,26 @@ and optionally 
`org.apache.hadoop.fs.azurebfs.extensions.BoundDTExtension`.
 
 The declared class also holds responsibility to implement retry logic while 
fetching access tokens.
 
+###  Delegation Token 
Provider
+
+A delegation token provider supplies the ABFS connector with delegation tokens
+and helps renew and cancel them by implementing the
+CustomDelegationTokenManager interface.
+
+```xml
+
+  fs.azure.enable.delegation.token
+  true
+  Make this true to use delegation token provider
+
+
+  fs.azure.delegation.token.provider.type
+  
{fully-qualified-class-name-for-implementation-of-CustomDelegationTokenManager-interface}
+
+```
+If delegation tokens are enabled and the config `fs.azure.delegation.token
+.provider.type` is not provided, an IllegalArgumentException is thrown.
+
 ### Shared Access Signature (SAS) Token Provider
 
 A Shared Access Signature (SAS) token provider supplies the ABFS connector 
with SAS
@@ -691,6 +740,84 @@ Config `fs.azure.account.hns.enabled` provides an option 
to specify whether
 Config `fs.azure.enable.check.access` needs to be set true to enable
  the AzureBlobFileSystem.access().
 
+###  Primary User Group Options
+The group name which is part of 
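(The remainder of this diff is truncated by the archive.) To tie the authentication section together, here is a hedged sketch of configuring SharedKey authentication programmatically. The account, container, and key values are placeholders, and only configuration keys already named in this document are used; the account-qualified form of `fs.azure.account.key` follows the "account specific variant" convention mentioned above.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: SharedKey auth for an ABFS container. Account, container and key
// below are placeholders, not real credentials.
public class AbfsSharedKeySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.azure.account.auth.type", "SharedKey");
    conf.set("fs.azure.account.key.myaccount.dfs.core.windows.net",
        "<base64-encoded-storage-account-key>");

    FileSystem fs = FileSystem.get(
        new URI("abfs://mycontainer@myaccount.dfs.core.windows.net/"), conf);
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
  }
}
```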

[hadoop] branch branch-3.3 updated: HDFS-15293. Relax the condition for accepting a fsimage when receiving a checkpoint. Contributed by Chen Liang

2020-05-18 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new e452163  HDFS-15293. Relax the condition for accepting a fsimage when 
receiving a checkpoint. Contributed by Chen Liang
e452163 is described below

commit e452163a06daa6bbebc571127754962d8776a925
Author: Chen Liang 
AuthorDate: Mon May 18 10:58:52 2020 -0700

HDFS-15293. Relax the condition for accepting a fsimage when receiving a 
checkpoint. Contributed by Chen Liang

(cherry picked from commit 7bb902bc0d0c62d63a8960db444de3abb0a6ad22)
---
 .../hadoop/hdfs/server/namenode/ImageServlet.java  | 39 
 .../hdfs/server/namenode/TestCheckpoint.java   | 53 +-
 2 files changed, 81 insertions(+), 11 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
index 91f24dd..a9c2a09 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
@@ -99,6 +99,19 @@ public class ImageServlet extends HttpServlet {
   "recent.image.check.enabled";
   public static final boolean RECENT_IMAGE_CHECK_ENABLED_DEFAULT = true;
 
+  /*
+   * Specify a relaxation for the time delta check, the relaxation is to 
account
+   * for the scenario that there are chances that minor time difference (e.g.
+   * due to image upload delay, or minor machine clock skew) can cause ANN to
+   * reject a fsImage too aggressively.
+   */
+  private static double recentImageCheckTimePrecision = 0.75;
+
+  @VisibleForTesting
+  static void setRecentImageCheckTimePrecision(double ratio) {
+recentImageCheckTimePrecision = ratio;
+  }
+
   @Override
   public void doGet(final HttpServletRequest request,
   final HttpServletResponse response) throws ServletException, IOException 
{
@@ -592,6 +605,9 @@ public class ImageServlet extends HttpServlet {
   long checkpointPeriod =
   conf.getTimeDuration(DFS_NAMENODE_CHECKPOINT_PERIOD_KEY,
   DFS_NAMENODE_CHECKPOINT_PERIOD_DEFAULT, 
TimeUnit.SECONDS);
+  checkpointPeriod = Math.round(
+  checkpointPeriod * recentImageCheckTimePrecision);
+
   long checkpointTxnCount =
   conf.getLong(DFS_NAMENODE_CHECKPOINT_TXNS_KEY,
   DFS_NAMENODE_CHECKPOINT_TXNS_DEFAULT);
@@ -612,21 +628,24 @@ public class ImageServlet extends HttpServlet {
 // a new fsImage
 // 1. most recent image's txid is too far behind
 // 2. last checkpoint time was too old
-response.sendError(HttpServletResponse.SC_CONFLICT,
-"Most recent checkpoint is neither too far behind in "
-+ "txid, nor too old. New txnid cnt is "
-+ (txid - lastCheckpointTxid)
-+ ", expecting at least " + checkpointTxnCount
-+ " unless too long since last upload.");
+String message = "Rejecting a fsimage due to small time delta "
++ "and txnid delta. Time since previous checkpoint is "
++ timeDelta + " expecting at least " + checkpointPeriod
++ " txnid delta since previous checkpoint is " +
+(txid - lastCheckpointTxid) + " expecting at least "
++ checkpointTxnCount;
+LOG.info(message);
+response.sendError(HttpServletResponse.SC_CONFLICT, message);
 return null;
   }
 
   try {
 if (nnImage.getStorage().findImageFile(nnf, txid) != null) {
-  response.sendError(HttpServletResponse.SC_CONFLICT,
-  "Either current namenode has checkpointed or "
-  + "another checkpointer already uploaded an "
-  + "checkpoint for txid " + txid);
+  String message = "Either current namenode has checkpointed 
or "
+  + "another checkpointer already uploaded an "
+  + "checkpoint for txid " + txid;
+  LOG.info(message);
+  response.sendError(HttpServletResponse.SC_CONFLICT, message);
   return null;
 }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
index 
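(Test diff truncated by the archive.) The relaxation above multiplies the configured checkpoint period by recentImageCheckTimePrecision (0.75) before comparing it with the time since the last accepted checkpoint, so an image that arrives slightly early because of upload delay or clock skew is no longer rejected. A minimal sketch of the relaxed acceptance decision, with illustrative names rather than the ImageServlet code:

```java
import java.util.concurrent.TimeUnit;

// Sketch: relaxed "should the ANN accept this checkpoint?" decision.
public class CheckpointAcceptanceSketch {
  // Relaxation factor applied to the configured checkpoint period so that
  // minor upload delay or clock skew does not cause spurious rejections.
  private static final double TIME_PRECISION = 0.75;

  static boolean shouldAccept(long timeSinceLastCheckpointSec,
                              long txnsSinceLastCheckpoint,
                              long checkpointPeriodSec,
                              long checkpointTxnCount) {
    long relaxedPeriod = Math.round(checkpointPeriodSec * TIME_PRECISION);
    // Accept when either enough time has passed or enough transactions
    // have accumulated since the previous checkpoint.
    return timeSinceLastCheckpointSec >= relaxedPeriod
        || txnsSinceLastCheckpoint >= checkpointTxnCount;
  }

  public static void main(String[] args) {
    long period = TimeUnit.HOURS.toSeconds(1);   // configured: 1 hour
    // 50 minutes elapsed, few transactions: accepted under the relaxed check
    // (50 min >= 45 min), but rejected against the full one-hour period.
    System.out.println(shouldAccept(3000, 10, period, 1_000_000));
  }
}
```

Both rejection branches now also log the message via LOG.info before sending SC_CONFLICT, which makes the rejection visible on the active NameNode.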

[hadoop] branch branch-3.3.0 updated: HDFS-15293. Relax the condition for accepting a fsimage when receiving a checkpoint. Contributed by Chen Liang

2020-05-18 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3.0 by this push:
 new 940a422  HDFS-15293. Relax the condition for accepting a fsimage when 
receiving a checkpoint. Contributed by Chen Liang
940a422 is described below

commit 940a422525258514165628ec93011507ba1ed5d1
Author: Chen Liang 
AuthorDate: Mon May 18 10:58:52 2020 -0700

HDFS-15293. Relax the condition for accepting a fsimage when receiving a 
checkpoint. Contributed by Chen Liang

(cherry picked from commit 7bb902bc0d0c62d63a8960db444de3abb0a6ad22)
(cherry picked from commit e452163a06daa6bbebc571127754962d8776a925)
---
 .../hadoop/hdfs/server/namenode/ImageServlet.java  | 39 
 .../hdfs/server/namenode/TestCheckpoint.java   | 53 +-
 2 files changed, 81 insertions(+), 11 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
index 91f24dd..a9c2a09 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
@@ -99,6 +99,19 @@ public class ImageServlet extends HttpServlet {
   "recent.image.check.enabled";
   public static final boolean RECENT_IMAGE_CHECK_ENABLED_DEFAULT = true;
 
+  /*
+   * Specify a relaxation for the time delta check, the relaxation is to 
account
+   * for the scenario that there are chances that minor time difference (e.g.
+   * due to image upload delay, or minor machine clock skew) can cause ANN to
+   * reject a fsImage too aggressively.
+   */
+  private static double recentImageCheckTimePrecision = 0.75;
+
+  @VisibleForTesting
+  static void setRecentImageCheckTimePrecision(double ratio) {
+recentImageCheckTimePrecision = ratio;
+  }
+
   @Override
   public void doGet(final HttpServletRequest request,
   final HttpServletResponse response) throws ServletException, IOException 
{
@@ -592,6 +605,9 @@ public class ImageServlet extends HttpServlet {
   long checkpointPeriod =
   conf.getTimeDuration(DFS_NAMENODE_CHECKPOINT_PERIOD_KEY,
   DFS_NAMENODE_CHECKPOINT_PERIOD_DEFAULT, 
TimeUnit.SECONDS);
+  checkpointPeriod = Math.round(
+  checkpointPeriod * recentImageCheckTimePrecision);
+
   long checkpointTxnCount =
   conf.getLong(DFS_NAMENODE_CHECKPOINT_TXNS_KEY,
   DFS_NAMENODE_CHECKPOINT_TXNS_DEFAULT);
@@ -612,21 +628,24 @@ public class ImageServlet extends HttpServlet {
 // a new fsImage
 // 1. most recent image's txid is too far behind
 // 2. last checkpoint time was too old
-response.sendError(HttpServletResponse.SC_CONFLICT,
-"Most recent checkpoint is neither too far behind in "
-+ "txid, nor too old. New txnid cnt is "
-+ (txid - lastCheckpointTxid)
-+ ", expecting at least " + checkpointTxnCount
-+ " unless too long since last upload.");
+String message = "Rejecting a fsimage due to small time delta "
++ "and txnid delta. Time since previous checkpoint is "
++ timeDelta + " expecting at least " + checkpointPeriod
++ " txnid delta since previous checkpoint is " +
+(txid - lastCheckpointTxid) + " expecting at least "
++ checkpointTxnCount;
+LOG.info(message);
+response.sendError(HttpServletResponse.SC_CONFLICT, message);
 return null;
   }
 
   try {
 if (nnImage.getStorage().findImageFile(nnf, txid) != null) {
-  response.sendError(HttpServletResponse.SC_CONFLICT,
-  "Either current namenode has checkpointed or "
-  + "another checkpointer already uploaded an "
-  + "checkpoint for txid " + txid);
+  String message = "Either current namenode has checkpointed 
or "
+  + "another checkpointer already uploaded an "
+  + "checkpoint for txid " + txid;
+  LOG.info(message);
+  response.sendError(HttpServletResponse.SC_CONFLICT, message);
   return null;
 }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
 

[hadoop] branch branch-2.10 updated: HDFS-15293. Relax the condition for accepting a fsimage when receiving a checkpoint. Contributed by Chen Liang

2020-05-18 Thread cliang
This is an automated email from the ASF dual-hosted git repository.

cliang pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new d0504cf  HDFS-15293. Relax the condition for accepting a fsimage when 
receiving a checkpoint. Contributed by Chen Liang
d0504cf is described below

commit d0504cf74d3d281a77f8f98da3fe9aab03fd1967
Author: Chen Liang 
AuthorDate: Mon May 18 10:58:52 2020 -0700

HDFS-15293. Relax the condition for accepting a fsimage when receiving a 
checkpoint. Contributed by Chen Liang
---
 .../hadoop/hdfs/server/namenode/ImageServlet.java  | 39 
 .../hdfs/server/namenode/TestCheckpoint.java   | 53 +-
 2 files changed, 81 insertions(+), 11 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
index 3dcc168..b6fc84b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
@@ -98,6 +98,19 @@ public class ImageServlet extends HttpServlet {
   "recent.image.check.enabled";
   public static final boolean RECENT_IMAGE_CHECK_ENABLED_DEFAULT = true;
 
+  /*
+   * Specify a relaxation for the time delta check, the relaxation is to 
account
+   * for the scenario that there are chances that minor time difference (e.g.
+   * due to image upload delay, or minor machine clock skew) can cause ANN to
+   * reject a fsImage too aggressively.
+   */
+  private static double recentImageCheckTimePrecision = 0.75;
+
+  @VisibleForTesting
+  static void setRecentImageCheckTimePrecision(double ratio) {
+recentImageCheckTimePrecision = ratio;
+  }
+
   @Override
   public void doGet(final HttpServletRequest request,
   final HttpServletResponse response) throws ServletException, IOException 
{
@@ -566,6 +579,9 @@ public class ImageServlet extends HttpServlet {
   long checkpointPeriod =
   conf.getTimeDuration(DFS_NAMENODE_CHECKPOINT_PERIOD_KEY,
   DFS_NAMENODE_CHECKPOINT_PERIOD_DEFAULT, 
TimeUnit.SECONDS);
+  checkpointPeriod = Math.round(
+  checkpointPeriod * recentImageCheckTimePrecision);
+
   long checkpointTxnCount =
   conf.getLong(DFS_NAMENODE_CHECKPOINT_TXNS_KEY,
   DFS_NAMENODE_CHECKPOINT_TXNS_DEFAULT);
@@ -586,21 +602,24 @@ public class ImageServlet extends HttpServlet {
 // a new fsImage
 // 1. most recent image's txid is too far behind
 // 2. last checkpoint time was too old
-response.sendError(HttpServletResponse.SC_CONFLICT,
-"Most recent checkpoint is neither too far behind in "
-+ "txid, nor too old. New txnid cnt is "
-+ (txid - lastCheckpointTxid)
-+ ", expecting at least " + checkpointTxnCount
-+ " unless too long since last upload.");
+String message = "Rejecting a fsimage due to small time delta "
++ "and txnid delta. Time since previous checkpoint is "
++ timeDelta + " expecting at least " + checkpointPeriod
++ " txnid delta since previous checkpoint is " +
+(txid - lastCheckpointTxid) + " expecting at least "
++ checkpointTxnCount;
+LOG.info(message);
+response.sendError(HttpServletResponse.SC_CONFLICT, message);
 return null;
   }
 
   try {
 if (nnImage.getStorage().findImageFile(nnf, txid) != null) {
-  response.sendError(HttpServletResponse.SC_CONFLICT,
-  "Either current namenode has checkpointed or "
-  + "another checkpointer already uploaded an "
-  + "checkpoint for txid " + txid);
+  String message = "Either current namenode has checkpointed 
or "
+  + "another checkpointer already uploaded an "
+  + "checkpoint for txid " + txid;
+  LOG.info(message);
+  response.sendError(HttpServletResponse.SC_CONFLICT, message);
   return null;
 }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
index 2fc1cd2..e9f1917 100644
--- 

[hadoop] branch branch-3.1 updated: HDFS-15293. Relax the condition for accepting a fsimage when receiving a checkpoint. Contributed by Chen Liang

2020-05-18 Thread cliang
This is an automated email from the ASF dual-hosted git repository.

cliang pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 8f53545  HDFS-15293. Relax the condition for accepting a fsimage when 
receiving a checkpoint. Contributed by Chen Liang
8f53545 is described below

commit 8f53545f9d14a95527760c3bf4ef504e606f61e2
Author: Chen Liang 
AuthorDate: Mon May 18 10:58:52 2020 -0700

HDFS-15293. Relax the condition for accepting a fsimage when receiving a 
checkpoint. Contributed by Chen Liang
---
 .../hadoop/hdfs/server/namenode/ImageServlet.java  | 39 
 .../hdfs/server/namenode/TestCheckpoint.java   | 53 +-
 2 files changed, 81 insertions(+), 11 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
index 5b3e64b..6d52f8b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
@@ -98,6 +98,19 @@ public class ImageServlet extends HttpServlet {
   "recent.image.check.enabled";
   public static final boolean RECENT_IMAGE_CHECK_ENABLED_DEFAULT = true;
 
+  /*
+   * Specify a relaxation for the time delta check, the relaxation is to 
account
+   * for the scenario that there are chances that minor time difference (e.g.
+   * due to image upload delay, or minor machine clock skew) can cause ANN to
+   * reject a fsImage too aggressively.
+   */
+  private static double recentImageCheckTimePrecision = 0.75;
+
+  @VisibleForTesting
+  static void setRecentImageCheckTimePrecision(double ratio) {
+recentImageCheckTimePrecision = ratio;
+  }
+
   @Override
   public void doGet(final HttpServletRequest request,
   final HttpServletResponse response) throws ServletException, IOException 
{
@@ -566,6 +579,9 @@ public class ImageServlet extends HttpServlet {
   long checkpointPeriod =
   conf.getTimeDuration(DFS_NAMENODE_CHECKPOINT_PERIOD_KEY,
   DFS_NAMENODE_CHECKPOINT_PERIOD_DEFAULT, 
TimeUnit.SECONDS);
+  checkpointPeriod = Math.round(
+  checkpointPeriod * recentImageCheckTimePrecision);
+
   long checkpointTxnCount =
   conf.getLong(DFS_NAMENODE_CHECKPOINT_TXNS_KEY,
   DFS_NAMENODE_CHECKPOINT_TXNS_DEFAULT);
@@ -586,21 +602,24 @@ public class ImageServlet extends HttpServlet {
 // a new fsImage
 // 1. most recent image's txid is too far behind
 // 2. last checkpoint time was too old
-response.sendError(HttpServletResponse.SC_CONFLICT,
-"Most recent checkpoint is neither too far behind in "
-+ "txid, nor too old. New txnid cnt is "
-+ (txid - lastCheckpointTxid)
-+ ", expecting at least " + checkpointTxnCount
-+ " unless too long since last upload.");
+String message = "Rejecting a fsimage due to small time delta "
++ "and txnid delta. Time since previous checkpoint is "
++ timeDelta + " expecting at least " + checkpointPeriod
++ " txnid delta since previous checkpoint is " +
+(txid - lastCheckpointTxid) + " expecting at least "
++ checkpointTxnCount;
+LOG.info(message);
+response.sendError(HttpServletResponse.SC_CONFLICT, message);
 return null;
   }
 
   try {
 if (nnImage.getStorage().findImageFile(nnf, txid) != null) {
-  response.sendError(HttpServletResponse.SC_CONFLICT,
-  "Either current namenode has checkpointed or "
-  + "another checkpointer already uploaded an "
-  + "checkpoint for txid " + txid);
+  String message = "Either current namenode has checkpointed 
or "
+  + "another checkpointer already uploaded an "
+  + "checkpoint for txid " + txid;
+  LOG.info(message);
+  response.sendError(HttpServletResponse.SC_CONFLICT, message);
   return null;
 }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
index 4f0928e..9fbf5a8 100644
--- 

[hadoop] branch branch-3.2 updated: HDFS-15293. Relax the condition for accepting a fsimage when receiving a checkpoint. Contributed by Chen Liang

2020-05-18 Thread cliang
This is an automated email from the ASF dual-hosted git repository.

cliang pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 1813d25  HDFS-15293. Relax the condition for accepting a fsimage when 
receiving a checkpoint. Contributed by Chen Liang
1813d25 is described below

commit 1813d25bf23b2ae3fe87cf5bbf2d5dcb7987cf65
Author: Chen Liang 
AuthorDate: Mon May 18 10:58:52 2020 -0700

HDFS-15293. Relax the condition for accepting a fsimage when receiving a 
checkpoint. Contributed by Chen Liang
---
 .../hadoop/hdfs/server/namenode/ImageServlet.java  | 39 
 .../hdfs/server/namenode/TestCheckpoint.java   | 53 +-
 2 files changed, 81 insertions(+), 11 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
index c36b46d..5ed2a16 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
@@ -98,6 +98,19 @@ public class ImageServlet extends HttpServlet {
   "recent.image.check.enabled";
   public static final boolean RECENT_IMAGE_CHECK_ENABLED_DEFAULT = true;
 
+  /*
+   * Specify a relaxation for the time delta check, the relaxation is to 
account
+   * for the scenario that there are chances that minor time difference (e.g.
+   * due to image upload delay, or minor machine clock skew) can cause ANN to
+   * reject a fsImage too aggressively.
+   */
+  private static double recentImageCheckTimePrecision = 0.75;
+
+  @VisibleForTesting
+  static void setRecentImageCheckTimePrecision(double ratio) {
+recentImageCheckTimePrecision = ratio;
+  }
+
   @Override
   public void doGet(final HttpServletRequest request,
   final HttpServletResponse response) throws ServletException, IOException 
{
@@ -566,6 +579,9 @@ public class ImageServlet extends HttpServlet {
   long checkpointPeriod =
   conf.getTimeDuration(DFS_NAMENODE_CHECKPOINT_PERIOD_KEY,
   DFS_NAMENODE_CHECKPOINT_PERIOD_DEFAULT, 
TimeUnit.SECONDS);
+  checkpointPeriod = Math.round(
+  checkpointPeriod * recentImageCheckTimePrecision);
+
   long checkpointTxnCount =
   conf.getLong(DFS_NAMENODE_CHECKPOINT_TXNS_KEY,
   DFS_NAMENODE_CHECKPOINT_TXNS_DEFAULT);
@@ -586,21 +602,24 @@ public class ImageServlet extends HttpServlet {
 // a new fsImage
 // 1. most recent image's txid is too far behind
 // 2. last checkpoint time was too old
-response.sendError(HttpServletResponse.SC_CONFLICT,
-"Most recent checkpoint is neither too far behind in "
-+ "txid, nor too old. New txnid cnt is "
-+ (txid - lastCheckpointTxid)
-+ ", expecting at least " + checkpointTxnCount
-+ " unless too long since last upload.");
+String message = "Rejecting a fsimage due to small time delta "
++ "and txnid delta. Time since previous checkpoint is "
++ timeDelta + " expecting at least " + checkpointPeriod
++ " txnid delta since previous checkpoint is " +
+(txid - lastCheckpointTxid) + " expecting at least "
++ checkpointTxnCount;
+LOG.info(message);
+response.sendError(HttpServletResponse.SC_CONFLICT, message);
 return null;
   }
 
   try {
 if (nnImage.getStorage().findImageFile(nnf, txid) != null) {
-  response.sendError(HttpServletResponse.SC_CONFLICT,
-  "Either current namenode has checkpointed or "
-  + "another checkpointer already uploaded an "
-  + "checkpoint for txid " + txid);
+  String message = "Either current namenode has checkpointed 
or "
+  + "another checkpointer already uploaded an "
+  + "checkpoint for txid " + txid;
+  LOG.info(message);
+  response.sendError(HttpServletResponse.SC_CONFLICT, message);
   return null;
 }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
index 46f6694..7bd2f9c 100644
--- 

[hadoop] branch trunk updated: HDFS-15293. Relax the condition for accepting a fsimage when receiving a checkpoint. Contributed by Chen Liang

2020-05-18 Thread cliang
This is an automated email from the ASF dual-hosted git repository.

cliang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7bb902b  HDFS-15293. Relax the condition for accepting a fsimage when 
receiving a checkpoint. Contributed by Chen Liang
7bb902b is described below

commit 7bb902bc0d0c62d63a8960db444de3abb0a6ad22
Author: Chen Liang 
AuthorDate: Mon May 18 10:58:52 2020 -0700

HDFS-15293. Relax the condition for accepting a fsimage when receiving a 
checkpoint. Contributed by Chen Liang
---
 .../hadoop/hdfs/server/namenode/ImageServlet.java  | 39 
 .../hdfs/server/namenode/TestCheckpoint.java   | 53 +-
 2 files changed, 81 insertions(+), 11 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
index 91f24dd..a9c2a09 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
@@ -99,6 +99,19 @@ public class ImageServlet extends HttpServlet {
   "recent.image.check.enabled";
   public static final boolean RECENT_IMAGE_CHECK_ENABLED_DEFAULT = true;
 
+  /*
+   * Specify a relaxation for the time delta check, the relaxation is to 
account
+   * for the scenario that there are chances that minor time difference (e.g.
+   * due to image upload delay, or minor machine clock skew) can cause ANN to
+   * reject a fsImage too aggressively.
+   */
+  private static double recentImageCheckTimePrecision = 0.75;
+
+  @VisibleForTesting
+  static void setRecentImageCheckTimePrecision(double ratio) {
+recentImageCheckTimePrecision = ratio;
+  }
+
   @Override
   public void doGet(final HttpServletRequest request,
   final HttpServletResponse response) throws ServletException, IOException 
{
@@ -592,6 +605,9 @@ public class ImageServlet extends HttpServlet {
   long checkpointPeriod =
   conf.getTimeDuration(DFS_NAMENODE_CHECKPOINT_PERIOD_KEY,
   DFS_NAMENODE_CHECKPOINT_PERIOD_DEFAULT, 
TimeUnit.SECONDS);
+  checkpointPeriod = Math.round(
+  checkpointPeriod * recentImageCheckTimePrecision);
+
   long checkpointTxnCount =
   conf.getLong(DFS_NAMENODE_CHECKPOINT_TXNS_KEY,
   DFS_NAMENODE_CHECKPOINT_TXNS_DEFAULT);
@@ -612,21 +628,24 @@ public class ImageServlet extends HttpServlet {
 // a new fsImage
 // 1. most recent image's txid is too far behind
 // 2. last checkpoint time was too old
-response.sendError(HttpServletResponse.SC_CONFLICT,
-"Most recent checkpoint is neither too far behind in "
-+ "txid, nor too old. New txnid cnt is "
-+ (txid - lastCheckpointTxid)
-+ ", expecting at least " + checkpointTxnCount
-+ " unless too long since last upload.");
+String message = "Rejecting a fsimage due to small time delta "
++ "and txnid delta. Time since previous checkpoint is "
++ timeDelta + " expecting at least " + checkpointPeriod
++ " txnid delta since previous checkpoint is " +
+(txid - lastCheckpointTxid) + " expecting at least "
++ checkpointTxnCount;
+LOG.info(message);
+response.sendError(HttpServletResponse.SC_CONFLICT, message);
 return null;
   }
 
   try {
 if (nnImage.getStorage().findImageFile(nnf, txid) != null) {
-  response.sendError(HttpServletResponse.SC_CONFLICT,
-  "Either current namenode has checkpointed or "
-  + "another checkpointer already uploaded an "
-  + "checkpoint for txid " + txid);
+  String message = "Either current namenode has checkpointed 
or "
+  + "another checkpointer already uploaded an "
+  + "checkpoint for txid " + txid;
+  LOG.info(message);
+  response.sendError(HttpServletResponse.SC_CONFLICT, message);
   return null;
 }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
index 572ad8b..fabe8c5 100644
--- 

[hadoop] 01/02: HDFS-15356. Unify configuration `dfs.ha.allow.stale.reads` to DFSConfigKeys. Contributed by Xiaoqiao He.

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 3915d1afc7c2189b788f0e773949895d1ab0780b
Author: Ayush Saxena 
AuthorDate: Sat May 16 16:35:06 2020 +0530

HDFS-15356. Unify configuration `dfs.ha.allow.stale.reads` to 
DFSConfigKeys. Contributed by Xiaoqiao He.

(cherry picked from commit 178336f8a8bb291eb355bede729082f2f0382216)
---
 .../src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java  | 3 +++
 .../hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java | 7 +--
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml  | 9 +
 3 files changed, 17 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 2e2b7db..e4a710f 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -1168,6 +1168,9 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   "dfs.ha.nn.not-become-active-in-safemode";
   public static final boolean DFS_HA_NN_NOT_BECOME_ACTIVE_IN_SAFEMODE_DEFAULT =
   false;
+  public static final String DFS_HA_ALLOW_STALE_READ_KEY =
+  "dfs.ha.allow.stale.reads";
+  public static final boolean DFS_HA_ALLOW_STALE_READ_DEFAULT = false;
 
   // Security-related configs
   public static final String DFS_ENCRYPT_DATA_TRANSFER_KEY = 
"dfs.encrypt.data.transfer";
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
index aebc28a..43e76c7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hdfs;
 
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_HA_ALLOW_STALE_READ_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_HA_ALLOW_STALE_READ_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_HA_NAMENODE_ID_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_HTTPS_ADDRESS_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_HTTPS_BIND_HOST_KEY;
@@ -220,11 +222,12 @@ public class HAUtil {
* @return true if the NN should allow read operations while in standby mode.
*/
   public static boolean shouldAllowStandbyReads(Configuration conf) {
-return conf.getBoolean("dfs.ha.allow.stale.reads", false);
+return conf.getBoolean(DFS_HA_ALLOW_STALE_READ_KEY,
+DFS_HA_ALLOW_STALE_READ_DEFAULT);
   }
   
   public static void setAllowStandbyReads(Configuration conf, boolean val) {
-conf.setBoolean("dfs.ha.allow.stale.reads", val);
+conf.setBoolean(DFS_HA_ALLOW_STALE_READ_KEY, val);
   }
 
   /**
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 7c42e0d..9c31c18 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -4623,6 +4623,15 @@
 
 
 
+  dfs.ha.allow.stale.reads
+  false
+  
+If true, a NameNode in Standby state can process read requests and the 
result
+could be stale.
+  
+
+
+
   dfs.journalnode.edits.dir
   /tmp/hadoop/dfs/journalnode/
   

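The cleanup above replaces the literal "dfs.ha.allow.stale.reads" string with a DFSConfigKeys constant and documents the property in hdfs-default.xml. A hedged sketch of that key/default-constant pattern, using an illustrative (non-HDFS) property name:

```java
import org.apache.hadoop.conf.Configuration;

// Sketch of the key/default constant pattern used by DFSConfigKeys.
public final class ExampleConfigKeys {
  // Illustrative key, not a real HDFS configuration property.
  public static final String EXAMPLE_FEATURE_ENABLED_KEY =
      "example.feature.enabled";
  public static final boolean EXAMPLE_FEATURE_ENABLED_DEFAULT = false;

  private ExampleConfigKeys() {}

  public static boolean isFeatureEnabled(Configuration conf) {
    // Reading through the constants keeps the literal string in one place.
    return conf.getBoolean(EXAMPLE_FEATURE_ENABLED_KEY,
        EXAMPLE_FEATURE_ENABLED_DEFAULT);
  }
}
```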




[hadoop] 02/02: HDFS-13183. Standby NameNode process getBlocks request to reduce Active load. Contributed by Xiaoqiao He.

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit eb045ea056c6b70efd9f0ba514f37ec707dda209
Author: He Xiaoqiao 
AuthorDate: Mon May 18 07:08:32 2020 -0700

HDFS-13183. Standby NameNode process getBlocks request to reduce Active 
load. Contributed by Xiaoqiao He.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit a3f44dacc1fa19acc4eefd1e2505e54f8629e603)
---
 .../hadoop/hdfs/server/balancer/Balancer.java  | 39 ++
 .../hdfs/server/balancer/NameNodeConnector.java| 85 ++
 .../balancer/TestBalancerWithHANameNodes.java  | 56 +-
 3 files changed, 163 insertions(+), 17 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
index e8b4971..f643590 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
@@ -37,6 +37,7 @@ import java.util.Set;
 import java.util.concurrent.TimeUnit;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.hdfs.DFSUtilClient;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.apache.hadoop.HadoopIllegalArgumentException;
@@ -688,7 +689,7 @@ public class Balancer {
* execute a {@link Balancer} to work through all datanodes once.  
*/
   static private int doBalance(Collection namenodes,
-  final BalancerParameters p, Configuration conf)
+  Collection nsIds, final BalancerParameters p, Configuration conf)
   throws IOException, InterruptedException {
 final long sleeptime =
 conf.getTimeDuration(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY,
@@ -707,13 +708,12 @@ public class Balancer {
 System.out.println("Time Stamp   Iteration#  Bytes Already 
Moved  Bytes Left To Move  Bytes Being Moved");
 
 List connectors = Collections.emptyList();
-try {
-  connectors = NameNodeConnector.newNameNodeConnectors(namenodes, 
-  Balancer.class.getSimpleName(), BALANCER_ID_PATH, conf,
-  p.getMaxIdleIteration());
-
-  boolean done = false;
-  for(int iteration = 0; !done; iteration++) {
+boolean done = false;
+for(int iteration = 0; !done; iteration++) {
+  try {
+connectors = NameNodeConnector.newNameNodeConnectors(namenodes, nsIds,
+Balancer.class.getSimpleName(), BALANCER_ID_PATH, conf,
+p.getMaxIdleIteration());
 done = true;
 Collections.shuffle(connectors);
 for(NameNodeConnector nnc : connectors) {
@@ -741,19 +741,25 @@ public class Balancer {
 if (!done) {
   Thread.sleep(sleeptime);
 }
-  }
-} finally {
-  for(NameNodeConnector nnc : connectors) {
-IOUtils.cleanupWithLogger(LOG, nnc);
+  } finally {
+for(NameNodeConnector nnc : connectors) {
+  IOUtils.cleanupWithLogger(LOG, nnc);
+}
   }
 }
 return ExitStatus.SUCCESS.getExitCode();
   }
 
   static int run(Collection namenodes, final BalancerParameters p,
-  Configuration conf) throws IOException, InterruptedException {
+ Configuration conf) throws IOException, InterruptedException {
+return run(namenodes, null, p, conf);
+  }
+
+  static int run(Collection namenodes, Collection nsIds,
+  final BalancerParameters p, Configuration conf)
+  throws IOException, InterruptedException {
 if (!p.getRunAsService()) {
-  return doBalance(namenodes, p, conf);
+  return doBalance(namenodes, nsIds, p, conf);
 }
 if (!serviceRunning) {
   serviceRunning = true;
@@ -772,7 +778,7 @@ public class Balancer {
 
 while (serviceRunning) {
   try {
-int retCode = doBalance(namenodes, p, conf);
+int retCode = doBalance(namenodes, nsIds, p, conf);
 if (retCode < 0) {
   LOG.info("Balance failed, error code: " + retCode);
   failedTimesSinceLastSuccessfulBalance++;
@@ -856,7 +862,8 @@ public class Balancer {
 checkReplicationPolicyCompatibility(conf);
 
 final Collection namenodes = DFSUtil.getInternalNsRpcUris(conf);
-return Balancer.run(namenodes, parse(args), conf);
+final Collection nsIds = DFSUtilClient.getNameServiceIds(conf);
+return Balancer.run(namenodes, nsIds, parse(args), conf);
   } catch (IOException e) {
 System.out.println(e + ".  Exiting ...");
 return ExitStatus.IO_EXCEPTION.getExitCode();
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java
 

[hadoop] branch branch-3.3 updated (240cba7 -> eb045ea)

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a change to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 240cba7  HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)
 new 3915d1a  HDFS-15356. Unify configuration `dfs.ha.allow.stale.reads` to 
DFSConfigKeys. Contributed by Xiaoqiao He.
 new eb045ea  HDFS-13183. Standby NameNode process getBlocks request to 
reduce Active load. Contributed by Xiaoqiao He.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |  3 +
 .../main/java/org/apache/hadoop/hdfs/HAUtil.java   |  7 +-
 .../hadoop/hdfs/server/balancer/Balancer.java  | 39 ++
 .../hdfs/server/balancer/NameNodeConnector.java| 85 ++
 .../src/main/resources/hdfs-default.xml|  9 +++
 .../balancer/TestBalancerWithHANameNodes.java  | 56 +-
 6 files changed, 180 insertions(+), 19 deletions(-)





[hadoop] branch trunk updated: HDFS-14999. Avoid Potential Infinite Loop in DFSNetworkTopology. Contributed by Ayush Saxena.

2020-05-18 Thread ayushsaxena
This is an automated email from the ASF dual-hosted git repository.

ayushsaxena pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c84e6be  HDFS-14999. Avoid Potential Infinite Loop in 
DFSNetworkTopology. Contributed by Ayush Saxena.
c84e6be is described below

commit c84e6beada4e604175f7f138c9878a29665a8c47
Author: Ayush Saxena 
AuthorDate: Mon May 18 21:06:46 2020 +0530

HDFS-14999. Avoid Potential Infinite Loop in DFSNetworkTopology. 
Contributed by Ayush Saxena.
---
 .../apache/hadoop/hdfs/net/DFSNetworkTopology.java | 51 ++
 1 file changed, 32 insertions(+), 19 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java
index e4e4350..dbc5dea 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java
@@ -249,17 +249,10 @@ public class DFSNetworkTopology extends NetworkTopology {
   return null;
 }
 // to this point, it is guaranteed that there is at least one node
-// that satisfies the requirement, keep trying until we found one.
-Node chosen;
-do {
-  chosen = chooseRandomWithStorageTypeAndExcludeRoot(root, excludeRoot,
-  type);
-  if (excludedNodes == null || !excludedNodes.contains(chosen)) {
-break;
-  } else {
-LOG.debug("Node {} is excluded, continuing.", chosen);
-  }
-} while (true);
+// that satisfies the requirement.
+Node chosen =
+chooseRandomWithStorageTypeAndExcludeRoot(root, excludeRoot, type,
+excludedNodes);
 LOG.debug("chooseRandom returning {}", chosen);
 return chosen;
   }
@@ -268,23 +261,23 @@ public class DFSNetworkTopology extends NetworkTopology {
* Choose a random node that has the required storage type, under the given
* root, with an excluded subtree root (could also just be a leaf node).
*
-   * Note that excludedNode is checked after a random node, so it is not being
-   * handled here.
-   *
* @param root the root node where we start searching for a datanode
* @param excludeRoot the root of the subtree what should be excluded
* @param type the expected storage type
+   * @param excludedNodes the list of nodes to be excluded
* @return a random datanode, with the storage type, and is not in excluded
* scope
*/
   private Node chooseRandomWithStorageTypeAndExcludeRoot(
-  DFSTopologyNodeImpl root, Node excludeRoot, StorageType type) {
+  DFSTopologyNodeImpl root, Node excludeRoot, StorageType type,
+  Collection excludedNodes) {
 Node chosenNode;
 if (root.isRack()) {
   // children are datanode descriptor
   ArrayList candidates = new ArrayList<>();
   for (Node node : root.getChildren()) {
-if (node.equals(excludeRoot)) {
+if (node.equals(excludeRoot) || (excludedNodes != null && excludedNodes
+.contains(node))) {
   continue;
 }
 DatanodeDescriptor dnDescriptor = (DatanodeDescriptor)node;
@@ -301,7 +294,7 @@ public class DFSNetworkTopology extends NetworkTopology {
 } else {
   // the children are inner nodes
   ArrayList candidates =
-  getEligibleChildren(root, excludeRoot, type);
+  getEligibleChildren(root, excludeRoot, type, excludedNodes);
   if (candidates.size() == 0) {
 return null;
   }
@@ -330,7 +323,7 @@ public class DFSNetworkTopology extends NetworkTopology {
   }
   DFSTopologyNodeImpl nextRoot = candidates.get(idxChosen);
   chosenNode = chooseRandomWithStorageTypeAndExcludeRoot(
-  nextRoot, excludeRoot, type);
+  nextRoot, excludeRoot, type, excludedNodes);
 }
 return chosenNode;
   }
@@ -343,11 +336,13 @@ public class DFSNetworkTopology extends NetworkTopology {
* @param root the subtree root we check.
* @param excludeRoot the root of the subtree that should be excluded.
* @param type the storage type we look for.
+   * @param excludedNodes the list of excluded nodes.
* @return a list of possible nodes, each of them is eligible as the next
* level root we search.
*/
   private ArrayList getEligibleChildren(
-  DFSTopologyNodeImpl root, Node excludeRoot, StorageType type) {
+  DFSTopologyNodeImpl root, Node excludeRoot, StorageType type,
+  Collection excludedNodes) {
 ArrayList candidates = new ArrayList<>();
 int excludeCount = 0;
 if (excludeRoot != null && root.isAncestor(excludeRoot)) {
@@ -374,6 +369,24 @@ public class DFSNetworkTopology extends NetworkTopology {
   (dfsNode.isAncestor(excludeRoot) || dfsNode.equals(excludeRoot))) 
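(Diff truncated by the archive.) The removed do-while loop could retry forever when every node satisfying the storage type was in excludedNodes; the fix threads excludedNodes into the candidate selection so ineligible nodes are filtered out before the random pick, and the call returns null when nothing is left. A minimal sketch of that "filter first, then pick" approach, using plain collections and illustrative names:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Sketch: choose a random eligible node, excluding blocked nodes up front
// instead of retrying after the fact (which can loop forever when all
// candidates are excluded).
public class FilterThenPickSketch {
  private static final Random RANDOM = new Random();

  static String chooseRandom(Collection<String> candidates,
                             Set<String> excludedNodes) {
    List<String> eligible = new ArrayList<>();
    for (String node : candidates) {
      if (excludedNodes == null || !excludedNodes.contains(node)) {
        eligible.add(node);
      }
    }
    if (eligible.isEmpty()) {
      return null;                       // nothing to pick; no infinite retry
    }
    return eligible.get(RANDOM.nextInt(eligible.size()));
  }

  public static void main(String[] args) {
    System.out.println(chooseRandom(List.of("dn1", "dn2", "dn3"),
        Set.of("dn1", "dn2", "dn3")));   // prints null instead of hanging
  }
}
```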

[hadoop] 01/02: Revert "HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)"

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 289f8acc644ff8f30452bec35e2833af57994112
Author: Wei-Chiu Chuang 
AuthorDate: Mon May 18 09:41:03 2020 -0700

Revert "HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)"

This reverts commit ad9a6a0ee3d6cb3bde5e23c73151c0857d47ffd4.
---
 .../java/org/apache/hadoop/hdfs/ClientContext.java |  20 +--
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |   2 -
 .../hdfs/client/impl/BlockReaderFactory.java   |   9 +-
 .../hadoop/hdfs/client/impl/DfsClientConf.java |  21 +--
 .../src/main/resources/hdfs-default.xml|  10 --
 .../hadoop/fs/TestEnhancedByteBufferAccess.java|   6 +-
 .../hdfs/client/impl/TestBlockReaderFactory.java   |  14 +-
 .../hdfs/client/impl/TestBlockReaderLocal.java | 190 ++---
 .../hdfs/shortcircuit/TestShortCircuitCache.java   |  19 +--
 9 files changed, 76 insertions(+), 215 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
index 7a03240..cbd941b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
@@ -77,7 +77,7 @@ public class ClientContext {
   /**
* Caches short-circuit file descriptors, mmap regions.
*/
-  private final ShortCircuitCache[] shortCircuitCache;
+  private final ShortCircuitCache shortCircuitCache;
 
   /**
* Caches TCP and UNIX domain sockets for reuse.
@@ -132,23 +132,13 @@ public class ClientContext {
*/
   private DeadNodeDetector deadNodeDetector = null;
 
-  /**
-   * ShortCircuitCache array size.
-   */
-  private final int clientShortCircuitNum;
-
   private ClientContext(String name, DfsClientConf conf,
   Configuration config) {
 final ShortCircuitConf scConf = conf.getShortCircuitConf();
 
 this.name = name;
 this.confString = scConf.confAsString();
-this.clientShortCircuitNum = conf.getClientShortCircuitNum();
-this.shortCircuitCache = new ShortCircuitCache[this.clientShortCircuitNum];
-for (int i = 0; i < this.clientShortCircuitNum; i++) {
-  this.shortCircuitCache[i] = ShortCircuitCache.fromConf(scConf);
-}
-
+this.shortCircuitCache = ShortCircuitCache.fromConf(scConf);
 this.peerCache = new PeerCache(scConf.getSocketCacheCapacity(),
 scConf.getSocketCacheExpiry());
 this.keyProviderCache = new KeyProviderCache(
@@ -238,11 +228,7 @@ public class ClientContext {
   }
 
   public ShortCircuitCache getShortCircuitCache() {
-return shortCircuitCache[0];
-  }
-
-  public ShortCircuitCache getShortCircuitCache(long idx) {
-return shortCircuitCache[(int) (idx % clientShortCircuitNum)];
+return shortCircuitCache;
   }
 
   public PeerCache getPeerCache() {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index 0c35c8d..efc2766 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -144,8 +144,6 @@ public interface HdfsClientConfigKeys {
   "dfs.short.circuit.shared.memory.watcher.interrupt.check.ms";
   int DFS_SHORT_CIRCUIT_SHARED_MEMORY_WATCHER_INTERRUPT_CHECK_MS_DEFAULT =
   6;
-  String DFS_CLIENT_SHORT_CIRCUIT_NUM = "dfs.client.short.circuit.num";
-  int DFS_CLIENT_SHORT_CIRCUIT_NUM_DEFAULT = 1;
   String  DFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_KEY =
   "dfs.client.slow.io.warning.threshold.ms";
   longDFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_DEFAULT = 3;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
index 028d629..a3b611c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
@@ -476,8 +476,7 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
   "giving up on BlockReaderLocal.", this, pathInfo);
   return null;
 }
-ShortCircuitCache cache =
-clientContext.getShortCircuitCache(block.getBlockId());
+ShortCircuitCache cache = clientContext.getShortCircuitCache();
 ExtendedBlockId 

[hadoop] 02/02: HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 240cba7e6d8fa528a48ff04766b5b3c9b23a173e
Author: Wei-Chiu Chuang 
AuthorDate: Mon May 18 09:22:15 2020 -0700

HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)

(cherry picked from commit 2abcf7762ae74b936e1cedb60d5d2b4cc4ee86ea)
---
 .../java/org/apache/hadoop/hdfs/ClientContext.java |  20 ++-
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |   2 +
 .../hdfs/client/impl/BlockReaderFactory.java   |   9 +-
 .../hadoop/hdfs/client/impl/DfsClientConf.java |  21 ++-
 .../src/main/resources/hdfs-default.xml|  10 ++
 .../hadoop/fs/TestEnhancedByteBufferAccess.java|   6 +-
 .../hdfs/client/impl/TestBlockReaderFactory.java   |  14 +-
 .../hdfs/client/impl/TestBlockReaderLocal.java | 189 +++--
 .../hdfs/shortcircuit/TestShortCircuitCache.java   |  19 ++-
 9 files changed, 214 insertions(+), 76 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
index cbd941b..7a03240 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
@@ -77,7 +77,7 @@ public class ClientContext {
   /**
* Caches short-circuit file descriptors, mmap regions.
*/
-  private final ShortCircuitCache shortCircuitCache;
+  private final ShortCircuitCache[] shortCircuitCache;
 
   /**
* Caches TCP and UNIX domain sockets for reuse.
@@ -132,13 +132,23 @@ public class ClientContext {
*/
   private DeadNodeDetector deadNodeDetector = null;
 
+  /**
+   * ShortCircuitCache array size.
+   */
+  private final int clientShortCircuitNum;
+
   private ClientContext(String name, DfsClientConf conf,
   Configuration config) {
 final ShortCircuitConf scConf = conf.getShortCircuitConf();
 
 this.name = name;
 this.confString = scConf.confAsString();
-this.shortCircuitCache = ShortCircuitCache.fromConf(scConf);
+this.clientShortCircuitNum = conf.getClientShortCircuitNum();
+this.shortCircuitCache = new ShortCircuitCache[this.clientShortCircuitNum];
+for (int i = 0; i < this.clientShortCircuitNum; i++) {
+  this.shortCircuitCache[i] = ShortCircuitCache.fromConf(scConf);
+}
+
 this.peerCache = new PeerCache(scConf.getSocketCacheCapacity(),
 scConf.getSocketCacheExpiry());
 this.keyProviderCache = new KeyProviderCache(
@@ -228,7 +238,11 @@ public class ClientContext {
   }
 
   public ShortCircuitCache getShortCircuitCache() {
-return shortCircuitCache;
+return shortCircuitCache[0];
+  }
+
+  public ShortCircuitCache getShortCircuitCache(long idx) {
+return shortCircuitCache[(int) (idx % clientShortCircuitNum)];
   }
 
   public PeerCache getPeerCache() {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index efc2766..0c35c8d 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -144,6 +144,8 @@ public interface HdfsClientConfigKeys {
   "dfs.short.circuit.shared.memory.watcher.interrupt.check.ms";
   int DFS_SHORT_CIRCUIT_SHARED_MEMORY_WATCHER_INTERRUPT_CHECK_MS_DEFAULT =
   6;
+  String DFS_CLIENT_SHORT_CIRCUIT_NUM = "dfs.client.short.circuit.num";
+  int DFS_CLIENT_SHORT_CIRCUIT_NUM_DEFAULT = 1;
   String  DFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_KEY =
   "dfs.client.slow.io.warning.threshold.ms";
   longDFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_DEFAULT = 3;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
index a3b611c..028d629 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
@@ -476,7 +476,8 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
   "giving up on BlockReaderLocal.", this, pathInfo);
   return null;
 }
-ShortCircuitCache cache = clientContext.getShortCircuitCache();
+ShortCircuitCache cache =
+clientContext.getShortCircuitCache(block.getBlockId());
 ExtendedBlockId 

[hadoop] branch branch-3.3 updated (53d22fd -> 240cba7)

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a change to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 53d22fd  Revert "HDFS-13183. Standby NameNode process getBlocks 
request to reduce Active load. Contributed by Xiaoqiao He."
 new 289f8ac  Revert "HDFS-15202 Boost short circuit cache (rebase PR-1884) 
(#2016)"
 new 240cba7  HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java  | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)





[hadoop] branch branch-3.3 updated: Revert "HDFS-13183. Standby NameNode process getBlocks request to reduce Active load. Contributed by Xiaoqiao He."

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 53d22fd  Revert "HDFS-13183. Standby NameNode process getBlocks 
request to reduce Active load. Contributed by Xiaoqiao He."
53d22fd is described below

commit 53d22fdb88c60f43d8674348529d18917fcf6e39
Author: Wei-Chiu Chuang 
AuthorDate: Mon May 18 09:39:57 2020 -0700

Revert "HDFS-13183. Standby NameNode process getBlocks request to reduce 
Active load. Contributed by Xiaoqiao He."

This reverts commit acae31aa2824bf50fe9291148357b4f5c76a9329.
---
 .../hadoop/hdfs/server/balancer/Balancer.java  | 39 --
 .../hdfs/server/balancer/NameNodeConnector.java| 85 --
 .../balancer/TestBalancerWithHANameNodes.java  | 56 +-
 3 files changed, 17 insertions(+), 163 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
index f643590..e8b4971 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
@@ -37,7 +37,6 @@ import java.util.Set;
 import java.util.concurrent.TimeUnit;
 
 import com.google.common.annotations.VisibleForTesting;
-import org.apache.hadoop.hdfs.DFSUtilClient;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.apache.hadoop.HadoopIllegalArgumentException;
@@ -689,7 +688,7 @@ public class Balancer {
* execute a {@link Balancer} to work through all datanodes once.  
*/
   static private int doBalance(Collection namenodes,
-  Collection nsIds, final BalancerParameters p, Configuration conf)
+  final BalancerParameters p, Configuration conf)
   throws IOException, InterruptedException {
 final long sleeptime =
 conf.getTimeDuration(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY,
@@ -708,12 +707,13 @@ public class Balancer {
 System.out.println("Time Stamp   Iteration#  Bytes Already 
Moved  Bytes Left To Move  Bytes Being Moved");
 
 List connectors = Collections.emptyList();
-boolean done = false;
-for(int iteration = 0; !done; iteration++) {
-  try {
-connectors = NameNodeConnector.newNameNodeConnectors(namenodes, nsIds,
-Balancer.class.getSimpleName(), BALANCER_ID_PATH, conf,
-p.getMaxIdleIteration());
+try {
+  connectors = NameNodeConnector.newNameNodeConnectors(namenodes, 
+  Balancer.class.getSimpleName(), BALANCER_ID_PATH, conf,
+  p.getMaxIdleIteration());
+
+  boolean done = false;
+  for(int iteration = 0; !done; iteration++) {
 done = true;
 Collections.shuffle(connectors);
 for(NameNodeConnector nnc : connectors) {
@@ -741,25 +741,19 @@ public class Balancer {
 if (!done) {
   Thread.sleep(sleeptime);
 }
-  } finally {
-for(NameNodeConnector nnc : connectors) {
-  IOUtils.cleanupWithLogger(LOG, nnc);
-}
+  }
+} finally {
+  for(NameNodeConnector nnc : connectors) {
+IOUtils.cleanupWithLogger(LOG, nnc);
   }
 }
 return ExitStatus.SUCCESS.getExitCode();
   }
 
   static int run(Collection namenodes, final BalancerParameters p,
- Configuration conf) throws IOException, InterruptedException {
-return run(namenodes, null, p, conf);
-  }
-
-  static int run(Collection namenodes, Collection nsIds,
-  final BalancerParameters p, Configuration conf)
-  throws IOException, InterruptedException {
+  Configuration conf) throws IOException, InterruptedException {
 if (!p.getRunAsService()) {
-  return doBalance(namenodes, nsIds, p, conf);
+  return doBalance(namenodes, p, conf);
 }
 if (!serviceRunning) {
   serviceRunning = true;
@@ -778,7 +772,7 @@ public class Balancer {
 
 while (serviceRunning) {
   try {
-int retCode = doBalance(namenodes, nsIds, p, conf);
+int retCode = doBalance(namenodes, p, conf);
 if (retCode < 0) {
   LOG.info("Balance failed, error code: " + retCode);
   failedTimesSinceLastSuccessfulBalance++;
@@ -862,8 +856,7 @@ public class Balancer {
 checkReplicationPolicyCompatibility(conf);
 
 final Collection namenodes = DFSUtil.getInternalNsRpcUris(conf);
-final Collection nsIds = DFSUtilClient.getNameServiceIds(conf);
-return Balancer.run(namenodes, nsIds, parse(args), conf);
+return Balancer.run(namenodes, parse(args), conf);
   } catch (IOException e) {
 System.out.println(e + ".  Exiting ...");
 return 

[hadoop] branch trunk updated (50caba1 -> 2abcf77)

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 50caba1  HDFS-15207. VolumeScanner skip to scan blocks accessed during 
recent scan peroid. Contributed by Yang Yun.
 new 4525292  Revert "HDFS-15202 Boost short circuit cache (rebase PR-1884) 
(#2016)"
 new 2abcf77  HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java  | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)





[hadoop] 01/02: Revert "HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)"

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 4525292d41482330a86f1cc3935e072f9f67c308
Author: Wei-Chiu Chuang 
AuthorDate: Mon May 18 09:22:05 2020 -0700

Revert "HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)"

This reverts commit 86e6aa8eec538e142044e2b6415ec1caff5e9cbd.
---
 .../java/org/apache/hadoop/hdfs/ClientContext.java |  20 +--
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |   2 -
 .../hdfs/client/impl/BlockReaderFactory.java   |   9 +-
 .../hadoop/hdfs/client/impl/DfsClientConf.java |  21 +--
 .../src/main/resources/hdfs-default.xml|  10 --
 .../hadoop/fs/TestEnhancedByteBufferAccess.java|   6 +-
 .../hdfs/client/impl/TestBlockReaderFactory.java   |  14 +-
 .../hdfs/client/impl/TestBlockReaderLocal.java | 190 ++---
 .../hdfs/shortcircuit/TestShortCircuitCache.java   |  19 +--
 9 files changed, 76 insertions(+), 215 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
index 7a03240..cbd941b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
@@ -77,7 +77,7 @@ public class ClientContext {
   /**
* Caches short-circuit file descriptors, mmap regions.
*/
-  private final ShortCircuitCache[] shortCircuitCache;
+  private final ShortCircuitCache shortCircuitCache;
 
   /**
* Caches TCP and UNIX domain sockets for reuse.
@@ -132,23 +132,13 @@ public class ClientContext {
*/
   private DeadNodeDetector deadNodeDetector = null;
 
-  /**
-   * ShortCircuitCache array size.
-   */
-  private final int clientShortCircuitNum;
-
   private ClientContext(String name, DfsClientConf conf,
   Configuration config) {
 final ShortCircuitConf scConf = conf.getShortCircuitConf();
 
 this.name = name;
 this.confString = scConf.confAsString();
-this.clientShortCircuitNum = conf.getClientShortCircuitNum();
-this.shortCircuitCache = new ShortCircuitCache[this.clientShortCircuitNum];
-for (int i = 0; i < this.clientShortCircuitNum; i++) {
-  this.shortCircuitCache[i] = ShortCircuitCache.fromConf(scConf);
-}
-
+this.shortCircuitCache = ShortCircuitCache.fromConf(scConf);
 this.peerCache = new PeerCache(scConf.getSocketCacheCapacity(),
 scConf.getSocketCacheExpiry());
 this.keyProviderCache = new KeyProviderCache(
@@ -238,11 +228,7 @@ public class ClientContext {
   }
 
   public ShortCircuitCache getShortCircuitCache() {
-return shortCircuitCache[0];
-  }
-
-  public ShortCircuitCache getShortCircuitCache(long idx) {
-return shortCircuitCache[(int) (idx % clientShortCircuitNum)];
+return shortCircuitCache;
   }
 
   public PeerCache getPeerCache() {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index e8b5402..ab3f6f2 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -144,8 +144,6 @@ public interface HdfsClientConfigKeys {
   "dfs.short.circuit.shared.memory.watcher.interrupt.check.ms";
   int DFS_SHORT_CIRCUIT_SHARED_MEMORY_WATCHER_INTERRUPT_CHECK_MS_DEFAULT =
   6;
-  String DFS_CLIENT_SHORT_CIRCUIT_NUM = "dfs.client.short.circuit.num";
-  int DFS_CLIENT_SHORT_CIRCUIT_NUM_DEFAULT = 1;
   String  DFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_KEY =
   "dfs.client.slow.io.warning.threshold.ms";
   longDFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_DEFAULT = 3;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
index 028d629..a3b611c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
@@ -476,8 +476,7 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
   "giving up on BlockReaderLocal.", this, pathInfo);
   return null;
 }
-ShortCircuitCache cache =
-clientContext.getShortCircuitCache(block.getBlockId());
+ShortCircuitCache cache = clientContext.getShortCircuitCache();
 ExtendedBlockId key = 

[hadoop] 02/02: HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 2abcf7762ae74b936e1cedb60d5d2b4cc4ee86ea
Author: Wei-Chiu Chuang 
AuthorDate: Mon May 18 09:22:15 2020 -0700

HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)
---
 .../java/org/apache/hadoop/hdfs/ClientContext.java |  20 ++-
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |   2 +
 .../hdfs/client/impl/BlockReaderFactory.java   |   9 +-
 .../hadoop/hdfs/client/impl/DfsClientConf.java |  21 ++-
 .../src/main/resources/hdfs-default.xml|  10 ++
 .../hadoop/fs/TestEnhancedByteBufferAccess.java|   6 +-
 .../hdfs/client/impl/TestBlockReaderFactory.java   |  14 +-
 .../hdfs/client/impl/TestBlockReaderLocal.java | 189 +++--
 .../hdfs/shortcircuit/TestShortCircuitCache.java   |  19 ++-
 9 files changed, 214 insertions(+), 76 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
index cbd941b..7a03240 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
@@ -77,7 +77,7 @@ public class ClientContext {
   /**
* Caches short-circuit file descriptors, mmap regions.
*/
-  private final ShortCircuitCache shortCircuitCache;
+  private final ShortCircuitCache[] shortCircuitCache;
 
   /**
* Caches TCP and UNIX domain sockets for reuse.
@@ -132,13 +132,23 @@ public class ClientContext {
*/
   private DeadNodeDetector deadNodeDetector = null;
 
+  /**
+   * ShortCircuitCache array size.
+   */
+  private final int clientShortCircuitNum;
+
   private ClientContext(String name, DfsClientConf conf,
   Configuration config) {
 final ShortCircuitConf scConf = conf.getShortCircuitConf();
 
 this.name = name;
 this.confString = scConf.confAsString();
-this.shortCircuitCache = ShortCircuitCache.fromConf(scConf);
+this.clientShortCircuitNum = conf.getClientShortCircuitNum();
+this.shortCircuitCache = new ShortCircuitCache[this.clientShortCircuitNum];
+for (int i = 0; i < this.clientShortCircuitNum; i++) {
+  this.shortCircuitCache[i] = ShortCircuitCache.fromConf(scConf);
+}
+
 this.peerCache = new PeerCache(scConf.getSocketCacheCapacity(),
 scConf.getSocketCacheExpiry());
 this.keyProviderCache = new KeyProviderCache(
@@ -228,7 +238,11 @@ public class ClientContext {
   }
 
   public ShortCircuitCache getShortCircuitCache() {
-return shortCircuitCache;
+return shortCircuitCache[0];
+  }
+
+  public ShortCircuitCache getShortCircuitCache(long idx) {
+return shortCircuitCache[(int) (idx % clientShortCircuitNum)];
   }
 
   public PeerCache getPeerCache() {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index ab3f6f2..e8b5402 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -144,6 +144,8 @@ public interface HdfsClientConfigKeys {
   "dfs.short.circuit.shared.memory.watcher.interrupt.check.ms";
   int DFS_SHORT_CIRCUIT_SHARED_MEMORY_WATCHER_INTERRUPT_CHECK_MS_DEFAULT =
   6;
+  String DFS_CLIENT_SHORT_CIRCUIT_NUM = "dfs.client.short.circuit.num";
+  int DFS_CLIENT_SHORT_CIRCUIT_NUM_DEFAULT = 1;
   String  DFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_KEY =
   "dfs.client.slow.io.warning.threshold.ms";
   longDFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_DEFAULT = 3;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
index a3b611c..028d629 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
@@ -476,7 +476,8 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
   "giving up on BlockReaderLocal.", this, pathInfo);
   return null;
 }
-ShortCircuitCache cache = clientContext.getShortCircuitCache();
+ShortCircuitCache cache =
+clientContext.getShortCircuitCache(block.getBlockId());
 ExtendedBlockId key = new ExtendedBlockId(block.getBlockId(),
 block.getBlockPoolId());
 

[hadoop] branch branch-3.3 updated: HDFS-15207. VolumeScanner skips scanning blocks accessed during the recent scan period. Contributed by Yang Yun.

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 032ccba  HDFS-15207. VolumeScanner skips scanning blocks accessed during 
the recent scan period. Contributed by Yang Yun.
032ccba is described below

commit 032ccba67c44aa83bd80b0209b4da2204f2f4c5e
Author: Wei-Chiu Chuang 
AuthorDate: Mon May 18 08:40:38 2020 -0700

HDFS-15207. VolumeScanner skips scanning blocks accessed during the recent scan 
period. Contributed by Yang Yun.

(cherry picked from commit 50caba1a92cb36ce78307d47ed7624ce216562fc)
---
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |  4 +++
 .../hadoop/hdfs/server/datanode/BlockScanner.java  |  6 
 .../hadoop/hdfs/server/datanode/VolumeScanner.java | 22 +
 .../src/main/resources/hdfs-default.xml| 10 ++
 .../hdfs/server/datanode/TestBlockScanner.java | 38 ++
 5 files changed, 80 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 2c943b60..2e2b7db 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -846,6 +846,10 @@ public class DFSConfigKeys extends CommonConfigurationKeys 
{
   public static final int DFS_DATANODE_SCAN_PERIOD_HOURS_DEFAULT = 21 * 
24;  // 3 weeks.
   public static final String  DFS_BLOCK_SCANNER_VOLUME_BYTES_PER_SECOND = 
"dfs.block.scanner.volume.bytes.per.second";
   public static final long
DFS_BLOCK_SCANNER_VOLUME_BYTES_PER_SECOND_DEFAULT = 1048576L;
+  public static final String  DFS_BLOCK_SCANNER_SKIP_RECENT_ACCESSED =
+  "dfs.block.scanner.skip.recent.accessed";
+  public static final boolean DFS_BLOCK_SCANNER_SKIP_RECENT_ACCESSED_DEFAULT =
+  false;
   public static final String  DFS_DATANODE_TRANSFERTO_ALLOWED_KEY = 
"dfs.datanode.transferTo.allowed";
   public static final boolean DFS_DATANODE_TRANSFERTO_ALLOWED_DEFAULT = true;
   public static final String  DFS_HEARTBEAT_INTERVAL_KEY = 
"dfs.heartbeat.interval";
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
index 6b1b96f..82efcf8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.hdfs.server.datanode;
 
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SCANNER_SKIP_RECENT_ACCESSED;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SCANNER_SKIP_RECENT_ACCESSED_DEFAULT;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SCANNER_VOLUME_BYTES_PER_SECOND;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SCANNER_VOLUME_BYTES_PER_SECOND_DEFAULT;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_SCAN_PERIOD_HOURS_KEY;
@@ -112,6 +114,7 @@ public class BlockScanner {
 final long maxStalenessMs;
 final long scanPeriodMs;
 final long cursorSaveMs;
+final boolean skipRecentAccessed;
 final Class resultHandler;
 
 private static long getUnitTestLong(Configuration conf, String key,
@@ -163,6 +166,9 @@ public class BlockScanner {
   this.cursorSaveMs = Math.max(0L, getUnitTestLong(conf,
   INTERNAL_DFS_BLOCK_SCANNER_CURSOR_SAVE_INTERVAL_MS,
   INTERNAL_DFS_BLOCK_SCANNER_CURSOR_SAVE_INTERVAL_MS_DEFAULT));
+  this.skipRecentAccessed = conf.getBoolean(
+  DFS_BLOCK_SCANNER_SKIP_RECENT_ACCESSED,
+  DFS_BLOCK_SCANNER_SKIP_RECENT_ACCESSED_DEFAULT);
   if (allowUnitTestSettings) {
 this.resultHandler = (Class)
 conf.getClass(INTERNAL_VOLUME_SCANNER_SCAN_RESULT_HANDLER,
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
index 84cfb04..5f1a1e0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
@@ -19,8 +19,11 @@
 package org.apache.hadoop.hdfs.server.datanode;
 
 import java.io.DataOutputStream;
+import java.io.File;
 import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.nio.file.Files;
+import 

[hadoop] branch trunk updated: HDFS-15207. VolumeScanner skips scanning blocks accessed during the recent scan period. Contributed by Yang Yun.

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 50caba1  HDFS-15207. VolumeScanner skips scanning blocks accessed during 
the recent scan period. Contributed by Yang Yun.
50caba1 is described below

commit 50caba1a92cb36ce78307d47ed7624ce216562fc
Author: Wei-Chiu Chuang 
AuthorDate: Mon May 18 08:40:38 2020 -0700

HDFS-15207. VolumeScanner skips scanning blocks accessed during the recent scan 
period. Contributed by Yang Yun.
---
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |  4 +++
 .../hadoop/hdfs/server/datanode/BlockScanner.java  |  6 
 .../hadoop/hdfs/server/datanode/VolumeScanner.java | 22 +
 .../src/main/resources/hdfs-default.xml| 10 ++
 .../hdfs/server/datanode/TestBlockScanner.java | 38 ++
 5 files changed, 80 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index f4bf33a..4b8c27b 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -846,6 +846,10 @@ public class DFSConfigKeys extends CommonConfigurationKeys 
{
   public static final int DFS_DATANODE_SCAN_PERIOD_HOURS_DEFAULT = 21 * 
24;  // 3 weeks.
   public static final String  DFS_BLOCK_SCANNER_VOLUME_BYTES_PER_SECOND = 
"dfs.block.scanner.volume.bytes.per.second";
   public static final long
DFS_BLOCK_SCANNER_VOLUME_BYTES_PER_SECOND_DEFAULT = 1048576L;
+  public static final String  DFS_BLOCK_SCANNER_SKIP_RECENT_ACCESSED =
+  "dfs.block.scanner.skip.recent.accessed";
+  public static final boolean DFS_BLOCK_SCANNER_SKIP_RECENT_ACCESSED_DEFAULT =
+  false;
   public static final String  DFS_DATANODE_TRANSFERTO_ALLOWED_KEY = 
"dfs.datanode.transferTo.allowed";
   public static final boolean DFS_DATANODE_TRANSFERTO_ALLOWED_DEFAULT = true;
   public static final String  DFS_HEARTBEAT_INTERVAL_KEY = 
"dfs.heartbeat.interval";
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
index 6b1b96f..82efcf8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.hdfs.server.datanode;
 
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SCANNER_SKIP_RECENT_ACCESSED;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SCANNER_SKIP_RECENT_ACCESSED_DEFAULT;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SCANNER_VOLUME_BYTES_PER_SECOND;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SCANNER_VOLUME_BYTES_PER_SECOND_DEFAULT;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_SCAN_PERIOD_HOURS_KEY;
@@ -112,6 +114,7 @@ public class BlockScanner {
 final long maxStalenessMs;
 final long scanPeriodMs;
 final long cursorSaveMs;
+final boolean skipRecentAccessed;
 final Class resultHandler;
 
 private static long getUnitTestLong(Configuration conf, String key,
@@ -163,6 +166,9 @@ public class BlockScanner {
   this.cursorSaveMs = Math.max(0L, getUnitTestLong(conf,
   INTERNAL_DFS_BLOCK_SCANNER_CURSOR_SAVE_INTERVAL_MS,
   INTERNAL_DFS_BLOCK_SCANNER_CURSOR_SAVE_INTERVAL_MS_DEFAULT));
+  this.skipRecentAccessed = conf.getBoolean(
+  DFS_BLOCK_SCANNER_SKIP_RECENT_ACCESSED,
+  DFS_BLOCK_SCANNER_SKIP_RECENT_ACCESSED_DEFAULT);
   if (allowUnitTestSettings) {
 this.resultHandler = (Class)
 conf.getClass(INTERNAL_VOLUME_SCANNER_SCAN_RESULT_HANDLER,
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
index 84cfb04..5f1a1e0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
@@ -19,8 +19,11 @@
 package org.apache.hadoop.hdfs.server.datanode;
 
 import java.io.DataOutputStream;
+import java.io.File;
 import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.attribute.BasicFileAttributes;
 import java.util.ArrayList;
 import java.util.Iterator;
 import 
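
The hunks above stop at the new imports, so here is a hedged sketch of the kind of check they enable (an illustrative helper, not the actual VolumeScanner code; it assumes the block file's last-access time is available through java.nio.file):

    import java.io.File;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.attribute.BasicFileAttributes;

    final class RecentAccessCheckSketch {
      /**
       * True when skipping is enabled and the block file was accessed within
       * the last scanPeriodMs, meaning the scanner may skip re-reading it.
       */
      static boolean skipBlock(File blockFile, long scanPeriodMs,
          boolean skipRecentAccessed) throws IOException {
        if (!skipRecentAccessed) {
          return false;   // dfs.block.scanner.skip.recent.accessed is false by default
        }
        BasicFileAttributes attrs =
            Files.readAttributes(blockFile.toPath(), BasicFileAttributes.class);
        long lastAccessMs = attrs.lastAccessTime().toMillis();
        return System.currentTimeMillis() - lastAccessMs < scanPeriodMs;
      }
    }

Whether such a check helps in practice depends on the underlying filesystem actually recording access times (it is defeated by noatime mounts, for example).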

[hadoop] branch branch-3.3 updated: HDFS-13183. Standby NameNode process getBlocks request to reduce Active load. Contributed by Xiaoqiao He.

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new acae31a  HDFS-13183. Standby NameNode process getBlocks request to 
reduce Active load. Contributed by Xiaoqiao He.
acae31a is described below

commit acae31aa2824bf50fe9291148357b4f5c76a9329
Author: He Xiaoqiao 
AuthorDate: Mon May 18 07:08:32 2020 -0700

HDFS-13183. Standby NameNode process getBlocks request to reduce Active 
load. Contributed by Xiaoqiao He.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit a3f44dacc1fa19acc4eefd1e2505e54f8629e603)
---
 .../hadoop/hdfs/server/balancer/Balancer.java  | 39 ++
 .../hdfs/server/balancer/NameNodeConnector.java| 85 ++
 .../balancer/TestBalancerWithHANameNodes.java  | 56 +-
 3 files changed, 163 insertions(+), 17 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
index e8b4971..f643590 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
@@ -37,6 +37,7 @@ import java.util.Set;
 import java.util.concurrent.TimeUnit;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.hdfs.DFSUtilClient;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.apache.hadoop.HadoopIllegalArgumentException;
@@ -688,7 +689,7 @@ public class Balancer {
* execute a {@link Balancer} to work through all datanodes once.  
*/
   static private int doBalance(Collection namenodes,
-  final BalancerParameters p, Configuration conf)
+  Collection nsIds, final BalancerParameters p, Configuration conf)
   throws IOException, InterruptedException {
 final long sleeptime =
 conf.getTimeDuration(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY,
@@ -707,13 +708,12 @@ public class Balancer {
 System.out.println("Time Stamp   Iteration#  Bytes Already 
Moved  Bytes Left To Move  Bytes Being Moved");
 
 List connectors = Collections.emptyList();
-try {
-  connectors = NameNodeConnector.newNameNodeConnectors(namenodes, 
-  Balancer.class.getSimpleName(), BALANCER_ID_PATH, conf,
-  p.getMaxIdleIteration());
-
-  boolean done = false;
-  for(int iteration = 0; !done; iteration++) {
+boolean done = false;
+for(int iteration = 0; !done; iteration++) {
+  try {
+connectors = NameNodeConnector.newNameNodeConnectors(namenodes, nsIds,
+Balancer.class.getSimpleName(), BALANCER_ID_PATH, conf,
+p.getMaxIdleIteration());
 done = true;
 Collections.shuffle(connectors);
 for(NameNodeConnector nnc : connectors) {
@@ -741,19 +741,25 @@ public class Balancer {
 if (!done) {
   Thread.sleep(sleeptime);
 }
-  }
-} finally {
-  for(NameNodeConnector nnc : connectors) {
-IOUtils.cleanupWithLogger(LOG, nnc);
+  } finally {
+for(NameNodeConnector nnc : connectors) {
+  IOUtils.cleanupWithLogger(LOG, nnc);
+}
   }
 }
 return ExitStatus.SUCCESS.getExitCode();
   }
 
   static int run(Collection namenodes, final BalancerParameters p,
-  Configuration conf) throws IOException, InterruptedException {
+ Configuration conf) throws IOException, InterruptedException {
+return run(namenodes, null, p, conf);
+  }
+
+  static int run(Collection namenodes, Collection nsIds,
+  final BalancerParameters p, Configuration conf)
+  throws IOException, InterruptedException {
 if (!p.getRunAsService()) {
-  return doBalance(namenodes, p, conf);
+  return doBalance(namenodes, nsIds, p, conf);
 }
 if (!serviceRunning) {
   serviceRunning = true;
@@ -772,7 +778,7 @@ public class Balancer {
 
 while (serviceRunning) {
   try {
-int retCode = doBalance(namenodes, p, conf);
+int retCode = doBalance(namenodes, nsIds, p, conf);
 if (retCode < 0) {
   LOG.info("Balance failed, error code: " + retCode);
   failedTimesSinceLastSuccessfulBalance++;
@@ -856,7 +862,8 @@ public class Balancer {
 checkReplicationPolicyCompatibility(conf);
 
 final Collection namenodes = DFSUtil.getInternalNsRpcUris(conf);
-return Balancer.run(namenodes, parse(args), conf);
+final Collection nsIds = DFSUtilClient.getNameServiceIds(conf);
+return Balancer.run(namenodes, nsIds, parse(args), conf);
   } catch (IOException e) {
 System.out.println(e + ".  Exiting ...");
  

[hadoop] branch trunk updated: HDFS-13183. Standby NameNode process getBlocks request to reduce Active load. Contributed by Xiaoqiao He.

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a3f44da  HDFS-13183. Standby NameNode process getBlocks request to 
reduce Active load. Contributed by Xiaoqiao He.
a3f44da is described below

commit a3f44dacc1fa19acc4eefd1e2505e54f8629e603
Author: He Xiaoqiao 
AuthorDate: Mon May 18 07:08:32 2020 -0700

HDFS-13183. Standby NameNode process getBlocks request to reduce Active 
load. Contributed by Xiaoqiao He.

Signed-off-by: Wei-Chiu Chuang 
---
 .../hadoop/hdfs/server/balancer/Balancer.java  | 39 ++
 .../hdfs/server/balancer/NameNodeConnector.java| 85 ++
 .../balancer/TestBalancerWithHANameNodes.java  | 56 +-
 3 files changed, 163 insertions(+), 17 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
index e8b4971..f643590 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
@@ -37,6 +37,7 @@ import java.util.Set;
 import java.util.concurrent.TimeUnit;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.hdfs.DFSUtilClient;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.apache.hadoop.HadoopIllegalArgumentException;
@@ -688,7 +689,7 @@ public class Balancer {
* execute a {@link Balancer} to work through all datanodes once.  
*/
   static private int doBalance(Collection namenodes,
-  final BalancerParameters p, Configuration conf)
+  Collection nsIds, final BalancerParameters p, Configuration conf)
   throws IOException, InterruptedException {
 final long sleeptime =
 conf.getTimeDuration(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY,
@@ -707,13 +708,12 @@ public class Balancer {
 System.out.println("Time Stamp   Iteration#  Bytes Already 
Moved  Bytes Left To Move  Bytes Being Moved");
 
 List connectors = Collections.emptyList();
-try {
-  connectors = NameNodeConnector.newNameNodeConnectors(namenodes, 
-  Balancer.class.getSimpleName(), BALANCER_ID_PATH, conf,
-  p.getMaxIdleIteration());
-
-  boolean done = false;
-  for(int iteration = 0; !done; iteration++) {
+boolean done = false;
+for(int iteration = 0; !done; iteration++) {
+  try {
+connectors = NameNodeConnector.newNameNodeConnectors(namenodes, nsIds,
+Balancer.class.getSimpleName(), BALANCER_ID_PATH, conf,
+p.getMaxIdleIteration());
 done = true;
 Collections.shuffle(connectors);
 for(NameNodeConnector nnc : connectors) {
@@ -741,19 +741,25 @@ public class Balancer {
 if (!done) {
   Thread.sleep(sleeptime);
 }
-  }
-} finally {
-  for(NameNodeConnector nnc : connectors) {
-IOUtils.cleanupWithLogger(LOG, nnc);
+  } finally {
+for(NameNodeConnector nnc : connectors) {
+  IOUtils.cleanupWithLogger(LOG, nnc);
+}
   }
 }
 return ExitStatus.SUCCESS.getExitCode();
   }
 
   static int run(Collection namenodes, final BalancerParameters p,
-  Configuration conf) throws IOException, InterruptedException {
+ Configuration conf) throws IOException, InterruptedException {
+return run(namenodes, null, p, conf);
+  }
+
+  static int run(Collection namenodes, Collection nsIds,
+  final BalancerParameters p, Configuration conf)
+  throws IOException, InterruptedException {
 if (!p.getRunAsService()) {
-  return doBalance(namenodes, p, conf);
+  return doBalance(namenodes, nsIds, p, conf);
 }
 if (!serviceRunning) {
   serviceRunning = true;
@@ -772,7 +778,7 @@ public class Balancer {
 
 while (serviceRunning) {
   try {
-int retCode = doBalance(namenodes, p, conf);
+int retCode = doBalance(namenodes, nsIds, p, conf);
 if (retCode < 0) {
   LOG.info("Balance failed, error code: " + retCode);
   failedTimesSinceLastSuccessfulBalance++;
@@ -856,7 +862,8 @@ public class Balancer {
 checkReplicationPolicyCompatibility(conf);
 
 final Collection namenodes = DFSUtil.getInternalNsRpcUris(conf);
-return Balancer.run(namenodes, parse(args), conf);
+final Collection nsIds = DFSUtilClient.getNameServiceIds(conf);
+return Balancer.run(namenodes, nsIds, parse(args), conf);
   } catch (IOException e) {
 System.out.println(e + ".  Exiting ...");
 return ExitStatus.IO_EXCEPTION.getExitCode();
diff --git 

[hadoop] branch branch-3.3 updated: HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new ad9a6a0  HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)
ad9a6a0 is described below

commit ad9a6a0ee3d6cb3bde5e23c73151c0857d47ffd4
Author: pustota2009 <61382543+pustota2...@users.noreply.github.com>
AuthorDate: Mon May 18 17:04:04 2020 +0300

HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)

Added the parameter dfs.client.short.circuit.num, which improves the HDFS client's 
performance under heavy read workloads by creating several ShortCircuitCache 
instances instead of one. This reduces lock contention and lets the CPU do useful work.

(cherry picked from commit 86e6aa8eec538e142044e2b6415ec1caff5e9cbd)
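
As a hedged illustration of that idea (the class below is a stand-in, not the shipped ClientContext; only the modulo selection mirrors the diff that follows):

    import java.util.function.Supplier;

    final class CacheBankSketch<C> {
      private final Object[] caches;

      CacheBankSketch(int clientShortCircuitNum, Supplier<C> factory) {
        caches = new Object[clientShortCircuitNum];
        for (int i = 0; i < clientShortCircuitNum; i++) {
          caches[i] = factory.get();          // one independent cache per slot
        }
      }

      /** Same index rule as the patch's getShortCircuitCache(long idx). */
      @SuppressWarnings("unchecked")
      C cacheFor(long blockId) {
        // Block IDs are treated as non-negative, as in the patch.
        return (C) caches[(int) (blockId % caches.length)];
      }
    }

A caller picks a cache per block, cacheFor(block.getBlockId()), instead of sharing a single cache, which is what the BlockReaderFactory hunk below does via clientContext.getShortCircuitCache(block.getBlockId()).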
---
 .../java/org/apache/hadoop/hdfs/ClientContext.java |  20 ++-
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |   2 +
 .../hdfs/client/impl/BlockReaderFactory.java   |   9 +-
 .../hadoop/hdfs/client/impl/DfsClientConf.java |  21 ++-
 .../src/main/resources/hdfs-default.xml|  10 ++
 .../hadoop/fs/TestEnhancedByteBufferAccess.java|   6 +-
 .../hdfs/client/impl/TestBlockReaderFactory.java   |  14 +-
 .../hdfs/client/impl/TestBlockReaderLocal.java | 190 +++--
 .../hdfs/shortcircuit/TestShortCircuitCache.java   |  19 ++-
 9 files changed, 215 insertions(+), 76 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
index cbd941b..7a03240 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
@@ -77,7 +77,7 @@ public class ClientContext {
   /**
* Caches short-circuit file descriptors, mmap regions.
*/
-  private final ShortCircuitCache shortCircuitCache;
+  private final ShortCircuitCache[] shortCircuitCache;
 
   /**
* Caches TCP and UNIX domain sockets for reuse.
@@ -132,13 +132,23 @@ public class ClientContext {
*/
   private DeadNodeDetector deadNodeDetector = null;
 
+  /**
+   * ShortCircuitCache array size.
+   */
+  private final int clientShortCircuitNum;
+
   private ClientContext(String name, DfsClientConf conf,
   Configuration config) {
 final ShortCircuitConf scConf = conf.getShortCircuitConf();
 
 this.name = name;
 this.confString = scConf.confAsString();
-this.shortCircuitCache = ShortCircuitCache.fromConf(scConf);
+this.clientShortCircuitNum = conf.getClientShortCircuitNum();
+this.shortCircuitCache = new ShortCircuitCache[this.clientShortCircuitNum];
+for (int i = 0; i < this.clientShortCircuitNum; i++) {
+  this.shortCircuitCache[i] = ShortCircuitCache.fromConf(scConf);
+}
+
 this.peerCache = new PeerCache(scConf.getSocketCacheCapacity(),
 scConf.getSocketCacheExpiry());
 this.keyProviderCache = new KeyProviderCache(
@@ -228,7 +238,11 @@ public class ClientContext {
   }
 
   public ShortCircuitCache getShortCircuitCache() {
-return shortCircuitCache;
+return shortCircuitCache[0];
+  }
+
+  public ShortCircuitCache getShortCircuitCache(long idx) {
+return shortCircuitCache[(int) (idx % clientShortCircuitNum)];
   }
 
   public PeerCache getPeerCache() {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index efc2766..0c35c8d 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -144,6 +144,8 @@ public interface HdfsClientConfigKeys {
   "dfs.short.circuit.shared.memory.watcher.interrupt.check.ms";
   int DFS_SHORT_CIRCUIT_SHARED_MEMORY_WATCHER_INTERRUPT_CHECK_MS_DEFAULT =
   6;
+  String DFS_CLIENT_SHORT_CIRCUIT_NUM = "dfs.client.short.circuit.num";
+  int DFS_CLIENT_SHORT_CIRCUIT_NUM_DEFAULT = 1;
   String  DFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_KEY =
   "dfs.client.slow.io.warning.threshold.ms";
   longDFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_DEFAULT = 3;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
index a3b611c..028d629 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
+++ 

[hadoop] branch trunk updated: HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)

2020-05-18 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 86e6aa8  HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)
86e6aa8 is described below

commit 86e6aa8eec538e142044e2b6415ec1caff5e9cbd
Author: pustota2009 <61382543+pustota2...@users.noreply.github.com>
AuthorDate: Mon May 18 17:04:04 2020 +0300

HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)

Added the parameter dfs.client.short.circuit.num, which improves the HDFS client's 
performance under heavy read workloads by creating several ShortCircuitCache 
instances instead of one. This reduces lock contention and lets the CPU do useful work.
---
 .../java/org/apache/hadoop/hdfs/ClientContext.java |  20 ++-
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |   2 +
 .../hdfs/client/impl/BlockReaderFactory.java   |   9 +-
 .../hadoop/hdfs/client/impl/DfsClientConf.java |  21 ++-
 .../src/main/resources/hdfs-default.xml|  10 ++
 .../hadoop/fs/TestEnhancedByteBufferAccess.java|   6 +-
 .../hdfs/client/impl/TestBlockReaderFactory.java   |  14 +-
 .../hdfs/client/impl/TestBlockReaderLocal.java | 190 +++--
 .../hdfs/shortcircuit/TestShortCircuitCache.java   |  19 ++-
 9 files changed, 215 insertions(+), 76 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
index cbd941b..7a03240 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
@@ -77,7 +77,7 @@ public class ClientContext {
   /**
* Caches short-circuit file descriptors, mmap regions.
*/
-  private final ShortCircuitCache shortCircuitCache;
+  private final ShortCircuitCache[] shortCircuitCache;
 
   /**
* Caches TCP and UNIX domain sockets for reuse.
@@ -132,13 +132,23 @@ public class ClientContext {
*/
   private DeadNodeDetector deadNodeDetector = null;
 
+  /**
+   * ShortCircuitCache array size.
+   */
+  private final int clientShortCircuitNum;
+
   private ClientContext(String name, DfsClientConf conf,
   Configuration config) {
 final ShortCircuitConf scConf = conf.getShortCircuitConf();
 
 this.name = name;
 this.confString = scConf.confAsString();
-this.shortCircuitCache = ShortCircuitCache.fromConf(scConf);
+this.clientShortCircuitNum = conf.getClientShortCircuitNum();
+this.shortCircuitCache = new ShortCircuitCache[this.clientShortCircuitNum];
+for (int i = 0; i < this.clientShortCircuitNum; i++) {
+  this.shortCircuitCache[i] = ShortCircuitCache.fromConf(scConf);
+}
+
 this.peerCache = new PeerCache(scConf.getSocketCacheCapacity(),
 scConf.getSocketCacheExpiry());
 this.keyProviderCache = new KeyProviderCache(
@@ -228,7 +238,11 @@ public class ClientContext {
   }
 
   public ShortCircuitCache getShortCircuitCache() {
-return shortCircuitCache;
+return shortCircuitCache[0];
+  }
+
+  public ShortCircuitCache getShortCircuitCache(long idx) {
+return shortCircuitCache[(int) (idx % clientShortCircuitNum)];
   }
 
   public PeerCache getPeerCache() {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index ab3f6f2..e8b5402 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -144,6 +144,8 @@ public interface HdfsClientConfigKeys {
   "dfs.short.circuit.shared.memory.watcher.interrupt.check.ms";
   int DFS_SHORT_CIRCUIT_SHARED_MEMORY_WATCHER_INTERRUPT_CHECK_MS_DEFAULT =
   6;
+  String DFS_CLIENT_SHORT_CIRCUIT_NUM = "dfs.client.short.circuit.num";
+  int DFS_CLIENT_SHORT_CIRCUIT_NUM_DEFAULT = 1;
   String  DFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_KEY =
   "dfs.client.slow.io.warning.threshold.ms";
   longDFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_DEFAULT = 3;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
index a3b611c..028d629 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
@@ -476,7 +476,8 @@ public 

[hadoop] branch trunk updated: Revert "YARN-9606. Set sslfactory for AuthenticatedURL() while creating LogsCLI#webServiceClient."

2020-05-18 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b65815d  Revert "YARN-9606. Set sslfactory for AuthenticatedURL() 
while creating LogsCLI#webServiceClient."
b65815d is described below

commit b65815d6914996fed25bd9fef4d37d00828bc664
Author: Akira Ajisaka 
AuthorDate: Mon May 18 16:29:07 2020 +0900

Revert "YARN-9606. Set sslfactory for AuthenticatedURL() while creating 
LogsCLI#webServiceClient."

This reverts commit 7836bc4c3533e93e7adc0c7da0659bc04bdf2494.
---
 .../hadoop-yarn/hadoop-yarn-client/pom.xml |   1 +
 .../org/apache/hadoop/yarn/client/cli/LogsCLI.java |  43 +--
 .../apache/hadoop/yarn/client/cli/TestLogsCLI.java | 132 +++--
 3 files changed, 100 insertions(+), 76 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
index 022d4f4..15e0eaf 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
@@ -119,6 +119,7 @@
 
   <groupId>org.apache.hadoop</groupId>
   <artifactId>hadoop-yarn-server-common</artifactId>
+  <scope>test</scope>
 
 
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
index 6d57fb8..343dfc7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
@@ -27,13 +27,17 @@ import com.sun.jersey.api.client.UniformInterfaceException;
 import com.sun.jersey.api.client.WebResource;
 import com.sun.jersey.api.client.WebResource.Builder;
 import com.sun.jersey.api.client.filter.ClientFilter;
+import com.sun.jersey.client.urlconnection.HttpURLConnectionFactory;
+import com.sun.jersey.client.urlconnection.URLConnectionClientHandler;
 import java.io.File;
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.PrintStream;
 import java.net.ConnectException;
+import java.net.HttpURLConnection;
 import java.net.SocketException;
 import java.net.SocketTimeoutException;
+import java.net.URL;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
@@ -64,6 +68,8 @@ import 
org.apache.hadoop.classification.InterfaceStability.Evolving;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
+import 
org.apache.hadoop.security.authentication.client.AuthenticationException;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptReport;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
@@ -81,7 +87,6 @@ import 
org.apache.hadoop.yarn.logaggregation.ContainerLogsRequest;
 import org.apache.hadoop.yarn.logaggregation.LogCLIHelpers;
 import org.apache.hadoop.yarn.logaggregation.LogToolUtils;
 import org.apache.hadoop.yarn.server.metrics.AppAttemptMetricsConstants;
-import org.apache.hadoop.yarn.server.webapp.WebServiceClient;
 import org.apache.hadoop.yarn.util.Apps;
 import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
 import org.apache.hadoop.yarn.webapp.util.YarnWebServiceUtils;
@@ -117,7 +122,7 @@ public class LogsCLI extends Configured implements Tool {
 
   private PrintStream outStream = System.out;
   private YarnClient yarnClient = null;
-  private Client client = null;
+  private Client webServiceClient = null;
 
   private static final int DEFAULT_MAX_RETRIES = 30;
   private static final long DEFAULT_RETRY_INTERVAL = 1000;
@@ -134,14 +139,28 @@ public class LogsCLI extends Configured implements Tool {
   @Override
   public int run(String[] args) throws Exception {
 try {
-  client = WebServiceClient.getWebServiceClient().createClient();
+  webServiceClient = new Client(new URLConnectionClientHandler(
+  new HttpURLConnectionFactory() {
+  @Override
+  public HttpURLConnection getHttpURLConnection(URL url)
+  throws IOException {
+AuthenticatedURL.Token token = new AuthenticatedURL.Token();
+HttpURLConnection conn = null;
+try {
+  conn = new AuthenticatedURL().openConnection(url, token);
+} catch (AuthenticationException e) {
+  throw new IOException(e);
+}
+return conn;
+  }
+}));
   return runCommand(args);
 } finally {
   if (yarnClient != null) {
 yarnClient.close();
   }
-  if (client != 
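
For readers skimming the revert, the core of it is that LogsCLI#run again builds its own Jersey 1.x Client around AuthenticatedURL instead of obtaining one from WebServiceClient. Pulled out of the diff above into a minimal, self-contained sketch — the wrapper class and method names are hypothetical, only the Jersey and hadoop-auth types are the real ones:

import com.sun.jersey.api.client.Client;
import com.sun.jersey.client.urlconnection.HttpURLConnectionFactory;
import com.sun.jersey.client.urlconnection.URLConnectionClientHandler;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
import org.apache.hadoop.security.authentication.client.AuthenticationException;

public final class AuthenticatedJerseyClientSketch {
  // Builds a Jersey 1.x Client whose HTTP connections are opened through
  // AuthenticatedURL, so Hadoop auth (e.g. SPNEGO) is negotiated per request.
  public static Client create() {
    return new Client(new URLConnectionClientHandler(
        new HttpURLConnectionFactory() {
          @Override
          public HttpURLConnection getHttpURLConnection(URL url)
              throws IOException {
            AuthenticatedURL.Token token = new AuthenticatedURL.Token();
            try {
              return new AuthenticatedURL().openConnection(url, token);
            } catch (AuthenticationException e) {
              // Surface auth failures as IOException, matching the reverted code.
              throw new IOException(e);
            }
          }
        }));
  }
}

As the subject line indicates, the SSLFactory wiring that YARN-9606 had added around this connection factory is undone together with the WebServiceClient usage.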

[hadoop] branch branch-3.1 updated: HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found'. Contributed by Aki Tanaka.

2020-05-18 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 533e51f  HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper 
///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found'. Contributed by 
Aki Tanaka.
533e51f is described below

commit 533e51f97e821b0ac5e188a8e86a79340c204e85
Author: Akira Ajisaka 
AuthorDate: Mon May 18 15:36:11 2020 +0900

HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper 
///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found'. Contributed by 
Aki Tanaka.

(cherry picked from commit 27601fc79ed053ce978ac18a2c5706d32e58019f)
---
 hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
index bcb8158..ab0d53c 100755
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
@@ -1336,7 +1336,7 @@ function hadoop_add_to_classpath_tools
 # shellcheck disable=SC1090
 . "${HADOOP_LIBEXEC_DIR}/tools/${module}.sh"
   else
-hadoop_error "ERROR: Tools helper ${HADOOP_LIBEXEC_DIR}/tools/${module}.sh 
was not found."
+hadoop_debug "Tools helper ${HADOOP_LIBEXEC_DIR}/tools/${module}.sh was 
not found."
   fi
 
   if declare -f hadoop_classpath_tools_${module} >/dev/null 2>&1; then
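
The change itself is a single word — hadoop_error becomes hadoop_debug — but its effect is easiest to see in isolation. Below is a stripped-down, standalone sketch (not the real hadoop-functions.sh; hadoop_debug here is a simplified stand-in mirroring the real helper's HADOOP_SHELL_SCRIPT_DEBUG gate, and the function name is suffixed to mark it as illustrative):

#!/usr/bin/env bash
# Stand-in for hadoop_debug: only prints when shell-script debugging is enabled.
hadoop_debug() {
  if [[ -n "${HADOOP_SHELL_SCRIPT_DEBUG}" ]]; then
    echo "DEBUG: $*" 1>&2
  fi
}

# Simplified tools-helper lookup after HADOOP-17042: a missing helper script
# is now a quiet debug message instead of a hard ERROR.
hadoop_add_to_classpath_tools_sketch() {
  local module=$1
  if [[ -f "${HADOOP_LIBEXEC_DIR}/tools/${module}.sh" ]]; then
    # shellcheck disable=SC1090
    . "${HADOOP_LIBEXEC_DIR}/tools/${module}.sh"
  else
    hadoop_debug "Tools helper ${HADOOP_LIBEXEC_DIR}/tools/${module}.sh was not found."
  fi
}

HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-/usr/lib/hadoop/libexec}
hadoop_add_to_classpath_tools_sketch hadoop-distcp

With HADOOP_SHELL_SCRIPT_DEBUG unset, a missing tools helper now produces no output at all, which removes the spurious "ERROR: Tools helper ... was not found" message the distcp report complained about.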





[hadoop] branch branch-3.2 updated: HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found'. Contributed by Aki Tanaka.

2020-05-18 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 777e364  HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper 
///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found'. Contributed by 
Aki Tanaka.
777e364 is described below

commit 777e36448cef41c347a1ee57d88cf96cd086e869
Author: Akira Ajisaka 
AuthorDate: Mon May 18 15:36:11 2020 +0900

HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper 
///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found'. Contributed by 
Aki Tanaka.

(cherry picked from commit 27601fc79ed053ce978ac18a2c5706d32e58019f)
---
 hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
index b3b8afc..56248d3 100755
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
@@ -1337,7 +1337,7 @@ function hadoop_add_to_classpath_tools
 # shellcheck disable=SC1090
 . "${HADOOP_LIBEXEC_DIR}/tools/${module}.sh"
   else
-hadoop_error "ERROR: Tools helper ${HADOOP_LIBEXEC_DIR}/tools/${module}.sh 
was not found."
+hadoop_debug "Tools helper ${HADOOP_LIBEXEC_DIR}/tools/${module}.sh was 
not found."
   fi
 
   if declare -f hadoop_classpath_tools_${module} >/dev/null 2>&1; then





[hadoop] branch branch-3.3 updated: HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found'. Contributed by Aki Tanaka.

2020-05-18 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 77587ff  HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper 
///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found'. Contributed by 
Aki Tanaka.
77587ff is described below

commit 77587ffb1e8e56d693f52e8c79ef5c742af03543
Author: Akira Ajisaka 
AuthorDate: Mon May 18 15:36:11 2020 +0900

HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper 
///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found'. Contributed by 
Aki Tanaka.

(cherry picked from commit 27601fc79ed053ce978ac18a2c5706d32e58019f)
---
 hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
index eb7285f..0d51f6b 100755
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
@@ -1342,7 +1342,7 @@ function hadoop_add_to_classpath_tools
 # shellcheck disable=SC1090
 . "${HADOOP_LIBEXEC_DIR}/tools/${module}.sh"
   else
-hadoop_error "ERROR: Tools helper ${HADOOP_LIBEXEC_DIR}/tools/${module}.sh 
was not found."
+hadoop_debug "Tools helper ${HADOOP_LIBEXEC_DIR}/tools/${module}.sh was 
not found."
   fi
 
   if declare -f hadoop_classpath_tools_${module} >/dev/null 2>&1; then





[hadoop] branch trunk updated: HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found'. Contributed by Aki Tanaka.

2020-05-18 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 27601fc  HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper 
///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found'. Contributed by 
Aki Tanaka.
27601fc is described below

commit 27601fc79ed053ce978ac18a2c5706d32e58019f
Author: Akira Ajisaka 
AuthorDate: Mon May 18 15:36:11 2020 +0900

HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper 
///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found'. Contributed by 
Aki Tanaka.
---
 hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
index eb7285f..0d51f6b 100755
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
@@ -1342,7 +1342,7 @@ function hadoop_add_to_classpath_tools
 # shellcheck disable=SC1090
 . "${HADOOP_LIBEXEC_DIR}/tools/${module}.sh"
   else
-hadoop_error "ERROR: Tools helper ${HADOOP_LIBEXEC_DIR}/tools/${module}.sh 
was not found."
+hadoop_debug "Tools helper ${HADOOP_LIBEXEC_DIR}/tools/${module}.sh was 
not found."
   fi
 
   if declare -f hadoop_classpath_tools_${module} >/dev/null 2>&1; then

