[hadoop] branch trunk updated: HDFS-14558. RBF: Isolation/Fairness documentation. Contributed by Fengnan Li.

2021-01-12 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0d7ac54  HDFS-14558. RBF: Isolation/Fairness documentation. 
Contributed by Fengnan Li.
0d7ac54 is described below

commit 0d7ac54510fbed8957d053546d606d5084e3e708
Author: Yiqun Lin 
AuthorDate: Tue Jan 12 23:38:55 2021 +0800

HDFS-14558. RBF: Isolation/Fairness documentation. Contributed by Fengnan 
Li.
---
 .../src/site/markdown/HDFSRouterFederation.md  | 23 ++
 1 file changed, 23 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index 66f039a..702fa44 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -184,6 +184,18 @@ Router relies on a state store to distribute tokens across 
all routers. Apart fr
 See the Apache JIRA ticket 
[HDFS-13532](https://issues.apache.org/jira/browse/HDFS-13532) for more 
information on this feature.
 
 
+### Isolation
+Router supports assignment of a dedicated number of RPC handlers to achieve isolation for all downstream nameservices it is configured to proxy. Since large or busy clusters may have relatively higher RPC traffic to the namenode compared to other clusters' namenodes, this feature, if enabled, allows admins to configure a higher number of RPC handlers for busy clusters. If dedicated handlers are not assigned for specific nameservices, RPC handlers are distributed equally among all config [...]
+
+If a downstream namenode is slow or busy enough that no permits are available, the router throws a StandbyException to the client. This in turn triggers failover behavior on the client side, and clients connect to a different router in the cluster. This has the positive effect of automatically load balancing RPCs across all router nodes. This is important to ensure that a single router does not become a bottleneck in case of unhealthy namenodes and all handlers availa [...]
+
+Users can configure handlers based on the steady-state load that individual downstream namenodes expect, and can introduce more routers to the cluster to handle more RPCs overall. Because clients automatically bounce to another router when certain namenodes are overloaded, well-behaved clients connecting to healthy namenodes will continue to use the RPC lanes dedicated to them. For badly behaving namenodes or backfill jobs that put spiky loads on namenodes, more ro [...]
+
+Overall the isolation feature is exposed via the configuration dfs.federation.router.fairness.enable (see the configuration table below). The default value is `false`. Users can also introduce their own fairness policy controller for custom allocation of handlers to various nameservices.
+
+See the Apache JIRA ticket 
[HDFS-14090](https://issues.apache.org/jira/browse/HDFS-14090) for more 
information on this feature.
+
+
 Deployment
 --
 
@@ -482,6 +494,17 @@ Kerberos and Delegation token supported in federation.
 | dfs.federation.router.kerberos.internal.spnego.principal | 
`${dfs.web.authentication.kerberos.principal}` | The server principal used by 
the Router for web UI SPNEGO authentication when Kerberos security is enabled. 
This is typically set to HTTP/_HOST@REALM.TLD. The SPNEGO server principal 
begins with the prefix HTTP/ by convention. If the value is '*', the web server 
will attempt to login with every principal specified in the keytab file 
'dfs.web.authentication.kerberos.keytab'. |
 | dfs.federation.router.secret.manager.class | 
`org.apache.hadoop.hdfs.server.federation.router.security.token.ZKDelegationTokenSecretManagerImpl`
 |  Class to implement state store to delegation tokens. Default implementation 
uses zookeeper as the backend to store delegation tokens. |
 
+### Isolation
+
+These properties control isolation and the dedicated assignment of RPC handlers across all configured downstream nameservices. The sum of the dedicated handler counts must be strictly smaller than the total number of router handlers (configured by dfs.federation.router.handler.count).
+
+| Property | Default | Description |
+|:---- |:---- |:---- |
+| dfs.federation.router.fairness.enable | `false` | If `true`, dedicated RPC 
handlers will be assigned to each nameservice based on the fairness assignment 
policy configured. |
+| dfs.federation.router.fairness.policy.controller.class | 
`org.apache.hadoop.hdfs.server.federation.fairness.NoRouterRpcFairnessPolicyController`
 | Default handler allocation model to be used if the isolation feature is enabled. It is recommended to use 
`org.apache.hadoop.hdfs.server.federation.fairness.StaticRouterRpcFairnessPolicyController`
 to fully use
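
The feature is driven entirely by router configuration. Below is a minimal, hypothetical sketch of enabling it programmatically; it assumes the StaticRouterRpcFairnessPolicyController mentioned above and a per-nameservice handler-count key of the form dfs.federation.router.fairness.handler.count.<nameservice>, which should be verified against the release in use. The nameservice ids and handler counts are illustrative only.

    // Hypothetical sketch: enabling RBF handler isolation via Configuration.
    import org.apache.hadoop.conf.Configuration;

    public class RouterFairnessConfigSketch {
      public static Configuration build() {
        Configuration conf = new Configuration();
        // Total number of router RPC handlers; the dedicated per-nameservice
        // counts below must sum to strictly less than this value.
        conf.setInt("dfs.federation.router.handler.count", 20);
        // Enable dedicated RPC handler assignment per nameservice.
        conf.setBoolean("dfs.federation.router.fairness.enable", true);
        // Static allocation policy controller from the table above.
        conf.set("dfs.federation.router.fairness.policy.controller.class",
            "org.apache.hadoop.hdfs.server.federation.fairness."
                + "StaticRouterRpcFairnessPolicyController");
        // Assumed per-nameservice keys; "ns-busy" and "ns-small" are made up.
        conf.setInt("dfs.federation.router.fairness.handler.count.ns-busy", 12);
        conf.setInt("dfs.federation.router.fairness.handler.count.ns-small", 4);
        return conf;
      }
    }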

[hadoop] branch branch-2.9 updated: HDFS-15660. StorageTypeProto is not compatiable between 3.x and 2.6. Contributed by Ryan Wu.

2020-12-07 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch branch-2.9
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.9 by this push:
 new f70fc91  HDFS-15660. StorageTypeProto is not compatiable between 3.x 
and 2.6. Contributed by Ryan Wu.
f70fc91 is described below

commit f70fc91a0218829ecd432cbe83d28ee0bd686644
Author: Yiqun Lin 
AuthorDate: Mon Dec 7 18:52:12 2020 +0800

HDFS-15660. StorageTypeProto is not compatiable between 3.x and 2.6. 
Contributed by Ryan Wu.

(cherry picked from commit da1ea2530fa61c53a99770e10889023c474fb4ef)
---
 hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
index c07dd9e..1b82272 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
@@ -180,7 +180,7 @@ message StorageTypeQuotaInfosProto {
 }
 
 message StorageTypeQuotaInfoProto {
-  required StorageTypeProto type = 1;
+  optional StorageTypeProto type = 1 [default = DISK];
   required uint64 quota = 2;
   required uint64 consumed = 3;
 }





[hadoop] branch branch-2.10 updated: HDFS-15660. StorageTypeProto is not compatiable between 3.x and 2.6. Contributed by Ryan Wu.

2020-12-07 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new 8ddd85b  HDFS-15660. StorageTypeProto is not compatiable between 3.x 
and 2.6. Contributed by Ryan Wu.
8ddd85b is described below

commit 8ddd85b0e6fc41bac854c910aecb5cbc0392535a
Author: Yiqun Lin 
AuthorDate: Mon Dec 7 18:52:12 2020 +0800

HDFS-15660. StorageTypeProto is not compatiable between 3.x and 2.6. 
Contributed by Ryan Wu.

(cherry picked from commit da1ea2530fa61c53a99770e10889023c474fb4ef)
---
 hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
index c07dd9e..1b82272 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
@@ -180,7 +180,7 @@ message StorageTypeQuotaInfosProto {
 }
 
 message StorageTypeQuotaInfoProto {
-  required StorageTypeProto type = 1;
+  optional StorageTypeProto type = 1 [default = DISK];
   required uint64 quota = 2;
   required uint64 consumed = 3;
 }





[hadoop] branch branch-3.0 updated: HDFS-15660. StorageTypeProto is not compatiable between 3.x and 2.6. Contributed by Ryan Wu.

2020-12-07 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new b46ec61  HDFS-15660. StorageTypeProto is not compatiable between 3.x 
and 2.6. Contributed by Ryan Wu.
b46ec61 is described below

commit b46ec6190968205a83015fdc0450b5ce55b03fbb
Author: Yiqun Lin 
AuthorDate: Mon Dec 7 18:52:12 2020 +0800

HDFS-15660. StorageTypeProto is not compatiable between 3.x and 2.6. 
Contributed by Ryan Wu.

(cherry picked from commit da1ea2530fa61c53a99770e10889023c474fb4ef)
---
 hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
index 7b3f5ed..df770c8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
@@ -182,7 +182,7 @@ message StorageTypeQuotaInfosProto {
 }
 
 message StorageTypeQuotaInfoProto {
-  required StorageTypeProto type = 1;
+  optional StorageTypeProto type = 1 [default = DISK];
   required uint64 quota = 2;
   required uint64 consumed = 3;
 }





[hadoop] branch trunk updated: HDFS-15660. StorageTypeProto is not compatiable between 3.x and 2.6. Contributed by Ryan Wu.

2020-12-07 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new da1ea25  HDFS-15660. StorageTypeProto is not compatiable between 3.x 
and 2.6. Contributed by Ryan Wu.
da1ea25 is described below

commit da1ea2530fa61c53a99770e10889023c474fb4ef
Author: Yiqun Lin 
AuthorDate: Mon Dec 7 18:52:12 2020 +0800

HDFS-15660. StorageTypeProto is not compatiable between 3.x and 2.6. 
Contributed by Ryan Wu.
---
 hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
index 5ae58cf..08fce71 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
@@ -198,7 +198,7 @@ message StorageTypeQuotaInfosProto {
 }
 
 message StorageTypeQuotaInfoProto {
-  required StorageTypeProto type = 1;
+  optional StorageTypeProto type = 1 [default = DISK];
   required uint64 quota = 2;
   required uint64 consumed = 3;
 }





[hadoop] branch trunk updated: HDFS-15640. Add diff threshold to FedBalance. Contributed by Jinglun.

2020-10-26 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 15a5f53  HDFS-15640. Add diff threshold to FedBalance. Contributed by 
Jinglun.
15a5f53 is described below

commit 15a5f5367366fdd76933d0ff6499363fcbc8873e
Author: Yiqun Lin 
AuthorDate: Tue Oct 27 10:41:10 2020 +0800

HDFS-15640. Add diff threshold to FedBalance. Contributed by Jinglun.
---
 .../hadoop/tools/fedbalance/DistCpProcedure.java   | 43 +-
 .../apache/hadoop/tools/fedbalance/FedBalance.java | 22 ++-
 .../hadoop/tools/fedbalance/FedBalanceContext.java | 21 +++
 .../hadoop/tools/fedbalance/FedBalanceOptions.java | 11 ++
 .../src/site/markdown/HDFSFederationBalance.md |  1 +
 .../tools/fedbalance/TestDistCpProcedure.java  | 34 -
 6 files changed, 119 insertions(+), 13 deletions(-)

diff --git 
a/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/DistCpProcedure.java
 
b/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/DistCpProcedure.java
index 223b777..33d37be 100644
--- 
a/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/DistCpProcedure.java
+++ 
b/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/DistCpProcedure.java
@@ -89,6 +89,8 @@ public class DistCpProcedure extends BalanceProcedure {
   private boolean forceCloseOpenFiles;
   /* Disable write by setting the mount point readonly. */
   private boolean useMountReadOnly;
+  /* The threshold of diff entries. */
+  private int diffThreshold;
 
   private FsPermission fPerm; // the permission of the src.
   private AclStatus acl; // the acl of the src.
@@ -134,6 +136,7 @@ public class DistCpProcedure extends BalanceProcedure {
 this.bandWidth = context.getBandwidthLimit();
 this.forceCloseOpenFiles = context.getForceCloseOpenFiles();
 this.useMountReadOnly = context.getUseMountReadOnly();
+this.diffThreshold = context.getDiffThreshold();
 srcFs = (DistributedFileSystem) context.getSrc().getFileSystem(conf);
 dstFs = (DistributedFileSystem) context.getDst().getFileSystem(conf);
   }
@@ -227,12 +230,8 @@ public class DistCpProcedure extends BalanceProcedure {
   } else {
 throw new RetryException(); // wait job complete.
   }
-} else if (!verifyDiff()) {
-  if (!verifyOpenFiles() || forceCloseOpenFiles) {
-updateStage(Stage.DISABLE_WRITE);
-  } else {
-throw new RetryException();
-  }
+} else if (diffDistCpStageDone()) {
+  updateStage(Stage.DISABLE_WRITE);
 } else {
   submitDiffDistCp();
 }
@@ -372,14 +371,38 @@ public class DistCpProcedure extends BalanceProcedure {
   }
 
   /**
-   * Verify whether the src has changed since CURRENT_SNAPSHOT_NAME snapshot.
+   * Check whether the conditions are satisfied for moving to the next stage.
+   * If the number of diff entries is no greater than the threshold and the
+   * open files can be force closed (or there are no open files), then move to
+   * the next stage.
+   *
+   * @return true if moving to the next stage; false if the conditions are not
+   * satisfied.
+   * @throws RetryException if the conditions are not satisfied and the diff
+   * size is within the given threshold.
+   */
+  @VisibleForTesting
+  boolean diffDistCpStageDone() throws IOException, RetryException {
+int diffSize = getDiffSize();
+if (diffSize <= diffThreshold) {
+  if (forceCloseOpenFiles || !verifyOpenFiles()) {
+return true;
+  } else {
+throw new RetryException();
+  }
+}
+return false;
+  }
+
+  /**
+   * Get number of the diff entries.
*
-   * @return true if the src has changed.
+   * @return number of the diff entries.
*/
-  private boolean verifyDiff() throws IOException {
+  private int getDiffSize() throws IOException {
 SnapshotDiffReport diffReport =
 srcFs.getSnapshotDiffReport(src, CURRENT_SNAPSHOT_NAME, "");
-return diffReport.getDiffList().size() > 0;
+return diffReport.getDiffList().size();
   }
 
   /**
diff --git 
a/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalance.java
 
b/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalance.java
index ca6b0df..c850798 100644
--- 
a/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalance.java
+++ 
b/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalance.java
@@ -51,6 +51,7 @@ import static 
org.apache.hadoop.tools.fedbalance.FedBalanceOptions.BANDWIDTH;
 import static org.apache.hadoop.tools.fedbalance.FedBalanceOptions.TRASH

[hadoop] branch trunk updated: HDFS-15374. Add documentation for fedbalance tool. Contributed by Jinglun.

2020-07-01 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ff8bb67  HDFS-15374. Add documentation for fedbalance tool. 
Contributed by Jinglun.
ff8bb67 is described below

commit ff8bb672000980f3de7391e5d268e789d5cbe974
Author: Yiqun Lin 
AuthorDate: Wed Jul 1 14:18:18 2020 +0800

HDFS-15374. Add documentation for fedbalance tool. Contributed by Jinglun.
---
 hadoop-project/src/site/site.xml   |   1 +
 .../src/site/markdown/HDFSFederationBalance.md | 171 +
 .../src/site/resources/css/site.css|  30 
 .../resources/images/BalanceProcedureScheduler.png | Bin 0 -> 48275 bytes
 4 files changed, 202 insertions(+)

diff --git a/hadoop-project/src/site/site.xml b/hadoop-project/src/site/site.xml
index c3c0f19..4c9d356 100644
--- a/hadoop-project/src/site/site.xml
+++ b/hadoop-project/src/site/site.xml
@@ -198,6 +198,7 @@
   
   
   
+  
   
   
   
diff --git 
a/hadoop-tools/hadoop-federation-balance/src/site/markdown/HDFSFederationBalance.md
 
b/hadoop-tools/hadoop-federation-balance/src/site/markdown/HDFSFederationBalance.md
new file mode 100644
index 000..ff42eaf
--- /dev/null
+++ 
b/hadoop-tools/hadoop-federation-balance/src/site/markdown/HDFSFederationBalance.md
@@ -0,0 +1,171 @@
+
+
+HDFS Federation Balance Guide
+=
+
+
+
+Overview
+
+
+  HDFS Federation Balance is a tool for balancing data across different
+  federation namespaces. It uses [DistCp](../hadoop-distcp/DistCp.html) to copy
+  data from the source path to the target path. First it creates a snapshot at
+  the source path and submits the initial distcp. Second it uses distcp diff to
+  do the incremental copy until the source and the target are the same. Then,
+  if it is working in RBF mode, it updates the mount table in the Router.
+  Finally it moves the source to trash.
+
+  This document aims to describe the usage and design of the HDFS Federation
+  Balance.
+
+Usage
+-
+
+### Basic Usage
+
+  The hdfs federation balance tool supports both normal federation clusters
+  and router-based federation clusters. Taking RBF as an example, suppose we
+  have a mount entry in the Router:
+
+Source   Destination
+/foo/src hdfs://namespace-0/foo/src
+
+  The command below runs an hdfs federation balance job. The first parameter is
+  the mount entry. The second one is the target path which must include the
+  target cluster. The option `-router` indicates this is in router-based
+  federation mode.
+
+bash$ /bin/hadoop fedbalance -router submit /foo/src 
hdfs://namespace-1/foo/dst
+
+  It copies data from hdfs://namespace-0/foo/src to hdfs://namespace-1/foo/dst
+  incrementally and finally updates the mount entry to:
+
+Source   Destination
+/foo/src hdfs://namespace-1/foo/dst
+
+  If the hadoop shell process exits unexpectedly, we can use the command below
+  to continue the unfinished job:
+
+bash$ /bin/hadoop fedbalance continue
+
+  This will scan the journal to find all the unfinished jobs, recover and
+  continue to execute them.
+
+  If we want to balance in a normal federation cluster, use the command below.
+
+bash$ /bin/hadoop fedbalance submit hdfs://namespace-0/foo/src 
hdfs://namespace-1/foo/dst
+
+  In normal federation mode the source path must include the path scheme.
+
+### RBF Mode And Normal Federation Mode
+
+  The hdfs federation balance tool has 2 modes:
+
+  * the router-based federation mode (RBF mode).
+  * the normal federation mode.
+
+  By default the command runs in the normal federation mode. You can specify
+  RBF mode by using the option `-router`.
+
+  In RBF mode the first parameter is taken as the mount point. It disables
+  writes by setting the mount point read-only.
+
+  In the normal federation mode the first parameter is taken as the full path
+  of the source. The first parameter must include the source cluster. It
+  disables writes by cancelling all permissions on the source path.
+
+  For details about disabling writes, see [HDFS FedBalance](#HDFS_FedBalance).
+
+### Command Options
+
+Command `submit` accepts the following options:
+
+| Option key | Description | Default |
+| --- | --- | --- |
+| -router | Run in router-based federation mode. | Normal federation mode. |
+| -forceCloseOpen | Force close all open files when there is no diff in the DIFF_DISTCP stage. | Wait until there are no open files. |
+| -map | Max number of concurrent maps to use for copy. | 10 |
+| -bandwidth | Specify bandwidth per map in MB. | 10 |
+| -delay | Specify the delay duration (in milliseconds) when the job needs to retry. | 1000 |
+| -moveToTrash | This option has 3 values: `tr

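As a hypothetical end-to-end invocation combining several of the options listed above (the flag values are illustrative, not recommendations):

    bash$ /bin/hadoop fedbalance -router -forceCloseOpen -map 20 -bandwidth 20 -delay 2000 submit /foo/src hdfs://namespace-1/foo/dst
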
[hadoop] branch trunk updated: HDFS-15410. Add separated config file hdfs-fedbalance-default.xml for fedbalance tool. Contributed by Jinglun.

2020-07-01 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new de2cb86  HDFS-15410. Add separated config file 
hdfs-fedbalance-default.xml for fedbalance tool. Contributed by Jinglun.
de2cb86 is described below

commit de2cb8626016f22b388da7796082b2e160059cf6
Author: Yiqun Lin 
AuthorDate: Wed Jul 1 14:06:27 2020 +0800

HDFS-15410. Add separated config file hdfs-fedbalance-default.xml for 
fedbalance tool. Contributed by Jinglun.
---
 .../apache/hadoop/tools/fedbalance/FedBalance.java | 48 ++
 .../hadoop/tools/fedbalance/FedBalanceConfigs.java | 17 
 ...pBalanceOptions.java => FedBalanceOptions.java} | 16 
 .../procedure/BalanceProcedureScheduler.java   |  7 +---
 .../src/main/resources/hdfs-fedbalance-default.xml | 41 ++
 .../hadoop/tools/fedbalance/TestFedBalance.java| 34 +++
 6 files changed, 125 insertions(+), 38 deletions(-)

diff --git 
a/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalance.java
 
b/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalance.java
index adfb40b..8252957 100644
--- 
a/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalance.java
+++ 
b/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalance.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.tools.fedbalance;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.cli.CommandLine;
 import org.apache.commons.cli.CommandLineParser;
 import org.apache.commons.cli.GnuParser;
@@ -25,7 +26,6 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
 
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.tools.fedbalance.procedure.BalanceProcedure;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
 import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
@@ -34,7 +34,6 @@ import 
org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
 import org.apache.hadoop.tools.fedbalance.procedure.BalanceJob;
 import org.apache.hadoop.tools.fedbalance.procedure.BalanceProcedureScheduler;
 import org.apache.hadoop.net.NetUtils;
-import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 import org.slf4j.Logger;
@@ -45,14 +44,13 @@ import java.net.InetSocketAddress;
 import java.util.Collection;
 import java.util.concurrent.TimeUnit;
 
-import static org.apache.hadoop.tools.fedbalance.DistCpBalanceOptions.ROUTER;
-import static 
org.apache.hadoop.tools.fedbalance.DistCpBalanceOptions.FORCE_CLOSE_OPEN;
-import static org.apache.hadoop.tools.fedbalance.DistCpBalanceOptions.MAP;
-import static 
org.apache.hadoop.tools.fedbalance.DistCpBalanceOptions.BANDWIDTH;
-import static org.apache.hadoop.tools.fedbalance.DistCpBalanceOptions.TRASH;
-import static 
org.apache.hadoop.tools.fedbalance.DistCpBalanceOptions.DELAY_DURATION;
-import static 
org.apache.hadoop.tools.fedbalance.DistCpBalanceOptions.CLI_OPTIONS;
-import static 
org.apache.hadoop.tools.fedbalance.FedBalanceConfigs.FEDERATION_BALANCE_CLASS;
+import static org.apache.hadoop.tools.fedbalance.FedBalanceOptions.ROUTER;
+import static 
org.apache.hadoop.tools.fedbalance.FedBalanceOptions.FORCE_CLOSE_OPEN;
+import static org.apache.hadoop.tools.fedbalance.FedBalanceOptions.MAP;
+import static org.apache.hadoop.tools.fedbalance.FedBalanceOptions.BANDWIDTH;
+import static org.apache.hadoop.tools.fedbalance.FedBalanceOptions.TRASH;
+import static 
org.apache.hadoop.tools.fedbalance.FedBalanceOptions.DELAY_DURATION;
+import static org.apache.hadoop.tools.fedbalance.FedBalanceOptions.CLI_OPTIONS;
 import static org.apache.hadoop.tools.fedbalance.FedBalanceConfigs.TrashOption;
 
 /**
@@ -73,6 +71,10 @@ public class FedBalance extends Configured implements Tool {
   private static final String MOUNT_TABLE_PROCEDURE = "mount-table-procedure";
   private static final String TRASH_PROCEDURE = "trash-procedure";
 
+  private static final String FED_BALANCE_DEFAULT_XML =
+  "hdfs-fedbalance-default.xml";
+  private static final String FED_BALANCE_SITE_XML = 
"hdfs-fedbalance-site.xml";
+
   /**
* This class helps building the balance job.
*/
@@ -210,7 +212,7 @@ public class FedBalance extends Configured implements Tool {
   public int run(String[] args) throws Exception {
 CommandLineParser parser = new GnuParser();
 CommandLine command =
-parser.parse(DistCpBalanceOptions.CLI_OPTIONS, args, true);
+parser.parse(FedBalanceOptions.CLI_OPTIONS, args, true);
 

[hadoop] branch trunk updated: HDFS-15346. FedBalance tool implementation. Contributed by Jinglun.

2020-06-17 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 9cbd76c  HDFS-15346. FedBalance tool implementation. Contributed by 
Jinglun.
9cbd76c is described below

commit 9cbd76cc775b58dfedb943f971b3307ec5702f13
Author: Yiqun Lin 
AuthorDate: Thu Jun 18 13:33:25 2020 +0800

HDFS-15346. FedBalance tool implementation. Contributed by Jinglun.
---
 .../src/main/resources/assemblies/hadoop-tools.xml |  15 +
 .../apache/hadoop/hdfs/protocol/HdfsConstants.java |   2 +
 hadoop-project/pom.xml |  17 +
 hadoop-tools/hadoop-federation-balance/pom.xml | 249 
 .../tools/fedbalance/DistCpBalanceOptions.java |  95 +++
 .../hadoop/tools/fedbalance/DistCpProcedure.java   | 635 +
 .../apache/hadoop/tools/fedbalance/FedBalance.java | 377 
 .../hadoop/tools/fedbalance/FedBalanceConfigs.java |  19 +-
 .../hadoop/tools/fedbalance/FedBalanceContext.java | 286 ++
 .../tools/fedbalance/MountTableProcedure.java  | 244 
 .../hadoop/tools/fedbalance/TrashProcedure.java| 112 
 .../hadoop/tools/fedbalance}/package-info.java |  14 +-
 .../tools/fedbalance}/procedure/BalanceJob.java|   2 +-
 .../fedbalance}/procedure/BalanceJournal.java  |   2 +-
 .../procedure/BalanceJournalInfoHDFS.java  |   8 +-
 .../fedbalance}/procedure/BalanceProcedure.java|   4 +-
 .../procedure/BalanceProcedureScheduler.java   |   8 +-
 .../tools/fedbalance}/procedure/package-info.java  |   2 +-
 .../shellprofile.d/hadoop-federation-balance.sh|  38 ++
 .../tools/fedbalance/TestDistCpProcedure.java  | 446 +++
 .../tools/fedbalance/TestMountTableProcedure.java  | 222 +++
 .../tools/fedbalance/TestTrashProcedure.java   | 102 
 .../fedbalance}/procedure/MultiPhaseProcedure.java |   2 +-
 .../fedbalance}/procedure/RecordProcedure.java |   2 +-
 .../fedbalance}/procedure/RetryProcedure.java  |   2 +-
 .../procedure/TestBalanceProcedureScheduler.java   |   7 +-
 .../procedure/UnrecoverableProcedure.java  |   2 +-
 .../tools/fedbalance}/procedure/WaitProcedure.java |   2 +-
 hadoop-tools/hadoop-tools-dist/pom.xml |   5 +
 hadoop-tools/pom.xml   |   1 +
 30 files changed, 2887 insertions(+), 35 deletions(-)

diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml 
b/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml
index 054d8c0..db744f5 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml
@@ -48,6 +48,14 @@
   0755
 
 
+  
../hadoop-federation-balance/src/main/shellprofile.d
+  
+*
+  
+  /libexec/shellprofile.d
+  0755
+
+
   ../hadoop-extras/src/main/shellprofile.d
   
 *
@@ -112,6 +120,13 @@
   
 
 
+  ../hadoop-federation-balance/target
+  
/share/hadoop/${hadoop.component}/sources
+  
+*-sources.jar
+  
+
+
   ../hadoop-extras/target
   
/share/hadoop/${hadoop.component}/sources
   
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index ab61e50..a025b9b 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -105,6 +105,8 @@ public final class HdfsConstants {
   public static final String DOT_SNAPSHOT_DIR = ".snapshot";
   public static final String SEPARATOR_DOT_SNAPSHOT_DIR
   = Path.SEPARATOR + DOT_SNAPSHOT_DIR;
+  public static final String DOT_SNAPSHOT_DIR_SEPARATOR =
+  DOT_SNAPSHOT_DIR + Path.SEPARATOR;
   public static final String SEPARATOR_DOT_SNAPSHOT_DIR_SEPARATOR
   = Path.SEPARATOR + DOT_SNAPSHOT_DIR + Path.SEPARATOR;
   public final static String DOT_RESERVED_STRING = ".reserved";
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index f3a3d76..48928b5 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -329,6 +329,12 @@
   
   
 org.apache.hadoop
+hadoop-hdfs-rbf
+${hadoop.version}
+test-jar
+  
+  
+org.apache.hadoop
 hadoop-mapreduce-client-app
 ${hadoop.version}
   
@@ -580,6 +586,17 @@
   
   
 org.apache.hadoop
+hadoop-federation-balance
+${hadoop.version}
+  
+  
+org.apache.hadoop
+hadoop-federation-balance
+${hadoop.versio

[hadoop] branch trunk updated: HDFS-15340. RBF: Implement BalanceProcedureScheduler basic framework. Contributed by Jinglun.

2020-05-19 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 1983eea  HDFS-15340. RBF: Implement BalanceProcedureScheduler basic 
framework. Contributed by Jinglun.
1983eea is described below

commit 1983eea62def58fb769f44c1d41dc29690274809
Author: Yiqun Lin 
AuthorDate: Wed May 20 10:39:40 2020 +0800

HDFS-15340. RBF: Implement BalanceProcedureScheduler basic framework. 
Contributed by Jinglun.
---
 .../apache/hadoop/hdfs/procedure/BalanceJob.java   | 361 +
 .../hadoop/hdfs/procedure/BalanceJournal.java  |  48 +++
 .../hdfs/procedure/BalanceJournalInfoHDFS.java | 203 ++
 .../hadoop/hdfs/procedure/BalanceProcedure.java| 226 +++
 .../hdfs/procedure/BalanceProcedureConfigKeys.java |  41 ++
 .../hdfs/procedure/BalanceProcedureScheduler.java  | 450 
 .../apache/hadoop/hdfs/procedure/package-info.java |  29 ++
 .../hadoop/hdfs/procedure/MultiPhaseProcedure.java |  88 
 .../hadoop/hdfs/procedure/RecordProcedure.java |  45 ++
 .../hadoop/hdfs/procedure/RetryProcedure.java  |  66 +++
 .../procedure/TestBalanceProcedureScheduler.java   | 451 +
 .../hdfs/procedure/UnrecoverableProcedure.java |  56 +++
 .../hadoop/hdfs/procedure/WaitProcedure.java   |  77 
 13 files changed, 2141 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceJob.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceJob.java
new file mode 100644
index 000..847092a
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceJob.java
@@ -0,0 +1,361 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.procedure;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.commons.lang3.builder.EqualsBuilder;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.List;
+import java.util.ArrayList;
+
+/**
+ * A Job is a state machine consists of many procedures. The procedures are
+ * executed as a chain. Each procedure needs to specify the next procedure. If
+ * there is no next procedure then the job is finished.
+ */
+public final class BalanceJob implements Writable {
+  private String id;
+  private BalanceProcedureScheduler scheduler;
+  private volatile boolean jobDone = false;
+  private Exception error;
+  public static final Logger LOG = LoggerFactory.getLogger(BalanceJob.class);
+  private Map procedureTable = new HashMap<>();
+  private T firstProcedure;
+  private T curProcedure;
+  private T lastProcedure;
+  private boolean removeAfterDone;
+
+  static final String NEXT_PROCEDURE_NONE = "NONE";
+  private static Set reservedNames = new HashSet<>();
+
+  static {
+reservedNames.add(NEXT_PROCEDURE_NONE);
+  }
+
+  public static class Builder {
+
+private List procedures = new ArrayList<>();
+private boolean removeAfterDone = false;
+
+/**
+ * Append a procedure to the tail.
+ */
+public Builder nextProcedure(T procedure) {
+  int size = procedures.size();
+  if (size > 0) {
+procedures.get(size - 1).setNextProcedure(procedure.name());
+  }
+  procedure.setNextProcedure(NEXT_PROCEDURE_NONE);
+  procedures.add(procedure);
+  return this;
+}
+
+/**
+ * Automatically remove this job from the scheduler cache when the job is
+ * done.
+ */
+public Builder removeAfterDone(boolean remove) {
+  removeAfterDone = remove;
+  return this;
+}
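
The listing above is cut off before the rest of the Builder, but the intended usage pattern follows from the javadoc: procedures are appended to the tail and chained automatically. A rough, hypothetical sketch, assuming the Builder exposes a build() method and that jobs are handed to a BalanceProcedureScheduler via submit()/waitUntilDone() (none of which is shown above); generics are omitted for brevity and the procedure instances are placeholders.

    // Hypothetical sketch; build(), submit() and waitUntilDone() are assumptions.
    import org.apache.hadoop.hdfs.procedure.BalanceJob;
    import org.apache.hadoop.hdfs.procedure.BalanceProcedure;
    import org.apache.hadoop.hdfs.procedure.BalanceProcedureScheduler;

    public class BalanceJobSketch {
      @SuppressWarnings({"rawtypes", "unchecked"})
      static void runChain(BalanceProcedureScheduler scheduler,
          BalanceProcedure first, BalanceProcedure second) throws Exception {
        BalanceJob job = new BalanceJob.Builder()
            .nextProcedure(first)      // executed first
            .nextProcedure(second)     // chained after the first procedure
            .removeAfterDone(true)     // drop from the scheduler cache when done
            .build();                  // assumed to exist on Builder
        scheduler.submit(job);         // assumed scheduler entry point
        scheduler.waitUntilDone(job);  // assumed blocking wait for completion
      }
    }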

[hadoop] branch branch-2.10 updated: HDFS-15264. Backport Datanode detection to branch-2.10. Contributed by Lisheng Sun.

2020-05-16 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new e6374f0  HDFS-15264. Backport Datanode detection to branch-2.10. 
Contributed by Lisheng Sun.
e6374f0 is described below

commit e6374f031af6fb3a5467ccd12d6a4c8d7b0dae1e
Author: Yiqun Lin 
AuthorDate: Sun May 17 11:59:10 2020 +0800

HDFS-15264. Backport Datanode detection to branch-2.10. Contributed by 
Lisheng Sun.
---
 .../java/org/apache/hadoop/hdfs/ClientContext.java |  49 ++
 .../java/org/apache/hadoop/hdfs/DFSClient.java | 115 
 .../org/apache/hadoop/hdfs/DFSInputStream.java | 100 ++--
 .../org/apache/hadoop/hdfs/DeadNodeDetector.java   | 586 +
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |  39 ++
 .../hadoop/hdfs/client/impl/DfsClientConf.java |  14 +
 .../src/main/resources/hdfs-default.xml|  73 +++
 .../apache/hadoop/hdfs/TestDeadNodeDetection.java  | 406 ++
 8 files changed, 1349 insertions(+), 33 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
index a31945c..2cb30f5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.net.NetworkTopology;
 import org.apache.hadoop.net.NodeBase;
 import org.apache.hadoop.net.ScriptBasedMapping;
+import org.apache.hadoop.util.Daemon;
 import org.apache.hadoop.util.ReflectionUtils;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -117,6 +118,19 @@ public class ClientContext {
   private NodeBase clientNode;
   private boolean topologyResolutionEnabled;
 
+  private Daemon deadNodeDetectorThr = null;
+
+  /**
+   * The switch to DeadNodeDetector.
+   */
+  private boolean deadNodeDetectionEnabled = false;
+
+  /**
+   * Detect the dead datanodes in advance, and share this information among all
+   * the DFSInputStreams in the same client.
+   */
+  private DeadNodeDetector deadNodeDetector = null;
+
   private ClientContext(String name, DfsClientConf conf,
   Configuration config) {
 final ShortCircuitConf scConf = conf.getShortCircuitConf();
@@ -133,6 +147,12 @@ public class ClientContext {
 
 this.byteArrayManager = ByteArrayManager.newInstance(
 conf.getWriteByteArrayManagerConf());
+this.deadNodeDetectionEnabled = conf.isDeadNodeDetectionEnabled();
+if (deadNodeDetectionEnabled && deadNodeDetector == null) {
+  deadNodeDetector = new DeadNodeDetector(name, config);
+  deadNodeDetectorThr = new Daemon(deadNodeDetector);
+  deadNodeDetectorThr.start();
+}
 initTopologyResolution(config);
   }
 
@@ -250,4 +270,33 @@ public class ClientContext {
 datanodeInfo.getNetworkLocation());
 return NetworkTopology.getDistanceByPath(clientNode, node);
   }
+
+  /**
+   * The switch to DeadNodeDetector. If true, DeadNodeDetector is available.
+   */
+  public boolean isDeadNodeDetectionEnabled() {
+return deadNodeDetectionEnabled;
+  }
+
+  /**
+   * Obtain DeadNodeDetector of the current client.
+   */
+  public DeadNodeDetector getDeadNodeDetector() {
+return deadNodeDetector;
+  }
+
+  /**
+   * Close dead node detector thread.
+   */
+  public void stopDeadNodeDetectorThread() {
+if (deadNodeDetectorThr != null) {
+  deadNodeDetectorThr.interrupt();
+  try {
+deadNodeDetectorThr.join();
+  } catch (InterruptedException e) {
+LOG.warn("Encountered exception while waiting to join on dead " +
+"node detector thread.", e);
+  }
+}
+  }
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 77ee893..ad4e499 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -48,6 +48,8 @@ import java.util.LinkedHashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.SynchronousQueue;
 import java.util.concurrent.ThreadLocalRandom;
 import java.util.concurrent.ThreadPoolExecutor;
@@ -240,6 +242,20 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
   private final int smallBufferSize;
   private final long serverDefaultsValidityPeriod;
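
The detector is switched on purely through client configuration. A minimal sketch using the dead-node-detection keys this backport adds to HdfsClientConfigKeys; the constant names match those used in the TestDeadNodeDetection changes later in this digest, and the interval values are illustrative only.

    // Hypothetical sketch: enabling client-side dead node detection.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;

    public class DeadNodeDetectionConfigSketch {
      public static Configuration build() {
        Configuration conf = new Configuration();
        // Share one DeadNodeDetector among all DFSInputStreams of the client.
        conf.setBoolean(
            HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
        // Probe intervals in milliseconds (illustrative values).
        conf.setLong(HdfsClientConfigKeys
            .DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY, 1000L);
        conf.setLong(HdfsClientConfigKeys
            .DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_KEY, 100L);
        return conf;
      }
    }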

[hadoop] branch trunk updated: HDFS-13811. RBF: Race condition between router admin quota update and periodic quota update service. Contributed by Jinglun.

2019-12-04 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 47fdae7  HDFS-13811. RBF: Race condition between router admin quota 
update and periodic quota update service. Contributed by Jinglun.
47fdae7 is described below

commit 47fdae79041ba2bb036ef7723a93ade5b1ac3619
Author: Yiqun Lin 
AuthorDate: Wed Dec 4 18:19:11 2019 +0800

HDFS-13811. RBF: Race condition between router admin quota update and 
periodic quota update service. Contributed by Jinglun.
---
 .../hdfs/server/federation/router/Router.java  |   8 ++
 .../federation/router/RouterQuotaManager.java  |  21 
 .../router/RouterQuotaUpdateService.java   |  40 +--
 .../server/federation/store/MountTableStore.java   |  12 ++-
 .../federation/store/impl/MountTableStoreImpl.java |  15 +++
 .../server/federation/router/TestRouterQuota.java  | 115 +++--
 6 files changed, 166 insertions(+), 45 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
index a03d8d4..64fdabe 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
@@ -43,6 +43,7 @@ import 
org.apache.hadoop.hdfs.server.federation.metrics.RBFMetrics;
 import org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
+import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
 import org.apache.hadoop.hdfs.server.federation.store.RouterStore;
 import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
@@ -292,6 +293,13 @@ public class Router extends CompositeService implements
 }
 
 super.serviceInit(conf);
+
+// Set quota manager in mount store to update quota usage in mount table.
+if (stateStore != null) {
+  MountTableStore mountstore =
+  this.stateStore.getRegisteredRecordStore(MountTableStore.class);
+  mountstore.setQuotaManager(this.quotaManager);
+}
   }
 
   private String getDisabledDependentServices() {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java
index c1a5146..ceb758e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java
@@ -170,6 +170,27 @@ public class RouterQuotaManager {
   }
 
   /**
+   * Update quota in cache. The usage will be preserved.
+   * @param path Mount table path.
+   * @param quota Corresponding quota value.
+   */
+  public void updateQuota(String path, RouterQuotaUsage quota) {
+writeLock.lock();
+try {
+  RouterQuotaUsage.Builder builder = new RouterQuotaUsage.Builder()
+  .quota(quota.getQuota()).spaceQuota(quota.getSpaceQuota());
+  RouterQuotaUsage current = this.cache.get(path);
+  if (current != null) {
+builder.fileAndDirectoryCount(current.getFileAndDirectoryCount())
+.spaceConsumed(current.getSpaceConsumed());
+  }
+  this.cache.put(path, builder.build());
+} finally {
+  writeLock.unlock();
+}
+  }
+
+  /**
* Remove the entity from cache.
* @param path Mount table path.
*/
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
index e5d4472..f1a86bf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
@@ -35,15 +35,13 @@ import 
org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
 import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import

[hadoop] branch trunk updated: HDFS-15019. Refactor the unit test of TestDeadNodeDetection. Contributed by Lisheng Sun.

2019-11-27 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c3659f8  HDFS-15019. Refactor the unit test of TestDeadNodeDetection. 
Contributed by Lisheng Sun.
c3659f8 is described below

commit c3659f8f94bef7cfad0c3fb04391a7ffd4221679
Author: Yiqun Lin 
AuthorDate: Thu Nov 28 14:41:49 2019 +0800

HDFS-15019. Refactor the unit test of TestDeadNodeDetection. Contributed by 
Lisheng Sun.
---
 .../apache/hadoop/hdfs/TestDeadNodeDetection.java  | 38 --
 1 file changed, 7 insertions(+), 31 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDeadNodeDetection.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDeadNodeDetection.java
index 58f6d5d..a1e53cd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDeadNodeDetection.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDeadNodeDetection.java
@@ -51,6 +51,11 @@ public class TestDeadNodeDetection {
   public void setUp() {
 cluster = null;
 conf = new HdfsConfiguration();
+conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
+conf.setLong(
+DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY, 1000);
+conf.setLong(
+DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_KEY, 
100);
   }
 
   @After
@@ -62,15 +67,6 @@ public class TestDeadNodeDetection {
 
   @Test
   public void testDeadNodeDetectionInBackground() throws Exception {
-conf = new HdfsConfiguration();
-conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
-
conf.setLong(DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY,
-1000);
-conf.setLong(
-DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_KEY, 
100);
-// We'll be using a 512 bytes block size just for tests
-// so making sure the checksum bytes match it too.
-conf.setInt("io.bytes.per.checksum", 512);
 cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
 cluster.waitActive();
 
@@ -123,10 +119,6 @@ public class TestDeadNodeDetection {
   @Test
   public void testDeadNodeDetectionInMultipleDFSInputStream()
   throws IOException {
-conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
-// We'll be using a 512 bytes block size just for tests
-// so making sure the checksum bytes match it too.
-conf.setInt("io.bytes.per.checksum", 512);
 cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
 cluster.waitActive();
 
@@ -149,8 +141,9 @@ public class TestDeadNodeDetection {
   } catch (BlockMissingException e) {
   }
 
-  din2 = (DFSInputStream) in1.getWrappedStream();
+  din2 = (DFSInputStream) in2.getWrappedStream();
   dfsClient2 = din2.getDFSClient();
+  assertEquals(dfsClient1.toString(), dfsClient2.toString());
   assertEquals(1, dfsClient1.getDeadNodes(din1).size());
   assertEquals(1, dfsClient2.getDeadNodes(din2).size());
   assertEquals(1, dfsClient1.getClientContext().getDeadNodeDetector()
@@ -180,12 +173,6 @@ public class TestDeadNodeDetection {
 
   @Test
   public void testDeadNodeDetectionDeadNodeRecovery() throws Exception {
-conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
-
conf.setLong(DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY,
-1000);
-// We'll be using a 512 bytes block size just for tests
-// so making sure the checksum bytes match it too.
-conf.setInt("io.bytes.per.checksum", 512);
 cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
 cluster.waitActive();
 
@@ -228,13 +215,7 @@ public class TestDeadNodeDetection {
 
   @Test
   public void testDeadNodeDetectionMaxDeadNodesProbeQueue() throws Exception {
-conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
-
conf.setLong(DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY,
-1000);
 conf.setInt(DFS_CLIENT_DEAD_NODE_DETECTION_DEAD_NODE_QUEUE_MAX_KEY, 1);
-// We'll be using a 512 bytes block size just for tests
-// so making sure the checksum bytes match it too.
-conf.setInt("io.bytes.per.checksum", 512);
 cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
 cluster.waitActive();
 
@@ -268,12 +249,7 @@ public class TestDeadNodeDetection {
 
   @Test
   public void testDeadNodeDetectionSuspectNode() throws Exception {
-conf = new HdfsConfiguration();
-conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
 conf.setInt(DFS_CLIENT_DEAD_NODE_DETECTION_SUSPECT_NODE_QUEUE_MAX_KEY, 1);
-// We'll be using a 512 bytes block size just for t

[hadoop] branch branch-2.10 updated: HDFS-14986. ReplicaCachingGetSpaceUsed throws ConcurrentModificationException. Contributed by Aiphago.

2019-11-27 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new 1a83415  HDFS-14986. ReplicaCachingGetSpaceUsed throws 
ConcurrentModificationException. Contributed by Aiphago.
1a83415 is described below

commit 1a834157602069ed82e29e380e1d660a10592daa
Author: Yiqun Lin 
AuthorDate: Thu Nov 28 10:43:35 2019 +0800

HDFS-14986. ReplicaCachingGetSpaceUsed throws 
ConcurrentModificationException. Contributed by Aiphago.

(cherry picked from commit 2b452b4e6063072b2bec491edd3f412eb7ac21f3)
---
 .../org/apache/hadoop/fs/CachingGetSpaceUsed.java  | 34 +++--
 .../server/datanode/fsdataset/FsDatasetSpi.java|  6 +++
 .../datanode/fsdataset/impl/FsDatasetImpl.java | 12 ++---
 .../fsdataset/impl/ReplicaCachingGetSpaceUsed.java |  1 +
 .../impl/TestReplicaCachingGetSpaceUsed.java   | 55 ++
 5 files changed, 98 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
index 92476d7..58dc82d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
@@ -47,6 +47,7 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   private final long jitter;
   private final String dirPath;
   private Thread refreshUsed;
+  private boolean shouldFirstRefresh;
 
   /**
* This is the constructor used by the builder.
@@ -79,16 +80,30 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
 this.refreshInterval = interval;
 this.jitter = jitter;
 this.used.set(initialUsed);
+this.shouldFirstRefresh = true;
   }
 
   void init() {
 if (used.get() < 0) {
   used.set(0);
+  if (!shouldFirstRefresh) {
+// Skip initial refresh operation, so we need to do first refresh
+// operation immediately in refresh thread.
+initRefeshThread(true);
+return;
+  }
   refresh();
 }
+initRefeshThread(false);
+  }
 
+  /**
+   * RunImmediately should set true, if we skip the first refresh.
+   * @param runImmediately The param default should be false.
+   */
+  private void initRefeshThread (boolean runImmediately) {
 if (refreshInterval > 0) {
-  refreshUsed = new Thread(new RefreshThread(this),
+  refreshUsed = new Thread(new RefreshThread(this, runImmediately),
   "refreshUsed-" + dirPath);
   refreshUsed.setDaemon(true);
   refreshUsed.start();
@@ -101,6 +116,14 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   protected abstract void refresh();
 
   /**
+   * Reset that if we need to do the first refresh.
+   * @param shouldFirstRefresh The flag value to set.
+   */
+  protected void setShouldFirstRefresh(boolean shouldFirstRefresh) {
+this.shouldFirstRefresh = shouldFirstRefresh;
+  }
+
+  /**
* @return an estimate of space used in the directory path.
*/
   @Override public long getUsed() throws IOException {
@@ -156,9 +179,11 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   private static final class RefreshThread implements Runnable {
 
 final CachingGetSpaceUsed spaceUsed;
+private boolean runImmediately;
 
-RefreshThread(CachingGetSpaceUsed spaceUsed) {
+RefreshThread(CachingGetSpaceUsed spaceUsed, boolean runImmediately) {
   this.spaceUsed = spaceUsed;
+  this.runImmediately = runImmediately;
 }
 
 @Override
@@ -176,7 +201,10 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   }
   // Make sure that after the jitter we didn't end up at 0.
   refreshInterval = Math.max(refreshInterval, 1);
-  Thread.sleep(refreshInterval);
+  if (!runImmediately) {
+Thread.sleep(refreshInterval);
+  }
+  runImmediately = false;
   // update the used variable
   spaceUsed.refresh();
 } catch (InterruptedException e) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
index 3f73e01..7953d45 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
@@ -662,5 +662,11 @@ public interface FsDat

[hadoop] branch branch-2 updated: HDFS-14986. ReplicaCachingGetSpaceUsed throws ConcurrentModificationException. Contributed by Aiphago.

2019-11-27 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new fc2cdb6  HDFS-14986. ReplicaCachingGetSpaceUsed throws 
ConcurrentModificationException. Contributed by Aiphago.
fc2cdb6 is described below

commit fc2cdb6cca8de95fb220c6a9f995406fa8b1d669
Author: Yiqun Lin 
AuthorDate: Thu Nov 28 10:43:35 2019 +0800

HDFS-14986. ReplicaCachingGetSpaceUsed throws 
ConcurrentModificationException. Contributed by Aiphago.

(cherry picked from commit 2b452b4e6063072b2bec491edd3f412eb7ac21f3)
---
 .../org/apache/hadoop/fs/CachingGetSpaceUsed.java  | 34 +++--
 .../server/datanode/fsdataset/FsDatasetSpi.java|  6 +++
 .../datanode/fsdataset/impl/FsDatasetImpl.java | 12 ++---
 .../fsdataset/impl/ReplicaCachingGetSpaceUsed.java |  1 +
 .../impl/TestReplicaCachingGetSpaceUsed.java   | 55 ++
 5 files changed, 98 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
index 92476d7..58dc82d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
@@ -47,6 +47,7 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   private final long jitter;
   private final String dirPath;
   private Thread refreshUsed;
+  private boolean shouldFirstRefresh;
 
   /**
* This is the constructor used by the builder.
@@ -79,16 +80,30 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
 this.refreshInterval = interval;
 this.jitter = jitter;
 this.used.set(initialUsed);
+this.shouldFirstRefresh = true;
   }
 
   void init() {
 if (used.get() < 0) {
   used.set(0);
+  if (!shouldFirstRefresh) {
+// The initial refresh was skipped, so the first refresh must be done
+// immediately in the refresh thread.
+initRefeshThread(true);
+return;
+  }
   refresh();
 }
+initRefeshThread(false);
+  }
 
+  /**
+   * runImmediately should be set to true if we skip the first refresh.
+   * @param runImmediately Whether the refresh thread should refresh immediately; defaults to false.
+   */
+  private void initRefeshThread (boolean runImmediately) {
 if (refreshInterval > 0) {
-  refreshUsed = new Thread(new RefreshThread(this),
+  refreshUsed = new Thread(new RefreshThread(this, runImmediately),
   "refreshUsed-" + dirPath);
   refreshUsed.setDaemon(true);
   refreshUsed.start();
@@ -101,6 +116,14 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   protected abstract void refresh();
 
   /**
+   * Reset whether we need to do the first refresh.
+   * @param shouldFirstRefresh The flag value to set.
+   */
+  protected void setShouldFirstRefresh(boolean shouldFirstRefresh) {
+this.shouldFirstRefresh = shouldFirstRefresh;
+  }
+
+  /**
* @return an estimate of space used in the directory path.
*/
   @Override public long getUsed() throws IOException {
@@ -156,9 +179,11 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   private static final class RefreshThread implements Runnable {
 
 final CachingGetSpaceUsed spaceUsed;
+private boolean runImmediately;
 
-RefreshThread(CachingGetSpaceUsed spaceUsed) {
+RefreshThread(CachingGetSpaceUsed spaceUsed, boolean runImmediately) {
   this.spaceUsed = spaceUsed;
+  this.runImmediately = runImmediately;
 }
 
 @Override
@@ -176,7 +201,10 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   }
   // Make sure that after the jitter we didn't end up at 0.
   refreshInterval = Math.max(refreshInterval, 1);
-  Thread.sleep(refreshInterval);
+  if (!runImmediately) {
+Thread.sleep(refreshInterval);
+  }
+  runImmediately = false;
   // update the used variable
   spaceUsed.refresh();
 } catch (InterruptedException e) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
index 3f73e01..7953d45 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
@@ -662,5 +662,11 @@ public interface FsDatasetSpi 
extend

[hadoop] branch trunk updated: HDFS-14986. ReplicaCachingGetSpaceUsed throws ConcurrentModificationException. Contributed by Aiphago.

2019-11-27 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2b452b4  HDFS-14986. ReplicaCachingGetSpaceUsed throws 
ConcurrentModificationException. Contributed by Aiphago.
2b452b4 is described below

commit 2b452b4e6063072b2bec491edd3f412eb7ac21f3
Author: Yiqun Lin 
AuthorDate: Thu Nov 28 10:43:35 2019 +0800

HDFS-14986. ReplicaCachingGetSpaceUsed throws 
ConcurrentModificationException. Contributed by Aiphago.
---
 .../org/apache/hadoop/fs/CachingGetSpaceUsed.java  | 34 +++--
 .../server/datanode/fsdataset/FsDatasetSpi.java|  6 +++
 .../datanode/fsdataset/impl/FsDatasetImpl.java | 12 ++---
 .../fsdataset/impl/ReplicaCachingGetSpaceUsed.java |  1 +
 .../impl/TestReplicaCachingGetSpaceUsed.java   | 55 ++
 5 files changed, 98 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
index 92476d7..58dc82d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
@@ -47,6 +47,7 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   private final long jitter;
   private final String dirPath;
   private Thread refreshUsed;
+  private boolean shouldFirstRefresh;
 
   /**
* This is the constructor used by the builder.
@@ -79,16 +80,30 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
 this.refreshInterval = interval;
 this.jitter = jitter;
 this.used.set(initialUsed);
+this.shouldFirstRefresh = true;
   }
 
   void init() {
 if (used.get() < 0) {
   used.set(0);
+  if (!shouldFirstRefresh) {
+// The initial refresh was skipped, so the first refresh must be done
+// immediately in the refresh thread.
+initRefeshThread(true);
+return;
+  }
   refresh();
 }
+initRefeshThread(false);
+  }
 
+  /**
+   * runImmediately should be set to true if we skip the first refresh.
+   * @param runImmediately Whether the refresh thread should refresh immediately; defaults to false.
+   */
+  private void initRefeshThread (boolean runImmediately) {
 if (refreshInterval > 0) {
-  refreshUsed = new Thread(new RefreshThread(this),
+  refreshUsed = new Thread(new RefreshThread(this, runImmediately),
   "refreshUsed-" + dirPath);
   refreshUsed.setDaemon(true);
   refreshUsed.start();
@@ -101,6 +116,14 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   protected abstract void refresh();
 
   /**
+   * Reset whether we need to do the first refresh.
+   * @param shouldFirstRefresh The flag value to set.
+   */
+  protected void setShouldFirstRefresh(boolean shouldFirstRefresh) {
+this.shouldFirstRefresh = shouldFirstRefresh;
+  }
+
+  /**
* @return an estimate of space used in the directory path.
*/
   @Override public long getUsed() throws IOException {
@@ -156,9 +179,11 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   private static final class RefreshThread implements Runnable {
 
 final CachingGetSpaceUsed spaceUsed;
+private boolean runImmediately;
 
-RefreshThread(CachingGetSpaceUsed spaceUsed) {
+RefreshThread(CachingGetSpaceUsed spaceUsed, boolean runImmediately) {
   this.spaceUsed = spaceUsed;
+  this.runImmediately = runImmediately;
 }
 
 @Override
@@ -176,7 +201,10 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   }
   // Make sure that after the jitter we didn't end up at 0.
   refreshInterval = Math.max(refreshInterval, 1);
-  Thread.sleep(refreshInterval);
+  if (!runImmediately) {
+Thread.sleep(refreshInterval);
+  }
+  runImmediately = false;
   // update the used variable
   spaceUsed.refresh();
 } catch (InterruptedException e) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
index 5b1e37a..e53cb37 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
@@ -661,5 +661,11 @@ public interface FsDatasetSpi<V extends FsVolumeSpi> 
extends FSDatasetMBean {
*/
   AutoCloseableLock acquireDatasetLock();
 
+  /**
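
For context: the hunks above let a subclass skip the blocking refresh() call in init() and have the new RefreshThread perform that first scan immediately instead. Below is a minimal, illustrative sketch of such a subclass; it assumes the GetSpaceUsed.Builder-based constructor and the protected setUsed(long) setter that CachingGetSpaceUsed exposes to its existing subclasses, and the class name and counter field are invented for the example (this is roughly what ReplicaCachingGetSpaceUsed does with in-memory replica sizes).

    import java.io.IOException;
    import java.util.concurrent.atomic.AtomicLong;

    import org.apache.hadoop.fs.CachingGetSpaceUsed;
    import org.apache.hadoop.fs.GetSpaceUsed;

    /** Illustrative subclass that computes usage from an in-memory counter. */
    public class InMemoryGetSpaceUsed extends CachingGetSpaceUsed {

      private final AtomicLong replicaBytes = new AtomicLong(0);

      public InMemoryGetSpaceUsed(GetSpaceUsed.Builder builder) throws IOException {
        super(builder);
        // Skip the synchronous first refresh in init(); the refresh thread will
        // call refresh() right away because runImmediately is passed as true.
        setShouldFirstRefresh(false);
      }

      @Override
      protected void refresh() {
        // A real implementation would aggregate replica sizes tracked in memory.
        setUsed(replicaBytes.get());
      }
    }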

[hadoop] branch trunk updated: HDFS-14649. Add suspect probe for DeadNodeDetector. Contributed by Lisheng Sun.

2019-11-26 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c8bef4d  HDFS-14649. Add suspect probe for DeadNodeDetector. 
Contributed by Lisheng Sun.
c8bef4d is described below

commit c8bef4d6a6d7d5affd00cff6ea4a2e2ef778050e
Author: Yiqun Lin 
AuthorDate: Wed Nov 27 10:57:20 2019 +0800

HDFS-14649. Add suspect probe for DeadNodeDetector. Contributed by Lisheng 
Sun.
---
 .../org/apache/hadoop/hdfs/DeadNodeDetector.java   | 169 +
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |  13 ++
 .../src/main/resources/hdfs-default.xml|  24 +++
 .../apache/hadoop/hdfs/TestDeadNodeDetection.java  | 104 +++--
 4 files changed, 264 insertions(+), 46 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
index 2fe7cf8..ce50547 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
@@ -48,8 +49,14 @@ import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD
 import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY;
 import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_THREADS_DEFAULT;
 import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_THREADS_KEY;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_DEFAULT;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_KEY;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_THREADS_DEFAULT;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_THREADS_KEY;
 import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_RPC_THREADS_DEFAULT;
 import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_RPC_THREADS_KEY;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_SUSPECT_NODE_QUEUE_MAX_DEFAULT;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_SUSPECT_NODE_QUEUE_MAX_KEY;
 import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_SOCKET_TIMEOUT_KEY;
 
 /**
@@ -83,13 +90,13 @@ public class DeadNodeDetector implements Runnable {
   private final Map<String, DatanodeInfo> deadNodes;
 
   /**
-   * Record dead nodes by one DFSInputStream. When dead node is not used by one
-   * DFSInputStream, remove it from dfsInputStreamNodes#DFSInputStream. If
-   * DFSInputStream does not include any dead node, remove DFSInputStream from
-   * dfsInputStreamNodes.
+   * Record suspect and dead nodes by one DFSInputStream. When node is not used
+   * by one DFSInputStream, remove it from suspectAndDeadNodes#DFSInputStream.
+   * If DFSInputStream does not include any node, remove DFSInputStream from
+   * suspectAndDeadNodes.
*/
   private final Map<DFSInputStream, HashSet<DatanodeInfo>>
-  dfsInputStreamNodes;
+  suspectAndDeadNodes;
 
   /**
* Datanodes that are being probed.
@@ -108,11 +115,21 @@ public class DeadNodeDetector implements Runnable {
   private long deadNodeDetectInterval = 0;
 
   /**
+   * Interval time in milliseconds for probing suspect node behavior.
+   */
+  private long suspectNodeDetectInterval = 0;
+
+  /**
* The max queue size of probing dead node.
*/
   private int maxDeadNodesProbeQueueLen = 0;
 
   /**
+   * The max queue size of probing suspect node.
+   */
+  private int maxSuspectNodesProbeQueueLen;
+
+  /**
* Connection timeout for probing dead node in milliseconds.
*/
   private long probeConnectionTimeoutMs;
@@ -123,16 +140,31 @@ public class DeadNodeDetector implements Runnable {
   private Queue<DatanodeInfo> deadNodesProbeQueue;
 
   /**
+   * The suspect node probe queue.
+   */
+  private Queue<DatanodeInfo> suspectNodesProbeQueue;
+
+  /**
* The thread pool of probing dead node.
*/
   private ExecutorService probeDeadNodesThreadPool;
 
   /**
+   * The thread pool of probing suspect n
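
The suspect-probe machinery introduced above is configured through the HdfsClientConfigKeys constants imported at the top of this patch (their defaults land in hdfs-default.xml per the diffstat). A hedged sketch of tuning them on a client Configuration; the numeric values below are purely illustrative, not the shipped defaults.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;

    public final class SuspectProbeTuning {
      public static Configuration tune(Configuration conf) {
        // Probe suspect nodes more frequently than confirmed dead nodes.
        conf.setLong(HdfsClientConfigKeys
            .DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_KEY, 300L);
        // More probe threads shortens the time to confirm or clear a suspect node.
        conf.setInt(HdfsClientConfigKeys
            .DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_THREADS_KEY, 10);
        // Bound the suspect queue so a burst of slow reads cannot grow it without limit.
        conf.setInt(HdfsClientConfigKeys
            .DFS_CLIENT_DEAD_NODE_DETECTION_SUSPECT_NODE_QUEUE_MAX_KEY, 1000);
        return conf;
      }

      private SuspectProbeTuning() {
      }
    }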

[hadoop] branch trunk updated: HDFS-14651. DeadNodeDetector checks dead node periodically. Contributed by Lisheng Sun.

2019-11-21 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 9b6906f  HDFS-14651. DeadNodeDetector checks dead node periodically. 
Contributed by Lisheng Sun.
9b6906f is described below

commit 9b6906fe914829f50076c2291dba59d425475d7b
Author: Yiqun Lin 
AuthorDate: Fri Nov 22 10:53:55 2019 +0800

HDFS-14651. DeadNodeDetector checks dead node periodically. Contributed by 
Lisheng Sun.
---
 .../java/org/apache/hadoop/hdfs/ClientContext.java |   2 +-
 .../org/apache/hadoop/hdfs/DeadNodeDetector.java   | 308 -
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |  22 ++
 .../src/main/resources/hdfs-default.xml|  40 +++
 .../apache/hadoop/hdfs/TestDeadNodeDetection.java  | 154 +--
 5 files changed, 504 insertions(+), 22 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
index abb039c..6ee5277 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
@@ -150,7 +150,7 @@ public class ClientContext {
 conf.getWriteByteArrayManagerConf());
 this.deadNodeDetectionEnabled = conf.isDeadNodeDetectionEnabled();
 if (deadNodeDetectionEnabled && deadNodeDetector == null) {
-  deadNodeDetector = new DeadNodeDetector(name);
+  deadNodeDetector = new DeadNodeDetector(name, config);
   deadNodeDetectorThr = new Daemon(deadNodeDetector);
   deadNodeDetectorThr.start();
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
index 1ac29a7..2fe7cf8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
@@ -17,13 +17,40 @@
  */
 package org.apache.hadoop.hdfs;
 
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.DatanodeLocalInfo;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.Time;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.util.HashSet;
+import java.util.Map;
+import java.util.Queue;
 import java.util.Set;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.Callable;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_DEAD_NODE_QUEUE_MAX_DEFAULT;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_DEAD_NODE_QUEUE_MAX_KEY;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_CONNECTION_TIMEOUT_MS_DEFAULT;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_CONNECTION_TIMEOUT_MS_KEY;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_DEFAULT;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_THREADS_DEFAULT;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_THREADS_KEY;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_RPC_THREADS_DEFAULT;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_DEAD_NODE_DETECTION_RPC_THREADS_KEY;
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_SOCKET_TIMEOUT_KEY;
 
 /**
  * Detect the dead nodes in advance, and share this information among all the
@@ -48,10 +75,12 @@ public class DeadNodeDetector implements Runnable {
*/
   private String name;
 
+  private Configuration conf;
+
   /**
* Dead nodes shared by all the DFSInputStreams of the client.
*/
-  private final ConcurrentHashMap<String, DatanodeInfo> deadNodes;
+  private final Map d
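
The periodic dead-node re-check added here is likewise driven by the client keys imported above, with defaults published in hdfs-default.xml according to the diffstat. A small illustrative sketch; the values are examples only, not the shipped defaults.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;

    public final class DeadNodeProbeTuning {
      public static Configuration tune(Configuration conf) {
        // How often a node already marked dead is re-probed to see if it recovered.
        conf.setLong(HdfsClientConfigKeys
            .DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY, 60000L);
        // Per-probe connection timeout when calling the datanode.
        conf.setLong(HdfsClientConfigKeys
            .DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_CONNECTION_TIMEOUT_MS_KEY, 20000L);
        // Bound the dead-node probe queue and the RPC thread pool behind it.
        conf.setInt(HdfsClientConfigKeys
            .DFS_CLIENT_DEAD_NODE_DETECTION_DEAD_NODE_QUEUE_MAX_KEY, 100);
        conf.setInt(HdfsClientConfigKeys
            .DFS_CLIENT_DEAD_NODE_DETECTION_RPC_THREADS_KEY, 20);
        return conf;
      }

      private DeadNodeProbeTuning() {
      }
    }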

[hadoop] branch trunk updated: HDFS-14648. Implement DeadNodeDetector basic model. Contributed by Lisheng Sun.

2019-11-15 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b3119b9  HDFS-14648. Implement DeadNodeDetector basic model. 
Contributed by Lisheng Sun.
b3119b9 is described below

commit b3119b9ab60a19d624db476c4e1c53410870c7a6
Author: Yiqun Lin 
AuthorDate: Sat Nov 16 11:32:41 2019 +0800

HDFS-14648. Implement DeadNodeDetector basic model. Contributed by Lisheng 
Sun.
---
 .../java/org/apache/hadoop/hdfs/ClientContext.java |  49 ++
 .../java/org/apache/hadoop/hdfs/DFSClient.java |  98 +++
 .../org/apache/hadoop/hdfs/DFSInputStream.java |  98 +++
 .../apache/hadoop/hdfs/DFSStripedInputStream.java  |   6 +-
 .../org/apache/hadoop/hdfs/DeadNodeDetector.java   | 185 +
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |   4 +
 .../hadoop/hdfs/client/impl/DfsClientConf.java |  31 +++-
 .../src/main/resources/hdfs-default.xml|   9 +
 .../apache/hadoop/hdfs/TestDeadNodeDetection.java  | 183 
 9 files changed, 617 insertions(+), 46 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
index ad1b359..abb039c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
@@ -40,6 +40,7 @@ import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.net.NetworkTopology;
 import org.apache.hadoop.net.NodeBase;
 import org.apache.hadoop.net.ScriptBasedMapping;
+import org.apache.hadoop.util.Daemon;
 import org.apache.hadoop.util.ReflectionUtils;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -118,6 +119,19 @@ public class ClientContext {
   private NodeBase clientNode;
   private boolean topologyResolutionEnabled;
 
+  private Daemon deadNodeDetectorThr = null;
+
+  /**
+   * The switch to DeadNodeDetector.
+   */
+  private boolean deadNodeDetectionEnabled = false;
+
+  /**
+   * Detect the dead datanodes in advance, and share this information among all
+   * the DFSInputStreams in the same client.
+   */
+  private DeadNodeDetector deadNodeDetector = null;
+
   private ClientContext(String name, DfsClientConf conf,
   Configuration config) {
 final ShortCircuitConf scConf = conf.getShortCircuitConf();
@@ -134,6 +148,12 @@ public class ClientContext {
 
 this.byteArrayManager = ByteArrayManager.newInstance(
 conf.getWriteByteArrayManagerConf());
+this.deadNodeDetectionEnabled = conf.isDeadNodeDetectionEnabled();
+if (deadNodeDetectionEnabled && deadNodeDetector == null) {
+  deadNodeDetector = new DeadNodeDetector(name);
+  deadNodeDetectorThr = new Daemon(deadNodeDetector);
+  deadNodeDetectorThr.start();
+}
 initTopologyResolution(config);
   }
 
@@ -251,4 +271,33 @@ public class ClientContext {
 datanodeInfo.getNetworkLocation());
 return NetworkTopology.getDistanceByPath(clientNode, node);
   }
+
+  /**
+   * The switch to DeadNodeDetector. If true, DeadNodeDetector is available.
+   */
+  public boolean isDeadNodeDetectionEnabled() {
+return deadNodeDetectionEnabled;
+  }
+
+  /**
+   * Obtain DeadNodeDetector of the current client.
+   */
+  public DeadNodeDetector getDeadNodeDetector() {
+return deadNodeDetector;
+  }
+
+  /**
+   * Close dead node detector thread.
+   */
+  public void stopDeadNodeDetectorThread() {
+if (deadNodeDetectorThr != null) {
+  deadNodeDetectorThr.interrupt();
+  try {
+deadNodeDetectorThr.join(3000);
+  } catch (InterruptedException e) {
+LOG.warn("Encountered exception while waiting to join on dead " +
+"node detector thread.", e);
+  }
+}
+  }
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 56280f3..c19aa96 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -44,6 +44,8 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.SynchronousQueue;
 import java.util.concurrent.ThreadLocalRandom;
 import java.util.concurrent.ThreadPoolExecutor;
@@ -631,6 +633,8 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
   // lease renewal sto
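
With the basic model above, ClientContext only creates and starts the DeadNodeDetector daemon when DfsClientConf reports dead node detection as enabled. A hedged end-to-end sketch follows; the string key "dfs.client.deadnode.detection.enabled" is an assumption about the switch this patch adds to HdfsClientConfigKeys and hdfs-default.xml, and the path is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public final class DeadNodeDetectionExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed name of the new switch; the detector daemon is off by default.
        conf.setBoolean("dfs.client.deadnode.detection.enabled", true);

        try (FileSystem fs = FileSystem.get(conf);
             FSDataInputStream in = fs.open(new Path("/tmp/example.txt"))) {
          // Every DFSInputStream created by this client now shares the dead-node
          // view maintained by the DeadNodeDetector daemon started in ClientContext.
          byte[] buf = new byte[4096];
          int read = in.read(buf);
          System.out.println("read " + read + " bytes");
        }
      }

      private DeadNodeDetectionExample() {
      }
    }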

[hadoop] branch branch-3.0 updated: HDFS-14313. Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du. Contributed by Lisheng Sun.

2019-08-07 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new ea53306  HDFS-14313. Get hdfs used space from 
FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du. Contributed by 
Lisheng Sun.
ea53306 is described below

commit ea5330643ce353cc34ed81bb988cc744502a15e3
Author: Yiqun Lin 
AuthorDate: Wed Aug 7 23:05:09 2019 +0800

HDFS-14313. Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in 
memory instead of df/du. Contributed by Lisheng Sun.
---
 .../hadoop/fs/CommonConfigurationKeysPublic.java   |  19 +++
 .../java/org/apache/hadoop/fs/GetSpaceUsed.java|  31 +++--
 .../src/main/resources/core-default.xml|  21 +++
 .../server/datanode/FSCachingGetSpaceUsed.java |  82 
 .../server/datanode/fsdataset/FsDatasetSpi.java|   2 +
 .../datanode/fsdataset/impl/BlockPoolSlice.java|  11 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java |  13 ++
 .../fsdataset/impl/ReplicaCachingGetSpaceUsed.java | 108 +++
 .../hdfs/server/datanode/SimulatedFSDataset.java   |  12 ++
 .../datanode/extdataset/ExternalDatasetImpl.java   |   6 +
 .../impl/TestReplicaCachingGetSpaceUsed.java   | 148 +
 11 files changed, 437 insertions(+), 16 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
index 95fccec..8d6a043 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
@@ -72,6 +72,25 @@ public class CommonConfigurationKeysPublic {
   public static final String  FS_DU_INTERVAL_KEY = "fs.du.interval";
   /** Default value for FS_DU_INTERVAL_KEY */
   public static final longFS_DU_INTERVAL_DEFAULT = 60;
+
+  /**
+   * @see
+   * 
+   * core-default.xml
+   */
+  public static final String FS_GETSPACEUSED_CLASSNAME =
+  "fs.getspaceused.classname";
+
+  /**
+   * @see
+   * 
+   * core-default.xml
+   */
+  public static final String FS_GETSPACEUSED_JITTER_KEY =
+  "fs.getspaceused.jitterMillis";
+  /** Default value for FS_GETSPACEUSED_JITTER_KEY */
+  public static final long FS_GETSPACEUSED_JITTER_DEFAULT = 60000;
+
   /**
* @see
* 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
index 4d1f9ef..3439317 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
@@ -26,7 +26,6 @@ import java.io.File;
 import java.io.IOException;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
-import java.util.concurrent.TimeUnit;
 
 public interface GetSpaceUsed {
 
@@ -36,20 +35,16 @@ public interface GetSpaceUsed {
   /**
* The builder class
*/
-  final class Builder {
+  class Builder {
 static final Logger LOG = LoggerFactory.getLogger(Builder.class);
 
-static final String CLASSNAME_KEY = "fs.getspaceused.classname";
-static final String JITTER_KEY = "fs.getspaceused.jitterMillis";
-static final long DEFAULT_JITTER = TimeUnit.MINUTES.toMillis(1);
-
-
 private Configuration conf;
 private Class<? extends GetSpaceUsed> klass = null;
 private File path = null;
 private Long interval = null;
 private Long jitter = null;
 private Long initialUsed = null;
+private Constructor<? extends GetSpaceUsed> cons;
 
 public Configuration getConf() {
   return conf;
@@ -89,7 +84,8 @@ public interface GetSpaceUsed {
   if (conf == null) {
 return result;
   }
-  return conf.getClass(CLASSNAME_KEY, result, GetSpaceUsed.class);
+  return conf.getClass(CommonConfigurationKeys.FS_GETSPACEUSED_CLASSNAME,
+  result, GetSpaceUsed.class);
 }
 
 public Builder setKlass(Class<? extends GetSpaceUsed> klass) {
@@ -124,9 +120,10 @@ public interface GetSpaceUsed {
 Configuration configuration = this.conf;
 
 if (configuration == null) {
-  return DEFAULT_JITTER;
+  return CommonConfigurationKeys.FS_GETSPACEUSED_JITTER_DEFAULT;
 }
-return configuration.getLong(JITTER_KEY, DEFAULT_JITTER);
+return 
configuration.getLong(CommonConfigurationKeys.FS_GETSPACEUSED_JITTER_KEY,
+CommonConfigurationKeys.FS_GETSPACEUSED_JITTER_DEFAULT);
   }
   return jitter;
 }
@@ -136,11 +133,21 @@ public interface GetSpaceUsed {
  

[hadoop] branch branch-2 updated: HDFS-14313. Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du. Contributed by Lisheng Sun.

2019-08-07 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 63531be  HDFS-14313. Get hdfs used space from 
FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du. Contributed by 
Lisheng Sun.
63531be is described below

commit 63531be5057f60c1b16d02b88058608470ed566f
Author: Yiqun Lin 
AuthorDate: Wed Aug 7 23:01:51 2019 +0800

HDFS-14313. Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in 
memory instead of df/du. Contributed by Lisheng Sun.
---
 .../hadoop/fs/CommonConfigurationKeysPublic.java   |  19 +++
 .../java/org/apache/hadoop/fs/GetSpaceUsed.java|  31 +++--
 .../src/main/resources/core-default.xml|  21 +++
 .../server/datanode/FSCachingGetSpaceUsed.java |  82 
 .../server/datanode/fsdataset/FsDatasetSpi.java|   2 +
 .../datanode/fsdataset/impl/BlockPoolSlice.java|  11 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java |  13 ++
 .../fsdataset/impl/ReplicaCachingGetSpaceUsed.java | 108 +++
 .../hdfs/server/datanode/SimulatedFSDataset.java   |  12 ++
 .../datanode/extdataset/ExternalDatasetImpl.java   |   6 +
 .../impl/TestReplicaCachingGetSpaceUsed.java   | 148 +
 11 files changed, 437 insertions(+), 16 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
index 9adb9ba..04bd936 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
@@ -84,6 +84,25 @@ public class CommonConfigurationKeysPublic {
   public static final String  FS_DU_INTERVAL_KEY = "fs.du.interval";
   /** Default value for FS_DU_INTERVAL_KEY */
   public static final longFS_DU_INTERVAL_DEFAULT = 60;
+
+  /**
+   * @see
+   * 
+   * core-default.xml
+   */
+  public static final String FS_GETSPACEUSED_CLASSNAME =
+  "fs.getspaceused.classname";
+
+  /**
+   * @see
+   * 
+   * core-default.xml
+   */
+  public static final String FS_GETSPACEUSED_JITTER_KEY =
+  "fs.getspaceused.jitterMillis";
+  /** Default value for FS_GETSPACEUSED_JITTER_KEY */
+  public static final long FS_GETSPACEUSED_JITTER_DEFAULT = 60000;
+
   /**
* @see
* 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
index 4d1f9ef..3439317 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
@@ -26,7 +26,6 @@ import java.io.File;
 import java.io.IOException;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
-import java.util.concurrent.TimeUnit;
 
 public interface GetSpaceUsed {
 
@@ -36,20 +35,16 @@ public interface GetSpaceUsed {
   /**
* The builder class
*/
-  final class Builder {
+  class Builder {
 static final Logger LOG = LoggerFactory.getLogger(Builder.class);
 
-static final String CLASSNAME_KEY = "fs.getspaceused.classname";
-static final String JITTER_KEY = "fs.getspaceused.jitterMillis";
-static final long DEFAULT_JITTER = TimeUnit.MINUTES.toMillis(1);
-
-
 private Configuration conf;
 private Class<? extends GetSpaceUsed> klass = null;
 private File path = null;
 private Long interval = null;
 private Long jitter = null;
 private Long initialUsed = null;
+private Constructor<? extends GetSpaceUsed> cons;
 
 public Configuration getConf() {
   return conf;
@@ -89,7 +84,8 @@ public interface GetSpaceUsed {
   if (conf == null) {
 return result;
   }
-  return conf.getClass(CLASSNAME_KEY, result, GetSpaceUsed.class);
+  return conf.getClass(CommonConfigurationKeys.FS_GETSPACEUSED_CLASSNAME,
+  result, GetSpaceUsed.class);
 }
 
 public Builder setKlass(Class<? extends GetSpaceUsed> klass) {
@@ -124,9 +120,10 @@ public interface GetSpaceUsed {
 Configuration configuration = this.conf;
 
 if (configuration == null) {
-  return DEFAULT_JITTER;
+  return CommonConfigurationKeys.FS_GETSPACEUSED_JITTER_DEFAULT;
 }
-return configuration.getLong(JITTER_KEY, DEFAULT_JITTER);
+return 
configuration.getLong(CommonConfigurationKeys.FS_GETSPACEUSED_JITTER_KEY,
+CommonConfigurationKeys.FS_GETSPACEUSED_JITTER_DEFAULT);
   }
   return jitter;
 }
@@ -136,11 +133,21 @@ public interface GetSpaceUsed {
   ret

[hadoop] branch trunk updated: HDFS-14313. Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du. Contributed by Lisheng Sun.

2019-08-06 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a5bb1e8  HDFS-14313. Get hdfs used space from 
FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du. Contributed by 
Lisheng Sun.
a5bb1e8 is described below

commit a5bb1e8ee871dfff77d0f6921b13c8ffb50e
Author: Yiqun Lin 
AuthorDate: Wed Aug 7 10:18:11 2019 +0800

HDFS-14313. Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in 
memory instead of df/du. Contributed by Lisheng Sun.
---
 .../hadoop/fs/CommonConfigurationKeysPublic.java   |  19 +++
 .../java/org/apache/hadoop/fs/GetSpaceUsed.java|  31 +++--
 .../src/main/resources/core-default.xml|  20 +++
 .../server/datanode/FSCachingGetSpaceUsed.java |  82 
 .../server/datanode/fsdataset/FsDatasetSpi.java|   2 +
 .../datanode/fsdataset/impl/BlockPoolSlice.java|  13 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java |  13 ++
 .../fsdataset/impl/ReplicaCachingGetSpaceUsed.java | 108 +++
 .../hdfs/server/datanode/SimulatedFSDataset.java   |  11 ++
 .../datanode/extdataset/ExternalDatasetImpl.java   |   6 +
 .../impl/TestReplicaCachingGetSpaceUsed.java   | 148 +
 11 files changed, 437 insertions(+), 16 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
index 7e3e47c..b24ce9e 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
@@ -72,6 +72,25 @@ public class CommonConfigurationKeysPublic {
   public static final String  FS_DU_INTERVAL_KEY = "fs.du.interval";
   /** Default value for FS_DU_INTERVAL_KEY */
   public static final longFS_DU_INTERVAL_DEFAULT = 60;
+
+  /**
+   * @see
+   * 
+   * core-default.xml
+   */
+  public static final String FS_GETSPACEUSED_CLASSNAME =
+  "fs.getspaceused.classname";
+
+  /**
+   * @see
+   * 
+   * core-default.xml
+   */
+  public static final String FS_GETSPACEUSED_JITTER_KEY =
+  "fs.getspaceused.jitterMillis";
+  /** Default value for FS_GETSPACEUSED_JITTER_KEY */
+  public static final long FS_GETSPACEUSED_JITTER_DEFAULT = 60000;
+
   /**
* @see
* 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
index 4d1f9ef..3439317 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
@@ -26,7 +26,6 @@ import java.io.File;
 import java.io.IOException;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
-import java.util.concurrent.TimeUnit;
 
 public interface GetSpaceUsed {
 
@@ -36,20 +35,16 @@ public interface GetSpaceUsed {
   /**
* The builder class
*/
-  final class Builder {
+  class Builder {
 static final Logger LOG = LoggerFactory.getLogger(Builder.class);
 
-static final String CLASSNAME_KEY = "fs.getspaceused.classname";
-static final String JITTER_KEY = "fs.getspaceused.jitterMillis";
-static final long DEFAULT_JITTER = TimeUnit.MINUTES.toMillis(1);
-
-
 private Configuration conf;
 private Class<? extends GetSpaceUsed> klass = null;
 private File path = null;
 private Long interval = null;
 private Long jitter = null;
 private Long initialUsed = null;
+private Constructor<? extends GetSpaceUsed> cons;
 
 public Configuration getConf() {
   return conf;
@@ -89,7 +84,8 @@ public interface GetSpaceUsed {
   if (conf == null) {
 return result;
   }
-  return conf.getClass(CLASSNAME_KEY, result, GetSpaceUsed.class);
+  return conf.getClass(CommonConfigurationKeys.FS_GETSPACEUSED_CLASSNAME,
+  result, GetSpaceUsed.class);
 }
 
 public Builder setKlass(Class<? extends GetSpaceUsed> klass) {
@@ -124,9 +120,10 @@ public interface GetSpaceUsed {
 Configuration configuration = this.conf;
 
 if (configuration == null) {
-  return DEFAULT_JITTER;
+  return CommonConfigurationKeys.FS_GETSPACEUSED_JITTER_DEFAULT;
 }
-return configuration.getLong(JITTER_KEY, DEFAULT_JITTER);
+return 
configuration.getLong(CommonConfigurationKeys.FS_GETSPACEUSED_JITTER_KEY,
+CommonConfigurationKeys.FS_GETSPACEUSED_JITTER_DEFAULT);
   }
   return jitter;
 }
@@ -136,11 +133,21 @@ public interface GetSpaceUsed {
   ret
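
The new CommonConfigurationKeysPublic constants above replace the string literals that used to live in GetSpaceUsed.Builder. A minimal sketch of selecting and building an implementation through them, assuming the existing Builder setters (setConf, setPath, build) keep their names; DU here is just the stock implementation, and a DataNode would instead point fs.getspaceused.classname at the ReplicaCachingGetSpaceUsed class added under fsdataset/impl in this change.

    import java.io.File;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.CommonConfigurationKeys;
    import org.apache.hadoop.fs.GetSpaceUsed;

    public final class GetSpaceUsedExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Pick the implementation via the new public key; DU remains the default.
        conf.set(CommonConfigurationKeys.FS_GETSPACEUSED_CLASSNAME,
            "org.apache.hadoop.fs.DU");
        // Spread refresh times by up to one minute of jitter.
        conf.setLong(CommonConfigurationKeys.FS_GETSPACEUSED_JITTER_KEY, 60000L);

        GetSpaceUsed usage = new GetSpaceUsed.Builder()
            .setConf(conf)
            .setPath(new File("/tmp"))
            .build();
        System.out.println("used bytes: " + usage.getUsed());
      }

      private GetSpaceUsedExample() {
      }
    }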

[hadoop] branch trunk updated: HDFS-14410. Make Dynamometer documentation properly compile onto the Hadoop site. Contributed by Erik Krogen.

2019-07-11 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 5043840  HDFS-14410. Make Dynamometer documentation properly compile 
onto the Hadoop site. Contributed by Erik Krogen.
5043840 is described below

commit 5043840b1daeb80c1e3180ba7df604830cf69ea4
Author: Yiqun Lin 
AuthorDate: Thu Jul 11 23:47:27 2019 +0800

HDFS-14410. Make Dynamometer documentation properly compile onto the Hadoop 
site. Contributed by Erik Krogen.
---
 hadoop-project/src/site/site.xml   |   1 +
 .../src/site/markdown/Dynamometer.md   | 299 +
 .../src/site/resources/css/site.css|  30 +++
 .../images/dynamometer-architecture-infra.png  | Bin 0 -> 123874 bytes
 .../images/dynamometer-architecture-replay.png | Bin 0 -> 159507 bytes
 5 files changed, 330 insertions(+)

diff --git a/hadoop-project/src/site/site.xml b/hadoop-project/src/site/site.xml
index bf6bd26..a790369 100644
--- a/hadoop-project/src/site/site.xml
+++ b/hadoop-project/src/site/site.xml
@@ -215,6 +215,7 @@
   
   
   
+  
 
 
 
diff --git a/hadoop-tools/hadoop-dynamometer/src/site/markdown/Dynamometer.md 
b/hadoop-tools/hadoop-dynamometer/src/site/markdown/Dynamometer.md
new file mode 100644
index 000..39dd0db
--- /dev/null
+++ b/hadoop-tools/hadoop-dynamometer/src/site/markdown/Dynamometer.md
@@ -0,0 +1,299 @@
+
+
+# Dynamometer Guide
+
+
+
+## Overview
+
+Dynamometer is a tool to performance test Hadoop's HDFS NameNode. The intent 
is to provide a
+real-world environment by initializing the NameNode against a production file 
system image and replaying
+a production workload collected via e.g. the NameNode's audit logs. This 
allows for replaying a workload
+which is not only similar in characteristic to that experienced in production, 
but actually identical.
+
+Dynamometer will launch a YARN application which starts a single NameNode and 
a configurable number of
+DataNodes, simulating an entire HDFS cluster as a single application. There is 
an additional `workload`
+job run as a MapReduce job which accepts audit logs as input and uses the 
information contained within to
+submit matching requests to the NameNode, inducing load on the service.
+
+Dynamometer can execute this same workload against different Hadoop versions 
or with different
+configurations, allowing for the testing of configuration tweaks and code 
changes at scale without the
+necessity of deploying to a real large-scale cluster.
+
+Throughout this documentation, we will use "Dyno-HDFS", "Dyno-NN", and 
"Dyno-DN" to refer to the HDFS
+cluster, NameNode, and DataNodes (respectively) which are started _inside of_ 
a Dynamometer application.
+Terms like HDFS, YARN, and NameNode used without qualification refer to the 
existing infrastructure on
+top of which Dynamometer is run.
+
+For more details on how Dynamometer works, as opposed to how to use it, see 
the Architecture section
+at the end of this page.
+
+## Requirements
+
+Dynamometer is based around YARN applications, so an existing YARN cluster 
will be required for execution.
+It also requires an accompanying HDFS instance to store some temporary files 
for communication.
+
+## Building
+
+Dynamometer consists of three main components, each one in its own module:
+
+* Infrastructure (`dynamometer-infra`): This is the YARN application which 
starts a Dyno-HDFS cluster.
+* Workload (`dynamometer-workload`): This is the MapReduce job which replays 
audit logs.
+* Block Generator (`dynamometer-blockgen`): This is a MapReduce job used to 
generate input files for each Dyno-DN; its
+  execution is a prerequisite step to running the infrastructure application.
+
+The compiled version of all of these components will be included in a standard 
Hadoop distribution.
+You can find them in the packaged distribution within 
`share/hadoop/tools/dynamometer`.
+
+## Setup Steps
+
+Before launching a Dynamometer application, there are a number of setup steps 
that must be completed,
+instructing Dynamometer what configurations to use, what version to use, what 
fsimage to use when
+loading, etc. These steps can be performed a single time to put everything in 
place, and then many
+Dynamometer executions can be performed against them with minor tweaks to 
measure variations.
+
+Scripts discussed below can be found in the 
`share/hadoop/tools/dynamometer/dynamometer-{infra,workload,blockgen}/bin`
+directories of the distribution. The corresponding Java JAR files can be found 
in the `share/hadoop/tools/lib/` directory.
+References to bin files below assume that the current working directory is 
`share/hadoop/tools/dynamometer`.
+
+### Step 1: Preparing Requisite Files
+
+A number of steps are required in advance of starting your fir

[hadoop] branch branch-2 updated: HDFS-14632. Reduce useless #getNumLiveDataNodes call in SafeModeMonitor. Contributed by He Xiaoqiao.

2019-07-09 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 78aebee  HDFS-14632. Reduce useless #getNumLiveDataNodes call in 
SafeModeMonitor. Contributed by He Xiaoqiao.
78aebee is described below

commit 78aebee5c5f2cf8260137f29ff9be05f0e5b33b9
Author: Yiqun Lin 
AuthorDate: Wed Jul 10 10:53:34 2019 +0800

HDFS-14632. Reduce useless #getNumLiveDataNodes call in SafeModeMonitor. 
Contributed by He Xiaoqiao.

(cherry picked from commit 993dc8726b7d40ac832ae3e23b64e8541b62c4bd)
---
 .../blockmanagement/BlockManagerSafeMode.java  | 22 +---
 .../java/org/apache/hadoop/hdfs/TestSafeMode.java  | 10 
 .../blockmanagement/TestBlockManagerSafeMode.java  | 25 +++
 .../hdfs/server/namenode/ha/TestHASafeMode.java| 29 ++
 4 files changed, 62 insertions(+), 24 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
index ae2f942..5271004 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
@@ -306,16 +306,20 @@ class BlockManagerSafeMode {
   }
 }
 
-int numLive = blockManager.getDatanodeManager().getNumLiveDataNodes();
-if (numLive < datanodeThreshold) {
-  msg += String.format(
-  "The number of live datanodes %d needs an additional %d live "
-  + "datanodes to reach the minimum number %d.%n",
-  numLive, (datanodeThreshold - numLive), datanodeThreshold);
+if (datanodeThreshold > 0) {
+  int numLive = blockManager.getDatanodeManager().getNumLiveDataNodes();
+  if (numLive < datanodeThreshold) {
+msg += String.format(
+"The number of live datanodes %d needs an additional %d live "
++ "datanodes to reach the minimum number %d.%n",
+numLive, (datanodeThreshold - numLive), datanodeThreshold);
+  } else {
+msg += String.format("The number of live datanodes %d has reached "
++ "the minimum number %d. ",
+numLive, datanodeThreshold);
+  }
 } else {
-  msg += String.format("The number of live datanodes %d has reached "
-  + "the minimum number %d. ",
-  numLive, datanodeThreshold);
+  msg += "The minimum number of live datanodes is not required. ";
 }
 
 if (getBytesInFuture() > 0) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
index 08926f3..8325a28 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
@@ -199,11 +199,11 @@ public class TestSafeMode {
 final NameNode nn = cluster.getNameNode();
 
 String status = nn.getNamesystem().getSafemode();
-assertEquals("Safe mode is ON. The reported blocks 0 needs additional " +
-"14 blocks to reach the threshold 0.9990 of total blocks 15." + 
NEWLINE +
-"The number of live datanodes 0 has reached the minimum number 0. " +
-"Safe mode will be turned off automatically once the thresholds " +
-"have been reached.", status);
+assertEquals("Safe mode is ON. The reported blocks 0 needs additional "
++ "14 blocks to reach the threshold 0.9990 of total blocks 15."
++ NEWLINE + "The minimum number of live datanodes is not required. "
++ "Safe mode will be turned off automatically once the thresholds have 
"
++ "been reached.", status);
 assertFalse("Mis-replicated block queues should not be initialized " +
 "until threshold is crossed",
 NameNodeAdapter.safeModeInitializedReplQueues(nn));
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java
index b7e8f6d..eb58f5c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java
+++ 
b/hadoop-hdfs

[hadoop] branch trunk updated: HDFS-14632. Reduce useless #getNumLiveDataNodes call in SafeModeMonitor. Contributed by He Xiaoqiao.

2019-07-09 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 993dc87  HDFS-14632. Reduce useless #getNumLiveDataNodes call in 
SafeModeMonitor. Contributed by He Xiaoqiao.
993dc87 is described below

commit 993dc8726b7d40ac832ae3e23b64e8541b62c4bd
Author: Yiqun Lin 
AuthorDate: Wed Jul 10 10:53:34 2019 +0800

HDFS-14632. Reduce useless #getNumLiveDataNodes call in SafeModeMonitor. 
Contributed by He Xiaoqiao.
---
 .../blockmanagement/BlockManagerSafeMode.java  | 22 +---
 .../java/org/apache/hadoop/hdfs/TestSafeMode.java  | 10 
 .../blockmanagement/TestBlockManagerSafeMode.java  | 25 +++
 .../hdfs/server/namenode/ha/TestHASafeMode.java| 29 ++
 4 files changed, 62 insertions(+), 24 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
index 1d7e69b..3f59b64 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
@@ -309,16 +309,20 @@ class BlockManagerSafeMode {
   }
 }
 
-int numLive = blockManager.getDatanodeManager().getNumLiveDataNodes();
-if (numLive < datanodeThreshold) {
-  msg += String.format(
-  "The number of live datanodes %d needs an additional %d live "
-  + "datanodes to reach the minimum number %d.%n",
-  numLive, (datanodeThreshold - numLive), datanodeThreshold);
+if (datanodeThreshold > 0) {
+  int numLive = blockManager.getDatanodeManager().getNumLiveDataNodes();
+  if (numLive < datanodeThreshold) {
+msg += String.format(
+"The number of live datanodes %d needs an additional %d live "
++ "datanodes to reach the minimum number %d.%n",
+numLive, (datanodeThreshold - numLive), datanodeThreshold);
+  } else {
+msg += String.format("The number of live datanodes %d has reached "
++ "the minimum number %d. ",
+numLive, datanodeThreshold);
+  }
 } else {
-  msg += String.format("The number of live datanodes %d has reached "
-  + "the minimum number %d. ",
-  numLive, datanodeThreshold);
+  msg += "The minimum number of live datanodes is not required. ";
 }
 
 if (getBytesInFuture() > 0) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
index 0fde81e..7fbf222 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
@@ -201,11 +201,11 @@ public class TestSafeMode {
 final NameNode nn = cluster.getNameNode();
 
 String status = nn.getNamesystem().getSafemode();
-assertEquals("Safe mode is ON. The reported blocks 0 needs additional " +
-"14 blocks to reach the threshold 0.9990 of total blocks 15." + 
NEWLINE +
-"The number of live datanodes 0 has reached the minimum number 0. " +
-"Safe mode will be turned off automatically once the thresholds " +
-"have been reached.", status);
+assertEquals("Safe mode is ON. The reported blocks 0 needs additional "
++ "14 blocks to reach the threshold 0.9990 of total blocks 15."
++ NEWLINE + "The minimum number of live datanodes is not required. "
++ "Safe mode will be turned off automatically once the thresholds have 
"
++ "been reached.", status);
 assertFalse("Mis-replicated block queues should not be initialized " +
 "until threshold is crossed",
 NameNodeAdapter.safeModeInitializedReplQueues(nn));
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java
index fd224ea..5cf094d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmana
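
For reference: the datanodeThreshold consulted above is the NameNode's minimum-live-datanodes setting, commonly dfs.namenode.safemode.min.datanodes (the key name is stated from general HDFS configuration knowledge, not from this patch, and defaults to 0). With this change, a threshold of zero skips the getNumLiveDataNodes() call entirely and the safe mode tip reports that the minimum number of live datanodes is not required. A tiny hedged sketch:

    import org.apache.hadoop.conf.Configuration;

    public final class SafeModeThresholdExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Assumed key name; 0 (the default) now bypasses the live-datanode count
        // while BlockManagerSafeMode composes the safe mode status message.
        conf.setInt("dfs.namenode.safemode.min.datanodes", 0);
        // Requiring at least one live datanode restores the numLive/threshold message:
        // conf.setInt("dfs.namenode.safemode.min.datanodes", 1);
        System.out.println(conf.getInt("dfs.namenode.safemode.min.datanodes", 0));
      }
    }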

[hadoop] branch trunk updated: HDDS-1189. Recon Aggregate DB schema and ORM. Contributed by Siddharth Wagle.

2019-04-04 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a92806d  HDDS-1189. Recon Aggregate DB schema and ORM. Contributed by 
Siddharth Wagle.
a92806d is described below

commit a92806d05a2eb1f586463fa07aa2f17ce9180401
Author: Yiqun Lin 
AuthorDate: Thu Apr 4 17:33:37 2019 +0800

HDDS-1189. Recon Aggregate DB schema and ORM. Contributed by Siddharth 
Wagle.
---
 .../common/src/main/resources/ozone-default.xml|  95 
 hadoop-hdds/tools/pom.xml  |   3 +-
 hadoop-ozone/ozone-recon-codegen/pom.xml   |  58 +++
 .../ozone/recon/codegen/JooqCodeGenerator.java | 170 +
 .../recon/codegen/ReconSchemaGenerationModule.java |  39 +
 .../ozone/recon/codegen/TableNamingStrategy.java   |  48 ++
 .../hadoop/ozone/recon/codegen/package-info.java   |  22 +++
 .../ozone/recon/schema/ReconSchemaDefinition.java  |  34 +
 .../recon/schema/UtilizationSchemaDefinition.java  |  69 +
 .../hadoop/ozone/recon/schema/package-info.java|  22 +++
 .../dev-support/findbugsExcludeFile.xml|  28 
 hadoop-ozone/ozone-recon/pom.xml   | 147 ++
 .../hadoop/ozone/recon/ReconControllerModule.java  | 102 -
 .../hadoop/ozone/recon/ReconServerConfigKeys.java  |  25 +++
 .../recon/persistence/DataSourceConfiguration.java |  86 +++
 .../persistence/DefaultDataSourceProvider.java |  74 +
 .../recon/persistence/JooqPersistenceModule.java   | 111 ++
 .../TransactionalMethodInterceptor.java|  76 +
 .../ozone/recon/persistence/package-info.java  |  22 +++
 .../recon/persistence/AbstractSqlDatabaseTest.java | 146 ++
 .../TestUtilizationSchemaDefinition.java   | 160 +++
 .../ozone/recon/persistence/package-info.java  |  22 +++
 hadoop-ozone/pom.xml   |   1 +
 23 files changed, 1524 insertions(+), 36 deletions(-)

diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index 5580548..731bf28 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -2394,4 +2394,99 @@
   If enabled, tracing information is sent to tracing server.
 
   
+  
+ozone.recon.sql.db.driver
+org.sqlite.JDBC
+OZONE, RECON
+
+  Database driver class name available on the
+  Ozone Recon classpath.
+
+  
+  
+ozone.recon.sql.db.jdbc.url
+jdbc:sqlite:/${ozone.recon.db.dir}/ozone_recon_sqlite.db
+OZONE, RECON
+
+  Ozone Recon SQL database jdbc url.
+
+  
+  
+ozone.recon.sql.db.username
+
+OZONE, RECON
+
+  Ozone Recon SQL database username.
+
+  
+  
+ozone.recon.sql.db.password
+
+OZONE, RECON
+
+  Ozone Recon database password.
+
+  
+  
+ozone.recon.sql.db.auto.commit
+false
+OZONE, RECON
+
+  Sets the Ozone Recon database connection property of auto-commit to
+  true/false.
+
+  
+  
+ozone.recon.sql.db.conn.timeout
+3
+OZONE, RECON
+
+  Sets the time in milliseconds before a call to getConnection times out.
+
+  
+  
+ozone.recon.sql.db.conn.max.active
+1
+OZONE, RECON
+
+  The max active connections to the SQL database. The default SQLite
+  database only allows a single active connection; set this to a
+  reasonable value like 10 for an external production database.
+
+  
+  
+ozone.recon.sql.db.conn.max.age
+1800
+OZONE, RECON
+
+  Sets maximum time a connection can be active in seconds.
+
+  
+  
+ozone.recon.sql.db.conn.idle.max.age
+3600
+OZONE, RECON
+
+  Sets maximum time to live for idle connection in seconds.
+
+  
+  
+ozone.recon.sql.db.conn.idle.test.period
+60
+OZONE, RECON
+
+  This sets the time (in seconds) for a connection to remain idle before
+  sending a test query to the DB. This is useful to prevent a DB from
+  timing out connections on its end.
+
+  
+  
+ozone.recon.sql.db.conn.idle.test
+SELECT 1
+OZONE, RECON
+
+  The query to send to the DB to maintain keep-alives and test for dead
+  connections.
+
+  
 
diff --git a/hadoop-hdds/tools/pom.xml b/hadoop-hdds/tools/pom.xml
index 0e39330..689bca7 100644
--- a/hadoop-hdds/tools/pom.xml
+++ b/hadoop-hdds/tools/pom.xml
@@ -49,9 +49,8 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd;>
 
   org.xerial
   sqlite-jdbc
-  3.8.7
+  3.25.2
 
 
-
   
 
diff --git a/hadoop-ozone/ozone-recon-codegen/pom.xml 
b/hadoop-ozone/ozone-recon-codegen/pom.xml
new file mode 100644
index 

[hadoop] branch trunk updated: HDDS-1365. Fix error handling in KeyValueContainerCheck. Contributed by Supratim Deka.

2019-04-03 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f96fb05  HDDS-1365. Fix error handling in KeyValueContainerCheck. 
Contributed by Supratim Deka.
f96fb05 is described below

commit f96fb05a2b889f3acdfae60d8f64c755ff48b8c1
Author: Yiqun Lin 
AuthorDate: Wed Apr 3 14:01:30 2019 +0800

HDDS-1365. Fix error handling in KeyValueContainerCheck. Contributed by 
Supratim Deka.
---
 .../container/keyvalue/KeyValueContainerCheck.java | 285 ++---
 .../keyvalue/TestKeyValueContainerCheck.java   |  11 +-
 2 files changed, 81 insertions(+), 215 deletions(-)

diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java
index b1ab1e1..bdfdf21 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java
@@ -68,9 +68,26 @@ public class KeyValueContainerCheck {
   }
 
   /**
-   * fast checks are basic and do not look inside the metadata files.
-   * Or into the structures on disk. These checks can be done on Open
-   * containers as well without concurrency implications
+   * Run basic integrity checks on container metadata.
+   * These checks do not look inside the metadata files.
+   * Applicable for OPEN containers.
+   *
+   * @return true : corruption detected, false : no corruption.
+   */
+  public boolean fastCheck() {
+boolean corruption = false;
+try {
+  basicChecks();
+
+} catch (IOException e) {
+  handleCorruption(e);
+  corruption = true;
+}
+
+return corruption;
+  }
+
+  /**
* Checks :
* 1. check directory layout
* 2. check container file
@@ -78,24 +95,14 @@ public class KeyValueContainerCheck {
* @return void
*/
 
-  public KvCheckError fastCheck() {
-
-KvCheckError error;
-LOG.trace("Running fast check for container {};", containerID);
-
-error = loadContainerData();
-if (error != KvCheckError.ERROR_NONE) {
-  return error;
-}
+  private void basicChecks() throws IOException {
 
-error = checkLayout();
-if (error != KvCheckError.ERROR_NONE) {
-  return error;
-}
+LOG.trace("Running basic checks for container {};", containerID);
 
-error = checkContainerFile();
+loadContainerData();
 
-return error;
+checkLayout();
+checkContainerFile();
   }
 
   /**
@@ -107,129 +114,80 @@ public class KeyValueContainerCheck {
* 
* fullCheck is a superset of fastCheck
*
-   * @return void
+   * @return true : corruption detected, false : no corruption.
*/
-  public KvCheckError fullCheck() {
-/**
-
- */
-KvCheckError error;
+  public boolean fullCheck() {
+boolean corruption = false;
 
-error = fastCheck();
-if (error != KvCheckError.ERROR_NONE) {
+try {
+  basicChecks();
+  checkBlockDB();
 
-  LOG.trace("fastCheck failed, aborting full check for Container {}",
-  containerID);
-  return error;
+} catch (IOException e) {
+  handleCorruption(e);
+  corruption = true;
 }
 
-error = checkBlockDB();
-
-return error;
+return corruption;
   }
 
   /**
* Check the integrity of the directory structure of the container.
-   *
-   * @return error code or ERROR_NONE
*/
-  private KvCheckError checkLayout() {
-boolean success;
-KvCheckError error = KvCheckError.ERROR_NONE;
+  private void checkLayout() throws IOException {
 
 // is metadataPath accessible as a directory?
-try {
-  checkDirPath(metadataPath);
-} catch (IOException ie) {
-  error = KvCheckError.METADATA_PATH_ACCESS;
-  handleCorruption(ie.getMessage(), error, ie);
-  return error;
-}
+checkDirPath(metadataPath);
 
-String chunksPath = onDiskContainerData.getChunksPath();
 // is chunksPath accessible as a directory?
-try {
-  checkDirPath(chunksPath);
-} catch (IOException ie) {
-  error = KvCheckError.CHUNKS_PATH_ACCESS;
-  handleCorruption(ie.getMessage(), error, ie);
-  return error;
-}
-
-return error;
+String chunksPath = onDiskContainerData.getChunksPath();
+checkDirPath(chunksPath);
   }
 
   private void checkDirPath(String path) throws IOException {
 
 File dirPath = new File(path);
 String errStr = null;
-boolean success = true;
 
 try {
   if (!dirPath.isDirectory()) {
-success = false;
 errStr = "Not a directory [" + path + "]";
+throw new IOException(errStr);
   }
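
The shape of the new error handling is easier to see outside the diff. The sketch below is illustrative only (class and method names are not the real Ozone code): individual checks throw IOException, and the public entry point folds any failure into a single corruption flag.

import java.io.File;
import java.io.IOException;

/**
 * Standalone sketch of the pattern this change adopts: checks signal
 * problems by throwing IOException, and the public entry point translates
 * that into a "corruption detected" boolean.
 */
class ContainerCheckSketch {

  private final String metadataPath;
  private final String chunksPath;

  ContainerCheckSketch(String metadataPath, String chunksPath) {
    this.metadataPath = metadataPath;
    this.chunksPath = chunksPath;
  }

  /** @return true if corruption was detected, false otherwise. */
  boolean fastCheck() {
    try {
      basicChecks();
      return false;
    } catch (IOException e) {
      handleCorruption(e);
      return true;
    }
  }

  private void basicChecks() throws IOException {
    checkDirPath(metadataPath);
    checkDirPath(chunksPath);
  }

  private void checkDirPath(String path) throws IOException {
    File dir = new File(path);
    if (!dir.isDirectory()) {
      throw new IOException("Not a directory [" + path + "]");
    }
  }

  private void handleCorruption(IOException e) {
    System.err.println("Corruption detected: " + e.getMessage());
  }
}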

[hadoop] branch trunk updated: HDDS-1337. Handle GroupMismatchException in OzoneClient. Contributed by Shashikant Banerjee.

2019-04-02 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d31c868  HDDS-1337. Handle GroupMismatchException in OzoneClient. 
Contributed by Shashikant Banerjee.
d31c868 is described below

commit d31c86892e0ceec5d642f76fc9123fac4fd80db8
Author: Yiqun Lin 
AuthorDate: Tue Apr 2 16:27:11 2019 +0800

HDDS-1337. Handle GroupMismatchException in OzoneClient. Contributed by 
Shashikant Banerjee.
---
 .../hadoop/hdds/scm/storage/BlockOutputStream.java |  43 +++--
 .../transport/server/ratis/XceiverServerRatis.java |  11 ++
 hadoop-hdds/pom.xml|   2 +-
 .../hadoop/ozone/client/OzoneClientUtils.java  |   2 +
 .../hadoop/ozone/client/io/KeyOutputStream.java|  15 +-
 .../org/apache/hadoop/ozone/MiniOzoneCluster.java  |   2 +
 .../apache/hadoop/ozone/MiniOzoneClusterImpl.java  |   3 +-
 .../rpc/TestBlockOutputStreamWithFailures.java |  11 +-
 .../client/rpc/TestContainerStateMachine.java  |   8 +-
 .../rpc/TestContainerStateMachineFailures.java |   8 +-
 .../rpc/TestOzoneClientRetriesOnException.java | 213 +
 .../ozone/container/ContainerTestHelper.java   |  82 +++-
 hadoop-ozone/pom.xml   |   2 +-
 13 files changed, 359 insertions(+), 43 deletions(-)

diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
index cfbb6ae..a8ead77 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
@@ -55,6 +55,7 @@ import java.util.concurrent.TimeoutException;
 import java.util.concurrent.CompletionException;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.Executors;
+import java.util.concurrent.atomic.AtomicReference;
 import java.util.stream.Collectors;
 
 import static org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls
@@ -100,7 +101,7 @@ public class BlockOutputStream extends OutputStream {
   // The IOException will be set by response handling thread in case there is 
an
   // exception received in the response. If the exception is set, the next
   // request will fail upfront.
-  private IOException ioException;
+  private AtomicReference<IOException> ioException;
   private ExecutorService responseExecutor;
 
   // the effective length of data flushed so far
@@ -187,6 +188,7 @@ public class BlockOutputStream extends OutputStream {
 writtenDataLength = 0;
 failedServers = Collections.emptyList();
 bufferList = null;
+ioException = new AtomicReference<>(null);
   }
 
 
@@ -221,9 +223,8 @@ public class BlockOutputStream extends OutputStream {
 return bufferPool;
   }
 
-  @VisibleForTesting
   public IOException getIoException() {
-return ioException;
+return ioException.get();
   }
 
   @VisibleForTesting
@@ -372,10 +373,9 @@ public class BlockOutputStream extends OutputStream {
 waitOnFlushFutures();
   }
 } catch (InterruptedException | ExecutionException e) {
-  ioException = new IOException(
-  "Unexpected Storage Container Exception: " + e.toString(), e);
+  setIoException(e);
   adjustBuffersOnException();
-  throw ioException;
+  throw getIoException();
 }
 if (!commitIndex2flushedDataMap.isEmpty()) {
   watchForCommit(
@@ -430,9 +430,9 @@ public class BlockOutputStream extends OutputStream {
   adjustBuffers(index);
 } catch (TimeoutException | InterruptedException | ExecutionException e) {
   LOG.warn("watchForCommit failed for index " + commitIndex, e);
+  setIoException(e);
   adjustBuffersOnException();
-  throw new IOException(
-  "Unexpected Storage Container Exception: " + e.toString(), e);
+  throw getIoException();
 }
   }
 
@@ -461,7 +461,7 @@ public class BlockOutputStream extends OutputStream {
   throw new CompletionException(sce);
 }
 // if the ioException is not set, putBlock is successful
-if (ioException == null) {
+if (getIoException() == null) {
   BlockID responseBlockID = BlockID.getFromProtobuf(
   e.getPutBlock().getCommittedBlockLength().getBlockID());
   Preconditions.checkState(blockID.getContainerBlockID()
@@ -505,10 +505,9 @@ public class BlockOutputStream extends OutputStream {
   } catch (InterruptedException | ExecutionException e) {
 // just set the exception here as well in order to maintain sanctity of
 // ioException field
-ioException = new IOException(
-"Unexpected Storage Container Exception: "

[hadoop] branch trunk updated: HDDS-1360. Invalid metric type due to fully qualified class name. Contributed by Doroszlai, Attila.

2019-04-01 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 8bfef21  HDDS-1360. Invalid metric type due to fully qualified class 
name. Contributed by Doroszlai, Attila.
8bfef21 is described below

commit 8bfef21efaa8698976e6d80359c66116d7b451ea
Author: Yiqun Lin 
AuthorDate: Mon Apr 1 19:26:44 2019 +0800

HDDS-1360. Invalid metric type due to fully qualified class name. 
Contributed by Doroszlai, Attila.
---
 .../java/org/apache/hadoop/hdds/scm/server/SCMContainerMetrics.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMContainerMetrics.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMContainerMetrics.java
index 7466142..5e8e137 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMContainerMetrics.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMContainerMetrics.java
@@ -42,7 +42,8 @@ import org.apache.hadoop.metrics2.lib.Interns;
 public class SCMContainerMetrics implements MetricsSource {
 
   private final SCMMXBean scmmxBean;
-  private static final String SOURCE = SCMContainerMetrics.class.getName();
+  private static final String SOURCE =
+  SCMContainerMetrics.class.getSimpleName();
 
   public SCMContainerMetrics(SCMMXBean scmmxBean) {
 this.scmmxBean = scmmxBean;
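
The one-line fix swaps getName() for getSimpleName(); the difference is easy to demonstrate with a generic snippet that has nothing to do with the metrics internals:

public class NameVsSimpleName {
  public static void main(String[] args) {
    // Fully qualified name, e.g. "org.example.NameVsSimpleName"; the dots are
    // what the JIRA describes as producing an invalid metric type.
    System.out.println(NameVsSimpleName.class.getName());
    // Simple name, e.g. "NameVsSimpleName", which is what the metrics source
    // name should look like.
    System.out.println(NameVsSimpleName.class.getSimpleName());
  }
}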





[hadoop] branch trunk updated: HDDS-1334. Fix asf license errors in newly added files by HDDS-1234. Contributed by Aravindan Vijayan.

2019-03-25 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c99b107  HDDS-1334. Fix asf license errors in newly added files by 
HDDS-1234. Contributed by Aravindan Vijayan.
c99b107 is described below

commit c99b107772f4a52832bafd3a4c23fdef8015fdea
Author: Yiqun Lin 
AuthorDate: Tue Mar 26 11:51:04 2019 +0800

HDDS-1334. Fix asf license errors in newly added files by HDDS-1234. 
Contributed by Aravindan Vijayan.
---
 .../ozone/recon/AbstractOMMetadataManagerTest.java | 18 ++
 1 file changed, 18 insertions(+)

diff --git 
a/hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/AbstractOMMetadataManagerTest.java
 
b/hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/AbstractOMMetadataManagerTest.java
index b58e225..d115891 100644
--- 
a/hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/AbstractOMMetadataManagerTest.java
+++ 
b/hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/AbstractOMMetadataManagerTest.java
@@ -1,3 +1,21 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.hadoop.ozone.recon;
 
 import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_DB_DIRS;





[hadoop] branch trunk updated: HDDS-1234. Iterate the OM DB snapshot and populate the recon container DB. Contributed by Aravindan Vijayan.

2019-03-25 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e5d72f5  HDDS-1234. Iterate the OM DB snapshot and populate the recon 
container DB. Contributed by Aravindan Vijayan.
e5d72f5 is described below

commit e5d72f504e2cf932657f96797623f3a5bbd71f4b
Author: Yiqun Lin 
AuthorDate: Mon Mar 25 22:52:02 2019 +0800

HDDS-1234. Iterate the OM DB snapshot and populate the recon container DB. 
Contributed by Aravindan Vijayan.
---
 .../apache/hadoop/utils/LevelDBStoreIterator.java  |   4 -
 .../org/apache/hadoop/utils/MetaStoreIterator.java |   5 -
 .../apache/hadoop/utils/RocksDBStoreIterator.java  |   5 -
 .../java/org/apache/hadoop/utils/db/DBStore.java   |   6 +
 .../org/apache/hadoop/utils/db/IntegerCodec.java   |  28 ++-
 .../java/org/apache/hadoop/utils/db/RDBStore.java  |   4 +-
 .../org/apache/hadoop/utils/TestMetadataStore.java |  51 -
 .../apache/hadoop/ozone/recon/ReconConstants.java  |   4 +
 .../hadoop/ozone/recon/ReconControllerModule.java  |   6 +-
 .../org/apache/hadoop/ozone/recon/ReconServer.java |  48 -
 .../ozone/recon/api/ContainerKeyService.java   |  77 +++-
 .../ozone/recon/api/types/ContainerKeyPrefix.java  |  41 +++-
 .../hadoop/ozone/recon/api/types/KeyMetadata.java  |  74 ---
 .../recon/recovery/ReconOmMetadataManagerImpl.java |   4 +-
 .../recon/spi/ContainerDBServiceProvider.java  |  13 +-
 .../recon/spi/OzoneManagerServiceProvider.java |   9 +-
 .../spi/impl/ContainerDBServiceProviderImpl.java   | 116 ++-
 .../recon/spi/impl/ContainerKeyPrefixCodec.java|  87 +
 .../spi/impl/OzoneManagerServiceProviderImpl.java  |  49 ++---
 .../spi/{ => impl}/ReconContainerDBProvider.java   |  62 +++---
 .../ozone/recon/tasks/ContainerKeyMapperTask.java  | 107 ++
 .../package-info.java} |  21 +-
 .../ozone/recon/AbstractOMMetadataManagerTest.java | 172 
 .../apache/hadoop/ozone/recon/TestReconCodecs.java |  58 ++
 .../apache/hadoop/ozone/recon/TestReconUtils.java  |   4 +-
 .../ozone/recon/api/TestContainerKeyService.java   | 216 +
 .../hadoop/ozone/recon/api/package-info.java}  |  20 +-
 .../impl/TestContainerDBServiceProviderImpl.java   | 141 ++
 .../impl/TestOzoneManagerServiceProviderImpl.java  | 111 +--
 .../spi/impl/TestReconContainerDBProvider.java |  87 +
 .../recon/tasks/TestContainerKeyMapperTask.java| 194 ++
 .../hadoop/ozone/recon/tasks/package-info.java}|  21 +-
 32 files changed, 1416 insertions(+), 429 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/LevelDBStoreIterator.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/LevelDBStoreIterator.java
index 92051dd..cd07b64 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/LevelDBStoreIterator.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/LevelDBStoreIterator.java
@@ -62,8 +62,4 @@ public class LevelDBStoreIterator implements 
MetaStoreIterator {
 levelDBIterator.seekToLast();
   }
 
-  @Override
-  public void prefixSeek(byte[] prefix) {
-levelDBIterator.seek(prefix);
-  }
 }
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetaStoreIterator.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetaStoreIterator.java
index 15ded0d..52d0a3e 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetaStoreIterator.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetaStoreIterator.java
@@ -36,9 +36,4 @@ public interface MetaStoreIterator extends Iterator {
*/
   void seekToLast();
 
-  /**
-   * seek with prefix.
-   */
-  void prefixSeek(byte[] prefix);
-
 }
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RocksDBStoreIterator.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RocksDBStoreIterator.java
index 161d5de..6e9b695 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RocksDBStoreIterator.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RocksDBStoreIterator.java
@@ -63,9 +63,4 @@ public class RocksDBStoreIterator implements 
MetaStoreIterator {
 rocksDBIterator.seekToLast();
   }
 
-  @Override
-  public void prefixSeek(byte[] prefix) {
-rocksDBIterator.seek(prefix);
-  }
-
 }
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
index d55daa2..0bc30d0 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
@@ -19,6 +19,7 @@
 
 package org.apache.hadoop.utils.db;
 
+import java.io.F

[hadoop] branch trunk updated: HDDS-1233. Create an Ozone Manager Service provider for Recon. Contributed by Aravindan Vijayan.

2019-03-20 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 60cdd4c  HDDS-1233. Create an Ozone Manager Service provider for 
Recon. Contributed by Aravindan Vijayan.
60cdd4c is described below

commit 60cdd4cac17e547edbd9cd58c19ef27a8409b9c3
Author: Yiqun Lin 
AuthorDate: Thu Mar 21 11:02:29 2019 +0800

HDDS-1233. Create an Ozone Manager Service provider for Recon. Contributed 
by Aravindan Vijayan.
---
 .../org/apache/hadoop/utils/db/DBStoreBuilder.java |   8 +-
 .../hadoop/utils/db/RDBCheckpointManager.java  |  46 
 .../java/org/apache/hadoop/utils/db/RDBStore.java  |  25 +-
 .../apache/hadoop/utils/db/RocksDBCheckpoint.java  |  81 ++
 .../common/src/main/resources/ozone-default.xml|  85 ++-
 .../org/apache/hadoop/utils/db/TestRDBStore.java   |  43 
 .../org/apache/hadoop/hdds/server/ServerUtils.java |  19 +-
 .../hadoop/ozone/om/OmMetadataManagerImpl.java | 145 ++-
 hadoop-ozone/ozone-recon/pom.xml   |  26 +-
 .../apache/hadoop/ozone/recon/ReconConstants.java  |   3 +
 .../hadoop/ozone/recon/ReconControllerModule.java  |  14 +-
 .../hadoop/ozone/recon/ReconServerConfigKeys.java  |  37 ++-
 .../org/apache/hadoop/ozone/recon/ReconUtils.java  | 178 +
 .../ReconOMMetadataManager.java}   |  24 +-
 .../recon/recovery/ReconOmMetadataManagerImpl.java |  99 
 .../recon/spi/OzoneManagerServiceProvider.java |  15 ++
 .../ozone/recon/spi/ReconContainerDBProvider.java  |  35 +--
 .../spi/impl/OzoneManagerServiceProviderImpl.java  | 211 
 .../apache/hadoop/ozone/recon/TestReconUtils.java  | 135 ++
 .../recovery/TestReconOmMetadataManagerImpl.java   | 148 +++
 .../hadoop/ozone/recon/recovery/package-info.java} |   8 +-
 .../impl/TestContainerDBServiceProviderImpl.java   |   3 -
 .../impl/TestOzoneManagerServiceProviderImpl.java  | 275 +
 23 files changed, 1505 insertions(+), 158 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStoreBuilder.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStoreBuilder.java
index 3459b20..34bdc5d 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStoreBuilder.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStoreBuilder.java
@@ -57,6 +57,7 @@ public final class DBStoreBuilder {
   private List tableNames;
   private Configuration configuration;
   private CodecRegistry registry;
+  private boolean readOnly = false;
 
   private DBStoreBuilder(Configuration configuration) {
 tables = new HashSet<>();
@@ -113,6 +114,11 @@ public final class DBStoreBuilder {
 return this;
   }
 
+  public DBStoreBuilder setReadOnly(boolean rdOnly) {
+readOnly = rdOnly;
+return this;
+  }
+
   /**
* Builds a DBStore instance and returns that.
*
@@ -131,7 +137,7 @@ public final class DBStoreBuilder {
 if (!dbFile.getParentFile().exists()) {
   throw new IOException("The DB destination directory should exist.");
 }
-return new RDBStore(dbFile, options, tables, registry);
+return new RDBStore(dbFile, options, tables, registry, readOnly);
   }
 
   /**
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBCheckpointManager.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBCheckpointManager.java
index ce716c3..68d196f 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBCheckpointManager.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBCheckpointManager.java
@@ -19,13 +19,11 @@
 
 package org.apache.hadoop.utils.db;
 
-import java.io.IOException;
 import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.time.Duration;
 import java.time.Instant;
 
-import org.apache.commons.io.FileUtils;
 import org.apache.commons.lang3.StringUtils;
 import org.rocksdb.Checkpoint;
 import org.rocksdb.RocksDB;
@@ -99,48 +97,4 @@ public class RDBCheckpointManager {
 }
 return null;
   }
-
-  static class RocksDBCheckpoint implements DBCheckpoint {
-
-private Path checkpointLocation;
-private long checkpointTimestamp;
-private long latestSequenceNumber;
-private long checkpointCreationTimeTaken;
-
-RocksDBCheckpoint(Path checkpointLocation,
-  long snapshotTimestamp,
-  long latestSequenceNumber,
-  long checkpointCreationTimeTaken) {
-  this.checkpointLocation = checkpointLocation;
-  this.checkpointTimestamp = snapshotTimestamp;
-  this.latestSequenceNumber = latestSequenceNumber;
-  this.checkpointCreationTimeTaken = checkpointCreationTimeTaken;
-}
-
-@Override
-public Path getCh

[hadoop] branch trunk updated: HDDS-1232. Recon Container DB service definition. Contributed by Aravindan Vijayan.

2019-03-08 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new fb851c9  HDDS-1232. Recon Container DB service definition. Contributed 
by Aravindan Vijayan.
fb851c9 is described below

commit fb851c94817e69ffa75d2b87f496c658c273b73b
Author: Yiqun Lin 
AuthorDate: Fri Mar 8 16:59:41 2019 +0800

HDDS-1232. Recon Container DB service definition. Contributed by Aravindan 
Vijayan.
---
 .../apache/hadoop/utils/LevelDBStoreIterator.java  |   5 +
 .../org/apache/hadoop/utils/MetaStoreIterator.java |   5 +
 .../apache/hadoop/utils/RocksDBStoreIterator.java  |   5 +
 .../common/src/main/resources/ozone-default.xml|  94 +
 .../org/apache/hadoop/utils/TestMetadataStore.java |  52 
 .../org/apache/hadoop/hdds/server/ServerUtils.java |  30 +++--
 hadoop-ozone/dist/pom.xml  |   7 +
 hadoop-ozone/integration-test/pom.xml  |   4 +
 .../hadoop/ozone/TestOzoneConfigurationFields.java |   2 +
 hadoop-ozone/ozone-recon/pom.xml   |  13 +-
 ...onControllerModule.java => ReconConstants.java} |  20 ++-
 .../hadoop/ozone/recon/ReconControllerModule.java  |   7 +
 .../apache/hadoop/ozone/recon/ReconHttpServer.java |  20 +--
 ...nfiguration.java => ReconServerConfigKeys.java} |  12 +-
 .../types/ContainerKeyPrefix.java} |  36 +++--
 .../recon/spi/ContainerDBServiceProvider.java  |  58 
 .../ozone/recon/spi/ReconContainerDBProvider.java  |  77 +++
 .../spi/impl/ContainerDBServiceProviderImpl.java   | 138 +++
 .../impl/package-info.java}|  20 +--
 .../impl/TestContainerDBServiceProviderImpl.java   | 148 +
 .../hadoop/ozone/recon/spi/impl/package-info.java} |  19 +--
 hadoop-ozone/pom.xml   |   5 +
 22 files changed, 697 insertions(+), 80 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/LevelDBStoreIterator.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/LevelDBStoreIterator.java
index 7b62f7a..92051dd 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/LevelDBStoreIterator.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/LevelDBStoreIterator.java
@@ -61,4 +61,9 @@ public class LevelDBStoreIterator implements 
MetaStoreIterator {
   public void seekToLast() {
 levelDBIterator.seekToLast();
   }
+
+  @Override
+  public void prefixSeek(byte[] prefix) {
+levelDBIterator.seek(prefix);
+  }
 }
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetaStoreIterator.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetaStoreIterator.java
index 52d0a3e..15ded0d 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetaStoreIterator.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetaStoreIterator.java
@@ -36,4 +36,9 @@ public interface MetaStoreIterator extends Iterator {
*/
   void seekToLast();
 
+  /**
+   * seek with prefix.
+   */
+  void prefixSeek(byte[] prefix);
+
 }
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RocksDBStoreIterator.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RocksDBStoreIterator.java
index 6e9b695..161d5de 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RocksDBStoreIterator.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RocksDBStoreIterator.java
@@ -63,4 +63,9 @@ public class RocksDBStoreIterator implements 
MetaStoreIterator {
 rocksDBIterator.seekToLast();
   }
 
+  @Override
+  public void prefixSeek(byte[] prefix) {
+rocksDBIterator.seek(prefix);
+  }
+
 }
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index a95d9d1..a0b4c52 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -2144,4 +2144,98 @@
   milliseconds.
 
   
+  
+ozone.recon.http.enabled
+true
+RECON, MANAGEMENT
+
+  Property to enable or disable Recon web user interface.
+
+  
+  
+ozone.recon.http-address
+0.0.0.0:9888
+RECON, MANAGEMENT
+
+  The address and the base port where the Recon web UI will listen on.
+
+  If the port is 0, then the server will start on a free port. However, it
+  is best to specify a well-known port, so it is easy to connect and see
+  the Recon management UI.
+
+  
+  
+ozone.recon.http-bind-host
+0.0.0.0
+RECON, MANAGEMENT
+
+  The actual address the Recon server will bind to. If this optional
+  address is set, it overrides only the hostname portion of
+  ozone.recon.http-a

[hadoop] branch trunk updated: HDFS-14182. Datanode usage histogram is clicked to show ip list. Contributed by fengchuang.

2019-03-04 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 18ea0c1  HDFS-14182. Datanode usage histogram is clicked to show ip 
list. Contributed by fengchuang.
18ea0c1 is described below

commit 18ea0c14933c3e33617647eae2e3076cda1232c0
Author: Yiqun Lin 
AuthorDate: Mon Mar 4 17:34:24 2019 +0800

HDFS-14182. Datanode usage histogram is clicked to show ip list. 
Contributed by fengchuang.
---
 .../hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js | 47 +-
 1 file changed, 46 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
index 5b2838c..4e8b362 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
@@ -303,10 +303,13 @@
   .attr("class", "bar")
   .attr("transform", function(d) { return "translate(" + x(d.x0) + "," 
+ y(d.length) + ")"; });
 
+  window.liveNodes = dnData.LiveNodes;
+
   bar.append("rect")
   .attr("x", 1)
   .attr("width", x(bins[0].x1) - x(bins[0].x0) - 1)
-  .attr("height", function(d) { return height - y(d.length); });
+  .attr("height", function(d) { return height - y(d.length); })
+  .attr("onclick", function (d) { return "open_hostip_list(" + d.x0 + 
"," + d.x1 + ")"; });
 
   bar.append("text")
   .attr("dy", ".75em")
@@ -425,3 +428,45 @@
 load_page();
   });
 })();
+
+function open_hostip_list(x0, x1) {
+  close_hostip_list();
+  var ips = new Array();
+  for (var i = 0; i < liveNodes.length; i++) {
+var dn = liveNodes[i];
+var index = (dn.usedSpace / dn.capacity) * 100.0;
+if (index == 0) {
+  index = 1;
+}
+//Usage above 100% is not of interest, so it is not recorded in the 95%-100% bar
+if (index > x0 && index <= x1) {
+  ips.push(dn.infoAddr.split(":")[0]);
+}
+  }
+  var ipsText = '';
+  for (var i = 0; i < ips.length; i++) {
+ipsText += ips[i] + '\n';
+  }
+  var histogram_div = document.getElementById('datanode-usage-histogram');
+  histogram_div.setAttribute('style', 'position: relative');
+  var ips_div = document.createElement("textarea");
+  ips_div.setAttribute('id', 'datanode_ips');
+  ips_div.setAttribute('rows', '8');
+  ips_div.setAttribute('cols', '14');
+  ips_div.setAttribute('style', 'position: absolute;top: 0px;right: -38px;');
+  ips_div.setAttribute('readonly', 'readonly');
+  histogram_div.appendChild(ips_div);
+
+  var close_div = document.createElement("div");
+  histogram_div.appendChild(close_div);
+  close_div.setAttribute('id', 'close_ips');
+  close_div.setAttribute('style', 'position: absolute;top: 0px;right: 
-62px;width:20px;height;20px');
+  close_div.setAttribute('onclick', 'close_hostip_list()');
+  close_div.innerHTML = "X";
+  ips_div.innerHTML = ipsText;
+}
+
+function close_hostip_list() {
+  $("#datanode_ips").remove();
+  $("#close_ips").remove();
+}
\ No newline at end of file





[hadoop] branch trunk updated: HDDS-1130. Make BenchMarkBlockManager multi-threaded. Contributed by Lokesh Jain.

2019-02-19 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 1d30fd9  HDDS-1130. Make BenchMarkBlockManager multi-threaded. 
Contributed by Lokesh Jain.
1d30fd9 is described below

commit 1d30fd94c6430492ce2f92883117eff56094eec0
Author: Yiqun Lin 
AuthorDate: Wed Feb 20 10:45:51 2019 +0800

HDDS-1130. Make BenchMarkBlockManager multi-threaded. Contributed by Lokesh 
Jain.
---
 .../ozone/genesis/BenchMarkBlockManager.java   | 81 +-
 1 file changed, 49 insertions(+), 32 deletions(-)

diff --git 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkBlockManager.java
 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkBlockManager.java
index 4f7e096..cc08709 100644
--- 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkBlockManager.java
+++ 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkBlockManager.java
@@ -19,6 +19,7 @@
 package org.apache.hadoop.ozone.genesis;
 
 import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.hdds.HddsConfigKeys;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
@@ -46,6 +47,7 @@ import java.io.IOException;
 import java.util.UUID;
 import java.util.List;
 import java.util.ArrayList;
+import java.util.concurrent.locks.ReentrantLock;
 
 import static 
org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DB_CACHE_SIZE_DEFAULT;
 import static 
org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DB_CACHE_SIZE_MB;
@@ -59,9 +61,16 @@ import static 
org.apache.hadoop.ozone.OzoneConsts.SCM_PIPELINE_DB;
 @State(Scope.Thread)
 public class BenchMarkBlockManager {
 
-  private StorageContainerManager scm;
-  private PipelineManager pipelineManager;
-  private BlockManager blockManager;
+  private static String testDir;
+  private static StorageContainerManager scm;
+  private static PipelineManager pipelineManager;
+  private static BlockManager blockManager;
+  private static ReentrantLock lock = new ReentrantLock();
+
+  @Param({"1", "10", "100", "1000", "1", "10"})
+  private static int numPipelines;
+  @Param({"3", "10", "100"})
+  private static int numContainersPerPipeline;
 
   private static StorageContainerManager getScm(OzoneConfiguration conf,
   SCMConfigurator configurator) throws IOException,
@@ -80,46 +89,53 @@ public class BenchMarkBlockManager {
   }
 
   @Setup(Level.Trial)
-  public void initialize()
+  public static void initialize()
   throws IOException, AuthenticationException, InterruptedException {
-OzoneConfiguration conf = new OzoneConfiguration();
-conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
-GenesisUtil.getTempPath().resolve(RandomStringUtils.randomNumeric(7))
-.toString());
-conf.setInt(OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT, 100);
-final File metaDir = ServerUtils.getScmDbDir(conf);
-final File pipelineDBPath = new File(metaDir, SCM_PIPELINE_DB);
-int cacheSize = conf.getInt(OZONE_SCM_DB_CACHE_SIZE_MB,
-OZONE_SCM_DB_CACHE_SIZE_DEFAULT);
-MetadataStore pipelineStore =
-MetadataStoreBuilder.newBuilder()
-.setCreateIfMissing(true)
-.setConf(conf)
-.setDbFile(pipelineDBPath)
-.setCacheSize(cacheSize * OzoneConsts.MB)
-.build();
-addPipelines(100, ReplicationFactor.THREE, pipelineStore);
-pipelineStore.close();
-scm = getScm(conf, new SCMConfigurator());
-pipelineManager = scm.getPipelineManager();
-for (Pipeline pipeline : pipelineManager
-.getPipelines(ReplicationType.RATIS, ReplicationFactor.THREE)) {
-  pipelineManager.openPipeline(pipeline.getId());
+try {
+  lock.lock();
+  if (scm == null) {
+OzoneConfiguration conf = new OzoneConfiguration();
+testDir = GenesisUtil.getTempPath()
+.resolve(RandomStringUtils.randomNumeric(7)).toString();
+conf.set(HddsConfigKeys.OZONE_METADATA_DIRS, testDir);
+conf.setInt(OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT,
+numContainersPerPipeline);
+final File metaDir = ServerUtils.getScmDbDir(conf);
+final File pipelineDBPath = new File(metaDir, SCM_PIPELINE_DB);
+int cacheSize = conf.getInt(OZONE_SCM_DB_CACHE_SIZE_MB,
+OZONE_SCM_DB_CACHE_SIZE_DEFAULT);
+MetadataStore pipelineStore =
+MetadataStoreBuilder.newBuilder().setCreateIfMissing(true)
+.setConf(conf).setDbFile(pipelineDBPath)
+.setCacheSize(cacheSize * OzoneConsts.MB).build();
+addPipelines(Replicat
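
The pattern behind the change is that the heavyweight SCM state becomes static and is built exactly once behind a ReentrantLock, so that JMH can drive the benchmark with many threads. A toy, hedged version of that idiom (names are illustrative only, not the real benchmark code):

import java.util.concurrent.locks.ReentrantLock;

/**
 * Toy version of the initialization idiom in the diff above: several
 * benchmark threads call initialize(), but only the first one pays the
 * cost of building the shared state.
 */
class SharedStateSketch {

  private static Object sharedState;            // stands in for scm/blockManager
  private static final ReentrantLock LOCK = new ReentrantLock();

  static void initialize() {
    LOCK.lock();
    try {
      if (sharedState == null) {                // only the first thread initializes
        sharedState = buildExpensiveState();
      }
    } finally {
      LOCK.unlock();
    }
  }

  private static Object buildExpensiveState() {
    return new Object();
  }
}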

[hadoop] branch trunk updated: HDDS-1122. Fix TestOzoneManagerRatisServer#testSubmitRatisRequest unit test failure. Contributed by Yiqun Lin.

2019-02-18 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 67af509  HDDS-1122. Fix 
TestOzoneManagerRatisServer#testSubmitRatisRequest unit test failure. 
Contributed by Yiqun Lin.
67af509 is described below

commit 67af509097d8df4df161d53e3e511b312e8ac3dd
Author: Yiqun Lin 
AuthorDate: Tue Feb 19 11:29:52 2019 +0800

HDDS-1122. Fix TestOzoneManagerRatisServer#testSubmitRatisRequest unit test 
failure. Contributed by Yiqun Lin.
---
 .../src/main/java/org/apache/hadoop/ozone/om/ratis/OMRatisHelper.java   | 2 ++
 1 file changed, 2 insertions(+)

diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OMRatisHelper.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OMRatisHelper.java
index ee1fee6..9115421 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OMRatisHelper.java
+++ 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OMRatisHelper.java
@@ -25,6 +25,7 @@ import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
 .OMRequest;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
 .OMResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Status;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Type;
 import org.apache.ratis.RaftConfigKeys;
 import org.apache.ratis.client.RaftClient;
@@ -110,6 +111,7 @@ public final class OMRatisHelper {
 .setCmdType(cmdType)
 .setSuccess(false)
 .setMessage(e.getMessage())
+.setStatus(Status.INTERNAL_ERROR)
 .build();
   }
 }





[hadoop] branch trunk updated: HDDS-1106. Introduce queryMap in PipelineManager. Contributed by Lokesh Jain.

2019-02-18 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f2fb653  HDDS-1106. Introduce queryMap in PipelineManager. Contributed 
by Lokesh Jain.
f2fb653 is described below

commit f2fb6536dcbe6320f69273bf9e11d4701248172c
Author: Yiqun Lin 
AuthorDate: Mon Feb 18 22:35:23 2019 +0800

HDDS-1106. Introduce queryMap in PipelineManager. Contributed by Lokesh 
Jain.
---
 .../hadoop/hdds/scm/pipeline/PipelineStateMap.java | 72 +-
 .../scm/pipeline/TestPipelineStateManager.java | 42 +
 2 files changed, 111 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateMap.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateMap.java
index dea2115..2b6c61b 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateMap.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateMap.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdds.scm.pipeline;
 
 import com.google.common.base.Preconditions;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
@@ -27,6 +28,7 @@ import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.util.*;
+import java.util.concurrent.CopyOnWriteArrayList;
 import java.util.stream.Collectors;
 
 /**
@@ -42,15 +44,27 @@ class PipelineStateMap {
 
   private final Map pipelineMap;
   private final Map> pipeline2container;
+  private final Map> query2OpenPipelines;
 
   PipelineStateMap() {
 
 // TODO: Use TreeMap for range operations?
-this.pipelineMap = new HashMap<>();
-this.pipeline2container = new HashMap<>();
+pipelineMap = new HashMap<>();
+pipeline2container = new HashMap<>();
+query2OpenPipelines = new HashMap<>();
+initializeQueryMap();
 
   }
 
+  private void initializeQueryMap() {
+for (ReplicationType type : ReplicationType.values()) {
+  for (ReplicationFactor factor : ReplicationFactor.values()) {
+query2OpenPipelines
+.put(new PipelineQuery(type, factor), new 
CopyOnWriteArrayList<>());
+  }
+}
+  }
+
   /**
* Adds provided pipeline in the data structures.
*
@@ -70,6 +84,9 @@ class PipelineStateMap {
   .format("Duplicate pipeline ID %s detected.", pipeline.getId()));
 }
 pipeline2container.put(pipeline.getId(), new TreeSet<>());
+if (pipeline.getPipelineState() == PipelineState.OPEN) {
+  query2OpenPipelines.get(new PipelineQuery(pipeline)).add(pipeline);
+}
   }
 
   /**
@@ -188,6 +205,10 @@ class PipelineStateMap {
 Preconditions.checkNotNull(factor, "Replication factor cannot be null");
 Preconditions.checkNotNull(state, "Pipeline state cannot be null");
 
+if (state == PipelineState.OPEN) {
+  return Collections.unmodifiableList(
+  query2OpenPipelines.get(new PipelineQuery(type, factor)));
+}
 return pipelineMap.values().stream().filter(
 pipeline -> pipeline.getType() == type
 && pipeline.getPipelineState() == state
@@ -293,7 +314,52 @@ class PipelineStateMap {
 Preconditions.checkNotNull(state, "Pipeline LifeCycleState cannot be 
null");
 
 final Pipeline pipeline = getPipeline(pipelineID);
-return pipelineMap.compute(pipelineID,
+Pipeline updatedPipeline = pipelineMap.compute(pipelineID,
 (id, p) -> Pipeline.newBuilder(pipeline).setState(state).build());
+PipelineQuery query = new PipelineQuery(pipeline);
+if (updatedPipeline.getPipelineState() == PipelineState.OPEN) {
+  // for transition to OPEN state add pipeline to query2OpenPipelines
+  query2OpenPipelines.get(query).add(updatedPipeline);
+} else if (updatedPipeline.getPipelineState() == PipelineState.CLOSED) {
+  // for transition from OPEN to CLOSED state remove pipeline from
+  // query2OpenPipelines
+  query2OpenPipelines.get(query).remove(pipeline);
+}
+return updatedPipeline;
+  }
+
+  private class PipelineQuery {
+private ReplicationType type;
+private ReplicationFactor factor;
+
+PipelineQuery(ReplicationType type, ReplicationFactor factor) {
+  this.type = type;
+  this.factor = factor;
+}
+
+PipelineQuery(Pipeline pipeline) {
+  type = pipeline.getType();
+  factor = pipeline.getFactor();
+}
+
+@Override
+public boolean equals(Object other) {
+  if (this == other) {
+ 
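
PipelineQuery is used as a HashMap key for query2OpenPipelines, so it must override equals() and hashCode() consistently; the HashCodeBuilder import added in the diff suggests how the real hashCode is built. A self-contained sketch of that requirement (not the real class):

import java.util.HashMap;
import java.util.Map;
import org.apache.commons.lang3.builder.HashCodeBuilder;

/**
 * Illustrative sketch: a HashMap key must override both equals() and
 * hashCode() consistently, otherwise lookups with an equal-but-distinct
 * instance miss.
 */
final class QueryKeySketch {
  private final String type;
  private final String factor;

  QueryKeySketch(String type, String factor) {
    this.type = type;
    this.factor = factor;
  }

  @Override
  public boolean equals(Object other) {
    if (this == other) {
      return true;
    }
    if (!(other instanceof QueryKeySketch)) {
      return false;
    }
    QueryKeySketch that = (QueryKeySketch) other;
    return type.equals(that.type) && factor.equals(that.factor);
  }

  @Override
  public int hashCode() {
    return new HashCodeBuilder().append(type).append(factor).toHashCode();
  }

  public static void main(String[] args) {
    Map<QueryKeySketch, String> map = new HashMap<>();
    map.put(new QueryKeySketch("RATIS", "THREE"), "open pipelines");
    // Succeeds only because equals() and hashCode() agree.
    System.out.println(map.get(new QueryKeySketch("RATIS", "THREE")));
  }
}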

[hadoop] 02/02: HDDS-1099. Genesis benchmark for ozone key creation in OM. Contributed by Bharat Viswanadham.

2019-02-14 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 084b6a6751dd203de1c7f3c65077ca72f1d83632
Author: Yiqun Lin 
AuthorDate: Fri Feb 15 14:23:34 2019 +0800

HDDS-1099. Genesis benchmark for ozone key creation in OM. Contributed by 
Bharat Viswanadham.
---
 hadoop-ozone/tools/pom.xml |   9 +-
 .../ozone/genesis/BenchMarkOMKeyAllocation.java| 135 +
 .../org/apache/hadoop/ozone/genesis/Genesis.java   |   1 +
 3 files changed, 140 insertions(+), 5 deletions(-)

diff --git a/hadoop-ozone/tools/pom.xml b/hadoop-ozone/tools/pom.xml
index 95bef70..aeff0f7 100644
--- a/hadoop-ozone/tools/pom.xml
+++ b/hadoop-ozone/tools/pom.xml
@@ -31,6 +31,10 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd;>
   
 
   org.apache.hadoop
+  hadoop-ozone-ozone-manager
+
+
+  org.apache.hadoop
   hadoop-ozone-common
 
 
@@ -78,11 +82,6 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd;>
   2.15.0
   test
 
-  
-  org.apache.hadoop
-  hadoop-ozone-ozone-manager
-  0.4.0-SNAPSHOT
-  
   
   
 
diff --git 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java
 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java
new file mode 100644
index 000..fbb686a
--- /dev/null
+++ 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java
@@ -0,0 +1,135 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.genesis;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.commons.lang3.RandomUtils;
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.ozone.om.BucketManager;
+import org.apache.hadoop.ozone.om.BucketManagerImpl;
+import org.apache.hadoop.ozone.om.KeyManager;
+import org.apache.hadoop.ozone.om.KeyManagerImpl;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.VolumeManager;
+import org.apache.hadoop.ozone.om.VolumeManagerImpl;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.helpers.OpenKeySession;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.Level;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+/**
+ * Benchmark key creation in a bucket in OM.
+ */
+@State(Scope.Thread)
+public class BenchMarkOMKeyAllocation {
+
+  private static final String TMP_DIR = "java.io.tmpdir";
+  private String volumeName = UUID.randomUUID().toString();
+  private String bucketName = UUID.randomUUID().toString();
+  private KeyManager keyManager;
+  private VolumeManager volumeManager;
+  private BucketManager bucketManager;
+  private String path = Paths.get(System.getProperty(TMP_DIR)).resolve(
+  RandomStringUtils.randomNumeric(6)).toFile()
+.getAbsolutePath();
+
+  @Setup(Level.Trial)
+  public void setup() throws IOException {
+OzoneConfiguration configuration = new OzoneConfiguration();
+configuration.set(OMConfigKeys.OZONE_OM_DB_DIRS, path);
+
+OmMetadataManagerImpl omMetadataManager =
+new OmMetadataManagerImpl(configuration);
+volumeManager = new VolumeManagerImpl(omMetadataManager, configuration);
+bucketManager = new

[hadoop] branch trunk updated (5656409 -> 084b6a6)

2019-02-14 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 5656409  HDDS-1099. Genesis benchmark for ozone key creation in OM. 
Contributed by Bharat Viswanadham.
 new 492e49e  Revert "HDDS-1099. Genesis benchmark for ozone key creation 
in OM. Contributed by Bharat Viswanadham."
 new 084b6a6  HDDS-1099. Genesis benchmark for ozone key creation in OM. 
Contributed by Bharat Viswanadham.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../tools/src/main/java/org/apache/hadoop/ozone/genesis/Genesis.java | 1 +
 1 file changed, 1 insertion(+)





[hadoop] 01/02: Revert "HDDS-1099. Genesis benchmark for ozone key creation in OM. Contributed by Bharat Viswanadham."

2019-02-14 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 492e49e7caff34231b07c85a0038f27f41de67f7
Author: Yiqun Lin 
AuthorDate: Fri Feb 15 14:21:29 2019 +0800

Revert "HDDS-1099. Genesis benchmark for ozone key creation in OM. 
Contributed by Bharat Viswanadham."

This reverts commit 5656409327db5a590cc29b846d291dad005bf8d0.
---
 hadoop-ozone/tools/pom.xml |   9 +-
 .../ozone/genesis/BenchMarkOMKeyAllocation.java| 135 -
 2 files changed, 5 insertions(+), 139 deletions(-)

diff --git a/hadoop-ozone/tools/pom.xml b/hadoop-ozone/tools/pom.xml
index aeff0f7..95bef70 100644
--- a/hadoop-ozone/tools/pom.xml
+++ b/hadoop-ozone/tools/pom.xml
@@ -31,10 +31,6 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd;>
   
 
   org.apache.hadoop
-  hadoop-ozone-ozone-manager
-
-
-  org.apache.hadoop
   hadoop-ozone-common
 
 
@@ -82,6 +78,11 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd;>
   2.15.0
   test
 
+  
+  org.apache.hadoop
+  hadoop-ozone-ozone-manager
+  0.4.0-SNAPSHOT
+  
   
   
 
diff --git 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java
 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java
deleted file mode 100644
index fbb686a..000
--- 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java
+++ /dev/null
@@ -1,135 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.ozone.genesis;
-
-import org.apache.commons.io.FileUtils;
-import org.apache.commons.lang3.RandomStringUtils;
-import org.apache.commons.lang3.RandomUtils;
-import org.apache.hadoop.hdds.client.BlockID;
-import org.apache.hadoop.hdds.conf.OzoneConfiguration;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
-import org.apache.hadoop.ozone.om.BucketManager;
-import org.apache.hadoop.ozone.om.BucketManagerImpl;
-import org.apache.hadoop.ozone.om.KeyManager;
-import org.apache.hadoop.ozone.om.KeyManagerImpl;
-import org.apache.hadoop.ozone.om.OMConfigKeys;
-import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
-import org.apache.hadoop.ozone.om.VolumeManager;
-import org.apache.hadoop.ozone.om.VolumeManagerImpl;
-import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
-import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
-import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
-import org.apache.hadoop.ozone.om.helpers.OpenKeySession;
-import org.apache.hadoop.security.UserGroupInformation;
-import org.openjdk.jmh.annotations.Benchmark;
-import org.openjdk.jmh.annotations.Level;
-import org.openjdk.jmh.annotations.Scope;
-import org.openjdk.jmh.annotations.Setup;
-import org.openjdk.jmh.annotations.State;
-import org.openjdk.jmh.annotations.TearDown;
-
-import java.io.File;
-import java.io.IOException;
-import java.nio.file.Paths;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.UUID;
-
-/**
- * Benchmark key creation in a bucket in OM.
- */
-@State(Scope.Thread)
-public class BenchMarkOMKeyAllocation {
-
-  private static final String TMP_DIR = "java.io.tmpdir";
-  private String volumeName = UUID.randomUUID().toString();
-  private String bucketName = UUID.randomUUID().toString();
-  private KeyManager keyManager;
-  private VolumeManager volumeManager;
-  private BucketManager bucketManager;
-  private String path = Paths.get(System.getProperty(TMP_DIR)).resolve(
-  RandomStringUtils.randomNumeric(6)).toFile()
-.getAbsolutePath();
-
-  @Setup(Level.Trial)
-  public void setup() throws IOException {
-OzoneConfiguration configuration = new OzoneConfiguration();
-configuration.set(OMConfigKeys.OZONE_OM_DB_DIRS, path);
-
-OmMetadataManagerImpl omMetadataManager =
-new OmMetadataManagerImpl(configuration);
-volumeManager = new VolumeManagerImpl(omMetadataManager, configuratio

[hadoop] branch trunk updated: HDDS-1099. Genesis benchmark for ozone key creation in OM. Contributed by Bharat Viswanadham.

2019-02-14 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 5656409  HDDS-1099. Genesis benchmark for ozone key creation in OM. 
Contributed by Bharat Viswanadham.
5656409 is described below

commit 5656409327db5a590cc29b846d291dad005bf8d0
Author: Yiqun Lin 
AuthorDate: Fri Feb 15 14:07:15 2019 +0800

HDDS-1099. Genesis benchmark for ozone key creation in OM. Contributed by 
Bharat Viswanadham.
---
 hadoop-ozone/tools/pom.xml |   9 +-
 .../ozone/genesis/BenchMarkOMKeyAllocation.java| 135 +
 2 files changed, 139 insertions(+), 5 deletions(-)

diff --git a/hadoop-ozone/tools/pom.xml b/hadoop-ozone/tools/pom.xml
index 95bef70..aeff0f7 100644
--- a/hadoop-ozone/tools/pom.xml
+++ b/hadoop-ozone/tools/pom.xml
@@ -31,6 +31,10 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd;>
   
 
   org.apache.hadoop
+  hadoop-ozone-ozone-manager
+
+
+  org.apache.hadoop
   hadoop-ozone-common
 
 
@@ -78,11 +82,6 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd;>
   2.15.0
   test
 
-  
-  org.apache.hadoop
-  hadoop-ozone-ozone-manager
-  0.4.0-SNAPSHOT
-  
   
   
 
diff --git 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java
 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java
new file mode 100644
index 000..fbb686a
--- /dev/null
+++ 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java
@@ -0,0 +1,135 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.genesis;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.commons.lang3.RandomUtils;
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.ozone.om.BucketManager;
+import org.apache.hadoop.ozone.om.BucketManagerImpl;
+import org.apache.hadoop.ozone.om.KeyManager;
+import org.apache.hadoop.ozone.om.KeyManagerImpl;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.VolumeManager;
+import org.apache.hadoop.ozone.om.VolumeManagerImpl;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.helpers.OpenKeySession;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.Level;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+/**
+ * Benchmark key creation in a bucket in OM.
+ */
+@State(Scope.Thread)
+public class BenchMarkOMKeyAllocation {
+
+  private static final String TMP_DIR = "java.io.tmpdir";
+  private String volumeName = UUID.randomUUID().toString();
+  private String bucketName = UUID.randomUUID().toString();
+  private KeyManager keyManager;
+  private VolumeManager volumeManager;
+  private BucketManager bucketManager;
+  private String path = Paths.get(System.getProperty(TMP_DIR)).resolve(
+  RandomStringUtils.randomNumeric(6)).toFile()
+.getAbsolutePath();
+
+  @Setup(Level.Trial)
+  public void setup() throws IOException {
+OzoneConfiguration configuration = new OzoneConfiguration();
+configuration.set(OMConfigKeys.OZONE_OM_DB_DIRS, path);
+
+OmMetadataManagerImpl omMetadataManager =
+new

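As a side note for readers new to JMH: the benchmark above relies on JMH's thread-scoped state with trial-level setup and teardown. A minimal, self-contained benchmark with the same shape is sketched below; it is only an illustration of the JMH pattern, not the code from HDDS-1099, and the class, field and method names are invented.

    import java.util.UUID;
    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Level;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.annotations.TearDown;
    import org.openjdk.jmh.infra.Blackhole;

    @State(Scope.Thread)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public class KeyAllocationSketch {

      private String volumeName;
      private String bucketName;

      @Setup(Level.Trial)
      public void setup() {
        // Per-trial initialisation; the real benchmark builds the OM metadata
        // manager plus volume/bucket/key managers here.
        volumeName = UUID.randomUUID().toString();
        bucketName = UUID.randomUUID().toString();
      }

      @Benchmark
      public void allocateKeyName(Blackhole bh) {
        // The measured operation; the real benchmark opens and commits an OM key.
        bh.consume(volumeName + "/" + bucketName + "/" + UUID.randomUUID());
      }

      @TearDown(Level.Trial)
      public void cleanup() {
        // Per-trial cleanup; the real benchmark removes its temporary OM DB directory.
      }
    }

A class of this shape is run through the JMH runner, which invokes setup() once per trial, calls the @Benchmark method repeatedly to measure throughput, and finishes with cleanup().
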
[hadoop] branch trunk updated: HADOOP-16097. Provide proper documentation for FairCallQueue. Contributed by Erik Krogen.

2019-02-12 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7b11b40  HADOOP-16097. Provide proper documentation for FairCallQueue. 
Contributed by Erik Krogen.
7b11b40 is described below

commit 7b11b404a35f93e4b4b12546034ef8001720eb5f
Author: Yiqun Lin 
AuthorDate: Wed Feb 13 11:16:04 2019 +0800

HADOOP-16097. Provide proper documentation for FairCallQueue. Contributed 
by Erik Krogen.
---
 .../src/site/markdown/FairCallQueue.md | 150 +
 .../resources/images/faircallqueue-overview.png| Bin 0 -> 47397 bytes
 hadoop-project/src/site/site.xml   |   1 +
 3 files changed, 151 insertions(+)

diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/FairCallQueue.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/FairCallQueue.md
new file mode 100644
index 000..e62c7ad
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/FairCallQueue.md
@@ -0,0 +1,150 @@
+
+
+Fair Call Queue Guide
+=
+
+
+
+Purpose
+---
+
+This document describes how to configure and manage the Fair Call Queue for 
Hadoop.
+
+Prerequisites
+-
+
+Make sure Hadoop is installed, configured and setup correctly. For more 
information see:
+
+* [Single Node Setup](./SingleCluster.html) for first-time users.
+* [Cluster Setup](./ClusterSetup.html) for large, distributed clusters.
+
+Overview
+
+
+Hadoop server components, in particular the HDFS NameNode, experience very 
heavy RPC load from clients. By default,
+all client requests are routed through a first-in, first-out queue and 
serviced in the order they arrive. This means
+that a single user submitting a very large number of requests can easily 
overwhelm the service, causing degraded service
+for all other users. The Fair Call Queue, and related components, aim to 
mitigate this impact.
+
+Design Details
+--
+
+There are a few components in the IPC stack which have a complex interplay, 
each with their own tuning parameters.
+The image below presents a schematic overview of their interactions, which 
will be explained below.
+
+![FairCallQueue Overview](./images/faircallqueue-overview.png)
+
+In the following explanation, **bolded** words refer to named entities or 
configurables.
+
+When a client makes a request to an IPC server, this request first lands in a 
**listen queue**. **Reader** threads
+remove requests from this queue and pass them to a configurable 
**RpcScheduler** to be assigned a priority and placed
+into a **call queue**; this is where FairCallQueue sits as a pluggable 
implementation (the other existing
+implementation being a FIFO queue). **Handler** threads accept requests out of 
the call queue, process them, and
+respond to the client.
+
+The implementation of RpcScheduler used with FairCallQueue by default is 
**DecayRpcScheduler**, which maintains a
+count of requests received for each user. This count _decays_ over time; every 
**sweep period** (5s by default),
+the number of requests per user is multiplied by a **decay factor** (0.5 by 
default). This maintains a weighted/rolling
+average of request count per user. Every time that a sweep is performed, the 
call counts for all known users are
+ranked from highest to lowest. Each user is assigned a **priority** (0-3 by 
default, with 0 being highest priority)
+based on the proportion of calls originating from that user. The default 
**priority thresholds** are (0.125, 0.25, 0.5),
+meaning that users whose calls make up more than 50% of the total (there can 
be at most one such user) are placed into
+the lowest priority, users whose calls make up between 25% and 50% of the 
total are in the 2nd lowest, users whose calls
+make up between 12.5% and 25% are in the 2nd highest priority, and all other 
users are placed in the highest priority.
+At the end of the sweep, each known user has a cached priority which will be 
used until the next sweep; new users which
+appear between sweeps will have their priority calculated on-the-fly.
+
+Within FairCallQueue, there are multiple **priority queues**, each of which is 
designated a **weight**. When a request
+arrives at the call queue, the request is placed into one of these priority 
queues based on the current priority
+assigned to the call (by the RpcScheduler). When a handler thread attempts to 
fetch an item from the call queue, which
+queue it pulls from is decided via an **RpcMultiplexer**; currently this is 
hard-coded to be a
+**WeightedRoundRobinMultiplexer**. The WRRM serves requests from queues based 
on their weights; the default weights
+for the default 4 priority levels are (8, 4, 2, 1). Thus, the WRRM would serve 
8 requests from the highest priority
+queue, 4 from the second highest, 2 from the third highest

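The decay/priority/weight description above is concrete enough to work through by hand. The following is a minimal, self-contained Java sketch of that arithmetic using only the defaults quoted in the text (decay factor 0.5, thresholds 0.125/0.25/0.5, WRRM weights 8/4/2/1). It illustrates the math only; it is not the DecayRpcScheduler or FairCallQueue implementation, and every class, method and user name in it is invented.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;

    public class DecayPrioritySketch {

      // Defaults quoted in the documentation above.
      static final double DECAY_FACTOR = 0.5;                // applied every 5s sweep
      static final double[] THRESHOLDS = {0.125, 0.25, 0.5}; // proportion cut-offs
      static final int[] WEIGHTS = {8, 4, 2, 1};             // WRRM weights, priority 0 first

      /** One sweep: every user's call count is multiplied by the decay factor. */
      static void sweep(Map<String, Double> callCounts) {
        callCounts.replaceAll((user, count) -> count * DECAY_FACTOR);
      }

      /** Map a user's share of the total call count to a priority (0 = highest). */
      static int priorityOf(double userCalls, double totalCalls) {
        double proportion = totalCalls == 0 ? 0 : userCalls / totalCalls;
        for (int level = THRESHOLDS.length; level >= 1; level--) {
          if (proportion > THRESHOLDS[level - 1]) {
            return level;   // e.g. more than 50% of all calls -> priority 3 (lowest)
          }
        }
        return 0;           // light users stay in the highest priority
      }

      public static void main(String[] args) {
        Map<String, Double> counts = new HashMap<>();
        counts.put("heavyUser", 900.0);
        counts.put("lightUser", 100.0);

        double total = counts.values().stream().mapToDouble(Double::doubleValue).sum();
        counts.forEach((user, c) ->
            System.out.println(user + " -> priority " + priorityOf(c, total)));

        sweep(counts);  // after one sweep every count is halved
        System.out.println("heavyUser count after one sweep: " + counts.get("heavyUser"));
        System.out.println("WRRM weights, priority 0..3: " + Arrays.toString(WEIGHTS));
      }
    }

With the sample counts above, the user producing 90% of the traffic lands in the lowest-priority queue and, under the default weights, would be served at most 1 request for every 8 served from the top queue, while the light user keeps priority 0.
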
[hadoop] branch trunk updated: HDDS-1047. Fix TestRatisPipelineProvider#testCreatePipelineWithFactor. Contributed by Nilotpal Nandi.

2019-02-12 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 06d7890  HDDS-1047. Fix 
TestRatisPipelineProvider#testCreatePipelineWithFactor. Contributed by Nilotpal 
Nandi.
06d7890 is described below

commit 06d7890bdd3e597824f9ca02b453d45eef445f49
Author: Yiqun Lin 
AuthorDate: Wed Feb 13 10:50:57 2019 +0800

HDDS-1047. Fix TestRatisPipelineProvider#testCreatePipelineWithFactor. 
Contributed by Nilotpal Nandi.
---
 .../scm/pipeline/TestRatisPipelineProvider.java| 32 ++
 1 file changed, 32 insertions(+)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineProvider.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineProvider.java
index 6f4934f..6f385de 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineProvider.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineProvider.java
@@ -50,6 +50,28 @@ public class TestRatisPipelineProvider {
 stateManager, new OzoneConfiguration());
   }
 
+  private void createPipelineAndAssertions(
+  HddsProtos.ReplicationFactor factor) throws IOException {
+Pipeline pipeline = provider.create(factor);
+stateManager.addPipeline(pipeline);
+Assert.assertEquals(pipeline.getType(), HddsProtos.ReplicationType.RATIS);
+Assert.assertEquals(pipeline.getFactor(), factor);
+Assert.assertEquals(pipeline.getPipelineState(),
+Pipeline.PipelineState.OPEN);
+Assert.assertEquals(pipeline.getNodes().size(), factor.getNumber());
+Pipeline pipeline1 = provider.create(factor);
+stateManager.addPipeline(pipeline1);
+// New pipeline should not overlap with the previous created pipeline
+Assert.assertTrue(
+CollectionUtils.intersection(pipeline.getNodes(), pipeline1.getNodes())
+.isEmpty());
+Assert.assertEquals(pipeline1.getType(), HddsProtos.ReplicationType.RATIS);
+Assert.assertEquals(pipeline1.getFactor(), factor);
+Assert.assertEquals(pipeline1.getPipelineState(),
+Pipeline.PipelineState.OPEN);
+Assert.assertEquals(pipeline1.getNodes().size(), factor.getNumber());
+  }
+
   @Test
   public void testCreatePipelineWithFactor() throws IOException {
 HddsProtos.ReplicationFactor factor = HddsProtos.ReplicationFactor.THREE;
@@ -76,6 +98,16 @@ public class TestRatisPipelineProvider {
 Assert.assertEquals(pipeline1.getNodes().size(), factor.getNumber());
   }
 
+  @Test
+  public void testCreatePipelineWithFactorThree() throws IOException {
+createPipelineAndAssertions(HddsProtos.ReplicationFactor.THREE);
+  }
+
+  @Test
+  public void testCreatePipelineWithFactorOne() throws IOException {
+createPipelineAndAssertions(HddsProtos.ReplicationFactor.ONE);
+  }
+
   private List createListOfNodes(int nodeCount) {
 List nodes = new ArrayList<>();
 for (int i = 0; i < nodeCount; i++) {





[hadoop] branch trunk updated: HDDS-1074. Remove dead variable from KeyOutputStream#addKeyLocationInfo. Contributed by Siddharth Wagle.

2019-02-12 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 26e6013  HDDS-1074. Remove dead variable from 
KeyOutputStream#addKeyLocationInfo. Contributed by Siddharth Wagle.
26e6013 is described below

commit 26e60135f57868cf7d1e3a0229c55ed045f9745b
Author: Yiqun Lin 
AuthorDate: Tue Feb 12 16:28:50 2019 +0800

HDDS-1074. Remove dead variable from KeyOutputStream#addKeyLocationInfo. 
Contributed by Siddharth Wagle.
---
 .../main/java/org/apache/hadoop/ozone/client/io/KeyOutputStream.java   | 3 ---
 1 file changed, 3 deletions(-)

diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyOutputStream.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyOutputStream.java
index af39631..b94e14f 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyOutputStream.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyOutputStream.java
@@ -21,7 +21,6 @@ import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.fs.FSExceptionMessages;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result;
-import org.apache.hadoop.hdds.scm.XceiverClientSpi;
 import 
org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException;
 import 
org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
 import org.apache.hadoop.ozone.common.Checksum;
@@ -202,8 +201,6 @@ public class KeyOutputStream extends OutputStream {
 ContainerWithPipeline containerWithPipeline = scmClient
 .getContainerWithPipeline(subKeyInfo.getContainerID());
 UserGroupInformation.getCurrentUser().addToken(subKeyInfo.getToken());
-XceiverClientSpi xceiverClient =
-
xceiverClientManager.acquireClient(containerWithPipeline.getPipeline());
 BlockOutputStreamEntry.Builder builder =
 new BlockOutputStreamEntry.Builder()
 .setBlockID(subKeyInfo.getBlockID())





[hadoop] branch trunk updated: HDDS-1048. Remove SCMNodeStat from SCMNodeManager and use storage information from DatanodeInfo#StorageReportProto. Contributed by Nanda kumar.

2019-02-08 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new fb8c997  HDDS-1048. Remove SCMNodeStat from SCMNodeManager and use 
storage information from DatanodeInfo#StorageReportProto. Contributed by Nanda 
kumar.
fb8c997 is described below

commit fb8c997a6884bbe19c45bab77950068bb78109c7
Author: Yiqun Lin 
AuthorDate: Fri Feb 8 23:49:37 2019 +0800

HDDS-1048. Remove SCMNodeStat from SCMNodeManager and use storage 
information from DatanodeInfo#StorageReportProto. Contributed by Nanda kumar.
---
 .../apache/hadoop/hdds/scm/node/DatanodeInfo.java  |   7 +-
 .../hadoop/hdds/scm/node/DeadNodeHandler.java  |   1 -
 .../apache/hadoop/hdds/scm/node/NodeManager.java   |  15 +-
 .../hadoop/hdds/scm/node/NodeStateManager.java |  65 ++--
 .../hadoop/hdds/scm/node/SCMNodeManager.java   | 169 ++---
 .../hadoop/hdds/scm/node/states/NodeStateMap.java  |  58 ---
 .../hadoop/hdds/scm/container/MockNodeManager.java |  31 +---
 .../hadoop/hdds/scm/node/TestDeadNodeHandler.java  |  28 ++--
 .../hdds/scm/node/TestNodeReportHandler.java   |   3 +-
 .../testutils/ReplicationNodeManagerMock.java  |  11 +-
 10 files changed, 123 insertions(+), 265 deletions(-)

diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
index 26b8b95..d06ea2a 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
@@ -23,6 +23,7 @@ import org.apache.hadoop.hdds.protocol.proto
 .StorageContainerDatanodeProtocolProtos.StorageReportProto;
 import org.apache.hadoop.util.Time;
 
+import java.util.Collections;
 import java.util.List;
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
@@ -38,7 +39,6 @@ public class DatanodeInfo extends DatanodeDetails {
   private volatile long lastHeartbeatTime;
   private long lastStatsUpdatedTime;
 
-  // If required we can dissect StorageReportProto and store the raw data
   private List storageReports;
 
   /**
@@ -48,8 +48,9 @@ public class DatanodeInfo extends DatanodeDetails {
*/
   public DatanodeInfo(DatanodeDetails datanodeDetails) {
 super(datanodeDetails);
-lock = new ReentrantReadWriteLock();
-lastHeartbeatTime = Time.monotonicNow();
+this.lock = new ReentrantReadWriteLock();
+this.lastHeartbeatTime = Time.monotonicNow();
+this.storageReports = Collections.emptyList();
   }
 
   /**
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
index 8e71399..a75a51a 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
@@ -58,7 +58,6 @@ public class DeadNodeHandler implements 
EventHandler {
   @Override
   public void onMessage(DatanodeDetails datanodeDetails,
   EventPublisher publisher) {
-nodeManager.processDeadNode(datanodeDetails.getUuid());
 
 // TODO: check if there are any pipeline on this node and fire close
 // pipeline event
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
index d8865a8..6b8d477 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
@@ -93,8 +93,7 @@ public interface NodeManager extends 
StorageContainerNodeProtocol,
* Return a map of node stats.
* @return a map of individual node stats (live/stale but not dead).
*/
-  // TODO: try to change the return type to Map
-  Map getNodeStats();
+  Map getNodeStats();
 
   /**
* Return the node stat of the specified datanode.
@@ -159,17 +158,11 @@ public interface NodeManager extends 
StorageContainerNodeProtocol,
   /**
* Process node report.
*
-   * @param dnUuid
+   * @param datanodeDetails
* @param nodeReport
*/
-  void processNodeReport(DatanodeDetails dnUuid, NodeReportProto nodeReport);
-
-  /**
-   * Process a dead node event in this Node Manager.
-   *
-   * @param dnUuid datanode uuid.
-   */
-  void processDeadNode(UUID dnUuid);
+  void processNodeReport(DatanodeDetails datanodeDetails,
+ NodeReportProto nodeReport);
 
   /**
* Get list of SCMCommands in the Command Queue for a particular Datanode.
diff --git

[hadoop] branch trunk updated: HDFS-14172. Avoid NPE when SectionName#fromString returns null. Contributed by Xiang Li.

2019-02-08 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 1771317  HDFS-14172. Avoid NPE when SectionName#fromString returns 
null. Contributed by Xiang Li.
1771317 is described below

commit 177131793a88960b734038f6e646476d568c3626
Author: Yiqun Lin 
AuthorDate: Fri Feb 8 20:51:30 2019 +0800

HDFS-14172. Avoid NPE when SectionName#fromString returns null. Contributed 
by Xiang Li.
---
 .../apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java | 7 +--
 .../hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java   | 8 +++-
 .../hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java   | 6 +-
 .../hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java| 6 +-
 4 files changed, 22 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
index 7aed5fd..ad883b1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
@@ -241,8 +241,11 @@ public final class FSImageFormatProtobuf {
 summary.getCodec(), in);
 
 String n = s.getName();
-
-switch (SectionName.fromString(n)) {
+SectionName sectionName = SectionName.fromString(n);
+if (sectionName == null) {
+  throw new IOException("Unrecognized section " + n);
+}
+switch (sectionName) {
 case NS_INFO:
   loadNameSystemSection(in);
   break;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
index 7152c88..f65d29c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
@@ -156,7 +156,13 @@ class FSImageLoader {
   LOG.debug("Loading section " + s.getName() + " length: " + 
s.getLength
   ());
 }
-switch (FSImageFormatProtobuf.SectionName.fromString(s.getName())) {
+
+FSImageFormatProtobuf.SectionName sectionName
+= FSImageFormatProtobuf.SectionName.fromString(s.getName());
+if (sectionName == null) {
+  throw new IOException("Unrecognized section " + s.getName());
+}
+switch (sectionName) {
   case STRING_TABLE:
 stringTable = loadStringTable(is);
 break;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java
index 6f36be4..c5bf5af 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java
@@ -594,7 +594,11 @@ abstract class PBImageTextWriter implements Closeable {
 is = FSImageUtil.wrapInputStreamForCompression(conf,
 summary.getCodec(), new BufferedInputStream(new LimitInputStream(
 fin, section.getLength(;
-switch (SectionName.fromString(section.getName())) {
+SectionName sectionName = SectionName.fromString(section.getName());
+if (sectionName == null) {
+  throw new IOException("Unrecognized section " + section.getName());
+}
+switch (sectionName) {
 case STRING_TABLE:
   LOG.info("Loading string table");
   stringTable = FSImageLoader.loadStringTable(is);
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
index 7a5ef31..cec44f5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
@@ -326,7 +326,11 @@ public final class PBImageXmlWriter {
 summary.getCodec(), new BufferedInputSt

[hadoop] branch trunk updated: HDDS-1029. Allow option for force in DeleteContainerCommand. Contributed by Bharat Viswanadham.

2019-02-04 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3efa168  HDDS-1029. Allow option for force in DeleteContainerCommand. 
Contributed by Bharat Viswanadham.
3efa168 is described below

commit 3efa168e1f7b4ef7ab741c1fc03d6fc71b653bec
Author: Yiqun Lin 
AuthorDate: Tue Feb 5 10:51:52 2019 +0800

HDDS-1029. Allow option for force in DeleteContainerCommand. Contributed by 
Bharat Viswanadham.
---
 .../src/main/proto/DatanodeContainerProtocol.proto |   1 -
 .../container/common/impl/HddsDispatcher.java  |   1 -
 .../container/common/interfaces/Container.java |   3 +-
 .../ozone/container/common/interfaces/Handler.java |   4 +-
 .../DeleteContainerCommandHandler.java |   3 +-
 .../container/keyvalue/KeyValueContainer.java  |   5 +-
 .../ozone/container/keyvalue/KeyValueHandler.java  |  27 ++--
 .../keyvalue/helpers/KeyValueContainerUtil.java|   3 +-
 .../container/ozoneimpl/ContainerController.java   |   7 +-
 .../protocol/commands/DeleteContainerCommand.java  |  28 +++-
 .../proto/StorageContainerDatanodeProtocol.proto   |   1 +
 .../container/keyvalue/TestKeyValueContainer.java  |   4 +-
 .../container/replication/ReplicationManager.java  |   2 +-
 .../common/impl/TestContainerPersistence.java  |   4 +-
 .../commandhandler/TestDeleteContainerHandler.java | 156 +++--
 .../container/ozoneimpl/TestOzoneContainer.java|  27 +---
 16 files changed, 174 insertions(+), 102 deletions(-)

diff --git a/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto 
b/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
index 3e4f64d..197bfad 100644
--- a/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
+++ b/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
@@ -126,7 +126,6 @@ enum Result {
   PUT_SMALL_FILE_ERROR = 20;
   GET_SMALL_FILE_ERROR = 21;
   CLOSED_CONTAINER_IO = 22;
-  ERROR_CONTAINER_NOT_EMPTY = 23;
   ERROR_IN_COMPACT_DB = 24;
   UNCLOSED_CONTAINER_IO = 25;
   DELETE_ON_OPEN_CONTAINER = 26;
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
index c5c51a3..2f37344 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
@@ -126,7 +126,6 @@ public class HddsDispatcher implements ContainerDispatcher, 
Auditor {
 case CONTAINER_UNHEALTHY:
 case CLOSED_CONTAINER_IO:
 case DELETE_ON_OPEN_CONTAINER:
-case ERROR_CONTAINER_NOT_EMPTY:
   return true;
 default:
   return false;
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
index 58e3383..89f09fd 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
@@ -50,10 +50,9 @@ public interface Container extends RwLock {
   /**
* Deletes the container.
*
-   * @param forceDelete   - whether this container should be deleted forcibly.
* @throws StorageContainerException
*/
-  void delete(boolean forceDelete) throws StorageContainerException;
+  void delete() throws StorageContainerException;
 
   /**
* Update the container.
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Handler.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Handler.java
index cd93e48..a3bb34b 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Handler.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Handler.java
@@ -151,9 +151,11 @@ public abstract class Handler {
* Deletes the given container.
*
* @param container container to be deleted
+   * @param force if this is set to true, we delete container without checking
+   * state of the container.
* @throws IOException
*/
-  public abstract void deleteContainer(Container container)
+  public abstract void deleteContainer(Container container, boolean force)
   throws IOException;
 
   public void setScmID(String scmId) {
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common

[hadoop] branch trunk updated: HDDS-1025. Handle replication of closed containers in DeadNodeHanlder. Contributed by Bharat Viswanadham.

2019-01-31 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 16195ea  HDDS-1025. Handle replication of closed containers in 
DeadNodeHanlder. Contributed by Bharat Viswanadham.
16195ea is described below

commit 16195eaee1b4a7620bc018f48e9c24fc5fc7cc02
Author: Yiqun Lin 
AuthorDate: Fri Feb 1 11:34:51 2019 +0800

HDDS-1025. Handle replication of closed containers in DeadNodeHanlder. 
Contributed by Bharat Viswanadham.
---
 .../hadoop/hdds/scm/node/DeadNodeHandler.java  | 28 
 .../java/org/apache/hadoop/hdds/scm/TestUtils.java | 15 +++
 .../hadoop/hdds/scm/node/TestDeadNodeHandler.java  | 51 ++
 3 files changed, 77 insertions(+), 17 deletions(-)

diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
index 43f0167..8e71399 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.hdds.scm.node;
 import java.util.Set;
 
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.scm.container.ContainerException;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
 import org.apache.hadoop.hdds.scm.container.ContainerInfo;
@@ -81,8 +82,6 @@ public class DeadNodeHandler implements 
EventHandler {
   try {
 final ContainerInfo container = containerManager.getContainer(id);
 // TODO: For open containers, trigger close on other nodes
-// TODO: Check replica count and call replication manager
-// on these containers.
 if (!container.isOpen()) {
   Set replicas = containerManager
   .getContainerReplicas(id);
@@ -92,6 +91,9 @@ public class DeadNodeHandler implements 
EventHandler {
   .ifPresent(replica -> {
 try {
   containerManager.removeContainerReplica(id, replica);
+  ContainerInfo containerInfo =
+  containerManager.getContainer(id);
+  replicateIfNeeded(containerInfo, publisher);
 } catch (ContainerException ex) {
   LOG.warn("Exception while removing container replica #{} " +
   "for container #{}.", replica, container, ex);
@@ -109,13 +111,21 @@ public class DeadNodeHandler implements 
EventHandler {
*/
   private void replicateIfNeeded(ContainerInfo container,
   EventPublisher publisher) throws ContainerNotFoundException {
-final int existingReplicas = containerManager
-.getContainerReplicas(container.containerID()).size();
-final int expectedReplicas = container.getReplicationFactor().getNumber();
-if (existingReplicas != expectedReplicas) {
-  publisher.fireEvent(SCMEvents.REPLICATE_CONTAINER,
-  new ReplicationRequest(
-  container.getContainerID(), existingReplicas, expectedReplicas));
+// Replicate only closed and Quasi closed containers
+if (container.getState() == HddsProtos.LifeCycleState.CLOSED ||
+container.getState() == HddsProtos.LifeCycleState.QUASI_CLOSED) {
+  final int existingReplicas = containerManager
+  .getContainerReplicas(container.containerID()).size();
+  final int expectedReplicas = 
container.getReplicationFactor().getNumber();
+  if (existingReplicas != expectedReplicas) {
+LOG.debug("Replicate Request fired for container {}, existing " +
+"replica count {}, expected replica count {}",
+container.getContainerID(), existingReplicas, expectedReplicas);
+publisher.fireEvent(SCMEvents.REPLICATE_CONTAINER,
+new ReplicationRequest(
+container.getContainerID(), existingReplicas,
+expectedReplicas));
+  }
 }
   }
 
diff --git 
a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestUtils.java
 
b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestUtils.java
index e99739c..0f9a5e4 100644
--- 
a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestUtils.java
+++ 
b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestUtils.java
@@ -446,4 +446,19 @@ public final class TestUtils {
 id, HddsProtos.LifeCycleEvent.CLOSE);
 
   }
+
+  /**
+   * Move the container to quasi-closed state.
+   * @param containerManager
+   * @param id
+   * @throws IOException
+   */
+  public static void quasiCloseContainer(ContainerManager containerManager

[hadoop] branch trunk updated: HDDS-1024. Handle DeleteContainerCommand in the SCMDatanodeProtocolServer. Contributed by Bharat Viswanadham.

2019-01-29 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d583cc4  HDDS-1024. Handle DeleteContainerCommand in the 
SCMDatanodeProtocolServer. Contributed by Bharat Viswanadham.
d583cc4 is described below

commit d583cc45c69d0363790efbc7b15d339e4492891f
Author: Yiqun Lin 
AuthorDate: Wed Jan 30 13:56:28 2019 +0800

HDDS-1024. Handle DeleteContainerCommand in the SCMDatanodeProtocolServer. 
Contributed by Bharat Viswanadham.
---
 .../hdds/scm/server/SCMDatanodeProtocolServer.java |   9 +
 .../commandhandler/TestDeleteContainerHandler.java | 209 +
 .../statemachine/commandhandler/package-info.java  |  21 +++
 3 files changed, 239 insertions(+)

diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
index 06a4a86..3030aa7 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
@@ -58,6 +58,9 @@ import static org.apache.hadoop.hdds.protocol.proto
 .StorageContainerDatanodeProtocolProtos.SCMCommandProto
 .Type.deleteBlocksCommand;
 import static org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMCommandProto
+.Type.deleteContainerCommand;
+import static org.apache.hadoop.hdds.protocol.proto
 .StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type
 .replicateContainerCommand;
 import static org.apache.hadoop.hdds.protocol.proto
@@ -87,6 +90,7 @@ import org.apache.hadoop.ozone.audit.SCMAction;
 import org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol;
 import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
 import org.apache.hadoop.ozone.protocol.commands.DeleteBlocksCommand;
+import org.apache.hadoop.ozone.protocol.commands.DeleteContainerCommand;
 import org.apache.hadoop.ozone.protocol.commands.RegisteredCommand;
 import org.apache.hadoop.ozone.protocol.commands.ReplicateContainerCommand;
 import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
@@ -335,6 +339,11 @@ public class SCMDatanodeProtocolServer implements
   .setCloseContainerCommandProto(
   ((CloseContainerCommand) cmd).getProto())
   .build();
+case deleteContainerCommand:
+  return builder.setCommandType(deleteContainerCommand)
+  .setDeleteContainerCommandProto(
+  ((DeleteContainerCommand) cmd).getProto())
+  .build();
 case replicateContainerCommand:
   return builder
   .setCommandType(replicateContainerCommand)
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestDeleteContainerHandler.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestDeleteContainerHandler.java
new file mode 100644
index 000..232ab0ac
--- /dev/null
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestDeleteContainerHandler.java
@@ -0,0 +1,209 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.container.common.statemachine.commandhandler;
+
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.ozone.HddsDatanodeService;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.client.

[hadoop] branch trunk updated: HDDS-1022. Add cmd type in getCommandResponse in SCMDatanodeProtocolServer. Contributed by Bharat Viswanadham.

2019-01-28 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2d06112  HDDS-1022. Add cmd type in getCommandResponse in 
SCMDatanodeProtocolServer. Contributed by Bharat Viswanadham.
2d06112 is described below

commit 2d06112b74c1f797db1f2b4884c1de4485b15dfa
Author: Yiqun Lin 
AuthorDate: Tue Jan 29 11:26:41 2019 +0800

HDDS-1022. Add cmd type in getCommandResponse in SCMDatanodeProtocolServer. 
Contributed by Bharat Viswanadham.
---
 .../org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java   | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
index 6a3552e..06a4a86 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
@@ -342,7 +342,8 @@ public class SCMDatanodeProtocolServer implements
   ((ReplicateContainerCommand)cmd).getProto())
   .build();
 default:
-  throw new IllegalArgumentException("Not implemented");
+  throw new IllegalArgumentException("Scm command " +
+  cmd.getType().toString() + " is not implemented");
 }
   }
 





[hadoop] branch trunk updated: HDDS-974. Add getServiceAddress method to ServiceInfo and use it in TestOzoneShell. Contributed by Doroszlai, Attila.

2019-01-28 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 8326450  HDDS-974. Add getServiceAddress method to ServiceInfo and use 
it in TestOzoneShell. Contributed by Doroszlai, Attila.
8326450 is described below

commit 8326450bca1b15383bf1c1a3b4e86df1ca027495
Author: Yiqun Lin 
AuthorDate: Mon Jan 28 17:03:12 2019 +0800

HDDS-974. Add getServiceAddress method to ServiceInfo and use it in 
TestOzoneShell. Contributed by Doroszlai, Attila.
---
 .../apache/hadoop/ozone/client/rpc/RpcClient.java  |  5 +-
 .../hadoop/ozone/om/helpers/ServiceInfo.java   | 14 -
 .../ozone/ozShell/TestOzoneDatanodeShell.java  | 34 +--
 .../hadoop/ozone/ozShell/TestOzoneShell.java   | 66 --
 4 files changed, 40 insertions(+), 79 deletions(-)

diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
index 538f69b..8f398e9 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
@@ -80,7 +80,6 @@ import org.apache.ratis.protocol.ClientId;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import javax.ws.rs.HEAD;
 import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.util.*;
@@ -206,8 +205,8 @@ public class RpcClient implements ClientProtocol {
 ServiceInfo scmInfo = services.stream().filter(
 a -> a.getNodeType().equals(HddsProtos.NodeType.SCM))
 .collect(Collectors.toList()).get(0);
-return NetUtils.createSocketAddr(scmInfo.getHostname()+ ":" +
-scmInfo.getPort(ServicePort.Type.RPC));
+return NetUtils.createSocketAddr(
+scmInfo.getServiceAddress(ServicePort.Type.RPC));
   }
 
   @Override
diff --git 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/ServiceInfo.java
 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/ServiceInfo.java
index 9b03aef..fb49644 100644
--- 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/ServiceInfo.java
+++ 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/ServiceInfo.java
@@ -109,11 +109,11 @@ public final class ServiceInfo {
   }
 
   /**
-   * Returns the port for given type, null if the service doesn't support
-   * the type.
+   * Returns the port for given type.
*
* @param type the type of port.
* ex: RPC, HTTP, HTTPS, etc..
+   * @throws NullPointerException if the service doesn't support the given type
*/
   @JsonIgnore
   public int getPort(ServicePort.Type type) {
@@ -121,6 +121,16 @@ public final class ServiceInfo {
   }
 
   /**
+   * Returns the address of the service (hostname with port of the given type).
+   * @param portType the type of port, eg. RPC, HTTP, etc.
+   * @return service address (hostname with port of the given type)
+   */
+  @JsonIgnore
+  public String getServiceAddress(ServicePort.Type portType) {
+return hostname + ":" + getPort(portType);
+  }
+
+  /**
* Converts {@link ServiceInfo} to OzoneManagerProtocolProtos.ServiceInfo.
*
* @return OzoneManagerProtocolProtos.ServiceInfo
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneDatanodeShell.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneDatanodeShell.java
index a45dee8..9ba8529 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneDatanodeShell.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneDatanodeShell.java
@@ -21,25 +21,19 @@ import com.google.common.base.Strings;
 
 import java.io.ByteArrayOutputStream;
 import java.io.File;
-import java.io.IOException;
 import java.io.PrintStream;
-import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
-import java.util.stream.Collectors;
 
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.hdds.cli.MissingSubcommandException;
 import org.apache.hadoop.hdds.client.ReplicationFactor;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.ozone.HddsDatanodeService;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
-import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
 import org.apache.hadoop.ozone.client.rest.RestClient;
 import org.apache.hadoop.ozone.client.rpc.RpcClient;
-import org.apache.hadoop.ozone.om.helpers.ServiceInfo;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.After;
 import org.juni

[hadoop] branch HDFS-13891 updated: HDFS-14209. RBF: setQuota() through router is working for only the mount Points under the Source column in MountTable. Contributed by Shubham Dewan.

2019-01-23 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new 7fe0b06  HDFS-14209. RBF: setQuota() through router is working for 
only the mount Points under the Source column in MountTable. Contributed by 
Shubham Dewan.
7fe0b06 is described below

commit 7fe0b0684c67d2a766eebce663ac145e63fe4acb
Author: Yiqun Lin 
AuthorDate: Wed Jan 23 22:59:43 2019 +0800

HDFS-14209. RBF: setQuota() through router is working for only the mount 
Points under the Source column in MountTable. Contributed by Shubham Dewan.
---
 .../hdfs/server/federation/router/Quota.java   |  7 -
 .../server/federation/router/TestRouterQuota.java  | 32 +-
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
index cfb538f..a6f5bab 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
@@ -216,6 +216,11 @@ public class Quota {
 locations.addAll(rpcServer.getLocationsForPath(childPath, true, 
false));
   }
 }
-return locations;
+if (locations.size() >= 1) {
+  return locations;
+} else {
+  locations.addAll(rpcServer.getLocationsForPath(path, true, false));
+  return locations;
+}
   }
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
index 656b401..034023c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
@@ -755,4 +755,34 @@ public class TestRouterQuota {
 assertEquals(HdfsConstants.QUOTA_RESET, subClusterQuota.getQuota());
 assertEquals(HdfsConstants.QUOTA_RESET, subClusterQuota.getSpaceQuota());
   }
-}
\ No newline at end of file
+
+  @Test
+  public void testSetQuotaNotMountTable() throws Exception {
+long nsQuota = 5;
+long ssQuota = 100;
+final FileSystem nnFs1 = nnContext1.getFileSystem();
+
+// setQuota should run for any directory
+MountTable mountTable1 = MountTable.newInstance("/setquotanmt",
+Collections.singletonMap("ns0", "/testdir16"));
+
+addMountTable(mountTable1);
+
+// Add a directory not present in mount table.
+nnFs1.mkdirs(new Path("/testdir16/testdir17"));
+
+routerContext.getRouter().getRpcServer().setQuota("/setquotanmt/testdir17",
+nsQuota, ssQuota, null);
+
+RouterQuotaUpdateService updateService = routerContext.getRouter()
+.getQuotaCacheUpdateService();
+// ensure setQuota RPC call was invoked
+updateService.periodicInvoke();
+
+ClientProtocol client1 = nnContext1.getClient().getNamenode();
+final QuotaUsage quota1 = client1.getQuotaUsage("/testdir16/testdir17");
+
+assertEquals(nsQuota, quota1.getQuota());
+assertEquals(ssQuota, quota1.getSpaceQuota());
+  }
+}





[hadoop] branch HDFS-13891 updated: HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo Goiri.

2019-01-14 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new c9a6545  HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo 
Goiri.
c9a6545 is described below

commit c9a65456fc64b16289d62d72a22cf7890605f024
Author: Yiqun Lin 
AuthorDate: Tue Jan 15 14:21:33 2019 +0800

HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo Goiri.
---
 .../hdfs/server/federation/router/Quota.java   |  6 ++--
 .../federation/router/RouterClientProtocol.java| 22 +++---
 .../federation/router/RouterQuotaManager.java  |  2 +-
 .../router/RouterQuotaUpdateService.java   |  6 ++--
 .../server/federation/router/RouterQuotaUsage.java | 35 --
 5 files changed, 38 insertions(+), 33 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
index 5d0309f..cfb538f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
@@ -163,7 +163,7 @@ public class Quota {
 long ssCount = 0;
 long nsQuota = HdfsConstants.QUOTA_RESET;
 long ssQuota = HdfsConstants.QUOTA_RESET;
-boolean hasQuotaUnSet = false;
+boolean hasQuotaUnset = false;
 
 for (Map.Entry entry : results.entrySet()) {
   RemoteLocation loc = entry.getKey();
@@ -172,7 +172,7 @@ public class Quota {
 // If quota is not set in real FileSystem, the usage
 // value will return -1.
 if (usage.getQuota() == -1 && usage.getSpaceQuota() == -1) {
-  hasQuotaUnSet = true;
+  hasQuotaUnset = true;
 }
 nsQuota = usage.getQuota();
 ssQuota = usage.getSpaceQuota();
@@ -189,7 +189,7 @@ public class Quota {
 
 QuotaUsage.Builder builder = new QuotaUsage.Builder()
 .fileAndDirectoryCount(nsCount).spaceConsumed(ssCount);
-if (hasQuotaUnSet) {
+if (hasQuotaUnset) {
   builder.quota(HdfsConstants.QUOTA_RESET)
   .spaceQuota(HdfsConstants.QUOTA_RESET);
 } else {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 894f5be..0962c0d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -20,7 +20,7 @@ package org.apache.hadoop.hdfs.server.federation.router;
 import static 
org.apache.hadoop.hdfs.server.federation.router.FederationUtil.updateMountPointStatus;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.crypto.CryptoProtocolVersion;
-import org.apache.hadoop.fs.BatchedRemoteIterator;
+import org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries;
 import org.apache.hadoop.fs.CacheFlag;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.CreateFlag;
@@ -1140,7 +1140,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries 
listCacheDirectives(
+  public BatchedEntries listCacheDirectives(
   long prevId, CacheDirectiveInfo filter) throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
 return null;
@@ -1162,7 +1162,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries 
listCachePools(String prevKey)
+  public BatchedEntries listCachePools(String prevKey)
   throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
 return null;
@@ -1273,7 +1273,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries 
listEncryptionZones(long prevId)
+  public BatchedEntries listEncryptionZones(long prevId)
   throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
 return null;
@@ -1286,7 +1286,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries 
listReencryptionStatus(
+  public BatchedEntries listReencryptionStatus(
   long prevId) throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
 ret

[hadoop] branch HDFS-13891 updated: HDFS-14150. RBF: Quotas of the sub-cluster should be removed when removing the mount point. Contributed by Takanobu Asanuma.

2019-01-09 Thread yqlin
This is an automated email from the ASF dual-hosted git repository.

yqlin pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new 8c245e7  HDFS-14150. RBF: Quotas of the sub-cluster should be removed 
when removing the mount point. Contributed by Takanobu Asanuma.
8c245e7 is described below

commit 8c245e75b4f8020cd3353c0263ea83704d1096c4
Author: Yiqun Lin 
AuthorDate: Wed Jan 9 17:18:43 2019 +0800

HDFS-14150. RBF: Quotas of the sub-cluster should be removed when removing 
the mount point. Contributed by Takanobu Asanuma.
---
 .../federation/router/RouterAdminServer.java   | 23 +++
 .../src/main/resources/hdfs-rbf-default.xml|  4 +-
 .../src/site/markdown/HDFSRouterFederation.md  |  4 +-
 .../server/federation/router/TestRouterQuota.java  | 48 +-
 4 files changed, 67 insertions(+), 12 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
index 5bb7751..18c19e0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
@@ -250,23 +250,25 @@ public class RouterAdminServer extends AbstractService
 
 MountTable mountTable = request.getEntry();
 if (mountTable != null && router.isQuotaEnabled()) {
-  synchronizeQuota(mountTable);
+  synchronizeQuota(mountTable.getSourcePath(),
+  mountTable.getQuota().getQuota(),
+  mountTable.getQuota().getSpaceQuota());
 }
 return response;
   }
 
   /**
* Synchronize the quota value across mount table and subclusters.
-   * @param mountTable Quota set in given mount table.
+   * @param path Source path in given mount table.
+   * @param nsQuota Name quota definition in given mount table.
+   * @param ssQuota Space quota definition in given mount table.
* @throws IOException
*/
-  private void synchronizeQuota(MountTable mountTable) throws IOException {
-String path = mountTable.getSourcePath();
-long nsQuota = mountTable.getQuota().getQuota();
-long ssQuota = mountTable.getQuota().getSpaceQuota();
-
-if (nsQuota != HdfsConstants.QUOTA_DONT_SET
-|| ssQuota != HdfsConstants.QUOTA_DONT_SET) {
+  private void synchronizeQuota(String path, long nsQuota, long ssQuota)
+  throws IOException {
+if (router.isQuotaEnabled() &&
+(nsQuota != HdfsConstants.QUOTA_DONT_SET
+|| ssQuota != HdfsConstants.QUOTA_DONT_SET)) {
   HdfsFileStatus ret = this.router.getRpcServer().getFileInfo(path);
   if (ret != null) {
 this.router.getRpcServer().getQuotaModule().setQuota(path, nsQuota,
@@ -278,6 +280,9 @@ public class RouterAdminServer extends AbstractService
   @Override
   public RemoveMountTableEntryResponse removeMountTableEntry(
   RemoveMountTableEntryRequest request) throws IOException {
+// clear sub-cluster's quota definition
+synchronizeQuota(request.getSrcPath(), HdfsConstants.QUOTA_RESET,
+HdfsConstants.QUOTA_RESET);
 return getMountTableStore().removeMountTableEntry(request);
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index 72f6c2f..20ae778 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -447,7 +447,9 @@
 dfs.federation.router.quota.enable
 false
 
-  Set to true to enable quota system in Router.
+  Set to true to enable quota system in Router. When it's enabled, setting
+  or clearing sub-cluster's quota directly is not recommended since Router
+  Admin server will override sub-cluster's quota with global quota.
 
   
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index adc4383..959cd63 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -143,6 +143,8 @@ For performance reasons, the Router caches the quota usage 
and updates it period
 will be used for quota-verification during each WRITE RPC call invoked in 
RouterRPCSever. See [HDFS Quotas Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html)
 for the quota detail.
 
+Note: When global quota is enabled, setting or clearing sub-cluster's quota 
directly is

hadoop git commit: HDFS-13946. Log longest FSN write/read lock held stack trace.

2018-12-22 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 763e96ea2 -> feb2664ac


HDFS-13946. Log longest FSN write/read lock held stack trace.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/feb2664a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/feb2664a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/feb2664a

Branch: refs/heads/trunk
Commit: feb2664ac4b246ca87fc4997a941190f00026dff
Parents: 763e96e
Author: Yiqun Lin 
Authored: Sat Dec 22 23:09:59 2018 +0800
Committer: Yiqun Lin 
Committed: Sat Dec 22 23:09:59 2018 +0800

--
 .../apache/hadoop/log/LogThrottlingHelper.java  | 16 
 .../hadoop/log/TestLogThrottlingHelper.java |  3 +
 .../hdfs/server/namenode/FSNamesystemLock.java  | 93 
 .../server/namenode/TestFSNamesystemLock.java   | 41 +++--
 4 files changed, 126 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/feb2664a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogThrottlingHelper.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogThrottlingHelper.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogThrottlingHelper.java
index 41bee04..848f123 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogThrottlingHelper.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogThrottlingHelper.java
@@ -273,6 +273,22 @@ public class LogThrottlingHelper {
   }
 
   /**
+   * Return the summary information for given index.
+   *
+   * @param recorderName The name of the recorder.
+   * @param idx The index value.
+   * @return The summary information.
+   */
+  public SummaryStatistics getCurrentStats(String recorderName, int idx) {
+LoggingAction currentLog = currentLogs.get(recorderName);
+if (currentLog != null) {
+  return currentLog.getStats(idx);
+}
+
+return null;
+  }
+
+  /**
* A standard log action which keeps track of all of the values which have
* been logged. This is also used for internal bookkeeping via its private
* fields and methods; it will maintain whether or not it is ready to be

http://git-wip-us.apache.org/repos/asf/hadoop/blob/feb2664a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogThrottlingHelper.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogThrottlingHelper.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogThrottlingHelper.java
index a675d0a..d0eeea3 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogThrottlingHelper.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogThrottlingHelper.java
@@ -167,6 +167,9 @@ public class TestLogThrottlingHelper {
 assertEquals(2.0, bar.getStats(0).getMean(), 0.01);
 assertEquals(3.0, baz.getStats(0).getMean(), 0.01);
 assertEquals(3.0, baz.getStats(1).getMean(), 0.01);
+
+assertEquals(2.0, helper.getCurrentStats("bar", 0).getMax(), 0);
+assertEquals(3.0, helper.getCurrentStats("baz", 0).getMax(), 0);
   }
 
 }
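A minimal usage sketch of the new accessor, in the spirit of the test above; the reporter class and recorder name are assumptions and not part of this commit.

    import org.apache.commons.math3.stat.descriptive.SummaryStatistics;
    import org.apache.hadoop.log.LogThrottlingHelper;

    // Hypothetical caller: read back the largest value recorded under a named
    // recorder since the last log emission, e.g. the longest lock hold time.
    final class LockHoldReporter {
      /** Returns the longest recorded value, or -1 if nothing was recorded yet. */
      static long longestRecorded(LogThrottlingHelper helper, String recorderName) {
        SummaryStatistics stats = helper.getCurrentStats(recorderName, 0);
        return stats == null ? -1L : (long) stats.getMax();
      }
    }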

http://git-wip-us.apache.org/repos/asf/hadoop/blob/feb2664a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
index 7c28465..ebf5178 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
@@ -21,10 +21,12 @@ package org.apache.hadoop.hdfs.server.namenode;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
 import java.util.concurrent.locks.Condition;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.apache.commons.math3.stat.descriptive.SummaryStatistics;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.log.LogThrottlingHelper;
 import org.apache.hadoop.metrics2.lib.MutableRatesWithAggregation;
@@ -97,8 +99,18 @@ class 

[1/2] hadoop git commit: HDFS-13443. RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries. Contributed by Mohammad Arshad.

2018-12-18 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-13891 7d7e88c30 -> c9ebaf2d3


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c9ebaf2d/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTableCacheRefresh.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTableCacheRefresh.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTableCacheRefresh.java
new file mode 100644
index 000..c90e614
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTableCacheRefresh.java
@@ -0,0 +1,396 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.curator.test.TestingServer;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.hdfs.server.federation.FederationTestUtils;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster;
+import 
org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster.RouterContext;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.store.RouterStore;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryResponse;
+import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.apache.hadoop.service.Service.STATE;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Time;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * This test class verifies that mount table cache is updated on all the 
routers
+ * when MountTableRefreshService is enabled and there is a change in mount 
table
+ * entries.
+ */
+public class TestRouterMountTableCacheRefresh {
+  private static TestingServer curatorTestingServer;
+  private static MiniRouterDFSCluster cluster;
+  private static RouterContext routerContext;
+  private static MountTableManager mountTableManager;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+curatorTestingServer = new TestingServer();
+curatorTestingServer.start();
+final String connectString = curatorTestingServer.getConnectString();
+int numNameservices = 2;
+cluster = new MiniRouterDFSCluster(false, numNameservices);
+Configuration conf = new RouterConfigBuilder().refreshCache().admin().rpc()
+.heartbeat().build();
+conf.setClass(RBFConfigKeys.FEDERATION_FILE_RESOLVER_CLIENT_CLASS,
+

[2/2] hadoop git commit: HDFS-13443. RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries. Contributed by Mohammad Arshad.

2018-12-18 Thread yqlin
HDFS-13443. RBF: Update mount table cache immediately after changing 
(add/update/remove) mount table entries. Contributed by Mohammad Arshad.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c9ebaf2d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c9ebaf2d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c9ebaf2d

Branch: refs/heads/HDFS-13891
Commit: c9ebaf2d3f8204c31ff9796b17af71fc2be9a6bd
Parents: 7d7e88c
Author: Yiqun Lin 
Authored: Wed Dec 19 11:40:00 2018 +0800
Committer: Yiqun Lin 
Committed: Wed Dec 19 11:40:00 2018 +0800

--
 ...uterAdminProtocolServerSideTranslatorPB.java |  23 ++
 .../RouterAdminProtocolTranslatorPB.java|  21 +
 .../federation/resolver/MountTableManager.java  |  16 +
 .../router/MountTableRefresherService.java  | 289 ++
 .../router/MountTableRefresherThread.java   |  96 +
 .../server/federation/router/RBFConfigKeys.java |  25 ++
 .../hdfs/server/federation/router/Router.java   |  53 ++-
 .../federation/router/RouterAdminServer.java|  28 +-
 .../router/RouterHeartbeatService.java  |   5 +
 .../federation/store/MountTableStore.java   |  24 ++
 .../federation/store/StateStoreUtils.java   |  26 ++
 .../store/impl/MountTableStoreImpl.java |  18 +
 .../RefreshMountTableEntriesRequest.java|  34 ++
 .../RefreshMountTableEntriesResponse.java   |  44 +++
 .../RefreshMountTableEntriesRequestPBImpl.java  |  67 
 .../RefreshMountTableEntriesResponsePBImpl.java |  74 
 .../federation/store/records/RouterState.java   |   4 +
 .../records/impl/pb/RouterStatePBImpl.java  |  10 +
 .../hdfs/tools/federation/RouterAdmin.java  |  33 +-
 .../src/main/proto/FederationProtocol.proto |   8 +
 .../src/main/proto/RouterProtocol.proto |   5 +
 .../src/main/resources/hdfs-rbf-default.xml |  34 ++
 .../src/site/markdown/HDFSRouterFederation.md   |   9 +
 .../server/federation/FederationTestUtils.java  |  27 ++
 .../server/federation/RouterConfigBuilder.java  |  12 +
 .../federation/router/TestRouterAdminCLI.java   |  25 +-
 .../TestRouterMountTableCacheRefresh.java   | 396 +++
 .../src/site/markdown/HDFSCommands.md   |   2 +
 28 files changed, 1402 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c9ebaf2d/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
index 6341ebd..a31c46d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
@@ -37,6 +37,8 @@ import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProt
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.UpdateMountTableEntryRequestProto;
@@ -58,6 +60,8 @@ import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeReques
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeResponse;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeRequest;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeResponse;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
 import 

hadoop git commit: HDFS-13869. RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics. Contributed by Ranith Sardar.

2018-12-16 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-13891 47efde24f -> a6f61f9b3


HDFS-13869. RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics. 
Contributed by Ranith Sardar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a6f61f9b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a6f61f9b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a6f61f9b

Branch: refs/heads/HDFS-13891
Commit: a6f61f9b379d5ffd2b7d6448e0644e285ede920a
Parents: 47efde2
Author: Yiqun Lin 
Authored: Mon Dec 17 12:35:07 2018 +0800
Committer: Yiqun Lin 
Committed: Mon Dec 17 12:35:07 2018 +0800

--
 .../federation/metrics/NamenodeBeanMetrics.java | 149 ---
 .../hdfs/server/federation/router/Router.java   |   8 +-
 .../server/federation/router/TestRouter.java|  14 ++
 3 files changed, 147 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6f61f9b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
index 64df10c..25ec27c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
@@ -168,8 +168,12 @@ public class NamenodeBeanMetrics
 }
   }
 
-  private FederationMetrics getFederationMetrics() {
-return this.router.getMetrics();
+  private FederationMetrics getFederationMetrics() throws IOException {
+FederationMetrics metrics = getRouter().getMetrics();
+if (metrics == null) {
+  throw new IOException("Federated metrics is not initialized");
+}
+return metrics;
   }
 
   /
@@ -188,22 +192,42 @@ public class NamenodeBeanMetrics
 
   @Override
   public long getUsed() {
-return getFederationMetrics().getUsedCapacity();
+try {
+  return getFederationMetrics().getUsedCapacity();
+} catch (IOException e) {
+  LOG.debug("Failed to get the used capacity", e.getMessage());
+}
+return 0;
   }
 
   @Override
   public long getFree() {
-return getFederationMetrics().getRemainingCapacity();
+try {
+  return getFederationMetrics().getRemainingCapacity();
+} catch (IOException e) {
+  LOG.debug("Failed to get remaining capacity", e.getMessage());
+}
+return 0;
   }
 
   @Override
   public long getTotal() {
-return getFederationMetrics().getTotalCapacity();
+try {
+  return getFederationMetrics().getTotalCapacity();
+} catch (IOException e) {
+  LOG.debug("Failed to Get total capacity", e.getMessage());
+}
+return 0;
   }
 
   @Override
   public long getProvidedCapacity() {
-return getFederationMetrics().getProvidedSpace();
+try {
+  return getFederationMetrics().getProvidedSpace();
+} catch (IOException e) {
+  LOG.debug("Failed to get provided capacity", e.getMessage());
+}
+return 0;
   }
 
   @Override
@@ -261,39 +285,79 @@ public class NamenodeBeanMetrics
 
   @Override
   public long getTotalBlocks() {
-return getFederationMetrics().getNumBlocks();
+try {
+  return getFederationMetrics().getNumBlocks();
+} catch (IOException e) {
+  LOG.debug("Failed to get number of blocks", e.getMessage());
+}
+return 0;
   }
 
   @Override
   public long getNumberOfMissingBlocks() {
-return getFederationMetrics().getNumOfMissingBlocks();
+try {
+  return getFederationMetrics().getNumOfMissingBlocks();
+} catch (IOException e) {
+  LOG.debug("Failed to get number of missing blocks", e.getMessage());
+}
+return 0;
   }
 
   @Override
   @Deprecated
   public long getPendingReplicationBlocks() {
-return getFederationMetrics().getNumOfBlocksPendingReplication();
+try {
+  return getFederationMetrics().getNumOfBlocksPendingReplication();
+} catch (IOException e) {
+  LOG.debug("Failed to get number of blocks pending replica",
+  e.getMessage());
+}
+return 0;
   }
 
   @Override
   public long getPendingReconstructionBlocks() {
-return getFederationMetrics().getNumOfBlocksPendingReplication();
+try {
+  return getFederationMetrics().getNumOfBlocksPendingReplication();
+} catch (IOException e) {
+  LOG.debug("Failed to get number of blocks pending replica",
+ 

hadoop git commit: HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui.

2018-12-04 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-13891 4df886a64 -> 9c1594708


HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei 
Hui.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9c159470
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9c159470
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9c159470

Branch: refs/heads/HDFS-13891
Commit: 9c1594708913403784283a29b94f260448969615
Parents: 4df886a
Author: Yiqun Lin 
Authored: Wed Dec 5 11:44:38 2018 +0800
Committer: Yiqun Lin 
Committed: Wed Dec 5 11:44:38 2018 +0800

--
 .../federation/router/ConnectionManager.java| 20 ---
 .../federation/router/ConnectionPool.java   | 14 -
 .../server/federation/router/RBFConfigKeys.java |  5 ++
 .../src/main/resources/hdfs-rbf-default.xml |  8 +++
 .../router/TestConnectionManager.java   | 55 +---
 5 files changed, 85 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c159470/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index fa2bf94..745 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -49,10 +49,6 @@ public class ConnectionManager {
   private static final Logger LOG =
   LoggerFactory.getLogger(ConnectionManager.class);
 
-  /** Minimum amount of active connections: 50%. */
-  protected static final float MIN_ACTIVE_RATIO = 0.5f;
-
-
   /** Configuration for the connection manager, pool and sockets. */
   private final Configuration conf;
 
@@ -60,6 +56,8 @@ public class ConnectionManager {
   private final int minSize = 1;
   /** Max number of connections per user + nn. */
   private final int maxSize;
+  /** Min ratio of active connections per user + nn. */
+  private final float minActiveRatio;
 
   /** How often we close a pool for a particular user + nn. */
   private final long poolCleanupPeriodMs;
@@ -96,10 +94,13 @@ public class ConnectionManager {
   public ConnectionManager(Configuration config) {
 this.conf = config;
 
-// Configure minimum and maximum connection pools
+// Configure minimum, maximum and active connection pools
 this.maxSize = this.conf.getInt(
 RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE,
 RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT);
+this.minActiveRatio = this.conf.getFloat(
+RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO,
+RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT);
 
 // Map with the connections indexed by UGI and Namenode
 this.pools = new HashMap<>();
@@ -203,7 +204,8 @@ public class ConnectionManager {
 pool = this.pools.get(connectionId);
 if (pool == null) {
   pool = new ConnectionPool(
-  this.conf, nnAddress, ugi, this.minSize, this.maxSize, protocol);
+  this.conf, nnAddress, ugi, this.minSize, this.maxSize,
+  this.minActiveRatio, protocol);
   this.pools.put(connectionId, pool);
 }
   } finally {
@@ -326,8 +328,9 @@ public class ConnectionManager {
   long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
   int total = pool.getNumConnections();
   int active = pool.getNumActiveConnections();
+  float poolMinActiveRatio = pool.getMinActiveRatio();
   if (timeSinceLastActive > connectionCleanupPeriodMs ||
-  active < MIN_ACTIVE_RATIO * total) {
+  active < poolMinActiveRatio * total) {
 // Remove and close 1 connection
List<ConnectionContext> conns = pool.removeConnections(1);
 for (ConnectionContext conn : conns) {
@@ -412,8 +415,9 @@ public class ConnectionManager {
   try {
 int total = pool.getNumConnections();
 int active = pool.getNumActiveConnections();
+float poolMinActiveRatio = pool.getMinActiveRatio();
 if (pool.getNumConnections() < pool.getMaxSize() &&
-active >= MIN_ACTIVE_RATIO * total) {
+active >= poolMinActiveRatio * total) {
   ConnectionContext conn = pool.newConnection();
   pool.addConnection(conn);
 } else {
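A rough configuration sketch for the new knob, using only the RBFConfigKeys constant shown in this diff; the 0.6f value and the helper class are illustrative assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;

    // Hypothetical snippet (not part of the commit): raise the minimum fraction of
    // connections that must be active before a pool grows, and below which idle
    // connections are reclaimed, from the former hard-coded 0.5f.
    final class MinActiveRatioExample {
      static Configuration withCustomRatio() {
        Configuration conf = new Configuration();
        conf.setFloat(
            RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO, 0.6f);
        return conf;
      }
    }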


hadoop git commit: Revert "HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui."

2018-12-04 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-13891 7c0d6f65f -> 4df886a64


Revert "HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed 
by Fei Hui."

This reverts commit 7c0d6f65fde12ead91ed7c706521ad1d3dc995f8.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4df886a6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4df886a6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4df886a6

Branch: refs/heads/HDFS-13891
Commit: 4df886a64d7595000760556798a33cf017fffb07
Parents: 7c0d6f6
Author: Yiqun Lin 
Authored: Tue Dec 4 22:16:00 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Dec 4 22:16:00 2018 +0800

--
 .../federation/router/ConnectionManager.java| 20 +++-
 .../federation/router/ConnectionPool.java   | 14 +-
 .../server/federation/router/RBFConfigKeys.java |  5 --
 .../src/main/resources/hdfs-rbf-default.xml |  8 ---
 .../router/TestConnectionManager.java   | 51 +++-
 5 files changed, 15 insertions(+), 83 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4df886a6/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index 745..fa2bf94 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -49,6 +49,10 @@ public class ConnectionManager {
   private static final Logger LOG =
   LoggerFactory.getLogger(ConnectionManager.class);
 
+  /** Minimum amount of active connections: 50%. */
+  protected static final float MIN_ACTIVE_RATIO = 0.5f;
+
+
   /** Configuration for the connection manager, pool and sockets. */
   private final Configuration conf;
 
@@ -56,8 +60,6 @@ public class ConnectionManager {
   private final int minSize = 1;
   /** Max number of connections per user + nn. */
   private final int maxSize;
-  /** Min ratio of active connections per user + nn. */
-  private final float minActiveRatio;
 
   /** How often we close a pool for a particular user + nn. */
   private final long poolCleanupPeriodMs;
@@ -94,13 +96,10 @@ public class ConnectionManager {
   public ConnectionManager(Configuration config) {
 this.conf = config;
 
-// Configure minimum, maximum and active connection pools
+// Configure minimum and maximum connection pools
 this.maxSize = this.conf.getInt(
 RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE,
 RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT);
-this.minActiveRatio = this.conf.getFloat(
-RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO,
-RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT);
 
 // Map with the connections indexed by UGI and Namenode
 this.pools = new HashMap<>();
@@ -204,8 +203,7 @@ public class ConnectionManager {
 pool = this.pools.get(connectionId);
 if (pool == null) {
   pool = new ConnectionPool(
-  this.conf, nnAddress, ugi, this.minSize, this.maxSize,
-  this.minActiveRatio, protocol);
+  this.conf, nnAddress, ugi, this.minSize, this.maxSize, protocol);
   this.pools.put(connectionId, pool);
 }
   } finally {
@@ -328,9 +326,8 @@ public class ConnectionManager {
   long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
   int total = pool.getNumConnections();
   int active = pool.getNumActiveConnections();
-  float poolMinActiveRatio = pool.getMinActiveRatio();
   if (timeSinceLastActive > connectionCleanupPeriodMs ||
-  active < poolMinActiveRatio * total) {
+  active < MIN_ACTIVE_RATIO * total) {
 // Remove and close 1 connection
List<ConnectionContext> conns = pool.removeConnections(1);
 for (ConnectionContext conn : conns) {
@@ -415,9 +412,8 @@ public class ConnectionManager {
   try {
 int total = pool.getNumConnections();
 int active = pool.getNumActiveConnections();
-float poolMinActiveRatio = pool.getMinActiveRatio();
 if (pool.getNumConnections() < pool.getMaxSize() &&
-active >= poolMinActiveRatio * total) {
+active >= MIN_ACTIVE_RATIO * total) {
   ConnectionContext conn = 

hadoop git commit: HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui.

2018-12-04 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-13891 2ddf87f7a -> 7c0d6f65f


HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei 
Hui.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7c0d6f65
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7c0d6f65
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7c0d6f65

Branch: refs/heads/HDFS-13891
Commit: 7c0d6f65fde12ead91ed7c706521ad1d3dc995f8
Parents: 2ddf87f
Author: Yiqun Lin 
Authored: Tue Dec 4 19:58:38 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Dec 4 19:58:38 2018 +0800

--
 .../federation/router/ConnectionManager.java| 20 +---
 .../federation/router/ConnectionPool.java   | 14 +-
 .../server/federation/router/RBFConfigKeys.java |  5 ++
 .../src/main/resources/hdfs-rbf-default.xml |  8 +++
 .../router/TestConnectionManager.java   | 51 +---
 5 files changed, 83 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7c0d6f65/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index fa2bf94..745 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -49,10 +49,6 @@ public class ConnectionManager {
   private static final Logger LOG =
   LoggerFactory.getLogger(ConnectionManager.class);
 
-  /** Minimum amount of active connections: 50%. */
-  protected static final float MIN_ACTIVE_RATIO = 0.5f;
-
-
   /** Configuration for the connection manager, pool and sockets. */
   private final Configuration conf;
 
@@ -60,6 +56,8 @@ public class ConnectionManager {
   private final int minSize = 1;
   /** Max number of connections per user + nn. */
   private final int maxSize;
+  /** Min ratio of active connections per user + nn. */
+  private final float minActiveRatio;
 
   /** How often we close a pool for a particular user + nn. */
   private final long poolCleanupPeriodMs;
@@ -96,10 +94,13 @@ public class ConnectionManager {
   public ConnectionManager(Configuration config) {
 this.conf = config;
 
-// Configure minimum and maximum connection pools
+// Configure minimum, maximum and active connection pools
 this.maxSize = this.conf.getInt(
 RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE,
 RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT);
+this.minActiveRatio = this.conf.getFloat(
+RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO,
+RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT);
 
 // Map with the connections indexed by UGI and Namenode
 this.pools = new HashMap<>();
@@ -203,7 +204,8 @@ public class ConnectionManager {
 pool = this.pools.get(connectionId);
 if (pool == null) {
   pool = new ConnectionPool(
-  this.conf, nnAddress, ugi, this.minSize, this.maxSize, protocol);
+  this.conf, nnAddress, ugi, this.minSize, this.maxSize,
+  this.minActiveRatio, protocol);
   this.pools.put(connectionId, pool);
 }
   } finally {
@@ -326,8 +328,9 @@ public class ConnectionManager {
   long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
   int total = pool.getNumConnections();
   int active = pool.getNumActiveConnections();
+  float poolMinActiveRatio = pool.getMinActiveRatio();
   if (timeSinceLastActive > connectionCleanupPeriodMs ||
-  active < MIN_ACTIVE_RATIO * total) {
+  active < poolMinActiveRatio * total) {
 // Remove and close 1 connection
List<ConnectionContext> conns = pool.removeConnections(1);
 for (ConnectionContext conn : conns) {
@@ -412,8 +415,9 @@ public class ConnectionManager {
   try {
 int total = pool.getNumConnections();
 int active = pool.getNumActiveConnections();
+float poolMinActiveRatio = pool.getMinActiveRatio();
 if (pool.getNumConnections() < pool.getMaxSize() &&
-active >= MIN_ACTIVE_RATIO * total) {
+active >= poolMinActiveRatio * total) {
   ConnectionContext conn = pool.newConnection();
   pool.addConnection(conn);
 } else {


hadoop git commit: HDDS-848. Create SCM metrics related to container state. Contributed by Bharat Viswanadham.

2018-12-03 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 042c8ef59 -> 3044b78bd


HDDS-848. Create SCM metrics related to container state. Contributed by Bharat 
Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3044b78b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3044b78b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3044b78b

Branch: refs/heads/trunk
Commit: 3044b78bd0191883d5f9daf2601a58a268beed06
Parents: 042c8ef
Author: Yiqun Lin 
Authored: Mon Dec 3 17:16:34 2018 +0800
Committer: Yiqun Lin 
Committed: Mon Dec 3 17:16:34 2018 +0800

--
 .../hadoop/hdds/scm/server/SCMMXBean.java   |  5 ++
 .../scm/server/StorageContainerManager.java | 11 +++
 .../apache/hadoop/ozone/scm/TestSCMMXBean.java  | 81 ++--
 3 files changed, 91 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3044b78b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMMXBean.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMMXBean.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMMXBean.java
index 4093918..dc09ceb 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMMXBean.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMMXBean.java
@@ -59,4 +59,9 @@ public interface SCMMXBean extends ServiceRuntimeInfo {
* @return String
*/
   double getChillModeCurrentContainerThreshold();
+
+  /**
+   * Returns the container count in all states.
+   */
+  Map<String, Integer> getContainerStateCount();
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3044b78b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
index 2d27984..a0d5e1d 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
@@ -30,6 +30,7 @@ import com.google.protobuf.BlockingService;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdds.HddsUtils;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.block.BlockManager;
@@ -925,6 +926,16 @@ public final class StorageContainerManager extends 
ServiceRuntimeInfoImpl
 return scmChillModeManager.getCurrentContainerThreshold();
   }
 
+  @Override
+  public Map<String, Integer> getContainerStateCount() {
+Map<String, Integer> nodeStateCount = new HashMap<>();
+for (HddsProtos.LifeCycleState state: HddsProtos.LifeCycleState.values()) {
+  nodeStateCount.put(state.toString(), containerManager.getContainers(
+  state).size());
+}
+return nodeStateCount;
+  }
+
   /**
* Startup options.
*/
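A rough JMX-side sketch of consuming the new attribute, similar in spirit to the TestSCMMXBean changes below; the ObjectName string is an assumption and may not match the bean name actually registered by SCM.

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // Hypothetical reader (not part of the commit): fetch the per-state container
    // counts exposed through SCMMXBean#getContainerStateCount and print them.
    public final class ContainerStateCountReader {
      public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName scm = new ObjectName(
            "Hadoop:service=StorageContainerManager,name=StorageContainerManagerInfo");
        // The map is exposed as JMX open-type data; printing it as an Object keeps
        // this sketch independent of the exact open-type mapping.
        Object counts = mbs.getAttribute(scm, "ContainerStateCount");
        System.out.println("Container state counts: " + counts);
      }
    }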

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3044b78b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMMXBean.java
--
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMMXBean.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMMXBean.java
index 3136df2..eabf5e0 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMMXBean.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMMXBean.java
@@ -20,6 +20,10 @@ package org.apache.hadoop.ozone.scm;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
 import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
@@ -31,8 +35,11 @@ import javax.management.MBeanServer;
 import javax.management.ObjectName;
 import java.io.IOException;
 import java.lang.management.ManagementFactory;
+import java.util.ArrayList;

hadoop git commit: HDDS-886. Unnecessary buffer copy in HddsDispatcher#dispatch. Contributed by Lokesh Jain.

2018-11-30 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 7ccb640a6 -> 62f821115


HDDS-886. Unnecessary buffer copy in HddsDispatcher#dispatch. Contributed by 
Lokesh Jain.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/62f82111
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/62f82111
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/62f82111

Branch: refs/heads/trunk
Commit: 62f821115be34f26e994790591c235710f0fc224
Parents: 7ccb640
Author: Yiqun Lin 
Authored: Fri Nov 30 22:57:29 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Nov 30 22:57:29 2018 +0800

--
 .../apache/hadoop/ozone/container/common/impl/HddsDispatcher.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/62f82111/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
index 24ba784..352cc86 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
@@ -134,9 +134,9 @@ public class HddsDispatcher implements ContainerDispatcher, 
Auditor {
   @Override
   public ContainerCommandResponseProto dispatch(
   ContainerCommandRequestProto msg) {
+Preconditions.checkNotNull(msg);
 LOG.trace("Command {}, trace ID: {} ", msg.getCmdType().toString(),
 msg.getTraceID());
-Preconditions.checkNotNull(msg.toString());
 
 AuditAction action = ContainerCommandRequestPBHelper.getAuditAction(
 msg.getCmdType());





hadoop git commit: HDFS-13870. WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc. Contributed by Siyao Meng.

2018-11-29 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk bad12031f -> 0e36e935d


HDFS-13870. WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc. 
Contributed by Siyao Meng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0e36e935
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0e36e935
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0e36e935

Branch: refs/heads/trunk
Commit: 0e36e935d909862401890d0a5410204504f48b31
Parents: bad1203
Author: Yiqun Lin 
Authored: Fri Nov 30 11:31:34 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Nov 30 11:31:34 2018 +0800

--
 .../hadoop-hdfs/src/site/markdown/WebHDFS.md| 24 
 1 file changed, 24 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0e36e935/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
index 383eda0..8661659 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
@@ -64,6 +64,8 @@ The HTTP REST API supports the complete 
[FileSystem](../../api/org/apache/hadoop
 * [`SETTIMES`](#Set_Access_or_Modification_Time) (see 
[FileSystem](../../api/org/apache/hadoop/fs/FileSystem.html).setTimes)
 * [`RENEWDELEGATIONTOKEN`](#Renew_Delegation_Token) (see 
[DelegationTokenAuthenticator](../../api/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.html).renewDelegationToken)
 * [`CANCELDELEGATIONTOKEN`](#Cancel_Delegation_Token) (see 
[DelegationTokenAuthenticator](../../api/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.html).cancelDelegationToken)
+* [`ALLOWSNAPSHOT`](#Allow_Snapshot)
+* [`DISALLOWSNAPSHOT`](#Disallow_Snapshot)
 * [`CREATESNAPSHOT`](#Create_Snapshot) (see 
[FileSystem](../../api/org/apache/hadoop/fs/FileSystem.html).createSnapshot)
 * [`RENAMESNAPSHOT`](#Rename_Snapshot) (see 
[FileSystem](../../api/org/apache/hadoop/fs/FileSystem.html).renameSnapshot)
 * [`SETXATTR`](#Set_XAttr) (see 
[FileSystem](../../api/org/apache/hadoop/fs/FileSystem.html).setXAttr)
@@ -1302,6 +1304,28 @@ See also: 
[HDFSErasureCoding](./HDFSErasureCoding.html#Administrative_commands).
 Snapshot Operations
 ---
 
+### Allow Snapshot
+
+* Submit a HTTP PUT request.
+
curl -i -X PUT "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=ALLOWSNAPSHOT"
+
+The client receives a response with zero content length on success:
+
+HTTP/1.1 200 OK
+Content-Length: 0
+
+### Disallow Snapshot
+
+* Submit a HTTP PUT request.
+
curl -i -X PUT "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DISALLOWSNAPSHOT"
+
+The client receives a response with zero content length on success:
+
+HTTP/1.1 200 OK
+Content-Length: 0
+
 ### Create Snapshot
 
 * Submit a HTTP PUT request.





hadoop git commit: HDFS-14082. RBF: Add option to fail operations when a subcluster is unavailable. Contributed by Inigo Goiri.

2018-11-20 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-13891 4d8cc85c2 -> 175fd4cc4


HDFS-14082. RBF: Add option to fail operations when a subcluster is 
unavailable. Contributed by Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/175fd4cc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/175fd4cc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/175fd4cc

Branch: refs/heads/HDFS-13891
Commit: 175fd4cc4821a5c197953485c18b72d570435c16
Parents: 4d8cc85
Author: Yiqun Lin 
Authored: Wed Nov 21 10:40:26 2018 +0800
Committer: Yiqun Lin 
Committed: Wed Nov 21 10:40:26 2018 +0800

--
 .../server/federation/router/RBFConfigKeys.java |  4 ++
 .../federation/router/RouterClientProtocol.java | 15 +++--
 .../federation/router/RouterRpcServer.java  |  9 +++
 .../src/main/resources/hdfs-rbf-default.xml | 10 
 .../router/TestRouterRpcMultiDestination.java   | 59 
 5 files changed, 93 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/175fd4cc/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index dd72e36..10018fe 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -125,6 +125,10 @@ public class RBFConfigKeys extends 
CommonConfigurationKeysPublic {
   public static final String DFS_ROUTER_CLIENT_REJECT_OVERLOAD =
   FEDERATION_ROUTER_PREFIX + "client.reject.overload";
   public static final boolean DFS_ROUTER_CLIENT_REJECT_OVERLOAD_DEFAULT = 
false;
+  public static final String DFS_ROUTER_ALLOW_PARTIAL_LIST =
+  FEDERATION_ROUTER_PREFIX + "client.allow-partial-listing";
+  public static final boolean DFS_ROUTER_ALLOW_PARTIAL_LIST_DEFAULT = true;
+
 
   // HDFS Router State Store connection
   public static final String FEDERATION_FILE_RESOLVER_CLIENT_CLASS =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/175fd4cc/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index de94eaf..2fc5358 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -111,6 +111,9 @@ public class RouterClientProtocol implements ClientProtocol 
{
   private final FileSubclusterResolver subclusterResolver;
   private final ActiveNamenodeResolver namenodeResolver;
 
+  /** If it requires response from all subclusters. */
+  private final boolean allowPartialList;
+
   /** Identifier for the super user. */
   private final String superUser;
   /** Identifier for the super group. */
@@ -124,6 +127,10 @@ public class RouterClientProtocol implements 
ClientProtocol {
 this.subclusterResolver = rpcServer.getSubclusterResolver();
 this.namenodeResolver = rpcServer.getNamenodeResolver();
 
+this.allowPartialList = conf.getBoolean(
+RBFConfigKeys.DFS_ROUTER_ALLOW_PARTIAL_LIST,
+RBFConfigKeys.DFS_ROUTER_ALLOW_PARTIAL_LIST_DEFAULT);
+
 // User and group for reporting
 this.superUser = System.getProperty("user.name");
 this.superGroup = conf.get(
@@ -606,8 +613,8 @@ public class RouterClientProtocol implements ClientProtocol 
{
 new Class[] {String.class, startAfter.getClass(), boolean.class},
 new RemoteParam(), startAfter, needLocation);
Map<RemoteLocation, DirectoryListing> listings =
-rpcClient.invokeConcurrent(
-locations, method, false, false, DirectoryListing.class);
+rpcClient.invokeConcurrent(locations, method,
+!this.allowPartialList, false, DirectoryListing.class);
 
Map<String, HdfsFileStatus> nnListing = new TreeMap<>();
 int totalRemainingEntries = 0;
@@ -996,8 +1003,8 @@ public class RouterClientProtocol implements 
ClientProtocol {
   RemoteMethod method = new 
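A hedged configuration sketch using only the RBFConfigKeys constant introduced above; the helper class is an illustration, not part of the patch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;

    // Hypothetical snippet (not committed code): turn the new option off so that a
    // listing fails instead of returning partial results when any subcluster
    // backing the mount point is unavailable.
    final class StrictListingExample {
      static Configuration requireAllSubclusters() {
        Configuration conf = new Configuration();
        conf.setBoolean(RBFConfigKeys.DFS_ROUTER_ALLOW_PARTIAL_LIST, false);
        return conf;
      }
    }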

svn commit: r1846983 - /hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml

2018-11-20 Thread yqlin
Author: yqlin
Date: Tue Nov 20 09:03:30 2018
New Revision: 1846983

URL: http://svn.apache.org/viewvc?rev=1846983&view=rev
Log:
Add Yiqun to PMC list.

Modified:
hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml

Modified: hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml?rev=1846983&r1=1846982&r2=1846983&view=diff
==
--- hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml 
(original)
+++ hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml Tue 
Nov 20 09:03:30 2018
@@ -664,6 +664,14 @@
 
 
 
+yqlin
+<a href="https://github.com/linyiqun">Yiqun Lin</a>
+Vipshop
+
++8
+
+
+
 zhz
<a href="http://zhe-thoughts.github.io/about/">Zhe Zhang</a>
 LinkedIn






hadoop git commit: HDDS-817. Create SCM metrics for disk from node report. Contributed by Bharat Viswanadham.

2018-11-19 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1a00b4e32 -> d0cc67944


HDDS-817. Create SCM metrics for disk from node report. Contributed by Bharat 
Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d0cc6794
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d0cc6794
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d0cc6794

Branch: refs/heads/trunk
Commit: d0cc679441da436d7004b38d0eb83af3891e6e09
Parents: 1a00b4e
Author: Yiqun Lin 
Authored: Tue Nov 20 14:22:30 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Nov 20 14:22:30 2018 +0800

--
 .../hadoop/hdds/scm/node/NodeManagerMXBean.java |   7 ++
 .../hadoop/hdds/scm/node/NodeStateManager.java  |  14 +++
 .../hadoop/hdds/scm/node/SCMNodeManager.java|  60 ++
 .../hdds/scm/container/MockNodeManager.java |   9 ++
 .../testutils/ReplicationNodeManagerMock.java   |   5 +
 .../ozone/scm/TestSCMNodeManagerMXBean.java | 112 +++
 6 files changed, 207 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0cc6794/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManagerMXBean.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManagerMXBean.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManagerMXBean.java
index d84cf53..e1b51ef 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManagerMXBean.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManagerMXBean.java
@@ -35,4 +35,11 @@ public interface NodeManagerMXBean {
* @return A state to number of nodes that in this state mapping
*/
   Map getNodeCount();
+
+  /**
+   * Get the disk metrics like capacity, usage and remaining based on the
+   * storage type.
+   */
+  Map getNodeInfo();
+
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0cc6794/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
index cddd3ae..a26b0cc 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
@@ -274,6 +274,20 @@ public class NodeStateManager implements Runnable, 
Closeable {
   }
 
   /**
+   * Get information about the node.
+   *
+   * @param datanodeUUID datanode UUID
+   *
+   * @return DatanodeInfo
+   *
+   * @throws NodeNotFoundException if the node is not present
+   */
+  public DatanodeInfo getNode(UUID datanodeUUID)
+  throws NodeNotFoundException {
+return nodeStateMap.getNodeInfo(datanodeUUID);
+  }
+
+  /**
* Updates the last heartbeat time of the node.
*
* @throws NodeNotFoundException if the node is not present

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0cc6794/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
index 24672cf..374ff90 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
@@ -19,6 +19,7 @@ package org.apache.hadoop.hdds.scm.node;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
+import 
org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos;
 import org.apache.hadoop.hdds.protocol.proto
 .StorageContainerDatanodeProtocolProtos.PipelineReportsProto;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
@@ -58,6 +59,7 @@ import org.slf4j.LoggerFactory;
 import javax.management.ObjectName;
 import java.io.IOException;
 import java.net.InetAddress;
+import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -182,6 +184,16 @@ public class SCMNodeManager
 SCMNodeStat stat;
 try {
   stat = nodeStateManager.getNodeStat(dnId);
+
+  // Updating the storage report for the datanode.
+  // I dont think we will get NotFound exception, as we are taking
+  // nodeInfo from nodeStateMap, as I see it is 

hadoop git commit: HDDS-831. TestOzoneShell in integration-test is flaky. Contributed by Nanda kumar.

2018-11-12 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4c465f553 -> f8713f8ad


HDDS-831. TestOzoneShell in integration-test is flaky. Contributed by Nanda 
kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f8713f8a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f8713f8a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f8713f8a

Branch: refs/heads/trunk
Commit: f8713f8adea9d69330933a2cde594ed11ed9520c
Parents: 4c465f5
Author: Yiqun Lin 
Authored: Tue Nov 13 10:38:27 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Nov 13 10:38:27 2018 +0800

--
 .../test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f8713f8a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
--
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
index 1900024..bd05b92 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
@@ -266,7 +266,7 @@ public class TestOzoneShell {
*/
   @Test
   public void testCreateVolumeWithoutUser() throws Exception {
-String volumeName = "volume" + RandomStringUtils.randomNumeric(1);
+String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
 String[] args = new String[] {"volume", "create", url + "/" + volumeName,
 "--root"};
 





hadoop git commit: HDDS-802. Container State Manager should get open pipelines for allocating container. Contributed by Lokesh Jain.

2018-11-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk c80f753b0 -> 9317a61f3


HDDS-802. Container State Manager should get open pipelines for allocating 
container. Contributed by Lokesh Jain.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9317a61f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9317a61f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9317a61f

Branch: refs/heads/trunk
Commit: 9317a61f3cdc5ca91c6934eec9898cee3d65441a
Parents: c80f753
Author: Yiqun Lin 
Authored: Thu Nov 8 23:41:43 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Nov 8 23:41:43 2018 +0800

--
 .../scm/container/ContainerStateManager.java|  4 +-
 .../hdds/scm/pipeline/PipelineManager.java  |  3 +
 .../hdds/scm/pipeline/PipelineStateManager.java |  5 ++
 .../hdds/scm/pipeline/PipelineStateMap.java | 22 +++
 .../hdds/scm/pipeline/SCMPipelineManager.java   | 11 
 .../scm/pipeline/TestPipelineStateManager.java  | 61 ++--
 6 files changed, 100 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9317a61f/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
index 87505c3..74c8dcb 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
@@ -248,8 +248,8 @@ public class ContainerStateManager {
 try {
   pipeline = pipelineManager.createPipeline(type, replicationFactor);
 } catch (IOException e) {
-  final List<Pipeline> pipelines =
-  pipelineManager.getPipelines(type, replicationFactor);
+  final List<Pipeline> pipelines = pipelineManager
+  .getPipelines(type, replicationFactor, Pipeline.PipelineState.OPEN);
   if (pipelines.isEmpty()) {
 throw new IOException("Could not allocate container");
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9317a61f/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManager.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManager.java
index 04ec535..cce09f3 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManager.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManager.java
@@ -46,6 +46,9 @@ public interface PipelineManager extends Closeable {
   List<Pipeline> getPipelines(ReplicationType type,
   ReplicationFactor factor);
 
+  List<Pipeline> getPipelines(ReplicationType type,
+  ReplicationFactor factor, Pipeline.PipelineState state);
+
   void addContainerToPipeline(PipelineID pipelineID, ContainerID containerID)
   throws IOException;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9317a61f/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateManager.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateManager.java
index 67f74d3..9f95378 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateManager.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateManager.java
@@ -64,6 +64,11 @@ class PipelineStateManager {
 return pipelineStateMap.getPipelines(type, factor);
   }
 
+  List<Pipeline> getPipelines(ReplicationType type, ReplicationFactor factor,
+  PipelineState state) {
+return pipelineStateMap.getPipelines(type, factor, state);
+  }
+
   List<Pipeline> getPipelines(ReplicationType type, PipelineState... states) {
 return pipelineStateMap.getPipelines(type, states);
   }
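
A hypothetical caller of the new overload (the RATIS/THREE arguments are example values, not taken from this patch):

    // Sketch: ask only for pipelines that are already OPEN.
    List<Pipeline> openPipelines = pipelineManager.getPipelines(
        HddsProtos.ReplicationType.RATIS,
        HddsProtos.ReplicationFactor.THREE,
        Pipeline.PipelineState.OPEN);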

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9317a61f/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateMap.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateMap.java
 

hadoop git commit: HDDS-809. Refactor SCMChillModeManager.

2018-11-06 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 482716e5a -> addec2929


HDDS-809. Refactor SCMChillModeManager.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/addec292
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/addec292
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/addec292

Branch: refs/heads/trunk
Commit: addec29297e61a417f0ce711bd76b6db53d504eb
Parents: 482716e
Author: Yiqun Lin 
Authored: Wed Nov 7 13:53:28 2018 +0800
Committer: Yiqun Lin 
Committed: Wed Nov 7 13:54:08 2018 +0800

--
 .../org/apache/hadoop/hdds/scm/ScmUtils.java|   2 +-
 .../hadoop/hdds/scm/block/BlockManagerImpl.java |   2 +-
 .../hdds/scm/chillmode/ChillModeExitRule.java   |  32 ++
 .../hdds/scm/chillmode/ChillModePrecheck.java   |  68 
 .../scm/chillmode/ChillModeRestrictedOps.java   |  41 +++
 .../scm/chillmode/ContainerChillModeRule.java   | 112 +++
 .../scm/chillmode/DataNodeChillModeRule.java|  83 +
 .../hadoop/hdds/scm/chillmode/Precheck.java |  29 ++
 .../hdds/scm/chillmode/SCMChillModeManager.java | 153 +
 .../hadoop/hdds/scm/chillmode/package-info.java |  18 ++
 .../hdds/scm/server/ChillModePrecheck.java  |  69 
 .../apache/hadoop/hdds/scm/server/Precheck.java |  29 --
 .../hdds/scm/server/SCMChillModeManager.java| 319 ---
 .../scm/server/SCMClientProtocolServer.java |   1 +
 .../scm/server/StorageContainerManager.java |   1 +
 .../scm/chillmode/TestSCMChillModeManager.java  | 215 +
 .../scm/server/TestSCMChillModeManager.java | 215 -
 .../hadoop/ozone/om/TestScmChillMode.java   |   2 +-
 18 files changed, 756 insertions(+), 635 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/addec292/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/ScmUtils.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/ScmUtils.java 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/ScmUtils.java
index 435f0a5..43b4452 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/ScmUtils.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/ScmUtils.java
@@ -19,8 +19,8 @@
 package org.apache.hadoop.hdds.scm;
 
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ScmOps;
+import org.apache.hadoop.hdds.scm.chillmode.Precheck;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
-import org.apache.hadoop.hdds.scm.server.Precheck;
 
 /**
  * SCM utility class.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/addec292/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
index c878d97..85658b9 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
@@ -23,6 +23,7 @@ import org.apache.hadoop.hdds.client.ContainerBlockID;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ScmOps;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.ScmUtils;
+import org.apache.hadoop.hdds.scm.chillmode.ChillModePrecheck;
 import org.apache.hadoop.hdds.scm.container.ContainerManager;
 import org.apache.hadoop.hdds.scm.container.common.helpers.AllocatedBlock;
 import 
org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
@@ -32,7 +33,6 @@ import org.apache.hadoop.hdds.scm.node.NodeManager;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
-import org.apache.hadoop.hdds.scm.server.ChillModePrecheck;
 import org.apache.hadoop.hdds.server.events.EventHandler;
 import org.apache.hadoop.hdds.server.events.EventPublisher;
 import org.apache.hadoop.metrics2.util.MBeans;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/addec292/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
new file mode 100644
index 000..d283dfe
--- /dev/null
+++ 

hadoop git commit: HDDS-796. Fix failed test TestStorageContainerManagerHttpServer#testHttpPolicy.

2018-11-05 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk d43cc5db0 -> 15df2e7a7


HDDS-796. Fix failed test TestStorageContainerManagerHttpServer#testHttpPolicy.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/15df2e7a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/15df2e7a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/15df2e7a

Branch: refs/heads/trunk
Commit: 15df2e7a7547e12e884b624d9f17ad2799d9ccf9
Parents: d43cc5d
Author: Yiqun Lin 
Authored: Mon Nov 5 17:31:06 2018 +0800
Committer: Yiqun Lin 
Committed: Mon Nov 5 17:31:06 2018 +0800

--
 .../java/org/apache/hadoop/hdds/server/BaseHttpServer.java| 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/15df2e7a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
--
diff --git 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
index 2726fc3..5e7d7b8 100644
--- 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
+++ 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
@@ -115,13 +115,10 @@ public abstract class BaseHttpServer {
 final Optional<Integer> addressPort =
 getPortNumberFromConfigKeys(conf, addressKey);
 
-final Optional<String> addresHost =
+final Optional<String> addressHost =
 getHostNameFromConfigKeys(conf, addressKey);
 
-String hostName = bindHost.orElse(addresHost.get());
-if (hostName == null || hostName.isEmpty()) {
-  hostName = bindHostDefault;
-}
+String hostName = bindHost.orElse(addressHost.orElse(bindHostDefault));
 
 return NetUtils.createSocketAddr(
 hostName + ":" + addressPort.orElse(bindPortdefault));
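
The new expression folds the old null/empty fallback into nested Optionals; a standalone sketch of the same pattern (example values, names assumed):

    Optional<String> bindHost = Optional.empty();
    Optional<String> addressHost = Optional.of("node1.example.com");
    // Falls through bindHost -> addressHost -> default, left to right.
    String hostName = bindHost.orElse(addressHost.orElse("0.0.0.0"));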





hadoop git commit: HDFS-14049. TestHttpFSServerWebServer fails on Windows because of missing winutils.exe. Contributed by Inigo Goiri.

2018-11-03 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk cb8d679c9 -> 4e3df75eb


HDFS-14049. TestHttpFSServerWebServer fails on Windows because of missing 
winutils.exe. Contributed by Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4e3df75e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4e3df75e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4e3df75e

Branch: refs/heads/trunk
Commit: 4e3df75eb72adbab18a1d6476f228a0b504238fa
Parents: cb8d679
Author: Yiqun Lin 
Authored: Sun Nov 4 09:15:53 2018 +0800
Committer: Yiqun Lin 
Committed: Sun Nov 4 09:15:53 2018 +0800

--
 .../hadoop/fs/http/server/TestHttpFSServerWebServer.java | 11 +++
 1 file changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e3df75e/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerWebServer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerWebServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerWebServer.java
index 5250543..97d41d3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerWebServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerWebServer.java
@@ -30,6 +30,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.HadoopUsersConfTestHelper;
+import org.apache.hadoop.util.Shell;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.BeforeClass;
@@ -55,6 +56,16 @@ public class TestHttpFSServerWebServer {
 confDir.mkdirs();
 logsDir.mkdirs();
 tempDir.mkdirs();
+
+if (Shell.WINDOWS) {
+  File binDir = new File(homeDir, "bin");
+  binDir.mkdirs();
+  File winutils = Shell.getWinUtilsFile();
+  if (winutils.exists()) {
+FileUtils.copyFileToDirectory(winutils, binDir);
+  }
+}
+
 System.setProperty("hadoop.home.dir", homeDir.getAbsolutePath());
 System.setProperty("hadoop.log.dir", logsDir.getAbsolutePath());
 System.setProperty("httpfs.home.dir", homeDir.getAbsolutePath());





hadoop git commit: HDDS-751. Replace usage of Guava Optional with Java Optional.

2018-11-01 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8fe85af63 -> d16d5f730


HDDS-751. Replace usage of Guava Optional with Java Optional.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d16d5f73
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d16d5f73
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d16d5f73

Branch: refs/heads/trunk
Commit: d16d5f730e9d139d3e026805f21ac2c9b0bbb98b
Parents: 8fe85af
Author: Yiqun Lin 
Authored: Fri Nov 2 10:50:32 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Nov 2 10:50:32 2018 +0800

--
 .../java/org/apache/hadoop/hdds/HddsUtils.java  | 29 +---
 .../apache/hadoop/hdds/scm/HddsServerUtil.java  | 17 ++--
 .../hadoop/hdds/server/BaseHttpServer.java  |  9 --
 .../java/org/apache/hadoop/ozone/OmUtils.java   |  8 +++---
 4 files changed, 32 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d16d5f73/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
index 89edfdd..18637af 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
@@ -18,7 +18,6 @@
 
 package org.apache.hadoop.hdds;
 
-import com.google.common.base.Optional;
 import com.google.common.base.Strings;
 import com.google.common.net.HostAndPort;
 import org.apache.hadoop.classification.InterfaceAudience;
@@ -45,6 +44,7 @@ import java.util.Calendar;
 import java.util.Collection;
 import java.util.HashSet;
 import java.util.Map;
+import java.util.Optional;
 import java.util.TimeZone;
 
 import static org.apache.hadoop.hdfs.DFSConfigKeys
@@ -114,7 +114,7 @@ public final class HddsUtils {
 ScmConfigKeys.OZONE_SCM_CLIENT_ADDRESS_KEY);
 
 return NetUtils.createSocketAddr(host.get() + ":" + port
-.or(ScmConfigKeys.OZONE_SCM_CLIENT_PORT_DEFAULT));
+.orElse(ScmConfigKeys.OZONE_SCM_CLIENT_PORT_DEFAULT));
   }
 
   /**
@@ -162,7 +162,7 @@ public final class HddsUtils {
 ScmConfigKeys.OZONE_SCM_BLOCK_CLIENT_ADDRESS_KEY);
 
 return NetUtils.createSocketAddr(host.get() + ":" + port
-.or(ScmConfigKeys.OZONE_SCM_BLOCK_CLIENT_PORT_DEFAULT));
+.orElse(ScmConfigKeys.OZONE_SCM_BLOCK_CLIENT_PORT_DEFAULT));
   }
 
   /**
@@ -186,7 +186,7 @@ public final class HddsUtils {
 return hostName;
   }
 }
-return Optional.absent();
+return Optional.empty();
   }
 
   /**
@@ -196,7 +196,7 @@ public final class HddsUtils {
*/
   public static Optional<String> getHostName(String value) {
 if ((value == null) || value.isEmpty()) {
-  return Optional.absent();
+  return Optional.empty();
 }
 return Optional.of(HostAndPort.fromString(value).getHostText());
   }
@@ -208,11 +208,11 @@ public final class HddsUtils {
*/
   public static Optional<Integer> getHostPort(String value) {
 if ((value == null) || value.isEmpty()) {
-  return Optional.absent();
+  return Optional.empty();
 }
 int port = HostAndPort.fromString(value).getPortOrDefault(NO_PORT);
 if (port == NO_PORT) {
-  return Optional.absent();
+  return Optional.empty();
 } else {
   return Optional.of(port);
 }
@@ -239,7 +239,7 @@ public final class HddsUtils {
 return hostPort;
   }
 }
-return Optional.absent();
+return Optional.empty();
   }
 
   /**
@@ -261,20 +261,17 @@ public final class HddsUtils {
   + " Null or empty address list found.");
 }
 
-final com.google.common.base.Optional<Integer>
-defaultPort =  com.google.common.base.Optional.of(ScmConfigKeys
-.OZONE_SCM_DEFAULT_PORT);
+final Optional<Integer> defaultPort = Optional
+.of(ScmConfigKeys.OZONE_SCM_DEFAULT_PORT);
 for (String address : names) {
-  com.google.common.base.Optional<String> hostname =
-  getHostName(address);
+  Optional<String> hostname = getHostName(address);
   if (!hostname.isPresent()) {
 throw new IllegalArgumentException("Invalid hostname for SCM: "
 + hostname);
   }
-  com.google.common.base.Optional<Integer> port =
-  getHostPort(address);
+  Optional<Integer> port = getHostPort(address);
   InetSocketAddress addr = NetUtils.createSocketAddr(hostname.get(),
-  port.or(defaultPort.get()));
+  port.orElse(defaultPort.get()));
   addresses.add(addr);
 }
 return addresses;
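
The mechanical mapping applied throughout this patch, summarized as a sketch (not an exhaustive list):

    // com.google.common.base.Optional     ->  java.util.Optional
    //   Optional.absent()                 ->  Optional.empty()
    //   optional.or(defaultValue)         ->  optional.orElse(defaultValue)
    //   Optional.of(x), isPresent(), get() are unchanged.
    Optional<Integer> port = Optional.empty();
    int effectivePort = port.orElse(9876);  // 9876 is an arbitrary example default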

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d16d5f73/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java

hadoop git commit: HDDS-786. Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline.

2018-11-01 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/ozone-0.3 8411c2bf5 -> 73e9e4348


HDDS-786. Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline.

(cherry picked from commit 2e8ac14dcb57a0fe07b2119c26535c3541665b70)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/73e9e434
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/73e9e434
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/73e9e434

Branch: refs/heads/ozone-0.3
Commit: 73e9e43483da50707fa22c070b0a8deba29eb8b2
Parents: 8411c2b
Author: Yiqun Lin 
Authored: Thu Nov 1 14:10:17 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Nov 1 14:12:30 2018 +0800

--
 .../org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java  | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/73e9e434/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
index 3d228fa..db5a9eb 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
@@ -188,7 +188,6 @@ public class SCMClientProtocolServer implements
 }
   }
 }
-String remoteUser = getRpcRemoteUsername();
 getScm().checkAdminAccess(null);
 return scm.getContainerManager()
 .getContainerWithPipeline(containerID);





hadoop git commit: HDDS-786. Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline.

2018-11-01 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk b13c56742 -> 2e8ac14dc


HDDS-786. Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2e8ac14d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2e8ac14d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2e8ac14d

Branch: refs/heads/trunk
Commit: 2e8ac14dcb57a0fe07b2119c26535c3541665b70
Parents: b13c567
Author: Yiqun Lin 
Authored: Thu Nov 1 14:10:17 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Nov 1 14:10:17 2018 +0800

--
 .../org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java  | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e8ac14d/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
index e92200a..58cb871 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
@@ -189,7 +189,6 @@ public class SCMClientProtocolServer implements
 }
   }
 }
-String remoteUser = getRpcRemoteUsername();
 getScm().checkAdminAccess(null);
 return scm.getContainerManager()
 .getContainerWithPipeline(ContainerID.valueof(containerID));





hadoop git commit: HDFS-14011. RBF: Add more information to HdfsFileStatus for a mount point. Contributed by Akira Ajisaka.

2018-10-23 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-13891 ebf6bf304 -> f1566ca85


HDFS-14011. RBF: Add more information to HdfsFileStatus for a mount point. 
Contributed by Akira Ajisaka.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f1566ca8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f1566ca8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f1566ca8

Branch: refs/heads/HDFS-13891
Commit: f1566ca85afdc39d3e62d98a1d06a0a07f0055d5
Parents: ebf6bf3
Author: Yiqun Lin 
Authored: Tue Oct 23 14:34:29 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Oct 23 14:34:29 2018 +0800

--
 .../resolver/FileSubclusterResolver.java|  6 ++-
 .../federation/router/RouterClientProtocol.java | 30 ---
 .../router/RouterQuotaUpdateService.java|  9 ++--
 .../hdfs/server/federation/MockResolver.java| 17 +++---
 .../federation/router/TestRouterMountTable.java | 55 +++-
 .../router/TestRouterRpcMultiDestination.java   |  5 +-
 6 files changed, 97 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f1566ca8/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
index 5aa5ec9..6432bb0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
@@ -61,8 +61,10 @@ public interface FileSubclusterResolver {
* cache.
*
* @param path Path to get the mount points under.
-   * @return List of mount points present at this path or zero-length list if
-   * none are found.
+   * @return List of mount points present at this path. Return zero-length
+   * list if the path is a mount point but there are no mount points
+   * under the path. Return null if the path is not a mount point
+   * and there are no mount points under the path.
* @throws IOException Throws exception if the data is not available.
*/
   List<String> getMountPoints(String path) throws IOException;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f1566ca8/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index ddbc014..de94eaf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -718,6 +718,9 @@ public class RouterClientProtocol implements ClientProtocol 
{
   date = dates.get(src);
 }
 ret = getMountPointStatus(src, children.size(), date);
+  } else if (children != null) {
+// The src is a mount point, but there are no files or directories
+ret = getMountPointStatus(src, 0, 0);
   }
 }
 
@@ -1714,13 +1717,26 @@ public class RouterClientProtocol implements 
ClientProtocol {
 FsPermission permission = FsPermission.getDirDefault();
 String owner = this.superUser;
 String group = this.superGroup;
-try {
-  // TODO support users, it should be the user for the pointed folder
-  UserGroupInformation ugi = RouterRpcServer.getRemoteUser();
-  owner = ugi.getUserName();
-  group = ugi.getPrimaryGroupName();
-} catch (IOException e) {
-  LOG.error("Cannot get the remote user: {}", e.getMessage());
+if (subclusterResolver instanceof MountTableResolver) {
+  try {
+MountTableResolver mountTable = (MountTableResolver) 
subclusterResolver;
+MountTable entry = mountTable.getMountPoint(name);
+if (entry != null) {
+  permission = entry.getMode();
+  owner = entry.getOwnerName();
+  group = entry.getGroupName();
+}
+  } catch (IOException e) {
+LOG.error("Cannot get mount point: {}", e.getMessage());
+  }
+} else {
+  try 

hadoop git commit: HDDS-628. Fix outdated names used in HDDS documentations.

2018-10-12 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/ozone-0.3 67d516eb8 -> 54a229c33


HDDS-628. Fix outdated names used in HDDS documentations.

(cherry picked from commit 5da042227cfce440eddc263b377a70ed5b3743fb)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/54a229c3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/54a229c3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/54a229c3

Branch: refs/heads/ozone-0.3
Commit: 54a229c332fb24e3232c933dc675f96888964f3e
Parents: 67d516e
Author: Yiqun Lin 
Authored: Fri Oct 12 14:00:13 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Oct 12 14:01:47 2018 +0800

--
 hadoop-ozone/docs/content/Dozone.md   | 4 ++--
 hadoop-ozone/docs/content/JavaApi.md  | 2 +-
 hadoop-ozone/docs/content/OzoneManager.md | 2 +-
 hadoop-ozone/docs/content/Settings.md | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/54a229c3/hadoop-ozone/docs/content/Dozone.md
--
diff --git a/hadoop-ozone/docs/content/Dozone.md 
b/hadoop-ozone/docs/content/Dozone.md
index 7906cf3..f6efb0f 100644
--- a/hadoop-ozone/docs/content/Dozone.md
+++ b/hadoop-ozone/docs/content/Dozone.md
@@ -63,14 +63,14 @@ Useful Docker & Ozone Commands
 
 If you make any modifications to ozone, the simplest way to test it is to run 
freon and unit tests.
 
-Here are the instructions to run corona in a docker based cluster.
+Here are the instructions to run freon in a docker based cluster.
 
 {{< highlight bash >}}
 docker-compose exec datanode bash
 {{< /highlight >}}
 
 This will open a bash shell on the data node container.
-Now we can execute corona for load generation.
+Now we can execute freon for load generation.
 
 {{< highlight bash >}}
 ozone freon randomkeys --numOfVolumes=10 --numOfBuckets 10 --numOfKeys 10

http://git-wip-us.apache.org/repos/asf/hadoop/blob/54a229c3/hadoop-ozone/docs/content/JavaApi.md
--
diff --git a/hadoop-ozone/docs/content/JavaApi.md 
b/hadoop-ozone/docs/content/JavaApi.md
index 1d32bed..e538f4b 100644
--- a/hadoop-ozone/docs/content/JavaApi.md
+++ b/hadoop-ozone/docs/content/JavaApi.md
@@ -42,7 +42,7 @@ can use
 OzoneClient ozClient = OzoneClientFactory.getRestClient();
 {{< /highlight >}}
 
-And to get a a RPC client we can call
+And to get a RPC client we can call
 
 {{< highlight java >}}
 OzoneClient ozClient = OzoneClientFactory.getRpcClient();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/54a229c3/hadoop-ozone/docs/content/OzoneManager.md
--
diff --git a/hadoop-ozone/docs/content/OzoneManager.md 
b/hadoop-ozone/docs/content/OzoneManager.md
index 560f827..5eb8663 100644
--- a/hadoop-ozone/docs/content/OzoneManager.md
+++ b/hadoop-ozone/docs/content/OzoneManager.md
@@ -70,7 +70,7 @@ We are hopeful that this leads to a more straightforward way 
of discovering sett
 OM and SCM
 ---
 [Storage container manager]({{< ref "Hdds.md" >}}) or (SCM) is the block 
manager
- for ozone. When a client requests OM for a set of data nodes to write data, 
OM talk to SCM and gets a block.
+ for ozone. When a client requests OM for a set of data nodes to write data, 
OM talks to SCM and gets a block.
 
 A block returned by SCM contains a pipeline, which is a set of nodes that we 
participate in that block replication.
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/54a229c3/hadoop-ozone/docs/content/Settings.md
--
diff --git a/hadoop-ozone/docs/content/Settings.md 
b/hadoop-ozone/docs/content/Settings.md
index 41ab04a..5c9bb41 100644
--- a/hadoop-ozone/docs/content/Settings.md
+++ b/hadoop-ozone/docs/content/Settings.md
@@ -43,7 +43,7 @@ requests blocks from SCM, to which clients can write data.
 
 ## Setting up an Ozone only cluster
 
-* Please untar the  ozone-0.2.1-SNAPSHOT to the directory where you are going
+* Please untar the ozone- to the directory where you are going
 to run Ozone from. We need Ozone jars on all machines in the cluster. So you
 need to do this on all machines in the cluster.
 





hadoop git commit: HDDS-628. Fix outdated names used in HDDS documentations.

2018-10-12 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk bca928d3c -> 5da042227


HDDS-628. Fix outdated names used in HDDS documentations.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5da04222
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5da04222
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5da04222

Branch: refs/heads/trunk
Commit: 5da042227cfce440eddc263b377a70ed5b3743fb
Parents: bca928d
Author: Yiqun Lin 
Authored: Fri Oct 12 14:00:13 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Oct 12 14:00:13 2018 +0800

--
 hadoop-ozone/docs/content/Dozone.md   | 4 ++--
 hadoop-ozone/docs/content/JavaApi.md  | 2 +-
 hadoop-ozone/docs/content/OzoneManager.md | 2 +-
 hadoop-ozone/docs/content/Settings.md | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5da04222/hadoop-ozone/docs/content/Dozone.md
--
diff --git a/hadoop-ozone/docs/content/Dozone.md 
b/hadoop-ozone/docs/content/Dozone.md
index 7906cf3..f6efb0f 100644
--- a/hadoop-ozone/docs/content/Dozone.md
+++ b/hadoop-ozone/docs/content/Dozone.md
@@ -63,14 +63,14 @@ Useful Docker & Ozone Commands
 
 If you make any modifications to ozone, the simplest way to test it is to run 
freon and unit tests.
 
-Here are the instructions to run corona in a docker based cluster.
+Here are the instructions to run freon in a docker based cluster.
 
 {{< highlight bash >}}
 docker-compose exec datanode bash
 {{< /highlight >}}
 
 This will open a bash shell on the data node container.
-Now we can execute corona for load generation.
+Now we can execute freon for load generation.
 
 {{< highlight bash >}}
 ozone freon randomkeys --numOfVolumes=10 --numOfBuckets 10 --numOfKeys 10

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5da04222/hadoop-ozone/docs/content/JavaApi.md
--
diff --git a/hadoop-ozone/docs/content/JavaApi.md 
b/hadoop-ozone/docs/content/JavaApi.md
index 1d32bed..e538f4b 100644
--- a/hadoop-ozone/docs/content/JavaApi.md
+++ b/hadoop-ozone/docs/content/JavaApi.md
@@ -42,7 +42,7 @@ can use
 OzoneClient ozClient = OzoneClientFactory.getRestClient();
 {{< /highlight >}}
 
-And to get a a RPC client we can call
+And to get a RPC client we can call
 
 {{< highlight java >}}
 OzoneClient ozClient = OzoneClientFactory.getRpcClient();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5da04222/hadoop-ozone/docs/content/OzoneManager.md
--
diff --git a/hadoop-ozone/docs/content/OzoneManager.md 
b/hadoop-ozone/docs/content/OzoneManager.md
index 560f827..5eb8663 100644
--- a/hadoop-ozone/docs/content/OzoneManager.md
+++ b/hadoop-ozone/docs/content/OzoneManager.md
@@ -70,7 +70,7 @@ We are hopeful that this leads to a more straightforward way 
of discovering sett
 OM and SCM
 ---
 [Storage container manager]({{< ref "Hdds.md" >}}) or (SCM) is the block 
manager
- for ozone. When a client requests OM for a set of data nodes to write data, 
OM talk to SCM and gets a block.
+ for ozone. When a client requests OM for a set of data nodes to write data, 
OM talks to SCM and gets a block.
 
 A block returned by SCM contains a pipeline, which is a set of nodes that we 
participate in that block replication.
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5da04222/hadoop-ozone/docs/content/Settings.md
--
diff --git a/hadoop-ozone/docs/content/Settings.md 
b/hadoop-ozone/docs/content/Settings.md
index 41ab04a..5c9bb41 100644
--- a/hadoop-ozone/docs/content/Settings.md
+++ b/hadoop-ozone/docs/content/Settings.md
@@ -43,7 +43,7 @@ requests blocks from SCM, to which clients can write data.
 
 ## Setting up an Ozone only cluster
 
-* Please untar the  ozone-0.2.1-SNAPSHOT to the directory where you are going
+* Please untar the ozone- to the directory where you are going
 to run Ozone from. We need Ozone jars on all machines in the cluster. So you
 need to do this on all machines in the cluster.
 





hadoop git commit: HDFS-13967. HDFS Router Quota Class Review. Contributed by BELUGA BEHR.

2018-10-09 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 9bbeb5248 -> d4626b4d1


HDFS-13967. HDFS Router Quota Class Review. Contributed by BELUGA BEHR.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d4626b4d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d4626b4d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d4626b4d

Branch: refs/heads/trunk
Commit: d4626b4d1825b60ef02c0da9c45cd483d1d98f49
Parents: 9bbeb52
Author: Yiqun Lin 
Authored: Tue Oct 9 16:11:07 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Oct 9 16:11:07 2018 +0800

--
 .../hdfs/server/federation/router/Quota.java| 54 ++--
 1 file changed, 26 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d4626b4d/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
index d8ed080..5d0309f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
@@ -18,12 +18,14 @@
 package org.apache.hadoop.hdfs.server.federation.router;
 
 import java.io.IOException;
-import java.util.HashMap;
-import java.util.LinkedList;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
+import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.fs.QuotaUsage;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
@@ -33,6 +35,9 @@ import 
org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.ListMultimap;
+
 /**
  * Module that implements the quota relevant RPC calls
  * {@link ClientProtocol#setQuota(String, long, long, StorageType)}
@@ -121,37 +126,31 @@ public class Quota {
 final List<RemoteLocation> locations = getQuotaRemoteLocations(path);
 
 // NameService -> Locations
-Map<String, List<RemoteLocation>> validLocations = new HashMap<>();
+ListMultimap<String, RemoteLocation> validLocations =
+ArrayListMultimap.create();
+
 for (RemoteLocation loc : locations) {
-  String nsId = loc.getNameserviceId();
-  List<RemoteLocation> dests = validLocations.get(nsId);
-  if (dests == null) {
-dests = new LinkedList<>();
-dests.add(loc);
-validLocations.put(nsId, dests);
-  } else {
-// Ensure the paths in the same nameservice is different.
-// Don't include parent-child paths.
-boolean isChildPath = false;
-for (RemoteLocation d : dests) {
-  if (loc.getDest().startsWith(d.getDest())) {
-isChildPath = true;
-break;
-  }
-}
+  final String nsId = loc.getNameserviceId();
+  final Collection<RemoteLocation> dests = validLocations.get(nsId);
+
+  // Ensure the paths in the same nameservice is different.
+  // Do not include parent-child paths.
+  boolean isChildPath = false;
 
-if (!isChildPath) {
-  dests.add(loc);
+  for (RemoteLocation d : dests) {
+if (StringUtils.startsWith(loc.getDest(), d.getDest())) {
+  isChildPath = true;
+  break;
 }
   }
-}
 
-List<RemoteLocation> quotaLocs = new LinkedList<>();
-for (List<RemoteLocation> locs : validLocations.values()) {
-  quotaLocs.addAll(locs);
+  if (!isChildPath) {
+validLocations.put(nsId, loc);
+  }
 }
 
-return quotaLocs;
+return Collections
+.unmodifiableList(new ArrayList<>(validLocations.values()));
   }
 
   /**
@@ -209,7 +208,7 @@ public class Quota {
*/
   private List<RemoteLocation> getQuotaRemoteLocations(String path)
   throws IOException {
-List<RemoteLocation> locations = new LinkedList<>();
+List<RemoteLocation> locations = new ArrayList<>();
 RouterQuotaManager manager = this.router.getQuotaManager();
 if (manager != null) {
   Set<String> childrenPaths = manager.getPaths(path);
@@ -217,7 +216,6 @@ public class Quota {
 locations.addAll(rpcServer.getLocationsForPath(childPath, true, 
false));
   }
 }
-
 return locations;
   }
 }
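
A short sketch of what the Multimap buys here: get() on a missing key returns an empty, live collection, so the old "create the list on first use" branch disappears (loc stands for any RemoteLocation resolved for the path; imports from com.google.common.collect assumed):

    ListMultimap<String, RemoteLocation> validLocations = ArrayListMultimap.create();
    Collection<RemoteLocation> dests = validLocations.get(nsId);  // empty, never null
    validLocations.put(nsId, loc);  // no containsKey()/new LinkedList<>() needed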





hadoop git commit: HDFS-13962. Add null check for add-replica pool to avoid lock acquiring. Contributed by Surendra Singh Lilhore.

2018-10-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 dd445e036 -> 665036c5f


HDFS-13962. Add null check for add-replica pool to avoid lock acquiring. 
Contributed by Surendra Singh Lilhore.

(cherry picked from commit 1043795f7fe44c98a34f8ea3cea708c801c3043b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/665036c5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/665036c5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/665036c5

Branch: refs/heads/branch-3.1
Commit: 665036c5f71f6ce7ea331706ae1deb56da0fd0eb
Parents: dd445e0
Author: Yiqun Lin 
Authored: Tue Oct 9 10:33:13 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Oct 9 10:36:45 2018 +0800

--
 .../hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java| 6 --
 .../hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java | 2 +-
 2 files changed, 5 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/665036c5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
index d7e56fd..e725834 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
@@ -183,8 +183,10 @@ class BlockPoolSlice {
  .setConf(conf)
  
.setInitialUsed(loadDfsUsed())
  .build();
-// initialize add replica fork join pool
-initializeAddReplicaPool(conf);
+if (addReplicaThreadPool == null) {
+  // initialize add replica fork join pool
+  initializeAddReplicaPool(conf);
+}
 // Make the dfs usage to be saved during shutdown.
 shutdownHook = new Runnable() {
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/665036c5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
index c630b95..73d3c60 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
@@ -154,7 +154,7 @@ class ReplicaMap {
   if (oldReplicaInfo != null) {
 return oldReplicaInfo;
   } else {
-set.add(replicaInfo);
+set.addOrReplace(replicaInfo);
   }
   return replicaInfo;
 }





hadoop git commit: HDFS-13962. Add null check for add-replica pool to avoid lock acquiring. Contributed by Surendra Singh Lilhore.

2018-10-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2 21553e22b -> b6698e2a8


HDFS-13962. Add null check for add-replica pool to avoid lock acquiring. 
Contributed by Surendra Singh Lilhore.

(cherry picked from commit 1043795f7fe44c98a34f8ea3cea708c801c3043b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b6698e2a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b6698e2a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b6698e2a

Branch: refs/heads/branch-3.2
Commit: b6698e2a828a652e995d4cfe83d8fcd095fdeee2
Parents: 21553e2
Author: Yiqun Lin 
Authored: Tue Oct 9 10:33:13 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Oct 9 10:35:08 2018 +0800

--
 .../hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java| 6 --
 .../hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java | 2 +-
 2 files changed, 5 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b6698e2a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
index b9b581f..4a4fef9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
@@ -183,8 +183,10 @@ class BlockPoolSlice {
  .setConf(conf)
  
.setInitialUsed(loadDfsUsed())
  .build();
-// initialize add replica fork join pool
-initializeAddReplicaPool(conf);
+if (addReplicaThreadPool == null) {
+  // initialize add replica fork join pool
+  initializeAddReplicaPool(conf);
+}
 // Make the dfs usage to be saved during shutdown.
 shutdownHook = new Runnable() {
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b6698e2a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
index c630b95..73d3c60 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
@@ -154,7 +154,7 @@ class ReplicaMap {
   if (oldReplicaInfo != null) {
 return oldReplicaInfo;
   } else {
-set.add(replicaInfo);
+set.addOrReplace(replicaInfo);
   }
   return replicaInfo;
 }





hadoop git commit: HDFS-13962. Add null check for add-replica pool to avoid lock acquiring. Contributed by Surendra Singh Lilhore.

2018-10-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 347ea3858 -> 1043795f7


HDFS-13962. Add null check for add-replica pool to avoid lock acquiring. 
Contributed by Surendra Singh Lilhore.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1043795f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1043795f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1043795f

Branch: refs/heads/trunk
Commit: 1043795f7fe44c98a34f8ea3cea708c801c3043b
Parents: 347ea38
Author: Yiqun Lin 
Authored: Tue Oct 9 10:33:13 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Oct 9 10:33:13 2018 +0800

--
 .../hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java| 6 --
 .../hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java | 2 +-
 2 files changed, 5 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1043795f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
index b9b581f..4a4fef9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
@@ -183,8 +183,10 @@ class BlockPoolSlice {
  .setConf(conf)
  
.setInitialUsed(loadDfsUsed())
  .build();
-// initialize add replica fork join pool
-initializeAddReplicaPool(conf);
+if (addReplicaThreadPool == null) {
+  // initialize add replica fork join pool
+  initializeAddReplicaPool(conf);
+}
 // Make the dfs usage to be saved during shutdown.
 shutdownHook = new Runnable() {
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1043795f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
index c630b95..73d3c60 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
@@ -154,7 +154,7 @@ class ReplicaMap {
   if (oldReplicaInfo != null) {
 return oldReplicaInfo;
   } else {
-set.add(replicaInfo);
+set.addOrReplace(replicaInfo);
   }
   return replicaInfo;
 }





hadoop git commit: HDFS-13768. Adding replicas to volume map makes DataNode start slowly. Contributed by Surendra Singh Lilhore.

2018-10-08 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 5539dd97d -> c632c6e6e


HDFS-13768. Adding replicas to volume map makes DataNode start slowly. 
Contributed by Surendra Singh Lilhore.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c632c6e6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c632c6e6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c632c6e6

Branch: refs/heads/branch-2
Commit: c632c6e6e99e3f3722774c9fc269ace88aa5d9bb
Parents: 5539dd9
Author: Yiqun Lin 
Authored: Tue Oct 9 10:20:37 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Oct 9 10:20:37 2018 +0800

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   3 +
 .../hadoop/hdfs/server/datanode/DataNode.java   |   2 +-
 .../datanode/fsdataset/impl/BlockPoolSlice.java | 182 +--
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |  14 ++
 .../datanode/fsdataset/impl/FsDatasetUtil.java  |  30 +--
 .../datanode/fsdataset/impl/FsVolumeList.java   |   5 +-
 .../datanode/fsdataset/impl/ReplicaMap.java |  24 +++
 .../src/main/resources/hdfs-default.xml |   9 +
 .../fsdataset/impl/FsDatasetImplTestUtils.java  |  14 +-
 .../fsdataset/impl/TestFsVolumeList.java|  62 +++
 10 files changed, 315 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c632c6e6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index a72ff87..edd99f2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -304,6 +304,9 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final long    DFS_CONTENT_SUMMARY_SLEEP_MICROSEC_DEFAULT = 500;
   public static final String  DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY = 
"dfs.datanode.failed.volumes.tolerated";
   public static final int DFS_DATANODE_FAILED_VOLUMES_TOLERATED_DEFAULT = 
0;
+  public static final String
+  DFS_DATANODE_VOLUMES_REPLICA_ADD_THREADPOOL_SIZE_KEY =
+  "dfs.datanode.volumes.replica-add.threadpool.size";
   public static final String  DFS_DATANODE_SYNCONCLOSE_KEY = 
"dfs.datanode.synconclose";
   public static final boolean DFS_DATANODE_SYNCONCLOSE_DEFAULT = false;
   public static final String  DFS_DATANODE_SOCKET_REUSE_KEEPALIVE_KEY = 
"dfs.datanode.socket.reuse.keepalive";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c632c6e6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index dfca5d6..80a7ca2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -1630,7 +1630,7 @@ public class DataNode extends ReconfigurableBase
 return blockPoolManager.get(bpid);
   }
 
-  int getBpOsCount() {
+  public int getBpOsCount() {
 return blockPoolManager.getAllNamenodeThreads().size();
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c632c6e6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
index f46b6a4..0ed5c39 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
@@ -28,8 +28,19 @@ import java.io.InputStream;
 import java.io.OutputStreamWriter;
 import java.io.RandomAccessFile;
 import java.io.Writer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
 import java.util.Iterator;
+import java.util.List;
+import java.util.Queue;
 import java.util.Scanner;
+import 

hadoop git commit: HDFS-13957. Fix incorrect option used in description of InMemoryAliasMap.

2018-10-04 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2 b3ac88693 -> 62d02eecd


HDFS-13957. Fix incorrect option used in description of InMemoryAliasMap.

(cherry picked from commit 619e490333fa89601fd476dedac6d16610e9a52a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/62d02eec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/62d02eec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/62d02eec

Branch: refs/heads/branch-3.2
Commit: 62d02eecd0079a9f1fbfb18743c5324a61a03a7c
Parents: b3ac886
Author: Yiqun Lin 
Authored: Fri Oct 5 09:55:08 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Oct 5 10:09:22 2018 +0800

--
 .../hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/62d02eec/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
index b8d5321..21145e6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
@@ -152,7 +152,7 @@ Currently, the following two types of alias maps are 
supported.
 
 This is a LevelDB-based alias map that runs as a separate server in Namenode.
 The alias map itself can be created using the `fs2img` tool using the option
-`-Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap`
+`-Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -b 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap`
 as in the example above.
 
 Datanodes contact this alias map using the 
`org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMapProtocol` protocol.





hadoop git commit: HDFS-13957. Fix incorrect option used in description of InMemoryAliasMap.

2018-10-04 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 86a1ad442 -> dd70adf31


HDFS-13957. Fix incorrect option used in description of InMemoryAliasMap.

(cherry picked from commit 619e490333fa89601fd476dedac6d16610e9a52a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dd70adf3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dd70adf3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dd70adf3

Branch: refs/heads/branch-3.1
Commit: dd70adf3184ce6a280df51bd84cfcfdadbb33b32
Parents: 86a1ad4
Author: Yiqun Lin 
Authored: Fri Oct 5 09:55:08 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Oct 5 10:08:28 2018 +0800

--
 .../hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dd70adf3/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
index b8d5321..21145e6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
@@ -152,7 +152,7 @@ Currently, the following two types of alias maps are 
supported.
 
 This is a LevelDB-based alias map that runs as a separate server in Namenode.
 The alias map itself can be created using the `fs2img` tool using the option
-`-Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap`
+`-Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -b 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap`
 as in the example above.
 
 Datanodes contact this alias map using the 
`org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMapProtocol` protocol.





hadoop git commit: HDFS-13957. Fix incorrect option used in description of InMemoryAliasMap.

2018-10-04 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk cc2babc1f -> 619e49033


HDFS-13957. Fix incorrect option used in description of InMemoryAliasMap.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/619e4903
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/619e4903
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/619e4903

Branch: refs/heads/trunk
Commit: 619e490333fa89601fd476dedac6d16610e9a52a
Parents: cc2babc
Author: Yiqun Lin 
Authored: Fri Oct 5 09:55:08 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Oct 5 09:55:08 2018 +0800

--
 .../hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/619e4903/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
index b8d5321..21145e6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
@@ -152,7 +152,7 @@ Currently, the following two types of alias maps are 
supported.
 
 This is a LevelDB-based alias map that runs as a separate server in Namenode.
 The alias map itself can be created using the `fs2img` tool using the option
-`-Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap`
+`-Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -b 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap`
 as in the example above.
 
 Datanodes contact this alias map using the 
`org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMapProtocol` protocol.





hadoop git commit: HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan Eagles.

2018-10-03 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2 4de3cf196 -> e185ae2d1


HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan 
Eagles.

(cherry picked from commit 81f635f47f0737eb551bef1aa55afdf7b268253d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e185ae2d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e185ae2d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e185ae2d

Branch: refs/heads/branch-3.2
Commit: e185ae2d17e1ac4e432549fde077a5ee21041d8f
Parents: 4de3cf1
Author: Yiqun Lin 
Authored: Thu Oct 4 10:30:30 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Oct 4 11:04:39 2018 +0800

--
 .../org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e185ae2d/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
index f6f670b..af781f5 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
@@ -38,6 +38,7 @@ import java.util.Map;
 @Consumes(MediaType.APPLICATION_JSON)
 @InterfaceAudience.Private
 public class KMSJSONReader implements MessageBodyReader {
+  private static final ObjectMapper MAPPER = new ObjectMapper();
 
   @Override
   public boolean isReadable(Class type, Type genericType,
@@ -51,7 +52,6 @@ public class KMSJSONReader implements 
MessageBodyReader {
   Annotation[] annotations, MediaType mediaType,
   MultivaluedMap httpHeaders, InputStream entityStream)
   throws IOException, WebApplicationException {
-ObjectMapper mapper = new ObjectMapper();
-return mapper.readValue(entityStream, type);
+return MAPPER.readValue(entityStream, type);
   }
 }
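For context: the change above replaces a per-request ObjectMapper with a single shared instance. Jackson's ObjectMapper is thread-safe for readValue once configured, so one instance can serve all requests and avoids repeated allocation. Below is a minimal standalone sketch of the same pattern (illustrative only, using the com.fasterxml Jackson API rather than the actual KMS classes):

```java
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.IOException;
import java.util.Map;

// Minimal sketch (not the KMS class itself) of the pattern this commit
// applies: create one ObjectMapper up front and reuse it, instead of
// allocating a new mapper on every read.
public class SharedMapperExample {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  static Map<?, ?> parse(String json) throws IOException {
    // Reuses the shared, already-initialized mapper on every call.
    return MAPPER.readValue(json, Map.class);
  }

  public static void main(String[] args) throws IOException {
    System.out.println(parse("{\"key\":\"value\"}"));
  }
}
```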





hadoop git commit: HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan Eagles.

2018-10-03 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 54ef6e25b -> a3c564b01


HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan 
Eagles.

(cherry picked from commit 81f635f47f0737eb551bef1aa55afdf7b268253d)
(cherry picked from commit 7b88a57c379fe6eba7d685362e0bc756fdeff700)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a3c564b0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a3c564b0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a3c564b0

Branch: refs/heads/branch-2.9
Commit: a3c564b01e997b7911005ec3ad69fbd8e3a2af50
Parents: 54ef6e2
Author: Yiqun Lin 
Authored: Thu Oct 4 10:30:30 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Oct 4 10:54:10 2018 +0800

--
 .../org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a3c564b0/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
index d3e0064..a59e94c 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
@@ -36,6 +36,7 @@ import java.util.Map;
 @Consumes(MediaType.APPLICATION_JSON)
 @InterfaceAudience.Private
 public class KMSJSONReader implements MessageBodyReader {
+  private static final ObjectMapper MAPPER = new ObjectMapper();
 
   @Override
   public boolean isReadable(Class type, Type genericType,
@@ -48,7 +49,6 @@ public class KMSJSONReader implements MessageBodyReader {
   Annotation[] annotations, MediaType mediaType,
   MultivaluedMap httpHeaders, InputStream entityStream)
   throws IOException, WebApplicationException {
-ObjectMapper mapper = new ObjectMapper();
-return mapper.readValue(entityStream, type);
+return MAPPER.readValue(entityStream, type);
   }
 }





hadoop git commit: HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan Eagles.

2018-10-03 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 d84958405 -> 94f4b5b9f


HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan 
Eagles.

(cherry picked from commit 81f635f47f0737eb551bef1aa55afdf7b268253d)
(cherry picked from commit 7b88a57c379fe6eba7d685362e0bc756fdeff700)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/94f4b5b9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/94f4b5b9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/94f4b5b9

Branch: refs/heads/branch-2.8
Commit: 94f4b5b9f3a08100cd0759400bd3a52a23ef76fc
Parents: d849584
Author: Yiqun Lin 
Authored: Thu Oct 4 10:30:30 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Oct 4 10:52:13 2018 +0800

--
 .../org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/94f4b5b9/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
index d3e0064..a59e94c 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
@@ -36,6 +36,7 @@ import java.util.Map;
 @Consumes(MediaType.APPLICATION_JSON)
 @InterfaceAudience.Private
 public class KMSJSONReader implements MessageBodyReader {
+  private static final ObjectMapper MAPPER = new ObjectMapper();
 
   @Override
   public boolean isReadable(Class type, Type genericType,
@@ -48,7 +49,6 @@ public class KMSJSONReader implements MessageBodyReader {
   Annotation[] annotations, MediaType mediaType,
   MultivaluedMap httpHeaders, InputStream entityStream)
   throws IOException, WebApplicationException {
-ObjectMapper mapper = new ObjectMapper();
-return mapper.readValue(entityStream, type);
+return MAPPER.readValue(entityStream, type);
   }
 }





hadoop git commit: HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan Eagles.

2018-10-03 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 65aaa1017 -> 7b88a57c3


HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan 
Eagles.

(cherry picked from commit 81f635f47f0737eb551bef1aa55afdf7b268253d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7b88a57c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7b88a57c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7b88a57c

Branch: refs/heads/branch-2
Commit: 7b88a57c379fe6eba7d685362e0bc756fdeff700
Parents: 65aaa10
Author: Yiqun Lin 
Authored: Thu Oct 4 10:30:30 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Oct 4 10:47:22 2018 +0800

--
 .../org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b88a57c/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
index d3e0064..a59e94c 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
@@ -36,6 +36,7 @@ import java.util.Map;
 @Consumes(MediaType.APPLICATION_JSON)
 @InterfaceAudience.Private
 public class KMSJSONReader implements MessageBodyReader {
+  private static final ObjectMapper MAPPER = new ObjectMapper();
 
   @Override
   public boolean isReadable(Class type, Type genericType,
@@ -48,7 +49,6 @@ public class KMSJSONReader implements MessageBodyReader {
   Annotation[] annotations, MediaType mediaType,
   MultivaluedMap httpHeaders, InputStream entityStream)
   throws IOException, WebApplicationException {
-ObjectMapper mapper = new ObjectMapper();
-return mapper.readValue(entityStream, type);
+return MAPPER.readValue(entityStream, type);
   }
 }





hadoop git commit: HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan Eagles.

2018-10-03 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 b412bb224 -> d993a1fc3


HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan 
Eagles.

(cherry picked from commit 81f635f47f0737eb551bef1aa55afdf7b268253d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d993a1fc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d993a1fc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d993a1fc

Branch: refs/heads/branch-3.0
Commit: d993a1fc3dfcf9944f4074d337b57db3bb348d60
Parents: b412bb2
Author: Yiqun Lin 
Authored: Thu Oct 4 10:30:30 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Oct 4 10:36:49 2018 +0800

--
 .../org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d993a1fc/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
index f6f670b..af781f5 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
@@ -38,6 +38,7 @@ import java.util.Map;
 @Consumes(MediaType.APPLICATION_JSON)
 @InterfaceAudience.Private
 public class KMSJSONReader implements MessageBodyReader {
+  private static final ObjectMapper MAPPER = new ObjectMapper();
 
   @Override
   public boolean isReadable(Class type, Type genericType,
@@ -51,7 +52,6 @@ public class KMSJSONReader implements 
MessageBodyReader {
   Annotation[] annotations, MediaType mediaType,
   MultivaluedMap httpHeaders, InputStream entityStream)
   throws IOException, WebApplicationException {
-ObjectMapper mapper = new ObjectMapper();
-return mapper.readValue(entityStream, type);
+return MAPPER.readValue(entityStream, type);
   }
 }





hadoop git commit: HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan Eagles.

2018-10-03 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 58693c63d -> 1a890b17b


HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan 
Eagles.

(cherry picked from commit 81f635f47f0737eb551bef1aa55afdf7b268253d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1a890b17
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1a890b17
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1a890b17

Branch: refs/heads/branch-3.1
Commit: 1a890b17b95d9069c7319553d8d060927bca750a
Parents: 58693c6
Author: Yiqun Lin 
Authored: Thu Oct 4 10:30:30 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Oct 4 10:35:16 2018 +0800

--
 .../org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a890b17/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
index f6f670b..af781f5 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
@@ -38,6 +38,7 @@ import java.util.Map;
 @Consumes(MediaType.APPLICATION_JSON)
 @InterfaceAudience.Private
 public class KMSJSONReader implements MessageBodyReader {
+  private static final ObjectMapper MAPPER = new ObjectMapper();
 
   @Override
   public boolean isReadable(Class type, Type genericType,
@@ -51,7 +52,6 @@ public class KMSJSONReader implements 
MessageBodyReader {
   Annotation[] annotations, MediaType mediaType,
   MultivaluedMap httpHeaders, InputStream entityStream)
   throws IOException, WebApplicationException {
-ObjectMapper mapper = new ObjectMapper();
-return mapper.readValue(entityStream, type);
+return MAPPER.readValue(entityStream, type);
   }
 }





hadoop git commit: HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan Eagles.

2018-10-03 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 39b35036b -> 81f635f47


HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan 
Eagles.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/81f635f4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/81f635f4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/81f635f4

Branch: refs/heads/trunk
Commit: 81f635f47f0737eb551bef1aa55afdf7b268253d
Parents: 39b3503
Author: Yiqun Lin 
Authored: Thu Oct 4 10:30:30 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Oct 4 10:30:30 2018 +0800

--
 .../org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/81f635f4/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
index f6f670b..af781f5 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
@@ -38,6 +38,7 @@ import java.util.Map;
 @Consumes(MediaType.APPLICATION_JSON)
 @InterfaceAudience.Private
 public class KMSJSONReader implements MessageBodyReader {
+  private static final ObjectMapper MAPPER = new ObjectMapper();
 
   @Override
   public boolean isReadable(Class type, Type genericType,
@@ -51,7 +52,6 @@ public class KMSJSONReader implements 
MessageBodyReader {
   Annotation[] annotations, MediaType mediaType,
   MultivaluedMap httpHeaders, InputStream entityStream)
   throws IOException, WebApplicationException {
-ObjectMapper mapper = new ObjectMapper();
-return mapper.readValue(entityStream, type);
+return MAPPER.readValue(entityStream, type);
   }
 }





hadoop git commit: HDFS-13768. Adding replicas to volume map makes DataNode start slowly. Contributed by Surendra Singh Lilhore.

2018-10-01 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 c306da08e -> 65af98b58


HDFS-13768. Adding replicas to volume map makes DataNode start slowly. 
Contributed by Surendra Singh Lilhore.

(cherry picked from commit 5689355783de005ebc604f4403dc5129a286bfca)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/65af98b5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/65af98b5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/65af98b5

Branch: refs/heads/branch-3.1
Commit: 65af98b58a6cf66037a295e5ca951e31e472c8ce
Parents: c306da0
Author: Yiqun Lin 
Authored: Tue Oct 2 09:43:14 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Oct 2 09:46:23 2018 +0800

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   3 +
 .../hadoop/hdfs/server/datanode/DataNode.java   |   2 +-
 .../datanode/fsdataset/impl/BlockPoolSlice.java | 177 +--
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |  14 ++
 .../datanode/fsdataset/impl/FsDatasetUtil.java  |  30 ++--
 .../datanode/fsdataset/impl/FsVolumeList.java   |   5 +-
 .../datanode/fsdataset/impl/ReplicaMap.java |  25 +++
 .../src/main/resources/hdfs-default.xml |   9 +
 .../fsdataset/impl/TestFsVolumeList.java|  64 ++-
 9 files changed, 300 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/65af98b5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 97bb469..aa5e758 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -359,6 +359,9 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final longDFS_CONTENT_SUMMARY_SLEEP_MICROSEC_DEFAULT = 500;
   public static final String  DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY = 
"dfs.datanode.failed.volumes.tolerated";
   public static final int DFS_DATANODE_FAILED_VOLUMES_TOLERATED_DEFAULT = 
0;
+  public static final String
+  DFS_DATANODE_VOLUMES_REPLICA_ADD_THREADPOOL_SIZE_KEY =
+  "dfs.datanode.volumes.replica-add.threadpool.size";
   public static final String  DFS_DATANODE_SYNCONCLOSE_KEY = 
"dfs.datanode.synconclose";
   public static final boolean DFS_DATANODE_SYNCONCLOSE_DEFAULT = false;
   public static final String  DFS_DATANODE_SOCKET_REUSE_KEEPALIVE_KEY = 
"dfs.datanode.socket.reuse.keepalive";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/65af98b5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index ade2b11..787f42c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -1695,7 +1695,7 @@ public class DataNode extends ReconfigurableBase
 return blockPoolManager.get(bpid);
   }
   
-  int getBpOsCount() {
+  public int getBpOsCount() {
 return blockPoolManager.getAllNamenodeThreads().size();
   }
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/65af98b5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
index 3f9de78..d7e56fd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
@@ -28,8 +28,19 @@ import java.io.InputStream;
 import java.io.OutputStreamWriter;
 import java.io.RandomAccessFile;
 import java.io.Writer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
 import java.util.Iterator;
+import java.util.List;
+import java.util.Queue;
 import 

hadoop git commit: HDFS-13768. Adding replicas to volume map makes DataNode start slowly. Contributed by Surendra Singh Lilhore.

2018-10-01 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk f6c5ef990 -> 568935578


HDFS-13768. Adding replicas to volume map makes DataNode start slowly. 
Contributed by Surendra Singh Lilhore.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/56893557
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/56893557
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/56893557

Branch: refs/heads/trunk
Commit: 5689355783de005ebc604f4403dc5129a286bfca
Parents: f6c5ef9
Author: Yiqun Lin 
Authored: Tue Oct 2 09:43:14 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Oct 2 09:43:14 2018 +0800

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   3 +
 .../hadoop/hdfs/server/datanode/DataNode.java   |   2 +-
 .../datanode/fsdataset/impl/BlockPoolSlice.java | 177 +--
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |  14 ++
 .../datanode/fsdataset/impl/FsDatasetUtil.java  |  30 ++--
 .../datanode/fsdataset/impl/FsVolumeList.java   |   5 +-
 .../datanode/fsdataset/impl/ReplicaMap.java |  25 +++
 .../src/main/resources/hdfs-default.xml |   9 +
 .../fsdataset/impl/TestFsVolumeList.java|  64 ++-
 9 files changed, 300 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/56893557/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index a7e7b9b..d8024dc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -365,6 +365,9 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final longDFS_CONTENT_SUMMARY_SLEEP_MICROSEC_DEFAULT = 500;
   public static final String  DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY = 
"dfs.datanode.failed.volumes.tolerated";
   public static final int DFS_DATANODE_FAILED_VOLUMES_TOLERATED_DEFAULT = 
0;
+  public static final String
+  DFS_DATANODE_VOLUMES_REPLICA_ADD_THREADPOOL_SIZE_KEY =
+  "dfs.datanode.volumes.replica-add.threadpool.size";
   public static final String  DFS_DATANODE_SYNCONCLOSE_KEY = 
"dfs.datanode.synconclose";
   public static final boolean DFS_DATANODE_SYNCONCLOSE_DEFAULT = false;
   public static final String  DFS_DATANODE_SOCKET_REUSE_KEEPALIVE_KEY = 
"dfs.datanode.socket.reuse.keepalive";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/56893557/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index c980395..270e30b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -1695,7 +1695,7 @@ public class DataNode extends ReconfigurableBase
 return blockPoolManager.get(bpid);
   }
   
-  int getBpOsCount() {
+  public int getBpOsCount() {
 return blockPoolManager.getAllNamenodeThreads().size();
   }
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/56893557/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
index 2adfb6b..b9b581f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
@@ -28,8 +28,19 @@ import java.io.InputStream;
 import java.io.OutputStreamWriter;
 import java.io.RandomAccessFile;
 import java.io.Writer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
 import java.util.Iterator;
+import java.util.List;
+import java.util.Queue;
 import java.util.Scanner;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import 
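For context on the change summarized above: the general idea is to add replicas from each volume to the replica map concurrently, using a fixed-size thread pool whose size is controlled by the new dfs.datanode.volumes.replica-add.threadpool.size setting. A rough, generic sketch of that parallelization pattern (assumption: this is not the actual BlockPoolSlice/FsVolumeList code, only an illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Generic sketch only: populate a shared replica map from several volumes
// concurrently with a fixed-size pool, instead of scanning them one by one.
public class ParallelReplicaAddSketch {
  public static void main(String[] args) throws InterruptedException {
    List<String> volumes = Arrays.asList("/data1", "/data2", "/data3", "/data4");
    Map<String, Integer> replicasPerVolume = new ConcurrentHashMap<>();

    // Pool size would be driven by a setting analogous to
    // dfs.datanode.volumes.replica-add.threadpool.size.
    ExecutorService pool = Executors.newFixedThreadPool(4);
    for (String volume : volumes) {
      pool.execute(() -> {
        // Stand-in for scanning the volume and adding its replicas to the map.
        replicasPerVolume.put(volume, volume.length());
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
    System.out.println("Volumes processed: " + replicasPerVolume.size());
  }
}
```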

hadoop git commit: HADOOP-15742. Log if ipc backoff is enabled in CallQueueManager. Contributed by Ryan Wu.

2018-09-17 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 281c192e7 -> ee051ef9f


HADOOP-15742. Log if ipc backoff is enabled in CallQueueManager. Contributed by 
Ryan Wu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ee051ef9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ee051ef9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ee051ef9

Branch: refs/heads/trunk
Commit: ee051ef9fec1fddb612aa1feae9fd3df7091354f
Parents: 281c192
Author: Yiqun Lin 
Authored: Tue Sep 18 11:10:33 2018 +0800
Committer: Yiqun Lin 
Committed: Tue Sep 18 11:10:33 2018 +0800

--
 .../src/main/java/org/apache/hadoop/ipc/CallQueueManager.java   | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ee051ef9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
index d1bd180..29649a6 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
@@ -81,8 +81,9 @@ public class CallQueueManager
 this.clientBackOffEnabled = clientBackOffEnabled;
 this.putRef = new AtomicReference>(bq);
 this.takeRef = new AtomicReference>(bq);
-LOG.info("Using callQueue: " + backingClass + " queueCapacity: " +
-maxQueueSize + " scheduler: " + schedulerClass);
+LOG.info("Using callQueue: {}, queueCapacity: {}, " +
+"scheduler: {}, ipcBackoff: {}.",
+backingClass, maxQueueSize, schedulerClass, clientBackOffEnabled);
   }
 
   @VisibleForTesting // only!
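For context, the updated log statement uses SLF4J parameterized messages ("{}" placeholders) instead of string concatenation, so the message text is only assembled when the log level is enabled, and it now also reports whether ipc backoff is on. A small illustrative sketch of that logging style (not the CallQueueManager code itself):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch of SLF4J parameterized logging: arguments are substituted into the
// "{}" placeholders only if INFO is enabled, avoiding eager concatenation.
public class ParameterizedLoggingExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(ParameterizedLoggingExample.class);

  public static void main(String[] args) {
    String queueClass = "java.util.concurrent.LinkedBlockingQueue";
    int capacity = 1000;
    boolean backoffEnabled = false;
    LOG.info("Using callQueue: {}, queueCapacity: {}, ipcBackoff: {}.",
        queueClass, capacity, backoffEnabled);
  }
}
```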





hadoop git commit: HDFS-13884. Improve the description of the setting dfs.image.compress. Contributed by Ryan Wu.

2018-09-09 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk eef3bafae -> 0da49642f


HDFS-13884. Improve the description of the setting dfs.image.compress. 
Contributed by Ryan Wu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0da49642
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0da49642
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0da49642

Branch: refs/heads/trunk
Commit: 0da49642fc1eb71997b1aa268583c1ba09a16687
Parents: eef3baf
Author: Yiqun Lin 
Authored: Mon Sep 10 13:57:36 2018 +0800
Committer: Yiqun Lin 
Committed: Mon Sep 10 13:57:36 2018 +0800

--
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml| 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0da49642/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 5f115ec..1573582 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -1284,7 +1284,11 @@
 
   dfs.image.compress
   false
-  Should the dfs image be compressed?
+  When this value is true, the dfs image will be compressed.
+Enabling this will be very helpful if dfs image is large since it can
+avoid consuming a lot of network bandwidth when SBN uploads a new dfs
+image to ANN. The compressed codec is specified by the setting
+dfs.image.compression.codec.
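For illustration, the setting described above can be set together with a codec choice. A small sketch, assuming the standard org.apache.hadoop.conf.Configuration API (GzipCodec is only one possible codec, not necessarily the default):

```java
import org.apache.hadoop.conf.Configuration;

// Illustrative only: enabling fsimage compression and picking a codec, as
// the updated description explains.
public class ImageCompressionConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean("dfs.image.compress", true);
    conf.set("dfs.image.compression.codec",
        "org.apache.hadoop.io.compress.GzipCodec");
    System.out.println("dfs.image.compress = "
        + conf.getBoolean("dfs.image.compress", false));
  }
}
```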
   
 
 





hadoop git commit: HDFS-13815. RBF: Add check to order command. Contributed by Ranith Sardar.

2018-09-05 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 2d68708a1 -> c898757f5


HDFS-13815. RBF: Add check to order command. Contributed by Ranith Sardar.

(cherry picked from commit 9315db5f5da09c2ef86be168465c16932afa2d85)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c898757f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c898757f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c898757f

Branch: refs/heads/branch-3.1
Commit: c898757f5541a523b12f2e5cfb504d624da13e1b
Parents: 2d68708
Author: Yiqun Lin 
Authored: Wed Sep 5 23:33:27 2018 +0800
Committer: Yiqun Lin 
Committed: Wed Sep 5 23:35:30 2018 +0800

--
 .../hdfs/tools/federation/RouterAdmin.java  | 10 
 .../federation/router/TestRouterAdminCLI.java   | 57 +++-
 2 files changed, 66 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c898757f/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index ef8d7c1..0a681e9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -263,10 +263,14 @@ public class RouterAdmin extends Configured implements 
Tool {
   if ("-add".equals(cmd)) {
 if (addMount(argv, i)) {
   System.out.println("Successfully added mount point " + argv[i]);
+} else {
+  exitCode = -1;
 }
   } else if ("-update".equals(cmd)) {
 if (updateMount(argv, i)) {
   System.out.println("Successfully updated mount point " + argv[i]);
+} else {
+  exitCode = -1;
 }
   } else if ("-rm".equals(cmd)) {
 if (removeMount(argv[i])) {
@@ -369,6 +373,9 @@ public class RouterAdmin extends Configured implements Tool 
{
 i++;
 short modeValue = Short.parseShort(parameters[i], 8);
 mode = new FsPermission(modeValue);
+  } else {
+printUsage("-add");
+return false;
   }
 
   i++;
@@ -521,6 +528,9 @@ public class RouterAdmin extends Configured implements Tool 
{
 i++;
 short modeValue = Short.parseShort(parameters[i], 8);
 mode = new FsPermission(modeValue);
+  } else {
+printUsage("-update");
+return false;
   }
 
   i++;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c898757f/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
index fa29cd9..d968c60 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
@@ -224,6 +224,24 @@ public class TestRouterAdminCLI {
 testAddOrderMountTable(DestinationOrder.HASH_ALL);
   }
 
+  @Test
+  public void testAddOrderErrorMsg() throws Exception {
+DestinationOrder order = DestinationOrder.HASH;
+final String mnt = "/newAdd1" + order;
+final String nsId = "ns0,ns1";
+final String dest = "/changAdd";
+
+String[] argv1 = new String[] {"-add", mnt, nsId, dest, "-order",
+order.toString()};
+assertEquals(0, ToolRunner.run(admin, argv1));
+
+// Add the order with wrong command
+String[] argv = new String[] {"-add", mnt, nsId, dest, "-orde",
+order.toString()};
+assertEquals(-1, ToolRunner.run(admin, argv));
+
+  }
+
   private void testAddOrderMountTable(DestinationOrder order)
   throws Exception {
 final String mnt = "/" + order;
@@ -403,7 +421,7 @@ public class TestRouterAdminCLI {
 argv = new String[] {"-add", "/testpath2-2", "ns0", "/testdir2-2",
 "-owner", TEST_USER, "-group", TEST_USER, "-mode", "0255"};
 assertEquals(0, ToolRunner.run(admin, argv));
-verifyExecutionResult("/testpath2-2", false, 0, 0);
+verifyExecutionResult("/testpath2-2", false, -1, 0);
 
 // set mount table entry with read and write 

hadoop git commit: HDFS-13815. RBF: Add check to order command. Contributed by Ranith Sardar.

2018-09-05 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk df0d61e3a -> 9315db5f5


HDFS-13815. RBF: Add check to order command. Contributed by Ranith Sardar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9315db5f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9315db5f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9315db5f

Branch: refs/heads/trunk
Commit: 9315db5f5da09c2ef86be168465c16932afa2d85
Parents: df0d61e
Author: Yiqun Lin 
Authored: Wed Sep 5 23:33:27 2018 +0800
Committer: Yiqun Lin 
Committed: Wed Sep 5 23:33:27 2018 +0800

--
 .../hdfs/tools/federation/RouterAdmin.java  | 10 
 .../federation/router/TestRouterAdminCLI.java   | 57 +++-
 2 files changed, 66 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9315db5f/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index ef8d7c1..0a681e9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -263,10 +263,14 @@ public class RouterAdmin extends Configured implements 
Tool {
   if ("-add".equals(cmd)) {
 if (addMount(argv, i)) {
   System.out.println("Successfully added mount point " + argv[i]);
+} else {
+  exitCode = -1;
 }
   } else if ("-update".equals(cmd)) {
 if (updateMount(argv, i)) {
   System.out.println("Successfully updated mount point " + argv[i]);
+} else {
+  exitCode = -1;
 }
   } else if ("-rm".equals(cmd)) {
 if (removeMount(argv[i])) {
@@ -369,6 +373,9 @@ public class RouterAdmin extends Configured implements Tool 
{
 i++;
 short modeValue = Short.parseShort(parameters[i], 8);
 mode = new FsPermission(modeValue);
+  } else {
+printUsage("-add");
+return false;
   }
 
   i++;
@@ -521,6 +528,9 @@ public class RouterAdmin extends Configured implements Tool 
{
 i++;
 short modeValue = Short.parseShort(parameters[i], 8);
 mode = new FsPermission(modeValue);
+  } else {
+printUsage("-update");
+return false;
   }
 
   i++;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9315db5f/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
index fa29cd9..d968c60 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
@@ -224,6 +224,24 @@ public class TestRouterAdminCLI {
 testAddOrderMountTable(DestinationOrder.HASH_ALL);
   }
 
+  @Test
+  public void testAddOrderErrorMsg() throws Exception {
+DestinationOrder order = DestinationOrder.HASH;
+final String mnt = "/newAdd1" + order;
+final String nsId = "ns0,ns1";
+final String dest = "/changAdd";
+
+String[] argv1 = new String[] {"-add", mnt, nsId, dest, "-order",
+order.toString()};
+assertEquals(0, ToolRunner.run(admin, argv1));
+
+// Add the order with wrong command
+String[] argv = new String[] {"-add", mnt, nsId, dest, "-orde",
+order.toString()};
+assertEquals(-1, ToolRunner.run(admin, argv));
+
+  }
+
   private void testAddOrderMountTable(DestinationOrder order)
   throws Exception {
 final String mnt = "/" + order;
@@ -403,7 +421,7 @@ public class TestRouterAdminCLI {
 argv = new String[] {"-add", "/testpath2-2", "ns0", "/testdir2-2",
 "-owner", TEST_USER, "-group", TEST_USER, "-mode", "0255"};
 assertEquals(0, ToolRunner.run(admin, argv));
-verifyExecutionResult("/testpath2-2", false, 0, 0);
+verifyExecutionResult("/testpath2-2", false, -1, 0);
 
 // set mount table entry with read and write permission
 argv = new String[] {"-add", "/testpath2-3", "ns0", "/testdir2-3",
@@ 

hadoop git commit: HDFS-13863. FsDatasetImpl should log DiskOutOfSpaceException. Contributed by Fei Hui.

2018-08-29 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 fbaa11ef4 -> 6a547856e


HDFS-13863. FsDatasetImpl should log DiskOutOfSpaceException. Contributed by 
Fei Hui.

(cherry picked from commit 582cb10ec74ed5666946a3769002ceb80ba660cb)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6a547856
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6a547856
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6a547856

Branch: refs/heads/branch-3.0
Commit: 6a547856ef205c89129b092e535e9916780ecd37
Parents: fbaa11e
Author: Yiqun Lin 
Authored: Thu Aug 30 11:21:13 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Aug 30 11:24:25 2018 +0800

--
 .../hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6a547856/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 1eeec27..c2c25ff 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -1347,6 +1347,9 @@ class FsDatasetImpl implements FsDatasetSpi 
{
   datanode.getMetrics().incrRamDiskBlocksWrite();
 } catch (DiskOutOfSpaceException de) {
   // Ignore the exception since we just fall back to persistent 
storage.
+  LOG.warn("Insufficient space for placing the block on a transient "
+  + "volume, fall back to persistent storage: "
+  + de.getMessage());
 } finally {
   if (ref == null) {
 cacheManager.release(b.getNumBytes());




