hadoop git commit: YARN-7598. Document how to use classpath isolation for aux-services in YARN. Contributed by Xuan Gong.

2018-04-24 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/trunk b06601acc -> 56788d759


YARN-7598. Document how to use classpath isolation for aux-services in YARN. 
Contributed by Xuan Gong.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/56788d75
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/56788d75
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/56788d75

Branch: refs/heads/trunk
Commit: 56788d759f47b4b158617701f543a9dcb4df69cd
Parents: b06601a
Author: Junping Du 
Authored: Tue Apr 24 18:29:14 2018 +0800
Committer: Junping Du 
Committed: Tue Apr 24 18:29:14 2018 +0800

--
 .../src/site/markdown/NodeManager.md| 49 +++-
 1 file changed, 48 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/56788d75/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManager.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManager.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManager.md
index 3261cd7..12201b9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManager.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManager.md
@@ -87,4 +87,51 @@ Step 4.  Auxiliary services.
 
   * NodeManagers in a YARN cluster can be configured to run auxiliary 
services. For a completely functional NM restart, YARN relies on any auxiliary 
service configured to also support recovery. This usually includes (1) avoiding 
usage of ephemeral ports so that previously running clients (in this case, 
usually containers) are not disrupted after restart and (2) having the 
auxiliary service itself support recoverability by reloading any previous state 
when NodeManager restarts and reinitializes the auxiliary service.
 
-  * A simple example for the above is the auxiliary service 'ShuffleHandler' 
for MapReduce (MR). ShuffleHandler respects the above two requirements already, 
so users/admins don't have do anything for it to support NM restart: (1) The 
configuration property **mapreduce.shuffle.port** controls which port the 
ShuffleHandler on a NodeManager host binds to, and it defaults to a 
non-ephemeral port. (2) The ShuffleHandler service also already supports 
recovery of previous state after NM restarts.
+  * A simple example for the above is the auxiliary service 'ShuffleHandler' 
for MapReduce (MR). ShuffleHandler respects the above two requirements already, 
so users/admins don't have to do anything for it to support NM restart: (1) The 
configuration property **mapreduce.shuffle.port** controls which port the 
ShuffleHandler on a NodeManager host binds to, and it defaults to a 
non-ephemeral port. (2) The ShuffleHandler service also already supports 
recovery of previous state after NM restarts.
+
+
+Auxiliary Service Classpath Isolation
+-------------------------------------
+
+### Introduction
+To launch auxiliary services on a NodeManager, users have to add their jars to
the NodeManager's classpath directly, which places them on the system
classloader. If multiple versions of a plugin are present on the classpath,
there is no control over which version actually gets loaded, and any conflicts
between the dependencies introduced by the auxiliary services and the
NodeManager itself can break the NodeManager, the auxiliary services, or both.
To solve this issue, auxiliary services can be instantiated using a classloader
that is separate from the system classloader.
+
+### Configuration
+This section describes the configuration variables for aux-service classpath 
isolation.
+
+The following settings need to be set in *yarn-site.xml*.
+
+|Configuration Name | Description |
+|:---- |:---- |
+| `yarn.nodemanager.aux-services.%s.classpath` | Provide a local directory that contains the service's jar file together with all of its dependency jars. Either a single jar file can be specified, or ${local_dir_to_jar}/* can be used to load all jars under the dependency directory. |
+| `yarn.nodemanager.aux-services.%s.remote-classpath` | Provide a remote absolute or relative path to the jar file (zip, tar.gz, tgz, tar and gz archives are also supported). For the same aux-service class, only one of yarn.nodemanager.aux-services.%s.classpath and yarn.nodemanager.aux-services.%s.remote-classpath may be specified; otherwise a YarnRuntimeException will be thrown. Also make sure that the owner of the jar file is the same as the NodeManager user and that its permission bits satisfy (permbits & 0022) == 0 (for example 600, i.e. not writable by group or others). |
+| `yarn.nodemanager.aux-services.%s.system-classes` | Normally, we do not ne

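For illustration, here is a minimal yarn-site.xml sketch (excerpt) of the settings
described above. The service name "my_aux", the handler class, and the paths are
hypothetical placeholders, and only one of the classpath/remote-classpath
properties should be set for a given service.

  <!-- Illustrative only: register a hypothetical aux-service "my_aux" and load
       it through an isolated classpath. Name, class, and paths are placeholders. -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>my_aux</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.my_aux.class</name>
    <value>com.example.MyAuxService</value>
  </property>
  <property>
    <!-- Local directory holding the service jar and its dependency jars -->
    <name>yarn.nodemanager.aux-services.my_aux.classpath</name>
    <value>/opt/my-aux/lib/*</value>
  </property>
  <!-- Alternative to .classpath (never set both): a remote jar or archive -->
  <!--
  <property>
    <name>yarn.nodemanager.aux-services.my_aux.remote-classpath</name>
    <value>/deploy/my-aux-service.tar.gz</value>
  </property>
  -->

Whichever variant is used, the jar must be owned by the NodeManager user and must
not be writable by group or others, as noted in the table above.
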
hadoop git commit: YARN-7598. Document how to use classpath isolation for aux-services in YARN. Contributed by Xuan Gong.

2018-04-24 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 14cfcf6b7 -> 51194439b


YARN-7598. Document how to use classpath isolation for aux-services in YARN. 
Contributed by Xuan Gong.

(cherry picked from commit 56788d759f47b4b158617701f543a9dcb4df69cd)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/51194439
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/51194439
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/51194439

Branch: refs/heads/branch-3.0
Commit: 51194439ba6b8de0836c0379754c692031d7f909
Parents: 14cfcf6
Author: Junping Du 
Authored: Tue Apr 24 18:29:14 2018 +0800
Committer: Junping Du 
Committed: Tue Apr 24 18:30:26 2018 +0800

--
 .../src/site/markdown/NodeManager.md| 49 +++-
 1 file changed, 48 insertions(+), 1 deletion(-)
--



hadoop git commit: YARN-7598. Document how to use classpath isolation for aux-services in YARN. Contributed by Xuan Gong.

2018-04-24 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 ea2863f2b -> 9209ebae1


YARN-7598. Document how to use classpath isolation for aux-services in YARN. 
Contributed by Xuan Gong.

(cherry picked from commit 56788d759f47b4b158617701f543a9dcb4df69cd)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9209ebae
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9209ebae
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9209ebae

Branch: refs/heads/branch-3.1
Commit: 9209ebae18560b8a2c75a98fae64aac6be76baa3
Parents: ea2863f
Author: Junping Du 
Authored: Tue Apr 24 18:29:14 2018 +0800
Committer: Junping Du 
Committed: Tue Apr 24 18:30:58 2018 +0800

--
 .../src/site/markdown/NodeManager.md| 49 +++-
 1 file changed, 48 insertions(+), 1 deletion(-)
--



hadoop git commit: YARN-7598. Document how to use classpath isolation for aux-services in YARN. Contributed by Xuan Gong.

2018-04-24 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 cc5416d94 -> af70c69fb


YARN-7598. Document how to use classpath isolation for aux-services in YARN. 
Contributed by Xuan Gong.

(cherry picked from commit 56788d759f47b4b158617701f543a9dcb4df69cd)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/af70c69f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/af70c69f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/af70c69f

Branch: refs/heads/branch-2
Commit: af70c69fb2b58f9ec25302d218abf372adf72bfc
Parents: cc5416d
Author: Junping Du 
Authored: Tue Apr 24 18:29:14 2018 +0800
Committer: Junping Du 
Committed: Tue Apr 24 18:31:29 2018 +0800

--
 .../src/site/markdown/NodeManager.md| 49 +++-
 1 file changed, 48 insertions(+), 1 deletion(-)
--



hadoop git commit: MAPREDUCE-7072. mapred job -history prints duplicate counter in human output (wilfreds via rkanter)

2018-04-24 Thread rkanter
Repository: hadoop
Updated Branches:
  refs/heads/trunk 56788d759 -> 1b9ecc264


MAPREDUCE-7072. mapred job -history prints duplicate counter in human output 
(wilfreds via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1b9ecc26
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1b9ecc26
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1b9ecc26

Branch: refs/heads/trunk
Commit: 1b9ecc264a6abe9d9d5412318c67d3d2936bd9ac
Parents: 56788d7
Author: Robert Kanter 
Authored: Tue Apr 24 11:30:38 2018 -0700
Committer: Robert Kanter 
Committed: Tue Apr 24 11:30:38 2018 -0700

--
 .../HumanReadableHistoryViewerPrinter.java  |  3 +-
 .../jobhistory/JSONHistoryViewerPrinter.java|  3 +-
 .../jobhistory/TestHistoryViewerPrinter.java| 76 
 3 files changed, 80 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b9ecc26/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HumanReadableHistoryViewerPrinter.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HumanReadableHistoryViewerPrinter.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HumanReadableHistoryViewerPrinter.java
index 685fa05..060ba24 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HumanReadableHistoryViewerPrinter.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HumanReadableHistoryViewerPrinter.java
@@ -148,7 +148,8 @@ class HumanReadableHistoryViewerPrinter implements 
HistoryViewerPrinter {
   "Total Value"));
   buff.append("\n--" +
   "-");
-  for (String groupName : totalCounters.getGroupNames()) {
+  for (CounterGroup counterGroup : totalCounters) {
+String groupName = counterGroup.getName();
 CounterGroup totalGroup = totalCounters.getGroup(groupName);
 CounterGroup mapGroup = mapCounters.getGroup(groupName);
 CounterGroup reduceGroup = reduceCounters.getGroup(groupName);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b9ecc26/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JSONHistoryViewerPrinter.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JSONHistoryViewerPrinter.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JSONHistoryViewerPrinter.java
index cfb6641..5f8e9ad 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JSONHistoryViewerPrinter.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JSONHistoryViewerPrinter.java
@@ -104,7 +104,8 @@ class JSONHistoryViewerPrinter implements 
HistoryViewerPrinter {
 // Killed jobs might not have counters
 if (totalCounters != null) {
   JSONObject jGroups = new JSONObject();
-  for (String groupName : totalCounters.getGroupNames()) {
+  for (CounterGroup counterGroup : totalCounters) {
+String groupName = counterGroup.getName();
 CounterGroup totalGroup = totalCounters.getGroup(groupName);
 CounterGroup mapGroup = mapCounters.getGroup(groupName);
 CounterGroup reduceGroup = reduceCounters.getGroup(groupName);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b9ecc26/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestHistoryViewerPrinter.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestHistoryViewerPrinter.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestHistoryViewerPrinter.java
index 588500c..3601ea7 100644
--- 
a/hadoop-m

[02/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js.map
--
diff --git 
a/hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js.map 
b/hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js.map
deleted file mode 100644
index 594da5a3..000
--- 
a/hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js.map
+++ /dev/null
@@ -1 +0,0 @@
-{"version":3,"file":"nv.d3.min.js","sources":["../src/core.js","../src/dom.js","../src/interactiveLayer.js","../src/tooltip.js","../src/utils.js","../src/models/axis.js","../src/models/boxPlot.js","../src/models/boxPlotChart.js","../src/models/bullet.js","../src/models/bulletChart.js","../src/models/candlestickBar.js","../src/models/cumulativeLineChart.js","../src/models/discreteBar.js","../src/models/discreteBarChart.js","../src/models/distribution.js","../src/models/focus.js","../src/models/forceDirectedGraph.js","../src/models/furiousLegend.js","../src/models/historicalBar.js","../src/models/historicalBarChart.js","../src/models/legend.js","../src/models/line.js","../src/models/lineChart.js","../src/models/linePlusBarChart.js","../src/models/multiBar.js","../src/models/multiBarChart.js","../src/models/multiBarHorizontal.js","../src/models/multiBarHorizontalChart.js","../src/models/multiChart.js","../src/models/ohlcBar.js","../src/models/parallelCoordinates.js","../src/models/para
 
llelCoordinatesChart.js","../src/models/pie.js","../src/models/pieChart.js","../src/models/sankey.js","../src/models/sankeyChart.js","../src/models/scatter.js","../src/models/scatterChart.js","../src/models/sparkline.js","../src/models/sparklinePlus.js","../src/models/stackedArea.js","../src/models/stackedAreaChart.js","../src/models/sunburst.js","../src/models/sunburstChart.js"],"names":["nv","dev","tooltip","utils","models","charts","logs","dom","d3","require","dispatch","Function","prototype","bind","oThis","this","TypeError","aArgs","Array","slice","call","arguments","fToBind","fNOP","fBound","apply","concat","on","e","startTime","Date","endTime","totalTime","log","window","console","length","deprecated","name","info","warn","render","step","active","render_start","renderLoop","chart","graph","i","queue","generate","callback","splice","setTimeout","render_end","addGraph","obj","push","module","exports","write","undefined","fastdom","mutate","read","measure","interactiveGuideline
 
","layer","selection","each","data","mouseHandler","d3mouse","mouse","mouseX","mouseY","subtractMargin","mouseOutAnyReason","isMSIE","event","offsetX","offsetY","target","tagName","className","baseVal","match","margin","left","top","type","availableWidth","availableHeight","relatedTarget","ownerSVGElement","nvPointerEventsClass","elementMouseout","renderGuideLine","hidden","scaleIsOrdinal","xScale","rangeBands","pointXValue","elementIndex","bisect","range","rangeBand","domain","invert","elementMousemove","elementDblclick","elementClick","elementMouseDown","elementMouseUp","container","select","width","height","wrap","selectAll","wrapEnter","enter","append","attr","svgContainer","guideLine","x","showGuideLine","line","NaNtoZero","String","d","exit","remove","scale","linear","ActiveXObject","duration","hideDelay","_","interactiveBisect","values","searchVal","xAccessor","_xAccessor","_cmp","v","bisector","index","max","currentValue","nextIndex","min","nextValue","Math","abs","nearestVa
 
lueIndex","threshold","yDistMax","Infinity","indexToHighlight","forEach","delta","initTooltip","node","document","body","id","classes","style","classed","nvtooltip","enabled","dataSeriesExists","newContent","contentGenerator","innerHTML","positionTooltip","floor","random","gravity","distance","snapDistance","lastPosition","headerEnabled","valueFormatter","headerFormatter","keyFormatter","table","createElement","theadEnter","html","value","tbodyEnter","trowEnter","p","series","highlight","color","total","key","filter","percent","format","opacityScale","opacity","outerHTML","footer","position","pos","clientX","clientY","getComputedStyle","transform","client","getBoundingClientRect","isArray","isObject","calcGravityOffset","tmp","offsetHeight","offsetWidth","clientWidth","documentElement","clientHeight","gravityOffset","interrupt","transition","delay","old_translate","new_translate","round","translateInterpolator","interpolateString","is_hidden","styleTween","options","optionsFunc","_o
 
ptions","Object","create","get","set","chartContainer","fixedTop","offset","point","y","initOptions","windowSize","size","innerWidth","innerHeight","compatMode","a","isFunction","isDate","toString","isNumber","isNaN","windowResize","handler","addEventListener","clear","removeEventListener","getColor","defaultColor","color_scale","ordinal","category20","customTheme","dictionary","getKey","defaultColors","defIndex","pjax","links","content","load","href

[04/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/framework/src/main/resources/webapps/static/dfs-dust.js
--
diff --git 
a/hadoop-hdsl/framework/src/main/resources/webapps/static/dfs-dust.js 
b/hadoop-hdsl/framework/src/main/resources/webapps/static/dfs-dust.js
deleted file mode 100644
index c7af6a1..000
--- a/hadoop-hdsl/framework/src/main/resources/webapps/static/dfs-dust.js
+++ /dev/null
@@ -1,133 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-(function ($, dust, exports) {
-  "use strict";
-
-  var filters = {
-'fmt_bytes': function (v) {
-  var UNITS = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'ZB'];
-  var prev = 0, i = 0;
-  while (Math.floor(v) > 0 && i < UNITS.length) {
-prev = v;
-v /= 1024;
-i += 1;
-  }
-
-  if (i > 0 && i < UNITS.length) {
-v = prev;
-i -= 1;
-  }
-  return Math.round(v * 100) / 100 + ' ' + UNITS[i];
-},
-
-'fmt_percentage': function (v) {
-  return Math.round(v * 100) / 100 + '%';
-},
-'elapsed': function(v) {
-  //elapsed sec from epoch sec
-  return Date.now() - v * 1000;
-},
-'fmt_time': function (v) {
-  var s = Math.floor(v / 1000), h = Math.floor(s / 3600);
-  s -= h * 3600;
-  var m = Math.floor(s / 60);
-  s -= m * 60;
-
-  var res = s + " sec";
-  if (m !== 0) {
-res = m + " mins, " + res;
-  }
-
-  if (h !== 0) {
-res = h + " hrs, " + res;
-  }
-
-  return res;
-},
-
-'date_tostring' : function (v) {
-  return moment(Number(v)).format('ddd MMM DD HH:mm:ss ZZ ');
-},
-
-'format_compile_info' : function (v) {
-  var info = v.split(" by ")
-  var date = moment(info[0]).format('ddd MMM DD HH:mm:ss ZZ ');
-  return date.concat(" by ").concat(info[1]);
- },
-
-'helper_to_permission': function (v) {
-  var symbols = [ '---', '--x', '-w-', '-wx', 'r--', 'r-x', 'rw-', 'rwx' ];
-  var vInt = parseInt(v, 8);
-  var sticky = (vInt & (1 << 9)) != 0;
-
-  var res = "";
-  for (var i = 0; i < 3; ++i) {
-res = symbols[(v % 10)] + res;
-v = Math.floor(v / 10);
-  }
-
-  if (sticky) {
-var otherExec = (vInt & 1) == 1;
-res = res.substr(0, res.length - 1) + (otherExec ? 't' : 'T');
-  }
-
-  return res;
-},
-
-'helper_to_directory' : function (v) {
-  return v === 'DIRECTORY' ? 'd' : '-';
-},
-
-'helper_to_acl_bit': function (v) {
-  return v ? '+' : "";
-},
-
-'fmt_number': function (v) {
-  return v.toLocaleString();
-}
-  };
-  $.extend(dust.filters, filters);
-
-  /**
-   * Load a sequence of JSON.
-   *
-   * beans is an array of tuples in the format of {url, name}.
-   */
-  function load_json(beans, success_cb, error_cb) {
-var data = {}, error = false, to_be_completed = beans.length;
-
-$.each(beans, function(idx, b) {
-  if (error) {
-return false;
-  }
-  $.get(b.url, function (resp) {
-data[b.name] = resp;
-to_be_completed -= 1;
-if (to_be_completed === 0) {
-  success_cb(data);
-}
-  }).error(function (jqxhr, text, err) {
-error = true;
-error_cb(b.url, jqxhr, text, err);
-  });
-});
-  }
-
-  exports.load_json = load_json;
-
-}($, dust, window));

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.css
--
diff --git 
a/hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.css 
b/hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.css
deleted file mode 100644
index b8a5c0f..000
--- a/hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.css
+++ /dev/null
@@ -1,2 +0,0 @@
-.nvd3 .nv-axis line,.nvd3 .nv-axis 
path{fill:none;shape-rendering:crispEdges}.nv-brush .extent,.nvd3 .background 
path,.nvd3 .nv-axis line,.nvd3 .nv-axis 
path{shape-rendering:crispEdges}.nv-distx,.nv-disty,.nv-noninteractive,.nvd3 
.nv-axi

[07/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/framework/src/main/resources/webapps/static/angular-1.6.4.min.js
--
diff --git 
a/hadoop-hdsl/framework/src/main/resources/webapps/static/angular-1.6.4.min.js 
b/hadoop-hdsl/framework/src/main/resources/webapps/static/angular-1.6.4.min.js
deleted file mode 100644
index c4bf158..000
--- 
a/hadoop-hdsl/framework/src/main/resources/webapps/static/angular-1.6.4.min.js
+++ /dev/null
@@ -1,332 +0,0 @@
-/*
- AngularJS v1.6.4
- (c) 2010-2017 Google, Inc. http://angularjs.org
- License: MIT
-*/

[01/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1b9ecc264 -> 9d6befb29


http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/framework/src/main/resources/webapps/static/ozone.css
--
diff --git a/hadoop-hdsl/framework/src/main/resources/webapps/static/ozone.css 
b/hadoop-hdsl/framework/src/main/resources/webapps/static/ozone.css
deleted file mode 100644
index 271ac74..000
--- a/hadoop-hdsl/framework/src/main/resources/webapps/static/ozone.css
+++ /dev/null
@@ -1,60 +0,0 @@
-/**
- *   Licensed to the Apache Software Foundation (ASF) under one or more
- *  contributor license agreements.  See the NOTICE file distributed with
- *  this work for additional information regarding copyright ownership.
- *  The ASF licenses this file to You under the Apache License, Version 2.0
- *  (the "License"); you may not use this file except in compliance with
- *  the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
-*/
-body {
-padding: 40px;
-padding-top: 60px;
-}
-.starter-template {
-padding: 40px 15px;
-text-align: center;
-}
-
-
-.btn {
-border: 0 none;
-font-weight: 700;
-letter-spacing: 1px;
-text-transform: uppercase;
-}
-
-.btn:focus, .btn:active:focus, .btn.active:focus {
-outline: 0 none;
-}
-
-.table-striped > tbody > tr:nth-child(2n+1).selectedtag > td:hover {
-background-color: #3276b1;
-}
-.table-striped > tbody > tr:nth-child(2n+1).selectedtag > td {
-background-color: #3276b1;
-}
-.tagPanel tr.selectedtag td {
-background-color: #3276b1;
-}
-.top-buffer { margin-top:4px; }
-
-
-.sortorder:after {
-content: '\25b2';   // BLACK UP-POINTING TRIANGLE
-}
-.sortorder.reverse:after {
-content: '\25bc';   // BLACK DOWN-POINTING TRIANGLE
-}
-
-.wrap-table{
-word-wrap: break-word;
-table-layout: fixed;
-}





[06/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/framework/src/main/resources/webapps/static/angular-nvd3-1.0.9.min.js
--
diff --git 
a/hadoop-hdsl/framework/src/main/resources/webapps/static/angular-nvd3-1.0.9.min.js
 
b/hadoop-hdsl/framework/src/main/resources/webapps/static/angular-nvd3-1.0.9.min.js
deleted file mode 100644
index 4aced57..000
--- 
a/hadoop-hdsl/framework/src/main/resources/webapps/static/angular-nvd3-1.0.9.min.js
+++ /dev/null
@@ -1 +0,0 @@

[09/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
--
diff --git 
a/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
 
b/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
deleted file mode 100644
index 0ef9406..000
--- 
a/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
+++ /dev/null
@@ -1,270 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership.  The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations 
under
- * the License.
- */
-
-package org.apache.hadoop.ozone.container.ozoneimpl;
-
-import com.google.common.annotations.VisibleForTesting;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
-import org.apache.hadoop.hdsl.protocol.DatanodeDetails;
-import org.apache.hadoop.ozone.OzoneConfigKeys;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerData;
-import org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl;
-import org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl;
-import org.apache.hadoop.ozone.container.common.impl.Dispatcher;
-import org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl;
-import org.apache.hadoop.ozone.container.common.interfaces.ChunkManager;
-import org.apache.hadoop.ozone.container.common.interfaces.ContainerDispatcher;
-import org.apache.hadoop.ozone.container.common.interfaces.ContainerManager;
-import org.apache.hadoop.ozone.container.common.interfaces.KeyManager;
-import 
org.apache.hadoop.ozone.container.common.statemachine.background.BlockDeletingService;
-import org.apache.hadoop.ozone.container.common.transport.server.XceiverServer;
-import 
org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis;
-
-import org.apache.hadoop.hdsl.protocol.proto.HdslProtos;
-import 
org.apache.hadoop.hdsl.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerReportsRequestProto;
-import 
org.apache.hadoop.hdsl.protocol.proto.StorageContainerDatanodeProtocolProtos.ReportState;
-import 
org.apache.hadoop.hdsl.protocol.proto.StorageContainerDatanodeProtocolProtos.SCMNodeReport;
-import 
org.apache.hadoop.ozone.container.common.transport.server.XceiverServerSpi;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.IOException;
-import java.nio.file.Paths;
-import java.util.LinkedList;
-import java.util.List;
-import java.util.concurrent.TimeUnit;
-
-import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY;
-import static org.apache.hadoop.ozone.OzoneConsts.CONTAINER_ROOT_PREFIX;
-import static org.apache.hadoop.ozone.OzoneConfigKeys
-.OZONE_BLOCK_DELETING_SERVICE_INTERVAL;
-import static org.apache.hadoop.ozone.OzoneConfigKeys
-.OZONE_BLOCK_DELETING_SERVICE_INTERVAL_DEFAULT;
-import static org.apache.hadoop.ozone.OzoneConsts.INVALID_PORT;
-import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_SERVICE_TIMEOUT;
-import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_SERVICE_TIMEOUT_DEFAULT;
-
-/**
- * Ozone main class sets up the network server and initializes the container
- * layer.
- */
-public class OzoneContainer {
-  private static final Logger LOG =
-  LoggerFactory.getLogger(OzoneContainer.class);
-
-  private final Configuration ozoneConfig;
-  private final ContainerDispatcher dispatcher;
-  private final ContainerManager manager;
-  private final XceiverServerSpi[] server;
-  private final ChunkManager chunkManager;
-  private final KeyManager keyManager;
-  private final BlockDeletingService blockDeletingService;
-
-  /**
-   * Creates a network endpoint and enables Ozone container.
-   *
-   * @param ozoneConfig - Config
-   * @throws IOException
-   */
-  public OzoneContainer(
-  DatanodeDetails datanodeDetails, Configuration ozoneConfig)
-  throws IOException {
-this.ozoneConfig = ozoneConfig;
-    List<StorageLocation> locations = new LinkedList<>();
-String[] paths = ozoneConfig.getStrings(
-OzoneC

[10/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
--
diff --git 
a/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
 
b/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
deleted file mode 100644
index 08f47a2..000
--- 
a/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
+++ /dev/null
@@ -1,157 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership.  The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations 
under
- * the License.
- */
-package org.apache.hadoop.ozone.container.common.states.datanode;
-
-import com.google.common.base.Strings;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdsl.HdslUtils;
-import org.apache.hadoop.hdsl.protocol.DatanodeDetails;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerUtils;
-import 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine;
-import 
org.apache.hadoop.ozone.container.common.statemachine.SCMConnectionManager;
-import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
-import org.apache.hadoop.ozone.container.common.states.DatanodeState;
-
-import org.apache.hadoop.scm.ScmConfigKeys;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.File;
-import java.io.IOException;
-import java.net.InetSocketAddress;
-import java.util.Collection;
-import java.util.concurrent.Callable;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Future;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.TimeoutException;
-
-import static org.apache.hadoop.hdsl.HdslUtils.getSCMAddresses;
-
-/**
- * Init Datanode State is the task that gets run when we are in Init State.
- */
-public class InitDatanodeState implements DatanodeState,
-    Callable<DatanodeStateMachine.DatanodeStates> {
-  static final Logger LOG = LoggerFactory.getLogger(InitDatanodeState.class);
-  private final SCMConnectionManager connectionManager;
-  private final Configuration conf;
-  private final StateContext context;
-  private Future<DatanodeStateMachine.DatanodeStates> result;
-
-  /**
-   *  Create InitDatanodeState Task.
-   *
-   * @param conf - Conf
-   * @param connectionManager - Connection Manager
-   * @param context - Current Context
-   */
-  public InitDatanodeState(Configuration conf,
-   SCMConnectionManager connectionManager,
-   StateContext context) {
-this.conf = conf;
-this.connectionManager = connectionManager;
-this.context = context;
-  }
-
-  /**
-   * Computes a result, or throws an exception if unable to do so.
-   *
-   * @return computed result
-   * @throws Exception if unable to compute a result
-   */
-  @Override
-  public DatanodeStateMachine.DatanodeStates call() throws Exception {
-    Collection<InetSocketAddress> addresses = null;
-try {
-  addresses = getSCMAddresses(conf);
-} catch (IllegalArgumentException e) {
-  if(!Strings.isNullOrEmpty(e.getMessage())) {
-LOG.error("Failed to get SCM addresses: " + e.getMessage());
-  }
-  return DatanodeStateMachine.DatanodeStates.SHUTDOWN;
-}
-
-if (addresses == null || addresses.isEmpty()) {
-  LOG.error("Null or empty SCM address list found.");
-  return DatanodeStateMachine.DatanodeStates.SHUTDOWN;
-} else {
-  for (InetSocketAddress addr : addresses) {
-connectionManager.addSCMServer(addr);
-  }
-}
-
-// If datanode ID is set, persist it to the ID file.
-persistContainerDatanodeDetails();
-
-return this.context.getState().getNextState();
-  }
-
-  /**
-   * Persist DatanodeDetails to datanode.id file.
-   */
-  private void persistContainerDatanodeDetails() throws IOException {
-String dataNodeIDPath = HdslUtils.getDatanodeIdFilePath(conf);
-File idPath = new File(dataNodeIDPath);
-DatanodeDetails datanodeDetails = this.context.getParent()
-.getDatanodeDetails();
-if (d

[03/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js
--
diff --git 
a/hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js 
b/hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js
deleted file mode 100644
index 9cfd702..000
--- a/hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js
+++ /dev/null
@@ -1,11 +0,0 @@
-/* nvd3 version 1.8.5 (https://github.com/novus/nvd3) 2016-12-01 */
-
-!function(){var 
a={};a.dev=!1,a.tooltip=a.tooltip||{},a.utils=a.utils||{},a.models=a.models||{},a.charts={},a.logs={},a.dom={},"undefined"!=typeof
 module&&"undefined"!=typeof exports&&"undefined"==typeof 
d3&&(d3=require("d3")),a.dispatch=d3.dispatch("render_start","render_end"),Function.prototype.bind||(Function.prototype.bind=function(a){if("function"!=typeof
 this)throw new TypeError("Function.prototype.bind - what is trying to be bound 
is not callable");var 
b=Array.prototype.slice.call(arguments,1),c=this,d=function(){},e=function(){return
 c.apply(this instanceof 
d&&a?this:a,b.concat(Array.prototype.slice.call(arguments)))};return 
d.prototype=this.prototype,e.prototype=new 
d,e}),a.dev&&(a.dispatch.on("render_start",function(b){a.logs.startTime=+new 
Date}),a.dispatch.on("render_end",function(b){a.logs.endTime=+new 
Date,a.logs.totalTime=a.logs.endTime-a.logs.startTime,a.log("total",a.logs.totalTime)})),a.log=function(){if(a.dev&&window.console&&console.log&&console.log.apply)console
 .log.apply(console,arguments);else 
if(a.dev&&window.console&&"function"==typeof 
console.log&&Function.prototype.bind){var 
b=Function.prototype.bind.call(console.log,console);b.apply(console,arguments)}return
 
arguments[arguments.length-1]},a.deprecated=function(a,b){console&&console.warn&&console.warn("nvd3
 warning: `"+a+"` has been deprecated. 
",b||"")},a.render=function(b){b=b||1,a.render.active=!0,a.dispatch.render_start();var
 c=function(){for(var 
d,e,f=0;b>f&&(e=a.render.queue[f]);f++)d=e.generate(),typeof e.callback==typeof 
Function&&e.callback(d);a.render.queue.splice(0,f),a.render.queue.length?setTimeout(c):(a.dispatch.render_end(),a.render.active=!1)};setTimeout(c)},a.render.active=!1,a.render.queue=[],a.addGraph=function(b){typeof
 arguments[0]==typeof 
Function&&(b={generate:arguments[0],callback:arguments[1]}),a.render.queue.push(b),a.render.active||a.render()},"undefined"!=typeof
 module&&"undefined"!=typeof exports&&(module.exports=a),"undefined"!=typeof 
window&&(window.nv=
 a),a.dom.write=function(a){return void 
0!==window.fastdom?fastdom.mutate(a):a()},a.dom.read=function(a){return void 
0!==window.fastdom?fastdom.measure(a):a()},a.interactiveGuideline=function(){"use
 strict";function b(l){l.each(function(l){function m(){var 
a=d3.mouse(this),d=a[0],e=a[1],h=!0,i=!1;if(k&&(d=d3.event.offsetX,e=d3.event.offsetY,"svg"!==d3.event.target.tagName&&(h=!1),d3.event.target.className.baseVal.match("nv-legend")&&(i=!0)),h&&(d-=c.left,e-=c.top),"mouseout"===d3.event.type||0>d||0>e||d>o||e>p||d3.event.relatedTarget&&void
 
0===d3.event.relatedTarget.ownerSVGElement||i){if(k&&d3.event.relatedTarget&&void
 0===d3.event.relatedTarget.ownerSVGElement&&(void 
0===d3.event.relatedTarget.className||d3.event.relatedTarget.className.match(j.nvPointerEventsClass)))return;return
 g.elementMouseout({mouseX:d,mouseY:e}),b.renderGuideLine(null),void 
j.hidden(!0)}j.hidden(!1);var l="function"==typeof f.rangeBands,m=void 
0;if(l){var n=d3.bisect(f.range(),d)-1;if(!(f.range()[n]+f.rangeB
 and()>=d))return 
g.elementMouseout({mouseX:d,mouseY:e}),b.renderGuideLine(null),void 
j.hidden(!0);m=f.domain()[d3.bisect(f.range(),d)-1]}else 
m=f.invert(d);g.elementMousemove({mouseX:d,mouseY:e,pointXValue:m}),"dblclick"===d3.event.type&&g.elementDblclick({mouseX:d,mouseY:e,pointXValue:m}),"click"===d3.event.type&&g.elementClick({mouseX:d,mouseY:e,pointXValue:m}),"mousedown"===d3.event.type&&g.elementMouseDown({mouseX:d,mouseY:e,pointXValue:m}),"mouseup"===d3.event.type&&g.elementMouseUp({mouseX:d,mouseY:e,pointXValue:m})}var
 
n=d3.select(this),o=d||960,p=e||400,q=n.selectAll("g.nv-wrap.nv-interactiveLineLayer").data([l]),r=q.enter().append("g").attr("class","
 nv-wrap 
nv-interactiveLineLayer");r.append("g").attr("class","nv-interactiveGuideLine"),i&&(i.on("touchmove",m).on("mousemove",m,!0).on("mouseout",m,!0).on("mousedown",m,!0).on("mouseup",m,!0).on("dblclick",m).on("click",m),b.guideLine=null,b.renderGuideLine=function(c){h&&(b.guideLine&&b.guideLine.attr("x1")===c||a.dom.write(f
 unction(){var 
b=q.select(".nv-interactiveGuideLine").selectAll("line").data(null!=c?[a.utils.NaNtoZero(c)]:[],String);b.enter().append("line").attr("class","nv-guideline").attr("x1",function(a){return
 a}).attr("x2",function(a){return 
a}).attr("y1",p).attr("y2",0),b.exit().remove()}))})})}var 
c={left:0,top:0},d=null,e=null,f=d3.scale.linear(),g=d3.dispa

[12/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/Dispatcher.java
--
diff --git 
a/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/Dispatcher.java
 
b/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/Dispatcher.java
deleted file mode 100644
index a1690b5..000
--- 
a/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/Dispatcher.java
+++ /dev/null
@@ -1,708 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.ozone.container.common.impl;
-
-import com.google.common.base.Preconditions;
-import com.google.protobuf.ByteString;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdsl.protocol.proto.ContainerProtos;
-import org.apache.hadoop.hdsl.protocol.proto.ContainerProtos
-.ContainerCommandRequestProto;
-import org.apache.hadoop.hdsl.protocol.proto.ContainerProtos
-.ContainerCommandResponseProto;
-import org.apache.hadoop.hdsl.protocol.proto.ContainerProtos.Type;
-import org.apache.hadoop.ozone.container.common.helpers.ChunkInfo;
-import org.apache.hadoop.ozone.container.common.helpers.ChunkUtils;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerData;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerMetrics;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerUtils;
-import org.apache.hadoop.ozone.container.common.helpers.FileUtils;
-import org.apache.hadoop.ozone.container.common.helpers.KeyData;
-import org.apache.hadoop.ozone.container.common.helpers.KeyUtils;
-import org.apache.hadoop.ozone.container.common.interfaces.ContainerDispatcher;
-import org.apache.hadoop.ozone.container.common.interfaces.ContainerManager;
-import org.apache.hadoop.scm.container.common.helpers.Pipeline;
-import 
org.apache.hadoop.scm.container.common.helpers.StorageContainerException;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.IOException;
-import java.security.NoSuchAlgorithmException;
-import java.util.LinkedList;
-import java.util.List;
-
-import static 
org.apache.hadoop.hdsl.protocol.proto.ContainerProtos.Result.CLOSED_CONTAINER_IO;
-import static 
org.apache.hadoop.hdsl.protocol.proto.ContainerProtos.Result.GET_SMALL_FILE_ERROR;
-import static 
org.apache.hadoop.hdsl.protocol.proto.ContainerProtos.Result.NO_SUCH_ALGORITHM;
-import static 
org.apache.hadoop.hdsl.protocol.proto.ContainerProtos.Result.PUT_SMALL_FILE_ERROR;
-
-/**
- * Ozone Container dispatcher takes a call from the netty server and routes it
- * to the right handler function.
- */
-public class Dispatcher implements ContainerDispatcher {
-  static final Logger LOG = LoggerFactory.getLogger(Dispatcher.class);
-
-  private final ContainerManager containerManager;
-  private ContainerMetrics metrics;
-  private Configuration conf;
-
-  /**
-   * Constructs a Dispatcher that receives calls from
-   * XceiverServerHandler.
-   *
-   * @param containerManager - A class that manages containers.
-   */
-  public Dispatcher(ContainerManager containerManager, Configuration config) {
-Preconditions.checkNotNull(containerManager);
-this.containerManager = containerManager;
-this.metrics = null;
-this.conf = config;
-  }
-
-  @Override
-  public void init() {
-this.metrics = ContainerMetrics.create(conf);
-  }
-
-  @Override
-  public void shutdown() {
-  }
-
-  @Override
-  public ContainerCommandResponseProto dispatch(
-  ContainerCommandRequestProto msg) {
-LOG.trace("Command {}, trace ID: {} ", msg.getCmdType().toString(),
-msg.getTraceID());
-long startNanos = System.nanoTime();
-ContainerCommandResponseProto resp = null;
-try {
-  Preconditions.checkNotNull(msg);
-  Type cmdType = msg.getCmdType();
-  metrics.incContainerOpcMetrics(cmdType);
-  if ((cmdType == Type.CreateContainer) ||
-  (cmdType == Type.DeleteContainer) ||
-  (cmdType == Type.ReadContainer) ||
-  (cmdType == Type

[15/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdsl/common/src/main/resources/ozone-default.xml 
b/hadoop-hdsl/common/src/main/resources/ozone-default.xml
deleted file mode 100644
index 9feadcf..000
--- a/hadoop-hdsl/common/src/main/resources/ozone-default.xml
+++ /dev/null
@@ -1,1031 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-  
-  
-ozone.container.cache.size
-1024
-PERFORMANCE, CONTAINER, STORAGE
-The open container is cached on the data node side. We 
maintain
-  an LRU
-  cache for caching the recently used containers. This setting controls the
-  size of that cache.
-
-  
-  
-dfs.container.ipc
-9859
-OZONE, CONTAINER, MANAGEMENT
-The ipc port number of container.
-  
-  
-dfs.container.ipc.random.port
-false
-OZONE, DEBUG, CONTAINER
-Allocates a random free port for ozone container. This is used
-  only while
-  running unit tests.
-
-  
-  
-dfs.container.ratis.datanode.storage.dir
-
-OZONE, CONTAINER, STORAGE, MANAGEMENT, RATIS
-This directory is used for storing Ratis metadata like logs. 
If
-  this is
-  not set then the default metadata dir is used. A warning will be logged if
-  this is not set. Ideally, this should be mapped to a fast disk like an SSD.
-
-  
-  
-dfs.container.ratis.enabled
-false
-OZONE, MANAGEMENT, PIPELINE, RATIS
-Ozone supports different kinds of replication pipelines. Ratis
-  is one of
-  the replication pipelines supported by Ozone.
-
-  
-  
-dfs.container.ratis.ipc
-9858
-OZONE, CONTAINER, PIPELINE, RATIS, MANAGEMENT
-The ipc port number of container.
-  
-  
-dfs.container.ratis.ipc.random.port
-false
-OZONE,DEBUG
-Allocates a random free port for ozone ratis port for the
-  container. This
-  is used only while running unit tests.
-
-  
-  
-dfs.container.ratis.rpc.type
-GRPC
-OZONE, RATIS, MANAGEMENT
-Ratis supports different kinds of transports like netty, GRPC,
-  Hadoop RPC
-  etc. This picks one of those for this cluster.
-
-  
-  
-dfs.container.ratis.num.write.chunk.threads
-60
-OZONE, RATIS, PERFORMANCE
-Maximum number of threads in the thread pool that Ratis
-  will use for writing chunks (60 by default).
-
-  
-  
-dfs.container.ratis.segment.size
-1073741824
-OZONE, RATIS, PERFORMANCE
-The size of the raft segment used by Apache Ratis on 
datanodes.
-  (1 GB by default)
-
-  
-  
-dfs.container.ratis.segment.preallocated.size
-134217728
-OZONE, RATIS, PERFORMANCE
-The size of the buffer which is preallocated for raft segment
-  used by Apache Ratis on datanodes. (128 MB by default)
-
-  
-  
-ozone.container.report.interval
-6ms
-OZONE, CONTAINER, MANAGEMENT
-Time interval for the datanode to send a container report. Each
-  datanode periodically sends a container report upon receiving
-  sendContainerReport from SCM. Unit could be defined with
-  postfix (ns,ms,s,m,h,d)
-  
-  
-  
-ozone.administrators
-
-OZONE, SECURITY
-Ozone administrator users, delimited by commas.
-  If not set, only the user who launches an ozone service will be the admin
-  user. This property must be set if ozone services are started by 
different
-  users. Otherwise, the RPC layer will reject calls from other servers 
which
-  are started by users not in the list.
-
-  
-  
-ozone.block.deleting.container.limit.per.interval
-10
-OZONE, PERFORMANCE, SCM
-A maximum number of containers to be scanned by block deleting
-  service per
-  time interval. The block deleting service spawns a thread to handle block
-  deletions in a container. This property is used to throttle the number of
-  threads spawned for block deletions.
-
-  
-  
-ozone.block.deleting.limit.per.task
-1000
-OZONE, PERFORMANCE, SCM
-A maximum number of blocks to be deleted by block deleting
-  service per
-  time interval. This property is used to throttle the actual number of
-  block deletions on a data node per container.
-
-  
-  
-ozone.block.deleting.service.interval
-1m
-OZONE, PERFORMANCE, SCM
-Time interval of the block deleting service.
-  The block deleting service runs on each datanode periodically and
-  deletes blocks queued for deletion. Unit could be defined with
-  postfix (ns,ms,s,m,h,d)
-
-  
-  
-ozone.block.deleting.service.timeout
-30ms
-OZONE, PERFORMANCE, SCM
-A timeout value of block deletion service. If this is set
-  greater than 0,
-  the service will stop waiting for the block deleting completion after 
this
-  time. If timeout happens to a large proportion of block deletio
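
For context, a minimal sketch (not part of this patch) of how the block
deletion settings described above could be read through Hadoop's Configuration
API. The property names and defaults ("1m", 1000) are taken from the excerpt;
everything else, including the class name, is an illustrative assumption.

import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public final class BlockDeletingConfigExample {
  public static void main(String[] args) {
    // ozone-site.xml overrides the defaults shipped in ozone-default.xml.
    Configuration conf = new Configuration();
    conf.addResource("ozone-site.xml");

    // Interval of the block deleting service; time-unit suffixes such as "1m"
    // are accepted by getTimeDuration.
    long intervalMs = conf.getTimeDuration(
        "ozone.block.deleting.service.interval", 60_000L, TimeUnit.MILLISECONDS);

    // Throttle: how many blocks may be deleted per container per interval.
    int blocksPerTask =
        conf.getInt("ozone.block.deleting.limit.per.task", 1000);

    System.out.println("interval=" + intervalMs + "ms, perTask=" + blocksPerTask);
  }
}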

[05/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/framework/src/main/resources/webapps/static/d3-3.5.17.min.js
--
diff --git 
a/hadoop-hdsl/framework/src/main/resources/webapps/static/d3-3.5.17.min.js 
b/hadoop-hdsl/framework/src/main/resources/webapps/static/d3-3.5.17.min.js
deleted file mode 100644
index 1664873..000
--- a/hadoop-hdsl/framework/src/main/resources/webapps/static/d3-3.5.17.min.js
+++ /dev/null
@@ -1,5 +0,0 @@
-!function(){function n(n){return 
n&&(n.ownerDocument||n.document||n).documentElement}function t(n){return 
n&&(n.ownerDocument&&n.ownerDocument.defaultView||n.document&&n||n.defaultView)}function
 e(n,t){return t>n?-1:n>t?1:n>=t?0:NaN}function r(n){return 
null===n?NaN:+n}function i(n){return!isNaN(n)}function 
u(n){return{left:function(t,e,r,i){for(arguments.length<3&&(r=0),arguments.length<4&&(i=t.length);i>r;){var
 u=r+i>>>1;n(t[u],e)<0?r=u+1:i=u}return 
r},right:function(t,e,r,i){for(arguments.length<3&&(r=0),arguments.length<4&&(i=t.length);i>r;){var
 u=r+i>>>1;n(t[u],e)>0?i=u:r=u+1}return r}}}function o(n){return 
n.length}function a(n){for(var t=1;n*t%1;)t*=10;return t}function 
l(n,t){for(var e in 
t)Object.defineProperty(n.prototype,e,{value:t[e],enumerable:!1})}function 
c(){this._=Object.create(null)}function 
f(n){return(n+="")===bo||n[0]===_o?_o+n:n}function 
s(n){return(n+="")[0]===_o?n.slice(1):n}function h(n){return f(n)in 
this._}function p(n){return(n=f(n))in this._&&delete this
 ._[n]}function g(){var n=[];for(var t in this._)n.push(s(t));return n}function 
v(){var n=0;for(var t in this._)++n;return n}function d(){for(var n in 
this._)return!1;return!0}function y(){this._=Object.create(null)}function 
m(n){return n}function M(n,t,e){return function(){var 
r=e.apply(t,arguments);return r===t?n:r}}function x(n,t){if(t in n)return 
t;t=t.charAt(0).toUpperCase()+t.slice(1);for(var e=0,r=wo.length;r>e;++e){var 
i=wo[e]+t;if(i in n)return i}}function b(){}function _(){}function 
w(n){function t(){for(var 
t,r=e,i=-1,u=r.length;++ie;e++)for(var 
i,u=n[e],o=0,a=u.length;a>o;o++)(i=u[o])&&t(i,o,e);return n}function 
Z(n){return ko(n,qo),n}function V(n){var t,e;return function(r,i,u){var 
o,a=n[u].update,l=a.length;for(u!=e&&(e=u,t=0),i>=t&&(t=i+1);!(o=a[t])&&++t0&&(n=n.slice(0,a));var c=To.get(n);return 
c&&(n=c,l=B),a?t?i:r:t?b:u}function $(n,t){return function(e){var 
r=ao.event;ao.event=e,t[0]=this.__data__;try{n.apply(this,t)}finally{ao.event=r}}}function
 B(n,t){var e=$(n,t);return function(n){var 
t=this,r=n.relatedTarget;r&&(r===t||8&r.compareDocumentPosition(t))||e.call(t,n)}}function
 W(e){var r=".dragsuppress-"+ 
++Do,i="click"+r,u=ao.select(t(e)).on("touchmove"+r,S).on("dragstart"+r,S).on("selectstart"+r,S);if(null==Ro&&(Ro="onselectstart"in
 e?!1:x(e.style,"userSelect")),Ro){var o=n(e).style,a=o[Ro];o[Ro]="none"}return 
function(n){if(u.on(r,null),Ro&&(o[Ro]=a),n){var 
t=function(){u.on(i,null)};u.on(i,function(){S(),t()},!0),setTimeout(t,0)}}}function
 J(n,e){e.changedTouches&&(e=e.changedTouches[0]);var 
r=n.ownerSVGElement||n;if(r.createSVGPoint){var 
i=r.createSVGPoint();if(0>Po){var u=t(n);if(u.scrollX||u.scrollY){r=ao.select
 
("body").append("svg").style({position:"absolute",top:0,left:0,margin:0,padding:0,border:"none"},"important");var
 o=r[0][0].getScreenCTM();Po=!(o.f||o.e),r.remove()}}return 
Po?(i.x=e.pageX,i.y=e.pageY):(i.x=e.clientX,i.y=e.clientY),i=i.matrixTransform(n.getScreenCTM().inverse()),[i.x,i.y]}var
 
a=n.getBoundingClientRect();return[e.clientX-a.left-n.clientLeft,e.clientY-a.top-n.clientTop]}function
 G(){return ao.event.changedTouches[0].identifier}function K(n){return 
n>0?1:0>n?-1:0}function 
Q(n,t,e){return(t[0]-n[0])*(e[1]-n[1])-(t[1]-n[1])*(e[0]-n[0])}function 
nn(n){return n>1?0:-1>n?Fo:Math.acos(n)}function tn(n){return 
n>1?Io:-1>n?-Io:Math.asin(n)}function 
en(n){return((n=Math.exp(n))-1/n)/2}function 
rn(n){return((n=Math.exp(n))+1/n)/2}function 
un(n){return((n=Math.exp(2*n))-1)/(n+1)}function 
on(n){return(n=Math.sin(n/2))*n}function an(){}function ln(n,t,e){return this 
instanceof ln?(this.h=+n,this.s=+t,void(this.l=+e)):arguments.length<2?n 
instanceof ln?new ln(n.h,n.s,n.l):_n(""+n,wn
 ,ln):new ln(n,t,e)}function cn(n,t,e){function r(n){return 
n>360?n-=360:0>n&&(n+=360),60>n?u+(o-u)*n/60:180>n?o:240>n?u+(o-u)*(240-n)/60:u}function
 i(n){return Math.round(255*r(n))}var u,o;return 
n=isNaN(n)?0:(n%=360)<0?n+360:n,t=isNaN(t)?0:0>t?0:t>1?1:t,e=0>e?0:e>1?1:e,o=.5>=e?e*(1+t):e+t-e*t,u=2*e-o,new
 mn(i(n+120),i(n),i(n-120))}function fn(n,t,e){return this instanceof 
fn?(this.h=+n,this.c=+t,void(this.l=+e)):arguments.length<2?n instanceof fn?new 
fn(n.h,n.c,n.l):n instanceof 
hn?gn(n.l,n.a,n.b):gn((n=Sn((n=ao.rgb(n)).r,n.g,n.b)).l,n.a,n.b):new 
fn(n,t,e)}function sn(n,t,e){return isNaN(n)&&(n=0),isNaN(t)&&(t=0),new 
hn(e,Math.cos(n*=Yo)*t,Math.sin(n)*t)}function hn(n,t,e){return this instanceof 
hn?(this.l=+n,this.a=+t,void(this.b=+e))

[08/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
--
diff --git 
a/hadoop-hdsl/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
 
b/hadoop-hdsl/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
deleted file mode 100644
index 187ecda..000
--- 
a/hadoop-hdsl/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
+++ /dev/null
@@ -1,351 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/**
- * These .proto interfaces are private and unstable.
- * Please see http://wiki.apache.org/hadoop/Compatibility
- * for what changes are allowed for a *unstable* .proto interface.
- */
-
-option java_package = "org.apache.hadoop.hdsl.protocol.proto";
-
-option java_outer_classname = "StorageContainerDatanodeProtocolProtos";
-
-option java_generic_services = true;
-
-option java_generate_equals_and_hash = true;
-
-package hadoop.hdsl;
-
-import "hdsl.proto";
-
-
-/**
-* This message is sent by the data node to indicate that it is alive or that
-* it is registering with the node manager.
-*/
-message SCMHeartbeatRequestProto {
-  required DatanodeDetailsProto datanodeDetails = 1;
-  optional SCMNodeReport nodeReport = 2;
-  optional ReportState containerReportState = 3;
-}
-
-enum DatanodeContainerState {
-  closed = 0;
-  open = 1;
-}
-
-/**
-NodeState contains messages from datanode to SCM saying that it has
-some information that SCM might be interested in.*/
-message ReportState {
-  enum states {
-noContainerReports = 0;
-completeContinerReport = 1;
-deltaContainerReport = 2;
-  }
-  required states state = 1;
-  required int64 count = 2 [default = 0];
-}
-
-
-/**
-This message is used to persist the information about a container in the
-SCM database. This information allows SCM to start up faster and avoids having
-all container info in memory all the time.
-  */
-message ContainerPersistanceProto {
-  required DatanodeContainerState state = 1;
-  required hadoop.hdsl.Pipeline pipeline = 2;
-  required ContainerInfo info = 3;
-}
-
-/**
-This message is used to do a quick lookup of which containers are affected
-if a node goes down.
-*/
-message NodeContianerMapping {
-  repeated string contianerName = 1;
-}
-
-/**
-A container report contains the following information.
-*/
-message ContainerInfo {
-  required string containerName = 1;
-  optional string finalhash = 2;
-  optional int64 size = 3;
-  optional int64 used = 4;
-  optional int64 keyCount = 5;
-  // TODO: move the io count to separate message
-  optional int64 readCount = 6;
-  optional int64 writeCount = 7;
-  optional int64 readBytes = 8;
-  optional int64 writeBytes = 9;
-  required int64 containerID = 10;
-  optional hadoop.hdsl.LifeCycleState state = 11;
-}
-
-// The deleted blocks which are stored in deletedBlock.db of scm.
-message DeletedBlocksTransaction {
-  required int64 txID = 1;
-  required string containerName = 2;
-  repeated string blockID = 3;
-  // the retry time of sending deleting command to datanode.
-  required int32 count = 4;
-}
-
-/**
-A set of container reports; the max count is generally set to
-8192 since that keeps the size of the reports under 1 MB.
-*/
-message ContainerReportsRequestProto {
-  enum reportType {
-fullReport = 0;
-deltaReport = 1;
-  }
-  required DatanodeDetailsProto datanodeDetails = 1;
-  repeated ContainerInfo reports = 2;
-  required reportType type = 3;
-}
-
-message ContainerReportsResponseProto {
-}
-
-/**
-* This message is sent along with the heartbeat to report datanode
-* storage utilization to SCM.
-*/
-message SCMNodeReport {
-  repeated SCMStorageReport storageReport = 1;
-}
-
-message SCMStorageReport {
-  required string storageUuid = 1;
-  optional uint64 capacity = 2 [default = 0];
-  optional uint64 scmUsed = 3 [default = 0];
-  optional uint64 remaining = 4 [default = 0];
-  //optional hadoop.hdfs.StorageTypeProto storageType = 5 [default = DISK];
-}
-
-message SCMRegisterRequestProto {
-  required DatanodeDetailsProto datanodeDetails = 1;
-  optional SCMNodeAddressList addressList = 2;
-}
-
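
For illustration only: a sketch of how a datanode-side caller could assemble
the heartbeat payload defined above with the protobuf-generated builders. Only
the message and field names come from this .proto file; the wrapper class and
its method names are hypothetical.

import org.apache.hadoop.hdsl.protocol.proto.StorageContainerDatanodeProtocolProtos.ReportState;
import org.apache.hadoop.hdsl.protocol.proto.StorageContainerDatanodeProtocolProtos.SCMNodeReport;
import org.apache.hadoop.hdsl.protocol.proto.StorageContainerDatanodeProtocolProtos.SCMStorageReport;

public final class NodeReportExample {

  /** Builds a single-volume storage report for the heartbeat. */
  static SCMNodeReport buildNodeReport(String storageUuid, long capacity, long used) {
    SCMStorageReport volume = SCMStorageReport.newBuilder()
        .setStorageUuid(storageUuid)
        .setCapacity(capacity)
        .setScmUsed(used)
        .setRemaining(capacity - used)
        .build();
    return SCMNodeReport.newBuilder()
        .addStorageReport(volume)
        .build();
  }

  /** A report state that tells SCM there are no container reports pending. */
  static ReportState noContainerReports() {
    return ReportState.newBuilder()
        .setState(ReportState.states.noContainerReports)
        .setCount(0)
        .build();
  }
}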

[35/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/framework/src/main/resources/webapps/static/angular-nvd3-1.0.9.min.js
--
diff --git 
a/hadoop-hdds/framework/src/main/resources/webapps/static/angular-nvd3-1.0.9.min.js
 
b/hadoop-hdds/framework/src/main/resources/webapps/static/angular-nvd3-1.0.9.min.js
new file mode 100644
index 000..4aced57
--- /dev/null
+++ 
b/hadoop-hdds/framework/src/main/resources/webapps/static/angular-nvd3-1.0.9.min.js
@@ -0,0 +1 @@
+!function(window){"use strict";var nv=window.nv;"undefined"!=typeof 
exports&&(nv=require("nvd3")),angular.module("nvd3",[]).directive("nvd3",["nvd3Utils",function(nvd3Utils){return{restrict:"AE",scope:{data:"=",options:"=",api:"=?",events:"=?",config:"=?",onReady:"&?"},link:function(scope,element,attrs){function
 
configure(chart,options,chartType){chart&&options&&angular.forEach(chart,function(value,key){"_"===key[0]||("dispatch"===key?(void
 
0!==options[key]&&null!==options[key]||scope._config.extended&&(options[key]={}),configureEvents(value,options[key])):"tooltip"===key?(void
 
0!==options[key]&&null!==options[key]||scope._config.extended&&(options[key]={}),configure(chart[key],options[key],chartType)):"contentGenerator"===key?options[key]&&chart[key](options[key]):-1===["axis","clearHighlights","defined","highlightPoint","nvPointerEventsClass","options","rangeBand","rangeBands","scatter","open","close","node"].indexOf(key)&&(void
 0===options[key]||null===options[key]?scope._config.
 extended&&(options[key]=value()):chart[key](options[key])))})}function 
configureEvents(dispatch,options){dispatch&&options&&angular.forEach(dispatch,function(value,key){void
 
0===options[key]||null===options[key]?scope._config.extended&&(options[key]=value.on):dispatch.on(key+"._",options[key])})}function
 configureWrapper(name){var 
_=nvd3Utils.deepExtend(defaultWrapper(name),scope.options[name]||{});scope._config.extended&&(scope.options[name]=_);var
 
wrapElement=angular.element("").html(_.html||"").addClass(name).addClass(_.className).removeAttr("style").css(_.css);_.html||wrapElement.text(_.text),_.enable&&("title"===name?element.prepend(wrapElement):"subtitle"===name?angular.element(element[0].querySelector(".title")).after(wrapElement):"caption"===name&&element.append(wrapElement))}function
 configureStyles(){var 
_=nvd3Utils.deepExtend(defaultStyles(),scope.options.styles||{});scope._config.extended&&(scope.options.styles=_),angular.forEach(_.classes,function(value,key){
 
value?element.addClass(key):element.removeClass(key)}),element.removeAttr("style").css(_.css)}function
 defaultWrapper(_){switch(_){case"title":return{enable:!1,text:"Write Your 
Title",className:"h4",css:{width:scope.options.chart.width+"px",textAlign:"center"}};case"subtitle":return{enable:!1,text:"Write
 Your 
Subtitle",css:{width:scope.options.chart.width+"px",textAlign:"center"}};case"caption":return{enable:!1,text:"Figure
 1. Write Your Caption 
text.",css:{width:scope.options.chart.width+"px",textAlign:"center"function 
defaultStyles(){return{classes:{"with-3d-shadow":!0,"with-transitions":!0,gallery:!1},css:{}}}function
 
dataWatchFn(newData,oldData){newData!==oldData&&(scope._config.disabled||(scope._config.refreshDataOnly?scope.api.update():scope.api.refresh()))}var
 
defaultConfig={extended:!1,visible:!0,disabled:!1,refreshDataOnly:!0,deepWatchOptions:!0,deepWatchData:!0,deepWatchDataDepth:2,debounce:10,debounceImmediate:!0};scope.isReady=!1,scope._config=angular.extend(defaultC
 
onfig,scope.config),scope.api={refresh:function(){scope.api.updateWithOptions(),scope.isReady=!0},refreshWithTimeout:function(t){setTimeout(function(){scope.api.refresh()},t)},update:function(){scope.chart&&scope.svg?"sunburstChart"===scope.options.chart.type?scope.svg.datum(angular.copy(scope.data)).call(scope.chart):scope.svg.datum(scope.data).call(scope.chart):scope.api.refresh()},updateWithTimeout:function(t){setTimeout(function(){scope.api.update()},t)},updateWithOptions:function(options){if(arguments.length){if(scope.options=options,scope._config.deepWatchOptions&&!scope._config.disabled)return}else
 
options=scope.options;scope.api.clearElement(),angular.isDefined(options)!==!1&&scope._config.visible&&(scope.chart=nv.models[options.chart.type](),scope.chart.id=Math.random().toString(36).substr(2,15),angular.forEach(scope.chart,function(value,key){"_"===key[0]||["clearHighlights","highlightPoint","id","options","resizeHandler","state","open","close","tooltipContent"].indexOf(key
 )>=0||("dispatch"===key?(void 
0!==options.chart[key]&&null!==options.chart[key]||scope._config.extended&&(options.chart[key]={}),configureEvents(scope.chart[key],options.chart[key])):["bars","bars1","bars2","boxplot","bullet","controls","discretebar","distX","distY","focus","interactiveLayer","legend","lines","lines1","lines2","multibar","pie","scatter","scatters1","scatters2","sparkline","stack1","stack2","sunburst","to

[14/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/common/src/test/java/org/apache/hadoop/utils/TestRocksDBStoreMBean.java
--
diff --git 
a/hadoop-hdsl/common/src/test/java/org/apache/hadoop/utils/TestRocksDBStoreMBean.java
 
b/hadoop-hdsl/common/src/test/java/org/apache/hadoop/utils/TestRocksDBStoreMBean.java
deleted file mode 100644
index e4f00f9..000
--- 
a/hadoop-hdsl/common/src/test/java/org/apache/hadoop/utils/TestRocksDBStoreMBean.java
+++ /dev/null
@@ -1,87 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.utils;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdsl.conf.OzoneConfiguration;
-import org.apache.hadoop.ozone.OzoneConfigKeys;
-import org.apache.hadoop.test.GenericTestUtils;
-import org.junit.Assert;
-import org.junit.Test;
-
-import javax.management.MBeanServer;
-import java.io.File;
-import java.lang.management.ManagementFactory;
-
-/**
- * Test the JMX interface for the rocksdb metastore implementation.
- */
-public class TestRocksDBStoreMBean {
-
-  @Test
-  public void testJmxBeans() throws Exception {
-File testDir =
-GenericTestUtils.getTestDir(getClass().getSimpleName() + "-withstat");
-
-Configuration conf = new OzoneConfiguration();
-conf.set(OzoneConfigKeys.OZONE_METADATA_STORE_IMPL,
-OzoneConfigKeys.OZONE_METADATA_STORE_IMPL_ROCKSDB);
-
-RocksDBStore metadataStore =
-(RocksDBStore) MetadataStoreBuilder.newBuilder().setConf(conf)
-.setCreateIfMissing(true).setDbFile(testDir).build();
-
-for (int i = 0; i < 10; i++) {
-  metadataStore.put("key".getBytes(), "value".getBytes());
-}
-
-MBeanServer platformMBeanServer =
-ManagementFactory.getPlatformMBeanServer();
-Thread.sleep(2000);
-
-Object keysWritten = platformMBeanServer
-.getAttribute(metadataStore.getStatMBeanName(), "NUMBER_KEYS_WRITTEN");
-
-Assert.assertEquals(10L, keysWritten);
-
-Object dbWriteAverage = platformMBeanServer
-.getAttribute(metadataStore.getStatMBeanName(), "DB_WRITE_AVERAGE");
-Assert.assertTrue((double) dbWriteAverage > 0);
-
-metadataStore.close();
-
-  }
-
-  @Test()
-  public void testDisabledStat() throws Exception {
-File testDir = GenericTestUtils
-.getTestDir(getClass().getSimpleName() + "-withoutstat");
-
-Configuration conf = new OzoneConfiguration();
-conf.set(OzoneConfigKeys.OZONE_METADATA_STORE_IMPL,
-OzoneConfigKeys.OZONE_METADATA_STORE_IMPL_ROCKSDB);
-conf.set(OzoneConfigKeys.OZONE_METADATA_STORE_ROCKSDB_STATISTICS,
-OzoneConfigKeys.OZONE_METADATA_STORE_ROCKSDB_STATISTICS_OFF);
-
-RocksDBStore metadataStore =
-(RocksDBStore) MetadataStoreBuilder.newBuilder().setConf(conf)
-.setCreateIfMissing(true).setDbFile(testDir).build();
-
-Assert.assertNull(metadataStore.getStatMBeanName());
-  }
-}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/container-service/pom.xml
--
diff --git a/hadoop-hdsl/container-service/pom.xml 
b/hadoop-hdsl/container-service/pom.xml
deleted file mode 100644
index 7d6d543..000
--- a/hadoop-hdsl/container-service/pom.xml
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
-http://maven.apache.org/POM/4.0.0";
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
-http://maven.apache.org/xsd/maven-4.0.0.xsd";>
-  4.0.0
-  
-org.apache.hadoop
-hadoop-hdsl
-3.2.0-SNAPSHOT
-  
-  hadoop-hdsl-container-service
-  3.2.0-SNAPSHOT
-  Apache Hadoop HDSL Container server
-  Apache Hadoop HDSL Container server
-  jar
-
-  
-hdsl
-true
-  
-
-  
-
-  org.apache.hadoop
-  hadoop-hdsl-common
-  provided
-
-
-  org.apache.hadoop
-  hadoop-hdsl-server-framework
-  provided
-
-
-
-  org.mockito
-  mockito-core
-  2.2.0
-  test
-
-
-  
-
-  
-
-  
-org.apache.hadoop
-hadoop-maven-plugins
-
-  
-   

[25/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
new file mode 100644
index 000..0174c17
--- /dev/null
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
@@ -0,0 +1,904 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.hadoop.hdds.scm.HddsServerUtil;
+import org.apache.hadoop.hdds.scm.StorageContainerManager;
+import org.apache.hadoop.hdds.scm.VersionInfo;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
+import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.DatanodeDetailsProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ReportState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMNodeReport;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMRegisteredCmdResponseProto
+.ErrorCode;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMStorageReport;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMVersionRequestProto;
+import org.apache.hadoop.ipc.Server;
+import org.apache.hadoop.metrics2.util.MBeans;
+import org.apache.hadoop.ozone.protocol.StorageContainerNodeProtocol;
+import org.apache.hadoop.ozone.protocol.VersionResponse;
+import org.apache.hadoop.ozone.protocol.commands.RegisteredCommand;
+import org.apache.hadoop.ozone.protocol.commands.ReregisterCommand;
+import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+import org.apache.hadoop.ozone.protocol.commands.SendContainerCommand;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.util.concurrent.HadoopExecutors;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.management.ObjectName;
+import java.io.IOException;
+import java.net.InetAddress;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Queue;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DEAD;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState
+.HEALTHY;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState
+.INVALID;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
+import static org.apache.hadoop.util.Time.monotonicNow;
+
+/**
+ * Maintains information about the Datanodes on SCM side.
+ * 
+ * Heartbeat handling in SCM is very simple compared to the HDFS heartbeatManager.
+ * 
+ * Here we maintain 3 maps, and we propagate a node from healthyNodesMap to
+ * staleNodesMap to deadNodesMap. This moving of a node from one map to another
+ * is controlled by 4 configuration variables. These variables define how many
+ * heartbeats must go missing for the node to move from one map to another.
+ * 
+ * Each heartbeat that SCMNodeManager receives is  put into heartbe
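
A deliberately simplified sketch of the healthy/stale/dead idea described
above. The types and thresholds here are made up for illustration and are not
the actual SCMNodeManager internals.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

final class NodeLivenessSketch {
  enum State { HEALTHY, STALE, DEAD }

  private final long staleIntervalMs;
  private final long deadIntervalMs;
  private final Map<UUID, Long> lastHeartbeat = new ConcurrentHashMap<>();

  NodeLivenessSketch(long staleIntervalMs, long deadIntervalMs) {
    this.staleIntervalMs = staleIntervalMs;
    this.deadIntervalMs = deadIntervalMs;
  }

  /** Record a heartbeat from the given datanode. */
  void onHeartbeat(UUID datanode, long nowMs) {
    lastHeartbeat.put(datanode, nowMs);
  }

  /** Classify a datanode by how long its heartbeats have been missing. */
  State classify(UUID datanode, long nowMs) {
    Long last = lastHeartbeat.get(datanode);
    if (last == null || nowMs - last >= deadIntervalMs) {
      return State.DEAD;
    }
    return (nowMs - last >= staleIntervalMs) ? State.STALE : State.HEALTHY;
  }
}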

[13/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ChunkManagerImpl.java
--
diff --git 
a/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ChunkManagerImpl.java
 
b/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ChunkManagerImpl.java
deleted file mode 100644
index 7c950dc..000
--- 
a/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ChunkManagerImpl.java
+++ /dev/null
@@ -1,232 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-package org.apache.hadoop.ozone.container.common.impl;
-
-import com.google.common.base.Preconditions;
-import org.apache.hadoop.fs.FileUtil;
-import org.apache.hadoop.hdsl.protocol.proto.ContainerProtos;
-import org.apache.hadoop.ozone.OzoneConsts;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerData;
-import 
org.apache.hadoop.scm.container.common.helpers.StorageContainerException;
-import org.apache.hadoop.ozone.container.common.helpers.ChunkInfo;
-import org.apache.hadoop.ozone.container.common.helpers.ChunkUtils;
-import org.apache.hadoop.ozone.container.common.interfaces.ChunkManager;
-import org.apache.hadoop.ozone.container.common.interfaces.ContainerManager;
-import org.apache.hadoop.scm.container.common.helpers.Pipeline;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.File;
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.nio.file.Files;
-import java.nio.file.StandardCopyOption;
-import java.security.NoSuchAlgorithmException;
-import java.util.concurrent.ExecutionException;
-
-import static org.apache.hadoop.hdsl.protocol.proto.ContainerProtos
-.Result.CONTAINER_INTERNAL_ERROR;
-import static org.apache.hadoop.hdsl.protocol.proto.ContainerProtos
-.Result.UNSUPPORTED_REQUEST;
-
-/**
- * An implementation of ChunkManager that is used by default in ozone.
- */
-public class ChunkManagerImpl implements ChunkManager {
-  static final Logger LOG =
-  LoggerFactory.getLogger(ChunkManagerImpl.class);
-
-  private final ContainerManager containerManager;
-
-  /**
-   * Constructs a ChunkManager.
-   *
-   * @param manager - ContainerManager.
-   */
-  public ChunkManagerImpl(ContainerManager manager) {
-this.containerManager = manager;
-  }
-
-  /**
-   * writes a given chunk.
-   *
-   * @param pipeline - Name and the set of machines that make this container.
-   * @param keyName - Name of the Key.
-   * @param info - ChunkInfo.
-   * @throws StorageContainerException
-   */
-  @Override
-  public void writeChunk(Pipeline pipeline, String keyName, ChunkInfo info,
-  byte[] data, ContainerProtos.Stage stage)
-  throws StorageContainerException {
-// we don't want container manager to go away while we are writing chunks.
-containerManager.readLock();
-
-// TODO : Take keyManager Write lock here.
-try {
-  Preconditions.checkNotNull(pipeline, "Pipeline cannot be null");
-  String containerName = pipeline.getContainerName();
-  Preconditions.checkNotNull(containerName,
-  "Container name cannot be null");
-  ContainerData container =
-  containerManager.readContainer(containerName);
-  File chunkFile = ChunkUtils.validateChunk(pipeline, container, info);
-  File tmpChunkFile = getTmpChunkFile(chunkFile, info);
-
-  LOG.debug("writing chunk:{} chunk stage:{} chunk file:{} tmp chunk file",
-  info.getChunkName(), stage, chunkFile, tmpChunkFile);
-  switch (stage) {
-  case WRITE_DATA:
-ChunkUtils.writeData(tmpChunkFile, info, data);
-break;
-  case COMMIT_DATA:
-commitChunk(tmpChunkFile, chunkFile, containerName, info.getLen());
-break;
-  case COMBINED:
-// directly write to the chunk file
-long oldSize = chunkFile.length();
-ChunkUtils.writeData(chunkFile, info, data);
-long newSize = chunkFile.length();
-containerManager.incrBytesUsed(containerName, newSize - oldSize);
-   

[33/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/framework/src/main/resources/webapps/static/dfs-dust.js
--
diff --git 
a/hadoop-hdds/framework/src/main/resources/webapps/static/dfs-dust.js 
b/hadoop-hdds/framework/src/main/resources/webapps/static/dfs-dust.js
new file mode 100644
index 000..c7af6a1
--- /dev/null
+++ b/hadoop-hdds/framework/src/main/resources/webapps/static/dfs-dust.js
@@ -0,0 +1,133 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+(function ($, dust, exports) {
+  "use strict";
+
+  var filters = {
+'fmt_bytes': function (v) {
+  var UNITS = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'ZB'];
+  var prev = 0, i = 0;
+  while (Math.floor(v) > 0 && i < UNITS.length) {
+prev = v;
+v /= 1024;
+i += 1;
+  }
+
+  if (i > 0 && i < UNITS.length) {
+v = prev;
+i -= 1;
+  }
+  return Math.round(v * 100) / 100 + ' ' + UNITS[i];
+},
+
+'fmt_percentage': function (v) {
+  return Math.round(v * 100) / 100 + '%';
+},
+'elapsed': function(v) {
+  //elapsed sec from epoch sec
+  return Date.now() - v * 1000;
+},
+'fmt_time': function (v) {
+  var s = Math.floor(v / 1000), h = Math.floor(s / 3600);
+  s -= h * 3600;
+  var m = Math.floor(s / 60);
+  s -= m * 60;
+
+  var res = s + " sec";
+  if (m !== 0) {
+res = m + " mins, " + res;
+  }
+
+  if (h !== 0) {
+res = h + " hrs, " + res;
+  }
+
+  return res;
+},
+
+'date_tostring' : function (v) {
+  return moment(Number(v)).format('ddd MMM DD HH:mm:ss ZZ ');
+},
+
+'format_compile_info' : function (v) {
+  var info = v.split(" by ")
+  var date = moment(info[0]).format('ddd MMM DD HH:mm:ss ZZ ');
+  return date.concat(" by ").concat(info[1]);
+ },
+
+'helper_to_permission': function (v) {
+  var symbols = [ '---', '--x', '-w-', '-wx', 'r--', 'r-x', 'rw-', 'rwx' ];
+  var vInt = parseInt(v, 8);
+  var sticky = (vInt & (1 << 9)) != 0;
+
+  var res = "";
+  for (var i = 0; i < 3; ++i) {
+res = symbols[(v % 10)] + res;
+v = Math.floor(v / 10);
+  }
+
+  if (sticky) {
+var otherExec = (vInt & 1) == 1;
+res = res.substr(0, res.length - 1) + (otherExec ? 't' : 'T');
+  }
+
+  return res;
+},
+
+'helper_to_directory' : function (v) {
+  return v === 'DIRECTORY' ? 'd' : '-';
+},
+
+'helper_to_acl_bit': function (v) {
+  return v ? '+' : "";
+},
+
+'fmt_number': function (v) {
+  return v.toLocaleString();
+}
+  };
+  $.extend(dust.filters, filters);
+
+  /**
+   * Load a sequence of JSON.
+   *
+   * beans is an array of tuples in the format of {url, name}.
+   */
+  function load_json(beans, success_cb, error_cb) {
+var data = {}, error = false, to_be_completed = beans.length;
+
+$.each(beans, function(idx, b) {
+  if (error) {
+return false;
+  }
+  $.get(b.url, function (resp) {
+data[b.name] = resp;
+to_be_completed -= 1;
+if (to_be_completed === 0) {
+  success_cb(data);
+}
+  }).error(function (jqxhr, text, err) {
+error = true;
+error_cb(b.url, jqxhr, text, err);
+  });
+});
+  }
+
+  exports.load_json = load_json;
+
+}($, dust, window));

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.css
--
diff --git 
a/hadoop-hdds/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.css 
b/hadoop-hdds/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.css
new file mode 100644
index 000..b8a5c0f
--- /dev/null
+++ b/hadoop-hdds/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.css
@@ -0,0 +1,2 @@
+.nvd3 .nv-axis line,.nvd3 .nv-axis 
path{fill:none;shape-rendering:crispEdges}.nv-brush .extent,.nvd3 .background 
path,.nvd3 .nv-axis line,.nvd3 .nv-axis 
path{shape-rendering:crispEdges}.nv-distx,.nv-disty,.nv-noninteractive,.nvd3 
.nv-axis,.nvd3.

[26/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/replication/package-info.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/replication/package-info.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/replication/package-info.java
new file mode 100644
index 000..7bbe2ef
--- /dev/null
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/replication/package-info.java
@@ -0,0 +1,23 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.hdds.scm.container.replication;
+/*
+ This package contains routines that manage replication of a container. It
+ relies on container reports to understand the replication level of a
+ container (UnderReplicated, Replicated, OverReplicated) and manages the
+ replication level based on that.
+ */
\ No newline at end of file
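
A tiny illustration (not from the patch) of the classification this package
performs, assuming the only inputs are the observed replica count and the
expected replication factor.

enum ReplicationLevel { UNDER_REPLICATED, PROPERLY_REPLICATED, OVER_REPLICATED }

final class ReplicationLevelSketch {
  /** Compare what the container reports show against what the factor demands. */
  static ReplicationLevel classify(int observedReplicas, int expectedFactor) {
    if (observedReplicas < expectedFactor) {
      return ReplicationLevel.UNDER_REPLICATED;
    }
    return observedReplicas == expectedFactor
        ? ReplicationLevel.PROPERLY_REPLICATED
        : ReplicationLevel.OVER_REPLICATED;
  }
}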

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerAttribute.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerAttribute.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerAttribute.java
new file mode 100644
index 000..288fa2d
--- /dev/null
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerAttribute.java
@@ -0,0 +1,245 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ *
+ */
+package org.apache.hadoop.hdds.scm.container.states;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.TreeSet;
+
+import static org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes
+.FAILED_TO_CHANGE_CONTAINER_STATE;
+
+/**
+ * Each Attribute that we manage for a container is maintained as a map.
+ * 
+ * Currently we manage the following attributes for a container.
+ * 
+ * 1. StateMap - LifeCycleState -> Set of ContainerIDs
+ * 2. TypeMap  - ReplicationType -> Set of ContainerIDs
+ * 3. OwnerMap - OwnerNames -> Set of ContainerIDs
+ * 4. FactorMap - ReplicationFactor -> Set of ContainerIDs
+ * 
+ * This means that for a cluster size of 750 PB we will have around 150
+ * million containers, if we assume a 5 GB average container size.
+ *
+ * That implies that these maps will take around 2/3 GB of RAM, which will be
+ * pinned down in the SCM. This is deemed acceptable since we can tune the
+ * container size: say we make it 10 GB average size, then we can deal with a
+ * cluster size of 1.5 exabytes with the same metadata in SCM's memory.
+ *
+ * Please note: **This class is not thread safe**. It used to be thread safe,
+ * but while benchmarking we found that ContainerStateMap would be taking 5
+ * locks for a single container insert. If we remove the locks in this class,
+ * then we are able to perform about 540K operations per second, with the
+ * locks in 
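
A stripped-down sketch of the attribute-map pattern described above. It uses a
plain long in place of ContainerID and only shows the shape of the structure,
not the real ContainerAttribute implementation.

import java.util.Map;
import java.util.NavigableSet;
import java.util.TreeMap;
import java.util.TreeSet;

final class AttributeMapSketch<K extends Comparable<K>> {
  // One attribute value (e.g. a LifeCycleState) maps to the set of container
  // IDs that currently carry that value.
  private final Map<K, NavigableSet<Long>> values = new TreeMap<>();

  /** Record that the given container currently has this attribute value. */
  void insert(K key, long containerId) {
    values.computeIfAbsent(key, k -> new TreeSet<>()).add(containerId);
  }

  /** Move a container from one attribute value to another, e.g. on state change. */
  void update(K oldKey, K newKey, long containerId) {
    NavigableSet<Long> old = values.get(oldKey);
    if (old != null) {
      old.remove(containerId);
    }
    insert(newKey, containerId);
  }

  /** Read-only style accessor for callers that iterate containers per value. */
  NavigableSet<Long> getCollection(K key) {
    return values.getOrDefault(key, new TreeSet<>());
  }
}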

[18/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/protocolPB/package-info.java
--
diff --git 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/protocolPB/package-info.java
 
b/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/protocolPB/package-info.java
deleted file mode 100644
index 860386d..000
--- 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/protocolPB/package-info.java
+++ /dev/null
@@ -1,24 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.ozone.protocolPB;
-
-/**
- * This package contains classes for the Protocol Buffers binding of Ozone
- * protocols.
- */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/web/utils/JsonUtils.java
--
diff --git 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/web/utils/JsonUtils.java
 
b/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/web/utils/JsonUtils.java
deleted file mode 100644
index 909873f..000
--- 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/web/utils/JsonUtils.java
+++ /dev/null
@@ -1,71 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.ozone.web.utils;
-
-import java.io.IOException;
-import java.util.List;
-
-import com.fasterxml.jackson.databind.ObjectMapper;
-import com.fasterxml.jackson.databind.ObjectReader;
-import com.fasterxml.jackson.databind.ObjectWriter;
-import com.fasterxml.jackson.databind.type.CollectionType;
-
-/**
- * JSON Utility functions used in ozone.
- */
-public final class JsonUtils {
-
-  // Reuse ObjectMapper instance for improving performance.
-  // ObjectMapper is thread safe as long as we always configure instance
-  // before use.
-  private static final ObjectMapper MAPPER = new ObjectMapper();
-  private static final ObjectReader READER = MAPPER.readerFor(Object.class);
-  private static final ObjectWriter WRITTER =
-  MAPPER.writerWithDefaultPrettyPrinter();
-
-  private JsonUtils() {
-// Never constructed
-  }
-
-  public static String toJsonStringWithDefaultPrettyPrinter(String jsonString)
-  throws IOException {
-Object json = READER.readValue(jsonString);
-return WRITTER.writeValueAsString(json);
-  }
-
-  public static String toJsonString(Object obj) throws IOException {
-return MAPPER.writeValueAsString(obj);
-  }
-
-  /**
-   * Deserialize a list of elements from a given string,
-   * each element in the list is in the given type.
-   *
-   * @param str json string.
-   * @param elementType element type.
-   * @return List of elements of type elementType
-   * @throws IOException
-   */
-  public static List<?> toJsonList(String str, Class<?> elementType)
-  throws IOException {
-CollectionType type = MAPPER.getTypeFactory()
-.constructCollectionType(List.class, elementType);
-return MAPPER.readValue(str, type);
-  }
-}
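
For context, a brief sketch of how the helpers above are typically called. The VolumeInfo POJO is a made-up example class, and the snippet assumes the JsonUtils class shown in this diff is available on the classpath.

// Hypothetical caller of the JsonUtils helpers; VolumeInfo is an example POJO.
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.ozone.web.utils.JsonUtils;

public class JsonUtilsExample {

  public static class VolumeInfo {
    public String name;
    public long quotaInBytes;
  }

  public static void main(String[] args) throws IOException {
    VolumeInfo volume = new VolumeInfo();
    volume.name = "vol1";
    volume.quotaInBytes = 1024L * 1024 * 1024;

    // Object -> JSON string.
    String json = JsonUtils.toJsonString(volume);

    // Re-indent an existing JSON string with the shared pretty printer.
    String pretty = JsonUtils.toJsonStringWithDefaultPrettyPrinter(json);

    // JSON array -> list of typed elements.
    List<?> volumes = JsonUtils.toJsonList("[" + json + "]", VolumeInfo.class);

    System.out.println(pretty);
    System.out.println("parsed " + volumes.size() + " volume entries");
  }
}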

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/ScmConfigKeys.java
--
diff --git 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/ScmConfigKeys.java 
b/hadoop-hdsl/common/src/main/java

[21/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/package-info.java
--
diff --git 
a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/package-info.java
 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/package-info.java
new file mode 100644
index 000..0630df2
--- /dev/null
+++ 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/package-info.java
@@ -0,0 +1,19 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.cli.container;
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/package-info.java
--
diff --git 
a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/package-info.java
 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/package-info.java
new file mode 100644
index 000..4762d55
--- /dev/null
+++ 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/package-info.java
@@ -0,0 +1,19 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.cli;
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/client/pom.xml
--
diff --git a/hadoop-hdsl/client/pom.xml b/hadoop-hdsl/client/pom.xml
deleted file mode 100644
index 1f1eaf0..000
--- a/hadoop-hdsl/client/pom.xml
+++ /dev/null
@@ -1,49 +0,0 @@
-
-
-http://maven.apache.org/POM/4.0.0";
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
-http://maven.apache.org/xsd/maven-4.0.0.xsd";>
-  4.0.0
-  
-org.apache.hadoop
-hadoop-hdsl
-3.2.0-SNAPSHOT
-  
-  hadoop-hdsl-client
-  3.2.0-SNAPSHOT
-  Apache Hadoop HDSL Client libraries
-  Apache Hadoop HDSL Client
-  jar
-
-  
-hdsl
-true
-  
-
-  
-
-  org.apache.hadoop
-  hadoop-hdsl-common
-  provided
-
-
-
-  io.netty
-  netty-all
-
-
-  
-
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientUtils.java
--
diff --git 
a/hadoop-hdsl/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientUtils.java
 
b/hadoop-hdsl/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientUtils.java
deleted file mode 100644
index c77f965..000
--- 
a/hadoop-hdsl/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientUtils.java
+++ /dev/null
@@ -1,231 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by appli

[23/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/states/TestContainerAttribute.java
--
diff --git 
a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/states/TestContainerAttribute.java
 
b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/states/TestContainerAttribute.java
new file mode 100644
index 000..63cc9bf
--- /dev/null
+++ 
b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/states/TestContainerAttribute.java
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ *
+ */
+
+package org.apache.hadoop.hdds.scm.container.states;
+
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ * Test ContainerAttribute management.
+ */
+public class TestContainerAttribute {
+
+  @Rule
+  public ExpectedException thrown = ExpectedException.none();
+
+  @Test
+  public void testInsert() throws SCMException {
+ContainerAttribute<Integer> containerAttribute = new ContainerAttribute<>();
+ContainerID id = new ContainerID(42);
+containerAttribute.insert(1, id);
+Assert.assertEquals(1,
+containerAttribute.getCollection(1).size());
+Assert.assertTrue(containerAttribute.getCollection(1).contains(id));
+
+// Insert again and verify that it overwrites an existing value.
+ContainerID newId =
+new ContainerID(42);
+containerAttribute.insert(1, newId);
+Assert.assertEquals(1,
+containerAttribute.getCollection(1).size());
+Assert.assertTrue(containerAttribute.getCollection(1).contains(newId));
+  }
+
+  @Test
+  public void testHasKey() throws SCMException {
+ContainerAttribute<Integer> containerAttribute = new ContainerAttribute<>();
+
+for (int x = 1; x < 42; x++) {
+  containerAttribute.insert(1, new ContainerID(x));
+}
+Assert.assertTrue(containerAttribute.hasKey(1));
+for (int x = 1; x < 42; x++) {
+  Assert.assertTrue(containerAttribute.hasContainerID(1, x));
+}
+
+Assert.assertFalse(containerAttribute.hasContainerID(1,
+new ContainerID(42)));
+  }
+
+  @Test
+  public void testClearSet() throws SCMException {
+List<String> keyslist = Arrays.asList("Key1", "Key2", "Key3");
+ContainerAttribute<String> containerAttribute = new ContainerAttribute<>();
+for (String k : keyslist) {
+  for (int x = 1; x < 101; x++) {
+containerAttribute.insert(k, new ContainerID(x));
+  }
+}
+for (String k : keyslist) {
+  Assert.assertEquals(100,
+  containerAttribute.getCollection(k).size());
+}
+containerAttribute.clearSet("Key1");
+Assert.assertEquals(0,
+containerAttribute.getCollection("Key1").size());
+  }
+
+  @Test
+  public void testRemove() throws SCMException {
+
+List<String> keyslist = Arrays.asList("Key1", "Key2", "Key3");
+ContainerAttribute<String> containerAttribute = new ContainerAttribute<>();
+
+for (String k : keyslist) {
+  for (int x = 1; x < 101; x++) {
+containerAttribute.insert(k, new ContainerID(x));
+  }
+}
+for (int x = 1; x < 101; x += 2) {
+  containerAttribute.remove("Key1", new ContainerID(x));
+}
+
+for (int x = 1; x < 101; x += 2) {
+  Assert.assertFalse(containerAttribute.hasContainerID("Key1",
+  new ContainerID(x)));
+}
+
+Assert.assertEquals(100,
+containerAttribute.getCollection("Key2").size());
+
+Assert.assertEquals(100,
+containerAttribute.getCollection("Key3").size());
+
+Assert.assertEquals(50,
+containerAttribute.getCollection("Key1").size());
+  }
+
+  @Test
+  public void tesUpdate() throws SCMException {
+String key1 = "Key1";
+String key2 = "Key2";
+String key3 = "Key3";
+
+ContainerAttribute<String> containerAttribute = new ContainerAttribute<>();
+ContainerID id = new ContainerID(42);
+
+containerAttribute.insert(key1, id);
+ 

[16/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/common/src/main/java/org/apache/hadoop/utils/MetadataStoreBuilder.java
--
diff --git 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/utils/MetadataStoreBuilder.java
 
b/hadoop-hdsl/common/src/main/java/org/apache/hadoop/utils/MetadataStoreBuilder.java
deleted file mode 100644
index 095e718..000
--- 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/utils/MetadataStoreBuilder.java
+++ /dev/null
@@ -1,125 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.utils;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.ozone.OzoneConfigKeys;
-import static org.apache.hadoop.ozone.OzoneConfigKeys
-.OZONE_METADATA_STORE_ROCKSDB_STATISTICS;
-import static org.apache.hadoop.ozone.OzoneConfigKeys
-.OZONE_METADATA_STORE_ROCKSDB_STATISTICS_DEFAULT;
-import static org.apache.hadoop.ozone.OzoneConfigKeys
-.OZONE_METADATA_STORE_ROCKSDB_STATISTICS_OFF;
-import org.iq80.leveldb.Options;
-import org.rocksdb.BlockBasedTableConfig;
-
-import java.io.File;
-import java.io.IOException;
-
-import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_METADATA_STORE_IMPL_LEVELDB;
-import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_METADATA_STORE_IMPL_ROCKSDB;
-
-import org.rocksdb.Statistics;
-import org.rocksdb.StatsLevel;
-
-/**
- * Builder for metadata store.
- */
-public class MetadataStoreBuilder {
-
-  private File dbFile;
-  private long cacheSize;
-  private boolean createIfMissing = true;
-  private Configuration conf;
-
-  public static MetadataStoreBuilder newBuilder() {
-return new MetadataStoreBuilder();
-  }
-
-  public MetadataStoreBuilder setDbFile(File dbPath) {
-this.dbFile = dbPath;
-return this;
-  }
-
-  public MetadataStoreBuilder setCacheSize(long cache) {
-this.cacheSize = cache;
-return this;
-  }
-
-  public MetadataStoreBuilder setCreateIfMissing(boolean doCreate) {
-this.createIfMissing = doCreate;
-return this;
-  }
-
-  public MetadataStoreBuilder setConf(Configuration configuration) {
-this.conf = configuration;
-return this;
-  }
-
-  public MetadataStore build() throws IOException {
-if (dbFile == null) {
-  throw new IllegalArgumentException("Failed to build metadata store, "
-  + "dbFile is required but not found");
-}
-
-// Build db store based on configuration
-MetadataStore store = null;
-String impl = conf == null ?
-OzoneConfigKeys.OZONE_METADATA_STORE_IMPL_DEFAULT :
-conf.getTrimmed(OzoneConfigKeys.OZONE_METADATA_STORE_IMPL,
-OzoneConfigKeys.OZONE_METADATA_STORE_IMPL_DEFAULT);
-if (OZONE_METADATA_STORE_IMPL_LEVELDB.equals(impl)) {
-  Options options = new Options();
-  options.createIfMissing(createIfMissing);
-  if (cacheSize > 0) {
-options.cacheSize(cacheSize);
-  }
-  store = new LevelDBStore(dbFile, options);
-} else if (OZONE_METADATA_STORE_IMPL_ROCKSDB.equals(impl)) {
-  org.rocksdb.Options opts = new org.rocksdb.Options();
-  opts.setCreateIfMissing(createIfMissing);
-
-  if (cacheSize > 0) {
-BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
-tableConfig.setBlockCacheSize(cacheSize);
-opts.setTableFormatConfig(tableConfig);
-  }
-
-  String rocksDbStat = conf == null ?
-  OZONE_METADATA_STORE_ROCKSDB_STATISTICS_DEFAULT :
-  conf.getTrimmed(OZONE_METADATA_STORE_ROCKSDB_STATISTICS,
-  OZONE_METADATA_STORE_ROCKSDB_STATISTICS_DEFAULT);
-
-  if (!rocksDbStat.equals(OZONE_METADATA_STORE_ROCKSDB_STATISTICS_OFF)) {
-Statistics statistics = new Statistics();
-statistics.setStatsLevel(StatsLevel.valueOf(rocksDbStat));
-opts = opts.setStatistics(statistics);
-
-  }
-  store = new RocksDBStore(dbFile, opts);
-} else {
-  throw new IllegalArgumentException("Invalid argument for "
-  + OzoneConfigKeys.OZONE_METADATA_STORE_IMPL
-  + ". Expecting " + OZONE_METADATA_STORE_IMPL_LEVELDB
-  + " or " + OZONE_METADATA_STORE_IMPL_RO

[28/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
new file mode 100644
index 000..0f4988a
--- /dev/null
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
@@ -0,0 +1,356 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.block;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.Lists;
+import com.google.common.primitives.Longs;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.utils.BatchOperation;
+import org.apache.hadoop.utils.MetadataKeyFilters.MetadataKeyFilter;
+import org.apache.hadoop.utils.MetadataStore;
+import org.apache.hadoop.utils.MetadataStoreBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys
+.OZONE_SCM_BLOCK_DELETION_MAX_RETRY;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys
+.OZONE_SCM_BLOCK_DELETION_MAX_RETRY_DEFAULT;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys
+.OZONE_SCM_DB_CACHE_SIZE_DEFAULT;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys
+.OZONE_SCM_DB_CACHE_SIZE_MB;
+import static org.apache.hadoop.hdds.server.ServerUtils.getOzoneMetaDirPath;
+import static org.apache.hadoop.ozone.OzoneConsts.DELETED_BLOCK_DB;
+
+/**
+ * An implementation of {@link DeletedBlockLog} that uses a
+ * K/V db to maintain block deletion transactions between SCM and datanodes.
+ * This is a very basic implementation: it simply scans the log, memorizes
+ * the position reached by the last scan, and uses this to determine where
+ * the next scan starts. It has no notion of the weight of each transaction,
+ * so as long as a transaction is still valid it gets an equal chance of
+ * being retrieved, which depends only on the natural order of the
+ * transaction ID.
+ */
+public class DeletedBlockLogImpl implements DeletedBlockLog {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DeletedBlockLogImpl.class);
+
+  private static final byte[] LATEST_TXID =
+  DFSUtil.string2Bytes("#LATEST_TXID#");
+
+  private final int maxRetry;
+  private final MetadataStore deletedStore;
+  private final Lock lock;
+  // The latest id of deleted blocks in the db.
+  private long lastTxID;
+  private long lastReadTxID;
+
+  public DeletedBlockLogImpl(Configuration conf) throws IOException {
+maxRetry = conf.getInt(OZONE_SCM_BLOCK_DELETION_MAX_RETRY,
+OZONE_SCM_BLOCK_DELETION_MAX_RETRY_DEFAULT);
+
+File metaDir = getOzoneMetaDirPath(conf);
+String scmMetaDataDir = metaDir.getPath();
+File deletedLogDbPath = new File(scmMetaDataDir, DELETED_BLOCK_DB);
+int cacheSize = conf.getInt(OZONE_SCM_DB_CACHE_SIZE_MB,
+OZONE_SCM_DB_CACHE_SIZE_DEFAULT);
+// Load store of all transactions.
+deletedStore = MetadataStoreBuilder.newBuilder()
+.setCreateIfMissing(true)
+.setConf(conf)
+.setDbFile(deletedLogDbPath)
+.setCacheSize(cacheSize * OzoneConsts.MB)
+.build();
+
+this.lock = new ReentrantLock();
+// start from the head of deleted store.
+lastReadTxID = 0;
+lastTxID = findLatestTxIDInStore();
+  }
+
+  @VisibleForTesting
+  MetadataStore getDeletedStore() {
+return deletedStore;
+  }
+
+  /**
+   * There is no need to l
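
The message is truncated above, but the scan-from-last-position behaviour described in the class javadoc can be illustrated with a small standalone sketch. This is not the DeletedBlockLogImpl code; the in-memory TreeMap and String payloads are simplifications of the K/V store and transactions it actually uses.

// Standalone illustration of "remember where the last scan ended":
// transactions are handed out in natural txID order, resuming after the
// position reached by the previous call.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

class DeletionLogSketch {
  // txID -> serialized transaction; TreeMap keeps natural txID order.
  private final TreeMap<Long, String> log = new TreeMap<>();
  // Position reached by the previous scan; 0 means "start from the head".
  private long lastReadTxID = 0;

  void add(long txID, String payload) {
    log.put(txID, payload);
  }

  List<String> getTransactions(int count) {
    List<String> result = new ArrayList<>();
    for (Map.Entry<Long, String> e : log.tailMap(lastReadTxID + 1).entrySet()) {
      if (result.size() >= count) {
        break;
      }
      result.add(e.getValue());
      lastReadTxID = e.getKey();
    }
    return result;
  }
}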

[11/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
--
diff --git 
a/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
 
b/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
deleted file mode 100644
index 91fa9c3..000
--- 
a/hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
+++ /dev/null
@@ -1,384 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership.  The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations 
under
- * the License.
- */
-package org.apache.hadoop.ozone.container.common.statemachine;
-
-import com.google.common.annotations.VisibleForTesting;
-import com.google.common.util.concurrent.ThreadFactoryBuilder;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdsl.conf.OzoneConfiguration;
-import org.apache.hadoop.hdsl.protocol.DatanodeDetails;
-import org.apache.hadoop.ozone.container.common.statemachine.commandhandler
-.CloseContainerHandler;
-import 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CommandDispatcher;
-import 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.ContainerReportHandler;
-import 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.DeleteBlocksCommandHandler;
-import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
-import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
-import org.apache.hadoop.util.Time;
-import org.apache.hadoop.util.concurrent.HadoopExecutors;
-
-import static 
org.apache.hadoop.ozone.scm.HdslServerUtil.getScmHeartbeatInterval;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.Closeable;
-import java.io.IOException;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicLong;
-
-/**
- * State Machine Class.
- */
-public class DatanodeStateMachine implements Closeable {
-  @VisibleForTesting
-  static final Logger LOG =
-  LoggerFactory.getLogger(DatanodeStateMachine.class);
-  private final ExecutorService executorService;
-  private final Configuration conf;
-  private final SCMConnectionManager connectionManager;
-  private final long heartbeatFrequency;
-  private StateContext context;
-  private final OzoneContainer container;
-  private DatanodeDetails datanodeDetails;
-  private final CommandDispatcher commandDispatcher;
-  private long commandsHandled;
-  private AtomicLong nextHB;
-  private Thread stateMachineThread = null;
-  private Thread cmdProcessThread = null;
-
-  /**
-   * Constructs a a datanode state machine.
-   *
-   * @param datanodeDetails - DatanodeDetails used to identify a datanode
-   * @param conf - Configuration.
-   */
-  public DatanodeStateMachine(DatanodeDetails datanodeDetails,
-  Configuration conf) throws IOException {
-this.conf = conf;
-this.datanodeDetails = datanodeDetails;
-executorService = HadoopExecutors.newCachedThreadPool(
-new ThreadFactoryBuilder().setDaemon(true)
-.setNameFormat("Datanode State Machine Thread - %d").build());
-connectionManager = new SCMConnectionManager(conf);
-context = new StateContext(this.conf, DatanodeStates.getInitState(), this);
-heartbeatFrequency = TimeUnit.SECONDS.toMillis(
-getScmHeartbeatInterval(conf));
-container = new OzoneContainer(this.datanodeDetails,
-new OzoneConfiguration(conf));
-nextHB = new AtomicLong(Time.monotonicNow());
-
- // When we add new handlers just adding a new handler here should do the
- // trick.
-commandDispatcher = CommandDispatcher.newBuilder()
-.addHandler(new ContainerReportHandler())
-.addHandler(new CloseContainerHandler())
-.addHandler(new DeleteBlocksCommandHandler(
-container.getContainerManager(), conf))
-.setConnectionManager(connectionManager)
-.setContainer(container)
-.setContext(context)
-  

[81/83] [abbrv] hadoop git commit: HDFS-13422. Ozone: Fix whitespaces and license issues in HDFS-7240 branch. Contributed by Lokesh Jain.

2018-04-24 Thread xyao
HDFS-13422. Ozone: Fix whitespaces and license issues in HDFS-7240 branch. 
Contributed by Lokesh Jain.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/94cb164d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/94cb164d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/94cb164d

Branch: refs/heads/trunk
Commit: 94cb164dece9d63bc2ac0f62c3bd5f3713d65235
Parents: eea3128
Author: Xiaoyu Yao 
Authored: Tue Apr 17 10:26:08 2018 -0700
Committer: Xiaoyu Yao 
Committed: Tue Apr 17 10:26:08 2018 -0700

--
 .../org/apache/hadoop/hdds/HddsConfigKeys.java   | 17 +
 hadoop-hdds/framework/README.md  |  4 ++--
 hadoop-hdds/framework/pom.xml| 19 ---
 hadoop-hdds/pom.xml  | 19 +++
 hadoop-hdds/tools/pom.xml|  2 +-
 hadoop-ozone/acceptance-test/README.md   |  6 +++---
 hadoop-ozone/acceptance-test/pom.xml |  4 ++--
 hadoop-ozone/pom.xml |  2 ++
 .../genesis/BenchMarkDatanodeDispatcher.java | 17 +
 .../genesis/BenchMarkMetadataStoreReads.java | 17 +
 .../genesis/BenchMarkMetadataStoreWrites.java| 17 +
 .../apache/hadoop/ozone/genesis/GenesisUtil.java | 17 +
 pom.xml  |  3 ++-
 13 files changed, 116 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/94cb164d/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
index 040f080..dec2c1c 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.hadoop.hdds;
 
 public final class HddsConfigKeys {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/94cb164d/hadoop-hdds/framework/README.md
--
diff --git a/hadoop-hdds/framework/README.md b/hadoop-hdds/framework/README.md
index 59cdac7..0eda3f5 100644
--- a/hadoop-hdds/framework/README.md
+++ b/hadoop-hdds/framework/README.md
@@ -17,8 +17,8 @@
 
 # Server framework for HDDS/Ozone
 
-This project contains generic utilities and resources for all the HDDS/Ozone 
+This project contains generic utilities and resources for all the HDDS/Ozone
 server-side components.
 
-The project is shared between the server/service projects but not with the 
+The project is shared between the server/service projects but not with the
 client packages.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/94cb164d/hadoop-hdds/framework/pom.xml
--
diff --git a/hadoop-hdds/framework/pom.xml b/hadoop-hdds/framework/pom.xml
index 959bf4d..c8d0797 100644
--- a/hadoop-hdds/framework/pom.xml
+++ b/hadoop-hdds/framework/pom.xml
@@ -44,25 +44,6 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd";>
   
 
   
-org.apache.rat
-apache-rat-plugin
-
-  
-.gitattributes
-.idea/**
-src/main/conf/*
-src/main/webapps/static/angular-1.6.4.min.js
-
src/main/webapps/static/angular-nvd3-1.0.9.min.js
-
src/main/webapps/static/angular-route-1.6.4.min.js
-src/main/webapps/static/d3-3.5.17.min.js
-src/main/webapps/static/nvd3-1.8.5.min.css
-src/main/webapps/static/nvd3-1.8.5.min.css.map
-src/main/webapps/static/nvd3-1.8.5.min.js
-src/main/webapps/static/nvd3-1.8.5.min.js.map
-  
-
- 

[68/83] [abbrv] hadoop git commit: HDFS-13413. Ozone: ClusterId and DatanodeUuid should be marked mandatory fields in SCMRegisteredCmdResponseProto. Contributed by Shashikant Banerjee.

2018-04-24 Thread xyao
HDFS-13413. Ozone: ClusterId and DatanodeUuid should be marked mandatory fields 
in SCMRegisteredCmdResponseProto. Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c36a850a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c36a850a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c36a850a

Branch: refs/heads/trunk
Commit: c36a850af5f554f210010e7fb8039953de283746
Parents: dd43835
Author: Nanda kumar 
Authored: Fri Apr 13 00:47:38 2018 +0530
Committer: Nanda kumar 
Committed: Fri Apr 13 00:47:38 2018 +0530

--
 .../common/states/endpoint/RegisterEndpointTask.java | 8 
 .../hadoop/ozone/protocol/commands/RegisteredCommand.java| 6 ++
 .../src/main/proto/StorageContainerDatanodeProtocol.proto| 4 ++--
 3 files changed, 12 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c36a850a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java
index de186a7..ca3bef0 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java
@@ -17,6 +17,8 @@
 package org.apache.hadoop.ozone.container.common.states.endpoint;
 
 import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
@@ -28,6 +30,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
+import java.util.UUID;
 import java.util.concurrent.Callable;
 import java.util.concurrent.Future;
 
@@ -98,6 +101,11 @@ public final class RegisterEndpointTask implements
   SCMRegisteredCmdResponseProto response = rpcEndPoint.getEndPoint()
   .register(datanodeDetails.getProtoBufMessage(),
   conf.getStrings(ScmConfigKeys.OZONE_SCM_NAMES));
+  Preconditions.checkState(UUID.fromString(response.getDatanodeUUID())
+  .equals(datanodeDetails.getUuid()),
+  "Unexpected datanode ID in the response.");
+  Preconditions.checkState(!StringUtils.isBlank(response.getClusterID()),
+  "Invalid cluster ID in the response.");
   if (response.hasHostname() && response.hasIpAddress()) {
 datanodeDetails.setHostName(response.getHostname());
 datanodeDetails.setIpAddress(response.getIpAddress());

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c36a850a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/RegisteredCommand.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/RegisteredCommand.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/RegisteredCommand.java
index a7e81d8..593b84b 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/RegisteredCommand.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/RegisteredCommand.java
@@ -162,13 +162,11 @@ public class RegisteredCommand extends
   Preconditions.checkNotNull(response);
   if (response.hasHostname() && response.hasIpAddress()) {
 return new RegisteredCommand(response.getErrorCode(),
-response.hasDatanodeUUID() ? response.getDatanodeUUID() : "",
-response.hasClusterID() ? response.getClusterID() : "",
+response.getDatanodeUUID(), response.getClusterID(),
 response.getHostname(), response.getIpAddress());
   } else {
 return new RegisteredCommand(response.getErrorCode(),
-response.hasDatanodeUUID() ? response.getDatanodeUUID() : "",
-response.hasClusterID() ? response.getClusterID() : "");
+response.getDatanodeUUID(), response.getClusterID());
   }
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c36a850a/hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
---

[47/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolPB.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolPB.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolPB.java
new file mode 100644
index 000..f234ad3
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolPB.java
@@ -0,0 +1,36 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.protocolPB;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerLocationProtocolProtos
+.StorageContainerLocationProtocolService;
+import org.apache.hadoop.ipc.ProtocolInfo;
+
+/**
+ * Protocol used from an HDFS node to StorageContainerManager.  This extends 
the
+ * Protocol Buffers service interface to add Hadoop-specific annotations.
+ */
+@ProtocolInfo(protocolName =
+"org.apache.hadoop.ozone.protocol.StorageContainerLocationProtocol",
+protocolVersion = 1)
+@InterfaceAudience.Private
+public interface StorageContainerLocationProtocolPB
+extends StorageContainerLocationProtocolService.BlockingInterface {
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/package-info.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/package-info.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/package-info.java
new file mode 100644
index 000..652ae60
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/package-info.java
@@ -0,0 +1,24 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.protocolPB;
+
+/**
+ * This package contains classes for the client of the storage container
+ * protocol.
+ */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
new file mode 100644
index 000..1559816
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
@@ -0,0 +1,396 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  

[69/83] [abbrv] hadoop git commit: HDFS-13423. Ozone: Clean-up of ozone related change from hadoop-hdfs-project. Contributed by Nanda Kumar.

2018-04-24 Thread xyao
HDFS-13423. Ozone: Clean-up of ozone related change from hadoop-hdfs-project. 
Contributed by Nanda Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/584c573a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/584c573a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/584c573a

Branch: refs/heads/trunk
Commit: 584c573a5604d49522c4b7766fc52f4d3eb92496
Parents: c36a850
Author: Mukul Kumar Singh 
Authored: Fri Apr 13 14:13:06 2018 +0530
Committer: Mukul Kumar Singh 
Committed: Fri Apr 13 14:13:06 2018 +0530

--
 .../java/org/apache/hadoop/hdds/HddsUtils.java  | 46 
 .../hadoop/ozone/HddsDatanodeService.java   |  3 +-
 .../hdfs/server/common/HdfsServerConstants.java |  3 +-
 .../hadoop/hdfs/server/common/StorageInfo.java  |  4 --
 .../hadoop/hdfs/server/datanode/DataNode.java   | 14 +-
 .../web/RestCsrfPreventionFilterHandler.java|  2 +-
 .../hadoop-hdfs/src/main/proto/HdfsServer.proto |  8 
 .../namenode/TestFavoredNodesEndToEnd.java  |  6 +--
 .../hadoop/ozone/MiniOzoneClassicCluster.java   | 19 +---
 .../hadoop/ozone/MiniOzoneTestHelper.java   | 22 --
 .../apache/hadoop/ozone/RatisTestHelper.java|  2 +-
 .../hadoop/ozone/TestMiniOzoneCluster.java  |  8 +---
 .../apache/hadoop/ozone/TestOzoneHelper.java|  3 +-
 .../TestStorageContainerManagerHelper.java  |  2 +-
 .../TestCloseContainerHandler.java  |  2 +-
 .../ozoneimpl/TestOzoneContainerRatis.java  |  3 +-
 .../container/ozoneimpl/TestRatisManager.java   |  3 +-
 .../ksm/TestKeySpaceManagerRestInterface.java   |  3 +-
 .../hadoop/ozone/ozShell/TestOzoneShell.java|  3 +-
 .../org/apache/hadoop/ozone/scm/TestSCMCli.java |  3 +-
 .../apache/hadoop/ozone/scm/TestSCMMetrics.java |  3 +-
 .../ozone/web/TestDistributedOzoneVolumes.java  |  4 +-
 .../hadoop/ozone/web/TestLocalOzoneVolumes.java |  4 +-
 .../hadoop/ozone/web/TestOzoneWebAccess.java|  3 +-
 .../hadoop/ozone/web/client/TestBuckets.java|  3 +-
 .../hadoop/ozone/web/client/TestKeys.java   |  4 +-
 .../ozone/web/client/TestOzoneClient.java   |  3 +-
 .../hadoop/ozone/web/client/TestVolume.java |  3 +-
 .../web/netty/ObjectStoreRestHttpServer.java| 45 +--
 29 files changed, 109 insertions(+), 122 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/584c573a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
index f00f503..a0b5c47 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
@@ -22,18 +22,26 @@ import com.google.common.base.Optional;
 import com.google.common.base.Strings;
 import com.google.common.net.HostAndPort;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.net.DNS;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.net.InetSocketAddress;
+import java.net.UnknownHostException;
 import java.nio.file.Paths;
 import java.util.Collection;
 import java.util.HashSet;
 
+import static org.apache.hadoop.hdfs.DFSConfigKeys
+.DFS_DATANODE_DNS_INTERFACE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys
+.DFS_DATANODE_DNS_NAMESERVER_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_HOST_NAME_KEY;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ENABLED;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ENABLED_DEFAULT;
 
@@ -269,4 +277,42 @@ public class HddsUtils {
 }
 return dataNodeIDPath;
   }
+
+  /**
+   * Returns the hostname for this datanode. If the hostname is not
+   * explicitly configured in the given config, then it is determined
+   * via the DNS class.
+   *
+   * @param conf Configuration
+   *
+   * @return the hostname (NB: may not be a FQDN)
+   * @throws UnknownHostException if the dfs.datanode.dns.interface
+   *option is used and the hostname can not be determined
+   */
+  public static String getHostName(Configuration conf)
+  throws UnknownHostException {
+String name = conf.get(DFS_DATANODE_HOST_NAME_KEY);
+if (name == null) {
+  String dnsInterface = conf.get(
+  CommonConfigurationKeys.HADOOP_SECURITY_DNS_INTERFACE_KEY);
+  String nameServer = conf.get(
+  C

[40/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
new file mode 100644
index 000..8e9482f
--- /dev/null
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
@@ -0,0 +1,387 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.container.common.statemachine;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.ozone.container.common.statemachine.commandhandler
+.CloseContainerHandler;
+import org.apache.hadoop.ozone.container.common.statemachine.commandhandler
+.CommandDispatcher;
+import org.apache.hadoop.ozone.container.common.statemachine.commandhandler
+.ContainerReportHandler;
+import org.apache.hadoop.ozone.container.common.statemachine.commandhandler
+.DeleteBlocksCommandHandler;
+import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
+import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.util.concurrent.HadoopExecutors;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicLong;
+
+import static 
org.apache.hadoop.hdds.scm.HddsServerUtil.getScmHeartbeatInterval;
+
+/**
+ * State Machine Class.
+ */
+public class DatanodeStateMachine implements Closeable {
+  @VisibleForTesting
+  static final Logger LOG =
+  LoggerFactory.getLogger(DatanodeStateMachine.class);
+  private final ExecutorService executorService;
+  private final Configuration conf;
+  private final SCMConnectionManager connectionManager;
+  private final long heartbeatFrequency;
+  private StateContext context;
+  private final OzoneContainer container;
+  private DatanodeDetails datanodeDetails;
+  private final CommandDispatcher commandDispatcher;
+  private long commandsHandled;
+  private AtomicLong nextHB;
+  private Thread stateMachineThread = null;
+  private Thread cmdProcessThread = null;
+
+  /**
+   * Constructs a datanode state machine.
+   *
+   * @param datanodeDetails - DatanodeDetails used to identify a datanode
+   * @param conf - Configuration.
+   */
+  public DatanodeStateMachine(DatanodeDetails datanodeDetails,
+  Configuration conf) throws IOException {
+this.conf = conf;
+this.datanodeDetails = datanodeDetails;
+executorService = HadoopExecutors.newCachedThreadPool(
+new ThreadFactoryBuilder().setDaemon(true)
+.setNameFormat("Datanode State Machine Thread - %d").build());
+connectionManager = new SCMConnectionManager(conf);
+context = new StateContext(this.conf, DatanodeStates.getInitState(), this);
+heartbeatFrequency = TimeUnit.SECONDS.toMillis(
+getScmHeartbeatInterval(conf));
+container = new OzoneContainer(this.datanodeDetails,
+new OzoneConfiguration(conf));
+nextHB = new AtomicLong(Time.monotonicNow());
+
+ // When we add new handlers just adding a new handler here should do the
+ // trick.
+commandDispatcher = CommandDispatcher.newBuilder()
+.addHandler(new ContainerReportHandler())
+.addHandler(new CloseContainerHandler())
+.addHandler(new DeleteBlocksCommandHandler(
+container.getContainerManager(), conf))
+.setConnectionManager(connectionManager)
+.setContainer(container)
+.setContext(contex

[27/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMCommonPolicy.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMCommonPolicy.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMCommonPolicy.java
new file mode 100644
index 000..0a595d5
--- /dev/null
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMCommonPolicy.java
@@ -0,0 +1,197 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Random;
+import java.util.stream.Collectors;
+
+/**
+ * SCMCommonPolicy implements a set of invariants which are common
+ * to all container placement policies, and acts as the repository of helper
+ * functions which are common to placement policies.
+ */
+public abstract class SCMCommonPolicy implements ContainerPlacementPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+  LoggerFactory.getLogger(SCMCommonPolicy.class);
+  private final NodeManager nodeManager;
+  private final Random rand;
+  private final Configuration conf;
+
+  /**
+   * Constructs SCM Common Policy Class.
+   *
+   * @param nodeManager NodeManager
+   * @param conf Configuration class.
+   */
+  public SCMCommonPolicy(NodeManager nodeManager, Configuration conf) {
+this.nodeManager = nodeManager;
+this.rand = new Random();
+this.conf = conf;
+  }
+
+  /**
+   * Return node manager.
+   *
+   * @return node manager
+   */
+  public NodeManager getNodeManager() {
+return nodeManager;
+  }
+
+  /**
+   * Returns the Random Object.
+   *
+   * @return rand
+   */
+  public Random getRand() {
+return rand;
+  }
+
+  /**
+   * Get Config.
+   *
+   * @return Configuration
+   */
+  public Configuration getConf() {
+return conf;
+  }
+
+  /**
+   * Given the replication factor and size required, return a set of datanodes
+   * that satisfies the node count and size requirements.
+   * 
+   * Here are some invariants of container placement.
+   * 
+   * 1. We place containers only on healthy nodes.
+   * 2. We place containers on nodes with enough space for that container.
+   * 3. If a set of containers is requested, we either meet the required
+   * number of nodes or we fail that request.
+   *
+   * @param nodesRequired - number of datanodes required.
+   * @param sizeRequired - size required for the container or block.
+   * @return list of datanodes chosen.
+   * @throws SCMException SCM exception.
+   */
+
+  public List<DatanodeDetails> chooseDatanodes(int nodesRequired, final long
+  sizeRequired) throws SCMException {
+List<DatanodeDetails> healthyNodes =
+nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+String msg;
+if (healthyNodes.size() == 0) {
+  msg = "No healthy node found to allocate container.";
+  LOG.error(msg);
+  throw new SCMException(msg, SCMException.ResultCodes
+  .FAILED_TO_FIND_HEALTHY_NODES);
+}
+
+if (healthyNodes.size() < nodesRequired) {
+  msg = String.format("Not enough healthy nodes to allocate container. %d "
+  + " datanodes required. Found %d",
+  nodesRequired, healthyNodes.size());
+  LOG.error(msg);
+  throw new SCMException(msg,
+  SCMException.ResultCodes.FAILED_TO_FIND_SUITABLE_NODE);
+}
+List<DatanodeDetails> healthyList = healthyNodes.stream().filter(d ->
+hasEnoughSpace(d, sizeRequired)).collect(Collectors.toList());
+
+
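
The following is a standalone illustration of the three placement invariants listed in the javadoc above (healthy nodes only, enough space, meet the required count or fail). The simplified Node type and the IllegalStateException are assumptions; this is not the SCMCommonPolicy code itself.

// Sketch of the placement invariants on a simplified Node type.
import java.util.List;
import java.util.stream.Collectors;

class PlacementSketch {

  static class Node {
    final boolean healthy;
    final long freeBytes;

    Node(boolean healthy, long freeBytes) {
      this.healthy = healthy;
      this.freeBytes = freeBytes;
    }
  }

  static List<Node> choose(List<Node> all, int nodesRequired, long sizeRequired) {
    // Invariant 1: only healthy nodes are candidates.
    // Invariant 2: only nodes with enough space for the container.
    List<Node> candidates = all.stream()
        .filter(n -> n.healthy && n.freeBytes >= sizeRequired)
        .collect(Collectors.toList());
    // Invariant 3: meet the required number of nodes or fail the request.
    if (candidates.size() < nodesRequired) {
      throw new IllegalStateException("Not enough suitable datanodes: required "
          + nodesRequired + ", found " + candidates.size());
    }
    return candidates.subList(0, nodesRequired);
  }
}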

[32/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js
--
diff --git 
a/hadoop-hdds/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js 
b/hadoop-hdds/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js
new file mode 100644
index 000..9cfd702
--- /dev/null
+++ b/hadoop-hdds/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js
@@ -0,0 +1,11 @@
+/* nvd3 version 1.8.5 (https://github.com/novus/nvd3) 2016-12-01 */
+
+!function(){var 
a={};a.dev=!1,a.tooltip=a.tooltip||{},a.utils=a.utils||{},a.models=a.models||{},a.charts={},a.logs={},a.dom={},"undefined"!=typeof
 module&&"undefined"!=typeof exports&&"undefined"==typeof 
d3&&(d3=require("d3")),a.dispatch=d3.dispatch("render_start","render_end"),Function.prototype.bind||(Function.prototype.bind=function(a){if("function"!=typeof
 this)throw new TypeError("Function.prototype.bind - what is trying to be bound 
is not callable");var 
b=Array.prototype.slice.call(arguments,1),c=this,d=function(){},e=function(){return
 c.apply(this instanceof 
d&&a?this:a,b.concat(Array.prototype.slice.call(arguments)))};return 
d.prototype=this.prototype,e.prototype=new 
d,e}),a.dev&&(a.dispatch.on("render_start",function(b){a.logs.startTime=+new 
Date}),a.dispatch.on("render_end",function(b){a.logs.endTime=+new 
Date,a.logs.totalTime=a.logs.endTime-a.logs.startTime,a.log("total",a.logs.totalTime)})),a.log=function(){if(a.dev&&window.console&&console.log&&console.log.apply)console
 .log.apply(console,arguments);else 
if(a.dev&&window.console&&"function"==typeof 
console.log&&Function.prototype.bind){var 
b=Function.prototype.bind.call(console.log,console);b.apply(console,arguments)}return
 
arguments[arguments.length-1]},a.deprecated=function(a,b){console&&console.warn&&console.warn("nvd3
 warning: `"+a+"` has been deprecated. 
",b||"")},a.render=function(b){b=b||1,a.render.active=!0,a.dispatch.render_start();var
 c=function(){for(var 
d,e,f=0;b>f&&(e=a.render.queue[f]);f++)d=e.generate(),typeof e.callback==typeof 
Function&&e.callback(d);a.render.queue.splice(0,f),a.render.queue.length?setTimeout(c):(a.dispatch.render_end(),a.render.active=!1)};setTimeout(c)},a.render.active=!1,a.render.queue=[],a.addGraph=function(b){typeof
 arguments[0]==typeof 
Function&&(b={generate:arguments[0],callback:arguments[1]}),a.render.queue.push(b),a.render.active||a.render()},"undefined"!=typeof
 module&&"undefined"!=typeof exports&&(module.exports=a),"undefined"!=typeof 
window&&(window.nv=
 a),a.dom.write=function(a){return void 
0!==window.fastdom?fastdom.mutate(a):a()},a.dom.read=function(a){return void 
0!==window.fastdom?fastdom.measure(a):a()},a.interactiveGuideline=function(){"use
 strict";function b(l){l.each(function(l){function m(){var 
a=d3.mouse(this),d=a[0],e=a[1],h=!0,i=!1;if(k&&(d=d3.event.offsetX,e=d3.event.offsetY,"svg"!==d3.event.target.tagName&&(h=!1),d3.event.target.className.baseVal.match("nv-legend")&&(i=!0)),h&&(d-=c.left,e-=c.top),"mouseout"===d3.event.type||0>d||0>e||d>o||e>p||d3.event.relatedTarget&&void
 
0===d3.event.relatedTarget.ownerSVGElement||i){if(k&&d3.event.relatedTarget&&void
 0===d3.event.relatedTarget.ownerSVGElement&&(void 
0===d3.event.relatedTarget.className||d3.event.relatedTarget.className.match(j.nvPointerEventsClass)))return;return
 g.elementMouseout({mouseX:d,mouseY:e}),b.renderGuideLine(null),void 
j.hidden(!0)}j.hidden(!1);var l="function"==typeof f.rangeBands,m=void 
0;if(l){var n=d3.bisect(f.range(),d)-1;if(!(f.range()[n]+f.rangeB
 and()>=d))return 
g.elementMouseout({mouseX:d,mouseY:e}),b.renderGuideLine(null),void 
j.hidden(!0);m=f.domain()[d3.bisect(f.range(),d)-1]}else 
m=f.invert(d);g.elementMousemove({mouseX:d,mouseY:e,pointXValue:m}),"dblclick"===d3.event.type&&g.elementDblclick({mouseX:d,mouseY:e,pointXValue:m}),"click"===d3.event.type&&g.elementClick({mouseX:d,mouseY:e,pointXValue:m}),"mousedown"===d3.event.type&&g.elementMouseDown({mouseX:d,mouseY:e,pointXValue:m}),"mouseup"===d3.event.type&&g.elementMouseUp({mouseX:d,mouseY:e,pointXValue:m})}var
 
n=d3.select(this),o=d||960,p=e||400,q=n.selectAll("g.nv-wrap.nv-interactiveLineLayer").data([l]),r=q.enter().append("g").attr("class","
 nv-wrap 
nv-interactiveLineLayer");r.append("g").attr("class","nv-interactiveGuideLine"),i&&(i.on("touchmove",m).on("mousemove",m,!0).on("mouseout",m,!0).on("mousedown",m,!0).on("mouseup",m,!0).on("dblclick",m).on("click",m),b.guideLine=null,b.renderGuideLine=function(c){h&&(b.guideLine&&b.guideLine.attr("x1")===c||a.dom.write(f
 unction(){var 
b=q.select(".nv-interactiveGuideLine").selectAll("line").data(null!=c?[a.utils.NaNtoZero(c)]:[],String);b.enter().append("line").attr("class","nv-guideline").attr("x1",function(a){return
 a}).attr("x2",function(a){return 
a}).attr("y1",p).attr("y2",0),b.exit().remove()}))})})}var 
c={left:0,top:0},d=null,e=null,f=d3.scale.linear(),g=d3.dispatch(

[24/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/server-scm/src/main/webapps/scm/scm-overview.html
--
diff --git a/hadoop-hdds/server-scm/src/main/webapps/scm/scm-overview.html 
b/hadoop-hdds/server-scm/src/main/webapps/scm/scm-overview.html
new file mode 100644
index 000..fca23ba
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/webapps/scm/scm-overview.html
@@ -0,0 +1,60 @@
+
+Node counts
+
+
+
+
+{{typestat.key}}
+{{typestat.value}}
+
+
+
+
+Status
+
+
+
+Client Rpc port
+{{$ctrl.overview.jmx.ClientRpcPort}}
+
+
+Datanode Rpc port
+{{$ctrl.overview.jmx.DatanodeRpcPort}}
+
+
+Block Manager: Open containers
+{{$ctrl.blockmanagermetrics.OpenContainersNo}}
+
+
+Node Manager: Minimum chill mode nodes
+{{$ctrl.nodemanagermetrics.MinimumChillModeNodes}}
+
+
+Node Manager: Out-of-node chill mode
+{{$ctrl.nodemanagermetrics.OutOfNodeChillMode}}
+
+
+Node Manager: Chill mode status
+{{$ctrl.nodemanagermetrics.ChillModeStatus}}
+
+
+Node Manager: Manual chill mode
+{{$ctrl.nodemanagermetrics.InManualChillMode}}
+
+
+
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/server-scm/src/main/webapps/scm/scm.js
--
diff --git a/hadoop-hdds/server-scm/src/main/webapps/scm/scm.js 
b/hadoop-hdds/server-scm/src/main/webapps/scm/scm.js
new file mode 100644
index 000..bcfa8b7
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/webapps/scm/scm.js
@@ -0,0 +1,54 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+(function () {
+"use strict";
+angular.module('scm', ['ozone', 'nvd3']);
+
+angular.module('scm').component('scmOverview', {
+templateUrl: 'scm-overview.html',
+require: {
+overview: "^overview"
+},
+controller: function ($http) {
+var ctrl = this;
+$http.get("jmx?qry=Hadoop:service=BlockManager,name=*")
+.then(function (result) {
+ctrl.blockmanagermetrics = result.data.beans[0];
+});
+
$http.get("jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo")
+.then(function (result) {
+ctrl.nodemanagermetrics = result.data.beans[0];
+});
+
+var statusSortOrder = {
+"HEALTHY": "a",
+"STALE": "b",
+"DEAD": "c",
+"UNKNOWN": "z",
+"DECOMMISSIONING": "x",
+"DECOMMISSIONED": "y"
+};
+ctrl.nodeOrder = function (v1, v2) {
+// a status with no defined sort order will sort as "undefined"
+return ("" + statusSortOrder[v1.value]).localeCompare("" + 
statusSortOrder[v2.value])
+}
+
+}
+});
+
+})();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/HddsServerUtilTest.java
--
diff --git 
a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/HddsServerUtilTest.java
 
b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/HddsServerUtilTest.java
new file mode 100644
index 000..6e01e53
--- /dev/null
+++ 
b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/HddsServerUtilTest.java
@@ -0,0 +1,308 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applic

[77/83] [abbrv] hadoop git commit: Merge branch 'trunk' into HDFS-7240

2018-04-24 Thread xyao
Merge branch 'trunk' into HDFS-7240


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3d9a9491
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3d9a9491
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3d9a9491

Branch: refs/heads/trunk
Commit: 3d9a949183bc31839e1447af37ad360192a22b8c
Parents: 06d228a ea95a33
Author: Xiaoyu Yao 
Authored: Mon Apr 16 10:44:54 2018 -0700
Committer: Xiaoyu Yao 
Committed: Mon Apr 16 10:44:54 2018 -0700

--
 hadoop-project-dist/pom.xml |   4 +-
 hadoop-project/pom.xml  | 158 +--
 .../hadoop/yarn/conf/YarnConfiguration.java |   2 +-
 .../src/main/resources/yarn-default.xml |   2 +-
 ...TestCapacitySchedulerSurgicalPreemption.java | 150 ++
 ...TestPerNodeTimelineCollectorsAuxService.java |   2 +
 .../src/site/markdown/TimelineServiceV2.md  |   2 +-
 pom.xml |  27 
 8 files changed, 263 insertions(+), 84 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d9a9491/hadoop-project/pom.xml
--
diff --cc hadoop-project/pom.xml
index 4595c28,02ea0ba..4e76f4b
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@@ -560,115 -558,8 +560,115 @@@
  

  org.apache.hadoop
 +hadoop-ozone
 +${project.version}
 +  
 +  
 +org.apache.hadoop
 +hadoop-ozone-objectstore-service
 +${project.version}
 +  
 +
 +  
 +org.apache.hadoop
 +hadoop-hdds-common
 +${project.version}
 +  
 +
 +  
 +org.apache.hadoop
 +hadoop-hdds-client
 +${project.version}
 +  
 +  
 +org.apache.hadoop
 +hadoop-hdds-tools
 +${project.version}
 +  
 +  
 +org.apache.hadoop
 +hadoop-ozone-tools
 +${project.version}
 +  
 +  
 +org.apache.hadoop
 +hadoop-ozone-integration-test
 +${project.version}
 +test-jar
 +  
 +
 +  
 +org.apache.hadoop
 +hadoop-hdds-server-framework
 +${project.version}
 +  
 +
 +  
 +org.apache.hadoop
 +hadoop-hdds-server-scm
 +${project.version}
 +  
 +
 +  
 +org.apache.hadoop
 +hadoop-hdds-container-service
 +${project.version}
 +  
 +  
 +org.apache.hadoop
 +hadoop-hdds-container-service
 +${project.version}
 +test-jar
 +  
 +
 +  
 +org.apache.hadoop
 +hadoop-hdds-server-scm
 +test-jar
 +${project.version}
 +  
 +
 +
 +  
 +org.apache.hadoop
 +hadoop-ozone-common
 +${project.version}
 +  
 +
 +  
 +org.apache.hadoop
 +hadoop-ozone-ozone-manager
 +${project.version}
 +  
 +  
 +org.apache.hadoop
 +hadoop-ozone-ozone-manager
 +${project.version}
 +test-jar
 +  
 +
 +  
 +org.apache.hadoop
 +hadoop-ozone-client
 +${project.version}
 +  
 +
 +  
 +org.openjdk.jmh
 +jmh-core
 +1.19
 +  
 +  
 +org.openjdk.jmh
 +jmh-generator-annprocess
 +1.19
 +  
 +
 +
 +  
 +org.apache.hadoop
  hadoop-kms
- ${project.version}
+ ${hadoop.version}


  org.apache.hadoop

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d9a9491/pom.xml
--





[45/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetadataStore.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetadataStore.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetadataStore.java
new file mode 100644
index 000..b90b08f
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetadataStore.java
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.utils;
+
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.utils.MetadataKeyFilters.MetadataKeyFilter;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Interface for key-value store that stores ozone metadata.
+ * Ozone metadata is stored as key value pairs, both key and value
+ * are arbitrary byte arrays.
+ */
+@InterfaceStability.Evolving
+public interface MetadataStore extends Closeable{
+
+  /**
+   * Puts a key-value pair into the store.
+   *
+   * @param key metadata key
+   * @param value metadata value
+   */
+  void put(byte[] key, byte[] value) throws IOException;
+
+  /**
+   * @return true if the metadata store is empty.
+   *
+   * @throws IOException
+   */
+  boolean isEmpty() throws IOException;
+
+  /**
+   * Returns the value mapped to the given key in byte array.
+   *
+   * @param key metadata key
+   * @return value in byte array
+   * @throws IOException
+   */
+  byte[] get(byte[] key) throws IOException;
+
+  /**
+   * Deletes a key from the metadata store.
+   *
+   * @param key metadata key
+   * @throws IOException
+   */
+  void delete(byte[] key) throws IOException;
+
+  /**
+   * Returns a certain range of key-value pairs as a list, based on a
+   * startKey or count. Further, a {@link MetadataKeyFilter} can be added to
+   * filter keys if necessary. To prevent race conditions while listing
+   * entries, this implementation takes a snapshot and lists the entries from
+   * the snapshot. This may, on the other hand, cause the returned range to
+   * differ slightly from the actual data if the data is updated concurrently.
+   * 
+   * If the startKey is specified and found in levelDB, this key and the keys
+   * after this key will be included in the result. If the startKey is null,
+   * all entries will be included as long as other conditions are satisfied.
+   * If the given startKey doesn't exist, an empty list will be returned.
+   * 
+   * The count argument is to limit number of total entries to return,
+   * the value for count must be an integer greater than 0.
+   * 
+   * This method allows specifying one or more {@link MetadataKeyFilter}s
+   * to filter keys by certain conditions. Once given, only the entries
+   * whose keys pass all the filters will be included in the result.
+   *
+   * @param startKey a start key.
+   * @param count max number of entries to return.
+   * @param filters customized one or more {@link MetadataKeyFilter}.
+   * @return a list of entries found in the database or an empty list if the
+   * startKey is invalid.
+   * @throws IOException if there are I/O errors.
+   * @throws IllegalArgumentException if count is less than 0.
+   */
+  List<Map.Entry<byte[], byte[]>> getRangeKVs(byte[] startKey,
+  int count, MetadataKeyFilter... filters)
+  throws IOException, IllegalArgumentException;
+
+  /**
+   * This method is very similar to {@link #getRangeKVs}; the only
+   * difference is that this method returns a sequential range
+   * of elements based on the filters. While iterating the elements,
+   * if it meets any entry that does not pass the filter, the iterator stops
+   * at that point without looking for the next match. If no filter is given,
+   * this method behaves just like {@link #getRangeKVs}.
+   *
+   * @param startKey a start key.
+   * @param count max number of entries to return.
+   * @param filters customized one or more {@link MetadataKeyFilter}.
+   * @return a list of entries found in the database.
+   * @throws IOEx
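The interface fragment is truncated here. Below is a short, hedged usage sketch of getRangeKVs; the store construction mirrors TestRocksDBStoreMBean further down in this digest, and the conf and dbDir parameters are caller-supplied placeholders rather than anything defined by the interface itself.

// Hedged sketch only, not part of the MetadataStore contract.
static void listVolumeKeys(org.apache.hadoop.hdds.conf.OzoneConfiguration conf,
    java.io.File dbDir) throws java.io.IOException {
  MetadataStore store = MetadataStoreBuilder.newBuilder()
      .setConf(conf)
      .setCreateIfMissing(true)
      .setDbFile(dbDir)
      .build();
  try {
    store.put("volume1".getBytes(java.nio.charset.StandardCharsets.UTF_8),
        "metadata".getBytes(java.nio.charset.StandardCharsets.UTF_8));
    // Return up to 100 entries, starting at (and including) "volume1".
    java.util.List<java.util.Map.Entry<byte[], byte[]>> page =
        store.getRangeKVs(
            "volume1".getBytes(java.nio.charset.StandardCharsets.UTF_8), 100);
    System.out.println("entries found: " + page.size());
  } finally {
    store.close();
  }
}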

[70/83] [abbrv] hadoop git commit: HDFS-13197. Ozone: Fix ConfServlet#getOzoneTags cmd. Contributed by Ajay Kumar.

2018-04-24 Thread xyao
HDFS-13197. Ozone: Fix ConfServlet#getOzoneTags cmd. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/66610b5f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/66610b5f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/66610b5f

Branch: refs/heads/trunk
Commit: 66610b5fd5dc29c1bff006874bf46d426d3a9dfa
Parents: 584c573
Author: Xiaoyu Yao 
Authored: Fri Apr 13 13:42:57 2018 -0700
Committer: Xiaoyu Yao 
Committed: Fri Apr 13 13:42:57 2018 -0700

--
 .../hadoop/hdds/conf/HddsConfServlet.java   | 181 +
 .../common/src/main/resources/ozone-default.xml |  10 +
 .../hadoop/hdds/server/BaseHttpServer.java  |  10 +-
 .../src/main/resources/webapps/static/ozone.js  | 668 ++-
 .../webapps/static/templates/config.html|  28 +-
 5 files changed, 562 insertions(+), 335 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/66610b5f/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java
new file mode 100644
index 000..068e41f
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import com.google.gson.Gson;
+import java.io.IOException;
+import java.io.Writer;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import javax.ws.rs.core.HttpHeaders;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.http.HttpServer2;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A servlet to print out the running configuration data.
+ */
+@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
+@InterfaceStability.Unstable
+public class HddsConfServlet extends HttpServlet {
+
+  private static final long serialVersionUID = 1L;
+
+  protected static final String FORMAT_JSON = "json";
+  protected static final String FORMAT_XML = "xml";
+  private static final String COMMAND = "cmd";
+  private static final OzoneConfiguration OZONE_CONFIG =
+  new OzoneConfiguration();
+  transient Logger LOG = LoggerFactory.getLogger(HddsConfServlet.class);
+
+
+  /**
+   * Return the Configuration of the daemon hosting this servlet.
+   * This is populated when the HttpServer starts.
+   */
+  private Configuration getConfFromContext() {
+Configuration conf = (Configuration) getServletContext().getAttribute(
+HttpServer2.CONF_CONTEXT_ATTRIBUTE);
+assert conf != null;
+return conf;
+  }
+
+  @Override
+  public void doGet(HttpServletRequest request, HttpServletResponse response)
+  throws ServletException, IOException {
+
+if (!HttpServer2.isInstrumentationAccessAllowed(getServletContext(),
+request, response)) {
+  return;
+}
+
+String format = parseAcceptHeader(request);
+if (FORMAT_XML.equals(format)) {
+  response.setContentType("text/xml; charset=utf-8");
+} else if (FORMAT_JSON.equals(format)) {
+  response.setContentType("application/json; charset=utf-8");
+}
+
+String name = request.getParameter("name");
+Writer out = response.getWriter();
+String cmd = request.getParameter(COMMAND);
+
+processCommand(cmd, format, request, response, out, name);
+out.close();
+  }
+
+  private void processCommand(String cmd, String format,
+  HttpServletRequest requ
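The servlet fragment is truncated here. As a hedged client-side sketch of how this servlet can be queried over HTTP: the host, port and /conf path are assumptions (they depend on how BaseHttpServer mounts the servlet), while the cmd parameter name matches the COMMAND constant above and getOzoneTags comes from the commit title.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HddsConfFetchSketch {
  public static void main(String[] args) throws Exception {
    // Host, port and path are placeholders for wherever the HDDS web UI is served.
    URL url = new URL("http://localhost:9876/conf?cmd=getOzoneTags");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    // parseAcceptHeader() above selects XML or JSON output from the Accept header.
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}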

[63/83] [abbrv] hadoop git commit: HDFS-13415. Ozone: Remove cblock code from HDFS-7240. Contributed by Elek, Marton.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea85801c/hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/impl/CBlockLocalCache.java
--
diff --git 
a/hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/impl/CBlockLocalCache.java
 
b/hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/impl/CBlockLocalCache.java
deleted file mode 100644
index ec5a4c9..000
--- 
a/hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/impl/CBlockLocalCache.java
+++ /dev/null
@@ -1,577 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.cblock.jscsiHelper.cache.impl;
-
-import com.google.common.base.Preconditions;
-import com.google.common.primitives.Longs;
-import org.apache.commons.lang.StringUtils;
-import org.apache.hadoop.cblock.jscsiHelper.ContainerCacheFlusher;
-import org.apache.hadoop.cblock.jscsiHelper.cache.CacheModule;
-import org.apache.hadoop.cblock.jscsiHelper.cache.LogicalBlock;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdds.scm.XceiverClientManager;
-import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
-import org.apache.hadoop.cblock.jscsiHelper.CBlockTargetMetrics;
-import org.apache.hadoop.utils.LevelDBStore;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.File;
-import java.io.IOException;
-import java.net.URI;
-import java.net.URISyntaxException;
-import java.nio.file.FileStore;
-import java.nio.file.Files;
-import java.nio.file.Path;
-import java.nio.file.Paths;
-import java.util.List;
-
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_DISK_CACHE_PATH_DEFAULT;
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_DISK_CACHE_PATH_KEY;
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_ENABLE_SHORT_CIRCUIT_IO;
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_ENABLE_SHORT_CIRCUIT_IO_DEFAULT;
-import static org.apache.hadoop.cblock.CBlockConfigKeys.DFS_CBLOCK_TRACE_IO;
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_TRACE_IO_DEFAULT;
-
-/**
- * A local cache used by the CBlock ISCSI server. This class is enabled or
- * disabled via config settings.
- */
-public class CBlockLocalCache implements CacheModule {
-  private static final Logger LOG =
-  LoggerFactory.getLogger(CBlockLocalCache.class);
-  private static final Logger TRACER =
-  LoggerFactory.getLogger("TraceIO");
-
-  private final Configuration conf;
-  /**
-   * LevelDB cache file.
-   */
-  private final LevelDBStore cacheDB;
-
-  /**
-   * AsyncBlock writer updates the cacheDB and writes the blocks async to
-   * remote containers.
-   */
-  private final AsyncBlockWriter blockWriter;
-
-  /**
-   * Sync block reader tries to read from the cache and if we get a cache
-   * miss we will fetch the block from remote location. It will asynchronously
-   * update the cacheDB.
-   */
-  private final SyncBlockReader blockReader;
-  private final String userName;
-  private final String volumeName;
-
-  /**
-   * From a block ID we are able to get the pipeline by indexing this array.
-   */
-  private final Pipeline[] containerList;
-  private final int blockSize;
-  private XceiverClientManager clientManager;
-  /**
-   * If this flag is enabled then cache traces all I/O, all reads and writes
-   * are visible in the log with sha of the block written. Makes the system
-   * slower use it only for debugging or creating trace simulations.
-   */
-  private final boolean traceEnabled;
-  private final boolean enableShortCircuitIO;
-  private final long volumeSize;
-  private long currentCacheSize;
-  private File dbPath;
-  private final ContainerCacheFlusher flusher;
-  private CBlockTargetMetrics cblockTargetMetrics;
-
-  /**
-   * Get Db Path.
-   * @return the file instance of the db.
-   */
-  public File getDbPath() {
-return dbPath;
-  }
-
-  /**
-   * Constructor for CBlockLocalCache invoked via the builder.
-   *
-   * @param conf -  Configuration
-   * @param volumeName - vol

[78/83] [abbrv] hadoop git commit: HDFS-13459. Ozone: Clean-up of ozone related change from MiniDFSCluster. Contributed by Nandakumar.

2018-04-24 Thread xyao
HDFS-13459. Ozone: Clean-up of ozone related change from MiniDFSCluster. 
Contributed by Nandakumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1e0507ac
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1e0507ac
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1e0507ac

Branch: refs/heads/trunk
Commit: 1e0507ac56a517437066ac580e1cc775e22dc314
Parents: 3d9a949
Author: Anu Engineer 
Authored: Mon Apr 16 16:18:06 2018 -0700
Committer: Anu Engineer 
Committed: Mon Apr 16 16:18:06 2018 -0700

--
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  |  51 ---
 .../hdfs/MiniDFSClusterWithNodeGroup.java   |   2 +-
 .../src/test/resources/hadoop-22-dfs-dir.tgz| Bin 328893 -> 413239 bytes
 3 files changed, 22 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1e0507ac/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index ec1c08b..4c3aed7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -536,14 +536,6 @@ public class MiniDFSCluster implements AutoCloseable {
   this.ipcPort = ipcPort;
 }
 
-public Configuration getConf() {
-  return conf;
-}
-
-public DataNode getDatanode() {
-  return datanode;
-}
-
 public void setDnArgs(String ... args) {
   dnArgs = args;
 }
@@ -1083,6 +1075,9 @@ public class MiniDFSCluster implements AutoCloseable {
*/
   public static void configureNameNodes(MiniDFSNNTopology nnTopology, boolean 
federation,
   Configuration conf) throws IOException {
+Preconditions.checkArgument(nnTopology.countNameNodes() > 0,
+"empty NN topology: no namenodes specified!");
+
 if (!federation && nnTopology.countNameNodes() == 1) {
   NNConf onlyNN = nnTopology.getOnlyNameNode();
   // we only had one NN, set DEFAULT_NAME for it. If not explicitly
@@ -1597,7 +1592,7 @@ public class MiniDFSCluster implements AutoCloseable {
 dnConf.addResource(dnConfOverlays[i]);
   }
   // Set up datanode address
-  setupDatanodeAddress(i, dnConf, setupHostsFile, checkDataNodeAddrConfig);
+  setupDatanodeAddress(dnConf, setupHostsFile, checkDataNodeAddrConfig);
   if (manageDfsDirs) {
 String dirs = makeDataNodeDirs(i, storageTypes == null ?
   null : storageTypes[i - curDatanodesNum]);
@@ -2358,7 +2353,7 @@ public class MiniDFSCluster implements AutoCloseable {
   conf.set(DFS_DATANODE_ADDRESS_KEY, 
   addr.getAddress().getHostAddress() + ":" + addr.getPort());
   conf.set(DFS_DATANODE_IPC_ADDRESS_KEY,
-  addr.getAddress().getHostAddress() + ":" + dnprop.ipcPort);
+  addr.getAddress().getHostAddress() + ":" + dnprop.ipcPort); 
 }
 final DataNode newDn = DataNode.createDataNode(args, conf, 
secureResources);
 
@@ -2899,19 +2894,16 @@ public class MiniDFSCluster implements AutoCloseable {
 
   /**
* Get a storage directory for a datanode.
-   * For examples,
* 
-   * /data/dn0_data0
-   * /data/dn0_data1
-   * /data/dn1_data0
-   * /data/dn1_data1
+   * /data/data<2*dnIndex + 1>
+   * /data/data<2*dnIndex + 2>
* 
*
* @param dnIndex datanode index (starts from 0)
* @param dirIndex directory index.
* @return Storage directory
*/
-  public static File getStorageDir(int dnIndex, int dirIndex) {
+  public File getStorageDir(int dnIndex, int dirIndex) {
 return new File(getBaseDirectory(), getStorageDirPath(dnIndex, dirIndex));
   }
 
@@ -2922,8 +2914,8 @@ public class MiniDFSCluster implements AutoCloseable {
* @param dirIndex directory index.
* @return storage directory path
*/
-  private static String getStorageDirPath(int dnIndex, int dirIndex) {
-return "data/dn" + dnIndex + "_data" + dirIndex;
+  private String getStorageDirPath(int dnIndex, int dirIndex) {
+return "data/data" + (storagesPerDatanode * dnIndex + 1 + dirIndex);
   }
 
   /**
@@ -3188,36 +3180,35 @@ public class MiniDFSCluster implements AutoCloseable {
 }
   }
   
-  protected void setupDatanodeAddress(
-  int i, Configuration dnConf, boolean setupHostsFile,
-  boolean checkDataNodeAddrConfig) throws IOException {
+  protected void setupDatanodeAddress(Configuration conf, boolean 
setupHostsFile,
+   boolean checkDataNodeAddrConfig) throws IOException 
{
 if (setupHostsFile) {
-  

[42/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/KeyUtils.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/KeyUtils.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/KeyUtils.java
new file mode 100644
index 000..33eb911
--- /dev/null
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/KeyUtils.java
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.apache.hadoop.ozone.container.common.helpers;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.scm.container.common.helpers
+.StorageContainerException;
+import org.apache.hadoop.hdds.protocol.proto.ContainerProtos;
+import org.apache.hadoop.ozone.container.common.utils.ContainerCache;
+import org.apache.hadoop.utils.MetadataStore;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+
+import static org.apache.hadoop.hdds.protocol.proto.ContainerProtos.Result
+.NO_SUCH_KEY;
+import static org.apache.hadoop.hdds.protocol.proto.ContainerProtos.Result
+.UNABLE_TO_READ_METADATA_DB;
+
+/**
+ * Utils functions to help key functions.
+ */
+public final class KeyUtils {
+  public static final String ENCODING_NAME = "UTF-8";
+  public static final Charset ENCODING = Charset.forName(ENCODING_NAME);
+
+  /**
+   * Never Constructed.
+   */
+  private KeyUtils() {
+  }
+
+  /**
+   * Get a DB handler for a given container.
+   * If the handler doesn't exist in the cache yet, first create one and
+   * add it into the cache. This function is called with the containerManager
+   * ReadLock held.
+   *
+   * @param container container.
+   * @param conf configuration.
+   * @return MetadataStore handle.
+   * @throws StorageContainerException
+   */
+  public static MetadataStore getDB(ContainerData container,
+  Configuration conf) throws StorageContainerException {
+Preconditions.checkNotNull(container);
+ContainerCache cache = ContainerCache.getInstance(conf);
+Preconditions.checkNotNull(cache);
+try {
+  return cache.getDB(container.getContainerName(), container.getDBPath());
+} catch (IOException ex) {
+  String message =
+  String.format("Unable to open DB. DB Name: %s, Path: %s. ex: %s",
+  container.getContainerName(), container.getDBPath(), 
ex.getMessage());
+  throw new StorageContainerException(message, UNABLE_TO_READ_METADATA_DB);
+}
+  }
+
+  /**
+   * Remove a DB handler from cache.
+   *
+   * @param container - Container data.
+   * @param conf - Configuration.
+   */
+  public static void removeDB(ContainerData container,
+  Configuration conf) {
+Preconditions.checkNotNull(container);
+ContainerCache cache = ContainerCache.getInstance(conf);
+Preconditions.checkNotNull(cache);
+cache.removeDB(container.getContainerName());
+  }
+  /**
+   * Shutdown all DB Handles.
+   *
+   * @param cache - Cache for DB Handles.
+   */
+  @SuppressWarnings("unchecked")
+  public static void shutdownCache(ContainerCache cache)  {
+cache.shutdownCache();
+  }
+
+  /**
+   * Returns successful keyResponse.
+   * @param msg - Request.
+   * @return Response.
+   */
+  public static ContainerProtos.ContainerCommandResponseProto
+  getKeyResponse(ContainerProtos.ContainerCommandRequestProto msg) {
+return ContainerUtils.getContainerResponse(msg);
+  }
+
+
+  public static ContainerProtos.ContainerCommandResponseProto
+  getKeyDataResponse(ContainerProtos.ContainerCommandRequestProto msg,
+  KeyData data) {
+ContainerProtos.GetKeyResponseProto.Builder getKey = ContainerProtos
+.GetKeyResponseProto.newBuilder();
+getKey.setKeyData(data.getProtoBufMessage());
+ContainerProtos.ContainerCommandResponseProto.Builder builder =
+ContainerUtils.getContainerResponse(msg, ContainerProtos.Result
+.SUCCESS, "");
+builder.setGetKey(getKey);
+
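The KeyUtils fragment is truncated here. A minimal hedged sketch of how the getDB helper above is typically used follows; the readKey method, its parameters and the error message are invented for illustration, and the usual imports (ContainerData, Configuration, MetadataStore, ContainerProtos, StorageContainerException, IOException) are assumed to be in place.

// Sketch only: assumes a ContainerData instance and a Configuration are available.
static byte[] readKey(ContainerData container, Configuration conf, String key)
    throws StorageContainerException {
  // The handle is cached per container; do not close it here, the
  // ContainerCache owns it and KeyUtils.removeDB() drops the cache entry.
  MetadataStore db = KeyUtils.getDB(container, conf);
  try {
    return db.get(key.getBytes(KeyUtils.ENCODING));
  } catch (IOException e) {
    throw new StorageContainerException("Unable to read key " + key,
        ContainerProtos.Result.UNABLE_TO_READ_METADATA_DB);
  }
}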

[61/83] [abbrv] hadoop git commit: HDFS-13415. Ozone: Remove cblock code from HDFS-7240. Contributed by Elek, Marton.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea85801c/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestCBlockReadWrite.java
--
diff --git 
a/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestCBlockReadWrite.java
 
b/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestCBlockReadWrite.java
deleted file mode 100644
index fb58a4e..000
--- 
a/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestCBlockReadWrite.java
+++ /dev/null
@@ -1,377 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-package org.apache.hadoop.cblock;
-
-import com.google.common.primitives.Longs;
-import org.apache.commons.codec.digest.DigestUtils;
-import org.apache.commons.lang.RandomStringUtils;
-import org.apache.hadoop.cblock.jscsiHelper.CBlockTargetMetrics;
-import org.apache.hadoop.cblock.jscsiHelper.ContainerCacheFlusher;
-import org.apache.hadoop.cblock.jscsiHelper.cache.LogicalBlock;
-import org.apache.hadoop.cblock.jscsiHelper.cache.impl.CBlockLocalCache;
-import org.apache.hadoop.hdds.conf.OzoneConfiguration;
-import org.apache.hadoop.io.IOUtils;
-import org.apache.hadoop.ozone.MiniOzoneClassicCluster;
-import org.apache.hadoop.ozone.MiniOzoneCluster;
-import org.apache.hadoop.ozone.OzoneConsts;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
-import org.apache.hadoop.hdds.scm.XceiverClientManager;
-import org.apache.hadoop.hdds.scm.XceiverClientSpi;
-import org.apache.hadoop.hdds.scm.container.common.helpers.PipelineChannel;
-import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
-import org.apache.hadoop.hdds.scm.protocolPB
-.StorageContainerLocationProtocolClientSideTranslatorPB;
-import org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls;
-import org.apache.hadoop.test.GenericTestUtils;
-import org.junit.AfterClass;
-import org.junit.Assert;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import java.io.IOException;
-import java.nio.charset.StandardCharsets;
-import java.util.LinkedList;
-import java.util.List;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.TimeoutException;
-
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_DISK_CACHE_PATH_KEY;
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_TRACE_IO;
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_ENABLE_SHORT_CIRCUIT_IO;
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_BLOCK_BUFFER_FLUSH_INTERVAL;
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_CACHE_BLOCK_BUFFER_SIZE;
-
-/**
- * Tests for Cblock read write functionality.
- */
-public class TestCBlockReadWrite {
-  private final static long GB = 1024 * 1024 * 1024;
-  private final static int KB = 1024;
-  private static MiniOzoneCluster cluster;
-  private static OzoneConfiguration config;
-  private static StorageContainerLocationProtocolClientSideTranslatorPB
-  storageContainerLocationClient;
-  private static XceiverClientManager xceiverClientManager;
-
-  @BeforeClass
-  public static void init() throws IOException {
-config = new OzoneConfiguration();
-String path = GenericTestUtils
-.getTempPath(TestCBlockReadWrite.class.getSimpleName());
-config.set(DFS_CBLOCK_DISK_CACHE_PATH_KEY, path);
-config.setBoolean(DFS_CBLOCK_TRACE_IO, true);
-config.setBoolean(DFS_CBLOCK_ENABLE_SHORT_CIRCUIT_IO, true);
-cluster = new MiniOzoneClassicCluster.Builder(config)
-.numDataNodes(1)
-.setHandlerType(OzoneConsts.OZONE_HANDLER_DISTRIBUTED).build();
-storageContainerLocationClient = cluster
-.createStorageContainerLocationClient();
-xceiverClientManager = new XceiverClientManager(config);
-  }
-
-  @AfterClass
-  public static void shutdown() throws InterruptedException {
-if (cluster != null) {
-  cluster.shutdown();
-}
-IOUtils.cleanupWithLogger(null, storageContainerLocationClient, cluster);
-  }

[44/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
new file mode 100644
index 000..8018d29
--- /dev/null
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -0,0 +1,1031 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+  
+  
+ozone.container.cache.size
+1024
+PERFORMANCE, CONTAINER, STORAGE
+The open container is cached on the data node side. We 
maintain
+  an LRU
+  cache for caching the recently used containers. This setting controls the
+  size of that cache.
+
+  
+  
+dfs.container.ipc
+9859
+OZONE, CONTAINER, MANAGEMENT
+The ipc port number of container.
+  
+  
+dfs.container.ipc.random.port
+false
+OZONE, DEBUG, CONTAINER
+Allocates a random free port for ozone container. This is used
+  only while
+  running unit tests.
+
+  
+  
+dfs.container.ratis.datanode.storage.dir
+
+OZONE, CONTAINER, STORAGE, MANAGEMENT, RATIS
+This directory is used for storing Ratis metadata like logs. 
If
+  this is
+  not set, then the default metadata dirs are used. A warning will be logged if
+  this is not set. Ideally, this should be mapped to a fast disk like an SSD.
+
+  
+  
+dfs.container.ratis.enabled
+false
+OZONE, MANAGEMENT, PIPELINE, RATIS
+Ozone supports different kinds of replication pipelines. Ratis
+  is one of
+  the replication pipeline supported by ozone.
+
+  
+  
+dfs.container.ratis.ipc
+9858
+OZONE, CONTAINER, PIPELINE, RATIS, MANAGEMENT
+The ipc port number of container.
+  
+  
+dfs.container.ratis.ipc.random.port
+false
+OZONE,DEBUG
+Allocates a random free port for ozone ratis port for the
+  container. This
+  is used only while running unit tests.
+
+  
+  
+dfs.container.ratis.rpc.type
+GRPC
+OZONE, RATIS, MANAGEMENT
+Ratis supports different kinds of transports like netty, GRPC,
+  Hadoop RPC
+  etc. This picks one of those for this cluster.
+
+  
+  
+dfs.container.ratis.num.write.chunk.threads
+60
+OZONE, RATIS, PERFORMANCE
+Maximum number of threads in the thread pool that Ratis
+  will use for writing chunks (60 by default).
+
+  
+  
+dfs.container.ratis.segment.size
+1073741824
+OZONE, RATIS, PERFORMANCE
+The size of the raft segment used by Apache Ratis on 
datanodes.
+  (1 GB by default)
+
+  
+  
+dfs.container.ratis.segment.preallocated.size
+134217728
+OZONE, RATIS, PERFORMANCE
+The size of the buffer which is preallocated for raft segment
+  used by Apache Ratis on datanodes.(128 MB by default)
+
+  
+  
+ozone.container.report.interval
+6ms
+OZONE, CONTAINER, MANAGEMENT
+Time interval for the datanode to send container reports. Each
+  datanode periodically sends a container report upon receiving
+  sendContainerReport from SCM. Unit could be defined with
+  postfix (ns,ms,s,m,h,d)
+  
+  
+  
+ozone.administrators
+
+OZONE, SECURITY
+Ozone administrator users, delimited by commas.
+  If not set, only the user who launches an ozone service will be the admin
+  user. This property must be set if ozone services are started by 
different
+  users. Otherwise, the RPC layer will reject calls from other servers 
which
+  are started by users not in the list.
+
+  
+  
+ozone.block.deleting.container.limit.per.interval
+10
+OZONE, PERFORMANCE, SCM
+A maximum number of containers to be scanned by block deleting
+  service per
+  time interval. The block deleting service spawns a thread to handle block
+  deletions in a container. This property is used to throttle the number of
+  threads spawned for block deletions.
+
+  
+  
+ozone.block.deleting.limit.per.task
+1000
+OZONE, PERFORMANCE, SCM
+A maximum number of blocks to be deleted by block deleting
+  service per
+  time interval. This property is used to throttle the actual number of
+  block deletions on a data node per container.
+
+  
+  
+ozone.block.deleting.service.interval
+1m
+OZONE, PERFORMANCE, SCM
+Time interval of the block deleting service.
+  The block deleting service runs on each datanode periodically and
+  deletes blocks queued for deletion. Unit could be defined with
+  postfix (ns,ms,s,m,h,d)
+
+  
+  
+ozone.block.deleting.service.timeout
+30ms
+OZONE, PERFORMANCE, SCM
+A timeout value of block deletion service. If this is set
+  greater than 0,
+  the service will stop waiting for the block deleting completion after 
this
+  time. If timeout happens to a large proportion of block deletion, t
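These defaults are consumed through the normal Hadoop configuration mechanism. A small hedged Java sketch of reading a few of the keys listed above: the key names and default values are taken from this file, and it assumes OzoneConfiguration registers ozone-default.xml and ozone-site.xml as default resources, as its use elsewhere in this digest suggests.

import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public class OzoneConfSketch {
  public static void main(String[] args) {
    // Loads ozone-default.xml from the classpath plus any ozone-site.xml overrides.
    OzoneConfiguration conf = new OzoneConfiguration();
    int cacheSize = conf.getInt("ozone.container.cache.size", 1024);
    boolean ratisEnabled = conf.getBoolean("dfs.container.ratis.enabled", false);
    int containerIpcPort = conf.getInt("dfs.container.ipc", 9859);
    System.out.println(cacheSize + " " + ratisEnabled + " " + containerIpcPort);
  }
}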

[57/83] [abbrv] hadoop git commit: HDFS-13324. Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails. Contributed by Shashikant Banerjee.

2018-04-24 Thread xyao
HDFS-13324. Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails. 
Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d8fd9220
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d8fd9220
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d8fd9220

Branch: refs/heads/trunk
Commit: d8fd9220dadcfac5e1168ebee7d5c1380646d419
Parents: df3ff90
Author: Nanda kumar 
Authored: Wed Apr 11 14:25:38 2018 +0530
Committer: Nanda kumar 
Committed: Wed Apr 11 14:25:38 2018 +0530

--
 .../hadoop/hdds/protocol/DatanodeDetails.java   | 88 +---
 hadoop-hdds/common/src/main/proto/hdds.proto|  8 +-
 .../hadoop/ozone/HddsDatanodeService.java   |  7 --
 .../common/TestDatanodeStateMachine.java|  2 -
 .../org/apache/hadoop/hdds/scm/TestUtils.java   |  2 -
 .../hdds/scm/block/TestDeletedBlockLog.java |  4 -
 .../ozone/container/ContainerTestHelper.java|  2 -
 .../apache/hadoop/ozone/ksm/TestKSMSQLCli.java  |  2 +-
 .../hadoop/ozone/scm/TestContainerSQLCli.java   |  2 +-
 .../apache/hadoop/ozone/scm/TestSCMMetrics.java |  2 -
 .../hadoop/ozone/ksm/KeySpaceManager.java   |  6 --
 .../hadoop/ozone/genesis/GenesisUtil.java   |  2 -
 .../org/apache/hadoop/ozone/scm/cli/SQLCLI.java | 31 +++
 .../hadoop/ozone/scm/cli/package-info.java  |  2 +-
 14 files changed, 22 insertions(+), 138 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8fd9220/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
index 1463591..764b3cd 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
@@ -42,8 +42,6 @@ public final class DatanodeDetails implements 
Comparable {
 
   private String ipAddress;
   private String hostName;
-  private Integer infoPort;
-  private Integer infoSecurePort;
   private Integer containerPort;
   private Integer ratisPort;
   private Integer ozoneRestPort;
@@ -55,21 +53,15 @@ public final class DatanodeDetails implements 
Comparable {
* @param uuid DataNode's UUID
* @param ipAddress IP Address of this DataNode
* @param hostName DataNode's hostname
-   * @param infoPort HTTP Port
-   * @param infoSecurePort HTTPS Port
* @param containerPort Container Port
* @param ratisPort Ratis Port
* @param ozoneRestPort Rest Port
*/
-  private DatanodeDetails(
-  String uuid, String ipAddress, String hostName, Integer infoPort,
-  Integer infoSecurePort, Integer containerPort, Integer ratisPort,
-  Integer ozoneRestPort) {
+  private DatanodeDetails(String uuid, String ipAddress, String hostName,
+  Integer containerPort, Integer ratisPort, Integer ozoneRestPort) {
 this.uuid = UUID.fromString(uuid);
 this.ipAddress = ipAddress;
 this.hostName = hostName;
-this.infoPort = infoPort;
-this.infoSecurePort = infoSecurePort;
 this.containerPort = containerPort;
 this.ratisPort = ratisPort;
 this.ozoneRestPort = ozoneRestPort;
@@ -130,41 +122,6 @@ public final class DatanodeDetails implements 
Comparable {
   }
 
   /**
-   * Sets the InfoPort.
-   * @param port InfoPort
-   */
-  public void setInfoPort(int port) {
-infoPort = port;
-  }
-
-  /**
-   * Returns DataNodes Info Port.
-   *
-   * @return InfoPort
-   */
-  public int getInfoPort() {
-return infoPort;
-  }
-
-  /**
-   * Sets the InfoSecurePort.
-   *
-   * @param port InfoSecurePort
-   */
-  public void setInfoSecurePort(int port) {
-infoSecurePort = port;
-  }
-
-  /**
-   * Returns DataNodes Secure Info Port.
-   *
-   * @return InfoSecurePort
-   */
-  public int getInfoSecurePort() {
-return infoSecurePort;
-  }
-
-  /**
* Sets the Container Port.
* @param port ContainerPort
*/
@@ -231,12 +188,6 @@ public final class DatanodeDetails implements 
Comparable {
 if (datanodeDetailsProto.hasHostName()) {
   builder.setHostName(datanodeDetailsProto.getHostName());
 }
-if (datanodeDetailsProto.hasInfoPort()) {
-builder.setInfoPort(datanodeDetailsProto.getInfoPort());
-}
-if (datanodeDetailsProto.hasInfoSecurePort()) {
-builder.setInfoSecurePort(datanodeDetailsProto.getInfoSecurePort());
-}
 if (datanodeDetailsProto.hasContainerPort()) {
   builder.setContainerPort(datanodeDetailsProto.getContainerPort());
 }
@@ -263,12 +214,6 @@ public final class DatanodeDetails implements 
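After this change a DatanodeDetails instance carries only the container, Ratis and REST ports. A hedged sketch of building one with the remaining setters: newBuilder() and setUuid() are assumptions inferred from the builder pattern visible in the diff, and the port numbers are the defaults listed in ozone-default.xml and the OzoneCommandShell docs elsewhere in this digest.

// Sketch only; assumes DatanodeDetails exposes a newBuilder() entry point.
DatanodeDetails dn = DatanodeDetails.newBuilder()
    .setUuid(java.util.UUID.randomUUID().toString())
    .setIpAddress("127.0.0.1")
    .setHostName("localhost")
    .setContainerPort(9859)   // dfs.container.ipc default
    .setRatisPort(9858)       // dfs.container.ratis.ipc default
    .setOzoneRestPort(9880)   // REST port used in the Ozone command shell docs
    .build();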

[43/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestRocksDBStoreMBean.java
--
diff --git 
a/hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestRocksDBStoreMBean.java
 
b/hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestRocksDBStoreMBean.java
new file mode 100644
index 000..a7ce60b
--- /dev/null
+++ 
b/hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestRocksDBStoreMBean.java
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.utils;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Test;
+
+import javax.management.MBeanServer;
+import java.io.File;
+import java.lang.management.ManagementFactory;
+
+/**
+ * Test the JMX interface for the rocksdb metastore implementation.
+ */
+public class TestRocksDBStoreMBean {
+
+  @Test
+  public void testJmxBeans() throws Exception {
+File testDir =
+GenericTestUtils.getTestDir(getClass().getSimpleName() + "-withstat");
+
+Configuration conf = new OzoneConfiguration();
+conf.set(OzoneConfigKeys.OZONE_METADATA_STORE_IMPL,
+OzoneConfigKeys.OZONE_METADATA_STORE_IMPL_ROCKSDB);
+
+RocksDBStore metadataStore =
+(RocksDBStore) MetadataStoreBuilder.newBuilder().setConf(conf)
+.setCreateIfMissing(true).setDbFile(testDir).build();
+
+for (int i = 0; i < 10; i++) {
+  metadataStore.put("key".getBytes(), "value".getBytes());
+}
+
+MBeanServer platformMBeanServer =
+ManagementFactory.getPlatformMBeanServer();
+Thread.sleep(2000);
+
+Object keysWritten = platformMBeanServer
+.getAttribute(metadataStore.getStatMBeanName(), "NUMBER_KEYS_WRITTEN");
+
+Assert.assertEquals(10L, keysWritten);
+
+Object dbWriteAverage = platformMBeanServer
+.getAttribute(metadataStore.getStatMBeanName(), "DB_WRITE_AVERAGE");
+Assert.assertTrue((double) dbWriteAverage > 0);
+
+metadataStore.close();
+
+  }
+
+  @Test()
+  public void testDisabledStat() throws Exception {
+File testDir = GenericTestUtils
+.getTestDir(getClass().getSimpleName() + "-withoutstat");
+
+Configuration conf = new OzoneConfiguration();
+conf.set(OzoneConfigKeys.OZONE_METADATA_STORE_IMPL,
+OzoneConfigKeys.OZONE_METADATA_STORE_IMPL_ROCKSDB);
+conf.set(OzoneConfigKeys.OZONE_METADATA_STORE_ROCKSDB_STATISTICS,
+OzoneConfigKeys.OZONE_METADATA_STORE_ROCKSDB_STATISTICS_OFF);
+
+RocksDBStore metadataStore =
+(RocksDBStore) MetadataStoreBuilder.newBuilder().setConf(conf)
+.setCreateIfMissing(true).setDbFile(testDir).build();
+
+Assert.assertNull(metadataStore.getStatMBeanName());
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/container-service/pom.xml
--
diff --git a/hadoop-hdds/container-service/pom.xml 
b/hadoop-hdds/container-service/pom.xml
new file mode 100644
index 000..736272d
--- /dev/null
+++ b/hadoop-hdds/container-service/pom.xml
@@ -0,0 +1,98 @@
+
+
+http://maven.apache.org/POM/4.0.0";
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+http://maven.apache.org/xsd/maven-4.0.0.xsd";>
+  4.0.0
+  
+org.apache.hadoop
+hadoop-hdds
+3.2.0-SNAPSHOT
+  
+  hadoop-hdds-container-service
+  3.2.0-SNAPSHOT
+  Apache HDDS Container server
+  Apache HDDS Container server
+  jar
+
+  
+hdds
+true
+  
+
+  
+
+  org.apache.hadoop
+  hadoop-hdds-common
+  provided
+
+
+  org.apache.hadoop
+  hadoop-hdds-server-framework
+  provided
+
+
+
+  org.mockito
+  mockito-core
+  2.2.0
+  test
+
+
+  
+
+  
+
+  
+org.apache.hadoop
+hadoop-maven-plugins
+
+  
+compile-proto

[67/83] [abbrv] hadoop git commit: HDFS-13414. Ozone: Update existing Ozone documentation according to the recent changes. Contributed by Elek, Marton.

2018-04-24 Thread xyao
HDFS-13414. Ozone: Update existing Ozone documentation according to the recent 
changes. Contributed by Elek, Marton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dd43835b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dd43835b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dd43835b

Branch: refs/heads/trunk
Commit: dd43835b3644aab7266718213e6323f38b8ea1bb
Parents: 40398d3
Author: Mukul Kumar Singh 
Authored: Thu Apr 12 21:21:44 2018 +0530
Committer: Mukul Kumar Singh 
Committed: Thu Apr 12 21:21:44 2018 +0530

--
 .../src/main/site/markdown/OzoneCommandShell.md | 38 ++---
 .../site/markdown/OzoneGettingStarted.md.vm | 59 ++--
 .../src/main/site/markdown/OzoneRest.md | 32 +--
 3 files changed, 78 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dd43835b/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneCommandShell.md
--
diff --git 
a/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneCommandShell.md 
b/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneCommandShell.md
index a274a22..fc63742 100644
--- a/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneCommandShell.md
+++ b/hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneCommandShell.md
@@ -25,10 +25,10 @@ The Ozone commands take the following format.
  -root`
 
 The *port* specified in command should match the port mentioned in the config
-property `dfs.datanode.http.address`. This property can be set in 
`hdfs-site.xml`.
-The default value for the port is `9864` and is used in below commands.
+property `hdds.rest.http-address`. This property can be set in 
`ozone-site.xml`.
+The default value for the port is `9880` and is used in below commands.
 
-The *--root* option is a command line short cut that allows *ozone oz*
+The *-root* option is a command line short cut that allows *ozone oz*
 commands to be run as the user that started the cluster. This is useful to
 indicate that you want the commands to be run as some admin user. The only
 reason for this option is that it makes the life of a lazy developer more
@@ -44,37 +44,37 @@ ozone cluster.
 
 Volumes can be created only by Admins. Here is an example of creating a volume.
 
-* `ozone oz -createVolume http://localhost:9864/hive -user bilbo -quota
+* `ozone oz -createVolume http://localhost:9880/hive -user bilbo -quota
 100TB -root`
 
 The above command creates a volume called `hive` owned by user `bilbo`. The
-`--root` option allows the command to be executed as user `hdfs` which is an
+`-root` option allows the command to be executed as user `hdfs` which is an
 admin in the cluster.
 
 ### Update Volume
 
 Updates information like ownership and quota on an existing volume.
 
-* `ozone oz  -updateVolume  http://localhost:9864/hive -quota 500TB -root`
+* `ozone oz  -updateVolume  http://localhost:9880/hive -quota 500TB -root`
 
 The above command changes the volume quota of hive from 100TB to 500TB.
 
 ### Delete Volume
 Deletes a Volume if it is empty.
 
-* `ozone oz -deleteVolume http://localhost:9864/hive -root`
+* `ozone oz -deleteVolume http://localhost:9880/hive -root`
 
 
 ### Info Volume
 Info volume command allows the owner or the administrator of the cluster to 
read meta-data about a specific volume.
 
-* `ozone oz -infoVolume http://localhost:9864/hive -root`
+* `ozone oz -infoVolume http://localhost:9880/hive -root`
 
 ### List Volumes
 
 List volume command can be used by administrator to list volumes of any user. 
It can also be used by a user to list volumes owned by him.
 
-* `ozone oz -listVolume http://localhost:9864/ -user bilbo -root`
+* `ozone oz -listVolume http://localhost:9880/ -user bilbo -root`
 
 The above command lists all volumes owned by user bilbo.
 
@@ -89,7 +89,7 @@ Following examples assume that these commands are run by the 
owner of the volume
 
 Create bucket call allows the owner of a volume to create a bucket.
 
-* `ozone oz -createBucket http://localhost:9864/hive/january`
+* `ozone oz -createBucket http://localhost:9880/hive/january`
 
 This call creates a bucket called `january` in the volume called `hive`. If
 the volume does not exist, then this call will fail.
@@ -98,23 +98,23 @@ the volume does not exist, then this call will fail.
 ### Update Bucket
 Updates bucket meta-data, like ACLs.
 
-* `ozone oz -updateBucket http://localhost:9864/hive/january  -addAcl
+* `ozone oz -updateBucket http://localhost:9880/hive/january  -addAcl
 user:spark:rw`
 
 ### Delete Bucket
 Deletes a bucket if it is empty.
 
-* `ozone oz -deleteBucket http://localhost:9864/hive/january`
+* `ozone oz -deleteBucket http://localhost:9880/hive/january`
 
 ### Info Bucket
 Returns inf

[36/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/framework/src/main/resources/webapps/static/angular-1.6.4.min.js
--
diff --git 
a/hadoop-hdds/framework/src/main/resources/webapps/static/angular-1.6.4.min.js 
b/hadoop-hdds/framework/src/main/resources/webapps/static/angular-1.6.4.min.js
new file mode 100644
index 000..c4bf158
--- /dev/null
+++ 
b/hadoop-hdds/framework/src/main/resources/webapps/static/angular-1.6.4.min.js
@@ -0,0 +1,332 @@
+/*
+ AngularJS v1.6.4
+ (c) 2010-2017 Google, Inc. http://angularjs.org
+ License: MIT
+*/
[minified angular-1.6.4.min.js source omitted: the remaining lines of this vendored library are wrapped and truncated beyond recovery in this archive]

[51/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
HDFS-13405. Ozone: Rename HDSL to HDDS.
Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee 
and Anu Engineer.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/651a05a1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/651a05a1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/651a05a1

Branch: refs/heads/trunk
Commit: 651a05a18135ee39b6640f7e386acb086be1cf51
Parents: b2974ff
Author: Anu Engineer 
Authored: Thu Apr 5 11:24:39 2018 -0700
Committer: Anu Engineer 
Committed: Thu Apr 5 11:24:39 2018 -0700

--
 dev-support/bin/dist-layout-stitching   |   18 +-
 .../main/resources/assemblies/hadoop-src.xml|2 +-
 hadoop-cblock/server/pom.xml|8 +-
 .../org/apache/hadoop/cblock/CBlockManager.java |   14 +-
 .../org/apache/hadoop/cblock/CblockUtils.java   |4 +-
 .../cblock/client/CBlockVolumeClient.java   |2 +-
 .../cblock/jscsiHelper/BlockWriterTask.java |6 +-
 ...ockClientProtocolClientSideTranslatorPB.java |2 +-
 .../cblock/jscsiHelper/CBlockIStorageImpl.java  |4 +-
 .../cblock/jscsiHelper/CBlockTargetServer.java  |4 +-
 .../jscsiHelper/ContainerCacheFlusher.java  |4 +-
 .../cblock/jscsiHelper/SCSITargetDaemon.java|   20 +-
 .../cache/impl/AsyncBlockWriter.java|8 +-
 .../cache/impl/CBlockLocalCache.java|4 +-
 .../jscsiHelper/cache/impl/SyncBlockReader.java |8 +-
 .../cblock/kubernetes/DynamicProvisioner.java   |2 +-
 .../hadoop/cblock/meta/ContainerDescriptor.java |2 +-
 .../hadoop/cblock/meta/VolumeDescriptor.java|2 +-
 .../cblock/proto/MountVolumeResponse.java   |2 +-
 ...entServerProtocolServerSideTranslatorPB.java |2 +-
 .../hadoop/cblock/storage/StorageManager.java   |   12 +-
 .../main/proto/CBlockClientServerProtocol.proto |4 +-
 .../apache/hadoop/cblock/TestBufferManager.java |   12 +-
 .../hadoop/cblock/TestCBlockReadWrite.java  |   20 +-
 .../apache/hadoop/cblock/TestCBlockServer.java  |4 +-
 .../cblock/TestCBlockServerPersistence.java |4 +-
 .../hadoop/cblock/TestLocalBlockCache.java  |   12 +-
 .../kubernetes/TestDynamicProvisioner.java  |2 +-
 .../cblock/util/ContainerLookUpService.java |2 +-
 .../hadoop/cblock/util/MockStorageClient.java   |   24 +-
 .../org/apache/hadoop/cblock/cli/CBlockCli.java |2 +-
 .../org/apache/hadoop/cblock/TestCBlockCLI.java |2 +-
 .../src/main/bin/hadoop-functions.sh|4 +-
 hadoop-dist/pom.xml |   10 +-
 .../src/main/compose/cblock/docker-config   |2 +-
 .../src/main/compose/ozone/docker-config|2 +-
 hadoop-hdds/client/pom.xml  |   49 +
 .../apache/hadoop/hdds/scm/XceiverClient.java   |  192 +++
 .../hadoop/hdds/scm/XceiverClientHandler.java   |  202 +++
 .../hdds/scm/XceiverClientInitializer.java  |   72 +
 .../hadoop/hdds/scm/XceiverClientManager.java   |  218 +++
 .../hadoop/hdds/scm/XceiverClientMetrics.java   |   92 ++
 .../hadoop/hdds/scm/XceiverClientRatis.java |  266 
 .../scm/client/ContainerOperationClient.java|  407 ++
 .../hadoop/hdds/scm/client/HddsClientUtils.java |  232 
 .../hadoop/hdds/scm/client/package-info.java|   23 +
 .../apache/hadoop/hdds/scm/package-info.java|   23 +
 .../hdds/scm/storage/ChunkInputStream.java  |  261 
 .../hdds/scm/storage/ChunkOutputStream.java |  227 +++
 .../hadoop/hdds/scm/storage/package-info.java   |   23 +
 .../common/dev-support/findbugsExcludeFile.xml  |   21 +
 hadoop-hdds/common/pom.xml  |  128 ++
 .../org/apache/hadoop/hdds/HddsConfigKeys.java  |6 +
 .../java/org/apache/hadoop/hdds/HddsUtils.java  |  272 
 .../apache/hadoop/hdds/client/OzoneQuota.java   |  203 +++
 .../hadoop/hdds/client/ReplicationFactor.java   |   63 +
 .../hadoop/hdds/client/ReplicationType.java |   28 +
 .../apache/hadoop/hdds/client/package-info.java |   23 +
 .../hadoop/hdds/conf/OzoneConfiguration.java|  162 +++
 .../apache/hadoop/hdds/conf/package-info.java   |   18 +
 .../org/apache/hadoop/hdds/package-info.java|   23 +
 .../hadoop/hdds/protocol/DatanodeDetails.java   |  422 ++
 .../hadoop/hdds/protocol/package-info.java  |   22 +
 .../apache/hadoop/hdds/scm/ScmConfigKeys.java   |  271 
 .../org/apache/hadoop/hdds/scm/ScmInfo.java |   81 ++
 .../hadoop/hdds/scm/XceiverClientSpi.java   |  129 ++
 .../hadoop/hdds/scm/client/ScmClient.java   |  139 ++
 .../hadoop/hdds/scm/client/package-info.java|   24 +
 .../hadoop/hdds/scm/container/ContainerID.java  |   97 ++
 .../common/helpers/AllocatedBlock.java  |   77 ++
 .../container/common/helpers/ContainerInfo.java |  333 +
 .../common/helpers/DeleteBlockResult.java   |   51 +
 .../scm/container/common/helpers/Pipeline.ja

[20/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/client/src/main/java/org/apache/hadoop/scm/storage/ChunkOutputStream.java
--
diff --git 
a/hadoop-hdsl/client/src/main/java/org/apache/hadoop/scm/storage/ChunkOutputStream.java
 
b/hadoop-hdsl/client/src/main/java/org/apache/hadoop/scm/storage/ChunkOutputStream.java
deleted file mode 100644
index 52a981f..000
--- 
a/hadoop-hdsl/client/src/main/java/org/apache/hadoop/scm/storage/ChunkOutputStream.java
+++ /dev/null
@@ -1,227 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.scm.storage;
-
-import static org.apache.hadoop.scm.storage.ContainerProtocolCalls.putKey;
-import static org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeChunk;
-
-import java.io.IOException;
-import java.io.OutputStream;
-import java.nio.ByteBuffer;
-import java.util.UUID;
-
-import com.google.protobuf.ByteString;
-
-import org.apache.commons.codec.digest.DigestUtils;
-import org.apache.hadoop.hdsl.protocol.proto.ContainerProtos.ChunkInfo;
-import org.apache.hadoop.hdsl.protocol.proto.ContainerProtos.KeyData;
-import org.apache.hadoop.hdsl.protocol.proto.HdslProtos.KeyValue;
-import org.apache.hadoop.scm.XceiverClientManager;
-import org.apache.hadoop.scm.XceiverClientSpi;
-
-/**
- * An {@link OutputStream} used by the REST service in combination with the
- * SCMClient to write the value of a key to a sequence
- * of container chunks.  Writes are buffered locally and periodically written 
to
- * the container as a new chunk.  In order to preserve the semantics that
- * replacement of a pre-existing key is atomic, each instance of the stream has
- * an internal unique identifier.  This unique identifier and a monotonically
- * increasing chunk index form a composite key that is used as the chunk name.
- * After all data is written, a putKey call creates or updates the 
corresponding
- * container key, and this call includes the full list of chunks that make up
- * the key data.  The list of chunks is updated all at once.  Therefore, a
- * concurrent reader never can see an intermediate state in which different
- * chunks of data from different versions of the key data are interleaved.
- * This class encapsulates all state management for buffering and writing
- * through to the container.
- */
-public class ChunkOutputStream extends OutputStream {
-
-  private final String containerKey;
-  private final String key;
-  private final String traceID;
-  private final KeyData.Builder containerKeyData;
-  private XceiverClientManager xceiverClientManager;
-  private XceiverClientSpi xceiverClient;
-  private ByteBuffer buffer;
-  private final String streamId;
-  private int chunkIndex;
-  private int chunkSize;
-
-  /**
-   * Creates a new ChunkOutputStream.
-   *
-   * @param containerKey container key
-   * @param key chunk key
-   * @param xceiverClientManager client manager that controls client
-   * @param xceiverClient client to perform container calls
-   * @param traceID container protocol call args
-   * @param chunkSize chunk size
-   */
-  public ChunkOutputStream(String containerKey, String key,
-  XceiverClientManager xceiverClientManager, XceiverClientSpi 
xceiverClient,
-  String traceID, int chunkSize) {
-this.containerKey = containerKey;
-this.key = key;
-this.traceID = traceID;
-this.chunkSize = chunkSize;
-KeyValue keyValue = KeyValue.newBuilder()
-.setKey("TYPE").setValue("KEY").build();
-this.containerKeyData = KeyData.newBuilder()
-.setContainerName(xceiverClient.getPipeline().getContainerName())
-.setName(containerKey)
-.addMetadata(keyValue);
-this.xceiverClientManager = xceiverClientManager;
-this.xceiverClient = xceiverClient;
-this.buffer = ByteBuffer.allocate(chunkSize);
-this.streamId = UUID.randomUUID().toString();
-this.chunkIndex = 0;
-  }
-
-  @Override
-  public synchronized void write(int b) throws IOException {
-checkOpen();
-int rollbackPosition = buffer.position();
-int rollbackLimit = buffer.limit();
-buffer.put((byte)b);
-if (buffer.position() == ch

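The ChunkOutputStream javadoc above describes the write path: bytes are buffered locally, each full buffer is written to the container as one chunk, the chunk name is a composite of the stream's unique identifier and a monotonically increasing index, and a final putKey publishes the complete chunk list in one step so readers never see a mix of old and new chunks. Below is a minimal, self-contained sketch of that flush-on-full pattern; the chunk-name format and the println calls stand in for the real writeChunk/putKey container calls and are illustrative assumptions, not the actual client code.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

/** Toy model of the buffering scheme described in the javadoc above; not the real HDDS client. */
public class ChunkedWriterSketch extends OutputStream {
  private final String streamId = UUID.randomUUID().toString(); // unique per stream instance
  private final int chunkSize;
  private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
  private final List<String> chunkNames = new ArrayList<>();    // chunk list published at close time
  private int chunkIndex = 0;

  public ChunkedWriterSketch(int chunkSize) {
    this.chunkSize = chunkSize;
  }

  @Override
  public void write(int b) throws IOException {
    buffer.write(b);
    if (buffer.size() == chunkSize) {
      flushChunk();                                             // a full buffer becomes one chunk
    }
  }

  @Override
  public void close() throws IOException {
    if (buffer.size() > 0) {
      flushChunk();                                             // trailing partial chunk
    }
    // The real stream issues putKey here, replacing the key's chunk list in a single call.
    System.out.println("putKey -> " + chunkNames);
  }

  private void flushChunk() {
    // Hypothetical composite name: stream id plus a monotonically increasing chunk index.
    String chunkName = streamId + "_chunk_" + (++chunkIndex);
    byte[] data = buffer.toByteArray();
    System.out.println("writeChunk " + chunkName + " (" + data.length + " bytes)");
    chunkNames.add(chunkName);
    buffer.reset();
  }

  public static void main(String[] args) throws IOException {
    try (OutputStream out = new ChunkedWriterSketch(4)) {
      out.write("hello world".getBytes());
    }
  }
}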
[80/83] [abbrv] hadoop git commit: HDFS-13407. Ozone: Use separated version schema for Hdds/Ozone projects. Contributed by Elek Marton.

2018-04-24 Thread xyao
HDFS-13407. Ozone: Use separated version schema for Hdds/Ozone projects. 
Contributed by Elek Marton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eea3128f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eea3128f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eea3128f

Branch: refs/heads/trunk
Commit: eea3128fdbdb8553dd6f8a4d20de62cb130c6e39
Parents: c10788e
Author: Xiaoyu Yao 
Authored: Mon Apr 16 16:17:52 2018 -0700
Committer: Xiaoyu Yao 
Committed: Tue Apr 17 09:11:29 2018 -0700

--
 dev-support/bin/dist-layout-stitching| 25 -
 hadoop-dist/pom.xml  |  1 +
 hadoop-hdds/client/pom.xml   |  4 ++--
 hadoop-hdds/common/pom.xml   |  4 ++--
 hadoop-hdds/container-service/pom.xml|  4 ++--
 hadoop-hdds/framework/pom.xml|  4 ++--
 hadoop-hdds/pom.xml  |  2 +-
 hadoop-hdds/server-scm/pom.xml   |  4 ++--
 hadoop-hdds/tools/pom.xml|  4 ++--
 hadoop-ozone/client/pom.xml  |  4 ++--
 hadoop-ozone/common/pom.xml  |  4 ++--
 hadoop-ozone/integration-test/pom.xml|  4 ++--
 hadoop-ozone/objectstore-service/pom.xml |  4 ++--
 hadoop-ozone/ozone-manager/pom.xml   |  4 ++--
 hadoop-ozone/pom.xml |  2 +-
 hadoop-ozone/tools/pom.xml   |  4 ++--
 hadoop-project/pom.xml   | 32 +--
 pom.xml  |  3 +++
 18 files changed, 60 insertions(+), 53 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/eea3128f/dev-support/bin/dist-layout-stitching
--
diff --git a/dev-support/bin/dist-layout-stitching 
b/dev-support/bin/dist-layout-stitching
index df043dd..6557161 100755
--- a/dev-support/bin/dist-layout-stitching
+++ b/dev-support/bin/dist-layout-stitching
@@ -21,6 +21,9 @@ VERSION=$1
 # project.build.directory
 BASEDIR=$2
 
+#hdds.version
+HDDS_VERSION=$3
+
 function run()
 {
   declare res
@@ -144,19 +147,19 @@ run cp -p 
"${ROOT}/hadoop-client-modules/hadoop-client-runtime/target/hadoop-cli
 run cp -p 
"${ROOT}/hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-${VERSION}.jar"
 share/hadoop/client/
 
 # HDDS
-run copy "${ROOT}/hadoop-hdds/common/target/hadoop-hdds-common-${VERSION}" .
-run copy 
"${ROOT}/hadoop-hdds/framework/target/hadoop-hdds-server-framework-${VERSION}" .
-run copy 
"${ROOT}/hadoop-hdds/server-scm/target/hadoop-hdds-server-scm-${VERSION}" .
-run copy 
"${ROOT}/hadoop-hdds/container-service/target/hadoop-hdds-container-service-${VERSION}"
 .
-run copy "${ROOT}/hadoop-hdds/client/target/hadoop-hdds-client-${VERSION}" .
-run copy "${ROOT}/hadoop-hdds/tools/target/hadoop-hdds-tools-${VERSION}" .
+run copy 
"${ROOT}/hadoop-hdds/common/target/hadoop-hdds-common-${HDDS_VERSION}" .
+run copy 
"${ROOT}/hadoop-hdds/framework/target/hadoop-hdds-server-framework-${HDDS_VERSION}"
 .
+run copy 
"${ROOT}/hadoop-hdds/server-scm/target/hadoop-hdds-server-scm-${HDDS_VERSION}" .
+run copy 
"${ROOT}/hadoop-hdds/container-service/target/hadoop-hdds-container-service-${HDDS_VERSION}"
 .
+run copy 
"${ROOT}/hadoop-hdds/client/target/hadoop-hdds-client-${HDDS_VERSION}" .
+run copy "${ROOT}/hadoop-hdds/tools/target/hadoop-hdds-tools-${HDDS_VERSION}" .
 
 # Ozone
-run copy "${ROOT}/hadoop-ozone/common/target/hadoop-ozone-common-${VERSION}" .
-run copy 
"${ROOT}/hadoop-ozone/ozone-manager/target/hadoop-ozone-ozone-manager-${VERSION}"
 .
-run copy 
"${ROOT}/hadoop-ozone/objectstore-service/target/hadoop-ozone-objectstore-service-${VERSION}"
 .
-run copy "${ROOT}/hadoop-ozone/client/target/hadoop-ozone-client-${VERSION}" .
-run copy "${ROOT}/hadoop-ozone/tools/target/hadoop-ozone-tools-${VERSION}" .
+run copy 
"${ROOT}/hadoop-ozone/common/target/hadoop-ozone-common-${HDDS_VERSION}" .
+run copy 
"${ROOT}/hadoop-ozone/ozone-manager/target/hadoop-ozone-ozone-manager-${HDDS_VERSION}"
 .
+run copy 
"${ROOT}/hadoop-ozone/objectstore-service/target/hadoop-ozone-objectstore-service-${HDDS_VERSION}"
 .
+run copy 
"${ROOT}/hadoop-ozone/client/target/hadoop-ozone-client-${HDDS_VERSION}" .
+run copy 
"${ROOT}/hadoop-ozone/tools/target/hadoop-ozone-tools-${HDDS_VERSION}" .
 
 run copy 
"${ROOT}/hadoop-tools/hadoop-tools-dist/target/hadoop-tools-dist-${VERSION}" .
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/eea3128f/hadoop-dist/pom.xml
--
diff --git a/hadoop-dist/pom.xml b/hadoop-dist/pom.xml
index 48e6673..f9b8573 100644
--- a/hadoop-dist/pom.xml
+++ b/hadoop-dist/pom.xml
@@ -174,6 +174,7 @@
 
${basedir}/../dev-support/bin/dist-layout-stitching
   

[53/83] [abbrv] hadoop git commit: HDFS-13342. Ozone: Rename and fix ozone CLI scripts. Contributed by Shashikant Banerjee.

2018-04-24 Thread xyao
HDFS-13342. Ozone: Rename and fix ozone CLI scripts. Contributed by Shashikant 
Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8658ed7d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8658ed7d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8658ed7d

Branch: refs/heads/trunk
Commit: 8658ed7dccd2e53aed55f0158293886e7a8a45c8
Parents: b67a7ed
Author: Mukul Kumar Singh 
Authored: Fri Apr 6 16:55:08 2018 +0530
Committer: Mukul Kumar Singh 
Committed: Fri Apr 6 16:55:08 2018 +0530

--
 .../src/main/compose/cblock/docker-compose.yaml |  18 +-
 .../src/main/compose/ozone/docker-compose.yaml  |  14 +-
 .../hdds/scm/StorageContainerManager.java   |   6 +-
 .../src/test/compose/docker-compose.yaml|  14 +-
 .../test/robotframework/acceptance/ozone.robot  |  18 +-
 hadoop-ozone/common/src/main/bin/oz | 202 ---
 hadoop-ozone/common/src/main/bin/ozone  | 202 +++
 hadoop-ozone/common/src/main/bin/start-ozone.sh |  14 +-
 hadoop-ozone/common/src/main/bin/stop-ozone.sh  |  14 +-
 .../apache/hadoop/ozone/freon/OzoneGetConf.java |   4 +-
 .../src/main/shellprofile.d/hadoop-ozone.sh |   2 +-
 .../hadoop/ozone/ksm/KeySpaceManager.java   |   4 +-
 .../apache/hadoop/ozone/web/ozShell/Shell.java  |  16 +-
 .../src/main/site/markdown/OzoneCommandShell.md |  34 ++--
 14 files changed, 281 insertions(+), 281 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8658ed7d/hadoop-dist/src/main/compose/cblock/docker-compose.yaml
--
diff --git a/hadoop-dist/src/main/compose/cblock/docker-compose.yaml 
b/hadoop-dist/src/main/compose/cblock/docker-compose.yaml
index b88514e..fa4d267 100644
--- a/hadoop-dist/src/main/compose/cblock/docker-compose.yaml
+++ b/hadoop-dist/src/main/compose/cblock/docker-compose.yaml
@@ -17,7 +17,7 @@
 version: "3"
 services:
namenode:
-  image: elek/hadoop-runner:o3-refactor
+  image: apache/hadoop-runner
   hostname: namenode
   volumes:
  - ../..//hadoop-${VERSION}:/opt/hadoop
@@ -29,38 +29,38 @@ services:
  - ./docker-config
   command: ["/opt/hadoop/bin/hdfs","namenode"]
datanode:
-  image: elek/hadoop-runner:o3-refactor
+  image: apache/hadoop-runner
   volumes:
 - ../..//hadoop-${VERSION}:/opt/hadoop
   ports:
 - 9864
-  command: ["/opt/hadoop/bin/oz","datanode"]
+  command: ["/opt/hadoop/bin/ozone","datanode"]
   env_file:
  - ./docker-config
jscsi:
-  image: elek/hadoop-runner:o3-refactor
+  image: apache/hadoop-runner
   ports:
 - 3260:3260
   volumes:
  - ../..//hadoop-${VERSION}:/opt/hadoop
   env_file:
   - ./docker-config
-  command: ["/opt/hadoop/bin/oz","jscsi"]
+  command: ["/opt/hadoop/bin/ozone","jscsi"]
cblock:
-  image: elek/hadoop-runner:o3-refactor
+  image: apache/hadoop-runner
   volumes:
  - ../..//hadoop-${VERSION}:/opt/hadoop
   env_file:
   - ./docker-config
-  command: ["/opt/hadoop/bin/oz","cblockserver"]
+  command: ["/opt/hadoop/bin/ozone","cblockserver"]
scm:
-  image: elek/hadoop-runner:o3-refactor
+  image: apache/hadoop-runner
   volumes:
  - ../..//hadoop-${VERSION}:/opt/hadoop
   ports:
  - 9876:9876
   env_file:
   - ./docker-config
-  command: ["/opt/hadoop/bin/oz","scm"]
+  command: ["/opt/hadoop/bin/ozone","scm"]
   environment:
   ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8658ed7d/hadoop-dist/src/main/compose/ozone/docker-compose.yaml
--
diff --git a/hadoop-dist/src/main/compose/ozone/docker-compose.yaml 
b/hadoop-dist/src/main/compose/ozone/docker-compose.yaml
index f2b263c..13a7db6 100644
--- a/hadoop-dist/src/main/compose/ozone/docker-compose.yaml
+++ b/hadoop-dist/src/main/compose/ozone/docker-compose.yaml
@@ -17,7 +17,7 @@
 version: "3"
 services:
namenode:
-  image: elek/hadoop-runner:o3-refactor
+  image: apache/hadoop-runner
   hostname: namenode
   volumes:
  - ../..//hadoop-${VERSION}:/opt/hadoop
@@ -29,16 +29,16 @@ services:
  - ./docker-config
   command: ["/opt/hadoop/bin/hdfs","namenode"]
datanode:
-  image: elek/hadoop-runner:o3-refactor
+  image: apache/hadoop-runner
   volumes:
 - ../..//hadoop-${VERSION}:/opt/hadoop
   ports:
 - 9864
-  command: ["/opt/hadoop/bin/oz","datanode"]
+  command: ["/opt/hadoop/bin/ozone","datanode"]
   env_file:
 - ./docker-config
ksm:
-  ima

[75/83] [abbrv] hadoop git commit: HDFS-13424. Ozone: Refactor MiniOzoneClassicCluster. Contributed by Nanda Kumar.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/06d228a3/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
--
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
index 5885898..4a6ca1d 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
@@ -19,11 +19,9 @@
 package org.apache.hadoop.ozone.container.ozoneimpl;
 
 import org.apache.hadoop.hdds.protocol.proto.ContainerProtos;
-import org.apache.hadoop.ozone.MiniOzoneClassicCluster;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
-import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.container.ContainerTestHelper;
 import org.apache.hadoop.hdds.scm.TestUtils;
 import org.apache.hadoop.ozone.web.utils.OzoneUtils;
@@ -59,8 +57,8 @@ public class TestOzoneContainer {
 OzoneContainer container = null;
 MiniOzoneCluster cluster = null;
 try {
-  cluster = new MiniOzoneClassicCluster.Builder(conf)
-  .setHandlerType(OzoneConsts.OZONE_HANDLER_DISTRIBUTED).build();
+  cluster = MiniOzoneCluster.newBuilder(conf).build();
+  cluster.waitForClusterToBeReady();
   // We don't start Ozone Container via data node, we will do it
   // independently in our test path.
   Pipeline pipeline = ContainerTestHelper.createSingleNodePipeline(
@@ -105,9 +103,10 @@ public class TestOzoneContainer {
   conf.setInt(OzoneConfigKeys.DFS_CONTAINER_IPC_PORT,
   pipeline.getLeader().getContainerPort());
 
-  cluster = new MiniOzoneClassicCluster.Builder(conf)
+  cluster = MiniOzoneCluster.newBuilder(conf)
   .setRandomContainerPort(false)
-  .setHandlerType(OzoneConsts.OZONE_HANDLER_DISTRIBUTED).build();
+  .build();
+  cluster.waitForClusterToBeReady();
 
   // This client talks to ozone container via datanode.
   XceiverClient client = new XceiverClient(pipeline, conf);
@@ -208,9 +207,10 @@ public class TestOzoneContainer {
   OzoneConfiguration conf = newOzoneConfiguration();
 
   client = createClientForTesting(conf);
-  cluster = new MiniOzoneClassicCluster.Builder(conf)
+  cluster = MiniOzoneCluster.newBuilder(conf)
   .setRandomContainerPort(false)
-  .setHandlerType(OzoneConsts.OZONE_HANDLER_DISTRIBUTED).build();
+  .build();
+  cluster.waitForClusterToBeReady();
   String containerName = client.getPipeline().getContainerName();
 
   runTestBothGetandPutSmallFile(containerName, client);
@@ -266,9 +266,10 @@ public class TestOzoneContainer {
   OzoneConfiguration conf = newOzoneConfiguration();
 
   client = createClientForTesting(conf);
-  cluster = new MiniOzoneClassicCluster.Builder(conf)
+  cluster = MiniOzoneCluster.newBuilder(conf)
   .setRandomContainerPort(false)
-  .setHandlerType(OzoneConsts.OZONE_HANDLER_DISTRIBUTED).build();
+  .build();
+  cluster.waitForClusterToBeReady();
   client.connect();
 
   String containerName = client.getPipeline().getContainerName();
@@ -356,9 +357,10 @@ public class TestOzoneContainer {
   OzoneConfiguration conf = newOzoneConfiguration();
 
   client = createClientForTesting(conf);
-  cluster = new MiniOzoneClassicCluster.Builder(conf)
+  cluster = MiniOzoneCluster.newBuilder(conf)
   .setRandomContainerPort(false)
-  .setHandlerType(OzoneConsts.OZONE_HANDLER_DISTRIBUTED).build();
+  .build();
+  cluster.waitForClusterToBeReady();
   client.connect();
 
   String containerName = client.getPipeline().getContainerName();
@@ -471,9 +473,10 @@ public class TestOzoneContainer {
   OzoneConfiguration conf = newOzoneConfiguration();
 
   client = createClientForTesting(conf);
-  cluster = new MiniOzoneClassicCluster.Builder(conf)
+  cluster = MiniOzoneCluster.newBuilder(conf)
   .setRandomContainerPort(false)
-  .setHandlerType(OzoneConsts.OZONE_HANDLER_DISTRIBUTED).build();
+  .build();
+  cluster.waitForClusterToBeReady();
   String containerName = client.getPipeline().getContainerName();
   runAsyncTests(containerName, client);
 } finally {
@@ -492,9 +495,10 @@ public class TestOzoneContainer {
   OzoneConfiguration conf = newOzoneConfiguration();
 
   client = createClientForTesting(conf);
-  cluster = new MiniOzoneClassicCluster.Builder(conf)
+  cluster = MiniOzoneCluster.newBuilder(conf)
   

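The TestOzoneContainer changes above all follow one pattern: tests stop building MiniOzoneClassicCluster with an explicit OZONE_HANDLER_DISTRIBUTED handler type and instead call MiniOzoneCluster.newBuilder(conf), then wait for readiness explicitly. A minimal sketch of the post-refactor scaffolding is below; the shutdown() call in the finally block is an assumption about the surrounding test code, which is truncated in this excerpt.

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.MiniOzoneCluster;

/** Sketch of the refactored test setup; assumes MiniOzoneCluster exposes a shutdown() method. */
public class MiniClusterUsageSketch {
  public void runWithCluster() throws Exception {
    OzoneConfiguration conf = new OzoneConfiguration();
    MiniOzoneCluster cluster = MiniOzoneCluster.newBuilder(conf)
        .setRandomContainerPort(false)    // keep fixed container ports, as in the tests above
        .build();
    cluster.waitForClusterToBeReady();    // explicit readiness wait, as added in the diff above
    try {
      // ... exercise the datanode container or client under test here ...
    } finally {
      cluster.shutdown();                 // assumed cleanup, not visible in this excerpt
    }
  }
}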
[22/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/placement/TestContainerPlacement.java
--
diff --git 
a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/placement/TestContainerPlacement.java
 
b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/placement/TestContainerPlacement.java
new file mode 100644
index 000..0801c25
--- /dev/null
+++ 
b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/placement/TestContainerPlacement.java
@@ -0,0 +1,134 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.container.placement;
+
+import org.apache.commons.math3.stat.descriptive.DescriptiveStatistics;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.scm.container.MockNodeManager;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms
+.SCMContainerPlacementCapacity;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms
+.SCMContainerPlacementRandom;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.List;
+import java.util.Random;
+
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState
+.HEALTHY;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Asserts that allocation strategy works as expected.
+ */
+public class TestContainerPlacement {
+
+  private DescriptiveStatistics computeStatistics(NodeManager nodeManager) {
+DescriptiveStatistics descriptiveStatistics = new DescriptiveStatistics();
+for (DatanodeDetails dd : nodeManager.getNodes(HEALTHY)) {
+  float weightedValue =
+  nodeManager.getNodeStat(dd).get().getScmUsed().get() / (float)
+  nodeManager.getNodeStat(dd).get().getCapacity().get();
+  descriptiveStatistics.addValue(weightedValue);
+}
+return descriptiveStatistics;
+  }
+
+  /**
+   * This test simulates lots of Cluster I/O and updates the metadata in SCM.
+   * We simulate adding and removing containers from the cluster. It asserts
+   * that our placement algorithm has taken the capacity of nodes into
+   * consideration by asserting that standard deviation of used space on these
+   * has improved.
+   */
+  @Test
+  public void testCapacityPlacementYieldsBetterDataDistribution() throws
+  SCMException {
+final int opsCount = 200 * 1000;
+final int nodesRequired = 3;
+Random random = new Random();
+
+// The nature of init code in MockNodeManager yields similar clusters.
+MockNodeManager nodeManagerCapacity = new MockNodeManager(true, 100);
+MockNodeManager nodeManagerRandom = new MockNodeManager(true, 100);
+DescriptiveStatistics beforeCapacity =
+computeStatistics(nodeManagerCapacity);
+DescriptiveStatistics beforeRandom = computeStatistics(nodeManagerRandom);
+
+//Assert that our initial layout of clusters are similar.
+assertEquals(beforeCapacity.getStandardDeviation(), beforeRandom
+.getStandardDeviation(), 0.001);
+
+SCMContainerPlacementCapacity capacityPlacer = new
+SCMContainerPlacementCapacity(nodeManagerCapacity, new 
Configuration());
+SCMContainerPlacementRandom randomPlacer = new
+SCMContainerPlacementRandom(nodeManagerRandom, new Configuration());
+
+for (int x = 0; x < opsCount; x++) {
+  long containerSize = random.nextInt(100) * OzoneConsts.GB;
+  List nodesCapacity =
+  capacityPlacer.chooseDatanodes(nodesRequired, containerSize);
+  assertEquals(nodesRequired, nodesCapacity.size());
+
+  List nodesRandom = 
randomPlacer.chooseDatanodes(nodesRequired,
+  containerSize);
+
+  // One fifth of all calls are delete
+  if (x % 5 == 0) {
+deleteContainer(nodeManagerCapacity, nodesCapacity, containerSize);
+deleteContainer(nodeManagerRandom, nodesRandom, containerSize);
+  } else {
+  

[60/83] [abbrv] hadoop git commit: HDFS-13415. Ozone: Remove cblock code from HDFS-7240. Contributed by Elek, Marton.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea85801c/hadoop-cblock/tools/src/test/org/apache/hadoop/cblock/TestCBlockCLI.java
--
diff --git 
a/hadoop-cblock/tools/src/test/org/apache/hadoop/cblock/TestCBlockCLI.java 
b/hadoop-cblock/tools/src/test/org/apache/hadoop/cblock/TestCBlockCLI.java
deleted file mode 100644
index a3f53aa..000
--- a/hadoop-cblock/tools/src/test/org/apache/hadoop/cblock/TestCBlockCLI.java
+++ /dev/null
@@ -1,242 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-package org.apache.hadoop.cblock;
-
-import org.apache.hadoop.cblock.cli.CBlockCli;
-import org.apache.hadoop.cblock.meta.VolumeDescriptor;
-import org.apache.hadoop.cblock.util.MockStorageClient;
-import org.apache.hadoop.conf.OzoneConfiguration;
-import org.apache.hadoop.hdds.scm.client.ScmClient;
-import org.apache.hadoop.test.GenericTestUtils;
-import org.junit.After;
-import org.junit.AfterClass;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import java.io.ByteArrayOutputStream;
-import java.io.File;
-import java.io.IOException;
-import java.io.PrintStream;
-import java.util.List;
-
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_JSCSIRPC_ADDRESS_KEY;
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_SERVICERPC_ADDRESS_KEY;
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_SERVICE_LEVELDB_PATH_KEY;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
-/**
- * A testing class for cblock command line tool.
- */
-public class TestCBlockCLI {
-  private static final long GB = 1 * 1024 * 1024 * 1024L;
-  private static final int KB = 1024;
-  private static CBlockCli cmd;
-  private static OzoneConfiguration conf;
-  private static CBlockManager cBlockManager;
-  private static ByteArrayOutputStream outContent;
-  private static PrintStream testPrintOut;
-
-  @BeforeClass
-  public static void setup() throws IOException {
-outContent = new ByteArrayOutputStream();
-ScmClient storageClient = new MockStorageClient();
-conf = new OzoneConfiguration();
-String path = GenericTestUtils
-.getTempPath(TestCBlockCLI.class.getSimpleName());
-File filePath = new File(path);
-if (!filePath.exists() && !filePath.mkdirs()) {
-  throw new IOException("Unable to create test DB dir");
-}
-conf.set(DFS_CBLOCK_SERVICERPC_ADDRESS_KEY, "127.0.0.1:0");
-conf.set(DFS_CBLOCK_JSCSIRPC_ADDRESS_KEY, "127.0.0.1:0");
-conf.set(DFS_CBLOCK_SERVICE_LEVELDB_PATH_KEY, path.concat(
-"/testCblockCli.dat"));
-cBlockManager = new CBlockManager(conf, storageClient);
-cBlockManager.start();
-testPrintOut = new PrintStream(outContent);
-cmd = new CBlockCli(conf, testPrintOut);
-  }
-
-  @AfterClass
-  public static void clean() {
-if (cBlockManager != null) {
-  cBlockManager.stop();
-  cBlockManager.join();
-  cBlockManager.clean();
-}
-  }
-
-  @After
-  public void reset() {
-outContent.reset();
-  }
-
-  /**
-   * Test the help command.
-   * @throws Exception
-   */
-  @Test
-  public void testCliHelp() throws Exception {
-PrintStream initialStdOut = System.out;
-System.setOut(testPrintOut);
-String[] args = {"-h"};
-cmd.run(args);
-String helpPrints =
-"usage: cblock\n" +
-" -c,--createVolume" +
-"   create a fresh new volume\n" +
-" -d,--deleteVolume   " +
-"  delete a volume\n" +
-" -h,--help " +
-"  help\n" +
-" -i,--infoVolume " +
-"  info a volume\n" +
-" -l,--listVolume " +
-"  list all volumes\n" +
-" -s,--serverAddr :  " +
-"  specify server address:port\n";
-assertEquals(helpPrints, outContent.toString());
-outContent.reset();
-System.setOut(initialStdOut);
-  }
-
-  /**
-   * Test volume listing command.
-   * @thr

[17/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/protocol/package-info.java
--
diff --git 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/protocol/package-info.java
 
b/hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/protocol/package-info.java
deleted file mode 100644
index 274f859..000
--- 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/protocol/package-info.java
+++ /dev/null
@@ -1,19 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.scm.protocol;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/protocolPB/ScmBlockLocationProtocolClientSideTranslatorPB.java
--
diff --git 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/protocolPB/ScmBlockLocationProtocolClientSideTranslatorPB.java
 
b/hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/protocolPB/ScmBlockLocationProtocolClientSideTranslatorPB.java
deleted file mode 100644
index 0de759f..000
--- 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/protocolPB/ScmBlockLocationProtocolClientSideTranslatorPB.java
+++ /dev/null
@@ -1,207 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership.  The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations 
under
- * the License.
- */
-package org.apache.hadoop.scm.protocolPB;
-
-import com.google.common.base.Preconditions;
-import com.google.common.collect.Sets;
-import com.google.protobuf.RpcController;
-import com.google.protobuf.ServiceException;
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.ipc.ProtobufHelper;
-import org.apache.hadoop.ipc.ProtocolTranslator;
-import org.apache.hadoop.ipc.RPC;
-import org.apache.hadoop.ozone.common.DeleteBlockGroupResult;
-import org.apache.hadoop.ozone.common.BlockGroup;
-import org.apache.hadoop.hdsl.protocol.proto.HdslProtos;
-import 
org.apache.hadoop.hdsl.protocol.proto.ScmBlockLocationProtocolProtos.DeleteScmKeyBlocksRequestProto;
-import 
org.apache.hadoop.hdsl.protocol.proto.ScmBlockLocationProtocolProtos.AllocateScmBlockRequestProto;
-import 
org.apache.hadoop.hdsl.protocol.proto.ScmBlockLocationProtocolProtos.AllocateScmBlockResponseProto;
-import 
org.apache.hadoop.hdsl.protocol.proto.ScmBlockLocationProtocolProtos.DeleteScmKeyBlocksResponseProto;
-import 
org.apache.hadoop.hdsl.protocol.proto.ScmBlockLocationProtocolProtos.GetScmBlockLocationsRequestProto;
-import 
org.apache.hadoop.hdsl.protocol.proto.ScmBlockLocationProtocolProtos.GetScmBlockLocationsResponseProto;
-import 
org.apache.hadoop.hdsl.protocol.proto.ScmBlockLocationProtocolProtos.ScmLocatedBlockProto;
-import 
org.apache.hadoop.hdsl.protocol.proto.ScmBlockLocationProtocolProtos.KeyBlocks;
-import org.apache.hadoop.scm.container.common.helpers.AllocatedBlock;
-import org.apache.hadoop.scm.ScmInfo;
-import org.apache.hadoop.scm.container.common.helpers.Pipeline;
-import org.apache.hadoop.scm.protocol.ScmBlockLocationProtocol;
-
-import java.io.Closeable;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Set;
-import java.util.stream.Collectors;
-
-/**
- * This class is the client-side translator to translate the requests made on
- * the {@link ScmBlockLocationProtocol} interface to the RPC server
- * implementing {@link ScmBlockLocationP

[65/83] [abbrv] hadoop git commit: HDFS-13415. Ozone: Remove cblock code from HDFS-7240. Contributed by Elek, Marton.

2018-04-24 Thread xyao
HDFS-13415. Ozone: Remove cblock code from HDFS-7240. Contributed by Elek, 
Marton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ea85801c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ea85801c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ea85801c

Branch: refs/heads/trunk
Commit: ea85801ce32eeccaac2f6c0726024c37ee1fe192
Parents: 4a8aa0e
Author: Mukul Kumar Singh 
Authored: Wed Apr 11 18:42:16 2018 +0530
Committer: Mukul Kumar Singh 
Committed: Wed Apr 11 18:42:16 2018 +0530

--
 dev-support/bin/dist-layout-stitching   |   4 -
 .../main/resources/assemblies/hadoop-src.xml|   1 -
 hadoop-cblock/pom.xml   |  61 --
 .../server/dev-support/findbugsExcludeFile.xml  |  21 -
 hadoop-cblock/server/pom.xml| 159 -
 .../apache/hadoop/cblock/CBlockConfigKeys.java  | 222 ---
 .../org/apache/hadoop/cblock/CBlockManager.java | 426 -
 .../org/apache/hadoop/cblock/CblockUtils.java   | 129 
 ...ckServiceProtocolClientSideTranslatorPB.java | 135 -
 .../cblock/client/CBlockVolumeClient.java   |  83 ---
 .../hadoop/cblock/client/package-info.java  |  18 -
 .../cblock/exception/CBlockException.java   |  29 -
 .../hadoop/cblock/exception/package-info.java   |  18 -
 .../cblock/jscsiHelper/BlockWriterTask.java | 175 --
 ...ockClientProtocolClientSideTranslatorPB.java | 147 -
 .../cblock/jscsiHelper/CBlockIStorageImpl.java  | 440 --
 .../jscsiHelper/CBlockManagerHandler.java   |  50 --
 .../cblock/jscsiHelper/CBlockTargetMetrics.java | 334 ---
 .../cblock/jscsiHelper/CBlockTargetServer.java  | 128 
 .../jscsiHelper/ContainerCacheFlusher.java  | 599 ---
 .../cblock/jscsiHelper/SCSITargetDaemon.java| 132 
 .../cblock/jscsiHelper/cache/CacheModule.java   |  52 --
 .../cblock/jscsiHelper/cache/LogicalBlock.java  |  50 --
 .../cache/impl/AsyncBlockWriter.java| 221 ---
 .../cache/impl/BlockBufferFlushTask.java| 118 
 .../cache/impl/BlockBufferManager.java  | 184 --
 .../cache/impl/CBlockLocalCache.java| 577 --
 .../jscsiHelper/cache/impl/DiskBlock.java   |  77 ---
 .../jscsiHelper/cache/impl/SyncBlockReader.java | 263 
 .../jscsiHelper/cache/impl/package-info.java|  18 -
 .../cblock/jscsiHelper/cache/package-info.java  |  18 -
 .../hadoop/cblock/jscsiHelper/package-info.java |  18 -
 .../cblock/kubernetes/DynamicProvisioner.java   | 331 --
 .../hadoop/cblock/kubernetes/package-info.java  |  23 -
 .../hadoop/cblock/meta/ContainerDescriptor.java | 107 
 .../hadoop/cblock/meta/VolumeDescriptor.java| 269 -
 .../apache/hadoop/cblock/meta/VolumeInfo.java   |  79 ---
 .../apache/hadoop/cblock/meta/package-info.java |  18 -
 .../org/apache/hadoop/cblock/package-info.java  |  18 -
 .../cblock/proto/CBlockClientProtocol.java  |  38 --
 .../cblock/proto/CBlockServiceProtocol.java |  45 --
 .../cblock/proto/MountVolumeResponse.java   |  79 ---
 .../hadoop/cblock/proto/package-info.java   |  18 -
 .../CBlockClientServerProtocolPB.java   |  37 --
 ...entServerProtocolServerSideTranslatorPB.java | 116 
 .../protocolPB/CBlockServiceProtocolPB.java |  35 --
 ...ckServiceProtocolServerSideTranslatorPB.java | 159 -
 .../hadoop/cblock/protocolPB/package-info.java  |  18 -
 .../hadoop/cblock/storage/StorageManager.java   | 427 -
 .../hadoop/cblock/storage/package-info.java |  18 -
 .../org/apache/hadoop/cblock/util/KeyUtil.java  |  49 --
 .../apache/hadoop/cblock/util/package-info.java |  18 -
 .../main/proto/CBlockClientServerProtocol.proto |  93 ---
 .../src/main/proto/CBlockServiceProtocol.proto  | 133 
 .../src/main/resources/cblock-default.xml   | 347 ---
 .../apache/hadoop/cblock/TestBufferManager.java | 456 --
 .../cblock/TestCBlockConfigurationFields.java   |  35 --
 .../hadoop/cblock/TestCBlockReadWrite.java  | 377 
 .../apache/hadoop/cblock/TestCBlockServer.java  | 212 ---
 .../cblock/TestCBlockServerPersistence.java | 132 
 .../hadoop/cblock/TestLocalBlockCache.java  | 444 --
 .../kubernetes/TestDynamicProvisioner.java  |  74 ---
 .../cblock/util/ContainerLookUpService.java |  73 ---
 .../hadoop/cblock/util/MockStorageClient.java   | 176 --
 .../dynamicprovisioner/expected1-pv.json|  54 --
 .../dynamicprovisioner/input1-pvc.json  |  55 --
 hadoop-cblock/tools/pom.xml |  42 --
 .../org/apache/hadoop/cblock/cli/CBlockCli.java | 265 
 .../apache/hadoop/cblock/cli/package-info.java  |  18 -
 .../org/apache/hadoop/cblock/TestCBlockCLI.java | 242 
 .../src/main/bin/hadoop-functions.sh|   2 -
 .../hadoop-common/src/main/conf/

[73/83] [abbrv] hadoop git commit: HDFS-13446. Ozone: Fix OzoneFileSystem contract test failures. Contributed by Mukul Kumar Singh.

2018-04-24 Thread xyao
HDFS-13446. Ozone: Fix OzoneFileSystem contract test failures. Contributed by 
Mukul Kumar Singh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fd84dea0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fd84dea0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fd84dea0

Branch: refs/heads/trunk
Commit: fd84dea03e9aa3f8d98247c8663def31823d614f
Parents: 72a3743
Author: Nanda kumar 
Authored: Sun Apr 15 00:02:22 2018 +0530
Committer: Nanda kumar 
Committed: Sun Apr 15 00:02:22 2018 +0530

--
 hadoop-hdds/container-service/pom.xml   |   5 +
 .../states/datanode/InitDatanodeState.java  |   7 +
 .../src/test/resources/log4j.properties |  23 ++
 hadoop-hdds/server-scm/pom.xml  |   5 +
 .../TestStorageContainerManagerHttpServer.java  |   2 +-
 .../hdds/scm/block/TestDeletedBlockLog.java |   4 +-
 .../hdds/scm/container/MockNodeManager.java |   2 +-
 .../ozone/client/io/ChunkGroupOutputStream.java |   3 +
 hadoop-tools/hadoop-ozone/pom.xml   |  25 ++
 .../hadoop/fs/ozone/TestOzoneFSInputStream.java |   4 +-
 .../fs/ozone/TestOzoneFileInterfaces.java   | 229 ++
 .../contract/ITestOzoneContractCreate.java  |  48 
 .../contract/ITestOzoneContractDelete.java  |  48 
 .../contract/ITestOzoneContractDistCp.java  |  50 
 .../ITestOzoneContractGetFileStatus.java|  61 +
 .../ozone/contract/ITestOzoneContractMkdir.java |  48 
 .../ozone/contract/ITestOzoneContractOpen.java  |  47 
 .../contract/ITestOzoneContractRename.java  |  49 
 .../contract/ITestOzoneContractRootDir.java |  51 
 .../ozone/contract/ITestOzoneContractSeek.java  |  47 
 .../hadoop/fs/ozone/contract/OzoneContract.java | 122 ++
 .../src/test/resources/contract/ozone.xml   | 113 +
 .../src/test/resources/log4j.properties |  23 ++
 .../fs/ozone/TestOzoneFileInterfaces.java   | 235 ---
 .../contract/ITestOzoneContractCreate.java  |  48 
 .../contract/ITestOzoneContractDelete.java  |  48 
 .../contract/ITestOzoneContractDistCp.java  |  50 
 .../ITestOzoneContractGetFileStatus.java|  61 -
 .../ozone/contract/ITestOzoneContractMkdir.java |  48 
 .../ozone/contract/ITestOzoneContractOpen.java  |  47 
 .../contract/ITestOzoneContractRename.java  |  49 
 .../contract/ITestOzoneContractRootDir.java |  51 
 .../ozone/contract/ITestOzoneContractSeek.java  |  47 
 .../hadoop/fs/ozone/contract/OzoneContract.java | 125 --
 .../src/todo/resources/contract/ozone.xml   | 113 -
 .../src/todo/resources/log4j.properties |  23 --
 36 files changed, 1010 insertions(+), 951 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fd84dea0/hadoop-hdds/container-service/pom.xml
--
diff --git a/hadoop-hdds/container-service/pom.xml 
b/hadoop-hdds/container-service/pom.xml
index 736272d..f09f03d 100644
--- a/hadoop-hdds/container-service/pom.xml
+++ b/hadoop-hdds/container-service/pom.xml
@@ -52,6 +52,11 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd";>
   test
 
 
+
+  io.dropwizard.metrics
+  metrics-core
+  test
+
   
 
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fd84dea0/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
index ac245d5..f04d392 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
@@ -20,6 +20,7 @@ import com.google.common.base.Strings;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdds.HddsUtils;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.ozone.container.common.helpers.ContainerUtils;
 import org.apache.hadoop.ozone.container.common.statemachine
 .DatanodeStateMachine;
@@ -107,6 +108,12 @@ public class InitDatanodeState implements DatanodeState,
*/
   private void persistContainerDatanodeDetails() throws IOException {
 String dataNodeIDPath = HddsUtils.getDatanodeIdFilePath(conf);
+if (Strings.isNullOrEmpty

[29/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/StorageContainerManager.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/StorageContainerManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/StorageContainerManager.java
new file mode 100644
index 000..1a78dee
--- /dev/null
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/StorageContainerManager.java
@@ -0,0 +1,1290 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.hdds.scm;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
+import com.google.common.cache.RemovalListener;
+import com.google.common.cache.RemovalNotification;
+import com.google.protobuf.BlockingService;
+import com.google.protobuf.InvalidProtocolBufferException;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdds.HddsUtils;
+import org.apache.hadoop.hdds.scm.block.BlockManager;
+import org.apache.hadoop.hdds.scm.block.BlockManagerImpl;
+import org.apache.hadoop.hdds.scm.container.ContainerMapping;
+import org.apache.hadoop.hdds.scm.container.Mapping;
+import org.apache.hadoop.hdds.scm.container.common.helpers.AllocatedBlock;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.common.helpers.DeleteBlockResult;
+import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.ContainerStat;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMMetrics;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.SCMNodeManager;
+import org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol;
+import org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol;
+import org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolPB;
+import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolPB;
+import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.DatanodeDetailsProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
+import org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerBlocksDeletionACKProto;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerBlocksDeletionACKProto
+.DeleteBlockTransactionResult;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos
+.ContainerBlocksDeletionACKResponseProto;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerReportsRequestProto;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerReportsResponseProto;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ReportState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMCmdType;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMCommandResponseProto;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMHeartbeatResponseProto;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMNodeAddressList;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContain

[74/83] [abbrv] hadoop git commit: HDFS-13424. Ozone: Refactor MiniOzoneClassicCluster. Contributed by Nanda Kumar.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/06d228a3/hadoop-tools/hadoop-ozone/pom.xml
--
diff --git a/hadoop-tools/hadoop-ozone/pom.xml 
b/hadoop-tools/hadoop-ozone/pom.xml
index a7d0cfa..1cacbb3 100644
--- a/hadoop-tools/hadoop-ozone/pom.xml
+++ b/hadoop-tools/hadoop-ozone/pom.xml
@@ -170,5 +170,30 @@
   hadoop-mapreduce-client-jobclient
   test
 
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-hdds-server-framework</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-hdds-server-scm</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-hdds-client</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-hdds-container-service</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-ozone-ozone-manager</artifactId>
+      <scope>test</scope>
+    </dependency>
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/06d228a3/hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
--
diff --git 
a/hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
 
b/hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
index f09dd2a..a7a53dc 100644
--- 
a/hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
+++ 
b/hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
@@ -24,13 +24,12 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdfs.DFSUtil;
-import org.apache.hadoop.hdfs.server.datanode.DataNode;
 import org.apache.hadoop.hdfs.server.datanode.ObjectStoreHandler;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
-import org.apache.hadoop.ozone.MiniOzoneClassicCluster;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.apache.hadoop.ozone.OzoneConfigKeys;
-import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.web.handlers.BucketArgs;
 import org.apache.hadoop.ozone.web.handlers.UserArgs;
 import org.apache.hadoop.ozone.web.interfaces.StorageHandler;
@@ -48,7 +47,7 @@ import java.util.Arrays;
  * Test OzoneFSInputStream by reading through multiple interfaces.
  */
 public class TestOzoneFSInputStream {
-  private static MiniOzoneClassicCluster cluster = null;
+  private static MiniOzoneCluster cluster = null;
   private static FileSystem fs;
   private static StorageHandler storageHandler;
   private static Path filePath = null;
@@ -66,10 +65,10 @@ public class TestOzoneFSInputStream {
   public static void init() throws Exception {
 OzoneConfiguration conf = new OzoneConfiguration();
 conf.setLong(OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE_IN_MB, 10);
-cluster = new MiniOzoneClassicCluster.Builder(conf)
-.numDataNodes(10)
-.setHandlerType(OzoneConsts.OZONE_HANDLER_DISTRIBUTED)
+cluster = MiniOzoneCluster.newBuilder(conf)
+.setNumDatanodes(10)
 .build();
+cluster.waitForClusterToBeReady();
 storageHandler =
 new ObjectStoreHandler(conf).getStorageHandler();
 
@@ -88,9 +87,10 @@ public class TestOzoneFSInputStream {
 storageHandler.createBucket(bucketArgs);
 
 // Fetch the host and port for File System init
-DataNode dataNode = cluster.getDataNodes().get(0);
-int port = dataNode.getInfoPort();
-String host = dataNode.getDatanodeHostname();
+DatanodeDetails datanodeDetails = cluster.getHddsDatanodes().get(0)
+.getDatanodeDetails();
+int port = datanodeDetails.getOzoneRestPort();
+String host = datanodeDetails.getHostName();
 
 // Set the fs.defaultFS and start the filesystem
 String uri = String.format("%s://%s.%s/",
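
[Editor's note] The test change above already demonstrates the refactored entry point that replaces MiniOzoneClassicCluster. A condensed, illustrative sketch of the new API follows; the datanode count and the shutdown() call are assumptions drawn from the rest of this commit, not part of this hunk:

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.MiniOzoneCluster;

public class MiniOzoneClusterDemo {
  public static void main(String[] args) throws Exception {
    OzoneConfiguration conf = new OzoneConfiguration();
    // The single builder replaces MiniOzoneClassicCluster.Builder; there is no
    // handler-type knob any more, only the number of datanodes to start.
    MiniOzoneCluster cluster = MiniOzoneCluster.newBuilder(conf)
        .setNumDatanodes(3)
        .build();
    cluster.waitForClusterToBeReady();
    try {
      // ... exercise the cluster, e.g. through an ObjectStoreHandler ...
    } finally {
      cluster.shutdown();
    }
  }
}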

http://git-wip-us.apache.org/repos/asf/hadoop/blob/06d228a3/hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
--
diff --git 
a/hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
 
b/hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
index a1c9404..9f94e37 100644
--- 
a/hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
+++ 
b/hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
@@ -23,6 +23,7 @@ import java.net.URI;
 import java.util.Arrays;
 import java.util.Collection;
 
+import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.junit.Before;
 import org.junit.Test;
 import org.junit.runner.RunWith;
@@ -40,8 +41,6 @@ import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import o

[83/83] [abbrv] hadoop git commit: Merge branch 'HDFS-7240' into trunk

2018-04-24 Thread xyao
Merge branch 'HDFS-7240' into trunk


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9d6befb2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9d6befb2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9d6befb2

Branch: refs/heads/trunk
Commit: 9d6befb2989a127b5dd7776522a12357b4eb3c61
Parents: 1b9ecc2 9197588
Author: Xiaoyu Yao 
Authored: Tue Apr 24 12:08:43 2018 -0700
Committer: Xiaoyu Yao 
Committed: Tue Apr 24 12:08:43 2018 -0700

--
 LICENSE.txt |   68 +
 dev-support/bin/dist-layout-stitching   |   22 +-
 dev-support/docker/Dockerfile   |3 +
 .../assemblies/hadoop-src-with-hdsl.xml |   56 +
 .../main/resources/assemblies/hadoop-src.xml|2 +
 .../ensure-jars-have-correct-contents.sh|6 +
 .../hadoop-client-minicluster/pom.xml   |7 +
 .../hadoop-client-runtime/pom.xml   |1 +
 .../src/main/bin/hadoop-functions.sh|5 +
 .../hadoop-common/src/main/conf/hadoop-env.sh   |   17 +
 .../src/main/conf/log4j.properties  |   34 +
 .../src/main/resources/core-default.xml |   13 +
 .../conf/TestCommonConfigurationFields.java |3 +
 hadoop-dist/pom.xml |   83 ++
 hadoop-dist/src/main/compose/ozone/.env |   17 +
 .../src/main/compose/ozone/docker-compose.yaml  |   61 +
 .../src/main/compose/ozone/docker-config|   35 +
 hadoop-hdds/client/pom.xml  |   49 +
 .../apache/hadoop/hdds/scm/XceiverClient.java   |  192 +++
 .../hadoop/hdds/scm/XceiverClientHandler.java   |  202 +++
 .../hdds/scm/XceiverClientInitializer.java  |   72 +
 .../hadoop/hdds/scm/XceiverClientManager.java   |  218 +++
 .../hadoop/hdds/scm/XceiverClientMetrics.java   |   92 ++
 .../hadoop/hdds/scm/XceiverClientRatis.java |  266 
 .../scm/client/ContainerOperationClient.java|  407 ++
 .../hadoop/hdds/scm/client/HddsClientUtils.java |  232 
 .../hadoop/hdds/scm/client/package-info.java|   23 +
 .../apache/hadoop/hdds/scm/package-info.java|   23 +
 .../hdds/scm/storage/ChunkInputStream.java  |  261 
 .../hdds/scm/storage/ChunkOutputStream.java |  227 +++
 .../hadoop/hdds/scm/storage/package-info.java   |   23 +
 .../common/dev-support/findbugsExcludeFile.xml  |   21 +
 hadoop-hdds/common/pom.xml  |  128 ++
 .../org/apache/hadoop/hdds/HddsConfigKeys.java  |   23 +
 .../java/org/apache/hadoop/hdds/HddsUtils.java  |  318 +
 .../apache/hadoop/hdds/client/OzoneQuota.java   |  203 +++
 .../hadoop/hdds/client/ReplicationFactor.java   |   63 +
 .../hadoop/hdds/client/ReplicationType.java |   28 +
 .../apache/hadoop/hdds/client/package-info.java |   23 +
 .../hadoop/hdds/conf/HddsConfServlet.java   |  182 +++
 .../hadoop/hdds/conf/OzoneConfiguration.java|  162 +++
 .../apache/hadoop/hdds/conf/package-info.java   |   18 +
 .../org/apache/hadoop/hdds/package-info.java|   23 +
 .../hadoop/hdds/protocol/DatanodeDetails.java   |  353 +
 .../hadoop/hdds/protocol/package-info.java  |   22 +
 .../apache/hadoop/hdds/scm/ScmConfigKeys.java   |  271 
 .../org/apache/hadoop/hdds/scm/ScmInfo.java |   81 ++
 .../hadoop/hdds/scm/XceiverClientSpi.java   |  129 ++
 .../hadoop/hdds/scm/client/ScmClient.java   |  139 ++
 .../hadoop/hdds/scm/client/package-info.java|   24 +
 .../hadoop/hdds/scm/container/ContainerID.java  |   97 ++
 .../common/helpers/AllocatedBlock.java  |   77 ++
 .../container/common/helpers/ContainerInfo.java |  333 +
 .../common/helpers/DeleteBlockResult.java   |   51 +
 .../scm/container/common/helpers/Pipeline.java  |  253 
 .../common/helpers/PipelineChannel.java |  122 ++
 .../helpers/StorageContainerException.java  |  104 ++
 .../container/common/helpers/package-info.java  |   22 +
 .../hadoop/hdds/scm/container/package-info.java |   18 +
 .../apache/hadoop/hdds/scm/package-info.java|   24 +
 .../hdds/scm/protocol/LocatedContainer.java |  127 ++
 .../scm/protocol/ScmBlockLocationProtocol.java  |   72 +
 .../hdds/scm/protocol/ScmLocatedBlock.java  |  100 ++
 .../StorageContainerLocationProtocol.java   |  124 ++
 .../hadoop/hdds/scm/protocol/package-info.java  |   19 +
 ...kLocationProtocolClientSideTranslatorPB.java |  215 +++
 .../protocolPB/ScmBlockLocationProtocolPB.java  |   35 +
 ...rLocationProtocolClientSideTranslatorPB.java |  316 +
 .../StorageContainerLocationProtocolPB.java |   36 +
 .../hdds/scm/protocolPB/package-info.java   |   24 +
 .../scm/storage/ContainerProtocolCalls.java |  396 ++
 .../hadoop/hdds/scm/storage/package-info.java   |   23 +
 .../java/org/apache/hadoop/ozone/OzoneAcl.java  |  231 
 .../apache/hadoop/ozone/OzoneConfigKeys.java|  241 
 .../org/apache/hadoop/ozone/Ozo

[82/83] [abbrv] hadoop git commit: Merge branch 'trunk' into HDFS-7240

2018-04-24 Thread xyao
Merge branch 'trunk' into HDFS-7240


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/91975886
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/91975886
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/91975886

Branch: refs/heads/trunk
Commit: 91975886e334aa8d2455e49e9e30db399e41e0cc
Parents: 94cb164 e4c39f3
Author: Anu Engineer 
Authored: Wed Apr 18 20:12:40 2018 -0700
Committer: Anu Engineer 
Committed: Wed Apr 18 20:12:40 2018 -0700

--
 .../hadoop/util/GenericOptionsParser.java   |   3 +
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |   0
 .../java/org/apache/hadoop/hdfs/DFSPacket.java  |   0
 .../org/apache/hadoop/hdfs/TestDFSPacket.java   |   0
 .../federation/metrics/FederationMetrics.java   |   2 +-
 .../resolver/order/LocalResolver.java   |   3 +-
 .../router/RouterHeartbeatService.java  |   4 +-
 .../federation/router/RouterRpcClient.java  |   4 +-
 .../federation/router/RouterRpcServer.java  |   6 +-
 .../federation/store/StateStoreService.java |   2 +-
 .../driver/impl/StateStoreFileBaseImpl.java |   2 +-
 .../main/webapps/router/federationhealth.html   |  17 +-
 .../src/main/webapps/router/federationhealth.js |   5 +-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   6 +-
 .../hdfs/qjournal/server/JournalNode.java   |  62 +-
 .../qjournal/server/JournalNodeHttpServer.java  |  65 --
 .../qjournal/server/JournalNodeRpcServer.java   |  30 ++-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |   9 +-
 .../datanode/fsdataset/impl/FsVolumeList.java   |   9 +-
 .../OfflineImageReconstructor.java  |   4 +-
 .../src/main/resources/hdfs-default.xml |  33 +++
 .../hdfs/client/impl/BlockReaderTestUtil.java   |   5 +
 .../TestJournalNodeRespectsBindHostKeys.java| 200 +++
 .../fsdataset/impl/TestFsDatasetImpl.java   | 107 +-
 .../apache/hadoop/hdfs/tools/TestDFSAdmin.java  |  80 +++-
 .../TestOfflineImageViewer.java |   7 +-
 .../security/TestRefreshUserMappings.java   |  10 +-
 .../v2/app/job/impl/TaskAttemptImpl.java|   0
 .../mapreduce/v2/app/job/impl/TaskImpl.java |   0
 .../mapreduce/v2/app/job/impl/TestTaskImpl.java |   0
 .../org/apache/hadoop/mapred/NotRunningJob.java |   2 +-
 .../mapred/TestClientServiceDelegate.java   |   4 +-
 .../apache/hadoop/mapred/TestYARNRunner.java|   3 +-
 hadoop-project/pom.xml  |   2 +-
 hadoop-project/src/site/site.xml|   1 +
 .../src/site/resources/css/site.css |  30 +++
 .../hadoop-aws/src/site/resources/css/site.css  |  30 +++
 .../src/site/resources/css/site.css |  30 +++
 .../src/site/resources/css/site.css |  30 +++
 hadoop-tools/hadoop-sls/pom.xml |   1 +
 .../org/apache/hadoop/yarn/sls/SLSRunner.java   |  35 +++-
 .../apache/hadoop/yarn/sls/utils/SLSUtils.java  |  24 ++-
 .../hadoop/yarn/sls/utils/TestSLSUtils.java |  25 +++
 .../test/resources/nodes-with-resources.json|  19 ++
 .../MySQL/FederationStateStoreTables.sql|   2 +-
 .../yarn/api/records/ApplicationReport.java |  45 -
 .../src/main/proto/yarn_protos.proto|   1 +
 .../client/SystemServiceManagerImpl.java|  22 +-
 ...RN-Simplified-V1-API-Layer-For-Services.yaml |   8 +-
 .../hadoop/yarn/service/TestApiServer.java  |  45 -
 .../service/client/TestSystemServiceImpl.java   | 180 -
 .../client/TestSystemServiceManagerImpl.java| 182 +
 .../resources/system-services/bad/bad.yarnfile  |  16 ++
 .../sync/user1/example-app1.yarnfile|  16 ++
 .../sync/user1/example-app2.yarnfile|  16 ++
 .../sync/user1/example-app3.json|  16 ++
 .../sync/user2/example-app1.yarnfile|  16 ++
 .../sync/user2/example-app2.yarnfile|  16 ++
 .../users/sync/user1/example-app1.yarnfile  |  16 --
 .../users/sync/user1/example-app2.yarnfile  |  16 --
 .../users/sync/user1/example-app3.json  |  16 --
 .../users/sync/user2/example-app1.yarnfile  |  16 --
 .../users/sync/user2/example-app2.yarnfile  |  16 --
 .../yarn/service/ContainerFailureTracker.java   |   7 +-
 .../hadoop/yarn/service/ServiceMaster.java  |  18 +-
 .../hadoop/yarn/service/ServiceScheduler.java   |   5 +-
 .../service/api/records/ReadinessCheck.java |   1 +
 .../yarn/service/api/records/Service.java   |  24 +++
 .../yarn/service/client/ClientAMProxy.java  |   5 +-
 .../yarn/service/client/ServiceClient.java  |  56 +++---
 .../yarn/service/component/Component.java   |  16 +-
 .../component/instance/ComponentInstance.java   |  20 ++
 .../yarn/service/conf/YarnServiceConf.java  |  29 ++-
 .../yarn/service/monitor/ServiceMonitor.java|   7 +-
 .../service/monitor/probe/DefaultProb

[64/83] [abbrv] hadoop git commit: HDFS-13415. Ozone: Remove cblock code from HDFS-7240. Contributed by Elek, Marton.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea85801c/hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/CBlockIStorageImpl.java
--
diff --git 
a/hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/CBlockIStorageImpl.java
 
b/hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/CBlockIStorageImpl.java
deleted file mode 100644
index 4744968..000
--- 
a/hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/CBlockIStorageImpl.java
+++ /dev/null
@@ -1,440 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.cblock.jscsiHelper;
-
-import org.apache.commons.codec.digest.DigestUtils;
-import org.apache.commons.lang.StringUtils;
-import org.apache.hadoop.cblock.jscsiHelper.cache.CacheModule;
-import org.apache.hadoop.cblock.jscsiHelper.cache.LogicalBlock;
-import org.apache.hadoop.cblock.jscsiHelper.cache.impl.CBlockLocalCache;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdds.scm.XceiverClientManager;
-import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
-import org.jscsi.target.storage.IStorageModule;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.List;
-
-import static org.apache.hadoop.cblock.CBlockConfigKeys.DFS_CBLOCK_TRACE_IO;
-import static org.apache.hadoop.cblock.CBlockConfigKeys
-.DFS_CBLOCK_TRACE_IO_DEFAULT;
-
-/**
- * The SCSI Target class for CBlockSCSIServer.
- */
-final public class CBlockIStorageImpl implements IStorageModule {
-  private static final Logger LOGGER =
-  LoggerFactory.getLogger(CBlockIStorageImpl.class);
-  private static final Logger TRACER =
-  LoggerFactory.getLogger("TraceIO");
-
-  private CacheModule cache;
-  private final long volumeSize;
-  private final int blockSize;
-  private final String userName;
-  private final String volumeName;
-  private final boolean traceEnabled;
-  private final Configuration conf;
-  private final ContainerCacheFlusher flusher;
-  private List<Pipeline> fullContainerList;
-
-  /**
-   * private: constructs a SCSI Target.
-   *
-   * @param config - config
-   * @param userName - Username
-   * @param volumeName - Name of the volume
-   * @param volumeSize - Size of the volume
-   * @param blockSize - Size of the block
-   * @param fullContainerList - Ordered list of containers that make up this
-   * volume.
-   * @param flusher - flusher which is used to flush data from
-   *  level db cache to containers
-   * @throws IOException - Throws IOException.
-   */
-  private CBlockIStorageImpl(Configuration config, String userName,
-  String volumeName, long volumeSize, int blockSize,
-  List<Pipeline> fullContainerList, ContainerCacheFlusher flusher) {
-this.conf = config;
-this.userName = userName;
-this.volumeName = volumeName;
-this.volumeSize = volumeSize;
-this.blockSize = blockSize;
-this.fullContainerList = new ArrayList<>(fullContainerList);
-this.flusher = flusher;
-this.traceEnabled = conf.getBoolean(DFS_CBLOCK_TRACE_IO,
-DFS_CBLOCK_TRACE_IO_DEFAULT);
-  }
-
-  /**
-   * private: initialize the cache.
-   *
-   * @param xceiverClientManager - client manager that is used for creating new
-   * connections to containers.
-   * @param metrics  - target metrics to maintain metrics for target server
-   * @throws IOException - Throws IOException.
-   */
-  private void initCache(XceiverClientManager xceiverClientManager,
-  CBlockTargetMetrics metrics) throws IOException {
-this.cache = CBlockLocalCache.newBuilder()
-.setConfiguration(conf)
-.setVolumeName(this.volumeName)
-.setUserName(this.userName)
-.setPipelines(this.fullContainerList)
-.setClientManager(xceiverClientManager)
-.setBlockSize(blockSize)
-.setVolumeSize(volumeSize)
-.setFlusher(flusher)
-.setCBlockTargetMetrics(metrics)
-.build();
-this.cache.start();
-  }
-
-  /**
-   * Gets a new builder for CBlockStorageImpl.
-   *
-   * @r

[79/83] [abbrv] hadoop git commit: HDFS-13444. Ozone: Fix checkstyle issues in HDFS-7240. Contributed by Lokesh Jain.

2018-04-24 Thread xyao
HDFS-13444. Ozone: Fix checkstyle issues in HDFS-7240. Contributed by Lokesh 
Jain.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c10788ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c10788ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c10788ec

Branch: refs/heads/trunk
Commit: c10788ec8fe31754cec5c39623ffbf979ca14c3b
Parents: 1e0507a
Author: Nanda kumar 
Authored: Tue Apr 17 16:11:47 2018 +0530
Committer: Nanda kumar 
Committed: Tue Apr 17 16:11:47 2018 +0530

--
 .../org/apache/hadoop/hdds/HddsConfigKeys.java  |   2 +-
 .../java/org/apache/hadoop/hdds/HddsUtils.java  |   2 +-
 .../hadoop/hdds/conf/HddsConfServlet.java   |  39 ++---
 .../hadoop/hdds/scm/container/package-info.java |  18 +++
 .../hadoop/ozone/web/utils/package-info.java|  19 +++
 .../apache/hadoop/hdds/scm/HddsServerUtil.java  |   5 +-
 .../apache/hadoop/hdds/scm/package-info.java|  19 +++
 .../hadoop/ozone/HddsDatanodeService.java   |   2 +-
 .../container/common/impl/ChunkManagerImpl.java |   2 +
 .../ozone/container/common/impl/Dispatcher.java |   2 +-
 .../protocol/commands/RegisteredCommand.java|   8 +-
 .../hadoop/ozone/protocolPB/package-info.java   |  19 +++
 .../container/common/ContainerTestUtils.java|   2 +-
 .../apache/hadoop/hdds/server/ServerUtils.java  |   2 +-
 .../org/apache/hadoop/hdds/scm/TestUtils.java   |   2 +-
 .../placement/TestContainerPlacement.java   |   4 +-
 .../hdds/scm/cli/OzoneCommandHandler.java   |   2 +-
 .../org/apache/hadoop/hdds/scm/cli/SCMCLI.java  |   5 +-
 .../cli/container/CloseContainerHandler.java|   5 +-
 .../ozone/client/TestHddsClientUtils.java   |   4 +-
 .../java/org/apache/hadoop/ozone/KsmUtils.java  |   2 +-
 .../apache/hadoop/ozone/freon/OzoneGetConf.java |  18 +--
 .../container/TestContainerStateManager.java|  36 ++---
 .../apache/hadoop/ozone/MiniOzoneCluster.java   |   9 +-
 .../hadoop/ozone/MiniOzoneClusterImpl.java  |   4 +-
 .../ozone/ksm/TestContainerReportWithKeys.java  |   7 +-
 .../hadoop/ozone/scm/TestContainerSQLCli.java   |   6 +-
 .../hadoop/ozone/scm/node/TestQueryNode.java|   2 +-
 .../genesis/BenchMarkContainerStateMap.java | 141 ---
 .../genesis/BenchMarkDatanodeDispatcher.java|  22 +--
 .../genesis/BenchMarkMetadataStoreReads.java|   6 +-
 .../genesis/BenchMarkMetadataStoreWrites.java   |   4 +-
 .../ozone/genesis/BenchMarkRocksDbStore.java|   4 +-
 .../apache/hadoop/ozone/genesis/Genesis.java|   6 +-
 .../hadoop/ozone/genesis/GenesisUtil.java   |   6 +-
 35 files changed, 253 insertions(+), 183 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c10788ec/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
index 665618c..040f080 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
@@ -1,6 +1,6 @@
 package org.apache.hadoop.hdds;
 
-public class HddsConfigKeys {
+public final class HddsConfigKeys {
   private HddsConfigKeys() {
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c10788ec/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
index a0b5c47..48c6dce 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
@@ -48,7 +48,7 @@ import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ENABLED_DEFAULT;
 /**
  * HDDS specific stateless utility functions.
  */
-public class HddsUtils {
+public final class HddsUtils {
 
 
   private static final Logger LOG = LoggerFactory.getLogger(HddsUtils.class);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c10788ec/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java
index 068e41f..b8d0b24 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java

[58/83] [abbrv] hadoop git commit: HDFS-13416. Ozone: TestNodeManager tests fail. Contributed by Bharat Viswanadham.

2018-04-24 Thread xyao
HDFS-13416. Ozone: TestNodeManager tests fail. Contributed by Bharat 
Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e6da4d8d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e6da4d8d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e6da4d8d

Branch: refs/heads/trunk
Commit: e6da4d8da4f55f3b26842f7e92935957aec83160
Parents: d8fd922
Author: Nanda kumar 
Authored: Wed Apr 11 14:36:51 2018 +0530
Committer: Nanda kumar 
Committed: Wed Apr 11 14:36:51 2018 +0530

--
 .../hadoop/hdds/protocol/DatanodeDetails.java   | 11 +
 .../hadoop/hdds/scm/node/SCMNodeManager.java|  6 +--
 .../hadoop/hdds/scm/node/TestNodeManager.java   | 51 +---
 3 files changed, 38 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e6da4d8d/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
index 764b3cd..b2fa291 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
@@ -241,6 +241,17 @@ public final class DatanodeDetails implements 
Comparable<DatanodeDetails> {
 return this.getUuid().compareTo(that.getUuid());
   }
 
+  @Override
+  public boolean equals(Object obj) {
+return obj instanceof DatanodeDetails &&
+uuid.equals(((DatanodeDetails) obj).uuid);
+  }
+
+  @Override
+  public int hashCode() {
+return uuid.hashCode();
+  }
+
   /**
* Returns DatanodeDetails.Builder instance.
*

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e6da4d8d/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
index 0174c17..af68dba 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
@@ -829,9 +829,10 @@ public class SCMNodeManager
   DatanodeDetailsProto datanodeDetailsProto, SCMNodeReport nodeReport,
   ReportState containerReportState) {
 
+Preconditions.checkNotNull(datanodeDetailsProto, "Heartbeat is missing " +
+"DatanodeDetails.");
 DatanodeDetails datanodeDetails = DatanodeDetails
 .getFromProtoBuf(datanodeDetailsProto);
-
 // Checking for NULL to make sure that we don't get
 // an exception from ConcurrentList.
 // This could be a problem in tests, if this function is invoked via
@@ -846,7 +847,6 @@ public class SCMNodeManager
 } else {
   LOG.error("Datanode ID in heartbeat is null");
 }
-
 return commandQueue.getCommand(datanodeDetails.getUuid());
   }
 
@@ -875,7 +875,7 @@ public class SCMNodeManager
*/
   @Override
   public SCMNodeMetric getNodeStat(DatanodeDetails datanodeDetails) {
-return new SCMNodeMetric(nodeStats.get(datanodeDetails));
+return new SCMNodeMetric(nodeStats.get(datanodeDetails.getUuid()));
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e6da4d8d/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
--
diff --git 
a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
 
b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
index de6e30c..89ce12e 100644
--- 
a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
+++ 
b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
@@ -35,7 +35,6 @@ import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.PathUtils;
-import org.hamcrest.CoreMatchers;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -71,7 +70,6 @@ import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
 import static org.apache.hadoop.hdds.protocol.proto
 .StorageContainerDatanodeProtocolProtos.SCMCmdType;
 import static org.hamcrest.CoreMatchers.containsString;
-import static org.hamcrest.MatcherAssert.assertThat;
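
[Editor's note] Aside on the fix above: the equals/hashCode pair added to DatanodeDetails, together with keying nodeStats by UUID, is what lets hash-based lookups treat two DatanodeDetails instances for the same datanode as equal. A minimal standalone sketch of the idea, using a made-up stand-in class rather than the real DatanodeDetails:

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Simplified stand-in for DatanodeDetails: equality is driven by the UUID only,
// so two instances describing the same datanode land on the same map entry.
final class DatanodeKey {
  private final UUID uuid;

  DatanodeKey(UUID uuid) {
    this.uuid = uuid;
  }

  @Override
  public boolean equals(Object obj) {
    return obj instanceof DatanodeKey && uuid.equals(((DatanodeKey) obj).uuid);
  }

  @Override
  public int hashCode() {
    return uuid.hashCode();
  }
}

public class UuidKeyDemo {
  public static void main(String[] args) {
    UUID id = UUID.randomUUID();
    Map<DatanodeKey, Long> nodeStats = new HashMap<>();
    nodeStats.put(new DatanodeKey(id), 42L);

    // A brand-new instance with the same UUID still finds the entry,
    // which is what the SCM node-stats lookup relies on.
    System.out.println(nodeStats.get(new DatanodeKey(id)));  // prints 42
  }
}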

[46/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/container/common/helpers/package-info.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/container/common/helpers/package-info.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/container/common/helpers/package-info.java
new file mode 100644
index 000..fa5df11
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/container/common/helpers/package-info.java
@@ -0,0 +1,23 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.container.common.helpers;
+
+/**
+ * Helper classes for the container protocol communication.
+ */
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lease/Lease.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lease/Lease.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lease/Lease.java
new file mode 100644
index 000..dfa9315
--- /dev/null
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lease/Lease.java
@@ -0,0 +1,189 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.lease;
+
+import org.apache.hadoop.util.Time;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+/**
+ * This class represents the lease created on a resource. Callback can be
+ * registered on the lease which will be executed in case of timeout.
+ *
+ * @param <T> Resource type for which the lease can be associated
+ */
+public class Lease<T> {
+
+  /**
+   * The resource for which this lease is created.
+   */
+  private final T resource;
+
+  private final long creationTime;
+
+  /**
+   * Lease lifetime in milliseconds.
+   */
+  private volatile long leaseTimeout;
+
+  private boolean expired;
+
+  /**
+   * Functions to be called in case of timeout.
+   */
+  private List<Callable<Void>> callbacks;
+
+
+  /**
+   * Creates a lease on the specified resource with given timeout.
+   *
+   * @param resource
+   *Resource for which the lease has to be created
+   * @param timeout
+   *Lease lifetime in milliseconds
+   */
+  public Lease(T resource, long timeout) {
+this.resource = resource;
+this.leaseTimeout = timeout;
+this.callbacks = Collections.synchronizedList(new ArrayList<>());
+this.creationTime = Time.monotonicNow();
+this.expired = false;
+  }
+
+  /**
+   * Returns true if the lease has expired, else false.
+   *
+   * @return true if expired, else false
+   */
+  public boolean hasExpired() {
+return expired;
+  }
+
+  /**
+   * Registers a callback which will be executed in case of timeout. Callbacks
+   * are executed in a separate Thread.
+   *
+   * @param callback
+   *The Callable which has to be executed
+   * @throws LeaseExpiredException
+   * If the lease has already timed out
+   */
+  public void registerCallBack(Callable<Void> callback)
+  throws LeaseExpiredException {
+if(hasExpired()) {
+  throw new LeaseExpiredException("Resource: " + resource);
+}
+callbacks.add(callback);
+  }
+
+  /**
+   * Returns the time ela
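
[Editor's note] To make the Lease semantics above concrete, here is a small illustrative sketch of how a caller could use the class as added in this patch. The resource name, timeout and callback body are invented for the example, and actual expiry is driven by whatever manager tracks the lease, not by this snippet:

import java.util.concurrent.Callable;

import org.apache.hadoop.ozone.lease.Lease;
import org.apache.hadoop.ozone.lease.LeaseExpiredException;

public class LeaseDemo {
  public static void main(String[] args) throws LeaseExpiredException {
    // A 10-second lease on an arbitrary resource identifier.
    Lease<String> lease = new Lease<>("container-42", 10_000L);

    // Work to run if the lease times out; callbacks run on a separate thread.
    Callable<Void> onTimeout = () -> {
      System.out.println("lease on container-42 expired");
      return null;
    };
    lease.registerCallBack(onTimeout);

    System.out.println("expired yet? " + lease.hasExpired());
  }
}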

[54/83] [abbrv] hadoop git commit: HDFS-13301. Ozone: Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and DatanodeIDProto. Contributed by Shashikant Banerjee.

2018-04-24 Thread xyao
HDFS-13301. Ozone: Remove containerPort, ratisPort and ozoneRestPort from 
DatanodeID and DatanodeIDProto. Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8475d6bb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8475d6bb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8475d6bb

Branch: refs/heads/trunk
Commit: 8475d6bb55c9b3e78478d6b9d1e4be65e5b604cf
Parents: 8658ed7
Author: Nanda kumar 
Authored: Sat Apr 7 01:39:08 2018 +0530
Committer: Nanda kumar 
Committed: Sat Apr 7 01:39:08 2018 +0530

--
 .../apache/hadoop/hdfs/protocol/DatanodeID.java | 97 +---
 .../hadoop/hdfs/protocolPB/PBHelperClient.java  | 14 +--
 .../src/main/proto/hdfs.proto   |  3 -
 3 files changed, 5 insertions(+), 109 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8475d6bb/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
index 96f22ff..af720c7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
@@ -22,7 +22,6 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 
 import com.google.common.annotations.VisibleForTesting;
-import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos;
 
 import java.net.InetSocketAddress;
 
@@ -52,9 +51,6 @@ public class DatanodeID implements Comparable<DatanodeID> {
   private int infoSecurePort; // info server port
   private int ipcPort;   // IPC server port
   private String xferAddr;
-  private int containerPort; // container Stand_alone Rpc port.
-  private int ratisPort; // Container Ratis RPC Port.
-  private int ozoneRestPort;
 
   /**
* UUID identifying a given datanode. For upgraded Datanodes this is the
@@ -76,12 +72,11 @@ public class DatanodeID implements Comparable<DatanodeID> {
 from.getInfoPort(),
 from.getInfoSecurePort(),
 from.getIpcPort());
-this.ozoneRestPort = from.getOzoneRestPort();
 this.peerHostName = from.getPeerHostName();
   }
 
   /**
-   * Create a DatanodeID.
+   * Create a DatanodeID
* @param ipAddr IP
* @param hostName hostname
* @param datanodeUuid data node ID, UUID for new Datanodes, may be the
@@ -269,8 +264,6 @@ public class DatanodeID implements Comparable<DatanodeID> {
 infoPort = nodeReg.getInfoPort();
 infoSecurePort = nodeReg.getInfoSecurePort();
 ipcPort = nodeReg.getIpcPort();
-ratisPort = nodeReg.getRatisPort();
-ozoneRestPort = nodeReg.getOzoneRestPort();
   }
 
   /**
@@ -284,94 +277,6 @@ public class DatanodeID implements Comparable<DatanodeID> {
 return getXferAddr().compareTo(that.getXferAddr());
   }
 
-  /**
-   * Returns the container port.
-   * @return Port
-   */
-  public int getContainerPort() {
-return containerPort;
-  }
-
-  /**
-   * Sets the container port.
-   * @param containerPort - container port.
-   */
-  public void setContainerPort(int containerPort) {
-this.containerPort = containerPort;
-  }
-
-  /**
-   * Gets the Ratis Port.
-   * @return retis port.
-   */
-  public int getRatisPort() {
-return ratisPort;
-  }
-
-  /**
-   * Sets the Ratis Port.
-   * @param ratisPort - Ratis port.
-   */
-  public void setRatisPort(int ratisPort) {
-this.ratisPort = ratisPort;
-  }
-
-  /**
-   * Ozone rest port.
-   *
-   * @return rest port.
-   */
-  public int getOzoneRestPort() {
-return ozoneRestPort;
-  }
-
-  /**
-   * Set the ozone rest port.
-   *
-   * @param ozoneRestPort
-   */
-  public void setOzoneRestPort(int ozoneRestPort) {
-this.ozoneRestPort = ozoneRestPort;
-  }
-
-  /**
-   * Returns a DataNode ID from the protocol buffers.
-   *
-   * @param datanodeIDProto - protoBuf Message
-   * @return DataNodeID
-   */
-  public static DatanodeID getFromProtoBuf(
-  HdfsProtos.DatanodeIDProto datanodeIDProto) {
-DatanodeID id = new DatanodeID(datanodeIDProto.getIpAddr(),
-datanodeIDProto.getHostName(), datanodeIDProto.getDatanodeUuid(),
-datanodeIDProto.getXferPort(), datanodeIDProto.getInfoPort(),
-datanodeIDProto.getInfoSecurePort(), datanodeIDProto.getIpcPort());
-id.setContainerPort(datanodeIDProto.getContainerPort());
-id.setRatisPort(datanodeIDProto.getRatisPort());
-id.setOzoneRestPort(datanodeIDProto.getOzoneRestPort());
-return id;
-  }
-
-  /**
-

[62/83] [abbrv] hadoop git commit: HDFS-13415. Ozone: Remove cblock code from HDFS-7240. Contributed by Elek, Marton.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea85801c/hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/protocolPB/CBlockServiceProtocolServerSideTranslatorPB.java
--
diff --git 
a/hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/protocolPB/CBlockServiceProtocolServerSideTranslatorPB.java
 
b/hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/protocolPB/CBlockServiceProtocolServerSideTranslatorPB.java
deleted file mode 100644
index 8924a0c..000
--- 
a/hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/protocolPB/CBlockServiceProtocolServerSideTranslatorPB.java
+++ /dev/null
@@ -1,159 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-package org.apache.hadoop.cblock.protocolPB;
-
-import com.google.protobuf.RpcController;
-import com.google.protobuf.ServiceException;
-import org.apache.hadoop.cblock.meta.VolumeInfo;
-import org.apache.hadoop.cblock.proto.CBlockServiceProtocol;
-import org.apache.hadoop.cblock.protocol.proto.CBlockServiceProtocolProtos;
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.IOException;
-import java.util.List;
-
-import static 
org.apache.hadoop.cblock.CBlockConfigKeys.DFS_CBLOCK_SERVICE_BLOCK_SIZE_DEFAULT;
-
-/**
- * Server side implementation of the protobuf service.
- */
-@InterfaceAudience.Private
-public class CBlockServiceProtocolServerSideTranslatorPB
-implements CBlockServiceProtocolPB {
-
-  private final CBlockServiceProtocol impl;
-  private static final Logger LOG =
-  LoggerFactory.getLogger(
-  CBlockServiceProtocolServerSideTranslatorPB.class);
-
-  @Override
-  public CBlockServiceProtocolProtos.CreateVolumeResponseProto createVolume(
-  RpcController controller,
-  CBlockServiceProtocolProtos.CreateVolumeRequestProto request)
-  throws ServiceException {
-if (LOG.isDebugEnabled()) {
-  LOG.debug("createVolume called! volume size: " + request.getVolumeSize()
-  + " block size: " + request.getBlockSize());
-}
-try {
-  if (request.hasBlockSize()) {
-impl.createVolume(request.getUserName(), request.getVolumeName(),
-request.getVolumeSize(), request.getBlockSize());
-  } else{
-impl.createVolume(request.getUserName(), request.getVolumeName(),
-request.getVolumeSize(), DFS_CBLOCK_SERVICE_BLOCK_SIZE_DEFAULT);
-  }
-} catch (IOException e) {
-  throw new ServiceException(e);
-}
-return CBlockServiceProtocolProtos.CreateVolumeResponseProto
-.newBuilder().build();
-  }
-
-  @Override
-  public CBlockServiceProtocolProtos.DeleteVolumeResponseProto deleteVolume(
-  RpcController controller,
-  CBlockServiceProtocolProtos.DeleteVolumeRequestProto request)
-  throws ServiceException {
-if (LOG.isDebugEnabled()) {
-  LOG.debug("deleteVolume called! volume name: " + request.getVolumeName()
-  + " force:" + request.getForce());
-}
-try {
-  if (request.hasForce()) {
-impl.deleteVolume(request.getUserName(), request.getVolumeName(),
-request.getForce());
-  } else {
-impl.deleteVolume(request.getUserName(), request.getVolumeName(),
-false);
-  }
-} catch (IOException e) {
-  throw new ServiceException(e);
-}
-return CBlockServiceProtocolProtos.DeleteVolumeResponseProto
-.newBuilder().build();
-  }
-
-  @Override
-  public CBlockServiceProtocolProtos.InfoVolumeResponseProto infoVolume(
-  RpcController controller,
-  CBlockServiceProtocolProtos.InfoVolumeRequestProto request
-  ) throws ServiceException {
-if (LOG.isDebugEnabled()) {
-  LOG.debug("infoVolume called! volume name: " + request.getVolumeName());
-}
-CBlockServiceProtocolProtos.InfoVolumeResponseProto.Builder resp =
-CBlockServiceProtocolProtos.InfoVolumeResponseProto.newBuilder();
-CBlockServiceProtocolProtos.VolumeInfoProto.Builder volumeInfoProto =
-CBlockServiceProtocolProtos.VolumeInfoProto.newBuilder();
-VolumeInfo volumeInfo;
-try {
-  volumeInfo =

[76/83] [abbrv] hadoop git commit: HDFS-13424. Ozone: Refactor MiniOzoneClassicCluster. Contributed by Nanda Kumar.

2018-04-24 Thread xyao
HDFS-13424. Ozone: Refactor MiniOzoneClassicCluster. Contributed by Nanda Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/06d228a3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/06d228a3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/06d228a3

Branch: refs/heads/trunk
Commit: 06d228a354b130c8a04c86a6647b52b24c886281
Parents: fd84dea
Author: Mukul Kumar Singh 
Authored: Mon Apr 16 20:18:27 2018 +0530
Committer: Mukul Kumar Singh 
Committed: Mon Apr 16 20:18:27 2018 +0530

--
 .../hadoop/ozone/HddsDatanodeService.java   |  72 ++-
 hadoop-ozone/integration-test/pom.xml   |   6 +-
 .../container/TestContainerStateManager.java|  12 +-
 .../hadoop/ozone/MiniOzoneClassicCluster.java   | 616 ---
 .../apache/hadoop/ozone/MiniOzoneCluster.java   | 291 -
 .../hadoop/ozone/MiniOzoneClusterImpl.java  | 425 +
 .../hadoop/ozone/MiniOzoneTestHelper.java   |  81 ---
 .../apache/hadoop/ozone/RatisTestHelper.java|  24 +-
 .../hadoop/ozone/TestContainerOperations.java   |   9 +-
 .../hadoop/ozone/TestMiniOzoneCluster.java  |  27 +-
 .../ozone/TestStorageContainerManager.java  |  34 +-
 .../TestStorageContainerManagerHelper.java  |  12 +-
 .../ozone/client/rest/TestOzoneRestClient.java  |   9 +-
 .../ozone/client/rpc/TestOzoneRpcClient.java|  10 +-
 .../TestCloseContainerHandler.java  |  22 +-
 .../container/ozoneimpl/TestOzoneContainer.java |  36 +-
 .../ozoneimpl/TestOzoneContainerRatis.java  |  19 +-
 .../container/ozoneimpl/TestRatisManager.java   |  18 +-
 .../hadoop/ozone/freon/TestDataValidate.java|   7 +-
 .../apache/hadoop/ozone/freon/TestFreon.java|  10 +-
 .../ozone/ksm/TestContainerReportWithKeys.java  |  20 +-
 .../apache/hadoop/ozone/ksm/TestKSMMetrcis.java |   5 +-
 .../apache/hadoop/ozone/ksm/TestKSMSQLCli.java  |  27 +-
 .../hadoop/ozone/ksm/TestKeySpaceManager.java   |   5 +-
 .../ksm/TestKeySpaceManagerRestInterface.java   |  23 +-
 .../ozone/ksm/TestKsmBlockVersioning.java   |   5 +-
 .../ksm/TestMultipleContainerReadWrite.java |   5 +-
 .../hadoop/ozone/ozShell/TestOzoneShell.java|  21 +-
 .../hadoop/ozone/scm/TestAllocateContainer.java |  13 +-
 .../hadoop/ozone/scm/TestContainerSQLCli.java   |  43 +-
 .../ozone/scm/TestContainerSmallFile.java   |  14 +-
 .../org/apache/hadoop/ozone/scm/TestSCMCli.java |  21 +-
 .../apache/hadoop/ozone/scm/TestSCMMXBean.java  |  14 +-
 .../apache/hadoop/ozone/scm/TestSCMMetrics.java |  23 +-
 .../ozone/scm/TestXceiverClientManager.java |  18 +-
 .../ozone/scm/TestXceiverClientMetrics.java |  12 +-
 .../hadoop/ozone/scm/node/TestQueryNode.java|  19 +-
 .../ozone/web/TestDistributedOzoneVolumes.java  |  14 +-
 .../hadoop/ozone/web/TestLocalOzoneVolumes.java |  18 +-
 .../ozone/web/TestOzoneRestWithMiniCluster.java |  17 +-
 .../hadoop/ozone/web/TestOzoneWebAccess.java|  14 +-
 .../hadoop/ozone/web/client/TestBuckets.java|  14 +-
 .../hadoop/ozone/web/client/TestKeys.java   |  32 +-
 .../hadoop/ozone/web/client/TestKeysRatis.java  |   5 +-
 .../ozone/web/client/TestOzoneClient.java   |  16 +-
 .../hadoop/ozone/web/client/TestVolume.java |  14 +-
 .../ozone/web/client/TestVolumeRatis.java   |  14 +-
 .../src/test/resources/log4j.properties |  18 +
 .../org/apache/hadoop/ozone/scm/cli/SQLCLI.java |   6 +-
 hadoop-tools/hadoop-ozone/pom.xml   |  25 +
 .../hadoop/fs/ozone/TestOzoneFSInputStream.java |  20 +-
 .../fs/ozone/TestOzoneFileInterfaces.java   |  15 +-
 .../hadoop/fs/ozone/contract/OzoneContract.java |  15 +-
 53 files changed, 1140 insertions(+), 1145 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/06d228a3/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
index fa0f50c..ce7ca6f 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
@@ -51,18 +51,42 @@ public class HddsDatanodeService implements ServicePlugin {
   HddsDatanodeService.class);
 
 
-  private Configuration conf;
+  private OzoneConfiguration conf;
   private DatanodeDetails datanodeDetails;
   private DatanodeStateMachine datanodeStateMachine;
   private List<ServicePlugin> plugins;
 
+  /**
+   * Default constructor.
+   */
+  public HddsDatanodeService() {
+this(null);
+  }
+
+  /**
+   * Constructs {@link HddsDatanode

[38/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
new file mode 100644
index 000..33a5971
--- /dev/null
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
@@ -0,0 +1,277 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.container.ozoneimpl;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerReportsRequestProto;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ReportState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMNodeReport;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.container.common.helpers.ContainerData;
+import org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl;
+import org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl;
+import org.apache.hadoop.ozone.container.common.impl.Dispatcher;
+import org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl;
+import org.apache.hadoop.ozone.container.common.interfaces.ChunkManager;
+import org.apache.hadoop.ozone.container.common.interfaces.ContainerDispatcher;
+import org.apache.hadoop.ozone.container.common.interfaces.ContainerManager;
+import org.apache.hadoop.ozone.container.common.interfaces.KeyManager;
+import org.apache.hadoop.ozone.container.common.statemachine.background
+.BlockDeletingService;
+import org.apache.hadoop.ozone.container.common.transport.server.XceiverServer;
+import org.apache.hadoop.ozone.container.common.transport.server
+.XceiverServerSpi;
+import org.apache.hadoop.ozone.container.common.transport.server.ratis
+.XceiverServerRatis;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Paths;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY;
+import static org.apache.hadoop.ozone.OzoneConfigKeys
+.OZONE_BLOCK_DELETING_SERVICE_INTERVAL;
+import static org.apache.hadoop.ozone.OzoneConfigKeys
+.OZONE_BLOCK_DELETING_SERVICE_INTERVAL_DEFAULT;
+import static org.apache.hadoop.ozone.OzoneConfigKeys
+.OZONE_BLOCK_DELETING_SERVICE_TIMEOUT;
+import static org.apache.hadoop.ozone.OzoneConfigKeys
+.OZONE_BLOCK_DELETING_SERVICE_TIMEOUT_DEFAULT;
+import static org.apache.hadoop.ozone.OzoneConsts.CONTAINER_ROOT_PREFIX;
+import static org.apache.hadoop.ozone.OzoneConsts.INVALID_PORT;
+
+/**
+ * Ozone main class sets up the network server and initializes the container
+ * layer.
+ */
+public class OzoneContainer {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneContainer.class);
+
+  private final Configuration ozoneConfig;
+  private final ContainerDispatcher dispatcher;
+  private final ContainerManager manager;
+  private final XceiverServerSpi[] server;
+  private final ChunkManager chunkManager;
+  private final KeyManager keyManager;
+  private final BlockDeletingService blockDeletingService;
+
+  /**
+   * Creates a network endpoint and enables Ozone container.
+   *
+   * @param ozoneConfig - Config
+   * @throws IOException
+   */
+  public OzoneContainer(
+  DatanodeDetails datanodeDetails, Configuration ozoneConfig)
+  throws IOException {
+this.ozoneConfig = ozoneConfig;
+List<StorageLocation> locations = new LinkedList<>();
+String[] paths = ozone
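
[Editor's note] For orientation, the constructor shown above is the only entry point in this excerpt: the datanode identity plus the Ozone configuration go in, and the dispatcher, chunk/key managers and XceiverServer endpoints are wired internally. A hedged sketch of that call (how the DatanodeDetails and Configuration are obtained is outside this excerpt):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;

final class OzoneContainerWiring {
  private OzoneContainerWiring() {
  }

  static OzoneContainer wire(DatanodeDetails dn, Configuration conf)
      throws IOException {
    // Reads the data dirs from the configuration and builds the container stack.
    return new OzoneContainer(dn, conf);
  }
}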

[48/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
new file mode 100644
index 000..0d4a299
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.client;
+
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
+import org.apache.hadoop.hdds.protocol.proto.ContainerProtos.ContainerData;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+
+import java.io.IOException;
+import java.util.EnumSet;
+import java.util.List;
+
+/**
+ * The interface to call into underlying container layer.
+ *
+ * Written as an interface to allow easy testing: implement a mock container layer
+ * for standalone testing of CBlock API without actually calling into remote
+ * containers. Actual container layer can simply re-implement this.
+ *
+ * NOTE: this is a temporarily needed class. When SCM containers are full-fledged,
+ * this interface will likely be removed.
+ */
+@InterfaceStability.Unstable
+public interface ScmClient {
+  /**
+   * Creates a Container on SCM and returns the pipeline.
+   * @param containerId - String container ID
+   * @return Pipeline
+   * @throws IOException
+   */
+  Pipeline createContainer(String containerId, String owner) throws IOException;
+
+  /**
+   * Gets a container by Name -- Throws if the container does not exist.
+   * @param containerId - String Container ID
+   * @return Pipeline
+   * @throws IOException
+   */
+  Pipeline getContainer(String containerId) throws IOException;
+
+  /**
+   * Close a container by name.
+   *
+   * @param pipeline the container to be closed.
+   * @throws IOException
+   */
+  void closeContainer(Pipeline pipeline) throws IOException;
+
+  /**
+   * Deletes an existing container.
+   * @param pipeline - Pipeline that represents the container.
+   * @param force - true to forcibly delete the container.
+   * @throws IOException
+   */
+  void deleteContainer(Pipeline pipeline, boolean force) throws IOException;
+
+  /**
+   * Lists a range of containers and get their info.
+   *
+   * @param startName start name, if null, start searching at the head.
+   * @param prefixName prefix name, if null, then filter is disabled.
+   * @param count count, if count < 0, the max size is unlimited.
+   *  (Usually the count will be replaced with a very big
+   *  value instead of being unlimited in case the db is very big.)
+   *
+   * @return a list of pipeline.
+   * @throws IOException
+   */
+  List listContainer(String startName, String prefixName,
+  int count) throws IOException;
+
+  /**
+   * Read meta data from an existing container.
+   * @param pipeline - Pipeline that represents the container.
+   * @return ContainerInfo
+   * @throws IOException
+   */
+  ContainerData readContainer(Pipeline pipeline) throws IOException;
+
+
+  /**
+   * Gets the container size -- Computed by SCM from Container Reports.
+   * @param pipeline - Pipeline
+   * @return number of bytes used by this container.
+   * @throws IOException
+   */
+  long getContainerSize(Pipeline pipeline) throws IOException;
+
+  /**
+   * Creates a Container on SCM and returns the pipeline.
+   * @param type - Replication Type.
+   * @param replicationFactor - Replication Factor
+   * @param containerId - Container ID
+   * @return Pipeline
+   * @throws IOException - in case of error.
+   */
+  Pipeline createContainer(HddsProtos.ReplicationType type,
+  HddsProtos.ReplicationFactor replicationFactor, String containerId,
+  String owner) throws IOException;
+
+  /**
+   * Returns a set of Nodes that meet a query criteria.
+   * @param nodeStatuses - A set 

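For orientation, a minimal sketch of how a caller might exercise the ScmClient interface above; the method signatures are taken from the diff, but the way the concrete client is obtained, the container name, and the owner string are assumptions.

    import org.apache.hadoop.hdds.scm.client.ScmClient;
    import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;

    import java.io.IOException;

    // Illustrative sketch only: create, look up, close and delete one container.
    public final class ScmClientSketch {
      static void roundTrip(ScmClient scmClient) throws IOException {
        Pipeline pipeline = scmClient.createContainer("demo-container", "demo-owner");
        System.out.println("created container on pipeline " + pipeline);
        Pipeline found = scmClient.getContainer("demo-container");
        scmClient.closeContainer(found);          // close before deleting
        scmClient.deleteContainer(found, false);  // force = false
      }
    }
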
[30/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/framework/src/main/resources/webapps/static/ozone.css
--
diff --git a/hadoop-hdds/framework/src/main/resources/webapps/static/ozone.css 
b/hadoop-hdds/framework/src/main/resources/webapps/static/ozone.css
new file mode 100644
index 000..271ac74
--- /dev/null
+++ b/hadoop-hdds/framework/src/main/resources/webapps/static/ozone.css
@@ -0,0 +1,60 @@
+/**
+ *   Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance with
+ *  the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+*/
+body {
+padding: 40px;
+padding-top: 60px;
+}
+.starter-template {
+padding: 40px 15px;
+text-align: center;
+}
+
+
+.btn {
+border: 0 none;
+font-weight: 700;
+letter-spacing: 1px;
+text-transform: uppercase;
+}
+
+.btn:focus, .btn:active:focus, .btn.active:focus {
+outline: 0 none;
+}
+
+.table-striped > tbody > tr:nth-child(2n+1).selectedtag > td:hover {
+background-color: #3276b1;
+}
+.table-striped > tbody > tr:nth-child(2n+1).selectedtag > td {
+background-color: #3276b1;
+}
+.tagPanel tr.selectedtag td {
+background-color: #3276b1;
+}
+.top-buffer { margin-top:4px; }
+
+
+.sortorder:after {
+content: '\25b2';   /* BLACK UP-POINTING TRIANGLE */
+}
+.sortorder.reverse:after {
+content: '\25bc';   /* BLACK DOWN-POINTING TRIANGLE */
+}
+
+.wrap-table{
+word-wrap: break-word;
+table-layout: fixed;
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/framework/src/main/resources/webapps/static/ozone.js
--
diff --git a/hadoop-hdds/framework/src/main/resources/webapps/static/ozone.js 
b/hadoop-hdds/framework/src/main/resources/webapps/static/ozone.js
new file mode 100644
index 000..37cafef
--- /dev/null
+++ b/hadoop-hdds/framework/src/main/resources/webapps/static/ozone.js
@@ -0,0 +1,355 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+(function () {
+"use strict";
+
+var isIgnoredJmxKeys = function (key) {
+return key == 'name' || key == 'modelerType' || key == "$$hashKey" ||
+key.match(/tag.*/);
+};
+angular.module('ozone', ['nvd3', 'ngRoute']);
+angular.module('ozone').config(function ($routeProvider) {
+$routeProvider
+.when("/", {
+templateUrl: "main.html"
+})
+.when("/metrics/rpc", {
+template: ""
+})
+.when("/config", {
+template: ""
+})
+});
+angular.module('ozone').component('overview', {
+templateUrl: 'static/templates/overview.html',
+transclude: true,
+controller: function ($http) {
+var ctrl = this;
+
$http.get("jmx?qry=Hadoop:service=*,name=*,component=ServerRuntime")
+.then(function (result) {
+ctrl.jmx = result.data.beans[0]
+})
+}
+});
+angular.module('ozone').component('jvmParameters', {
+templateUrl: 'static/templates/jvm.html',
+controller: function ($http) {
+var ctrl = this;
+$http.get("jmx?qry=java.lang:type=Runtime")
+.then(function (result) {
+ctrl.jmx = result.data.beans[0];
+
+//convert array to a map
+var systemProperties = {};
+for (var idx in ctr

[55/83] [abbrv] hadoop git commit: HDFS-13395. Ozone: Plugins support in HDSL Datanode Service. Contributed by Nanda Kumar.

2018-04-24 Thread xyao
HDFS-13395. Ozone: Plugins support in HDSL Datanode Service. Contributed by 
Nanda Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bb3c07fa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bb3c07fa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bb3c07fa

Branch: refs/heads/trunk
Commit: bb3c07fa3e4f5b5c38c251e882a357eddab0957f
Parents: 8475d6b
Author: Xiaoyu Yao 
Authored: Tue Apr 10 11:28:52 2018 -0700
Committer: Xiaoyu Yao 
Committed: Tue Apr 10 11:28:52 2018 -0700

--
 .../src/main/compose/cblock/docker-config   |   3 +-
 .../src/main/compose/ozone/docker-config|   3 +-
 .../apache/hadoop/ozone/OzoneConfigKeys.java|   3 +
 .../common/src/main/resources/ozone-default.xml |   8 ++
 .../hadoop/ozone/HddsDatanodeService.java   | 118 ++-
 .../statemachine/DatanodeStateMachine.java  |  10 ++
 .../hadoop/hdfs/server/datanode/DataNode.java   |   5 -
 .../server/datanode/DataNodeServicePlugin.java  |  48 
 .../src/test/compose/docker-config  |   3 +-
 .../hadoop/ozone/MiniOzoneClassicCluster.java   |   4 +-
 .../hadoop/ozone/MiniOzoneTestHelper.java   |   5 +
 .../hadoop/ozone/web/ObjectStoreRestPlugin.java | 108 -
 .../ozone/web/OzoneHddsDatanodeService.java |  84 +
 13 files changed, 208 insertions(+), 194 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb3c07fa/hadoop-dist/src/main/compose/cblock/docker-config
--
diff --git a/hadoop-dist/src/main/compose/cblock/docker-config 
b/hadoop-dist/src/main/compose/cblock/docker-config
index 4690de0..f69bef0 100644
--- a/hadoop-dist/src/main/compose/cblock/docker-config
+++ b/hadoop-dist/src/main/compose/cblock/docker-config
@@ -27,7 +27,8 @@ OZONE-SITE.XML_ozone.scm.client.address=scm
 OZONE-SITE.XML_dfs.cblock.jscsi.cblock.server.address=cblock
 OZONE-SITE.XML_dfs.cblock.scm.ipaddress=scm
 OZONE-SITE.XML_dfs.cblock.service.leveldb.path=/tmp
-HDFS-SITE.XML_dfs.datanode.plugins=org.apache.hadoop.ozone.web.ObjectStoreRestPlugin,org.apache.hadoop.ozone.HddsDatanodeService
+OZONE-SITE.XML_hdds.datanode.plugins=org.apache.hadoop.ozone.web.OzoneHddsDatanodeService
+HDFS-SITE.XML_dfs.datanode.plugins=org.apache.hadoop.ozone.HddsDatanodeService
 HDFS-SITE.XML_dfs.namenode.rpc-address=namenode:9000
 HDFS-SITE.XML_dfs.namenode.name.dir=/data/namenode
 HDFS-SITE.XML_rpc.metrics.quantile.enable=true

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb3c07fa/hadoop-dist/src/main/compose/ozone/docker-config
--
diff --git a/hadoop-dist/src/main/compose/ozone/docker-config 
b/hadoop-dist/src/main/compose/ozone/docker-config
index 8e5efa9..c693db0 100644
--- a/hadoop-dist/src/main/compose/ozone/docker-config
+++ b/hadoop-dist/src/main/compose/ozone/docker-config
@@ -23,11 +23,12 @@ OZONE-SITE.XML_ozone.scm.block.client.address=scm
 OZONE-SITE.XML_ozone.metadata.dirs=/data/metadata
 OZONE-SITE.XML_ozone.handler.type=distributed
 OZONE-SITE.XML_ozone.scm.client.address=scm
+OZONE-SITE.XML_hdds.datanode.plugins=org.apache.hadoop.ozone.web.OzoneHddsDatanodeService
 HDFS-SITE.XML_dfs.namenode.rpc-address=namenode:9000
 HDFS-SITE.XML_dfs.namenode.name.dir=/data/namenode
 HDFS-SITE.XML_rpc.metrics.quantile.enable=true
 HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300
-HDFS-SITE.XML_dfs.datanode.plugins=org.apache.hadoop.ozone.web.ObjectStoreRestPlugin,org.apache.hadoop.ozone.HddsDatanodeService
+HDFS-SITE.XML_dfs.datanode.plugins=org.apache.hadoop.ozone.HddsDatanodeService
 LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
 LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
 LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb3c07fa/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
index ef96f379..72531a2 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
@@ -230,6 +230,9 @@ public final class OzoneConfigKeys {
   public static final String OZONE_SCM_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL =
   "ozone.web.authentication.kerberos.principal";
 
+  public static final String HDDS_DATANODE_PLUGINS_KEY =
+  "hdds.datanode.plugins";
+
   /**
* There is no need to instantiate this class.
*/

http://git-wip-us.a

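The two docker-config hunks above split the old dfs.datanode.plugins list into a DataNode-level plugin (HddsDatanodeService) and an HDDS-level plugin (OzoneHddsDatanodeService) named by the new hdds.datanode.plugins key. As a rough, hypothetical sketch of what loading such a plugin list can look like (the key name comes from the OzoneConfigKeys hunk; the loading code itself is an assumption, not the patch):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdds.conf.OzoneConfiguration;
    import org.apache.hadoop.util.ReflectionUtils;
    import org.apache.hadoop.util.ServicePlugin;

    // Illustrative sketch only: instantiate and start every plugin class named
    // under hdds.datanode.plugins. Error handling is omitted for brevity.
    public final class PluginLoaderSketch {
      public static void main(String[] args) throws ClassNotFoundException {
        Configuration conf = new OzoneConfiguration();
        for (String className :
            conf.getTrimmedStringCollection("hdds.datanode.plugins")) {
          Class<?> clazz = conf.getClassByName(className);
          ServicePlugin plugin = (ServicePlugin) ReflectionUtils.newInstance(clazz, conf);
          plugin.start(null);  // a real host service would pass itself, not null
          System.out.println("started plugin " + className);
        }
      }
    }
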
[37/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
--
diff --git 
a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
 
b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
new file mode 100644
index 000..41a8a80
--- /dev/null
+++ 
b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
@@ -0,0 +1,274 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.container.common;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.scm.VersionInfo;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.DatanodeDetailsProto;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerBlocksDeletionACKProto;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos
+.ContainerBlocksDeletionACKResponseProto;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerInfo;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ReportState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMCommandResponseProto;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMHeartbeatResponseProto;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMNodeReport;
+import org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol;
+import org.apache.hadoop.ozone.protocol.VersionResponse;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.atomic.AtomicInteger;
+
+/**
+ * SCM RPC mock class.
+ */
+public class ScmTestMock implements StorageContainerDatanodeProtocol {
+  private int rpcResponseDelay;
+  private AtomicInteger heartbeatCount = new AtomicInteger(0);
+  private AtomicInteger rpcCount = new AtomicInteger(0);
+  private ReportState reportState;
+  private AtomicInteger containerReportsCount = new AtomicInteger(0);
+
+  // Map of datanode to containers
+  private Map> nodeContainers =
+  new HashMap();
+  /**
+   * Returns the number of heartbeats made to this class.
+   *
+   * @return int
+   */
+  public int getHeartbeatCount() {
+return heartbeatCount.get();
+  }
+
+  /**
+   * Returns the number of RPC calls made to this mock class instance.
+   *
+   * @return - Number of RPC calls serviced by this class.
+   */
+  public int getRpcCount() {
+return rpcCount.get();
+  }
+
+  /**
+   * Gets the RPC response delay.
+   *
+   * @return delay in milliseconds.
+   */
+  public int getRpcResponseDelay() {
+return rpcResponseDelay;
+  }
+
+  /**
+   * Sets the RPC response delay.
+   *
+   * @param rpcResponseDelay - delay in milliseconds.
+   */
+  public void setRpcResponseDelay(int rpcResponseDelay) {
+this.rpcResponseDelay = rpcResponseDelay;
+  }
+
+  /**
+   * Returns the number of container reports server has seen.
+   * @return int
+   */
+  public int getContainerReportsCount() {
+return containerReportsCount.get();
+  }
+
+  /**
+   * Returns the number of containers that have been reported so far.
+   * @return - count of reported containers.
+   */
+  public long getContainerCount() {
+return nodeContainers.values().parallelStream().mapToLong((containerMap)->{
+  return containerMap.size();
+}).sum();
+  }
+
+  /**
+   * Get the number of keys reported from container reports.
+   * @return - number of keys reported.
+   */
+  public long getKeyCount() {
+return nodeContainers.values().parallelStream().mapToLong((containerMap)->{
+  return co

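A small, hypothetical sketch of how a test might use the ScmTestMock above; the counters and the delay setter are the methods shown in the diff, while actually wiring a datanode state machine against the mock is left out.

    import org.apache.hadoop.ozone.container.common.ScmTestMock;

    // Illustrative sketch only: poke at the mock's counters and response delay.
    public final class ScmTestMockSketch {
      public static void main(String[] args) {
        ScmTestMock scm = new ScmTestMock();
        scm.setRpcResponseDelay(50);  // make every mocked SCM call take an extra 50 ms
        System.out.println("rpc delay (ms):    " + scm.getRpcResponseDelay());
        System.out.println("heartbeats seen:   " + scm.getHeartbeatCount());
        System.out.println("rpc calls seen:    " + scm.getRpcCount());
        System.out.println("container reports: " + scm.getContainerReportsCount());
      }
    }
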
[49/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
--
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
new file mode 100644
index 000..9b8eaa9
--- /dev/null
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.storage;
+
+import com.google.protobuf.ByteString;
+import org.apache.hadoop.fs.Seekable;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.protocol.proto.ContainerProtos.ChunkInfo;
+import org.apache.hadoop.hdds.protocol.proto.ContainerProtos
+.ReadChunkResponseProto;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ * An {@link InputStream} used by the REST service in combination with the
+ * SCMClient to read the value of a key from a sequence
+ * of container chunks.  All bytes of the key value are stored in container
+ * chunks.  Each chunk may contain multiple underlying {@link ByteBuffer}
+ * instances.  This class encapsulates all state management for iterating
+ * through the sequence of chunks and the sequence of buffers within each chunk.
+ */
+public class ChunkInputStream extends InputStream implements Seekable {
+
+  private static final int EOF = -1;
+
+  private final String key;
+  private final String traceID;
+  private XceiverClientManager xceiverClientManager;
+  private XceiverClientSpi xceiverClient;
+  private List chunks;
+  private int chunkIndex;
+  private long[] chunkOffset;
+  private List buffers;
+  private int bufferIndex;
+
+  /**
+   * Creates a new ChunkInputStream.
+   *
+   * @param key chunk key
+   * @param xceiverClientManager client manager that controls client
+   * @param xceiverClient client to perform container calls
+   * @param chunks list of chunks to read
+   * @param traceID container protocol call traceID
+   */
+  public ChunkInputStream(String key, XceiverClientManager xceiverClientManager,
+  XceiverClientSpi xceiverClient, List chunks, String traceID) {
+this.key = key;
+this.traceID = traceID;
+this.xceiverClientManager = xceiverClientManager;
+this.xceiverClient = xceiverClient;
+this.chunks = chunks;
+this.chunkIndex = -1;
+// chunkOffset[i] stores offset at which chunk i stores data in
+// ChunkInputStream
+this.chunkOffset = new long[this.chunks.size()];
+initializeChunkOffset();
+this.buffers = null;
+this.bufferIndex = 0;
+  }
+
+  private void initializeChunkOffset() {
+int tempOffset = 0;
+for (int i = 0; i < chunks.size(); i++) {
+  chunkOffset[i] = tempOffset;
+  tempOffset += chunks.get(i).getLen();
+}
+  }
+
+  @Override
+  public synchronized int read()
+  throws IOException {
+checkOpen();
+int available = prepareRead(1);
+return available == EOF ? EOF :
+Byte.toUnsignedInt(buffers.get(bufferIndex).get());
+  }
+
+  @Override
+  public synchronized int read(byte[] b, int off, int len) throws IOException {
+// According to the JavaDocs for InputStream, it is recommended that
+// subclasses provide an override of bulk read if possible for performance
+// reasons.  In addition to performance, we need to do it for correctness
+// reasons.  The Ozone REST service uses PipedInputStream and
+// PipedOutputStream to relay HTTP response data between a Jersey thread and
+// a Netty thread.  It turns out that PipedInputStream/PipedOutputStream
+// have a subtle dependency (bug?) on the wrapped stream providing separate
+// implementations of single-byte read and bulk read.  Without this, get key
+// responses might close the connection before writing all of the by

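The class comment above says the stream iterates through a sequence of chunks, and initializeChunkOffset() records where each chunk starts in that logical stream. A tiny standalone sketch of that bookkeeping, with made-up chunk lengths:

    // Illustrative sketch only: chunkOffset[i] is the stream position where chunk i begins.
    public final class ChunkOffsetSketch {
      public static void main(String[] args) {
        long[] chunkLengths = {4, 6, 5};  // hypothetical chunk sizes
        long[] chunkOffset = new long[chunkLengths.length];
        long running = 0;
        for (int i = 0; i < chunkLengths.length; i++) {
          chunkOffset[i] = running;       // chunk i starts where the earlier chunks end
          running += chunkLengths[i];
        }
        // chunkOffset is now {0, 4, 10}; stream position 7 falls in chunk 1
        // at local offset 7 - chunkOffset[1] = 3.
        System.out.println(java.util.Arrays.toString(chunkOffset));
      }
    }
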
[41/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/Dispatcher.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/Dispatcher.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/Dispatcher.java
new file mode 100644
index 000..1c6e39c
--- /dev/null
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/Dispatcher.java
@@ -0,0 +1,713 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.container.common.impl;
+
+import com.google.common.base.Preconditions;
+import com.google.protobuf.ByteString;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
+import org.apache.hadoop.hdds.scm.container.common.helpers
+.StorageContainerException;
+import org.apache.hadoop.hdds.protocol.proto.ContainerProtos;
+import org.apache.hadoop.hdds.protocol.proto.ContainerProtos
+.ContainerCommandRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.ContainerProtos
+.ContainerCommandResponseProto;
+import org.apache.hadoop.hdds.protocol.proto.ContainerProtos.Type;
+import org.apache.hadoop.ozone.container.common.helpers.ChunkInfo;
+import org.apache.hadoop.ozone.container.common.helpers.ChunkUtils;
+import org.apache.hadoop.ozone.container.common.helpers.ContainerData;
+import org.apache.hadoop.ozone.container.common.helpers.ContainerMetrics;
+import org.apache.hadoop.ozone.container.common.helpers.ContainerUtils;
+import org.apache.hadoop.ozone.container.common.helpers.FileUtils;
+import org.apache.hadoop.ozone.container.common.helpers.KeyData;
+import org.apache.hadoop.ozone.container.common.helpers.KeyUtils;
+import org.apache.hadoop.ozone.container.common.interfaces.ContainerDispatcher;
+import org.apache.hadoop.ozone.container.common.interfaces.ContainerManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.security.NoSuchAlgorithmException;
+import java.util.LinkedList;
+import java.util.List;
+
+import static org.apache.hadoop.hdds.protocol.proto.ContainerProtos.Result
+.CLOSED_CONTAINER_IO;
+import static org.apache.hadoop.hdds.protocol.proto.ContainerProtos.Result
+.GET_SMALL_FILE_ERROR;
+import static org.apache.hadoop.hdds.protocol.proto.ContainerProtos.Result
+.NO_SUCH_ALGORITHM;
+import static org.apache.hadoop.hdds.protocol.proto.ContainerProtos.Result
+.PUT_SMALL_FILE_ERROR;
+
+/**
+ * Ozone Container dispatcher takes a call from the netty server and routes it
+ * to the right handler function.
+ */
+public class Dispatcher implements ContainerDispatcher {
+  static final Logger LOG = LoggerFactory.getLogger(Dispatcher.class);
+
+  private final ContainerManager containerManager;
+  private ContainerMetrics metrics;
+  private Configuration conf;
+
+  /**
+   * Constructs an OzoneContainer that receives calls from
+   * XceiverServerHandler.
+   *
+   * @param containerManager - A class that manages containers.
+   */
+  public Dispatcher(ContainerManager containerManager, Configuration config) {
+Preconditions.checkNotNull(containerManager);
+this.containerManager = containerManager;
+this.metrics = null;
+this.conf = config;
+  }
+
+  @Override
+  public void init() {
+this.metrics = ContainerMetrics.create(conf);
+  }
+
+  @Override
+  public void shutdown() {
+  }
+
+  @Override
+  public ContainerCommandResponseProto dispatch(
+  ContainerCommandRequestProto msg) {
+LOG.trace("Command {}, trace ID: {} ", msg.getCmdType().toString(),
+msg.getTraceID());
+long startNanos = System.nanoTime();
+ContainerCommandResponseProto resp = null;
+try {
+  Preconditions.checkNotNull(msg);
+  Type cmdType = msg.getCmdType();
+  metrics.incContainerOpcMetrics(cmdType);
+  if ((cmdType == Type.CreateContainer) ||
+  (cmdType == Type.DeleteContainer) ||
+  (cmdType == Type.ReadContainer)

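The Dispatcher above routes each request purely on msg.getCmdType(), as the if-chain at the end of the excerpt shows. Reduced to a standalone sketch with invented names (the real handler methods are not part of this excerpt):

    // Illustrative sketch only: dispatch-by-command-type, with invented handler names.
    public final class DispatchSketch {
      enum Cmd { CREATE_CONTAINER, DELETE_CONTAINER, READ_CONTAINER, PUT_KEY, WRITE_CHUNK }

      static String route(Cmd cmd) {
        switch (cmd) {
          case CREATE_CONTAINER:
          case DELETE_CONTAINER:
          case READ_CONTAINER:
            return "container handler";  // container-level operations go to one handler
          case PUT_KEY:
            return "key handler";        // key-level operations to another
          default:
            return "chunk handler";      // everything else (chunks, small files, ...)
        }
      }

      public static void main(String[] args) {
        System.out.println(route(Cmd.READ_CONTAINER));  // prints: container handler
      }
    }
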
[31/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js.map
--
diff --git 
a/hadoop-hdds/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js.map 
b/hadoop-hdds/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js.map
new file mode 100644
index 000..594da5a3
--- /dev/null
+++ 
b/hadoop-hdds/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js.map
@@ -0,0 +1 @@
+{"version":3,"file":"nv.d3.min.js","sources":["../src/core.js","../src/dom.js","../src/interactiveLayer.js","../src/tooltip.js","../src/utils.js","../src/models/axis.js","../src/models/boxPlot.js","../src/models/boxPlotChart.js","../src/models/bullet.js","../src/models/bulletChart.js","../src/models/candlestickBar.js","../src/models/cumulativeLineChart.js","../src/models/discreteBar.js","../src/models/discreteBarChart.js","../src/models/distribution.js","../src/models/focus.js","../src/models/forceDirectedGraph.js","../src/models/furiousLegend.js","../src/models/historicalBar.js","../src/models/historicalBarChart.js","../src/models/legend.js","../src/models/line.js","../src/models/lineChart.js","../src/models/linePlusBarChart.js","../src/models/multiBar.js","../src/models/multiBarChart.js","../src/models/multiBarHorizontal.js","../src/models/multiBarHorizontalChart.js","../src/models/multiChart.js","../src/models/ohlcBar.js","../src/models/parallelCoordinates.js","../src/models/para
 
llelCoordinatesChart.js","../src/models/pie.js","../src/models/pieChart.js","../src/models/sankey.js","../src/models/sankeyChart.js","../src/models/scatter.js","../src/models/scatterChart.js","../src/models/sparkline.js","../src/models/sparklinePlus.js","../src/models/stackedArea.js","../src/models/stackedAreaChart.js","../src/models/sunburst.js","../src/models/sunburstChart.js"],"names":["nv","dev","tooltip","utils","models","charts","logs","dom","d3","require","dispatch","Function","prototype","bind","oThis","this","TypeError","aArgs","Array","slice","call","arguments","fToBind","fNOP","fBound","apply","concat","on","e","startTime","Date","endTime","totalTime","log","window","console","length","deprecated","name","info","warn","render","step","active","render_start","renderLoop","chart","graph","i","queue","generate","callback","splice","setTimeout","render_end","addGraph","obj","push","module","exports","write","undefined","fastdom","mutate","read","measure","interactiveGuideline
 
","layer","selection","each","data","mouseHandler","d3mouse","mouse","mouseX","mouseY","subtractMargin","mouseOutAnyReason","isMSIE","event","offsetX","offsetY","target","tagName","className","baseVal","match","margin","left","top","type","availableWidth","availableHeight","relatedTarget","ownerSVGElement","nvPointerEventsClass","elementMouseout","renderGuideLine","hidden","scaleIsOrdinal","xScale","rangeBands","pointXValue","elementIndex","bisect","range","rangeBand","domain","invert","elementMousemove","elementDblclick","elementClick","elementMouseDown","elementMouseUp","container","select","width","height","wrap","selectAll","wrapEnter","enter","append","attr","svgContainer","guideLine","x","showGuideLine","line","NaNtoZero","String","d","exit","remove","scale","linear","ActiveXObject","duration","hideDelay","_","interactiveBisect","values","searchVal","xAccessor","_xAccessor","_cmp","v","bisector","index","max","currentValue","nextIndex","min","nextValue","Math","abs","nearestVa
 
lueIndex","threshold","yDistMax","Infinity","indexToHighlight","forEach","delta","initTooltip","node","document","body","id","classes","style","classed","nvtooltip","enabled","dataSeriesExists","newContent","contentGenerator","innerHTML","positionTooltip","floor","random","gravity","distance","snapDistance","lastPosition","headerEnabled","valueFormatter","headerFormatter","keyFormatter","table","createElement","theadEnter","html","value","tbodyEnter","trowEnter","p","series","highlight","color","total","key","filter","percent","format","opacityScale","opacity","outerHTML","footer","position","pos","clientX","clientY","getComputedStyle","transform","client","getBoundingClientRect","isArray","isObject","calcGravityOffset","tmp","offsetHeight","offsetWidth","clientWidth","documentElement","clientHeight","gravityOffset","interrupt","transition","delay","old_translate","new_translate","round","translateInterpolator","interpolateString","is_hidden","styleTween","options","optionsFunc","_o
 
ptions","Object","create","get","set","chartContainer","fixedTop","offset","point","y","initOptions","windowSize","size","innerWidth","innerHeight","compatMode","a","isFunction","isDate","toString","isNumber","isNaN","windowResize","handler","addEventListener","clear","removeEventListener","getColor","defaultColor","color_scale","ordinal","category20","customTheme","dictionary","getKey","defaultColors","defIndex","pjax","links","content","load","href","f

[52/83] [abbrv] hadoop git commit: HDFS-13405. Addendum: Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar.

2018-04-24 Thread xyao
HDFS-13405. Addendum: Ozone: Rename HDSL to HDDS.
Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b67a7eda
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b67a7eda
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b67a7eda

Branch: refs/heads/trunk
Commit: b67a7eda9633bf14756d9e55e9d5f250813501b9
Parents: 651a05a
Author: Anu Engineer 
Authored: Thu Apr 5 12:45:38 2018 -0700
Committer: Anu Engineer 
Committed: Thu Apr 5 12:45:38 2018 -0700

--
 .../hadoop/hdds/server/BaseHttpServer.java  | 218 +++
 .../apache/hadoop/hdds/server/ServerUtils.java  | 139 
 .../hadoop/hdds/server/ServiceRuntimeInfo.java  |  64 ++
 .../hdds/server/ServiceRuntimeInfoImpl.java |  55 +
 .../apache/hadoop/hdds/server/package-info.java |  23 ++
 .../hadoop/hdsl/server/BaseHttpServer.java  | 218 ---
 .../apache/hadoop/hdsl/server/ServerUtils.java  | 139 
 .../hadoop/hdsl/server/ServiceRuntimeInfo.java  |  64 --
 .../hdsl/server/ServiceRuntimeInfoImpl.java |  55 -
 .../apache/hadoop/hdsl/server/package-info.java |  23 --
 .../hadoop/hdds/server/TestBaseHttpServer.java  |  98 +
 .../hadoop/hdsl/server/TestBaseHttpServer.java  |  98 -
 12 files changed, 597 insertions(+), 597 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b67a7eda/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
--
diff --git 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
new file mode 100644
index 000..90de002
--- /dev/null
+++ 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
@@ -0,0 +1,218 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.hdds.server;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.http.HttpConfig;
+import org.apache.hadoop.http.HttpServer2;
+import org.apache.hadoop.net.NetUtils;
+import org.eclipse.jetty.webapp.WebAppContext;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.servlet.http.HttpServlet;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+import static org.apache.hadoop.hdds.HddsUtils.getHostNameFromConfigKeys;
+import static org.apache.hadoop.hdds.HddsUtils.getPortNumberFromConfigKeys;
+
+/**
+ * Base class for HTTP server of the Ozone related components.
+ */
+public abstract class BaseHttpServer {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(BaseHttpServer.class);
+
+  private HttpServer2 httpServer;
+  private final Configuration conf;
+
+  private InetSocketAddress httpAddress;
+  private InetSocketAddress httpsAddress;
+
+  private HttpConfig.Policy policy;
+
+  private String name;
+
+  public BaseHttpServer(Configuration conf, String name) throws IOException {
+this.name = name;
+this.conf = conf;
+if (isEnabled()) {
+  policy = DFSUtil.getHttpPolicy(conf);
+  if (policy.isHttpEnabled()) {
+this.httpAddress = getHttpBindAddress();
+  }
+  if (policy.isHttpsEnabled()) {
+this.httpsAddress = getHttpsBindAddress();
+  }
+  HttpServer2.Builder builder = null;
+  builder = DFSUtil.httpServerTemplateForNNAndJN(conf, this.httpAddress,
+  this.httpsAddress, name, getSpnegoPrincipal(), getKeytabFile());
+
+  final boolean xFrameEnabled = conf.getBoolean(
+  DFSConfigKeys.DFS_XFRAME_OPTION_ENABLED,
+  DFSConfigKeys.DFS_XFRAME_OPTION_ENABLED_DEFAULT);
+
+  final String xFrameOptionValue = conf.getTrimmed(
+  DFSConfigKeys.DFS_XFRAME_OPTION_VALUE,
+  DFSConfigKeys.DFS_XFRAME_OPTION_VALUE_DEFA

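BaseHttpServer above decides which endpoints to open from the cluster's HTTP policy. A short sketch of how that policy is usually selected through configuration; the DFSUtil and HttpConfig calls are the ones the constructor itself uses, while the chosen policy value is just an example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdds.conf.OzoneConfiguration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;
    import org.apache.hadoop.hdfs.DFSUtil;
    import org.apache.hadoop.http.HttpConfig;

    // Illustrative sketch only: pick HTTPS_ONLY and see which endpoints would be bound.
    public final class HttpPolicySketch {
      public static void main(String[] args) {
        Configuration conf = new OzoneConfiguration();
        conf.set(DFSConfigKeys.DFS_HTTP_POLICY_KEY, "HTTPS_ONLY");  // or HTTP_ONLY, HTTP_AND_HTTPS
        HttpConfig.Policy policy = DFSUtil.getHttpPolicy(conf);
        System.out.println("http enabled:  " + policy.isHttpEnabled());
        System.out.println("https enabled: " + policy.isHttpsEnabled());
      }
    }
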
[66/83] [abbrv] hadoop git commit: HDFS-13425. Ozone: Clean-up of ozone related change from hadoop-common-project. Contributed by Lokesh Jain.

2018-04-24 Thread xyao
HDFS-13425. Ozone: Clean-up of ozone related change from hadoop-common-project. 
Contributed by Lokesh Jain.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/40398d35
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/40398d35
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/40398d35

Branch: refs/heads/trunk
Commit: 40398d357b97ce26d0b347ad7d78df3188eab44a
Parents: ea85801
Author: Mukul Kumar Singh 
Authored: Thu Apr 12 13:46:52 2018 +0530
Committer: Mukul Kumar Singh 
Committed: Thu Apr 12 13:46:52 2018 +0530

--
 .../java/org/apache/hadoop/fs/FileUtil.java |  67 +--
 .../main/java/org/apache/hadoop/ipc/RPC.java|   1 +
 .../main/java/org/apache/hadoop/util/Time.java  |   9 --
 .../hadoop/util/concurrent/HadoopExecutors.java |  10 --
 .../org/apache/hadoop/hdds/scm/TestArchive.java | 114 ---
 .../replication/ContainerSupervisor.java|  11 +-
 6 files changed, 12 insertions(+), 200 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/40398d35/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
index 0e349d3..8743be5 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
@@ -38,7 +38,6 @@ import java.nio.file.FileSystems;
 import java.nio.file.Files;
 import java.util.ArrayList;
 import java.util.Enumeration;
-import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ExecutionException;
@@ -48,19 +47,14 @@ import java.util.concurrent.Future;
 import java.util.jar.Attributes;
 import java.util.jar.JarOutputStream;
 import java.util.jar.Manifest;
-import java.util.zip.CRC32;
-import java.util.zip.CheckedOutputStream;
 import java.util.zip.GZIPInputStream;
 import java.util.zip.ZipEntry;
 import java.util.zip.ZipFile;
 import java.util.zip.ZipInputStream;
-import java.util.zip.ZipOutputStream;
 
-import com.google.common.base.Preconditions;
 import org.apache.commons.collections.map.CaseInsensitiveMap;
 import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
 import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
-import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -75,7 +69,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 /**
- * A collection of file-processing util methods.
+ * A collection of file-processing util methods
  */
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
@@ -613,65 +607,6 @@ public class FileUtil {
   }
 
   /**
-   * creates zip archieve of the source dir and writes a zip file.
-   *
-   * @param sourceDir - The directory to zip.
-   * @param archiveName - The destination file, the parent directory is assumed
-   * to exist.
-   * @return Checksum of the Archive.
-   * @throws IOException - Throws if zipFileName already exists or if the
-   * sourceDir does not exist.
-   */
-  public static Long zip(File sourceDir, File archiveName) throws IOException {
-Preconditions.checkNotNull(sourceDir, "source directory cannot be null");
-Preconditions.checkState(sourceDir.exists(), "source directory must " +
-"exist");
-
-Preconditions.checkNotNull(archiveName, "Destination file cannot be null");
-Preconditions.checkNotNull(archiveName.getParent(), "Destination " +
-"directory cannot be null");
-Preconditions.checkState(new File(archiveName.getParent()).exists(),
-"Destination directory must exist");
-Preconditions.checkState(!archiveName.exists(), "Destination file " +
-"already exists. Refusing to overwrite existing file.");
-
-CheckedOutputStream checksum;
-try (FileOutputStream outputStream =
- new FileOutputStream(archiveName)) {
-  checksum = new CheckedOutputStream(outputStream, new CRC32());
-  byte[] data = new byte[BUFFER_SIZE];
-  try (ZipOutputStream out =
-   new ZipOutputStream(new BufferedOutputStream(checksum))) {
-
-Iterator fileIter = FileUtils.iterateFiles(sourceDir, null, 
true);
-while (fileIter.hasNext()) {
-  File file = fileIter.next();
-  LOG.debug("Compressing file : " + file.getPath());
-  try (FileInputStream currentFile = new FileInputStream(file)) {
-Zip

[34/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/framework/src/main/resources/webapps/static/d3-3.5.17.min.js
--
diff --git 
a/hadoop-hdds/framework/src/main/resources/webapps/static/d3-3.5.17.min.js 
b/hadoop-hdds/framework/src/main/resources/webapps/static/d3-3.5.17.min.js
new file mode 100644
index 000..1664873
--- /dev/null
+++ b/hadoop-hdds/framework/src/main/resources/webapps/static/d3-3.5.17.min.js
@@ -0,0 +1,5 @@
+!function(){function n(n){return 
n&&(n.ownerDocument||n.document||n).documentElement}function t(n){return 
n&&(n.ownerDocument&&n.ownerDocument.defaultView||n.document&&n||n.defaultView)}function
 e(n,t){return t>n?-1:n>t?1:n>=t?0:NaN}function r(n){return 
null===n?NaN:+n}function i(n){return!isNaN(n)}function 
u(n){return{left:function(t,e,r,i){for(arguments.length<3&&(r=0),arguments.length<4&&(i=t.length);i>r;){var
 u=r+i>>>1;n(t[u],e)<0?r=u+1:i=u}return 
r},right:function(t,e,r,i){for(arguments.length<3&&(r=0),arguments.length<4&&(i=t.length);i>r;){var
 u=r+i>>>1;n(t[u],e)>0?i=u:r=u+1}return r}}}function o(n){return 
n.length}function a(n){for(var t=1;n*t%1;)t*=10;return t}function 
l(n,t){for(var e in 
t)Object.defineProperty(n.prototype,e,{value:t[e],enumerable:!1})}function 
c(){this._=Object.create(null)}function 
f(n){return(n+="")===bo||n[0]===_o?_o+n:n}function 
s(n){return(n+="")[0]===_o?n.slice(1):n}function h(n){return f(n)in 
this._}function p(n){return(n=f(n))in this._&&delete this
 ._[n]}function g(){var n=[];for(var t in this._)n.push(s(t));return n}function 
v(){var n=0;for(var t in this._)++n;return n}function d(){for(var n in 
this._)return!1;return!0}function y(){this._=Object.create(null)}function 
m(n){return n}function M(n,t,e){return function(){var 
r=e.apply(t,arguments);return r===t?n:r}}function x(n,t){if(t in n)return 
t;t=t.charAt(0).toUpperCase()+t.slice(1);for(var e=0,r=wo.length;r>e;++e){var 
i=wo[e]+t;if(i in n)return i}}function b(){}function _(){}function 
w(n){function t(){for(var 
t,r=e,i=-1,u=r.length;++ie;e++)for(var 
i,u=n[e],o=0,a=u.length;a>o;o++)(i=u[o])&&t(i,o,e);return n}function 
Z(n){return ko(n,qo),n}function V(n){var t,e;return function(r,i,u){var 
o,a=n[u].update,l=a.length;for(u!=e&&(e=u,t=0),i>=t&&(t=i+1);!(o=a[t])&&++t0&&(n=n.slice(0,a));var c=To.get(n);return 
c&&(n=c,l=B),a?t?i:r:t?b:u}function $(n,t){return function(e){var 
r=ao.event;ao.event=e,t[0]=this.__data__;try{n.apply(this,t)}finally{ao.event=r}}}function
 B(n,t){var e=$(n,t);return function(n){var 
t=this,r=n.relatedTarget;r&&(r===t||8&r.compareDocumentPosition(t))||e.call(t,n)}}function
 W(e){var r=".dragsuppress-"+ 
++Do,i="click"+r,u=ao.select(t(e)).on("touchmove"+r,S).on("dragstart"+r,S).on("selectstart"+r,S);if(null==Ro&&(Ro="onselectstart"in
 e?!1:x(e.style,"userSelect")),Ro){var o=n(e).style,a=o[Ro];o[Ro]="none"}return 
function(n){if(u.on(r,null),Ro&&(o[Ro]=a),n){var 
t=function(){u.on(i,null)};u.on(i,function(){S(),t()},!0),setTimeout(t,0)}}}function
 J(n,e){e.changedTouches&&(e=e.changedTouches[0]);var 
r=n.ownerSVGElement||n;if(r.createSVGPoint){var 
i=r.createSVGPoint();if(0>Po){var u=t(n);if(u.scrollX||u.scrollY){r=ao.select
 
("body").append("svg").style({position:"absolute",top:0,left:0,margin:0,padding:0,border:"none"},"important");var
 o=r[0][0].getScreenCTM();Po=!(o.f||o.e),r.remove()}}return 
Po?(i.x=e.pageX,i.y=e.pageY):(i.x=e.clientX,i.y=e.clientY),i=i.matrixTransform(n.getScreenCTM().inverse()),[i.x,i.y]}var
 
a=n.getBoundingClientRect();return[e.clientX-a.left-n.clientLeft,e.clientY-a.top-n.clientTop]}function
 G(){return ao.event.changedTouches[0].identifier}function K(n){return 
n>0?1:0>n?-1:0}function 
Q(n,t,e){return(t[0]-n[0])*(e[1]-n[1])-(t[1]-n[1])*(e[0]-n[0])}function 
nn(n){return n>1?0:-1>n?Fo:Math.acos(n)}function tn(n){return 
n>1?Io:-1>n?-Io:Math.asin(n)}function 
en(n){return((n=Math.exp(n))-1/n)/2}function 
rn(n){return((n=Math.exp(n))+1/n)/2}function 
un(n){return((n=Math.exp(2*n))-1)/(n+1)}function 
on(n){return(n=Math.sin(n/2))*n}function an(){}function ln(n,t,e){return this 
instanceof ln?(this.h=+n,this.s=+t,void(this.l=+e)):arguments.length<2?n 
instanceof ln?new ln(n.h,n.s,n.l):_n(""+n,wn
 ,ln):new ln(n,t,e)}function cn(n,t,e){function r(n){return 
n>360?n-=360:0>n&&(n+=360),60>n?u+(o-u)*n/60:180>n?o:240>n?u+(o-u)*(240-n)/60:u}function
 i(n){return Math.round(255*r(n))}var u,o;return 
n=isNaN(n)?0:(n%=360)<0?n+360:n,t=isNaN(t)?0:0>t?0:t>1?1:t,e=0>e?0:e>1?1:e,o=.5>=e?e*(1+t):e+t-e*t,u=2*e-o,new
 mn(i(n+120),i(n),i(n-120))}function fn(n,t,e){return this instanceof 
fn?(this.h=+n,this.c=+t,void(this.l=+e)):arguments.length<2?n instanceof fn?new 
fn(n.h,n.c,n.l):n instanceof 
hn?gn(n.l,n.a,n.b):gn((n=Sn((n=ao.rgb(n)).r,n.g,n.b)).l,n.a,n.b):new 
fn(n,t,e)}function sn(n,t,e){return isNaN(n)&&(n=0),isNaN(t)&&(t=0),new 
hn(e,Math.cos(n*=Yo)*t,Math.sin(n)*t)}function hn(n,t,e){return this instanceof 
hn?(this.l=+n,this.a=+t,void(this.b=+e)):arg

[50/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestLocalBlockCache.java
--
diff --git 
a/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestLocalBlockCache.java
 
b/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestLocalBlockCache.java
index 6eb7ea6..e1e2909 100644
--- 
a/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestLocalBlockCache.java
+++ 
b/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestLocalBlockCache.java
@@ -25,16 +25,16 @@ import 
org.apache.hadoop.cblock.jscsiHelper.CBlockTargetMetrics;
 import org.apache.hadoop.cblock.jscsiHelper.ContainerCacheFlusher;
 import org.apache.hadoop.cblock.jscsiHelper.cache.LogicalBlock;
 import org.apache.hadoop.cblock.jscsiHelper.cache.impl.CBlockLocalCache;
-import org.apache.hadoop.hdsl.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.ozone.MiniOzoneClassicCluster;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.apache.hadoop.ozone.OzoneConsts;
-import org.apache.hadoop.scm.XceiverClientManager;
-import org.apache.hadoop.scm.XceiverClientSpi;
-import org.apache.hadoop.scm.container.common.helpers.Pipeline;
-import 
org.apache.hadoop.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;
-import org.apache.hadoop.scm.storage.ContainerProtocolCalls;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
+import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;
+import org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Time;
 import org.junit.AfterClass;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/kubernetes/TestDynamicProvisioner.java
--
diff --git 
a/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/kubernetes/TestDynamicProvisioner.java
 
b/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/kubernetes/TestDynamicProvisioner.java
index 8d1a865..0268ccc 100644
--- 
a/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/kubernetes/TestDynamicProvisioner.java
+++ 
b/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/kubernetes/TestDynamicProvisioner.java
@@ -29,7 +29,7 @@ import org.junit.Test;
 import java.nio.file.Files;
 import java.nio.file.Paths;
 
-import org.apache.hadoop.hdsl.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 
 /**
  * Test the resource generation of Dynamic Provisioner.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/util/ContainerLookUpService.java
--
diff --git 
a/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/util/ContainerLookUpService.java
 
b/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/util/ContainerLookUpService.java
index 8cb57d6..d7dabe3 100644
--- 
a/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/util/ContainerLookUpService.java
+++ 
b/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/util/ContainerLookUpService.java
@@ -19,7 +19,7 @@ package org.apache.hadoop.cblock.util;
 
 import org.apache.hadoop.cblock.meta.ContainerDescriptor;
 import org.apache.hadoop.ozone.container.ContainerTestHelper;
-import org.apache.hadoop.scm.container.common.helpers.Pipeline;
+import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
 
 import java.io.IOException;
 import java.util.concurrent.ConcurrentHashMap;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/util/MockStorageClient.java
--
diff --git 
a/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/util/MockStorageClient.java
 
b/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/util/MockStorageClient.java
index 59c8e01..9fa76a8 100644
--- 
a/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/util/MockStorageClient.java
+++ 
b/hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/util/MockStorageClient.java
@@ -18,12 +18,12 @@
 package org.apache.hadoop.cblock.util;
 
 import org.apache.hadoop.cblock.meta.ContainerDescriptor;
-import org.apache.hadoop.hdsl.protocol.proto.ContainerProtos.ContainerData;
+import org.apache.hadoop.hdds.protocol.proto.ContainerProtos.ContainerData;
 import org.apache.hadoop.ozone.OzoneConsts;
-import org.apache.ha

[56/83] [abbrv] hadoop git commit: Merge branch 'trunk' into HDFS-7240

2018-04-24 Thread xyao
Merge branch 'trunk' into HDFS-7240


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/df3ff904
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/df3ff904
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/df3ff904

Branch: refs/heads/trunk
Commit: df3ff9042a6327b784ecf90ea8be8f0fe567859e
Parents: bb3c07f 8ab776d
Author: Xiaoyu Yao 
Authored: Tue Apr 10 12:22:50 2018 -0700
Committer: Xiaoyu Yao 
Committed: Tue Apr 10 12:22:50 2018 -0700

--
 BUILDING.txt|14 +
 .../src/main/bin/hadoop-functions.sh| 9 +-
 .../apache/hadoop/crypto/key/KeyProvider.java   |11 +-
 .../fs/CommonConfigurationKeysPublic.java   |21 +
 .../main/java/org/apache/hadoop/ipc/RPC.java|35 +-
 .../org/apache/hadoop/net/NetworkTopology.java  |   106 +-
 .../hadoop/util/concurrent/HadoopExecutors.java | 9 +-
 .../src/site/markdown/HttpAuthentication.md | 2 +-
 .../markdown/release/3.1.0/CHANGES.3.1.0.md |  1022 +
 .../release/3.1.0/RELEASENOTES.3.1.0.md |   199 +
 .../fs/contract/AbstractContractCreateTest.java |12 +-
 .../java/org/apache/hadoop/io/TestIOUtils.java  | 2 +-
 .../hadoop/hdfs/protocol/AclException.java  |10 +
 .../ha/RequestHedgingProxyProvider.java | 3 +
 .../ha/TestRequestHedgingProxyProvider.java |34 +
 .../federation/metrics/NamenodeBeanMetrics.java | 3 +
 .../federation/router/ConnectionContext.java|35 +-
 .../federation/router/ConnectionManager.java|10 +-
 .../federation/router/ConnectionPool.java   |98 +-
 .../federation/router/ConnectionPoolId.java |19 +-
 .../server/federation/router/RemoteMethod.java  |68 +-
 .../router/RouterNamenodeProtocol.java  |   187 +
 .../federation/router/RouterRpcClient.java  |62 +-
 .../federation/router/RouterRpcServer.java  |   141 +-
 .../router/SubClusterTimeoutException.java  |33 +
 .../driver/impl/StateStoreFileSystemImpl.java   | 6 +-
 .../server/federation/MiniRouterDFSCluster.java |39 +-
 .../router/TestConnectionManager.java   |56 +-
 .../server/federation/router/TestRouter.java|70 +-
 .../federation/router/TestRouterQuota.java  | 4 +
 .../router/TestRouterRPCClientRetries.java  |   126 +-
 .../server/federation/router/TestRouterRpc.java |   136 +-
 .../src/test/resources/contract/webhdfs.xml | 5 +
 .../jdiff/Apache_Hadoop_HDFS_3.1.0.xml  |   676 +
 .../server/blockmanagement/BlockIdManager.java  |17 +
 .../server/blockmanagement/BlockManager.java| 5 +-
 .../blockmanagement/BlockManagerSafeMode.java   | 2 +-
 .../hdfs/server/blockmanagement/BlocksMap.java  |12 +-
 .../blockmanagement/CorruptReplicasMap.java |35 +-
 .../blockmanagement/InvalidateBlocks.java   |13 +-
 .../server/namenode/EncryptionZoneManager.java  | 8 +-
 .../hadoop/hdfs/server/namenode/FSDirAclOp.java |12 +
 .../hdfs/server/namenode/FSTreeTraverser.java   |   339 +
 .../server/namenode/ReencryptionHandler.java|   615 +-
 .../server/namenode/ReencryptionUpdater.java| 2 +-
 .../src/site/markdown/ArchivalStorage.md| 2 +-
 .../src/site/markdown/MemoryStorage.md  | 2 +-
 .../blockmanagement/TestBlockManager.java   |61 +-
 .../blockmanagement/TestCorruptReplicaInfo.java |48 +-
 .../hdfs/server/namenode/TestReencryption.java  | 3 -
 .../namenode/TestReencryptionHandler.java   |10 +-
 .../apache/hadoop/net/TestNetworkTopology.java  |75 +-
 .../src/test/resources/testCryptoConf.xml   |19 +
 .../Apache_Hadoop_MapReduce_Common_3.1.0.xml|   113 +
 .../Apache_Hadoop_MapReduce_Core_3.1.0.xml  | 28075 +
 .../Apache_Hadoop_MapReduce_JobClient_3.1.0.xml |16 +
 .../jobhistory/JobHistoryEventHandler.java  | 2 +-
 hadoop-project/src/site/site.xml| 4 +
 .../fs/s3a/s3guard/DynamoDBMetadataStore.java   |18 +-
 .../fs/s3a/s3guard/LocalMetadataStore.java  |17 +-
 .../hadoop/fs/s3a/s3guard/MetadataStore.java|12 +
 .../fs/s3a/s3guard/NullMetadataStore.java   | 4 +
 .../hadoop/fs/s3a/s3guard/S3GuardTool.java  |14 +-
 .../site/markdown/tools/hadoop-aws/s3guard.md   |11 +-
 .../s3guard/AbstractS3GuardToolTestBase.java|21 +-
 .../dev-support/findbugs-exclude.xml| 7 +
 .../jdiff/Apache_Hadoop_YARN_Client_3.1.0.xml   |  3146 ++
 .../jdiff/Apache_Hadoop_YARN_Common_3.1.0.xml   |  3034 ++
 .../Apache_Hadoop_YARN_Server_Common_3.1.0.xml  |  1331 +
 .../api/records/AllocationTagNamespaceType.java | 2 +-
 .../timelineservice/SubApplicationEntity.java   |50 +
 .../hadoop/yarn/conf/YarnConfiguration.java |42 +
 .../hadoop-yarn-services-api/pom.xml| 5 +
 .../client/SystemServiceManagerIm

[19/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/common/DeleteBlockGroupResult.java
--
diff --git 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/common/DeleteBlockGroupResult.java
 
b/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/common/DeleteBlockGroupResult.java
deleted file mode 100644
index da56385..000
--- 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/common/DeleteBlockGroupResult.java
+++ /dev/null
@@ -1,94 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.ozone.common;
-
-import 
org.apache.hadoop.hdsl.protocol.proto.ScmBlockLocationProtocolProtos.DeleteScmBlockResult;
-import 
org.apache.hadoop.hdsl.protocol.proto.ScmBlockLocationProtocolProtos.DeleteScmBlockResult.Result;
-import org.apache.hadoop.scm.container.common.helpers.DeleteBlockResult;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.stream.Collectors;
-
-/**
- * Result to delete a group of blocks.
- */
-public class DeleteBlockGroupResult {
-  private String objectKey;
-  private List<DeleteBlockResult> blockResultList;
-  public DeleteBlockGroupResult(String objectKey,
-  List<DeleteBlockResult> blockResultList) {
-this.objectKey = objectKey;
-this.blockResultList = blockResultList;
-  }
-
-  public String getObjectKey() {
-return objectKey;
-  }
-
-  public List<DeleteBlockResult> getBlockResultList() {
-return blockResultList;
-  }
-
-  public List<DeleteScmBlockResult> getBlockResultProtoList() {
-List<DeleteScmBlockResult> resultProtoList =
-new ArrayList<>(blockResultList.size());
-for (DeleteBlockResult result : blockResultList) {
-  DeleteScmBlockResult proto = DeleteScmBlockResult.newBuilder()
-  .setKey(result.getKey())
-  .setResult(result.getResult()).build();
-  resultProtoList.add(proto);
-}
-return resultProtoList;
-  }
-
-  public static List<DeleteBlockResult> convertBlockResultProto(
-  List<DeleteScmBlockResult> results) {
-List<DeleteBlockResult> protoResults = new ArrayList<>(results.size());
-for (DeleteScmBlockResult result : results) {
-  protoResults.add(new DeleteBlockResult(result.getKey(),
-  result.getResult()));
-}
-return protoResults;
-  }
-
-  /**
-   * Only if all blocks are successfully deleted, this group is considered
-   * to be successfully executed.
-   *
-   * @return true if all blocks are successfully deleted, false otherwise.
-   */
-  public boolean isSuccess() {
-for (DeleteBlockResult result : blockResultList) {
-  if (result.getResult() != Result.success) {
-return false;
-  }
-}
-return true;
-  }
-
-  /**
-   * @return A list of deletion failed block IDs.
-   */
-  public List<String> getFailedBlocks() {
-List<String> failedBlocks = blockResultList.stream()
-.filter(result -> result.getResult() != Result.success)
-.map(DeleteBlockResult::getKey).collect(Collectors.toList());
-return failedBlocks;
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/common/InconsistentStorageStateException.java
--
diff --git 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/common/InconsistentStorageStateException.java
 
b/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/common/InconsistentStorageStateException.java
deleted file mode 100644
index c3f9234..000
--- 
a/hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/common/InconsistentStorageStateException.java
+++ /dev/null
@@ -1,51 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed un

[71/83] [abbrv] hadoop git commit: Merge branch 'trunk' into HDFS-7240

2018-04-24 Thread xyao
Merge branch 'trunk' into HDFS-7240


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/72a3743c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/72a3743c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/72a3743c

Branch: refs/heads/trunk
Commit: 72a3743cc49d9c7b8d2eaec8064a25b8d890c267
Parents: 66610b5 995cba6
Author: Xiaoyu Yao 
Authored: Fri Apr 13 17:00:19 2018 -0700
Committer: Xiaoyu Yao 
Committed: Fri Apr 13 17:00:19 2018 -0700

--
 .../org/apache/hadoop/conf/Configuration.java   |  11 +-
 .../crypto/key/kms/KMSClientProvider.java   | 212 
 .../crypto/key/kms/KMSDelegationToken.java  |  22 +-
 .../crypto/key/kms/KMSLegacyTokenRenewer.java   |  56 ++
 .../hadoop/crypto/key/kms/KMSTokenRenewer.java  | 103 
 .../hadoop/crypto/key/kms/package-info.java |  18 +
 .../apache/hadoop/fs/ChecksumFileSystem.java|   9 +-
 .../fs/CommonConfigurationKeysPublic.java   |  10 +
 .../hadoop/fs/CompositeCrcFileChecksum.java |  82 +++
 .../java/org/apache/hadoop/fs/FileSystem.java   |   2 +-
 .../main/java/org/apache/hadoop/fs/Options.java |  11 +
 .../org/apache/hadoop/fs/shell/Command.java |  69 ++-
 .../apache/hadoop/fs/shell/CopyCommands.java|   6 +
 .../java/org/apache/hadoop/fs/shell/Ls.java |  26 +-
 .../org/apache/hadoop/fs/shell/PathData.java|  27 +
 .../web/DelegationTokenAuthenticatedURL.java|  21 +-
 .../DelegationTokenAuthenticationHandler.java   |   8 +-
 .../web/DelegationTokenAuthenticator.java   |   2 +-
 .../hadoop/service/launcher/IrqHandler.java |   2 +-
 .../org/apache/hadoop/util/CrcComposer.java | 187 +++
 .../java/org/apache/hadoop/util/CrcUtil.java| 220 
 .../org/apache/hadoop/util/DataChecksum.java|  18 +
 .../java/org/apache/hadoop/util/KMSUtil.java|  45 +-
 .../hadoop/util/KMSUtilFaultInjector.java   |  49 ++
 ...apache.hadoop.security.token.TokenIdentifier |   1 +
 ...rg.apache.hadoop.security.token.TokenRenewer |   3 +-
 .../src/main/resources/core-default.xml |  20 +
 .../apache/hadoop/conf/TestConfiguration.java   |  26 +-
 .../crypto/key/kms/TestKMSClientProvider.java   | 162 ++
 .../kms/TestLoadBalancingKMSClientProvider.java |  67 ++-
 .../apache/hadoop/fs/shell/find/TestFind.java   |  34 +-
 .../org/apache/hadoop/util/TestCrcComposer.java | 242 +
 .../org/apache/hadoop/util/TestCrcUtil.java | 232 +
 .../org/apache/hadoop/util/TestKMSUtil.java |  65 +++
 .../hadoop/crypto/key/kms/server/TestKMS.java   | 519 ---
 .../main/java/org/apache/hadoop/fs/Hdfs.java|   4 +-
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  56 +-
 .../hadoop/hdfs/DistributedFileSystem.java  |   5 +-
 .../apache/hadoop/hdfs/FileChecksumHelper.java  | 365 -
 .../hdfs/client/HdfsClientConfigKeys.java   |   2 +
 .../hadoop/hdfs/client/impl/DfsClientConf.java  |  27 +
 .../hdfs/protocol/BlockChecksumOptions.java |  54 ++
 .../hadoop/hdfs/protocol/BlockChecksumType.java |  30 ++
 .../datatransfer/DataTransferProtocol.java  |  12 +-
 .../hdfs/protocol/datatransfer/Sender.java  |  11 +-
 .../hadoop/hdfs/protocolPB/PBHelperClient.java  |  44 ++
 .../ha/RequestHedgingProxyProvider.java |   3 -
 .../hdfs/shortcircuit/ShortCircuitCache.java|  11 +-
 .../src/main/proto/datatransfer.proto   |   7 +-
 .../src/main/proto/hdfs.proto   |  21 +
 .../ha/TestRequestHedgingProxyProvider.java |  34 --
 .../native/libhdfspp/include/hdfspp/hdfspp.h|  53 +-
 .../native/libhdfspp/include/hdfspp/ioservice.h | 140 +
 .../native/libhdfspp/lib/bindings/c/hdfs.cc |   7 +-
 .../native/libhdfspp/lib/common/CMakeLists.txt  |   2 +-
 .../native/libhdfspp/lib/common/async_stream.h  |  13 +-
 .../libhdfspp/lib/common/continuation/asio.h|   5 -
 .../libhdfspp/lib/common/hdfs_ioservice.cc  | 146 --
 .../libhdfspp/lib/common/hdfs_ioservice.h   |  79 ---
 .../libhdfspp/lib/common/ioservice_impl.cc  | 159 ++
 .../libhdfspp/lib/common/ioservice_impl.h   |  76 +++
 .../main/native/libhdfspp/lib/common/logging.h  |   3 -
 .../libhdfspp/lib/common/namenode_info.cc   |  15 +-
 .../native/libhdfspp/lib/common/namenode_info.h |   8 +-
 .../main/native/libhdfspp/lib/common/util.cc|  14 +-
 .../src/main/native/libhdfspp/lib/common/util.h |  25 +-
 .../lib/connection/datanodeconnection.cc|  27 +-
 .../lib/connection/datanodeconnection.h |  26 +-
 .../main/native/libhdfspp/lib/fs/filehandle.cc  |  18 +-
 .../main/native/libhdfspp/lib/fs/filehandle.h   |  12 +-
 .../main/native/libhdfspp/lib/fs/filesystem.cc  |  67 ++-
 .../main/native/libhdfspp/lib/fs/filesystem.h   |  66 +--
 .../libhdfspp/lib/fs/namenode_operations.h  |   4 +-
 .../native/libhdfspp/lib/reader/block_reader.cc |  18 +-
 .../native/libhdfspp/lib/reader/block_

[39/83] [abbrv] [partial] hadoop git commit: HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek Marton, Mukul Kumar Singh, Shashikant Banerjee and Anu Engineer.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/651a05a1/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
new file mode 100644
index 000..ac245d5
--- /dev/null
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
@@ -0,0 +1,157 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.container.common.states.datanode;
+
+import com.google.common.base.Strings;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.HddsUtils;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.ozone.container.common.helpers.ContainerUtils;
+import org.apache.hadoop.ozone.container.common.statemachine
+.DatanodeStateMachine;
+import org.apache.hadoop.ozone.container.common.statemachine
+.SCMConnectionManager;
+import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
+import org.apache.hadoop.ozone.container.common.states.DatanodeState;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.Collection;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+import static org.apache.hadoop.hdds.HddsUtils.getSCMAddresses;
+
+/**
+ * Init Datanode State is the task that gets run when we are in Init State.
+ */
+public class InitDatanodeState implements DatanodeState,
+Callable<DatanodeStateMachine.DatanodeStates> {
+  static final Logger LOG = LoggerFactory.getLogger(InitDatanodeState.class);
+  private final SCMConnectionManager connectionManager;
+  private final Configuration conf;
+  private final StateContext context;
+  private Future<DatanodeStateMachine.DatanodeStates> result;
+
+  /**
+   *  Create InitDatanodeState Task.
+   *
+   * @param conf - Conf
+   * @param connectionManager - Connection Manager
+   * @param context - Current Context
+   */
+  public InitDatanodeState(Configuration conf,
+   SCMConnectionManager connectionManager,
+   StateContext context) {
+this.conf = conf;
+this.connectionManager = connectionManager;
+this.context = context;
+  }
+
+  /**
+   * Computes a result, or throws an exception if unable to do so.
+   *
+   * @return computed result
+   * @throws Exception if unable to compute a result
+   */
+  @Override
+  public DatanodeStateMachine.DatanodeStates call() throws Exception {
+Collection<InetSocketAddress> addresses = null;
+try {
+  addresses = getSCMAddresses(conf);
+} catch (IllegalArgumentException e) {
+  if(!Strings.isNullOrEmpty(e.getMessage())) {
+LOG.error("Failed to get SCM addresses: " + e.getMessage());
+  }
+  return DatanodeStateMachine.DatanodeStates.SHUTDOWN;
+}
+
+if (addresses == null || addresses.isEmpty()) {
+  LOG.error("Null or empty SCM address list found.");
+  return DatanodeStateMachine.DatanodeStates.SHUTDOWN;
+} else {
+  for (InetSocketAddress addr : addresses) {
+connectionManager.addSCMServer(addr);
+  }
+}
+
+// If datanode ID is set, persist it to the ID file.
+persistContainerDatanodeDetails();
+
+return this.context.getState().getNextState();
+  }
+
+  /**
+   * Persist DatanodeDetails to datanode.id file.
+   */
+  private void persistContainerDatanodeDetails() throws IOException {
+String dataNodeIDPath = HddsUtils.getDatanodeIdFilePath(conf);
+File idPath = new File(dataNodeIDPath);
+DatanodeDetails datanodeDetails = this.context.getParent()
+.getDatanodeDetails();
+if (datanodeDetails != null && !idPath.exists(

[72/83] [abbrv] hadoop git commit: HDFS-13446. Ozone: Fix OzoneFileSystem contract test failures. Contributed by Mukul Kumar Singh.

2018-04-24 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fd84dea0/hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java
--
diff --git 
a/hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java
 
b/hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java
deleted file mode 100644
index 97ec3f4..000
--- 
a/hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java
+++ /dev/null
@@ -1,125 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.ozone.contract;
-
-import org.apache.commons.lang.RandomStringUtils;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.OzoneConfiguration;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.contract.AbstractFSContract;
-import org.apache.hadoop.fs.ozone.Constants;
-import org.apache.hadoop.hdfs.server.datanode.DataNode;
-import org.apache.hadoop.hdfs.server.datanode.ObjectStoreHandler;
-import org.apache.hadoop.ozone.MiniOzoneClassicCluster;
-import org.apache.hadoop.ozone.OzoneConsts;
-import org.apache.hadoop.ozone.client.rest.OzoneException;
-import org.apache.hadoop.ozone.web.handlers.BucketArgs;
-import org.apache.hadoop.ozone.web.handlers.UserArgs;
-import org.apache.hadoop.ozone.web.handlers.VolumeArgs;
-import org.apache.hadoop.ozone.web.interfaces.StorageHandler;
-import org.apache.hadoop.ozone.web.utils.OzoneUtils;
-import org.apache.hadoop.ozone.ksm.KSMConfigKeys;
-import org.apache.hadoop.hdds.scm.ScmConfigKeys;
-import org.junit.Assert;
-
-import java.io.IOException;
-
-/**
- * The contract of Ozone: only enabled if the test bucket is provided.
- */
-class OzoneContract extends AbstractFSContract {
-
-  private static MiniOzoneClassicCluster cluster;
-  private static StorageHandler storageHandler;
-  private static final String CONTRACT_XML = "contract/ozone.xml";
-
-  OzoneContract(Configuration conf) {
-super(conf);
-//insert the base features
-addConfResource(CONTRACT_XML);
-  }
-
-  @Override
-  public String getScheme() {
-return Constants.OZONE_URI_SCHEME;
-  }
-
-  @Override
-  public Path getTestPath() {
-Path path = new Path("/test");
-return path;
-  }
-
-  public static void createCluster() throws IOException {
-OzoneConfiguration conf = new OzoneConfiguration();
-conf.addResource(CONTRACT_XML);
-
-cluster =
-new MiniOzoneClassicCluster.Builder(conf).numDataNodes(5)
-.setHandlerType(OzoneConsts.OZONE_HANDLER_DISTRIBUTED).build();
-cluster.waitClusterUp();
-storageHandler = new ObjectStoreHandler(conf).getStorageHandler();
-  }
-
-  private void copyClusterConfigs(String configKey) {
-getConf().set(configKey, cluster.getConf().get(configKey));
-  }
-
-  @Override
-  public FileSystem getTestFileSystem() throws IOException {
-//assumes cluster is not null
-Assert.assertNotNull("cluster not created", cluster);
-
-String userName = "user" + RandomStringUtils.randomNumeric(5);
-String adminName = "admin" + RandomStringUtils.randomNumeric(5);
-String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
-String bucketName = "bucket" + RandomStringUtils.randomNumeric(5);
-
-
-UserArgs userArgs = new UserArgs(null, OzoneUtils.getRequestID(),
-null, null, null, null);
-VolumeArgs volumeArgs = new VolumeArgs(volumeName, userArgs);
-volumeArgs.setUserName(userName);
-volumeArgs.setAdminName(adminName);
-BucketArgs bucketArgs = new BucketArgs(volumeName, bucketName, userArgs);
-try {
-  storageHandler.createVolume(volumeArgs);
-  storageHandler.createBucket(bucketArgs);
-} catch (OzoneException e) {
-  throw new IOException(e.getMessage());
-}
-DataNode dataNode = cluster.getDataNodes().get(0);
-final int port = dataNode.getInfoPort();
-
-String uri = String.format("%s://%s.%s/",
-Constants.OZONE_URI_SCHEME, bucketName, volumeName);
-getConf().set("fs.defaultFS", uri);
-copyClusterCon

[59/83] [abbrv] hadoop git commit: HDFS-13348. Ozone: Update IP and hostname in Datanode from SCM's response to the register call. Contributed by Shashikant Banerjee.

2018-04-24 Thread xyao
HDFS-13348. Ozone: Update IP and hostname in Datanode from SCM's response to 
the register call. Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4a8aa0e1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4a8aa0e1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4a8aa0e1

Branch: refs/heads/trunk
Commit: 4a8aa0e1c85944342a80b0a2110fd6210853b0b7
Parents: e6da4d8
Author: Nanda kumar 
Authored: Wed Apr 11 17:05:33 2018 +0530
Committer: Nanda kumar 
Committed: Wed Apr 11 17:05:33 2018 +0530

--
 .../states/endpoint/RegisterEndpointTask.java   | 34 
 .../protocol/commands/RegisteredCommand.java| 88 
 .../StorageContainerDatanodeProtocol.proto  |  2 +
 .../hadoop/hdds/scm/node/SCMNodeManager.java| 19 +++--
 .../ozone/container/common/TestEndPoint.java|  4 +-
 5 files changed, 106 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4a8aa0e1/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java
index 6913896..de186a7 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java
@@ -20,9 +20,10 @@ import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.DatanodeDetailsProto;
 import org.apache.hadoop.ozone.container.common.statemachine
 .EndpointStateMachine;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.SCMRegisteredCmdResponseProto;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -40,7 +41,7 @@ public final class RegisterEndpointTask implements
   private final EndpointStateMachine rpcEndPoint;
   private final Configuration conf;
   private Future<EndpointStateMachine.EndPointStates> result;
-  private DatanodeDetailsProto datanodeDetailsProto;
+  private DatanodeDetails datanodeDetails;
 
   /**
* Creates a register endpoint task.
@@ -57,22 +58,22 @@ public final class RegisterEndpointTask implements
   }
 
   /**
-   * Get the DatanodeDetailsProto Proto.
+   * Get the DatanodeDetails.
*
* @return DatanodeDetailsProto
*/
-  public DatanodeDetailsProto getDatanodeDetailsProto() {
-return datanodeDetailsProto;
+  public DatanodeDetails getDatanodeDetails() {
+return datanodeDetails;
   }
 
   /**
* Set the contiainerNodeID Proto.
*
-   * @param datanodeDetailsProto - Container Node ID.
+   * @param datanodeDetails - Container Node ID.
*/
-  public void setDatanodeDetailsProto(
-  DatanodeDetailsProto datanodeDetailsProto) {
-this.datanodeDetailsProto = datanodeDetailsProto;
+  public void setDatanodeDetails(
+  DatanodeDetails datanodeDetails) {
+this.datanodeDetails = datanodeDetails;
   }
 
   /**
@@ -84,8 +85,8 @@ public final class RegisterEndpointTask implements
   @Override
   public EndpointStateMachine.EndPointStates call() throws Exception {
 
-if (getDatanodeDetailsProto() == null) {
-  LOG.error("Container ID proto cannot be null in RegisterEndpoint task, " 
+
+if (getDatanodeDetails() == null) {
+  LOG.error("DatanodeDetails cannot be null in RegisterEndpoint task, " +
   "shutting down the endpoint.");
   return 
rpcEndPoint.setState(EndpointStateMachine.EndPointStates.SHUTDOWN);
 }
@@ -94,8 +95,13 @@ public final class RegisterEndpointTask implements
 try {
 
   // TODO : Add responses to the command Queue.
-  rpcEndPoint.getEndPoint().register(datanodeDetailsProto,
-  conf.getStrings(ScmConfigKeys.OZONE_SCM_NAMES));
+  SCMRegisteredCmdResponseProto response = rpcEndPoint.getEndPoint()
+  .register(datanodeDetails.getProtoBufMessage(),
+  conf.getStrings(ScmConfigKeys.OZONE_SCM_NAMES));
+  if (response.hasHostname() && response.hasIpAddress()) {
+datanodeDetails.setHostName(response.getHostname());
+datanodeDetails.setIpAddress(response.getIpAddress());
+  }
   EndpointStateMachine.EndPointStates nextState =
   rpcEndPoint.getState().getNextState();
  

hadoop git commit: YARN-8196. Updated documentation for enabling YARN Service REST API. Contributed by Billie Rinaldi

2018-04-24 Thread eyang
Repository: hadoop
Updated Branches:
  refs/heads/trunk 9d6befb29 -> f64501fcd


YARN-8196.  Updated documentation for enabling YARN Service REST API.
Contributed by Billie Rinaldi


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f64501fc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f64501fc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f64501fc

Branch: refs/heads/trunk
Commit: f64501fcdc9dfa2e9848db0fb4749c6bd4a54d7f
Parents: 9d6befb
Author: Eric Yang 
Authored: Tue Apr 24 19:11:21 2018 -0400
Committer: Eric Yang 
Committed: Tue Apr 24 19:11:21 2018 -0400

--
 .../site/markdown/yarn-service/QuickStart.md| 30 +---
 1 file changed, 14 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f64501fc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
index f563193..e91380c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
@@ -18,10 +18,21 @@ This document describes how to deploy services on YARN 
using the YARN Service fr
 
 
 
-## Start HDFS and YARN components
+## Configure and start HDFS and YARN components
 
- Start all the hadoop components HDFS, YARN as usual.
+Start all the hadoop components for HDFS and YARN as usual.
+To enable the YARN Service framework, add this property to `yarn-site.xml` and 
restart the ResourceManager or set the property before the ResourceManager is 
started.
+This property is required for using the YARN Service framework through the CLI 
or the REST API.
 
+```
+  <property>
+    <description>
+      Enable services rest api on ResourceManager.
+    </description>
+    <name>yarn.webapp.api-service.enable</name>
+    <value>true</value>
+  </property>
+```
 
 ## Example service 
 Below is a simple service definition that launches sleep containers on YARN by 
writing a simple spec file and without writing any code.
@@ -104,20 +115,7 @@ yarn app -destroy ${SERVICE_NAME}
 
 ## Manage services on YARN via REST API
 
-YARN API Server REST API can be activated as part of the ResourceManager.
-
-### Start Embedded API-Server as part of ResourceManager
-For running inside ResourceManager, add this property to `yarn-site.xml` and 
restart ResourceManager.
-
-```
-  <property>
-    <description>
-      Enable services rest api on ResourceManager.
-    </description>
-    <name>yarn.webapp.api-service.enable</name>
-    <value>true</value>
-  </property>
-```
+The YARN API Server REST API is activated as part of the ResourceManager when 
`yarn.webapp.api-service.enable` is set to true.
 
 Services can be deployed on YARN through the ResourceManager web endpoint.
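
As an aside for readers of this QuickStart change, a minimal Java sketch of posting a service spec to that web endpoint. The RM host, the default web port 8088, the /app/v1/services path, and the JSON field values are illustrative assumptions for this sketch, not something this commit defines.

```
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Hedged sketch: POST a YARN service spec (JSON) to the ResourceManager web endpoint. */
public class DeployServiceSketch {
  public static void main(String[] args) throws IOException {
    // Assumed RM address and services path; adjust to your cluster.
    String endpoint = "http://rm-host:8088/app/v1/services";
    // Illustrative spec for a single sleeper component.
    String spec = "{\"name\":\"sleeper-service\",\"version\":\"1.0\","
        + "\"components\":[{\"name\":\"sleeper\",\"number_of_containers\":1,"
        + "\"launch_command\":\"sleep 900000\","
        + "\"resource\":{\"cpus\":1,\"memory\":\"256\"}}]}";

    HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(spec.getBytes(StandardCharsets.UTF_8));
    }
    // A 2xx response means the ResourceManager accepted the spec.
    System.out.println("Response code: " + conn.getResponseCode());
    conn.disconnect();
  }
}
```

The request only succeeds after yarn.webapp.api-service.enable is set to true and the ResourceManager has been restarted, as described in the updated QuickStart text.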
 





hadoop git commit: YARN-8196. Updated documentation for enabling YARN Service REST API. Contributed by Billie Rinaldi

2018-04-24 Thread eyang
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 9209ebae1 -> 678e59987


YARN-8196.  Updated documentation for enabling YARN Service REST API.
Contributed by Billie Rinaldi

(cherry picked from commit f64501fcdc9dfa2e9848db0fb4749c6bd4a54d7f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/678e5998
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/678e5998
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/678e5998

Branch: refs/heads/branch-3.1
Commit: 678e599879fa05f5248a43081a133cdc099d6f3a
Parents: 9209eba
Author: Eric Yang 
Authored: Tue Apr 24 19:11:21 2018 -0400
Committer: Eric Yang 
Committed: Tue Apr 24 19:17:15 2018 -0400

--
 .../site/markdown/yarn-service/QuickStart.md| 30 +---
 1 file changed, 14 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/678e5998/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
index f563193..e91380c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
@@ -18,10 +18,21 @@ This document describes how to deploy services on YARN 
using the YARN Service fr
 
 
 
-## Start HDFS and YARN components
+## Configure and start HDFS and YARN components
 
- Start all the hadoop components HDFS, YARN as usual.
+Start all the hadoop components for HDFS and YARN as usual.
+To enable the YARN Service framework, add this property to `yarn-site.xml` and 
restart the ResourceManager or set the property before the ResourceManager is 
started.
+This property is required for using the YARN Service framework through the CLI 
or the REST API.
 
+```
+  <property>
+    <description>
+      Enable services rest api on ResourceManager.
+    </description>
+    <name>yarn.webapp.api-service.enable</name>
+    <value>true</value>
+  </property>
+```
 
 ## Example service 
 Below is a simple service definition that launches sleep containers on YARN by 
writing a simple spec file and without writing any code.
@@ -104,20 +115,7 @@ yarn app -destroy ${SERVICE_NAME}
 
 ## Manage services on YARN via REST API
 
-YARN API Server REST API can be activated as part of the ResourceManager.
-
-### Start Embedded API-Server as part of ResourceManager
-For running inside ResourceManager, add this property to `yarn-site.xml` and 
restart ResourceManager.
-
-```
-  <property>
-    <description>
-      Enable services rest api on ResourceManager.
-    </description>
-    <name>yarn.webapp.api-service.enable</name>
-    <value>true</value>
-  </property>
-```
+The YARN API Server REST API is activated as part of the ResourceManager when 
`yarn.webapp.api-service.enable` is set to true.
 
 Services can be deployed on YARN through the ResourceManager web endpoint.
 





hadoop git commit: YARN-8183. Fix ConcurrentModificationException inside RMAppAttemptMetrics#convertAtomicLongMaptoLongMap. (Suma Shivaprasad via wangda)

2018-04-24 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/trunk f64501fcd -> bb3c50476


YARN-8183. Fix ConcurrentModificationException inside 
RMAppAttemptMetrics#convertAtomicLongMaptoLongMap. (Suma Shivaprasad via wangda)

Change-Id: I347871d672001653a3afe2e99adefd74e0d798cd


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bb3c5047
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bb3c5047
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bb3c5047

Branch: refs/heads/trunk
Commit: bb3c504764f807fccba7f28298a12e2296f284cb
Parents: f64501f
Author: Wangda Tan 
Authored: Tue Apr 24 17:42:17 2018 -0700
Committer: Wangda Tan 
Committed: Tue Apr 24 17:42:17 2018 -0700

--
 .../resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java  | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb3c5047/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
index 015cff7..4e75505 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
@@ -20,6 +20,7 @@ package 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt;
 
 import java.util.HashMap;
 import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
@@ -53,8 +54,8 @@ public class RMAppAttemptMetrics {
   
   private ReadLock readLock;
   private WriteLock writeLock;
-  private Map<String, AtomicLong> resourceUsageMap = new HashMap<>();
-  private Map<String, AtomicLong> preemptedResourceMap = new HashMap<>();
+  private Map<String, AtomicLong> resourceUsageMap = new ConcurrentHashMap<>();
+  private Map<String, AtomicLong> preemptedResourceMap = new ConcurrentHashMap<>();
   private RMContext rmContext;
 
   private int[][] localityStatistics =
@@ -97,7 +98,7 @@ public class RMAppAttemptMetrics {
   public Resource getResourcePreempted() {
 try {
   readLock.lock();
-  return resourcePreempted;
+  return Resource.newInstance(resourcePreempted);
 } finally {
   readLock.unlock();
 }
@@ -230,7 +231,7 @@ public class RMAppAttemptMetrics {
   }
 
   public Resource getApplicationAttemptHeadroom() {
-return applicationHeadroom;
+return Resource.newInstance(applicationHeadroom);
   }
 
   public void setApplicationAttemptHeadRoom(Resource headRoom) {
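
For context on the patch above, a self-contained sketch (plain JDK code, not YARN code; the map keys are illustrative) of the failure mode being fixed: structurally modifying a HashMap while iterating it fails fast with ConcurrentModificationException, while ConcurrentHashMap iterators are weakly consistent and tolerate concurrent updates. The patch also hands out copies via Resource.newInstance so callers never read a Resource object that is still being mutated.

```
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class MapIterationSketch {
  public static void main(String[] args) {
    Map<String, AtomicLong> plain = new HashMap<>();
    plain.put("memory-mb", new AtomicLong(1024));
    plain.put("vcores", new AtomicLong(1));
    try {
      for (Map.Entry<String, AtomicLong> e : plain.entrySet()) {
        // Adding a new key during iteration is a structural modification.
        plain.put("gpu", new AtomicLong(e.getValue().get()));
      }
    } catch (ConcurrentModificationException expected) {
      System.out.println("HashMap iterator failed fast: " + expected);
    }

    Map<String, AtomicLong> concurrent = new ConcurrentHashMap<>(plain);
    for (Map.Entry<String, AtomicLong> e : concurrent.entrySet()) {
      // ConcurrentHashMap iterators are weakly consistent; no exception here.
      concurrent.put("gpu", new AtomicLong(e.getValue().get()));
    }
    System.out.println("ConcurrentHashMap size after iteration: " + concurrent.size());
  }
}
```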





hadoop git commit: YARN-8183. Fix ConcurrentModificationException inside RMAppAttemptMetrics#convertAtomicLongMaptoLongMap. (Suma Shivaprasad via wangda)

2018-04-24 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 678e59987 -> 3043a93d4


YARN-8183. Fix ConcurrentModificationException inside 
RMAppAttemptMetrics#convertAtomicLongMaptoLongMap. (Suma Shivaprasad via wangda)

Change-Id: I347871d672001653a3afe2e99adefd74e0d798cd
(cherry picked from commit bb3c504764f807fccba7f28298a12e2296f284cb)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3043a93d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3043a93d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3043a93d

Branch: refs/heads/branch-3.1
Commit: 3043a93d461fd8b9ccc2ff4b8d17e5430ed77615
Parents: 678e599
Author: Wangda Tan 
Authored: Tue Apr 24 17:42:17 2018 -0700
Committer: Wangda Tan 
Committed: Tue Apr 24 17:44:58 2018 -0700

--
 .../resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java  | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3043a93d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
index 015cff7..4e75505 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
@@ -20,6 +20,7 @@ package 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt;
 
 import java.util.HashMap;
 import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
@@ -53,8 +54,8 @@ public class RMAppAttemptMetrics {
   
   private ReadLock readLock;
   private WriteLock writeLock;
-  private Map<String, AtomicLong> resourceUsageMap = new HashMap<>();
-  private Map<String, AtomicLong> preemptedResourceMap = new HashMap<>();
+  private Map<String, AtomicLong> resourceUsageMap = new ConcurrentHashMap<>();
+  private Map<String, AtomicLong> preemptedResourceMap = new ConcurrentHashMap<>();
   private RMContext rmContext;
 
   private int[][] localityStatistics =
@@ -97,7 +98,7 @@ public class RMAppAttemptMetrics {
   public Resource getResourcePreempted() {
 try {
   readLock.lock();
-  return resourcePreempted;
+  return Resource.newInstance(resourcePreempted);
 } finally {
   readLock.unlock();
 }
@@ -230,7 +231,7 @@ public class RMAppAttemptMetrics {
   }
 
   public Resource getApplicationAttemptHeadroom() {
-return applicationHeadroom;
+return Resource.newInstance(applicationHeadroom);
   }
 
   public void setApplicationAttemptHeadRoom(Resource headRoom) {





hadoop git commit: YARN-8183. Fix ConcurrentModificationException inside RMAppAttemptMetrics#convertAtomicLongMaptoLongMap. (Suma Shivaprasad via wangda)

2018-04-24 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 51194439b -> baa003035


YARN-8183. Fix ConcurrentModificationException inside 
RMAppAttemptMetrics#convertAtomicLongMaptoLongMap. (Suma Shivaprasad via wangda)

Change-Id: I347871d672001653a3afe2e99adefd74e0d798cd
(cherry picked from commit bb3c504764f807fccba7f28298a12e2296f284cb)
(cherry picked from commit 3043a93d461fd8b9ccc2ff4b8d17e5430ed77615)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/baa00303
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/baa00303
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/baa00303

Branch: refs/heads/branch-3.0
Commit: baa003035dcd4e1db941052e5331cbed4dfef0c9
Parents: 5119443
Author: Wangda Tan 
Authored: Tue Apr 24 17:42:17 2018 -0700
Committer: Wangda Tan 
Committed: Tue Apr 24 17:49:32 2018 -0700

--
 .../resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java  | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/baa00303/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
index 015cff7..4e75505 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java
@@ -20,6 +20,7 @@ package 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt;
 
 import java.util.HashMap;
 import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
@@ -53,8 +54,8 @@ public class RMAppAttemptMetrics {
   
   private ReadLock readLock;
   private WriteLock writeLock;
-  private Map<String, AtomicLong> resourceUsageMap = new HashMap<>();
-  private Map<String, AtomicLong> preemptedResourceMap = new HashMap<>();
+  private Map<String, AtomicLong> resourceUsageMap = new ConcurrentHashMap<>();
+  private Map<String, AtomicLong> preemptedResourceMap = new ConcurrentHashMap<>();
   private RMContext rmContext;
 
   private int[][] localityStatistics =
@@ -97,7 +98,7 @@ public class RMAppAttemptMetrics {
   public Resource getResourcePreempted() {
 try {
   readLock.lock();
-  return resourcePreempted;
+  return Resource.newInstance(resourcePreempted);
 } finally {
   readLock.unlock();
 }
@@ -230,7 +231,7 @@ public class RMAppAttemptMetrics {
   }
 
   public Resource getApplicationAttemptHeadroom() {
-return applicationHeadroom;
+return Resource.newInstance(applicationHeadroom);
   }
 
   public void setApplicationAttemptHeadRoom(Resource headRoom) {





hadoop git commit: HADOOP-15385. Many tests are failing in hadoop-distcp project in branch-2. Contributed by Jason Lowe.

2018-04-24 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 af70c69fb -> 2b48854cf


HADOOP-15385. Many tests are failing in hadoop-distcp project in branch-2. 
Contributed by Jason Lowe.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2b48854c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2b48854c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2b48854c

Branch: refs/heads/branch-2
Commit: 2b48854cfd09a048d983c2a4870d9c95573b4fff
Parents: af70c69
Author: Junping Du 
Authored: Wed Apr 25 10:11:41 2018 +0800
Committer: Junping Du 
Committed: Wed Apr 25 10:11:41 2018 +0800

--
 .../test/java/org/apache/hadoop/tools/TestDistCpViewFs.java  | 8 
 .../test/java/org/apache/hadoop/tools/TestIntegration.java   | 8 
 2 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b48854c/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
index 5511e09..cab2754 100644
--- 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
+++ 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
@@ -60,12 +60,12 @@ public class TestDistCpViewFs {
   ConfigUtil.addLink(vConf, "/usr", new URI(fswd.toString())); 
   fs = FileSystem.get(FsConstants.VIEWFS_URI, vConf);
   fs.setWorkingDirectory(new Path("/usr"));
-  listFile = new Path("target/tmp/listing").makeQualified(fs.getUri(),
+  root = new Path("target/TestDistCpViewFs").makeQualified(fs.getUri(),
+  fs.getWorkingDirectory()).toString();
+  listFile = new Path(root, "listing").makeQualified(fs.getUri(),
   fs.getWorkingDirectory());
-  target = new Path("target/tmp/target").makeQualified(fs.getUri(),
+  target = new Path(root, "target").makeQualified(fs.getUri(),
   fs.getWorkingDirectory()); 
-  root = new Path("target/tmp").makeQualified(fs.getUri(),
-  fs.getWorkingDirectory()).toString();
   TestDistCpUtils.delete(fs, root);
 } catch (IOException e) {
   LOG.error("Exception encountered ", e);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b48854c/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
index ee8e7cc..f15d0d4 100644
--- 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
+++ 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
@@ -74,12 +74,12 @@ public class TestIntegration {
   public static void setup() {
 try {
   fs = FileSystem.get(getConf());
-  listFile = new Path("target/tmp/listing").makeQualified(fs.getUri(),
+  root = new Path("target/TestIntegration").makeQualified(fs.getUri(),
+  fs.getWorkingDirectory()).toString();
+  listFile = new Path(root, "listing").makeQualified(fs.getUri(),
   fs.getWorkingDirectory());
-  target = new Path("target/tmp/target").makeQualified(fs.getUri(),
+  target = new Path(root, "target").makeQualified(fs.getUri(),
   fs.getWorkingDirectory());
-  root = new Path("target/tmp").makeQualified(fs.getUri(),
-  fs.getWorkingDirectory()).toString();
   TestDistCpUtils.delete(fs, root);
 } catch (IOException e) {
   LOG.error("Exception encountered ", e);





hadoop git commit: HADOOP-15385. Many tests are failing in hadoop-distcp project in branch-2. Contributed by Jason Lowe.

2018-04-24 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 ac037a3d4 -> fba9ffe9e


HADOOP-15385. Many tests are failing in hadoop-distcp project in branch-2. 
Contributed by Jason Lowe.

(cherry picked from commit 2b48854cfd09a048d983c2a4870d9c95573b4fff)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fba9ffe9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fba9ffe9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fba9ffe9

Branch: refs/heads/branch-2.9
Commit: fba9ffe9edb70a534779c792052a0670427fa168
Parents: ac037a3
Author: Junping Du 
Authored: Wed Apr 25 10:11:41 2018 +0800
Committer: Junping Du 
Committed: Wed Apr 25 10:13:09 2018 +0800

--
 .../test/java/org/apache/hadoop/tools/TestDistCpViewFs.java  | 8 
 .../test/java/org/apache/hadoop/tools/TestIntegration.java   | 8 
 2 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fba9ffe9/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
index 5511e09..cab2754 100644
--- 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
+++ 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
@@ -60,12 +60,12 @@ public class TestDistCpViewFs {
   ConfigUtil.addLink(vConf, "/usr", new URI(fswd.toString())); 
   fs = FileSystem.get(FsConstants.VIEWFS_URI, vConf);
   fs.setWorkingDirectory(new Path("/usr"));
-  listFile = new Path("target/tmp/listing").makeQualified(fs.getUri(),
+  root = new Path("target/TestDistCpViewFs").makeQualified(fs.getUri(),
+  fs.getWorkingDirectory()).toString();
+  listFile = new Path(root, "listing").makeQualified(fs.getUri(),
   fs.getWorkingDirectory());
-  target = new Path("target/tmp/target").makeQualified(fs.getUri(),
+  target = new Path(root, "target").makeQualified(fs.getUri(),
   fs.getWorkingDirectory()); 
-  root = new Path("target/tmp").makeQualified(fs.getUri(),
-  fs.getWorkingDirectory()).toString();
   TestDistCpUtils.delete(fs, root);
 } catch (IOException e) {
   LOG.error("Exception encountered ", e);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fba9ffe9/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
index ee8e7cc..f15d0d4 100644
--- 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
+++ 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
@@ -74,12 +74,12 @@ public class TestIntegration {
   public static void setup() {
 try {
   fs = FileSystem.get(getConf());
-  listFile = new Path("target/tmp/listing").makeQualified(fs.getUri(),
+  root = new Path("target/TestIntegration").makeQualified(fs.getUri(),
+  fs.getWorkingDirectory()).toString();
+  listFile = new Path(root, "listing").makeQualified(fs.getUri(),
   fs.getWorkingDirectory());
-  target = new Path("target/tmp/target").makeQualified(fs.getUri(),
+  target = new Path(root, "target").makeQualified(fs.getUri(),
   fs.getWorkingDirectory());
-  root = new Path("target/tmp").makeQualified(fs.getUri(),
-  fs.getWorkingDirectory()).toString();
   TestDistCpUtils.delete(fs, root);
 } catch (IOException e) {
   LOG.error("Exception encountered ", e);





hadoop git commit: HADOOP-15385. Many tests are failing in hadoop-distcp project in branch-2. Contributed by Jason Lowe.

2018-04-24 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 ed9e1d271 -> 56026c3a8


HADOOP-15385. Many tests are failing in hadoop-distcp project in branch-2. 
Contributed by Jason Lowe.

(cherry picked from commit 2b48854cfd09a048d983c2a4870d9c95573b4fff)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/56026c3a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/56026c3a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/56026c3a

Branch: refs/heads/branch-2.8
Commit: 56026c3a8536d58938c9655eb3037e97e354324e
Parents: ed9e1d2
Author: Junping Du 
Authored: Wed Apr 25 10:11:41 2018 +0800
Committer: Junping Du 
Committed: Wed Apr 25 10:13:36 2018 +0800

--
 .../test/java/org/apache/hadoop/tools/TestDistCpViewFs.java  | 8 
 .../test/java/org/apache/hadoop/tools/TestIntegration.java   | 8 
 2 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/56026c3a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
index 5511e09..cab2754 100644
--- 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
+++ 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
@@ -60,12 +60,12 @@ public class TestDistCpViewFs {
   ConfigUtil.addLink(vConf, "/usr", new URI(fswd.toString())); 
   fs = FileSystem.get(FsConstants.VIEWFS_URI, vConf);
   fs.setWorkingDirectory(new Path("/usr"));
-  listFile = new Path("target/tmp/listing").makeQualified(fs.getUri(),
+  root = new Path("target/TestDistCpViewFs").makeQualified(fs.getUri(),
+  fs.getWorkingDirectory()).toString();
+  listFile = new Path(root, "listing").makeQualified(fs.getUri(),
   fs.getWorkingDirectory());
-  target = new Path("target/tmp/target").makeQualified(fs.getUri(),
+  target = new Path(root, "target").makeQualified(fs.getUri(),
   fs.getWorkingDirectory()); 
-  root = new Path("target/tmp").makeQualified(fs.getUri(),
-  fs.getWorkingDirectory()).toString();
   TestDistCpUtils.delete(fs, root);
 } catch (IOException e) {
   LOG.error("Exception encountered ", e);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/56026c3a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
index ee8e7cc..f15d0d4 100644
--- 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
+++ 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
@@ -74,12 +74,12 @@ public class TestIntegration {
   public static void setup() {
 try {
   fs = FileSystem.get(getConf());
-  listFile = new Path("target/tmp/listing").makeQualified(fs.getUri(),
+  root = new Path("target/TestIntegration").makeQualified(fs.getUri(),
+  fs.getWorkingDirectory()).toString();
+  listFile = new Path(root, "listing").makeQualified(fs.getUri(),
   fs.getWorkingDirectory());
-  target = new Path("target/tmp/target").makeQualified(fs.getUri(),
+  target = new Path(root, "target").makeQualified(fs.getUri(),
   fs.getWorkingDirectory());
-  root = new Path("target/tmp").makeQualified(fs.getUri(),
-  fs.getWorkingDirectory()).toString();
   TestDistCpUtils.delete(fs, root);
 } catch (IOException e) {
   LOG.error("Exception encountered ", e);

