hadoop git commit: HDFS-11035. Better documentation for maintenace mode and upgrade domain.

2017-09-20 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7dd662eaf -> 3bb23f4be


HDFS-11035. Better documentation for maintenace mode and upgrade domain.

(cherry picked from commit ce943eb17a4218d8ac1f5293c6726122371d8442)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3bb23f4b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3bb23f4b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3bb23f4b

Branch: refs/heads/branch-2
Commit: 3bb23f4be9bf91c8fefd77ad6ef0aa3dd7ae9820
Parents: 7dd662e
Author: Ming Ma 
Authored: Wed Sep 20 09:36:33 2017 -0700
Committer: Ming Ma 
Committed: Wed Sep 20 09:42:22 2017 -0700

--
 .../src/site/markdown/HdfsDataNodeAdminGuide.md | 165 ++
 .../src/site/markdown/HdfsUpgradeDomain.md  | 167 +++
 hadoop-project/src/site/site.xml|   2 +
 3 files changed, 334 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bb23f4b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md
new file mode 100644
index 000..d6f288e
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md
@@ -0,0 +1,165 @@
+
+
+HDFS DataNode Admin Guide
+=========================
+
+
+
+Overview
+--------
+
+The Hadoop Distributed File System (HDFS) namenode maintains the states of
+all datanodes. There are two types of states. The first type describes the
+liveness of a datanode, indicating whether the node is live, dead or stale.
+The second type describes the admin state, indicating whether the node is in
+service, decommissioned or under maintenance.
+
+When an administrator decommissions a datanode, the datanode will first be
+transitioned into `DECOMMISSION_INPROGRESS` state. After all blocks belonging
+to that datanode have been fully replicated elsewhere based on each block's
+replication factor, the datanode will be transitioned to `DECOMMISSIONED`
+state. After that, the administrator can shut down the node to perform
+long-term repair and maintenance that could take days or weeks. After the
+machine has been repaired, it can be recommissioned back to the cluster.
+
+Sometimes administrators only need to take datanodes down for minutes or
+hours to perform short-term repair/maintenance. In such a scenario, the HDFS
+block replication overhead incurred by decommission might not be necessary
+and a light-weight process is desirable. That is what maintenance state is
+used for. When an administrator puts a datanode in maintenance state, the
+datanode will first be transitioned to `ENTERING_MAINTENANCE` state. Once all
+blocks belonging to that datanode are minimally replicated elsewhere, the
+datanode will immediately be transitioned to `IN_MAINTENANCE` state. After
+the maintenance has completed, the administrator can take the datanode out of
+maintenance state. In addition, maintenance state supports a timeout that
+allows administrators to configure the maximum duration for which a datanode
+is allowed to stay in maintenance state. After the timeout, the datanode will
+be transitioned out of maintenance state automatically by HDFS without human
+intervention.
+
+In summary, datanode admin operations include the following:
+
+* Decommission
+* Recommission
+* Putting nodes in maintenance state
+* Taking nodes out of maintenance state
+
+And datanode admin states include the following:
+
+* `NORMAL` The node is in service.
+* `DECOMMISSIONED` The node has been decommissioned.
+* `DECOMMISSION_INPROGRESS` The node is being transitioned to DECOMMISSIONED state.
+* `IN_MAINTENANCE` The node is in maintenance state.
+* `ENTERING_MAINTENANCE` The node is being transitioned to maintenance state.
+
+
+Host-level settings
+-------------------
+
+To perform any datanode admin operation, there are two steps.
+
+* Update the host-level configuration files to indicate the desired admin states of targeted datanodes. There are two supported formats for configuration files.
+    * Hostname-only configuration. Each line includes the hostname/IP address for a datanode. That is the default format.
+    * JSON-based configuration. The configuration is in JSON format. Each element maps to one datanode and each datanode can have multiple properties. This format is required to put datanodes into maintenance state.
+
+* Run the following command to have the namenode reload the host-level configuration files.
+`hdfs dfsadmin [-refreshNodes]`
+
+### Hostname-only configuration
+This is the default configuration used by 
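To make the JSON-based configuration step above concrete: a minimal sketch, assuming the `DatanodeAdminProperties`/`CombinedHostsFileWriter` client classes that appear later in this digest, a hypothetical file path and hostname, and that `AdminStates.IN_MAINTENANCE` matches the state names listed above. It writes a combined hosts file that puts one datanode into maintenance state:

    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.hdfs.protocol.DatanodeAdminProperties;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo.AdminStates;
    import org.apache.hadoop.hdfs.util.CombinedHostsFileWriter;

    public class WriteMaintenanceHostsFile {
      public static void main(String[] args) throws Exception {
        // One entry per datanode; adminState defaults to NORMAL when omitted.
        DatanodeAdminProperties dn = new DatanodeAdminProperties();
        dn.setHostName("dn1.example.com");   // hypothetical hostname
        dn.setPort(50010);                   // default datanode port in 2.x
        dn.setAdminState(AdminStates.IN_MAINTENANCE);

        Set<DatanodeAdminProperties> allDNs = new HashSet<>();
        allDNs.add(dn);
        // Serialize the entries as JSON into the file dfs.hosts points at,
        // then run `hdfs dfsadmin -refreshNodes` so the namenode reloads it.
        CombinedHostsFileWriter.writeFile("/etc/hadoop/dfs.hosts.json", allDNs);
      }
    }

Note that the namenode only accepts the JSON format when `dfs.namenode.hosts.provider.classname` is set to `CombinedHostFileManager`, as the HDFS-9005 commit further down in this digest shows.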

hadoop git commit: HDFS-11035. Better documentation for maintenace mode and upgrade domain.

2017-09-20 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 816933722 -> 0006ee681


HDFS-11035. Better documentation for maintenace mode and upgrade domain.

(cherry picked from commit ce943eb17a4218d8ac1f5293c6726122371d8442)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0006ee68
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0006ee68
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0006ee68

Branch: refs/heads/branch-3.0
Commit: 0006ee681a047d8dc7501df1d5dd141cdb0f279e
Parents: 8169337
Author: Ming Ma 
Authored: Wed Sep 20 09:36:33 2017 -0700
Committer: Ming Ma 
Committed: Wed Sep 20 09:38:15 2017 -0700

--
 .../src/site/markdown/HdfsDataNodeAdminGuide.md | 165 ++
 .../src/site/markdown/HdfsUpgradeDomain.md  | 167 +++
 hadoop-project/src/site/site.xml|   4 +-
 3 files changed, 335 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0006ee68/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md
new file mode 100644
index 000..d6f288e
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md
@@ -0,0 +1,165 @@
+
+
+HDFS DataNode Admin Guide
+=========================
+
+
+
+Overview
+--------
+
+The Hadoop Distributed File System (HDFS) namenode maintains the states of
+all datanodes. There are two types of states. The first type describes the
+liveness of a datanode, indicating whether the node is live, dead or stale.
+The second type describes the admin state, indicating whether the node is in
+service, decommissioned or under maintenance.
+
+When an administrator decommissions a datanode, the datanode will first be
+transitioned into `DECOMMISSION_INPROGRESS` state. After all blocks belonging
+to that datanode have been fully replicated elsewhere based on each block's
+replication factor, the datanode will be transitioned to `DECOMMISSIONED`
+state. After that, the administrator can shut down the node to perform
+long-term repair and maintenance that could take days or weeks. After the
+machine has been repaired, it can be recommissioned back to the cluster.
+
+Sometimes administrators only need to take datanodes down for minutes or
+hours to perform short-term repair/maintenance. In such a scenario, the HDFS
+block replication overhead incurred by decommission might not be necessary
+and a light-weight process is desirable. That is what maintenance state is
+used for. When an administrator puts a datanode in maintenance state, the
+datanode will first be transitioned to `ENTERING_MAINTENANCE` state. Once all
+blocks belonging to that datanode are minimally replicated elsewhere, the
+datanode will immediately be transitioned to `IN_MAINTENANCE` state. After
+the maintenance has completed, the administrator can take the datanode out of
+maintenance state. In addition, maintenance state supports a timeout that
+allows administrators to configure the maximum duration for which a datanode
+is allowed to stay in maintenance state. After the timeout, the datanode will
+be transitioned out of maintenance state automatically by HDFS without human
+intervention.
+
+In summary, datanode admin operations include the following:
+
+* Decommission
+* Recommission
+* Putting nodes in maintenance state
+* Taking nodes out of maintenance state
+
+And datanode admin states include the following:
+
+* `NORMAL` The node is in service.
+* `DECOMMISSIONED` The node has been decommissioned.
+* `DECOMMISSION_INPROGRESS` The node is being transitioned to DECOMMISSIONED state.
+* `IN_MAINTENANCE` The node is in maintenance state.
+* `ENTERING_MAINTENANCE` The node is being transitioned to maintenance state.
+
+
+Host-level settings
+-------------------
+
+To perform any datanode admin operation, there are two steps.
+
+* Update the host-level configuration files to indicate the desired admin states of targeted datanodes. There are two supported formats for configuration files.
+    * Hostname-only configuration. Each line includes the hostname/IP address for a datanode. That is the default format.
+    * JSON-based configuration. The configuration is in JSON format. Each element maps to one datanode and each datanode can have multiple properties. This format is required to put datanodes into maintenance state.
+
+* Run the following command to have the namenode reload the host-level configuration files.
+`hdfs dfsadmin [-refreshNodes]`
+
+### Hostname-only configuration
+This is the default 
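To complement the host-file example earlier in this digest, a minimal sketch of reading a combined hosts file back through `CombinedHostsFileReader`, whose post-HDFS-12473 `readFile` signature (returning an array) appears in the diffs below; the path is hypothetical and the `getAdminState` getter name is assumed from the property names shown in `DatanodeAdminProperties`:

    import org.apache.hadoop.hdfs.protocol.DatanodeAdminProperties;
    import org.apache.hadoop.hdfs.util.CombinedHostsFileReader;

    public class DumpHostsFile {
      public static void main(String[] args) throws Exception {
        // After HDFS-12473, readFile returns an array rather than a Set.
        DatanodeAdminProperties[] allDNs =
            CombinedHostsFileReader.readFile("/etc/hadoop/dfs.hosts.json");
        for (DatanodeAdminProperties dn : allDNs) {
          System.out.println(dn.getHostName() + ":" + dn.getPort()
              + " adminState=" + dn.getAdminState());
        }
      }
    }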

hadoop git commit: HDFS-11035. Better documentation for maintenace mode and upgrade domain.

2017-09-20 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 230b85d58 -> ce943eb17


HDFS-11035. Better documentation for maintenace mode and upgrade domain.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ce943eb1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ce943eb1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ce943eb1

Branch: refs/heads/trunk
Commit: ce943eb17a4218d8ac1f5293c6726122371d8442
Parents: 230b85d
Author: Ming Ma 
Authored: Wed Sep 20 09:36:33 2017 -0700
Committer: Ming Ma 
Committed: Wed Sep 20 09:36:33 2017 -0700

--
 .../src/site/markdown/HdfsDataNodeAdminGuide.md | 165 ++
 .../src/site/markdown/HdfsUpgradeDomain.md  | 167 +++
 hadoop-project/src/site/site.xml|   4 +-
 3 files changed, 335 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce943eb1/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md
new file mode 100644
index 000..d6f288e
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md
@@ -0,0 +1,165 @@
+
+
+HDFS DataNode Admin Guide
+=========================
+
+
+
+Overview
+--------
+
+The Hadoop Distributed File System (HDFS) namenode maintains the states of
+all datanodes. There are two types of states. The first type describes the
+liveness of a datanode, indicating whether the node is live, dead or stale.
+The second type describes the admin state, indicating whether the node is in
+service, decommissioned or under maintenance.
+
+When an administrator decommissions a datanode, the datanode will first be
+transitioned into `DECOMMISSION_INPROGRESS` state. After all blocks belonging
+to that datanode have been fully replicated elsewhere based on each block's
+replication factor, the datanode will be transitioned to `DECOMMISSIONED`
+state. After that, the administrator can shut down the node to perform
+long-term repair and maintenance that could take days or weeks. After the
+machine has been repaired, it can be recommissioned back to the cluster.
+
+Sometimes administrators only need to take datanodes down for minutes or
+hours to perform short-term repair/maintenance. In such a scenario, the HDFS
+block replication overhead incurred by decommission might not be necessary
+and a light-weight process is desirable. That is what maintenance state is
+used for. When an administrator puts a datanode in maintenance state, the
+datanode will first be transitioned to `ENTERING_MAINTENANCE` state. Once all
+blocks belonging to that datanode are minimally replicated elsewhere, the
+datanode will immediately be transitioned to `IN_MAINTENANCE` state. After
+the maintenance has completed, the administrator can take the datanode out of
+maintenance state. In addition, maintenance state supports a timeout that
+allows administrators to configure the maximum duration for which a datanode
+is allowed to stay in maintenance state. After the timeout, the datanode will
+be transitioned out of maintenance state automatically by HDFS without human
+intervention.
+
+In summary, datanode admin operations include the following:
+
+* Decommission
+* Recommission
+* Putting nodes in maintenance state
+* Taking nodes out of maintenance state
+
+And datanode admin states include the following:
+
+* `NORMAL` The node is in service.
+* `DECOMMISSIONED` The node has been decommissioned.
+* `DECOMMISSION_INPROGRESS` The node is being transitioned to DECOMMISSIONED state.
+* `IN_MAINTENANCE` The node is in maintenance state.
+* `ENTERING_MAINTENANCE` The node is being transitioned to maintenance state.
+
+
+Host-level settings
+-------------------
+
+To perform any datanode admin operation, there are two steps.
+
+* Update the host-level configuration files to indicate the desired admin states of targeted datanodes. There are two supported formats for configuration files.
+    * Hostname-only configuration. Each line includes the hostname/IP address for a datanode. That is the default format.
+    * JSON-based configuration. The configuration is in JSON format. Each element maps to one datanode and each datanode can have multiple properties. This format is required to put datanodes into maintenance state.
+
+* Run the following command to have the namenode reload the host-level configuration files.
+`hdfs dfsadmin [-refreshNodes]`
+
+### Hostname-only configuration
+This is the default configuration used by the namenode. It only supports node 
decommission and 
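The maintenance timeout mentioned in the overview above is configured per node in the same JSON hosts file. A minimal sketch, assuming a `maintenanceExpireTimeInMS` property on `DatanodeAdminProperties` taking an absolute time in milliseconds since the epoch (the property is not shown in this digest, so treat the setter name as an assumption):

    import java.util.Collections;

    import org.apache.hadoop.hdfs.protocol.DatanodeAdminProperties;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo.AdminStates;
    import org.apache.hadoop.hdfs.util.CombinedHostsFileWriter;

    public class MaintenanceWithTimeout {
      public static void main(String[] args) throws Exception {
        DatanodeAdminProperties dn = new DatanodeAdminProperties();
        dn.setHostName("dn2.example.com");            // hypothetical hostname
        dn.setAdminState(AdminStates.IN_MAINTENANCE);
        // Assumed setter: take the node out of maintenance after four hours.
        dn.setMaintenanceExpireTimeInMS(
            System.currentTimeMillis() + 4L * 60 * 60 * 1000);
        CombinedHostsFileWriter.writeFile("/etc/hadoop/dfs.hosts.json",
            Collections.singleton(dn));
      }
    }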

hadoop git commit: HDFS-12473. Change hosts JSON file format.

2017-09-20 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 a81167e2e -> c54310a63


HDFS-12473. Change hosts JSON file format.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c54310a6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c54310a6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c54310a6

Branch: refs/heads/branch-2.8
Commit: c54310a6383f075eeb6c8b61efcd045cb610c5cd
Parents: a81167e
Author: Ming Ma 
Authored: Wed Sep 20 09:21:32 2017 -0700
Committer: Ming Ma 
Committed: Wed Sep 20 09:21:32 2017 -0700

--
 .../hdfs/util/CombinedHostsFileReader.java  | 75 ++--
 .../hdfs/util/CombinedHostsFileWriter.java  | 26 +++
 .../CombinedHostFileManager.java|  3 +-
 .../hdfs/util/TestCombinedHostsFileReader.java  | 47 +++-
 .../src/test/resources/dfs.hosts.json   | 12 ++--
 .../src/test/resources/legacy.dfs.hosts.json|  5 ++
 6 files changed, 107 insertions(+), 61 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c54310a6/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
index 33acb91..f88aaef 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
@@ -18,58 +18,87 @@
 
 package org.apache.hadoop.hdfs.util;
 
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.map.JsonMappingException;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.EOFException;
 import java.io.FileInputStream;
 import java.io.InputStreamReader;
 import java.io.IOException;
 import java.io.Reader;
-
+import java.util.ArrayList;
 import java.util.Iterator;
-import java.util.Set;
-import java.util.HashSet;
+import java.util.List;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.codehaus.jackson.JsonFactory;
-import org.codehaus.jackson.map.ObjectMapper;
-
 import org.apache.hadoop.hdfs.protocol.DatanodeAdminProperties;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /**
- * Reader support for JSON based datanode configuration, an alternative
+ * Reader support for JSON-based datanode configuration, an alternative format
  * to the exclude/include files configuration.
- * The JSON file format is the array of elements where each element
+ * The JSON file format defines the array of elements where each element
  * in the array describes the properties of a datanode. The properties of
- * a datanode is defined in {@link DatanodeAdminProperties}. For example,
+ * a datanode is defined by {@link DatanodeAdminProperties}. For example,
  *
- * {"hostName": "host1"}
- * {"hostName": "host2", "port": 50, "upgradeDomain": "ud0"}
- * {"hostName": "host3", "port": 0, "adminState": "DECOMMISSIONED"}
+ * [
+ *   {"hostName": "host1"},
+ *   {"hostName": "host2", "port": 50, "upgradeDomain": "ud0"},
+ *   {"hostName": "host3", "port": 0, "adminState": "DECOMMISSIONED"}
+ * ]
  */
 @InterfaceAudience.LimitedPrivate({"HDFS"})
 @InterfaceStability.Unstable
 public final class CombinedHostsFileReader {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(CombinedHostsFileReader.class);
+
   private CombinedHostsFileReader() {
   }
 
   /**
* Deserialize a set of DatanodeAdminProperties from a json file.
-   * @param hostsFile the input json file to read from.
+   * @param hostsFile the input json file to read from
* @return the set of DatanodeAdminProperties
* @throws IOException
*/
-  public static Set<DatanodeAdminProperties>
+  public static DatanodeAdminProperties[]
       readFile(final String hostsFile) throws IOException {
-    HashSet<DatanodeAdminProperties> allDNs = new HashSet<>();
-    ObjectMapper mapper = new ObjectMapper();
+    DatanodeAdminProperties[] allDNs = new DatanodeAdminProperties[0];
+    ObjectMapper objectMapper = new ObjectMapper();
+    boolean tryOldFormat = false;
     try (Reader input =
          new InputStreamReader(new FileInputStream(hostsFile), "UTF-8")) {
-      Iterator<DatanodeAdminProperties> iterator =
-          mapper.readValues(new JsonFactory().createJsonParser(input),
-              DatanodeAdminProperties.class);
-      while (iterator.hasNext()) {
-        DatanodeAdminProperties properties = iterator.next();
-
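The diff is truncated above, but the new imports (`JsonMappingException`, `EOFException`, `ArrayList`/`List`) and the `tryOldFormat` flag indicate a two-format read: first attempt the new JSON-array format, then fall back to the legacy one-object-per-line format. A sketch of that logic under those assumptions (codehaus Jackson as imported above; `java.io.File` additionally assumed), continuing inside `readFile` after the declarations shown:

    try {
      // New format: the whole file is one JSON array of DatanodeAdminProperties.
      allDNs = objectMapper.readValue(new File(hostsFile),
          DatanodeAdminProperties[].class);
    } catch (JsonMappingException jme) {
      // Not an array; retry below with the legacy format.
      tryOldFormat = true;
    }
    if (tryOldFormat) {
      // Legacy format: one JSON object per line.
      List<DatanodeAdminProperties> all = new ArrayList<>();
      try (Reader input =
           new InputStreamReader(new FileInputStream(hostsFile), "UTF-8")) {
        Iterator<DatanodeAdminProperties> iterator =
            objectMapper.readValues(new JsonFactory().createJsonParser(input),
                DatanodeAdminProperties.class);
        while (iterator.hasNext()) {
          all.add(iterator.next());
        }
      }
      allDNs = all.toArray(new DatanodeAdminProperties[all.size()]);
    }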

hadoop git commit: HDFS-12473. Change hosts JSON file format.

2017-09-20 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8.2 e6597fe30 -> 7580a10e3


HDFS-12473. Change hosts JSON file format.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7580a10e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7580a10e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7580a10e

Branch: refs/heads/branch-2.8.2
Commit: 7580a10e3ebf6d1c58530af623cb27136b8a3de2
Parents: e6597fe
Author: Ming Ma 
Authored: Wed Sep 20 09:09:57 2017 -0700
Committer: Ming Ma 
Committed: Wed Sep 20 09:09:57 2017 -0700

--
 .../hdfs/util/CombinedHostsFileReader.java  | 75 ++--
 .../hdfs/util/CombinedHostsFileWriter.java  | 26 +++
 .../CombinedHostFileManager.java|  3 +-
 .../hdfs/util/TestCombinedHostsFileReader.java  | 47 +++-
 .../src/test/resources/dfs.hosts.json   | 12 ++--
 .../src/test/resources/legacy.dfs.hosts.json|  5 ++
 6 files changed, 107 insertions(+), 61 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7580a10e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
index 33acb91..f88aaef 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
@@ -18,58 +18,87 @@
 
 package org.apache.hadoop.hdfs.util;
 
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.map.JsonMappingException;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.EOFException;
 import java.io.FileInputStream;
 import java.io.InputStreamReader;
 import java.io.IOException;
 import java.io.Reader;
-
+import java.util.ArrayList;
 import java.util.Iterator;
-import java.util.Set;
-import java.util.HashSet;
+import java.util.List;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.codehaus.jackson.JsonFactory;
-import org.codehaus.jackson.map.ObjectMapper;
-
 import org.apache.hadoop.hdfs.protocol.DatanodeAdminProperties;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /**
- * Reader support for JSON based datanode configuration, an alternative
+ * Reader support for JSON-based datanode configuration, an alternative format
  * to the exclude/include files configuration.
- * The JSON file format is the array of elements where each element
+ * The JSON file format defines the array of elements where each element
  * in the array describes the properties of a datanode. The properties of
- * a datanode is defined in {@link DatanodeAdminProperties}. For example,
+ * a datanode is defined by {@link DatanodeAdminProperties}. For example,
  *
- * {"hostName": "host1"}
- * {"hostName": "host2", "port": 50, "upgradeDomain": "ud0"}
- * {"hostName": "host3", "port": 0, "adminState": "DECOMMISSIONED"}
+ * [
+ *   {"hostName": "host1"},
+ *   {"hostName": "host2", "port": 50, "upgradeDomain": "ud0"},
+ *   {"hostName": "host3", "port": 0, "adminState": "DECOMMISSIONED"}
+ * ]
  */
 @InterfaceAudience.LimitedPrivate({"HDFS"})
 @InterfaceStability.Unstable
 public final class CombinedHostsFileReader {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(CombinedHostsFileReader.class);
+
   private CombinedHostsFileReader() {
   }
 
   /**
* Deserialize a set of DatanodeAdminProperties from a json file.
-   * @param hostsFile the input json file to read from.
+   * @param hostsFile the input json file to read from
* @return the set of DatanodeAdminProperties
* @throws IOException
*/
-  public static Set<DatanodeAdminProperties>
+  public static DatanodeAdminProperties[]
       readFile(final String hostsFile) throws IOException {
-    HashSet<DatanodeAdminProperties> allDNs = new HashSet<>();
-    ObjectMapper mapper = new ObjectMapper();
+    DatanodeAdminProperties[] allDNs = new DatanodeAdminProperties[0];
+    ObjectMapper objectMapper = new ObjectMapper();
+    boolean tryOldFormat = false;
     try (Reader input =
          new InputStreamReader(new FileInputStream(hostsFile), "UTF-8")) {
-      Iterator<DatanodeAdminProperties> iterator =
-          mapper.readValues(new JsonFactory().createJsonParser(input),
-              DatanodeAdminProperties.class);
-      while (iterator.hasNext()) {
-        DatanodeAdminProperties properties = iterator.next();
-

hadoop git commit: HDFS-12473. Change hosts JSON file format.

2017-09-20 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 6581f2dea -> 7dd662eaf


HDFS-12473. Change hosts JSON file format.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7dd662ea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7dd662ea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7dd662ea

Branch: refs/heads/branch-2
Commit: 7dd662eafd5448b9c858e61877632f5cecc0e13e
Parents: 6581f2d
Author: Ming Ma 
Authored: Wed Sep 20 09:08:41 2017 -0700
Committer: Ming Ma 
Committed: Wed Sep 20 09:08:41 2017 -0700

--
 .../hdfs/util/CombinedHostsFileReader.java  | 74 ++--
 .../hdfs/util/CombinedHostsFileWriter.java  | 23 +++---
 .../CombinedHostFileManager.java|  3 +-
 .../hdfs/util/TestCombinedHostsFileReader.java  | 44 +++-
 .../src/test/resources/dfs.hosts.json   | 16 +++--
 .../src/test/resources/legacy.dfs.hosts.json|  7 ++
 6 files changed, 106 insertions(+), 61 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7dd662ea/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
index 9b23ad0..f88aaef 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
@@ -18,59 +18,87 @@
 
 package org.apache.hadoop.hdfs.util;
 
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.map.JsonMappingException;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.EOFException;
 import java.io.FileInputStream;
 import java.io.InputStreamReader;
 import java.io.IOException;
 import java.io.Reader;
+import java.util.ArrayList;
 import java.util.Iterator;
-import java.util.Set;
-import java.util.HashSet;
+import java.util.List;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.codehaus.jackson.JsonFactory;
-import org.codehaus.jackson.map.ObjectMapper;
-import org.codehaus.jackson.map.ObjectReader;
 import org.apache.hadoop.hdfs.protocol.DatanodeAdminProperties;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /**
- * Reader support for JSON based datanode configuration, an alternative
+ * Reader support for JSON-based datanode configuration, an alternative format
  * to the exclude/include files configuration.
- * The JSON file format is the array of elements where each element
+ * The JSON file format defines the array of elements where each element
  * in the array describes the properties of a datanode. The properties of
- * a datanode is defined in {@link DatanodeAdminProperties}. For example,
+ * a datanode is defined by {@link DatanodeAdminProperties}. For example,
  *
- * {"hostName": "host1"}
- * {"hostName": "host2", "port": 50, "upgradeDomain": "ud0"}
- * {"hostName": "host3", "port": 0, "adminState": "DECOMMISSIONED"}
+ * [
+ *   {"hostName": "host1"},
+ *   {"hostName": "host2", "port": 50, "upgradeDomain": "ud0"},
+ *   {"hostName": "host3", "port": 0, "adminState": "DECOMMISSIONED"}
+ * ]
  */
 @InterfaceAudience.LimitedPrivate({"HDFS"})
 @InterfaceStability.Unstable
 public final class CombinedHostsFileReader {
-  private static final ObjectReader READER =
-  new ObjectMapper().reader(DatanodeAdminProperties.class);
-  private static final JsonFactory JSON_FACTORY = new JsonFactory();
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(CombinedHostsFileReader.class);
 
   private CombinedHostsFileReader() {
   }
 
   /**
* Deserialize a set of DatanodeAdminProperties from a json file.
-   * @param hostsFile the input json file to read from.
+   * @param hostsFile the input json file to read from
* @return the set of DatanodeAdminProperties
* @throws IOException
*/
-  public static Set<DatanodeAdminProperties>
+  public static DatanodeAdminProperties[]
       readFile(final String hostsFile) throws IOException {
-    HashSet<DatanodeAdminProperties> allDNs = new HashSet<>();
+    DatanodeAdminProperties[] allDNs = new DatanodeAdminProperties[0];
+    ObjectMapper objectMapper = new ObjectMapper();
+    boolean tryOldFormat = false;
     try (Reader input =
          new InputStreamReader(new FileInputStream(hostsFile), "UTF-8")) {
-      Iterator<DatanodeAdminProperties> iterator =
-

hadoop git commit: HDFS-12473. Change hosts JSON file format.

2017-09-20 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 5c158f2f5 -> 816933722


HDFS-12473. Change hosts JSON file format.

(cherry picked from commit 230b85d5865b7e08fb7aaeab45295b5b966011ef)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/81693372
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/81693372
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/81693372

Branch: refs/heads/branch-3.0
Commit: 816933722af4d96a7b848a461f4228c2099c44c8
Parents: 5c158f2
Author: Ming Ma 
Authored: Wed Sep 20 09:03:59 2017 -0700
Committer: Ming Ma 
Committed: Wed Sep 20 09:05:56 2017 -0700

--
 .../hdfs/util/CombinedHostsFileReader.java  | 67 ++--
 .../hdfs/util/CombinedHostsFileWriter.java  | 23 ---
 .../CombinedHostFileManager.java|  3 +-
 .../hdfs/util/TestCombinedHostsFileReader.java  | 44 -
 .../src/test/resources/dfs.hosts.json   | 16 +++--
 .../src/test/resources/legacy.dfs.hosts.json|  7 ++
 6 files changed, 102 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/81693372/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
index 8da5655..aa8e4c1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
@@ -19,58 +19,85 @@
 package org.apache.hadoop.hdfs.util;
 
 import com.fasterxml.jackson.core.JsonFactory;
+import com.fasterxml.jackson.databind.JsonMappingException;
 import com.fasterxml.jackson.databind.ObjectMapper;
 import com.fasterxml.jackson.databind.ObjectReader;
+
 import java.io.FileInputStream;
 import java.io.InputStreamReader;
 import java.io.IOException;
 import java.io.Reader;
+import java.util.ArrayList;
 import java.util.Iterator;
-import java.util.Set;
-import java.util.HashSet;
+import java.util.List;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.hdfs.protocol.DatanodeAdminProperties;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /**
- * Reader support for JSON based datanode configuration, an alternative
+ * Reader support for JSON-based datanode configuration, an alternative format
  * to the exclude/include files configuration.
- * The JSON file format is the array of elements where each element
+ * The JSON file format defines the array of elements where each element
  * in the array describes the properties of a datanode. The properties of
- * a datanode is defined in {@link DatanodeAdminProperties}. For example,
+ * a datanode is defined by {@link DatanodeAdminProperties}. For example,
  *
- * {"hostName": "host1"}
- * {"hostName": "host2", "port": 50, "upgradeDomain": "ud0"}
- * {"hostName": "host3", "port": 0, "adminState": "DECOMMISSIONED"}
+ * [
+ *   {"hostName": "host1"},
+ *   {"hostName": "host2", "port": 50, "upgradeDomain": "ud0"},
+ *   {"hostName": "host3", "port": 0, "adminState": "DECOMMISSIONED"}
+ * ]
  */
 @InterfaceAudience.LimitedPrivate({"HDFS"})
 @InterfaceStability.Unstable
 public final class CombinedHostsFileReader {
-  private static final ObjectReader READER =
-  new ObjectMapper().readerFor(DatanodeAdminProperties.class);
-  private static final JsonFactory JSON_FACTORY = new JsonFactory();
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(CombinedHostsFileReader.class);
 
   private CombinedHostsFileReader() {
   }
 
   /**
* Deserialize a set of DatanodeAdminProperties from a json file.
-   * @param hostsFile the input json file to read from.
+   * @param hostsFile the input json file to read from
* @return the set of DatanodeAdminProperties
* @throws IOException
*/
-  public static Set<DatanodeAdminProperties>
+  public static DatanodeAdminProperties[]
       readFile(final String hostsFile) throws IOException {
-    HashSet<DatanodeAdminProperties> allDNs = new HashSet<>();
+    DatanodeAdminProperties[] allDNs = new DatanodeAdminProperties[0];
+    ObjectMapper objectMapper = new ObjectMapper();
+    boolean tryOldFormat = false;
     try (Reader input =
          new InputStreamReader(new FileInputStream(hostsFile), "UTF-8")) {
-      Iterator<DatanodeAdminProperties> iterator =
-          READER.readValues(JSON_FACTORY.createParser(input));
-      while 

hadoop git commit: HDFS-12473. Change hosts JSON file format.

2017-09-20 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 7e58b2478 -> 230b85d58


HDFS-12473. Change hosts JSON file format.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/230b85d5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/230b85d5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/230b85d5

Branch: refs/heads/trunk
Commit: 230b85d5865b7e08fb7aaeab45295b5b966011ef
Parents: 7e58b24
Author: Ming Ma 
Authored: Wed Sep 20 09:03:59 2017 -0700
Committer: Ming Ma 
Committed: Wed Sep 20 09:03:59 2017 -0700

--
 .../hdfs/util/CombinedHostsFileReader.java  | 67 ++--
 .../hdfs/util/CombinedHostsFileWriter.java  | 23 ---
 .../CombinedHostFileManager.java|  3 +-
 .../hdfs/util/TestCombinedHostsFileReader.java  | 44 -
 .../src/test/resources/dfs.hosts.json   | 16 +++--
 .../src/test/resources/legacy.dfs.hosts.json|  7 ++
 6 files changed, 102 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/230b85d5/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
index 8da5655..aa8e4c1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
@@ -19,58 +19,85 @@
 package org.apache.hadoop.hdfs.util;
 
 import com.fasterxml.jackson.core.JsonFactory;
+import com.fasterxml.jackson.databind.JsonMappingException;
 import com.fasterxml.jackson.databind.ObjectMapper;
 import com.fasterxml.jackson.databind.ObjectReader;
+
 import java.io.FileInputStream;
 import java.io.InputStreamReader;
 import java.io.IOException;
 import java.io.Reader;
+import java.util.ArrayList;
 import java.util.Iterator;
-import java.util.Set;
-import java.util.HashSet;
+import java.util.List;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.hdfs.protocol.DatanodeAdminProperties;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /**
- * Reader support for JSON based datanode configuration, an alternative
+ * Reader support for JSON-based datanode configuration, an alternative format
  * to the exclude/include files configuration.
- * The JSON file format is the array of elements where each element
+ * The JSON file format defines the array of elements where each element
  * in the array describes the properties of a datanode. The properties of
- * a datanode is defined in {@link DatanodeAdminProperties}. For example,
+ * a datanode is defined by {@link DatanodeAdminProperties}. For example,
  *
- * {"hostName": "host1"}
- * {"hostName": "host2", "port": 50, "upgradeDomain": "ud0"}
- * {"hostName": "host3", "port": 0, "adminState": "DECOMMISSIONED"}
+ * [
+ *   {"hostName": "host1"},
+ *   {"hostName": "host2", "port": 50, "upgradeDomain": "ud0"},
+ *   {"hostName": "host3", "port": 0, "adminState": "DECOMMISSIONED"}
+ * ]
  */
 @InterfaceAudience.LimitedPrivate({"HDFS"})
 @InterfaceStability.Unstable
 public final class CombinedHostsFileReader {
-  private static final ObjectReader READER =
-  new ObjectMapper().readerFor(DatanodeAdminProperties.class);
-  private static final JsonFactory JSON_FACTORY = new JsonFactory();
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(CombinedHostsFileReader.class);
 
   private CombinedHostsFileReader() {
   }
 
   /**
* Deserialize a set of DatanodeAdminProperties from a json file.
-   * @param hostsFile the input json file to read from.
+   * @param hostsFile the input json file to read from
* @return the set of DatanodeAdminProperties
* @throws IOException
*/
-  public static Set<DatanodeAdminProperties>
+  public static DatanodeAdminProperties[]
       readFile(final String hostsFile) throws IOException {
-    HashSet<DatanodeAdminProperties> allDNs = new HashSet<>();
+    DatanodeAdminProperties[] allDNs = new DatanodeAdminProperties[0];
+    ObjectMapper objectMapper = new ObjectMapper();
+    boolean tryOldFormat = false;
     try (Reader input =
          new InputStreamReader(new FileInputStream(hostsFile), "UTF-8")) {
-      Iterator<DatanodeAdminProperties> iterator =
-          READER.readValues(JSON_FACTORY.createParser(input));
-      while (iterator.hasNext()) {
-        DatanodeAdminProperties properties = iterator.next();
-

hadoop git commit: HDFS-9922. Upgrade Domain placement policy status marks a good block in violation when there are decommissioned nodes. (Chris Trezzo via mingma)

2017-05-02 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 ab511286f -> d1ae1fb44


HDFS-9922. Upgrade Domain placement policy status marks a good block in 
violation when there are decommissioned nodes. (Chris Trezzo via mingma)

(cherry picked from commit b48f27e794e42ba90836314834e872616437d7c9)
(cherry picked from commit 32b115da1d75b9b5b6a770371fbecfda43928bb9)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d1ae1fb4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d1ae1fb4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d1ae1fb4

Branch: refs/heads/branch-2.8
Commit: d1ae1fb44946ef3934745a3822ea467da31f422d
Parents: ab51128
Author: Ming Ma <min...@apache.org>
Authored: Wed Jun 15 22:00:52 2016 -0700
Committer: Ming Ma <min...@twitter.com>
Committed: Tue May 2 12:48:51 2017 -0700

--
 .../BlockPlacementStatusWithUpgradeDomain.java  |   2 +-
 ...stBlockPlacementStatusWithUpgradeDomain.java |  83 ++
 .../TestUpgradeDomainBlockPlacementPolicy.java  | 161 ++-
 3 files changed, 209 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1ae1fb4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
index e2e1486..4b3c3cc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
@@ -60,7 +60,7 @@ public class BlockPlacementStatusWithUpgradeDomain implements
 
   private boolean isUpgradeDomainPolicySatisfied() {
 if (numberOfReplicas <= upgradeDomainFactor) {
-  return (numberOfReplicas == upgradeDomains.size());
+  return (numberOfReplicas <= upgradeDomains.size());
 } else {
   return upgradeDomains.size() >= upgradeDomainFactor;
 }
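The one-character fix above addresses the case in the commit subject: replicas on decommissioned nodes can add extra upgrade domains, so the number of distinct domains may exceed `numberOfReplicas`, and the old equality test then flagged a correctly placed block as in violation. A self-contained illustration of the predicate (values hypothetical):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class UpgradeDomainCheckExample {
      public static void main(String[] args) {
        // Replication factor 3, upgrade domain factor 3; one extra replica on
        // a decommissioned node spreads replicas over 4 distinct domains.
        int numberOfReplicas = 3;
        int upgradeDomainFactor = 3;
        Set<String> upgradeDomains =
            new HashSet<>(Arrays.asList("ud0", "ud1", "ud2", "ud3"));
        // Mirrors isUpgradeDomainPolicySatisfied after the fix.
        boolean satisfied = (numberOfReplicas <= upgradeDomainFactor)
            ? numberOfReplicas <= upgradeDomains.size()  // was ==, now <=
            : upgradeDomains.size() >= upgradeDomainFactor;
        System.out.println(satisfied);  // true; the old == check yielded false
      }
    }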

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1ae1fb4/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
new file mode 100644
index 000..bfff932
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
@@ -0,0 +1,83 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.util.HashSet;
+import java.util.Set;
+
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Unit tests for BlockPlacementStatusWithUpgradeDomain class.
+ */
+public class TestBlockPlacementStatusWithUpgradeDomain {
+
+  private Set<String> upgradeDomains;
+  private BlockPlacementStatusDefault bpsd =
+      mock(BlockPlacementStatusDefault.class);
+
+  @Before
+  public void setup() {
+    upgradeDomains = new HashSet<String>();
+    upgradeDomains.add("1");
+    upgradeDomains.add("2");
+    upgradeDomains.add("3");
+   

hadoop git commit: HDFS-9016. Display upgrade domain information in fsck.

2017-05-02 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 42fa35d53 -> 305a9d886


HDFS-9016. Display upgrade domain information in fsck.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/305a9d88
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/305a9d88
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/305a9d88

Branch: refs/heads/branch-2.8
Commit: 305a9d886a8d613080005e366891016b89f65059
Parents: 42fa35d
Author: Ming Ma 
Authored: Tue May 2 09:52:16 2017 -0700
Committer: Ming Ma 
Committed: Tue May 2 09:52:16 2017 -0700

--
 .../hdfs/server/namenode/NamenodeFsck.java  | 25 +--
 .../org/apache/hadoop/hdfs/tools/DFSck.java | 13 ++--
 .../src/site/markdown/HDFSCommands.md   |  3 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 69 +++-
 4 files changed, 97 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/305a9d88/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
index 5eb3b23..3a72b8d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
@@ -118,6 +118,7 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   public static final String DECOMMISSIONED_STATUS = "is DECOMMISSIONED";
   public static final String NONEXISTENT_STATUS = "does not exist";
   public static final String FAILURE_STATUS = "FAILED";
+  public static final String UNDEFINED = "undefined";
 
   private final NameNode namenode;
   private final NetworkTopology networktopology;
@@ -136,6 +137,7 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   private boolean showCorruptFileBlocks = false;
 
   private boolean showReplicaDetails = false;
+  private boolean showUpgradeDomains = false;
   private long staleInterval;
   private Tracer tracer;
 
@@ -216,10 +218,13 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   else if (key.equals("racks")) { this.showRacks = true; }
   else if (key.equals("replicadetails")) {
 this.showReplicaDetails = true;
-  }
-  else if (key.equals("storagepolicies")) { this.showStoragePolcies = 
true; }
-  else if (key.equals("openforwrite")) {this.showOpenFiles = true; }
-  else if (key.equals("listcorruptfileblocks")) {
+  } else if (key.equals("upgradedomains")) {
+this.showUpgradeDomains = true;
+  } else if (key.equals("storagepolicies")) {
+this.showStoragePolcies = true;
+  } else if (key.equals("openforwrite")) {
+this.showOpenFiles = true;
+  } else if (key.equals("listcorruptfileblocks")) {
 this.showCorruptFileBlocks = true;
   } else if (key.equals("startblockafter")) {
 this.currentCookie[0] = pmap.get("startblockafter")[0];
@@ -531,8 +536,8 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
 }
   }
 
-  private void collectBlocksSummary(String parent, HdfsFileStatus file, Result 
res,
-  LocatedBlocks blocks) throws IOException {
+  private void collectBlocksSummary(String parent, HdfsFileStatus file,
+  Result res, LocatedBlocks blocks) throws IOException {
 String path = file.getFullName(parent);
 boolean isOpen = blocks.isUnderConstruction();
 if (isOpen && !showOpenFiles) {
@@ -645,7 +650,8 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
 missize += block.getNumBytes();
   } else {
 report.append(" Live_repl=" + liveReplicas);
-if (showLocations || showRacks || showReplicaDetails) {
+if (showLocations || showRacks || showReplicaDetails ||
+showUpgradeDomains) {
   StringBuilder sb = new StringBuilder("[");
   Iterable storages = 
bm.getStorages(block.getLocalBlock());
   for (Iterator iterator = storages.iterator(); 
iterator.hasNext();) {
@@ -657,6 +663,11 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   sb.append(new DatanodeInfoWithStorage(dnDesc, 
storage.getStorageID(), storage
   .getStorageType()));
 }
+            if (showUpgradeDomains) {
+              String upgradeDomain = (dnDesc.getUpgradeDomain() != null) ?
+                  dnDesc.getUpgradeDomain() : UNDEFINED;
+              sb.append("(ud=" + 
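Judging from the option handling above, the new fsck report is presumably requested as `hdfs fsck <path> -files -blocks -locations -upgradedomains`, and each replica location in the output gains a `(ud=<upgradeDomain>)` suffix, falling back to `(ud=undefined)` when a node has no upgrade domain configured.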

[1/2] hadoop git commit: HDFS-9005. Provide configuration support for upgrade domain.

2017-05-02 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 c9bf21b0f -> c4c553321


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c4c55332/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/HostsFileWriter.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/HostsFileWriter.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/HostsFileWriter.java
new file mode 100644
index 000..cd5ae95
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/HostsFileWriter.java
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.util;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.HashSet;
+
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.server.blockmanagement.HostConfigManager;
+import org.apache.hadoop.hdfs.server.blockmanagement.HostFileManager;
+
+import org.apache.hadoop.hdfs.protocol.DatanodeAdminProperties;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo.AdminStates;
+
+import static org.junit.Assert.assertTrue;
+
+public class HostsFileWriter {
+  private FileSystem localFileSys;
+  private Path fullDir;
+  private Path excludeFile;
+  private Path includeFile;
+  private Path combinedFile;
+  private boolean isLegacyHostsFile = false;
+
+  public void initialize(Configuration conf, String dir) throws IOException {
+localFileSys = FileSystem.getLocal(conf);
+Path workingDir = new Path(MiniDFSCluster.getBaseDirectory());
+this.fullDir = new Path(workingDir, dir);
+assertTrue(localFileSys.mkdirs(this.fullDir));
+
+if (conf.getClass(
+DFSConfigKeys.DFS_NAMENODE_HOSTS_PROVIDER_CLASSNAME_KEY,
+HostFileManager.class, HostConfigManager.class).equals(
+HostFileManager.class)) {
+  isLegacyHostsFile = true;
+}
+if (isLegacyHostsFile) {
+  excludeFile = new Path(fullDir, "exclude");
+  includeFile = new Path(fullDir, "include");
+  DFSTestUtil.writeFile(localFileSys, excludeFile, "");
+  DFSTestUtil.writeFile(localFileSys, includeFile, "");
+  conf.set(DFSConfigKeys.DFS_HOSTS_EXCLUDE, excludeFile.toUri().getPath());
+  conf.set(DFSConfigKeys.DFS_HOSTS, includeFile.toUri().getPath());
+} else {
+  combinedFile = new Path(fullDir, "all");
+  conf.set(DFSConfigKeys.DFS_HOSTS, combinedFile.toString());
+}
+  }
+
+  public void initExcludeHost(String hostNameAndPort) throws IOException {
+if (isLegacyHostsFile) {
+  DFSTestUtil.writeFile(localFileSys, excludeFile, hostNameAndPort);
+} else {
+  DatanodeAdminProperties dn = new DatanodeAdminProperties();
+  String [] hostAndPort = hostNameAndPort.split(":");
+  dn.setHostName(hostAndPort[0]);
+  dn.setPort(Integer.parseInt(hostAndPort[1]));
+  dn.setAdminState(AdminStates.DECOMMISSIONED);
+      HashSet<DatanodeAdminProperties> allDNs = new HashSet<>();
+  allDNs.add(dn);
+  CombinedHostsFileWriter.writeFile(combinedFile.toString(), allDNs);
+}
+  }
+
+  public void initIncludeHosts(String[] hostNameAndPorts) throws IOException {
+StringBuilder includeHosts = new StringBuilder();
+if (isLegacyHostsFile) {
+  for(String hostNameAndPort : hostNameAndPorts) {
+includeHosts.append(hostNameAndPort).append("\n");
+  }
+  DFSTestUtil.writeFile(localFileSys, includeFile,
+  includeHosts.toString());
+} else {
+      HashSet<DatanodeAdminProperties> allDNs = new HashSet<>();
+  for(String hostNameAndPort : hostNameAndPorts) {
+String[] hostAndPort = hostNameAndPort.split(":");
+DatanodeAdminProperties dn = new DatanodeAdminProperties();
+dn.setHostName(hostAndPort[0]);
+dn.setPort(Integer.parseInt(hostAndPort[1]));
+allDNs.add(dn);
+  }
+  

[2/2] hadoop git commit: HDFS-9005. Provide configuration support for upgrade domain.

2017-05-02 Thread mingma
HDFS-9005. Provide configuration support for upgrade domain.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c4c55332
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c4c55332
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c4c55332

Branch: refs/heads/branch-2.8
Commit: c4c5533216eaaa64731a6f0c1bc9be9b1e91f7d6
Parents: c9bf21b
Author: Ming Ma 
Authored: Tue May 2 06:53:32 2017 -0700
Committer: Ming Ma 
Committed: Tue May 2 06:53:32 2017 -0700

--
 .../hdfs/protocol/DatanodeAdminProperties.java  | 100 
 .../apache/hadoop/hdfs/protocol/DatanodeID.java |   6 +
 .../hdfs/util/CombinedHostsFileReader.java  |  76 ++
 .../hdfs/util/CombinedHostsFileWriter.java  |  69 +
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   4 +-
 .../CombinedHostFileManager.java| 250 +++
 .../server/blockmanagement/DatanodeManager.java |  59 +++--
 .../blockmanagement/HostConfigManager.java  |  80 ++
 .../server/blockmanagement/HostFileManager.java | 147 +++
 .../hdfs/server/blockmanagement/HostSet.java| 114 +
 .../src/main/resources/hdfs-default.xml |  15 ++
 .../src/site/markdown/HdfsUserGuide.md  |   6 +-
 .../apache/hadoop/hdfs/TestDatanodeReport.java  |  56 -
 .../TestBlocksWithNotEnoughRacks.java   |  33 +--
 .../blockmanagement/TestDatanodeManager.java|   8 +-
 .../blockmanagement/TestHostFileManager.java|  10 +-
 .../hdfs/server/namenode/TestHostsFiles.java|  70 +++---
 .../server/namenode/TestNameNodeMXBean.java |  25 +-
 .../hdfs/server/namenode/TestStartup.java   |  53 +---
 .../TestUpgradeDomainBlockPlacementPolicy.java  | 169 +
 .../hadoop/hdfs/util/HostsFileWriter.java   | 122 +
 .../hdfs/util/TestCombinedHostsFileReader.java  |  79 ++
 .../src/test/resources/dfs.hosts.json   |   5 +
 23 files changed, 1291 insertions(+), 265 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c4c55332/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
new file mode 100644
index 000..9f7b983
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocol;
+
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo.AdminStates;
+
+/**
+ * The class describes the configured admin properties for a datanode.
+ *
+ * It is the static configuration specified by administrators via dfsadmin
+ * command; different from the runtime state. CombinedHostFileManager uses
+ * the class to deserialize the configurations from json-based file format.
+ *
+ * To decommission a node, use AdminStates.DECOMMISSIONED.
+ */
+public class DatanodeAdminProperties {
+  private String hostName;
+  private int port;
+  private String upgradeDomain;
+  private AdminStates adminState = AdminStates.NORMAL;
+
+  /**
+   * Return the host name of the datanode.
+   * @return the host name of the datanode.
+   */
+  public String getHostName() {
+return hostName;
+  }
+
+  /**
+   * Set the host name of the datanode.
+   * @param hostName the host name of the datanode.
+   */
+  public void setHostName(final String hostName) {
+this.hostName = hostName;
+  }
+
+  /**
+   * Get the port number of the datanode.
+   * @return the port number of the datanode.
+   */
+  public int getPort() {
+return port;
+  }
+
+  /**
+   * Set the port number of the datanode.
+   * @param port the port number of the 

hadoop git commit: YARN-5797. Add metrics to the node manager for cleaning the PUBLIC and PRIVATE caches. (Chris Trezzo via mingma)

2017-04-06 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 0391c9202 -> db5b4c292


YARN-5797. Add metrics to the node manager for cleaning the PUBLIC and PRIVATE 
caches. (Chris Trezzo via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/db5b4c29
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/db5b4c29
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/db5b4c29

Branch: refs/heads/branch-2
Commit: db5b4c292b3b207cce4eb1157d0e6d89d1f59298
Parents: 0391c92
Author: Ming Ma <min...@apache.org>
Authored: Thu Apr 6 17:08:59 2017 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Thu Apr 6 17:08:59 2017 -0700

--
 .../containermanager/ContainerManagerImpl.java  |  8 ++--
 .../localizer/ResourceLocalizationService.java  | 13 ++-
 .../nodemanager/metrics/NodeManagerMetrics.java | 41 
 .../nodemanager/DummyContainerManager.java  |  5 ++-
 .../TestContainerManagerRecovery.java   | 11 --
 .../localizer/TestLocalCacheCleanup.java| 17 +++-
 .../TestLocalCacheDirectoryManager.java |  8 +++-
 .../TestResourceLocalizationService.java| 36 +
 8 files changed, 111 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/db5b4c29/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
index 024f8b8..20e2ba0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
@@ -225,7 +225,8 @@ public class ContainerManagerImpl extends CompositeService 
implements
 this.metrics = metrics;
 
 rsrcLocalizationSrvc =
-createResourceLocalizationService(exec, deletionContext, context);
+createResourceLocalizationService(exec, deletionContext, context,
+metrics);
 addService(rsrcLocalizationSrvc);
 
 containersLauncher = createContainersLauncher(context, exec);
@@ -452,9 +453,10 @@ public class ContainerManagerImpl extends CompositeService 
implements
   }
 
   protected ResourceLocalizationService createResourceLocalizationService(
-  ContainerExecutor exec, DeletionService deletionContext, Context 
context) {
+  ContainerExecutor exec, DeletionService deletionContext,
+  Context nmContext, NodeManagerMetrics nmMetrics) {
 return new ResourceLocalizationService(this.dispatcher, exec,
-deletionContext, dirsHandler, context);
+deletionContext, dirsHandler, nmContext, nmMetrics);
   }
 
   protected SharedCacheUploadService createSharedCacheUploaderService() {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/db5b4c29/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
index 37473e3..2208f8f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
@@ -131,6 +131,7 @@ import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.even
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.security.LocalizerTokenIden

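The diffstat above also adds 41 lines to NodeManagerMetrics.java, which this
digest truncates. An illustrative sketch of what such cache-cleanup counters
look like with the Hadoop metrics2 API (the metric and method names below are
made up for illustration, not the committed ones):

import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "NodeManager local cache cleanup", context = "yarn")
public class CacheCleanupMetricsSketch {
  @Metric("Bytes deleted from the PUBLIC and PRIVATE caches")
  MutableCounterLong cacheBytesDeleted;

  @Metric("Files deleted from the PUBLIC and PRIVATE caches")
  MutableCounterLong cacheFilesDeleted;

  // ResourceLocalizationService can record a cleanup pass here, which is
  // why createResourceLocalizationService() now threads the metrics through.
  public void cleanupCompleted(long bytes, long files) {
    cacheBytesDeleted.incr(bytes);
    cacheFilesDeleted.incr(files);
  }
}
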
hadoop git commit: YARN-5797. Add metrics to the node manager for cleaning the PUBLIC and PRIVATE caches. (Chris Trezzo via mingma)

2017-04-06 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0eacd4c13 -> 0116c3c95


YARN-5797. Add metrics to the node manager for cleaning the PUBLIC and PRIVATE 
caches. (Chris Trezzo via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0116c3c9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0116c3c9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0116c3c9

Branch: refs/heads/trunk
Commit: 0116c3c95769e204ab2600510f0efd6baafb43e0
Parents: 0eacd4c
Author: Ming Ma <min...@apache.org>
Authored: Thu Apr 6 16:54:43 2017 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Thu Apr 6 16:54:43 2017 -0700

--
 .../containermanager/ContainerManagerImpl.java  |  8 ++--
 .../localizer/ResourceLocalizationService.java  | 13 ++-
 .../nodemanager/metrics/NodeManagerMetrics.java | 41 
 .../nodemanager/DummyContainerManager.java  |  5 ++-
 .../TestContainerManagerRecovery.java   | 11 --
 .../localizer/TestLocalCacheCleanup.java| 17 +++-
 .../TestLocalCacheDirectoryManager.java |  8 +++-
 .../TestResourceLocalizationService.java| 36 +
 8 files changed, 111 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0116c3c9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
index 85dc5fc..d82c728 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
@@ -230,7 +230,8 @@ public class ContainerManagerImpl extends CompositeService 
implements
 this.metrics = metrics;
 
 rsrcLocalizationSrvc =
-createResourceLocalizationService(exec, deletionContext, context);
+createResourceLocalizationService(exec, deletionContext, context,
+metrics);
 addService(rsrcLocalizationSrvc);
 
 containersLauncher = createContainersLauncher(context, exec);
@@ -477,9 +478,10 @@ public class ContainerManagerImpl extends CompositeService 
implements
   }
 
   protected ResourceLocalizationService createResourceLocalizationService(
-  ContainerExecutor exec, DeletionService deletionContext, Context 
context) {
+  ContainerExecutor exec, DeletionService deletionContext,
+  Context nmContext, NodeManagerMetrics nmMetrics) {
 return new ResourceLocalizationService(this.dispatcher, exec,
-deletionContext, dirsHandler, context);
+deletionContext, dirsHandler, nmContext, nmMetrics);
   }
 
   protected SharedCacheUploadService createSharedCacheUploaderService() {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0116c3c9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
index 37473e3..2208f8f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
@@ -131,6 +131,7 @@ import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.even
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.security.LocalizerTokenIden

hadoop git commit: YARN-6004. Refactor TestResourceLocalizationService#testDownloadingResourcesOnContainer so that it is less than 150 lines. (Chris Trezzo via mingma)

2017-04-04 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 1938f97c0 -> 7507ccd38


YARN-6004. Refactor 
TestResourceLocalizationService#testDownloadingResourcesOnContainer so that it 
is less than 150 lines. (Chris Trezzo via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7507ccd3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7507ccd3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7507ccd3

Branch: refs/heads/branch-2
Commit: 7507ccd38a719a257eacd58f507a2738015a49e4
Parents: 1938f97
Author: Ming Ma <min...@apache.org>
Authored: Tue Apr 4 18:05:09 2017 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Tue Apr 4 18:05:09 2017 -0700

--
 .../TestResourceLocalizationService.java| 376 +++
 1 file changed, 212 insertions(+), 164 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7507ccd3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
index 38f461b..623ce14 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
@@ -1124,7 +1124,6 @@ public class TestResourceLocalizationService {
   }
 
   @Test(timeout = 20000)
-  @SuppressWarnings("unchecked")
   public void testDownloadingResourcesOnContainerKill() throws Exception {
     List<Path> localDirs = new ArrayList<Path>();
 String[] sDirs = new String[1];
@@ -1132,13 +1131,6 @@ public class TestResourceLocalizationService {
 sDirs[0] = localDirs.get(0).toString();
 
 conf.setStrings(YarnConfiguration.NM_LOCAL_DIRS, sDirs);
-    DrainDispatcher dispatcher = new DrainDispatcher();
-    dispatcher.init(conf);
-    dispatcher.start();
-    EventHandler<ApplicationEvent> applicationBus = mock(EventHandler.class);
-    dispatcher.register(ApplicationEventType.class, applicationBus);
-    EventHandler<ContainerEvent> containerBus = mock(EventHandler.class);
-    dispatcher.register(ContainerEventType.class, containerBus);
 
 DummyExecutor exec = new DummyExecutor();
 LocalDirsHandlerService dirsHandler = new LocalDirsHandlerService();
@@ -1149,6 +1141,7 @@ public class TestResourceLocalizationService {
 delService.init(new Configuration());
 delService.start();
 
+DrainDispatcher dispatcher = getDispatcher(conf);
 ResourceLocalizationService rawService = new ResourceLocalizationService(
 dispatcher, exec, delService, dirsHandler, nmContext);
 ResourceLocalizationService spyService = spy(rawService);
@@ -1191,180 +1184,235 @@ public class TestResourceLocalizationService {
   spyService.init(conf);
   spyService.start();
 
-  final Application app = mock(Application.class);
-  final ApplicationId appId =
-  BuilderUtils.newApplicationId(314159265358979L, 3);
-  String user = "user0";
-  when(app.getUser()).thenReturn(user);
-  when(app.getAppId()).thenReturn(appId);
-  spyService.handle(new ApplicationLocalizationEvent(
-  LocalizationEventType.INIT_APPLICATION_RESOURCES, app));
-      ArgumentMatcher<ApplicationEvent> matchesAppInit =
+  doLocalization(spyService, dispatcher, exec, delService);
+
+} finally {
+  spyService.stop();
+  dispatcher.stop();
+  delService.stop();
+}
+  }
+
+  private DrainDispatcher getDispatcher(Configuration config) {
+    DrainDispatcher dispatcher = new DrainDispatcher();
+    dispatcher.init(config);
+    dispatcher.start();
+    return dispatcher;
+  }
+
+  @SuppressWarnings("unchecked")
+  private EventHandler<ApplicationEvent> getApplicationBus(
+      DrainDispatcher dispatcher) {
+    EventHandler<ApplicationEvent> applicationBus = mock(EventHandler.class);
+    dispatcher.register(ApplicationEventType.class, applicationBus);
+    return applicationBus;
+  }
+
+  @SuppressWarnings("unchecked")
+  private EventHandler<ContainerEvent> getContainerBus(
+      DrainDispatcher dispatcher) {
+

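The message is cut off just as getContainerBus() begins. By symmetry with
getApplicationBus() above, and with the deleted inline lines earlier in this
diff, the body presumably reads as follows (sketch, not the verbatim commit):

  @SuppressWarnings("unchecked")
  private EventHandler<ContainerEvent> getContainerBus(
      DrainDispatcher dispatcher) {
    EventHandler<ContainerEvent> containerBus = mock(EventHandler.class);
    dispatcher.register(ContainerEventType.class, containerBus);
    return containerBus;
  }
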
hadoop git commit: YARN-6004. Refactor TestResourceLocalizationService#testDownloadingResourcesOnContainer so that it is less than 150 lines. (Chris Trezzo via mingma)

2017-04-04 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 9cc04b470 -> 2d5c09b84


YARN-6004. Refactor 
TestResourceLocalizationService#testDownloadingResourcesOnContainer so that it 
is less than 150 lines. (Chris Trezzo via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2d5c09b8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2d5c09b8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2d5c09b8

Branch: refs/heads/trunk
Commit: 2d5c09b8481d8cb4c2c517df5a9838aa8a875222
Parents: 9cc04b4
Author: Ming Ma <min...@apache.org>
Authored: Tue Apr 4 17:56:21 2017 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Tue Apr 4 17:56:21 2017 -0700

--
 .../TestResourceLocalizationService.java| 376 +++
 1 file changed, 212 insertions(+), 164 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d5c09b8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
index 411..932e94f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
@@ -1124,7 +1124,6 @@ public class TestResourceLocalizationService {
   }
 
   @Test(timeout = 20000)
-  @SuppressWarnings("unchecked")
   public void testDownloadingResourcesOnContainerKill() throws Exception {
     List<Path> localDirs = new ArrayList<Path>();
 String[] sDirs = new String[1];
@@ -1132,13 +1131,6 @@ public class TestResourceLocalizationService {
 sDirs[0] = localDirs.get(0).toString();
 
 conf.setStrings(YarnConfiguration.NM_LOCAL_DIRS, sDirs);
-    DrainDispatcher dispatcher = new DrainDispatcher();
-    dispatcher.init(conf);
-    dispatcher.start();
-    EventHandler<ApplicationEvent> applicationBus = mock(EventHandler.class);
-    dispatcher.register(ApplicationEventType.class, applicationBus);
-    EventHandler<ContainerEvent> containerBus = mock(EventHandler.class);
-    dispatcher.register(ContainerEventType.class, containerBus);
 
 DummyExecutor exec = new DummyExecutor();
 LocalDirsHandlerService dirsHandler = new LocalDirsHandlerService();
@@ -1149,6 +1141,7 @@ public class TestResourceLocalizationService {
 delService.init(new Configuration());
 delService.start();
 
+DrainDispatcher dispatcher = getDispatcher(conf);
 ResourceLocalizationService rawService = new ResourceLocalizationService(
 dispatcher, exec, delService, dirsHandler, nmContext);
 ResourceLocalizationService spyService = spy(rawService);
@@ -1191,180 +1184,235 @@ public class TestResourceLocalizationService {
   spyService.init(conf);
   spyService.start();
 
-  final Application app = mock(Application.class);
-  final ApplicationId appId =
-  BuilderUtils.newApplicationId(314159265358979L, 3);
-  String user = "user0";
-  when(app.getUser()).thenReturn(user);
-  when(app.getAppId()).thenReturn(appId);
-  spyService.handle(new ApplicationLocalizationEvent(
-  LocalizationEventType.INIT_APPLICATION_RESOURCES, app));
-      ArgumentMatcher<ApplicationEvent> matchesAppInit =
+  doLocalization(spyService, dispatcher, exec, delService);
+
+} finally {
+  spyService.stop();
+  dispatcher.stop();
+  delService.stop();
+}
+  }
+
+  private DrainDispatcher getDispatcher(Configuration config) {
+    DrainDispatcher dispatcher = new DrainDispatcher();
+    dispatcher.init(config);
+    dispatcher.start();
+    return dispatcher;
+  }
+
+  @SuppressWarnings("unchecked")
+  private EventHandler<ApplicationEvent> getApplicationBus(
+      DrainDispatcher dispatcher) {
+    EventHandler<ApplicationEvent> applicationBus = mock(EventHandler.class);
+    dispatcher.register(ApplicationEventType.class, applicationBus);
+    return applicationBus;
+  }
+
+  @SuppressWarnings("unchecked")
+  private EventHandler<ContainerEvent> getContainerBus(
+      DrainDispatcher dispatcher) {
+

hadoop git commit: MAPREDUCE-6862. Fragments are not handled correctly by resource limit checking. (Chris Trezzo via mingma)

2017-03-29 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 3fe7d36e7 -> 2ae9ae186


MAPREDUCE-6862. Fragments are not handled correctly by resource limit checking. 
(Chris Trezzo via mingma)

(cherry picked from commit ceab00ac62f8057a07b4b936799e6f04271e6e41)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2ae9ae18
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2ae9ae18
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2ae9ae18

Branch: refs/heads/branch-2
Commit: 2ae9ae1864433d03891afea8ee681a510688b53f
Parents: 3fe7d36
Author: Ming Ma <min...@apache.org>
Authored: Wed Mar 29 17:41:58 2017 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Wed Mar 29 17:42:40 2017 -0700

--
 .../hadoop/mapreduce/JobResourceUploader.java   | 36 --
 .../mapreduce/TestJobResourceUploader.java  | 40 +---
 2 files changed, 59 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ae9ae18/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
index 4c48ff4..085c966 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
@@ -238,28 +238,42 @@ class JobResourceUploader {
 Collection<String> dcArchives =
 conf.getStringCollection(MRJobConfig.CACHE_ARCHIVES);
 
-for (String path : dcFiles) {
-  explorePath(conf, new Path(path), limitChecker, statCache);
+for (String uri : dcFiles) {
+  explorePath(conf, stringToPath(uri), limitChecker, statCache);
 }
 
-for (String path : dcArchives) {
-  explorePath(conf, new Path(path), limitChecker, statCache);
+for (String uri : dcArchives) {
+  explorePath(conf, stringToPath(uri), limitChecker, statCache);
 }
 
-for (String path : files) {
-  explorePath(conf, new Path(path), limitChecker, statCache);
+for (String uri : files) {
+  explorePath(conf, stringToPath(uri), limitChecker, statCache);
 }
 
-for (String path : libjars) {
-  explorePath(conf, new Path(path), limitChecker, statCache);
+for (String uri : libjars) {
+  explorePath(conf, stringToPath(uri), limitChecker, statCache);
 }
 
-for (String path : archives) {
-  explorePath(conf, new Path(path), limitChecker, statCache);
+for (String uri : archives) {
+  explorePath(conf, stringToPath(uri), limitChecker, statCache);
 }
 
 if (jobJar != null) {
-  explorePath(conf, new Path(jobJar), limitChecker, statCache);
+  explorePath(conf, stringToPath(jobJar), limitChecker, statCache);
+}
+  }
+
+  /**
+   * Convert a String to a Path and gracefully remove fragments/queries if they
+   * exist in the String.
+   */
+  @VisibleForTesting
+  Path stringToPath(String s) {
+try {
+  URI uri = new URI(s);
+  return new Path(uri.getScheme(), uri.getAuthority(), uri.getPath());
+} catch (URISyntaxException e) {
+  throw new IllegalArgumentException(e);
 }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ae9ae18/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobResourceUploader.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobResourceUploader.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobResourceUploader.java
index 36ea57a..8ba50a6 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobResourceUploader.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobResourceUploader.java
@@ -40,6 +40,34 @@ import org.junit.Test;
 public class TestJobResourceUploader {
 
   @Test
+  public void testStringToPath() throws IOException {
+Configuration conf = new Configuration();
+JobR

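A worked example of the fix (self-contained sketch; the URI below is
hypothetical): a distributed-cache entry such as "...a.jar#renamed.jar"
carries a fragment that is not part of the file's location, so feeding the
raw string to new Path() made the size check look up a nonexistent file.
Rebuilding the Path from scheme, authority and path drops the fragment.

import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.fs.Path;

public class StringToPathDemo {
  // Same conversion the commit introduces: keep scheme/authority/path only.
  static Path stringToPath(String s) {
    try {
      URI uri = new URI(s);
      return new Path(uri.getScheme(), uri.getAuthority(), uri.getPath());
    } catch (URISyntaxException e) {
      throw new IllegalArgumentException(e);
    }
  }

  public static void main(String[] args) {
    // Prints hdfs://nn:8020/libs/a.jar; the #renamed.jar fragment is gone.
    System.out.println(stringToPath("hdfs://nn:8020/libs/a.jar#renamed.jar"));
  }
}
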
hadoop git commit: MAPREDUCE-6862. Fragments are not handled correctly by resource limit checking. (Chris Trezzo via mingma)

2017-03-29 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 6a5516c23 -> ceab00ac6


MAPREDUCE-6862. Fragments are not handled correctly by resource limit checking. 
(Chris Trezzo via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ceab00ac
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ceab00ac
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ceab00ac

Branch: refs/heads/trunk
Commit: ceab00ac62f8057a07b4b936799e6f04271e6e41
Parents: 6a5516c
Author: Ming Ma <min...@apache.org>
Authored: Wed Mar 29 17:41:58 2017 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Wed Mar 29 17:41:58 2017 -0700

--
 .../hadoop/mapreduce/JobResourceUploader.java   | 36 --
 .../mapreduce/TestJobResourceUploader.java  | 40 +---
 2 files changed, 59 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ceab00ac/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
index 4c48ff4..085c966 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
@@ -238,28 +238,42 @@ class JobResourceUploader {
 Collection<String> dcArchives =
 conf.getStringCollection(MRJobConfig.CACHE_ARCHIVES);
 
-for (String path : dcFiles) {
-  explorePath(conf, new Path(path), limitChecker, statCache);
+for (String uri : dcFiles) {
+  explorePath(conf, stringToPath(uri), limitChecker, statCache);
 }
 
-for (String path : dcArchives) {
-  explorePath(conf, new Path(path), limitChecker, statCache);
+for (String uri : dcArchives) {
+  explorePath(conf, stringToPath(uri), limitChecker, statCache);
 }
 
-for (String path : files) {
-  explorePath(conf, new Path(path), limitChecker, statCache);
+for (String uri : files) {
+  explorePath(conf, stringToPath(uri), limitChecker, statCache);
 }
 
-for (String path : libjars) {
-  explorePath(conf, new Path(path), limitChecker, statCache);
+for (String uri : libjars) {
+  explorePath(conf, stringToPath(uri), limitChecker, statCache);
 }
 
-for (String path : archives) {
-  explorePath(conf, new Path(path), limitChecker, statCache);
+for (String uri : archives) {
+  explorePath(conf, stringToPath(uri), limitChecker, statCache);
 }
 
 if (jobJar != null) {
-  explorePath(conf, new Path(jobJar), limitChecker, statCache);
+  explorePath(conf, stringToPath(jobJar), limitChecker, statCache);
+}
+  }
+
+  /**
+   * Convert a String to a Path and gracefully remove fragments/queries if they
+   * exist in the String.
+   */
+  @VisibleForTesting
+  Path stringToPath(String s) {
+try {
+  URI uri = new URI(s);
+  return new Path(uri.getScheme(), uri.getAuthority(), uri.getPath());
+} catch (URISyntaxException e) {
+  throw new IllegalArgumentException(e);
 }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ceab00ac/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobResourceUploader.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobResourceUploader.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobResourceUploader.java
index 36ea57a..8ba50a6 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobResourceUploader.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobResourceUploader.java
@@ -40,6 +40,34 @@ import org.junit.Test;
 public class TestJobResourceUploader {
 
   @Test
+  public void testStringToPath() throws IOException {
+Configuration conf = new Configuration();
+JobResourceUploader uploader =
+new JobResourceUploader(FileSystem.getLoc

hadoop git commit: HDFS-11412. Maintenance minimum replication config value allowable range should be [0, DefaultReplication]. (Manoj Govindassamy via mingma)

2017-03-02 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 b8b605bfc -> 8ce486310


HDFS-11412. Maintenance minimum replication config value allowable range should 
be [0, DefaultReplication]. (Manoj Govindassamy via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8ce48631
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8ce48631
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8ce48631

Branch: refs/heads/branch-2
Commit: 8ce48631034737a84c8853bc3bf6e35de5e6fb05
Parents: b8b605b
Author: Ming Ma <min...@apache.org>
Authored: Thu Mar 2 11:06:48 2017 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Thu Mar 2 11:06:48 2017 -0800

--
 .../server/blockmanagement/BlockManager.java|  15 ++-
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  |   8 ++
 .../hadoop/hdfs/TestMaintenanceState.java   | 111 +++
 3 files changed, 129 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8ce48631/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index b06996a..5d868e5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -436,12 +436,12 @@ public class BlockManager implements BlockStatsMXBean {
   + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
   + " = " + minMaintenanceR + " < 0");
 }
-if (minMaintenanceR > minR) {
+if (minMaintenanceR > defaultReplication) {
   throw new IOException("Unexpected configuration parameters: "
   + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
   + " = " + minMaintenanceR + " > "
-  + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
-  + " = " + minR);
+  + DFSConfigKeys.DFS_REPLICATION_KEY
+  + " = " + defaultReplication);
 }
 this.minReplicationToBeInMaintenance = (short)minMaintenanceR;
 
@@ -749,6 +749,11 @@ public class BlockManager implements BlockStatsMXBean {
 return minReplicationToBeInMaintenance;
   }
 
+  private short getMinMaintenanceStorageNum(BlockInfo block) {
+return (short) Math.min(minReplicationToBeInMaintenance,
+block.getReplication());
+  }
+
   /**
* Commit a block of a file
* 
@@ -3718,7 +3723,7 @@ public class BlockManager implements BlockStatsMXBean {
   boolean isNeededReplicationForMaintenance(BlockInfo storedBlock,
   NumberReplicas numberReplicas) {
 return storedBlock.isComplete() && (numberReplicas.liveReplicas() <
-getMinReplicationToBeInMaintenance() ||
+getMinMaintenanceStorageNum(storedBlock) ||
 !isPlacementPolicySatisfied(storedBlock));
   }
 
@@ -3744,7 +3749,7 @@ public class BlockManager implements BlockStatsMXBean {
 final short expectedRedundancy = getExpectedReplicaNum(block);
 return (short)Math.max(expectedRedundancy -
 numberReplicas.maintenanceReplicas(),
-getMinReplicationToBeInMaintenance());
+getMinMaintenanceStorageNum(block));
   }
 
   public short getExpectedReplicaNum(BlockInfo block) {

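The net effect of getMinMaintenanceStorageNum(): the minimum live-replica
requirement during maintenance is clamped per block, so a cluster-wide
setting larger than a particular block's replication factor can still be
satisfied. A small arithmetic sketch (the values are illustrative):

public class MaintenanceMinDemo {
  // Per-block effective minimum, as in the commit:
  //   min(dfs.namenode.maintenance.replication.min, block replication)
  static short minMaintenanceStorageNum(short minMaintenanceR, short blockRepl) {
    return (short) Math.min(minMaintenanceR, blockRepl);
  }

  public static void main(String[] args) {
    System.out.println(minMaintenanceStorageNum((short) 3, (short) 2)); // 2
    System.out.println(minMaintenanceStorageNum((short) 1, (short) 3)); // 1
  }
}
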
http://git-wip-us.apache.org/repos/asf/hadoop/blob/8ce48631/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index 8a9b213..d7b7692 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -807,6 +807,14 @@ public class MiniDFSCluster implements AutoCloseable {
 
   int replication = conf.getInt(DFS_REPLICATION_KEY, 3);
   conf.setInt(DFS_REPLICATION_KEY, Math.min(replication, numDataNodes));
+  int maintenanceMinReplication = conf.getInt(
+  DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY,
+  DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_DEFAULT);
+  if (maintenanceMinRepl

[1/2] hadoop git commit: HDFS-11412. Maintenance minimum replication config value allowable range should be [0, DefaultReplication]. (Manoj Govindassamy via mingma)

2017-03-01 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 555d0c399 -> 4e14eaded


HDFS-11412. Maintenance minimum replication config value allowable range should 
be [0, DefaultReplication]. (Manoj Govindassamy via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/25c84d27
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/25c84d27
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/25c84d27

Branch: refs/heads/trunk
Commit: 25c84d279bcefb72a3dd8058f25bba1713504849
Parents: 6f6dfe0
Author: Ming Ma <min...@apache.org>
Authored: Wed Mar 1 20:23:52 2017 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Wed Mar 1 20:23:52 2017 -0800

--
 .../server/blockmanagement/BlockManager.java|   9 +-
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  |   8 ++
 .../hadoop/hdfs/TestMaintenanceState.java   | 111 +++
 3 files changed, 124 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/25c84d27/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 5125b33..5ca0fa7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -484,12 +484,12 @@ public class BlockManager implements BlockStatsMXBean {
   + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
   + " = " + minMaintenanceR + " < 0");
 }
-if (minMaintenanceR > minR) {
+if (minMaintenanceR > defaultReplication) {
   throw new IOException("Unexpected configuration parameters: "
   + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
   + " = " + minMaintenanceR + " > "
-  + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
-  + " = " + minR);
+  + DFSConfigKeys.DFS_REPLICATION_KEY
+  + " = " + defaultReplication);
 }
 this.minReplicationToBeInMaintenance = (short)minMaintenanceR;
 
@@ -825,7 +825,8 @@ public class BlockManager implements BlockStatsMXBean {
 if (block.isStriped()) {
   return ((BlockInfoStriped) block).getRealDataBlockNum();
 } else {
-  return minReplicationToBeInMaintenance;
+  return (short) Math.min(minReplicationToBeInMaintenance,
+  block.getReplication());
 }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/25c84d27/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index 51dca41..f9908fe 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -807,6 +807,14 @@ public class MiniDFSCluster implements AutoCloseable {
 
   int replication = conf.getInt(DFS_REPLICATION_KEY, 3);
   conf.setInt(DFS_REPLICATION_KEY, Math.min(replication, numDataNodes));
+  int maintenanceMinReplication = conf.getInt(
+  DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY,
+  DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_DEFAULT);
+  if (maintenanceMinReplication ==
+  DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_DEFAULT) {
+conf.setInt(DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY,
+Math.min(maintenanceMinReplication, numDataNodes));
+  }
   int safemodeExtension = conf.getInt(
   DFS_NAMENODE_SAFEMODE_EXTENSION_TESTING_KEY, 0);
   conf.setInt(DFS_NAMENODE_SAFEMODE_EXTENSION_KEY, safemodeExtension);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/25c84d27/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
index f3

[2/2] hadoop git commit: Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop into trunk

2017-03-01 Thread mingma
Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop into 
trunk


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4e14eade
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4e14eade
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4e14eade

Branch: refs/heads/trunk
Commit: 4e14eaded21f244d280b1ffe90a89959fd753047
Parents: 25c84d2 555d0c3
Author: Ming Ma 
Authored: Wed Mar 1 20:24:49 2017 -0800
Committer: Ming Ma 
Committed: Wed Mar 1 20:24:49 2017 -0800

--
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--






hadoop git commit: HDFS-11411. Avoid OutOfMemoryError in TestMaintenanceState test runs. (Manoj Govindassamy via mingma)

2017-02-22 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 171b18693 -> 0add328c8


HDFS-11411. Avoid OutOfMemoryError in TestMaintenanceState test runs. (Manoj 
Govindassamy via mingma)

(cherry picked from commit cfcd527323352cf2a851c5c41f5d243d375d88d0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0add328c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0add328c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0add328c

Branch: refs/heads/branch-2
Commit: 0add328c8c1d86f08793682d7a37a7b30509e28b
Parents: 171b186
Author: Ming Ma <min...@apache.org>
Authored: Wed Feb 22 09:41:07 2017 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Wed Feb 22 09:42:47 2017 -0800

--
 .../test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0add328c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
index bbf947f..f3e2a0b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
@@ -333,6 +333,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 
   private void testExpectedReplication(int replicationFactor,
   int expectedReplicasInRead) throws IOException {
+setup();
 startCluster(1, 5);
 
 final Path file = new Path("/testExpectedReplication.dat");
@@ -352,6 +353,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 nodeOutofService));
 
 cleanupFile(fileSys, file);
+teardown();
   }
 
   /**
@@ -492,6 +494,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 
   private void testDecommissionDifferentNodeAfterMaintenance(int repl)
   throws Exception {
+setup();
 startCluster(1, 5);
 
 final Path file =
@@ -519,6 +522,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 assertNull(checkWithRetry(ns, fileSys, file, repl + 1, null));
 
 cleanupFile(fileSys, file);
+teardown();
   }
 
   /**
@@ -583,6 +587,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
*/
   private void testChangeReplicationFactor(int oldFactor, int newFactor,
   int expectedLiveReplicas) throws IOException {
+setup();
 LOG.info("Starting testChangeReplicationFactor {} {} {}",
 oldFactor, newFactor, expectedLiveReplicas);
 startCluster(1, 5);
@@ -615,6 +620,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 assertNull(checkWithRetry(ns, fileSys, file, newFactor, null));
 
 cleanupFile(fileSys, file);
+teardown();
   }
 
 

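The idiom behind these one-line additions: each private helper above is
invoked several times by a single @Test, so every invocation must create and
then release its own MiniDFSCluster, or the accumulated clusters exhaust the
heap. A generic sketch of the pattern (withFreshCluster and ClusterTestBody
are illustrative, not in the commit; setup()/teardown() are the
AdminStatesBaseTest hooks being called in the diff):

  interface ClusterTestBody {
    void run() throws Exception;
  }

  void withFreshCluster(ClusterTestBody body) throws Exception {
    setup();       // fresh base-test state for this invocation
    try {
      body.run();  // e.g. the body of testExpectedReplication(...)
    } finally {
      teardown();  // free the MiniDFSCluster before the next invocation
    }
  }
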




hadoop git commit: HDFS-11411. Avoid OutOfMemoryError in TestMaintenanceState test runs. (Manoj Govindassamy via mingma)

2017-02-22 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4f4250fbc -> cfcd52732


HDFS-11411. Avoid OutOfMemoryError in TestMaintenanceState test runs. (Manoj 
Govindassamy via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cfcd5273
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cfcd5273
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cfcd5273

Branch: refs/heads/trunk
Commit: cfcd527323352cf2a851c5c41f5d243d375d88d0
Parents: 4f4250f
Author: Ming Ma <min...@apache.org>
Authored: Wed Feb 22 09:41:07 2017 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Wed Feb 22 09:41:07 2017 -0800

--
 .../test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cfcd5273/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
index bbf947f..f3e2a0b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
@@ -333,6 +333,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 
   private void testExpectedReplication(int replicationFactor,
   int expectedReplicasInRead) throws IOException {
+setup();
 startCluster(1, 5);
 
 final Path file = new Path("/testExpectedReplication.dat");
@@ -352,6 +353,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 nodeOutofService));
 
 cleanupFile(fileSys, file);
+teardown();
   }
 
   /**
@@ -492,6 +494,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 
   private void testDecommissionDifferentNodeAfterMaintenance(int repl)
   throws Exception {
+setup();
 startCluster(1, 5);
 
 final Path file =
@@ -519,6 +522,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 assertNull(checkWithRetry(ns, fileSys, file, repl + 1, null));
 
 cleanupFile(fileSys, file);
+teardown();
   }
 
   /**
@@ -583,6 +587,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
*/
   private void testChangeReplicationFactor(int oldFactor, int newFactor,
   int expectedLiveReplicas) throws IOException {
+setup();
 LOG.info("Starting testChangeReplicationFactor {} {} {}",
 oldFactor, newFactor, expectedLiveReplicas);
 startCluster(1, 5);
@@ -615,6 +620,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 assertNull(checkWithRetry(ns, fileSys, file, newFactor, null));
 
 cleanupFile(fileSys, file);
+teardown();
   }
 
 





hadoop git commit: HDFS-11265. Extend visualization for Maintenance Mode under Datanode tab in the NameNode UI. (Marton Elek via mingma)

2017-02-15 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 96be70513 -> 0f8b99fd3


HDFS-11265. Extend visualization for Maintenance Mode under Datanode tab in the 
NameNode UI. (Marton Elek via mingma)

(cherry picked from commit a136936d018b5cebb7aad9a01ea0dcc366e1c3b8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0f8b99fd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0f8b99fd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0f8b99fd

Branch: refs/heads/branch-2
Commit: 0f8b99fd347284dad0c95c2040e46223bc42fed0
Parents: 96be705
Author: Ming Ma <min...@apache.org>
Authored: Wed Feb 15 20:24:07 2017 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Wed Feb 15 20:25:04 2017 -0800

--
 .../hadoop-hdfs/src/main/webapps/static/hadoop.css  | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0f8b99fd/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
index 0901125..341e1f8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
@@ -236,8 +236,8 @@ header.bs-docs-nav, header.bs-docs-nav .navbar-brand {
 }
 
 .dfshealth-node-decommissioned:before {
-color: #eea236;
-content: "\e136";
+color: #bc5f04;
+content: "\e090";
 }
 
 .dfshealth-node-down:before {
@@ -250,6 +250,11 @@ header.bs-docs-nav, header.bs-docs-nav .navbar-brand {
 content: "\e017";
 }
 
+.dfshealth-node-down-maintenance:before {
+color: #eea236;
+content: "\e136";
+}
+
 .dfshealth-node-legend {
 list-style-type: none;
 text-align: right;





hadoop git commit: HDFS-11265. Extend visualization for Maintenance Mode under Datanode tab in the NameNode UI. (Marton Elek via mingma)

2017-02-15 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0741dd3b9 -> a136936d0


HDFS-11265. Extend visualization for Maintenance Mode under Datanode tab in the 
NameNode UI. (Marton Elek via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a136936d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a136936d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a136936d

Branch: refs/heads/trunk
Commit: a136936d018b5cebb7aad9a01ea0dcc366e1c3b8
Parents: 0741dd3
Author: Ming Ma <min...@apache.org>
Authored: Wed Feb 15 20:24:07 2017 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Wed Feb 15 20:24:07 2017 -0800

--
 .../hadoop-hdfs/src/main/webapps/static/hadoop.css  | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a136936d/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
index 0901125..341e1f8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
@@ -236,8 +236,8 @@ header.bs-docs-nav, header.bs-docs-nav .navbar-brand {
 }
 
 .dfshealth-node-decommissioned:before {
-color: #eea236;
-content: "\e136";
+color: #bc5f04;
+content: "\e090";
 }
 
 .dfshealth-node-down:before {
@@ -250,6 +250,11 @@ header.bs-docs-nav, header.bs-docs-nav .navbar-brand {
 content: "\e017";
 }
 
+.dfshealth-node-down-maintenance:before {
+color: #eea236;
+content: "\e136";
+}
+
 .dfshealth-node-legend {
 list-style-type: none;
 text-align: right;





hadoop git commit: HDFS-11378. Verify multiple DataNodes can be decommissioned/maintenance at the same time. (Manoj Govindassamy via mingma)

2017-01-27 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 528bff9c4 -> 34f9ceab4


HDFS-11378. Verify multiple DataNodes can be decommissioned/maintenance at the 
same time. (Manoj Govindassamy via mingma)

(cherry picked from commit 312b36d113d83640b92c62fdd91ede74bd04c00f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/34f9ceab
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/34f9ceab
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/34f9ceab

Branch: refs/heads/branch-2
Commit: 34f9ceab4a53007bba485b51fbd909dae5198148
Parents: 528bff9
Author: Ming Ma <min...@apache.org>
Authored: Fri Jan 27 16:16:42 2017 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Fri Jan 27 16:17:51 2017 -0800

--
 .../apache/hadoop/hdfs/AdminStatesBaseTest.java | 151 +--
 .../apache/hadoop/hdfs/TestDecommission.java|  43 ++
 .../hadoop/hdfs/TestMaintenanceState.java   |  36 +
 3 files changed, 186 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/34f9ceab/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AdminStatesBaseTest.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AdminStatesBaseTest.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AdminStatesBaseTest.java
index 534c5e0..c4ccc67 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AdminStatesBaseTest.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AdminStatesBaseTest.java
@@ -22,11 +22,13 @@ import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
 
+import com.google.common.collect.Lists;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -149,10 +151,18 @@ public class AdminStatesBaseTest {
 }
   }
 
-  /*
-   * decommission the DN or put the DN into maintenance for datanodeUuid or one
-   * random node if datanodeUuid is null.
-   * And wait for the node to reach the given {@code waitForState}.
+  /**
+   * Decommission or perform Maintenance for DataNodes and wait for them to
+   * reach the expected state.
+   *
+   * @param nnIndex NameNode index
+   * @param datanodeUuid DataNode to decommission/maintenance, or a random
+   * DataNode if null
+   * @param maintenanceExpirationInMS Maintenance expiration time
+   * @param decommissionedNodes List of DataNodes already decommissioned
+   * @param waitForState State to wait for on the datanodeUuid DataNode
+   * @return DatanodeInfo DataNode taken out of service
+   * @throws IOException
*/
   protected DatanodeInfo takeNodeOutofService(int nnIndex,
   String datanodeUuid, long maintenanceExpirationInMS,
@@ -162,48 +172,91 @@ public class AdminStatesBaseTest {
 maintenanceExpirationInMS, decommissionedNodes, null, waitForState);
   }
 
-  /*
-   * decommission the DN or put the DN to maintenance set by datanodeUuid
-   * Pick randome node if datanodeUuid == null
-   * wait for the node to reach the given {@code waitForState}.
+  /**
+   * Decommission or perform Maintenance for DataNodes and wait for them to
+   * reach the expected state.
+   *
+   * @param nnIndex NameNode index
+   * @param datanodeUuid DataNode to decommission/maintenance, or a random
+   * DataNode if null
+   * @param maintenanceExpirationInMS Maintenance expiration time
+   * @param decommissionedNodes List of DataNodes already decommissioned
+   * @param inMaintenanceNodes Map of DataNodes already entering/in maintenance
+   * @param waitForState State to wait for on the datanodeUuid DataNode
+   * @return DatanodeInfo DataNode taken out of service
+   * @throws IOException
*/
   protected DatanodeInfo takeNodeOutofService(int nnIndex,
   String datanodeUuid, long maintenanceExpirationInMS,
   List<DatanodeInfo> decommissionedNodes,
   Map<DatanodeInfo, Long> inMaintenanceNodes, AdminStates waitForState)
   throws IOException {
+return takeNodeOutofService(nnIndex, (datanodeUuid != null ?
+Lists.newArrayList(datanodeUuid) : null),
+maintenanceExpirationInMS, decommissionedNodes, inMaintenanceNodes,
+waitForState).get(0);
+  }
+
+  /**
+   * Decommission or perform Maintenance for DataNodes and wait for them to
+   * reach the expected state.
+   *
+   * @param nnIndex NameNode index
+   * @param dataNodeUuids DataNodes to decommission/mai

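Usage sketch for the new list-based overload (a snippet inside an
AdminStatesBaseTest subclass; uuid1/uuid2 are hypothetical DataNode UUIDs,
and Lists comes from the Guava import added above):

  List<DatanodeInfo> nodes = takeNodeOutofService(
      0,                                 // nnIndex
      Lists.newArrayList(uuid1, uuid2),  // specific DataNodes; null = random
      Long.MAX_VALUE,                    // maintenanceExpirationInMS
      null,                              // decommissionedNodes
      null,                              // inMaintenanceNodes
      AdminStates.IN_MAINTENANCE);       // wait for both to reach this state
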
hadoop git commit: HDFS-11378. Verify multiple DataNodes can be decommissioned/maintenance at the same time. (Manoj Govindassamy via mingma)

2017-01-27 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk ebd40056a -> 312b36d11


HDFS-11378. Verify multiple DataNodes can be decommissioned/maintenance at the 
same time. (Manoj Govindassamy via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/312b36d1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/312b36d1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/312b36d1

Branch: refs/heads/trunk
Commit: 312b36d113d83640b92c62fdd91ede74bd04c00f
Parents: ebd4005
Author: Ming Ma <min...@apache.org>
Authored: Fri Jan 27 16:16:42 2017 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Fri Jan 27 16:16:42 2017 -0800

--
 .../apache/hadoop/hdfs/AdminStatesBaseTest.java | 151 +--
 .../apache/hadoop/hdfs/TestDecommission.java|  43 ++
 .../hadoop/hdfs/TestMaintenanceState.java   |  36 +
 3 files changed, 186 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/312b36d1/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AdminStatesBaseTest.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AdminStatesBaseTest.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AdminStatesBaseTest.java
index 0ed01f7..c0cef19 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AdminStatesBaseTest.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AdminStatesBaseTest.java
@@ -22,11 +22,13 @@ import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
 
+import com.google.common.collect.Lists;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -149,10 +151,18 @@ public class AdminStatesBaseTest {
 }
   }
 
-  /*
-   * decommission the DN or put the DN into maintenance for datanodeUuid or one
-   * random node if datanodeUuid is null.
-   * And wait for the node to reach the given {@code waitForState}.
+  /**
+   * Decommission or perform Maintenance for DataNodes and wait for them to
+   * reach the expected state.
+   *
+   * @param nnIndex NameNode index
+   * @param datanodeUuid DataNode to decommission/maintenance, or a random
+   * DataNode if null
+   * @param maintenanceExpirationInMS Maintenance expiration time
+   * @param decommissionedNodes List of DataNodes already decommissioned
+   * @param waitForState State to wait for on the datanodeUuid DataNode
+   * @return DatanodeInfo DataNode taken out of service
+   * @throws IOException
*/
   protected DatanodeInfo takeNodeOutofService(int nnIndex,
   String datanodeUuid, long maintenanceExpirationInMS,
@@ -162,48 +172,91 @@ public class AdminStatesBaseTest {
 maintenanceExpirationInMS, decommissionedNodes, null, waitForState);
   }
 
-  /*
-   * decommission the DN or put the DN to maintenance set by datanodeUuid
-   * Pick randome node if datanodeUuid == null
-   * wait for the node to reach the given {@code waitForState}.
+  /**
+   * Decommission or perform Maintenance for DataNodes and wait for them to
+   * reach the expected state.
+   *
+   * @param nnIndex NameNode index
+   * @param datanodeUuid DataNode to decommission/maintenance, or a random
+   * DataNode if null
+   * @param maintenanceExpirationInMS Maintenance expiration time
+   * @param decommissionedNodes List of DataNodes already decommissioned
+   * @param inMaintenanceNodes Map of DataNodes already entering/in maintenance
+   * @param waitForState Admin state to wait for on the datanodeUuid DataNode
+   * @return DatanodeInfo DataNode taken out of service
+   * @throws IOException
*/
   protected DatanodeInfo takeNodeOutofService(int nnIndex,
   String datanodeUuid, long maintenanceExpirationInMS,
    List<DatanodeInfo> decommissionedNodes,
   Map<DatanodeInfo, Long> inMaintenanceNodes, AdminStates waitForState)
   throws IOException {
+return takeNodeOutofService(nnIndex, (datanodeUuid != null ?
+Lists.newArrayList(datanodeUuid) : null),
+maintenanceExpirationInMS, decommissionedNodes, inMaintenanceNodes,
+waitForState).get(0);
+  }
+
+  /**
+   * Decommission or perform Maintenance for DataNodes and wait for them to
+   * reach the expected state.
+   *
+   * @param nnIndex NameNode index
+   * @param dataNodeUuids DataNodes to decommission/maintenance, or a random
+   * DataNode if null
+   * @param maintenanceExpir

hadoop git commit: HDFS-11296. Maintenance state expiry should be an epoch time and not jvm monotonic. (Manoj Govindassamy via mingma)

2017-01-19 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 d37408767 -> bed700e98


HDFS-11296. Maintenance state expiry should be an epoch time and not jvm 
monotonic. (Manoj Govindassamy via mingma)

(cherry picked from commit f3fb94be05a61a4c4c06ab279897e5de2b181b0e)
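
For context: in this era of Hadoop, Time.monotonicNow() is backed by
System.nanoTime(), whose origin is arbitrary and differs between JVMs,
so a deadline computed in one process (say, by an admin tool) is
meaningless when checked in another (the NameNode); Time.now() wraps
the epoch-based System.currentTimeMillis(). A minimal sketch of the
intended check using plain JDK calls, with an illustrative expiry:

    // An epoch-based deadline stays comparable across JVMs and
    // restarts; a nanoTime-derived one would not.
    long expireAtMs = System.currentTimeMillis() + 30 * 60 * 1000L;
    boolean maintenanceNotExpired =
        System.currentTimeMillis() < expireAtMs;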


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bed700e9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bed700e9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bed700e9

Branch: refs/heads/branch-2
Commit: bed700e98f08c37db7cd1a42d458add97b2b3409
Parents: d374087
Author: Ming Ma <min...@apache.org>
Authored: Thu Jan 19 22:31:15 2017 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Thu Jan 19 22:33:43 2017 -0800

--
 .../org/apache/hadoop/hdfs/protocol/DatanodeInfo.java   |  2 +-
 .../org/apache/hadoop/hdfs/TestMaintenanceState.java| 12 ++--
 .../hadoop/hdfs/server/namenode/TestNameNodeMXBean.java |  2 +-
 3 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bed700e9/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
index db30075..c6a69ab 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
@@ -509,7 +509,7 @@ public class DatanodeInfo extends DatanodeID implements 
Node {
   }
 
   public static boolean maintenanceNotExpired(long maintenanceExpireTimeInMS) {
-return Time.monotonicNow() < maintenanceExpireTimeInMS;
+return Time.now() < maintenanceExpireTimeInMS;
   }
   /**
* Returns true if the node is entering_maintenance

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bed700e9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
index c125f45..9cc130b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
@@ -114,7 +114,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 
 // Adjust the expiration.
 takeNodeOutofService(0, nodeOutofService.getDatanodeUuid(),
-Time.monotonicNow() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
+Time.now() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
 
 cleanupFile(fileSys, file);
   }
@@ -133,8 +133,8 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 final FileSystem fileSys = getCluster().getFileSystem(0);
 writeFile(fileSys, file, replicas, 1);
 
-// expiration has to be greater than Time.monotonicNow().
-takeNodeOutofService(0, null, Time.monotonicNow(), null,
+// expiration has to be greater than Time.now().
+takeNodeOutofService(0, null, Time.now(), null,
 AdminStates.NORMAL);
 
 cleanupFile(fileSys, file);
@@ -203,7 +203,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 
 // Adjust the expiration.
 takeNodeOutofService(0, nodeOutofService.getDatanodeUuid(),
-Time.monotonicNow() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
+Time.now() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
 
 // no change
 assertEquals(deadInMaintenance, ns.getNumInMaintenanceDeadDataNodes());
@@ -257,7 +257,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 
 // Adjust the expiration.
 takeNodeOutofService(0, nodeOutofService.getDatanodeUuid(),
-Time.monotonicNow() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
+Time.now() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
 
 cleanupFile(fileSys, file);
   }
@@ -398,7 +398,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 
 // Adjust the expiration.
 takeNodeOutofService(0, nodeOutofService.getDatanodeUuid(),
-Time.monotonicNow() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
+Time.now() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
 
 cleanupFile(fileSys, file);
   }

http://git-wip-us.apache.org/repos/asf

[1/2] hadoop git commit: HDFS-11296. Maintenance state expiry should be an epoch time and not jvm monotonic. (Manoj Govindassamy via mingma)

2017-01-19 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk b01514f65 -> fdf720299


HDFS-11296. Maintenance state expiry should be an epoch time and not jvm 
monotonic. (Manoj Govindassamy via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f3fb94be
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f3fb94be
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f3fb94be

Branch: refs/heads/trunk
Commit: f3fb94be05a61a4c4c06ab279897e5de2b181b0e
Parents: 60865c8
Author: Ming Ma <min...@apache.org>
Authored: Thu Jan 19 22:31:15 2017 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Thu Jan 19 22:31:15 2017 -0800

--
 .../org/apache/hadoop/hdfs/protocol/DatanodeInfo.java   |  2 +-
 .../org/apache/hadoop/hdfs/TestMaintenanceState.java| 12 ++--
 .../hadoop/hdfs/server/namenode/TestNameNodeMXBean.java |  2 +-
 3 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3fb94be/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
index 8f9f3d5..41735b1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
@@ -511,7 +511,7 @@ public class DatanodeInfo extends DatanodeID implements 
Node {
   }
 
   public static boolean maintenanceNotExpired(long maintenanceExpireTimeInMS) {
-return Time.monotonicNow() < maintenanceExpireTimeInMS;
+return Time.now() < maintenanceExpireTimeInMS;
   }
   /**
* Returns true if the node is entering_maintenance

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3fb94be/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
index c125f45..9cc130b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
@@ -114,7 +114,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 
 // Adjust the expiration.
 takeNodeOutofService(0, nodeOutofService.getDatanodeUuid(),
-Time.monotonicNow() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
+Time.now() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
 
 cleanupFile(fileSys, file);
   }
@@ -133,8 +133,8 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 final FileSystem fileSys = getCluster().getFileSystem(0);
 writeFile(fileSys, file, replicas, 1);
 
-// expiration has to be greater than Time.monotonicNow().
-takeNodeOutofService(0, null, Time.monotonicNow(), null,
+// expiration has to be greater than Time.now().
+takeNodeOutofService(0, null, Time.now(), null,
 AdminStates.NORMAL);
 
 cleanupFile(fileSys, file);
@@ -203,7 +203,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 
 // Adjust the expiration.
 takeNodeOutofService(0, nodeOutofService.getDatanodeUuid(),
-Time.monotonicNow() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
+Time.now() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
 
 // no change
 assertEquals(deadInMaintenance, ns.getNumInMaintenanceDeadDataNodes());
@@ -257,7 +257,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 
 // Adjust the expiration.
 takeNodeOutofService(0, nodeOutofService.getDatanodeUuid(),
-Time.monotonicNow() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
+Time.now() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
 
 cleanupFile(fileSys, file);
   }
@@ -398,7 +398,7 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
 
 // Adjust the expiration.
 takeNodeOutofService(0, nodeOutofService.getDatanodeUuid(),
-Time.monotonicNow() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
+Time.now() + EXPIRATION_IN_MS, null, AdminStates.NORMAL);
 
 cleanupFile(fileSys, file);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3fb94be/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apac

[2/2] hadoop git commit: Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop into trunk

2017-01-19 Thread mingma
Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop into 
trunk


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fdf72029
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fdf72029
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fdf72029

Branch: refs/heads/trunk
Commit: fdf72029929bd31f3cb996525621c8d2cdfd6326
Parents: f3fb94b b01514f
Author: Ming Ma 
Authored: Thu Jan 19 22:31:43 2017 -0800
Committer: Ming Ma 
Committed: Thu Jan 19 22:31:43 2017 -0800

--
 hadoop-project/pom.xml  |  13 +-
 .../server/resourcemanager/TestRMRestart.java   |   4 +
 .../TestResourceTrackerService.java |   4 +
 .../pom.xml |  12 +
 .../pom.xml | 191 +
 .../reader/filter/TimelineFilterUtils.java  | 290 
 .../reader/filter/package-info.java |  28 +
 .../storage/HBaseTimelineReaderImpl.java|  88 +++
 .../storage/HBaseTimelineWriterImpl.java| 566 ++
 .../storage/TimelineSchemaCreator.java  | 250 +++
 .../storage/application/ApplicationColumn.java  | 156 
 .../application/ApplicationColumnFamily.java|  65 ++
 .../application/ApplicationColumnPrefix.java| 288 
 .../storage/application/ApplicationRowKey.java  | 206 ++
 .../application/ApplicationRowKeyPrefix.java|  69 ++
 .../storage/application/ApplicationTable.java   | 161 
 .../storage/application/package-info.java   |  28 +
 .../storage/apptoflow/AppToFlowColumn.java  | 148 
 .../apptoflow/AppToFlowColumnFamily.java|  51 ++
 .../storage/apptoflow/AppToFlowRowKey.java  | 143 
 .../storage/apptoflow/AppToFlowTable.java   | 113 +++
 .../storage/apptoflow/package-info.java |  28 +
 .../storage/common/AppIdKeyConverter.java   |  96 +++
 .../storage/common/BaseTable.java   | 140 
 .../common/BufferedMutatorDelegator.java|  73 ++
 .../timelineservice/storage/common/Column.java  |  80 ++
 .../storage/common/ColumnFamily.java|  34 +
 .../storage/common/ColumnHelper.java| 388 ++
 .../storage/common/ColumnPrefix.java| 145 
 .../storage/common/EventColumnName.java |  63 ++
 .../common/EventColumnNameConverter.java|  99 +++
 .../storage/common/GenericConverter.java|  48 ++
 .../common/HBaseTimelineStorageUtils.java   | 243 +++
 .../storage/common/KeyConverter.java|  41 ++
 .../storage/common/LongConverter.java   |  94 +++
 .../storage/common/LongKeyConverter.java|  68 ++
 .../storage/common/NumericValueConverter.java   |  39 +
 .../timelineservice/storage/common/Range.java   |  62 ++
 .../storage/common/RowKeyPrefix.java|  42 ++
 .../storage/common/Separator.java   | 575 +++
 .../storage/common/StringKeyConverter.java  |  54 ++
 .../common/TimelineHBaseSchemaConstants.java|  71 ++
 .../storage/common/TimestampGenerator.java  | 116 +++
 .../storage/common/TypedBufferedMutator.java|  28 +
 .../storage/common/ValueConverter.java  |  47 ++
 .../storage/common/package-info.java|  28 +
 .../storage/entity/EntityColumn.java| 160 
 .../storage/entity/EntityColumnFamily.java  |  65 ++
 .../storage/entity/EntityColumnPrefix.java  | 300 
 .../storage/entity/EntityRowKey.java| 225 ++
 .../storage/entity/EntityRowKeyPrefix.java  |  74 ++
 .../storage/entity/EntityTable.java | 161 
 .../storage/entity/package-info.java|  28 +
 .../flow/AggregationCompactionDimension.java|  63 ++
 .../storage/flow/AggregationOperation.java  |  94 +++
 .../timelineservice/storage/flow/Attribute.java |  39 +
 .../storage/flow/FlowActivityColumnFamily.java  |  55 ++
 .../storage/flow/FlowActivityColumnPrefix.java  | 277 +++
 .../storage/flow/FlowActivityRowKey.java| 196 +
 .../storage/flow/FlowActivityRowKeyPrefix.java  |  60 ++
 .../storage/flow/FlowActivityTable.java | 108 +++
 .../storage/flow/FlowRunColumn.java | 182 +
 .../storage/flow/FlowRunColumnFamily.java   |  54 ++
 .../storage/flow/FlowRunColumnPrefix.java   | 268 +++
 .../storage/flow/FlowRunCoprocessor.java| 304 
 .../storage/flow/FlowRunRowKey.java | 190 +
 .../storage/flow/FlowRunRowKeyPrefix.java   |  54 ++
 .../storage/flow/FlowRunTable.java  | 141 
 .../storage/flow/FlowScanner.java   | 728 +++
 .../storage/flow/FlowScannerOperation.java  |  46 ++
 .../storage/flow/package-info.java  |  29 +
 .../timelineservice/storage/package-info.java   |  28 +
 

hadoop git commit: HDFS-9391. Update webUI/JMX to display maintenance state info. (Manoj Govindassamy via mingma)

2017-01-10 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ba6a01334 -> b6258e2b1


HDFS-9391. Update webUI/JMX to display maintenance state info. (Manoj 
Govindassamy via mingma)
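
Once this is in place, one quick way to eyeball the new maintenance
counters is the NameNode's /jmx servlet. The query below assumes the
default branch-2 HTTP port (50070) and the standard NameNodeInfo bean
name; the exact attribute names come from the NameNodeMXBean changes
in the diff:

    curl 'http://<namenode-host>:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'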


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b6258e2b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b6258e2b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b6258e2b

Branch: refs/heads/branch-2
Commit: b6258e2b15cc917c23c03a05d606ae6194927262
Parents: ba6a013
Author: Ming Ma <min...@apache.org>
Authored: Tue Jan 10 20:20:13 2017 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Tue Jan 10 20:20:13 2017 -0800

--
 .../blockmanagement/DatanodeDescriptor.java |  12 +-
 .../blockmanagement/DecommissionManager.java|   2 +
 .../server/blockmanagement/NumberReplicas.java  |   2 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  29 -
 .../hdfs/server/namenode/NameNodeMXBean.java|   9 +-
 .../src/main/webapps/hdfs/dfshealth.html|  32 -
 .../src/main/webapps/hdfs/dfshealth.js  |  11 +-
 .../server/namenode/TestNameNodeMXBean.java | 125 +--
 8 files changed, 201 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b6258e2b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index 40b5af1..974b6dd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -689,7 +689,7 @@ public class DatanodeDescriptor extends DatanodeInfo {
 // Super implementation is sufficient
 return super.hashCode();
   }
-  
+
   @Override
   public boolean equals(Object obj) {
 // Sufficient to use super equality as datanodes are uniquely identified
@@ -704,14 +704,14 @@ public class DatanodeDescriptor extends DatanodeInfo {
 private int underReplicatedInOpenFiles;
 private long startTime;
 
-synchronized void set(int underRep,
-int onlyRep, int underConstruction) {
+synchronized void set(int underRepBlocks,
+int outOfServiceOnlyRep, int underRepInOpenFiles) {
   if (!isDecommissionInProgress() && !isEnteringMaintenance()) {
 return;
   }
-  underReplicatedBlocks = underRep;
-  outOfServiceOnlyReplicas = onlyRep;
-  underReplicatedInOpenFiles = underConstruction;
+  underReplicatedBlocks = underRepBlocks;
+  outOfServiceOnlyReplicas = outOfServiceOnlyRep;
+  underReplicatedInOpenFiles = underRepInOpenFiles;
 }
 
 /** @return the number of under-replicated blocks */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b6258e2b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
index 7e404c4..cea2ee3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
@@ -660,7 +660,9 @@ public class DecommissionManager {
 boolean pruneSufficientlyReplicated) {
   boolean firstReplicationLog = true;
   int underReplicatedBlocks = 0;
+  // All maintenance and decommission replicas.
   int outOfServiceOnlyReplicas = 0;
+  // Low redundancy in UC Blocks only
   int underReplicatedInOpenFiles = 0;
   while (it.hasNext()) {
 if (insufficientlyReplicated == null

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b6258e2b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java
index 

hadoop git commit: HDFS-9391. Update webUI/JMX to display maintenance state info. (Manoj Govindassamy via mingma)

2017-01-10 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4db119b7b -> 467f5f173


HDFS-9391. Update webUI/JMX to display maintenance state info. (Manoj 
Govindassamy via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/467f5f17
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/467f5f17
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/467f5f17

Branch: refs/heads/trunk
Commit: 467f5f1735494c5ef74e6591069884d3771c17e4
Parents: 4db119b
Author: Ming Ma <min...@apache.org>
Authored: Tue Jan 10 20:12:42 2017 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Tue Jan 10 20:12:42 2017 -0800

--
 .../blockmanagement/DatanodeDescriptor.java |  12 +-
 .../blockmanagement/DecommissionManager.java|   9 +-
 .../server/blockmanagement/NumberReplicas.java  |   2 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  29 -
 .../hdfs/server/namenode/NameNodeMXBean.java|   9 +-
 .../src/main/webapps/hdfs/dfshealth.html|  32 -
 .../src/main/webapps/hdfs/dfshealth.js  |  11 +-
 .../server/namenode/TestNameNodeMXBean.java | 125 +--
 8 files changed, 205 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/467f5f17/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index 320c680..8ab4bba 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -730,7 +730,7 @@ public class DatanodeDescriptor extends DatanodeInfo {
 // Super implementation is sufficient
 return super.hashCode();
   }
-  
+
   @Override
   public boolean equals(Object obj) {
 // Sufficient to use super equality as datanodes are uniquely identified
@@ -745,14 +745,14 @@ public class DatanodeDescriptor extends DatanodeInfo {
 private int underReplicatedInOpenFiles;
 private long startTime;
 
-synchronized void set(int underRep,
-int onlyRep, int underConstruction) {
+synchronized void set(int underRepInOpenFiles, int underRepBlocks,
+int outOfServiceOnlyRep) {
   if (!isDecommissionInProgress() && !isEnteringMaintenance()) {
 return;
   }
-  underReplicatedBlocks = underRep;
-  outOfServiceOnlyReplicas = onlyRep;
-  underReplicatedInOpenFiles = underConstruction;
+  underReplicatedInOpenFiles = underRepInOpenFiles;
+  underReplicatedBlocks = underRepBlocks;
+  outOfServiceOnlyReplicas = outOfServiceOnlyRep;
 }
 
 /** @return the number of under-replicated blocks */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/467f5f17/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
index b1cfd78..ae79826 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
@@ -634,9 +634,12 @@ public class DecommissionManager {
 final List insufficientList,
 boolean pruneReliableBlocks) {
   boolean firstReplicationLog = true;
+  // Low redundancy in UC Blocks only
+  int lowRedundancyInOpenFiles = 0;
+  // All low redundancy blocks. Includes lowRedundancyInOpenFiles.
   int lowRedundancyBlocks = 0;
+  // All maintenance and decommission replicas.
   int outOfServiceOnlyReplicas = 0;
-  int lowRedundancyInOpenFiles = 0;
   while (it.hasNext()) {
 if (insufficientList == null
 && numBlocksCheckedPerLock >= numBlocksPerCheck) {
@@ -726,8 +729,8 @@ public class DecommissionManager {
 }
   }
 
-  datanode.getLeavingServiceStatus().set(lowRedundancyBlocks,
-  outOfServiceOnlyReplicas, lowRedundancyInOpenFiles);
+  datanode.getLeavingServiceStatus().set(lowRedundancyInOpenFil

hadoop git commit: HDFS-10206. Datanodes not sorted properly by distance when the reader isn't a datanode. (Nandakumar via mingma)

2016-12-07 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 3350d0c08 -> d2656dc5a


HDFS-10206. Datanodes not sorted properly by distance when the reader isn't a 
datanode. (Nandakumar via mingma)
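
The core change generalizes getWeight() beyond the old 0/1/2
(local / same rack / off rack) scale: weight now grows with the number
of topology levels separating reader and replica, so a reader that is
not a datanode (and may sit at a different depth in the tree) still
sorts replicas correctly. A hedged standalone sketch of the same idea
over location strings; the real code in the diff below walks Node
parents instead:

    // Hypothetical helper over paths like "/d1/r1/h1"; illustrative
    // only, not the NetworkTopology API.
    static int weight(String readerPath, String nodePath) {
      if (readerPath.equals(nodePath)) {
        return 0;                              // local node
      }
      String[] r = readerPath.split("/");
      String[] n = nodePath.split("/");
      int common = 0;
      while (common < Math.min(r.length, n.length)
          && r[common].equals(n[common])) {
        common++;
      }
      // one hop per level each side climbs to the common ancestor
      return (r.length - common) + (n.length - common);
    }
    // weight("/d1/r1/h1", "/d1/r1/h1") == 0   local
    // weight("/d1/r1/h1", "/d1/r1/h2") == 2   same rack
    // weight("/d1/r1/h1", "/d1/r2/h3") == 4   off rack, same DC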


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d2656dc5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d2656dc5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d2656dc5

Branch: refs/heads/branch-2
Commit: d2656dc5a6d4f5e208bc1f9466b4d8c8e105dae3
Parents: 3350d0c
Author: Ming Ma <min...@apache.org>
Authored: Wed Dec 7 08:27:17 2016 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Wed Dec 7 08:27:17 2016 -0800

--
 .../org/apache/hadoop/net/NetworkTopology.java  | 158 +--
 .../server/blockmanagement/DatanodeManager.java |  14 +-
 .../apache/hadoop/net/TestNetworkTopology.java  |  29 +++-
 3 files changed, 184 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2656dc5/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
index 14c870d..5751d2b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
@@ -57,6 +57,10 @@ public class NetworkTopology {
   public static final Logger LOG =
   LoggerFactory.getLogger(NetworkTopology.class);
 
+  private static final char PATH_SEPARATOR = '/';
+  private static final String PATH_SEPARATOR_STR = "/";
+  private static final String ROOT = "/";
+
   public static class InvalidTopologyException extends RuntimeException {
 private static final long serialVersionUID = 1L;
 public InvalidTopologyException(String msg) {
@@ -916,7 +920,7 @@ public class NetworkTopology {
 }
   }
 
-  /** convert a network tree to a string */
+  /** convert a network tree to a string. */
   @Override
   public String toString() {
 // print the number of racks
@@ -970,19 +974,108 @@ public class NetworkTopology {
* @return weight
*/
   protected int getWeight(Node reader, Node node) {
-// 0 is local, 1 is same rack, 2 is off rack
-// Start off by initializing to off rack
-int weight = 2;
-if (reader != null) {
-  if (reader.equals(node)) {
-weight = 0;
-  } else if (isOnSameRack(reader, node)) {
-weight = 1;
+// 0 is local, 2 is same rack, and each level on each node increases the
+//weight by 1
+//Start off by initializing to Integer.MAX_VALUE
+int weight = Integer.MAX_VALUE;
+if (reader != null && node != null) {
+  if(reader.equals(node)) {
+return 0;
+  }
+  int maxReaderLevel = reader.getLevel();
+  int maxNodeLevel = node.getLevel();
+  int currentLevelToCompare = maxReaderLevel > maxNodeLevel ?
+  maxNodeLevel : maxReaderLevel;
+  Node r = reader;
+  Node n = node;
+  weight = 0;
+  while(r != null && r.getLevel() > currentLevelToCompare) {
+r = r.getParent();
+weight++;
+  }
+  while(n != null && n.getLevel() > currentLevelToCompare) {
+n = n.getParent();
+weight++;
+  }
+  while(r != null && n != null && !r.equals(n)) {
+r = r.getParent();
+n = n.getParent();
+weight+=2;
+  }
+}
+return weight;
+  }
+
+  /**
+   * Returns an integer weight which specifies how far away node is
+   * from reader. A lower value signifies that a node is closer.
+   * It uses network location to calculate the weight
+   *
+   * @param reader Node where data will be read
+   * @param node Replica of data
+   * @return weight
+   */
+  private static int getWeightUsingNetworkLocation(Node reader, Node node) {
+//Start off by initializing to Integer.MAX_VALUE
+int weight = Integer.MAX_VALUE;
+if(reader != null && node != null) {
+  String readerPath = normalizeNetworkLocationPath(
+  reader.getNetworkLocation());
+  String nodePath = normalizeNetworkLocationPath(
+  node.getNetworkLocation());
+
+  //same rack
+  if(readerPath.equals(nodePath)) {
+if(reader.getName().equals(node.getName())) {
+  weight = 0;
+} else {
+  weight = 2;
+}
+  } else {
+String[] readerPathToken = readerPath.split(PATH_SEPARATOR_STR);
+String[] nodePathToken = nodePath.split(PATH_SE

hadoop git commit: HDFS-10206. Datanodes not sorted properly by distance when the reader isn't a datanode. (Nandakumar via mingma)

2016-12-07 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 563480dcc -> c73e08a6d


HDFS-10206. Datanodes not sorted properly by distance when the reader isn't a 
datanode. (Nandakumar via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c73e08a6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c73e08a6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c73e08a6

Branch: refs/heads/trunk
Commit: c73e08a6dad46cad14b38a4a586a5cda1622b206
Parents: 563480d
Author: Ming Ma <min...@apache.org>
Authored: Wed Dec 7 08:26:09 2016 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Wed Dec 7 08:26:09 2016 -0800

--
 .../org/apache/hadoop/net/NetworkTopology.java  | 158 +--
 .../server/blockmanagement/DatanodeManager.java |  12 +-
 .../apache/hadoop/net/TestNetworkTopology.java  |  29 +++-
 3 files changed, 182 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c73e08a6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
index 14c870d..5751d2b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
@@ -57,6 +57,10 @@ public class NetworkTopology {
   public static final Logger LOG =
   LoggerFactory.getLogger(NetworkTopology.class);
 
+  private static final char PATH_SEPARATOR = '/';
+  private static final String PATH_SEPARATOR_STR = "/";
+  private static final String ROOT = "/";
+
   public static class InvalidTopologyException extends RuntimeException {
 private static final long serialVersionUID = 1L;
 public InvalidTopologyException(String msg) {
@@ -916,7 +920,7 @@ public class NetworkTopology {
 }
   }
 
-  /** convert a network tree to a string */
+  /** convert a network tree to a string. */
   @Override
   public String toString() {
 // print the number of racks
@@ -970,19 +974,108 @@ public class NetworkTopology {
* @return weight
*/
   protected int getWeight(Node reader, Node node) {
-// 0 is local, 1 is same rack, 2 is off rack
-// Start off by initializing to off rack
-int weight = 2;
-if (reader != null) {
-  if (reader.equals(node)) {
-weight = 0;
-  } else if (isOnSameRack(reader, node)) {
-weight = 1;
+// 0 is local, 2 is same rack, and each level on each node increases the
+//weight by 1
+//Start off by initializing to Integer.MAX_VALUE
+int weight = Integer.MAX_VALUE;
+if (reader != null && node != null) {
+  if(reader.equals(node)) {
+return 0;
+  }
+  int maxReaderLevel = reader.getLevel();
+  int maxNodeLevel = node.getLevel();
+  int currentLevelToCompare = maxReaderLevel > maxNodeLevel ?
+  maxNodeLevel : maxReaderLevel;
+  Node r = reader;
+  Node n = node;
+  weight = 0;
+  while(r != null && r.getLevel() > currentLevelToCompare) {
+r = r.getParent();
+weight++;
+  }
+  while(n != null && n.getLevel() > currentLevelToCompare) {
+n = n.getParent();
+weight++;
+  }
+  while(r != null && n != null && !r.equals(n)) {
+r = r.getParent();
+n = n.getParent();
+weight+=2;
+  }
+}
+return weight;
+  }
+
+  /**
+   * Returns an integer weight which specifies how far away node is
+   * from reader. A lower value signifies that a node is closer.
+   * It uses network location to calculate the weight
+   *
+   * @param reader Node where data will be read
+   * @param node Replica of data
+   * @return weight
+   */
+  private static int getWeightUsingNetworkLocation(Node reader, Node node) {
+//Start off by initializing to Integer.MAX_VALUE
+int weight = Integer.MAX_VALUE;
+if(reader != null && node != null) {
+  String readerPath = normalizeNetworkLocationPath(
+  reader.getNetworkLocation());
+  String nodePath = normalizeNetworkLocationPath(
+  node.getNetworkLocation());
+
+  //same rack
+  if(readerPath.equals(nodePath)) {
+if(reader.getName().equals(node.getName())) {
+  weight = 0;
+} else {
+  weight = 2;
+}
+  } else {
+String[] readerPathToken = readerPath.split(PATH_SEPARATOR_STR);
+String[] nodePathToken = nodePath.split(PATH_SE

[1/2] hadoop git commit: HDFS-9390. Block management for maintenance states.

2016-10-17 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 a5a56c356 -> d55a7f893


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d55a7f89/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
index 63617ad..c125f45 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
@@ -18,13 +18,19 @@
 package org.apache.hadoop.hdfs;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.Collection;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.client.HdfsDataInputStream;
@@ -32,6 +38,8 @@ import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo.AdminStates;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
 import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
 import org.apache.hadoop.util.Time;
 import org.junit.Test;
@@ -40,13 +48,23 @@ import org.junit.Test;
  * This class tests node maintenance.
  */
 public class TestMaintenanceState extends AdminStatesBaseTest {
-  public static final Log LOG = LogFactory.getLog(TestMaintenanceState.class);
-  static private final long EXPIRATION_IN_MS = 500;
+  public static final Logger LOG =
+  LoggerFactory.getLogger(TestMaintenanceState.class);
+  static private final long EXPIRATION_IN_MS = 50;
+  private int minMaintenanceR =
+  DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_DEFAULT;
 
   public TestMaintenanceState() {
 setUseCombinedHostFileManager();
   }
 
+  void setMinMaintenanceR(int minMaintenanceR) {
+this.minMaintenanceR = minMaintenanceR;
+getConf().setInt(
+DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY,
+minMaintenanceR);
+  }
+
   /**
* Verify a node can transition from AdminStates.ENTERING_MAINTENANCE to
* AdminStates.NORMAL.
@@ -55,21 +73,25 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
   public void testTakeNodeOutOfEnteringMaintenance() throws Exception {
 LOG.info("Starting testTakeNodeOutOfEnteringMaintenance");
 final int replicas = 1;
-final int numNamenodes = 1;
-final int numDatanodes = 1;
-final Path file1 = new Path("/testTakeNodeOutOfEnteringMaintenance.dat");
+final Path file = new Path("/testTakeNodeOutOfEnteringMaintenance.dat");
 
-startCluster(numNamenodes, numDatanodes);
+startCluster(1, 1);
 
-FileSystem fileSys = getCluster().getFileSystem(0);
-writeFile(fileSys, file1, replicas, 1);
+final FileSystem fileSys = getCluster().getFileSystem(0);
+final FSNamesystem ns = getCluster().getNamesystem(0);
+writeFile(fileSys, file, replicas, 1);
 
-DatanodeInfo nodeOutofService = takeNodeOutofService(0,
+final DatanodeInfo nodeOutofService = takeNodeOutofService(0,
 null, Long.MAX_VALUE, null, AdminStates.ENTERING_MAINTENANCE);
 
+// When node is in ENTERING_MAINTENANCE state, it can still serve read
+// requests
+assertNull(checkWithRetry(ns, fileSys, file, replicas, null,
+nodeOutofService));
+
 putNodeInService(0, nodeOutofService.getDatanodeUuid());
 
-cleanupFile(fileSys, file1);
+cleanupFile(fileSys, file);
   }
 
   /**
@@ -80,23 +102,21 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
   public void testEnteringMaintenanceExpiration() throws Exception {
 LOG.info("Starting testEnteringMaintenanceExpiration");
 final int replicas = 1;
-final int numNamenodes = 1;
-final int numDatanodes = 1;
-final Path file1 = new Path("/testTakeNodeOutOfEnteringMaintenance.dat");
+final Path file = new Path("/testEnteringMaintenanceExpiration.dat");
 
-startCluster(numNamenodes, numDatanodes);
+startCluster(1, 1);
 
-FileSystem fileSys = getCluster().getFileSystem(0);
-writeFile(fileSys, file1, replicas, 1);
+final FileSystem fileSys = getCluster().getFileSystem(0);
+writeFile(fileSys, file, 

[1/2] hadoop git commit: HDFS-9390. Block management for maintenance states.

2016-10-17 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk f5d923591 -> b61fb267b


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b61fb267/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
index 63617ad..c125f45 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
@@ -18,13 +18,19 @@
 package org.apache.hadoop.hdfs;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.Collection;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.client.HdfsDataInputStream;
@@ -32,6 +38,8 @@ import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo.AdminStates;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
 import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
 import org.apache.hadoop.util.Time;
 import org.junit.Test;
@@ -40,13 +48,23 @@ import org.junit.Test;
  * This class tests node maintenance.
  */
 public class TestMaintenanceState extends AdminStatesBaseTest {
-  public static final Log LOG = LogFactory.getLog(TestMaintenanceState.class);
-  static private final long EXPIRATION_IN_MS = 500;
+  public static final Logger LOG =
+  LoggerFactory.getLogger(TestMaintenanceState.class);
+  static private final long EXPIRATION_IN_MS = 50;
+  private int minMaintenanceR =
+  DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_DEFAULT;
 
   public TestMaintenanceState() {
 setUseCombinedHostFileManager();
   }
 
+  void setMinMaintenanceR(int minMaintenanceR) {
+this.minMaintenanceR = minMaintenanceR;
+getConf().setInt(
+DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY,
+minMaintenanceR);
+  }
+
   /**
* Verify a node can transition from AdminStates.ENTERING_MAINTENANCE to
* AdminStates.NORMAL.
@@ -55,21 +73,25 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
   public void testTakeNodeOutOfEnteringMaintenance() throws Exception {
 LOG.info("Starting testTakeNodeOutOfEnteringMaintenance");
 final int replicas = 1;
-final int numNamenodes = 1;
-final int numDatanodes = 1;
-final Path file1 = new Path("/testTakeNodeOutOfEnteringMaintenance.dat");
+final Path file = new Path("/testTakeNodeOutOfEnteringMaintenance.dat");
 
-startCluster(numNamenodes, numDatanodes);
+startCluster(1, 1);
 
-FileSystem fileSys = getCluster().getFileSystem(0);
-writeFile(fileSys, file1, replicas, 1);
+final FileSystem fileSys = getCluster().getFileSystem(0);
+final FSNamesystem ns = getCluster().getNamesystem(0);
+writeFile(fileSys, file, replicas, 1);
 
-DatanodeInfo nodeOutofService = takeNodeOutofService(0,
+final DatanodeInfo nodeOutofService = takeNodeOutofService(0,
 null, Long.MAX_VALUE, null, AdminStates.ENTERING_MAINTENANCE);
 
+// When node is in ENTERING_MAINTENANCE state, it can still serve read
+// requests
+assertNull(checkWithRetry(ns, fileSys, file, replicas, null,
+nodeOutofService));
+
 putNodeInService(0, nodeOutofService.getDatanodeUuid());
 
-cleanupFile(fileSys, file1);
+cleanupFile(fileSys, file);
   }
 
   /**
@@ -80,23 +102,21 @@ public class TestMaintenanceState extends 
AdminStatesBaseTest {
   public void testEnteringMaintenanceExpiration() throws Exception {
 LOG.info("Starting testEnteringMaintenanceExpiration");
 final int replicas = 1;
-final int numNamenodes = 1;
-final int numDatanodes = 1;
-final Path file1 = new Path("/testTakeNodeOutOfEnteringMaintenance.dat");
+final Path file = new Path("/testEnteringMaintenanceExpiration.dat");
 
-startCluster(numNamenodes, numDatanodes);
+startCluster(1, 1);
 
-FileSystem fileSys = getCluster().getFileSystem(0);
-writeFile(fileSys, file1, replicas, 1);
+final FileSystem fileSys = getCluster().getFileSystem(0);
+writeFile(fileSys, file, replicas, 

[2/2] hadoop git commit: HDFS-9390. Block management for maintenance states.

2016-10-17 Thread mingma
HDFS-9390. Block management for maintenance states.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d55a7f89
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d55a7f89
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d55a7f89

Branch: refs/heads/branch-2
Commit: d55a7f893584acee0c3bfd89e89f8002310dcc3f
Parents: a5a56c3
Author: Ming Ma 
Authored: Mon Oct 17 17:46:29 2016 -0700
Committer: Ming Ma 
Committed: Mon Oct 17 17:46:29 2016 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   4 +
 .../java/org/apache/hadoop/hdfs/DFSUtil.java|  54 +-
 .../hadoop/hdfs/server/balancer/Dispatcher.java |  11 +-
 .../server/blockmanagement/BlockManager.java| 260 +--
 .../BlockPlacementPolicyDefault.java|   4 +-
 .../CacheReplicationMonitor.java|   2 +-
 .../blockmanagement/DatanodeDescriptor.java |  35 +-
 .../server/blockmanagement/DatanodeManager.java |  47 +-
 .../blockmanagement/DecommissionManager.java| 145 ++--
 .../blockmanagement/HeartbeatManager.java   |  23 +-
 .../server/blockmanagement/NumberReplicas.java  |  39 +-
 .../blockmanagement/StorageTypeStats.java   |   8 +-
 .../hdfs/server/namenode/FSNamesystem.java  |   9 +-
 .../src/main/resources/hdfs-default.xml |   7 +
 .../apache/hadoop/hdfs/AdminStatesBaseTest.java |  20 +-
 .../apache/hadoop/hdfs/TestDecommission.java|   2 +-
 .../hadoop/hdfs/TestMaintenanceState.java   | 775 +--
 .../blockmanagement/TestBlockManager.java   |   4 +-
 .../namenode/TestDecommissioningStatus.java |  48 +-
 .../namenode/TestNamenodeCapacityReport.java|  78 +-
 .../hadoop/hdfs/util/HostsFileWriter.java   |   1 +
 21 files changed, 1219 insertions(+), 357 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d55a7f89/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index ca2fb3e..6b6a4e0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -213,6 +213,10 @@ public class DFSConfigKeys extends CommonConfigurationKeys 
{
   public static final String  DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_KEY 
=
   
HdfsClientConfigKeys.DeprecatedKeys.DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_KEY;
   public static final int 
DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_DEFAULT = -1;
+  public static final String  DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY =
+  "dfs.namenode.maintenance.replication.min";
+  public static final int DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_DEFAULT
+  = 1;
   public static final String  DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY =
   
HdfsClientConfigKeys.DeprecatedKeys.DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY;
   public static final int DFS_NAMENODE_REPLICATION_MAX_STREAMS_DEFAULT = 2;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d55a7f89/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
index a2d3d5d..e0a4e18 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
@@ -124,49 +124,59 @@ public class DFSUtil {
 return array;
   }
 
+
   /**
-   * Compartor for sorting DataNodeInfo[] based on decommissioned states.
-   * Decommissioned nodes are moved to the end of the array on sorting with
-   * this compartor.
+   * Comparator for sorting DataNodeInfo[] based on
+   * decommissioned and entering_maintenance states.
*/
-  public static final Comparator DECOM_COMPARATOR = 
-new Comparator() {
-  @Override
-  public int compare(DatanodeInfo a, DatanodeInfo b) {
-return a.isDecommissioned() == b.isDecommissioned() ? 0 : 
-  a.isDecommissioned() ? 1 : -1;
+  public static class ServiceComparator implements Comparator<DatanodeInfo> {
+@Override
+public int compare(DatanodeInfo a, DatanodeInfo b) {
+  // Decommissioned nodes will still be moved to the end of the list
+  if (a.isDecommissioned()) {
+return b.isDecommissioned() ? 0 : 1;
+  } 

[2/2] hadoop git commit: HDFS-9390. Block management for maintenance states.

2016-10-17 Thread mingma
HDFS-9390. Block management for maintenance states.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b61fb267
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b61fb267
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b61fb267

Branch: refs/heads/trunk
Commit: b61fb267b92b2736920b4bd0c673d31e7632ebb9
Parents: f5d9235
Author: Ming Ma 
Authored: Mon Oct 17 17:45:41 2016 -0700
Committer: Ming Ma 
Committed: Mon Oct 17 17:45:41 2016 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   5 +
 .../java/org/apache/hadoop/hdfs/DFSUtil.java|  53 +-
 .../hadoop/hdfs/server/balancer/Dispatcher.java |  11 +-
 .../server/blockmanagement/BlockManager.java| 249 --
 .../BlockPlacementPolicyDefault.java|   4 +-
 .../CacheReplicationMonitor.java|   2 +-
 .../blockmanagement/DatanodeDescriptor.java |  35 +-
 .../server/blockmanagement/DatanodeManager.java |  47 +-
 .../blockmanagement/DecommissionManager.java| 142 +++-
 .../blockmanagement/ErasureCodingWork.java  |  16 +-
 .../blockmanagement/HeartbeatManager.java   |  23 +-
 .../blockmanagement/LowRedundancyBlocks.java|  47 +-
 .../server/blockmanagement/NumberReplicas.java  |  30 +-
 .../blockmanagement/StorageTypeStats.java   |   8 +-
 .../hdfs/server/namenode/FSNamesystem.java  |   9 +-
 .../src/main/resources/hdfs-default.xml |   7 +
 .../apache/hadoop/hdfs/AdminStatesBaseTest.java |  20 +-
 .../apache/hadoop/hdfs/TestDecommission.java|   2 +-
 .../hadoop/hdfs/TestMaintenanceState.java   | 775 +--
 .../blockmanagement/TestBlockManager.java   |   8 +-
 .../namenode/TestDecommissioningStatus.java |  57 +-
 .../namenode/TestNamenodeCapacityReport.java|  78 +-
 .../hadoop/hdfs/util/HostsFileWriter.java   |   1 +
 23 files changed, 1240 insertions(+), 389 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b61fb267/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 10c0ad6..d54c109 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -220,6 +220,11 @@ public class DFSConfigKeys extends CommonConfigurationKeys 
{
   "dfs.namenode.reconstruction.pending.timeout-sec";
   public static final int 
DFS_NAMENODE_RECONSTRUCTION_PENDING_TIMEOUT_SEC_DEFAULT = -1;
 
+  public static final String  DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY =
+  "dfs.namenode.maintenance.replication.min";
+  public static final int DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_DEFAULT
+  = 1;
+
   public static final String  DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY =
   
HdfsClientConfigKeys.DeprecatedKeys.DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY;
   public static final int DFS_NAMENODE_REPLICATION_MAX_STREAMS_DEFAULT = 2;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b61fb267/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
index 83870cf..23166e2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
@@ -124,48 +124,57 @@ public class DFSUtil {
   }
 
   /**
-   * Compartor for sorting DataNodeInfo[] based on decommissioned states.
-   * Decommissioned nodes are moved to the end of the array on sorting with
-   * this compartor.
+   * Comparator for sorting DataNodeInfo[] based on
+   * decommissioned and entering_maintenance states.
*/
-  public static final Comparator DECOM_COMPARATOR = 
-new Comparator() {
-  @Override
-  public int compare(DatanodeInfo a, DatanodeInfo b) {
-return a.isDecommissioned() == b.isDecommissioned() ? 0 : 
-  a.isDecommissioned() ? 1 : -1;
+  public static class ServiceComparator implements Comparator<DatanodeInfo> {
+@Override
+public int compare(DatanodeInfo a, DatanodeInfo b) {
+  // Decommissioned nodes will still be moved to the end of the list
+  if (a.isDecommissioned()) {
+return b.isDecommissioned() ? 0 : 1;
+  } else if 

[2/2] hadoop git commit: HDFS-9392. Admins support for maintenance state. Contributed by Ming Ma.

2016-08-30 Thread mingma
HDFS-9392. Admins support for maintenance state. Contributed by Ming Ma.

(cherry picked from commit 9dcbdbdb5a34d85910707f81ebc1bb1f81c99978)
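
Admin states are fed to the NameNode through the combined hosts file;
the file list below includes both CombinedHostFileManager and a
dfs.hosts.json test resource. A hedged sample entry, assuming the
one-JSON-object-per-line format of that resource (host name, port and
epoch timestamp are invented):

    {"hostName": "dn1.example.com", "port": 50010, "adminState": "IN_MAINTENANCE", "maintenanceExpireTimeInMS": 1500000000000}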


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/56c9a96a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/56c9a96a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/56c9a96a

Branch: refs/heads/branch-2
Commit: 56c9a96a76d6d2ce65a975888fb3bb5dbab7c0ce
Parents: abbd95b
Author: Ming Ma 
Authored: Tue Aug 30 14:00:13 2016 -0700
Committer: Ming Ma 
Committed: Tue Aug 30 14:09:18 2016 -0700

--
 .../hdfs/protocol/DatanodeAdminProperties.java  |  19 +
 .../hadoop/hdfs/protocol/DatanodeInfo.java  |  27 +-
 .../hadoop/hdfs/protocol/HdfsConstants.java |   2 +-
 .../CombinedHostFileManager.java|  23 +
 .../server/blockmanagement/DatanodeManager.java |  33 +-
 .../server/blockmanagement/DatanodeStats.java   |  10 +-
 .../blockmanagement/DecommissionManager.java| 102 +++-
 .../blockmanagement/HeartbeatManager.java   |  27 +
 .../blockmanagement/HostConfigManager.java  |   7 +
 .../server/blockmanagement/HostFileManager.java |   6 +
 .../hdfs/server/namenode/FSNamesystem.java  |  29 +
 .../namenode/metrics/FSNamesystemMBean.java |  15 +
 .../apache/hadoop/hdfs/AdminStatesBaseTest.java | 375 
 .../apache/hadoop/hdfs/TestDecommission.java| 601 ++-
 .../hadoop/hdfs/TestMaintenanceState.java   | 310 ++
 .../namenode/TestDecommissioningStatus.java |   2 +-
 .../hadoop/hdfs/util/HostsFileWriter.java   |  55 +-
 .../hdfs/util/TestCombinedHostsFileReader.java  |   2 +-
 .../src/test/resources/dfs.hosts.json   |   2 +
 19 files changed, 1172 insertions(+), 475 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/56c9a96a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
index 9f7b983..2abed81 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
@@ -33,6 +33,7 @@ public class DatanodeAdminProperties {
   private int port;
   private String upgradeDomain;
   private AdminStates adminState = AdminStates.NORMAL;
+  private long maintenanceExpireTimeInMS = Long.MAX_VALUE;
 
   /**
* Return the host name of the datanode.
@@ -97,4 +98,22 @@ public class DatanodeAdminProperties {
   public void setAdminState(final AdminStates adminState) {
 this.adminState = adminState;
   }
+
+  /**
+   * Get the maintenance expiration time in milliseconds.
+   * @return the maintenance expiration time in milliseconds.
+   */
+  public long getMaintenanceExpireTimeInMS() {
+return this.maintenanceExpireTimeInMS;
+  }
+
+  /**
+   * Set the maintenance expiration time in milliseconds.
+   * @param maintenanceExpireTimeInMS
+   *        the maintenance expiration time in milliseconds.
+   */
+  public void setMaintenanceExpireTimeInMS(
+  final long maintenanceExpireTimeInMS) {
+this.maintenanceExpireTimeInMS = maintenanceExpireTimeInMS;
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/56c9a96a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
index 2a305a8..7599e36 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
@@ -83,6 +83,7 @@ public class DatanodeInfo extends DatanodeID implements Node {
   }
 
   protected AdminStates adminState;
+  private long maintenanceExpireTimeInMS;
 
   public DatanodeInfo(DatanodeInfo from) {
 super(from);
@@ -497,17 +498,28 @@ public class DatanodeInfo extends DatanodeID implements 
Node {
   }
 
   /**
-   * Put a node to maintenance mode.
+   * Start the maintenance operation.
*/
   public void startMaintenance() {
-adminState = AdminStates.ENTERING_MAINTENANCE;
+

[1/2] hadoop git commit: HDFS-9392. Admins support for maintenance state. Contributed by Ming Ma.

2016-08-30 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 abbd95b79 -> 56c9a96a7


http://git-wip-us.apache.org/repos/asf/hadoop/blob/56c9a96a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
index 1d5ebbf..f8e6da4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
@@ -26,17 +26,13 @@ import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
-import java.util.Random;
 import java.util.concurrent.ExecutionException;
 
 import com.google.common.base.Supplier;
 import com.google.common.collect.Lists;
-import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.BlockLocation;
-import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -64,11 +60,8 @@ import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStatistics;
 import org.apache.hadoop.test.GenericTestUtils;
-import org.apache.hadoop.test.PathUtils;
 import org.apache.log4j.Level;
-import org.junit.After;
 import org.junit.Assert;
-import org.junit.Before;
 import org.junit.Ignore;
 import org.junit.Test;
 import org.mortbay.util.ajax.JSON;
@@ -78,90 +71,9 @@ import org.slf4j.LoggerFactory;
 /**
  * This class tests the decommissioning of nodes.
  */
-public class TestDecommission {
+public class TestDecommission extends AdminStatesBaseTest {
   public static final Logger LOG = LoggerFactory.getLogger(TestDecommission
   .class);
-  static final long seed = 0xDEADBEEFL;
-  static final int blockSize = 8192;
-  static final int fileSize = 16384;
-  static final int HEARTBEAT_INTERVAL = 1; // heartbeat interval in seconds
-  static final int BLOCKREPORT_INTERVAL_MSEC = 1000; //block report in msec
-  static final int NAMENODE_REPLICATION_INTERVAL = 1; //replication interval
-
-  final Random myrand = new Random();
-  Path dir;
-  Path hostsFile;
-  Path excludeFile;
-  FileSystem localFileSys;
-  Configuration conf;
-  MiniDFSCluster cluster = null;
-
-  @Before
-  public void setup() throws IOException {
-conf = new HdfsConfiguration();
-// Set up the hosts/exclude files.
-localFileSys = FileSystem.getLocal(conf);
-Path workingDir = localFileSys.getWorkingDirectory();
-dir = new Path(workingDir, PathUtils.getTestDirName(getClass()) + 
"/work-dir/decommission");
-hostsFile = new Path(dir, "hosts");
-excludeFile = new Path(dir, "exclude");
-
-// Setup conf
-conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REPLICATION_CONSIDERLOAD_KEY, 
false);
-conf.set(DFSConfigKeys.DFS_HOSTS, hostsFile.toUri().getPath());
-conf.set(DFSConfigKeys.DFS_HOSTS_EXCLUDE, excludeFile.toUri().getPath());
-conf.setInt(DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY, 
2000);
-conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, HEARTBEAT_INTERVAL);
-conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_INTERVAL_KEY, 1);
-conf.setInt(DFSConfigKeys.DFS_BLOCKREPORT_INTERVAL_MSEC_KEY, 
BLOCKREPORT_INTERVAL_MSEC);
-
conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_KEY, 4);
-conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_INTERVAL_KEY, 
NAMENODE_REPLICATION_INTERVAL);
-  
-writeConfigFile(hostsFile, null);
-writeConfigFile(excludeFile, null);
-  }
-  
-  @After
-  public void teardown() throws IOException {
-cleanupFile(localFileSys, dir);
-if (cluster != null) {
-  cluster.shutdown();
-  cluster = null;
-}
-  }
-  
-  private void writeConfigFile(Path name, List<String> nodes) 
-throws IOException {
-// delete if it already exists
-if (localFileSys.exists(name)) {
-  localFileSys.delete(name, true);
-}
-
-FSDataOutputStream stm = localFileSys.create(name);
-
-if (nodes != null) {
-for (Iterator<String> it = nodes.iterator(); it.hasNext();) {
-String node = it.next();
-stm.writeBytes(node);
-stm.writeBytes("\n");
-  }
-}
-stm.close();
-  }
-
-  private void writeFile(FileSystem fileSys, Path name, int repl)
-throws IOException {
-// create and write a file that contains three blocks of data
-FSDataOutputStream stm = fileSys.create(name, true, fileSys.getConf()
-.getInt(CommonConfigurationKeys.IO_FILE_BUFFER_SIZE_KEY, 4096),
-(short) 
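
The refactor above follows a standard JUnit pattern: cluster setup and teardown
move into a shared abstract base (AdminStatesBaseTest) so the decommission and
maintenance suites can reuse them. A compilable sketch of that shape, with
stubbed bodies standing in for the real MiniDFSCluster management:

    import org.junit.After;
    import org.junit.Before;

    /** Sketch of the refactor: shared fixtures live in the base,
     *  concrete suites keep only their scenarios. Bodies are stubs. */
    abstract class AdminStatesBaseSketch {
      @Before
      public void setup() {
        // build Configuration, write hosts/exclude files, start the cluster
      }

      @After
      public void teardown() {
        // shut the cluster down and remove the work directory
      }
    }

    class DecommissionSuiteSketch extends AdminStatesBaseSketch {
      // @Test methods exercise decommission against the inherited fixtures
    }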

[1/2] hadoop git commit: HDFS-9392. Admins support for maintenance state. Contributed by Ming Ma.

2016-08-30 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk c4ee6915a -> 9dcbdbdb5


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9dcbdbdb/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
index f6b5d8f..ddb8237 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
@@ -26,17 +26,13 @@ import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
-import java.util.Random;
 import java.util.concurrent.ExecutionException;
 
 import com.google.common.base.Supplier;
 import com.google.common.collect.Lists;
-import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.BlockLocation;
-import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -64,11 +60,8 @@ import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStatistics;
 import org.apache.hadoop.test.GenericTestUtils;
-import org.apache.hadoop.test.PathUtils;
 import org.apache.log4j.Level;
-import org.junit.After;
 import org.junit.Assert;
-import org.junit.Before;
 import org.junit.Ignore;
 import org.junit.Test;
 import org.mortbay.util.ajax.JSON;
@@ -78,90 +71,9 @@ import org.slf4j.LoggerFactory;
 /**
  * This class tests the decommissioning of nodes.
  */
-public class TestDecommission {
+public class TestDecommission extends AdminStatesBaseTest {
   public static final Logger LOG = LoggerFactory.getLogger(TestDecommission
   .class);
-  static final long seed = 0xDEADBEEFL;
-  static final int blockSize = 8192;
-  static final int fileSize = 16384;
-  static final int HEARTBEAT_INTERVAL = 1; // heartbeat interval in seconds
-  static final int BLOCKREPORT_INTERVAL_MSEC = 1000; //block report in msec
-  static final int NAMENODE_REPLICATION_INTERVAL = 1; //replication interval
-
-  final Random myrand = new Random();
-  Path dir;
-  Path hostsFile;
-  Path excludeFile;
-  FileSystem localFileSys;
-  Configuration conf;
-  MiniDFSCluster cluster = null;
-
-  @Before
-  public void setup() throws IOException {
-conf = new HdfsConfiguration();
-// Set up the hosts/exclude files.
-localFileSys = FileSystem.getLocal(conf);
-Path workingDir = localFileSys.getWorkingDirectory();
-dir = new Path(workingDir, PathUtils.getTestDirName(getClass()) + 
"/work-dir/decommission");
-hostsFile = new Path(dir, "hosts");
-excludeFile = new Path(dir, "exclude");
-
-// Setup conf
-conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REPLICATION_CONSIDERLOAD_KEY, 
false);
-conf.set(DFSConfigKeys.DFS_HOSTS, hostsFile.toUri().getPath());
-conf.set(DFSConfigKeys.DFS_HOSTS_EXCLUDE, excludeFile.toUri().getPath());
-conf.setInt(DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY, 
2000);
-conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, HEARTBEAT_INTERVAL);
-conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_INTERVAL_KEY, 1);
-conf.setInt(DFSConfigKeys.DFS_BLOCKREPORT_INTERVAL_MSEC_KEY, 
BLOCKREPORT_INTERVAL_MSEC);
-
conf.setInt(DFSConfigKeys.DFS_NAMENODE_RECONSTRUCTION_PENDING_TIMEOUT_SEC_KEY, 
4);
-conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_INTERVAL_KEY, 
NAMENODE_REPLICATION_INTERVAL);
-  
-writeConfigFile(hostsFile, null);
-writeConfigFile(excludeFile, null);
-  }
-  
-  @After
-  public void teardown() throws IOException {
-cleanupFile(localFileSys, dir);
-if (cluster != null) {
-  cluster.shutdown();
-  cluster = null;
-}
-  }
-  
-  private void writeConfigFile(Path name, List<String> nodes) 
-throws IOException {
-// delete if it already exists
-if (localFileSys.exists(name)) {
-  localFileSys.delete(name, true);
-}
-
-FSDataOutputStream stm = localFileSys.create(name);
-
-if (nodes != null) {
-for (Iterator<String> it = nodes.iterator(); it.hasNext();) {
-String node = it.next();
-stm.writeBytes(node);
-stm.writeBytes("\n");
-  }
-}
-stm.close();
-  }
-
-  private void writeFile(FileSystem fileSys, Path name, int repl)
-throws IOException {
-// create and write a file that contains three blocks of data
-FSDataOutputStream stm = fileSys.create(name, true, fileSys.getConf()
-.getInt(CommonConfigurationKeys.IO_FILE_BUFFER_SIZE_KEY, 4096),
-

[2/2] hadoop git commit: HDFS-9392. Admins support for maintenance state. Contributed by Ming Ma.

2016-08-30 Thread mingma
HDFS-9392. Admins support for maintenance state. Contributed by Ming Ma.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9dcbdbdb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9dcbdbdb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9dcbdbdb

Branch: refs/heads/trunk
Commit: 9dcbdbdb5a34d85910707f81ebc1bb1f81c99978
Parents: c4ee691
Author: Ming Ma 
Authored: Tue Aug 30 14:00:13 2016 -0700
Committer: Ming Ma 
Committed: Tue Aug 30 14:00:13 2016 -0700

--
 .../hdfs/protocol/DatanodeAdminProperties.java  |  19 +
 .../hadoop/hdfs/protocol/DatanodeInfo.java  |  27 +-
 .../hadoop/hdfs/protocol/HdfsConstants.java |   2 +-
 .../CombinedHostFileManager.java|  23 +
 .../server/blockmanagement/DatanodeManager.java |  33 +-
 .../server/blockmanagement/DatanodeStats.java   |  10 +-
 .../blockmanagement/DecommissionManager.java| 101 +++-
 .../blockmanagement/HeartbeatManager.java   |  27 +
 .../blockmanagement/HostConfigManager.java  |   7 +
 .../server/blockmanagement/HostFileManager.java |   6 +
 .../hdfs/server/namenode/FSNamesystem.java  |  29 +
 .../namenode/metrics/FSNamesystemMBean.java |  15 +
 .../apache/hadoop/hdfs/AdminStatesBaseTest.java | 375 
 .../apache/hadoop/hdfs/TestDecommission.java| 592 ++-
 .../hadoop/hdfs/TestMaintenanceState.java   | 310 ++
 .../namenode/TestDecommissioningStatus.java |   2 +-
 .../hadoop/hdfs/util/HostsFileWriter.java   |  55 +-
 .../hdfs/util/TestCombinedHostsFileReader.java  |   2 +-
 .../src/test/resources/dfs.hosts.json   |   2 +
 19 files changed, 1165 insertions(+), 472 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9dcbdbdb/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
index 9f7b983..2abed81 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeAdminProperties.java
@@ -33,6 +33,7 @@ public class DatanodeAdminProperties {
   private int port;
   private String upgradeDomain;
   private AdminStates adminState = AdminStates.NORMAL;
+  private long maintenanceExpireTimeInMS = Long.MAX_VALUE;
 
   /**
* Return the host name of the datanode.
@@ -97,4 +98,22 @@ public class DatanodeAdminProperties {
   public void setAdminState(final AdminStates adminState) {
 this.adminState = adminState;
   }
+
+  /**
+   * Get the maintenance expiration time in milliseconds.
+   * @return the maintenance expiration time in milliseconds.
+   */
+  public long getMaintenanceExpireTimeInMS() {
+return this.maintenanceExpireTimeInMS;
+  }
+
+  /**
+   * Set the maintenance expiration time in milliseconds.
+   * @param maintenanceExpireTimeInMS
+   *the maintenance expiration time in milliseconds.
+   */
+  public void setMaintenanceExpireTimeInMS(
+  final long maintenanceExpireTimeInMS) {
+this.maintenanceExpireTimeInMS = maintenanceExpireTimeInMS;
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9dcbdbdb/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
index e04abdd..cd32a53 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
@@ -83,6 +83,7 @@ public class DatanodeInfo extends DatanodeID implements Node {
   }
 
   protected AdminStates adminState;
+  private long maintenanceExpireTimeInMS;
 
   public DatanodeInfo(DatanodeInfo from) {
 super(from);
@@ -499,17 +500,28 @@ public class DatanodeInfo extends DatanodeID implements 
Node {
   }
 
   /**
-   * Put a node to maintenance mode.
+   * Start the maintenance operation.
*/
   public void startMaintenance() {
-adminState = AdminStates.ENTERING_MAINTENANCE;
+this.adminState = AdminStates.ENTERING_MAINTENANCE;
   }
 
   /**
-   * Put a node 
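
The two-step life cycle the patch introduces (ENTERING_MAINTENANCE while the
NameNode verifies minimal replication, then IN_MAINTENANCE) can be sketched in
a few lines; the enum below mirrors DatanodeInfo.AdminStates, but the driver
and the transition helper are illustrative, not the patch's API:

    import java.util.concurrent.TimeUnit;

    public class MaintenanceLifecycleSketch {
      enum AdminStates { NORMAL, ENTERING_MAINTENANCE, IN_MAINTENANCE }

      private AdminStates adminState = AdminStates.NORMAL;
      private long maintenanceExpireTimeInMS = Long.MAX_VALUE;

      void startMaintenance(long expireTimeInMS) {
        this.maintenanceExpireTimeInMS = expireTimeInMS;
        this.adminState = AdminStates.ENTERING_MAINTENANCE;
      }

      /** Called once every block on the node is minimally replicated. */
      void setInMaintenance() {
        this.adminState = AdminStates.IN_MAINTENANCE;
      }

      boolean maintenanceExpired() {
        return System.currentTimeMillis() >= maintenanceExpireTimeInMS;
      }

      public static void main(String[] args) {
        MaintenanceLifecycleSketch dn = new MaintenanceLifecycleSketch();
        dn.startMaintenance(System.currentTimeMillis()
            + TimeUnit.HOURS.toMillis(2));   // two-hour window
        dn.setInMaintenance();
        System.out.println(dn.adminState
            + " expired=" + dn.maintenanceExpired());
      }
    }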

hadoop git commit: HDFS-9922. Upgrade Domain placement policy status marks a good block in violation when there are decommissioned nodes. (Chris Trezzo via mingma)

2016-06-15 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 6dd34baf3 -> 32b115da1


HDFS-9922. Upgrade Domain placement policy status marks a good block in 
violation when there are decommissioned nodes. (Chris Trezzo via mingma)

(cherry picked from commit b48f27e794e42ba90836314834e872616437d7c9)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/32b115da
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/32b115da
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/32b115da

Branch: refs/heads/branch-2
Commit: 32b115da1d75b9b5b6a770371fbecfda43928bb9
Parents: 6dd34ba
Author: Ming Ma <min...@apache.org>
Authored: Wed Jun 15 22:00:52 2016 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Wed Jun 15 22:03:20 2016 -0700

--
 .../BlockPlacementStatusWithUpgradeDomain.java  |   2 +-
 ...stBlockPlacementStatusWithUpgradeDomain.java |  83 ++
 .../TestUpgradeDomainBlockPlacementPolicy.java  | 161 ++-
 3 files changed, 209 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/32b115da/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
index e2e1486..4b3c3cc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
@@ -60,7 +60,7 @@ public class BlockPlacementStatusWithUpgradeDomain implements
 
   private boolean isUpgradeDomainPolicySatisfied() {
 if (numberOfReplicas <= upgradeDomainFactor) {
-  return (numberOfReplicas == upgradeDomains.size());
+  return (numberOfReplicas <= upgradeDomains.size());
 } else {
   return upgradeDomains.size() >= upgradeDomainFactor;
 }
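
The one-character change above is easy to miss, so here is a worked
illustration (numbers invented; this simplified predicate is a reading of the
fix, not the class itself): once a decommissioned replica contributes its
upgrade domain, the set of domains can grow larger than the replica count the
policy considers, and the old equality test wrongly reported a violation:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class UpgradeDomainPredicateSketch {
      public static void main(String[] args) {
        int numberOfReplicas = 3;            // replicas the policy counts
        int upgradeDomainFactor = 3;
        // Four distinct domains once a decommissioned replica is seen.
        Set<String> upgradeDomains =
            new HashSet<>(Arrays.asList("ud1", "ud2", "ud3", "ud4"));

        boolean satisfied = (numberOfReplicas <= upgradeDomainFactor)
            ? numberOfReplicas <= upgradeDomains.size()  // was ==, false here
            : upgradeDomains.size() >= upgradeDomainFactor;
        System.out.println(satisfied);       // true after the fix
      }
    }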

http://git-wip-us.apache.org/repos/asf/hadoop/blob/32b115da/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
new file mode 100644
index 000..bfff932
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
@@ -0,0 +1,83 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.util.HashSet;
+import java.util.Set;
+
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Unit tests for BlockPlacementStatusWithUpgradeDomain class.
+ */
+public class TestBlockPlacementStatusWithUpgradeDomain {
+
+  private Set<String> upgradeDomains;
+  private BlockPlacementStatusDefault bpsd =
+  mock(BlockPlacementStatusDefault.class);
+
+  @Before
+  public void setup() {
+upgradeDomains = new HashSet<String>();
+upgradeDomains.add("1");
+upgradeDomains.add("2");
+upgradeDomains.add("3");
+when(bpsd.isPlacementPolicySatisfied()).thenReturn(true);
+  

hadoop git commit: HDFS-9922. Upgrade Domain placement policy status marks a good block in violation when there are decommissioned nodes. (Chris Trezzo via mingma)

2016-06-15 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5dfc38ff5 -> b48f27e79


HDFS-9922. Upgrade Domain placement policy status marks a good block in 
violation when there are decommissioned nodes. (Chris Trezzo via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b48f27e7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b48f27e7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b48f27e7

Branch: refs/heads/trunk
Commit: b48f27e794e42ba90836314834e872616437d7c9
Parents: 5dfc38f
Author: Ming Ma <min...@apache.org>
Authored: Wed Jun 15 22:00:52 2016 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Wed Jun 15 22:00:52 2016 -0700

--
 .../BlockPlacementStatusWithUpgradeDomain.java  |   2 +-
 ...stBlockPlacementStatusWithUpgradeDomain.java |  83 ++
 .../TestUpgradeDomainBlockPlacementPolicy.java  | 161 ++-
 3 files changed, 209 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b48f27e7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
index e2e1486..4b3c3cc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
@@ -60,7 +60,7 @@ public class BlockPlacementStatusWithUpgradeDomain implements
 
   private boolean isUpgradeDomainPolicySatisfied() {
 if (numberOfReplicas <= upgradeDomainFactor) {
-  return (numberOfReplicas == upgradeDomains.size());
+  return (numberOfReplicas <= upgradeDomains.size());
 } else {
   return upgradeDomains.size() >= upgradeDomainFactor;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b48f27e7/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
new file mode 100644
index 000..bfff932
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
@@ -0,0 +1,83 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.util.HashSet;
+import java.util.Set;
+
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Unit tests for BlockPlacementStatusWithUpgradeDomain class.
+ */
+public class TestBlockPlacementStatusWithUpgradeDomain {
+
+  private Set<String> upgradeDomains;
+  private BlockPlacementStatusDefault bpsd =
+  mock(BlockPlacementStatusDefault.class);
+
+  @Before
+  public void setup() {
+upgradeDomains = new HashSet<String>();
+upgradeDomains.add("1");
+upgradeDomains.add("2");
+upgradeDomains.add("3");
+when(bpsd.isPlacementPolicySatisfied()).thenReturn(true);
+  }
+
+  @Test
+  public void testIsPolicySatisfiedPar

hadoop git commit: Revert "HDFS-9016. Display upgrade domain information in fsck. (mingma)"

2016-06-14 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 68ab0aa9d -> 03ed3f9b6


Revert "HDFS-9016. Display upgrade domain information in fsck. (mingma)"

This reverts commit 68ab0aa9d0b9bd03768dcd7c422ef201ad3f58f6.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/03ed3f9b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/03ed3f9b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/03ed3f9b

Branch: refs/heads/branch-2.8
Commit: 03ed3f9b66c8def06a9c580df5a6cc22d9afe039
Parents: 68ab0aa
Author: Ming Ma <min...@apache.org>
Authored: Tue Jun 14 20:14:00 2016 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Tue Jun 14 20:14:00 2016 -0700

--
 .../hdfs/server/namenode/NamenodeFsck.java  | 25 ++-
 .../org/apache/hadoop/hdfs/tools/DFSck.java | 13 ++--
 .../src/site/markdown/HDFSCommands.md   |  3 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 74 +---
 4 files changed, 14 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/03ed3f9b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
index 8d3bcd3..a813e50 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
@@ -118,7 +118,6 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   public static final String DECOMMISSIONED_STATUS = "is DECOMMISSIONED";
   public static final String NONEXISTENT_STATUS = "does not exist";
   public static final String FAILURE_STATUS = "FAILED";
-  public static final String UNDEFINED = "undefined";
 
   private final NameNode namenode;
   private final NetworkTopology networktopology;
@@ -137,7 +136,6 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   private boolean showCorruptFileBlocks = false;
 
   private boolean showReplicaDetails = false;
-  private boolean showUpgradeDomains = false;
   private long staleInterval;
   private Tracer tracer;
 
@@ -218,13 +216,10 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   else if (key.equals("racks")) { this.showRacks = true; }
   else if (key.equals("replicadetails")) {
 this.showReplicaDetails = true;
-  } else if (key.equals("upgradedomains")) {
-this.showUpgradeDomains = true;
-  } else if (key.equals("storagepolicies")) {
-this.showStoragePolcies = true;
-  } else if (key.equals("openforwrite")) {
-this.showOpenFiles = true;
-  } else if (key.equals("listcorruptfileblocks")) {
+  }
+  else if (key.equals("storagepolicies")) { this.showStoragePolcies = 
true; }
+  else if (key.equals("openforwrite")) {this.showOpenFiles = true; }
+  else if (key.equals("listcorruptfileblocks")) {
 this.showCorruptFileBlocks = true;
   } else if (key.equals("startblockafter")) {
 this.currentCookie[0] = pmap.get("startblockafter")[0];
@@ -529,8 +524,8 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
 if (res.totalFiles % 100 == 0) { out.println(); out.flush(); }
   }
 
-  private void collectBlocksSummary(String parent, HdfsFileStatus file,
-  Result res, LocatedBlocks blocks) throws IOException {
+  private void collectBlocksSummary(String parent, HdfsFileStatus file, Result 
res,
+  LocatedBlocks blocks) throws IOException {
 String path = file.getFullName(parent);
 boolean isOpen = blocks.isUnderConstruction();
 if (isOpen && !showOpenFiles) {
@@ -643,8 +638,7 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
 missize += block.getNumBytes();
   } else {
 report.append(" Live_repl=" + liveReplicas);
-if (showLocations || showRacks || showReplicaDetails ||
-showUpgradeDomains) {
+if (showLocations || showRacks || showReplicaDetails) {
   StringBuilder sb = new StringBuilder("[");
   Iterable<DatanodeStorageInfo> storages = 
bm.getStorages(block.getLocalBlock());
   for (Iterator<DatanodeStorageInfo> iterator = storages.iterator(); 
iterator.hasNext();) {
@@ -656,11 +650,6 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
  

hadoop git commit: HDFS-9016. Display upgrade domain information in fsck. (mingma)

2016-06-14 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 d838c6443 -> 68ab0aa9d


HDFS-9016. Display upgrade domain information in fsck. (mingma)

(cherry picked from commit b7436f44682442e4d431ec3c97e4753e03435f3e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/68ab0aa9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/68ab0aa9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/68ab0aa9

Branch: refs/heads/branch-2.8
Commit: 68ab0aa9d0b9bd03768dcd7c422ef201ad3f58f6
Parents: d838c64
Author: Ming Ma <min...@apache.org>
Authored: Tue Jun 14 20:07:37 2016 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Tue Jun 14 20:08:29 2016 -0700

--
 .../hdfs/server/namenode/NamenodeFsck.java  | 25 +--
 .../org/apache/hadoop/hdfs/tools/DFSck.java | 13 ++--
 .../src/site/markdown/HDFSCommands.md   |  3 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 74 +++-
 4 files changed, 101 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/68ab0aa9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
index a813e50..8d3bcd3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
@@ -118,6 +118,7 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   public static final String DECOMMISSIONED_STATUS = "is DECOMMISSIONED";
   public static final String NONEXISTENT_STATUS = "does not exist";
   public static final String FAILURE_STATUS = "FAILED";
+  public static final String UNDEFINED = "undefined";
 
   private final NameNode namenode;
   private final NetworkTopology networktopology;
@@ -136,6 +137,7 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   private boolean showCorruptFileBlocks = false;
 
   private boolean showReplicaDetails = false;
+  private boolean showUpgradeDomains = false;
   private long staleInterval;
   private Tracer tracer;
 
@@ -216,10 +218,13 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   else if (key.equals("racks")) { this.showRacks = true; }
   else if (key.equals("replicadetails")) {
 this.showReplicaDetails = true;
-  }
-  else if (key.equals("storagepolicies")) { this.showStoragePolcies = 
true; }
-  else if (key.equals("openforwrite")) {this.showOpenFiles = true; }
-  else if (key.equals("listcorruptfileblocks")) {
+  } else if (key.equals("upgradedomains")) {
+this.showUpgradeDomains = true;
+  } else if (key.equals("storagepolicies")) {
+this.showStoragePolcies = true;
+  } else if (key.equals("openforwrite")) {
+this.showOpenFiles = true;
+  } else if (key.equals("listcorruptfileblocks")) {
 this.showCorruptFileBlocks = true;
   } else if (key.equals("startblockafter")) {
 this.currentCookie[0] = pmap.get("startblockafter")[0];
@@ -524,8 +529,8 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
 if (res.totalFiles % 100 == 0) { out.println(); out.flush(); }
   }
 
-  private void collectBlocksSummary(String parent, HdfsFileStatus file, Result 
res,
-  LocatedBlocks blocks) throws IOException {
+  private void collectBlocksSummary(String parent, HdfsFileStatus file,
+  Result res, LocatedBlocks blocks) throws IOException {
 String path = file.getFullName(parent);
 boolean isOpen = blocks.isUnderConstruction();
 if (isOpen && !showOpenFiles) {
@@ -638,7 +643,8 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
 missize += block.getNumBytes();
   } else {
 report.append(" Live_repl=" + liveReplicas);
-if (showLocations || showRacks || showReplicaDetails) {
+if (showLocations || showRacks || showReplicaDetails ||
+showUpgradeDomains) {
   StringBuilder sb = new StringBuilder("[");
   Iterable<DatanodeStorageInfo> storages = 
bm.getStorages(block.getLocalBlock());
   for (Iterator<DatanodeStorageInfo> iterator = storages.iterator(); 
iterator.hasNext();) {
@@ -650,6 +656,11 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   s

hadoop git commit: HDFS-9016. Display upgrade domain information in fsck. (mingma)

2016-06-14 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 863bfa4d6 -> b7436f446


HDFS-9016. Display upgrade domain information in fsck. (mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b7436f44
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b7436f44
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b7436f44

Branch: refs/heads/branch-2
Commit: b7436f44682442e4d431ec3c97e4753e03435f3e
Parents: 863bfa4
Author: Ming Ma <min...@apache.org>
Authored: Tue Jun 14 20:07:37 2016 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Tue Jun 14 20:07:37 2016 -0700

--
 .../hdfs/server/namenode/NamenodeFsck.java  | 25 +--
 .../org/apache/hadoop/hdfs/tools/DFSck.java | 13 ++--
 .../src/site/markdown/HDFSCommands.md   |  3 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 74 +++-
 4 files changed, 101 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b7436f44/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
index 1b63c91..6e5a1a4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
@@ -118,6 +118,7 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   public static final String DECOMMISSIONED_STATUS = "is DECOMMISSIONED";
   public static final String NONEXISTENT_STATUS = "does not exist";
   public static final String FAILURE_STATUS = "FAILED";
+  public static final String UNDEFINED = "undefined";
 
   private final NameNode namenode;
   private final NetworkTopology networktopology;
@@ -136,6 +137,7 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   private boolean showCorruptFileBlocks = false;
 
   private boolean showReplicaDetails = false;
+  private boolean showUpgradeDomains = false;
   private long staleInterval;
   private Tracer tracer;
 
@@ -216,10 +218,13 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   else if (key.equals("racks")) { this.showRacks = true; }
   else if (key.equals("replicadetails")) {
 this.showReplicaDetails = true;
-  }
-  else if (key.equals("storagepolicies")) { this.showStoragePolcies = 
true; }
-  else if (key.equals("openforwrite")) {this.showOpenFiles = true; }
-  else if (key.equals("listcorruptfileblocks")) {
+  } else if (key.equals("upgradedomains")) {
+this.showUpgradeDomains = true;
+  } else if (key.equals("storagepolicies")) {
+this.showStoragePolcies = true;
+  } else if (key.equals("openforwrite")) {
+this.showOpenFiles = true;
+  } else if (key.equals("listcorruptfileblocks")) {
 this.showCorruptFileBlocks = true;
   } else if (key.equals("startblockafter")) {
 this.currentCookie[0] = pmap.get("startblockafter")[0];
@@ -524,8 +529,8 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
 if (res.totalFiles % 100 == 0) { out.println(); out.flush(); }
   }
 
-  private void collectBlocksSummary(String parent, HdfsFileStatus file, Result 
res,
-  LocatedBlocks blocks) throws IOException {
+  private void collectBlocksSummary(String parent, HdfsFileStatus file,
+  Result res, LocatedBlocks blocks) throws IOException {
 String path = file.getFullName(parent);
 boolean isOpen = blocks.isUnderConstruction();
 if (isOpen && !showOpenFiles) {
@@ -638,7 +643,8 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
 missize += block.getNumBytes();
   } else {
 report.append(" Live_repl=" + liveReplicas);
-if (showLocations || showRacks || showReplicaDetails) {
+if (showLocations || showRacks || showReplicaDetails ||
+showUpgradeDomains) {
   StringBuilder sb = new StringBuilder("[");
   Iterable<DatanodeStorageInfo> storages = 
bm.getStorages(block.getLocalBlock());
   for (Iterator<DatanodeStorageInfo> iterator = storages.iterator(); 
iterator.hasNext();) {
@@ -650,6 +656,11 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   sb.append(new DatanodeInfoWithStorage(dnDesc, 
stora

hadoop git commit: HDFS-9016. Display upgrade domain information in fsck. (mingma)

2016-06-14 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk e2f640942 -> 7d521a29e


HDFS-9016. Display upgrade domain information in fsck. (mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7d521a29
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7d521a29
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7d521a29

Branch: refs/heads/trunk
Commit: 7d521a29eed62c4329b16034375bd5fb747a92a9
Parents: e2f6409
Author: Ming Ma <min...@apache.org>
Authored: Tue Jun 14 20:05:50 2016 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Tue Jun 14 20:05:50 2016 -0700

--
 .../hdfs/server/namenode/NamenodeFsck.java  | 24 +--
 .../org/apache/hadoop/hdfs/tools/DFSck.java | 16 +++--
 .../src/site/markdown/HDFSCommands.md   |  3 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 74 +++-
 4 files changed, 103 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d521a29/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
index a85c68c..d7c9a78 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
@@ -118,6 +118,7 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   public static final String DECOMMISSIONED_STATUS = "is DECOMMISSIONED";
   public static final String NONEXISTENT_STATUS = "does not exist";
   public static final String FAILURE_STATUS = "FAILED";
+  public static final String UNDEFINED = "undefined";
 
   private final NameNode namenode;
   private final BlockManager blockManager;
@@ -141,6 +142,7 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   private boolean showCorruptFileBlocks = false;
 
   private boolean showReplicaDetails = false;
+  private boolean showUpgradeDomains = false;
   private long staleInterval;
   private Tracer tracer;
 
@@ -222,11 +224,15 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   else if (key.equals("racks")) { this.showRacks = true; }
   else if (key.equals("replicadetails")) {
 this.showReplicaDetails = true;
-  }
-  else if (key.equals("storagepolicies")) { this.showStoragePolcies = 
true; }
-  else if (key.equals("showprogress")) { this.showprogress = true; }
-  else if (key.equals("openforwrite")) {this.showOpenFiles = true; }
-  else if (key.equals("listcorruptfileblocks")) {
+  } else if (key.equals("upgradedomains")) {
+this.showUpgradeDomains = true;
+  } else if (key.equals("storagepolicies")) {
+this.showStoragePolcies = true;
+  } else if (key.equals("showprogress")) {
+this.showprogress = true;
+  } else if (key.equals("openforwrite")) {
+this.showOpenFiles = true;
+  } else if (key.equals("listcorruptfileblocks")) {
 this.showCorruptFileBlocks = true;
   } else if (key.equals("startblockafter")) {
 this.currentCookie[0] = pmap.get("startblockafter")[0];
@@ -550,7 +556,8 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
* For striped block group, display info of each internal block.
*/
   private String getReplicaInfo(BlockInfo storedBlock) {
-if (!(showLocations || showRacks || showReplicaDetails)) {
+if (!(showLocations || showRacks || showReplicaDetails ||
+showUpgradeDomains)) {
   return "";
 }
 final boolean isComplete = storedBlock.isComplete();
@@ -568,6 +575,11 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
 sb.append(new DatanodeInfoWithStorage(dnDesc, storage.getStorageID(),
 storage.getStorageType()));
   }
+  if (showUpgradeDomains) {
+String upgradeDomain = (dnDesc.getUpgradeDomain() != null) ?
+dnDesc.getUpgradeDomain() : UNDEFINED;
+sb.append("(ud=" + upgradeDomain +")");
+  }
   if (showReplicaDetails) {
 Collection<DatanodeDescriptor> corruptReplicas =
 blockManager.getCorruptReplicas(storedBlock);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d521a29/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/h
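
The visible effect of the flag is a per-replica suffix in fsck's location
listing, requested on the command line as, e.g.,
hdfs fsck /path -files -blocks -upgradedomains. A self-contained sketch of
that formatting (the storage strings are invented; in the real code dnDesc is
a DatanodeDescriptor and UNDEFINED is the constant added above):

    public class FsckUpgradeDomainSketch {
      static final String UNDEFINED = "undefined";

      /** Append the "(ud=...)" suffix added per storage location. */
      static String annotate(String storage, String upgradeDomain) {
        String ud = (upgradeDomain != null) ? upgradeDomain : UNDEFINED;
        return storage + "(ud=" + ud + ")";
      }

      public static void main(String[] args) {
        System.out.println(annotate("dn1:50010[DISK]", "ud1"));  // (ud=ud1)
        System.out.println(annotate("dn2:50010[DISK]", null));   // (ud=undefined)
      }
    }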

hadoop git commit: MAPREDUCE-5044. Have AM trigger jstack on task attempts that timeout before killing them. (Eric Payne and Gera Shegalov via mingma)

2016-06-06 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 f9478c95b -> ec4f9a14f


MAPREDUCE-5044. Have AM trigger jstack on task attempts that timeout before 
killing them. (Eric Payne and Gera Shegalov via mingma)

(cherry picked from commit 4a1cedc010d3fa1d8ef3f2773ca12acadfee5ba5)
(cherry picked from commit 74e2b5efa26f27027fed212b4b2108f0e95587fb)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ec4f9a14
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ec4f9a14
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ec4f9a14

Branch: refs/heads/branch-2.8
Commit: ec4f9a14f930276a72bbb646fb6706000b6e7751
Parents: f9478c9
Author: Ming Ma <min...@apache.org>
Authored: Mon Jun 6 14:30:51 2016 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Mon Jun 6 14:49:43 2016 -0700

--
 .../hadoop/mapred/LocalContainerLauncher.java   |  28 +
 .../v2/app/job/impl/TaskAttemptImpl.java|   3 +-
 .../v2/app/launcher/ContainerLauncherEvent.java |  21 +++-
 .../v2/app/launcher/ContainerLauncherImpl.java  |  20 +++-
 .../v2/app/launcher/TestContainerLauncher.java  |  10 +-
 .../app/launcher/TestContainerLauncherImpl.java |   8 ++
 .../hadoop/mapred/ResourceMgrDelegate.java  |   5 +-
 .../hadoop/mapred/TestClientRedirect.java   |   2 +-
 .../apache/hadoop/mapreduce/v2/TestMRJobs.java  | 119 +++
 .../yarn/api/ApplicationClientProtocol.java |   2 +-
 .../yarn/api/ContainerManagementProtocol.java   |   5 +
 .../SignalContainerResponse.java|   2 +-
 .../main/proto/applicationclient_protocol.proto |   2 +-
 .../proto/containermanagement_protocol.proto|   1 +
 .../hadoop/yarn/client/api/YarnClient.java  |   2 +-
 .../yarn/client/api/impl/YarnClientImpl.java|   4 +-
 .../hadoop/yarn/client/cli/ApplicationCLI.java  |   6 +-
 .../yarn/client/api/impl/TestYarnClient.java|   4 +-
 .../yarn/api/ContainerManagementProtocolPB.java |   7 ++
 .../ApplicationClientProtocolPBClientImpl.java  |   4 +-
 ...ContainerManagementProtocolPBClientImpl.java |  19 +++
 .../ApplicationClientProtocolPBServiceImpl.java |   5 +-
 ...ontainerManagementProtocolPBServiceImpl.java |  20 
 .../hadoop/yarn/TestContainerLaunchRPC.java |  10 ++
 .../yarn/TestContainerResourceIncreaseRPC.java  |   8 ++
 .../java/org/apache/hadoop/yarn/TestRPC.java|  10 ++
 .../containermanager/ContainerManagerImpl.java  |  37 --
 .../amrmproxy/MockResourceManagerFacade.java|   2 +-
 .../server/resourcemanager/ClientRMService.java |   2 +-
 .../yarn/server/resourcemanager/MockRM.java |   6 +-
 .../server/resourcemanager/NodeManager.java |   9 +-
 .../resourcemanager/TestAMAuthorization.java|   8 ++
 .../TestApplicationMasterLauncher.java  |   8 ++
 .../resourcemanager/TestSignalContainer.java|   2 +-
 34 files changed, 360 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec4f9a14/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/LocalContainerLauncher.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/LocalContainerLauncher.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/LocalContainerLauncher.java
index 1a0d5fb..e4fea94 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/LocalContainerLauncher.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/LocalContainerLauncher.java
@@ -20,6 +20,10 @@ package org.apache.hadoop.mapred;
 
 import java.io.File;
 import java.io.IOException;
+import java.lang.management.ManagementFactory;
+import java.lang.management.RuntimeMXBean;
+import java.lang.management.ThreadInfo;
+import java.lang.management.ThreadMXBean;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Map;
@@ -255,6 +259,30 @@ public class LocalContainerLauncher extends 
AbstractService implements
 
 } else if (event.getType() == EventType.CONTAINER_REMOTE_CLEANUP) {
 
+  if (event.getDumpContainerThreads()) {
+try {
+  // Construct full thread dump header
+  System.out.println(new java.util.Date());
+  RuntimeMXBean rtBean = ManagementFactory.getRuntimeMXBean();
+  System.out.println("Full thread dump " + rtBean.getVmName()
+  + " (" + rtBean.getVmVersion()
+  + " " + r
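
The dump header above is assembled by hand from RuntimeMXBean; for the thread
stacks themselves the patch walks ThreadMXBean. A standalone sketch of the
same technique using only the standard java.lang.management API (not the AM
wiring from the patch):

    import java.lang.management.ManagementFactory;
    import java.lang.management.RuntimeMXBean;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadDumpSketch {
      public static void main(String[] args) {
        RuntimeMXBean rtBean = ManagementFactory.getRuntimeMXBean();
        System.out.println(new java.util.Date());
        System.out.println("Full thread dump " + rtBean.getVmName()
            + " (" + rtBean.getVmVersion() + "):");
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // true, true: include locked monitors and ownable synchronizers.
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
          System.out.print(info);
        }
      }
    }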

hadoop git commit: MAPREDUCE-5044. Have AM trigger jstack on task attempts that timeout before killing them. (Eric Payne and Gera Shegalov via mingma)

2016-06-06 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 074588d78 -> 74e2b5efa


MAPREDUCE-5044. Have AM trigger jstack on task attempts that timeout before 
killing them. (Eric Payne and Gera Shegalov via mingma)

(cherry picked from commit 4a1cedc010d3fa1d8ef3f2773ca12acadfee5ba5)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/74e2b5ef
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/74e2b5ef
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/74e2b5ef

Branch: refs/heads/branch-2
Commit: 74e2b5efa26f27027fed212b4b2108f0e95587fb
Parents: 074588d
Author: Ming Ma <min...@apache.org>
Authored: Mon Jun 6 14:30:51 2016 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Mon Jun 6 14:34:47 2016 -0700

--
 .../hadoop/mapred/LocalContainerLauncher.java   |  28 +
 .../v2/app/job/impl/TaskAttemptImpl.java|   3 +-
 .../v2/app/launcher/ContainerLauncherEvent.java |  21 +++-
 .../v2/app/launcher/ContainerLauncherImpl.java  |  19 ++-
 .../v2/app/launcher/TestContainerLauncher.java  |  10 +-
 .../app/launcher/TestContainerLauncherImpl.java |   8 ++
 .../hadoop/mapred/ResourceMgrDelegate.java  |   5 +-
 .../hadoop/mapred/TestClientRedirect.java   |   2 +-
 .../apache/hadoop/mapreduce/v2/TestMRJobs.java  | 119 +++
 .../yarn/api/ApplicationClientProtocol.java |   2 +-
 .../yarn/api/ContainerManagementProtocol.java   |   5 +
 .../SignalContainerResponse.java|   2 +-
 .../main/proto/applicationclient_protocol.proto |   2 +-
 .../proto/containermanagement_protocol.proto|   1 +
 .../hadoop/yarn/client/api/YarnClient.java  |   2 +-
 .../yarn/client/api/impl/YarnClientImpl.java|   4 +-
 .../hadoop/yarn/client/cli/ApplicationCLI.java  |   6 +-
 .../yarn/client/api/impl/TestYarnClient.java|   4 +-
 .../yarn/api/ContainerManagementProtocolPB.java |   7 ++
 .../ApplicationClientProtocolPBClientImpl.java  |   4 +-
 ...ContainerManagementProtocolPBClientImpl.java |  19 +++
 .../ApplicationClientProtocolPBServiceImpl.java |   5 +-
 ...ontainerManagementProtocolPBServiceImpl.java |  20 
 .../hadoop/yarn/TestContainerLaunchRPC.java |  10 ++
 .../yarn/TestContainerResourceIncreaseRPC.java  |   8 ++
 .../java/org/apache/hadoop/yarn/TestRPC.java|  10 ++
 .../containermanager/ContainerManagerImpl.java  |  38 --
 .../amrmproxy/MockResourceManagerFacade.java|   2 +-
 .../server/resourcemanager/ClientRMService.java |   2 +-
 .../yarn/server/resourcemanager/MockRM.java |   6 +-
 .../server/resourcemanager/NodeManager.java |   9 +-
 .../resourcemanager/TestAMAuthorization.java|   8 ++
 .../TestApplicationMasterLauncher.java  |   8 ++
 .../resourcemanager/TestSignalContainer.java|   2 +-
 34 files changed, 360 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/74e2b5ef/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/LocalContainerLauncher.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/LocalContainerLauncher.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/LocalContainerLauncher.java
index da118c5..190d988 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/LocalContainerLauncher.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/LocalContainerLauncher.java
@@ -20,6 +20,10 @@ package org.apache.hadoop.mapred;
 
 import java.io.File;
 import java.io.IOException;
+import java.lang.management.ManagementFactory;
+import java.lang.management.RuntimeMXBean;
+import java.lang.management.ThreadInfo;
+import java.lang.management.ThreadMXBean;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Map;
@@ -255,6 +259,30 @@ public class LocalContainerLauncher extends 
AbstractService implements
 
 } else if (event.getType() == EventType.CONTAINER_REMOTE_CLEANUP) {
 
+  if (event.getDumpContainerThreads()) {
+try {
+  // Construct full thread dump header
+  System.out.println(new java.util.Date());
+  RuntimeMXBean rtBean = ManagementFactory.getRuntimeMXBean();
+  System.out.println("Full thread dump " + rtBean.getVmName()
+  + " (" + rtBean.getVmVersion()
+  + " " + rtBean.getSystemProperties().get("java.vm.info")
+  + "

hadoop git commit: HDFS-10320. Rack failures may result in NN terminate. (Xiao Chen via mingma)

2016-05-04 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 22ac37615 -> d6e95ae47


HDFS-10320. Rack failures may result in NN terminate. (Xiao Chen via mingma)

(cherry picked from commit 1268cf5fbe4458fa75ad0662512d352f9e8d3470)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d6e95ae4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d6e95ae4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d6e95ae4

Branch: refs/heads/branch-2.8
Commit: d6e95ae47b6281219ca2b634507ece3b4ac6a12e
Parents: 22ac376
Author: Ming Ma <min...@apache.org>
Authored: Wed May 4 17:02:26 2016 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Wed May 4 17:07:36 2016 -0700

--
 .../org/apache/hadoop/net/NetworkTopology.java  | 109 +--
 .../AvailableSpaceBlockPlacementPolicy.java |  11 +-
 .../BlockPlacementPolicyDefault.java|  84 +++---
 .../web/resources/NamenodeWebHdfsMethods.java   |  13 +--
 .../apache/hadoop/net/TestNetworkTopology.java  |  75 -
 5 files changed, 196 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d6e95ae4/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
index b637da1..d680094 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
@@ -29,13 +29,13 @@ import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import com.google.common.annotations.VisibleForTesting;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.util.ReflectionUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
@@ -54,8 +54,8 @@ import com.google.common.collect.Lists;
 public class NetworkTopology {
   public final static String DEFAULT_RACK = "/default-rack";
   public final static int DEFAULT_HOST_LEVEL = 2;
-  public static final Log LOG =
-LogFactory.getLog(NetworkTopology.class);
+  public static final Logger LOG =
+  LoggerFactory.getLogger(NetworkTopology.class);
 
   public static class InvalidTopologyException extends RuntimeException {
 private static final long serialVersionUID = 1L;
@@ -432,9 +432,7 @@ public class NetworkTopology {
   }
 }
   }
-  if(LOG.isDebugEnabled()) {
-LOG.debug("NetworkTopology became:\n" + this.toString());
-  }
+  LOG.debug("NetworkTopology became:\n{}", this.toString());
 } finally {
   netlock.writeLock().unlock();
 }
@@ -507,9 +505,7 @@ public class NetworkTopology {
   numOfRacks--;
 }
   }
-  if(LOG.isDebugEnabled()) {
-LOG.debug("NetworkTopology became:\n" + this.toString());
-  }
+  LOG.debug("NetworkTopology became:\n{}", this.toString());
 } finally {
   netlock.writeLock().unlock();
 }
@@ -702,26 +698,45 @@ public class NetworkTopology {
 r.setSeed(seed);
   }
 
-  /** randomly choose one node from scope
-   * if scope starts with ~, choose one from the all nodes except for the
-   * ones in scope; otherwise, choose one from scope
+  /**
+   * Randomly choose a node.
+   *
* @param scope range of nodes from which a node will be chosen
* @return the chosen node
+   *
+   * @see #chooseRandom(String, Collection)
*/
-  public Node chooseRandom(String scope) {
+  public Node chooseRandom(final String scope) {
+return chooseRandom(scope, null);
+  }
+
+  /**
+   * Randomly choose one node from scope.
+   *
+   * If scope starts with ~, choose one from the all nodes except for the
+   * ones in scope; otherwise, choose one from scope.
+   * If excludedNodes is given, choose a node that's not in excludedNodes.
+   *
+   * @param scope range of nodes from which a node will be chosen
+   * @param excludedNodes nodes to be excluded from
+   * @return the chosen node
+   */
+  public Node chooseRandom(final String scope,
+  final Collection excludedNodes) {
 netlock.readLock().lock(
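
Behaviorally, the new overload picks uniformly from the nodes in scope minus
the exclusions. A simplified, self-contained sketch of that contract (nodes
are plain strings here; the real method operates on org.apache.hadoop.net.Node
and the cluster tree):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.List;
    import java.util.Random;

    public class ChooseRandomSketch {
      /** Pick one candidate from scope, skipping excludedNodes (may be null). */
      static String chooseRandom(List<String> scope,
          Collection<String> excludedNodes, Random r) {
        List<String> candidates = new ArrayList<>(scope);
        if (excludedNodes != null) {
          candidates.removeAll(excludedNodes);
        }
        return candidates.isEmpty() ? null
            : candidates.get(r.nextInt(candidates.size()));
      }

      public static void main(String[] args) {
        List<String> rack = Arrays.asList("dn1", "dn2", "dn3");
        System.out.println(
            chooseRandom(rack, Collections.singleton("dn2"), new Random(42)));
      }
    }

As an aside, the same diff moves NetworkTopology from commons-logging to
SLF4J; the parameterized LOG.debug form defers string construction, which is
why the explicit isDebugEnabled() guards above could be dropped.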

hadoop git commit: HDFS-10320. Rack failures may result in NN terminate. (Xiao Chen via mingma)

2016-05-04 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 8262ef831 -> 5b5383317


HDFS-10320. Rack failures may result in NN terminate. (Xiao Chen via mingma)

(cherry picked from commit 1268cf5fbe4458fa75ad0662512d352f9e8d3470)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5b538331
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5b538331
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5b538331

Branch: refs/heads/branch-2
Commit: 5b53833171f9427234d2c608e48b1eb323efb435
Parents: 8262ef8
Author: Ming Ma <min...@apache.org>
Authored: Wed May 4 17:02:26 2016 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Wed May 4 17:04:44 2016 -0700

--
 .../org/apache/hadoop/net/NetworkTopology.java  | 109 +--
 .../AvailableSpaceBlockPlacementPolicy.java |  11 +-
 .../BlockPlacementPolicyDefault.java|  84 +++---
 .../web/resources/NamenodeWebHdfsMethods.java   |  13 +--
 .../apache/hadoop/net/TestNetworkTopology.java  |  75 -
 5 files changed, 196 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5b538331/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
index e1d2968..1e23ff6 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
@@ -29,13 +29,13 @@ import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import com.google.common.annotations.VisibleForTesting;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.util.ReflectionUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
@@ -54,8 +54,8 @@ import com.google.common.collect.Lists;
 public class NetworkTopology {
   public final static String DEFAULT_RACK = "/default-rack";
   public final static int DEFAULT_HOST_LEVEL = 2;
-  public static final Log LOG =
-LogFactory.getLog(NetworkTopology.class);
+  public static final Logger LOG =
+  LoggerFactory.getLogger(NetworkTopology.class);
 
   public static class InvalidTopologyException extends RuntimeException {
 private static final long serialVersionUID = 1L;
@@ -442,9 +442,7 @@ public class NetworkTopology {
   }
 }
   }
-  if(LOG.isDebugEnabled()) {
-LOG.debug("NetworkTopology became:\n" + this.toString());
-  }
+  LOG.debug("NetworkTopology became:\n{}", this.toString());
 } finally {
   netlock.writeLock().unlock();
 }
@@ -517,9 +515,7 @@ public class NetworkTopology {
   numOfRacks--;
 }
   }
-  if(LOG.isDebugEnabled()) {
-LOG.debug("NetworkTopology became:\n" + this.toString());
-  }
+  LOG.debug("NetworkTopology became:\n{}", this.toString());
 } finally {
   netlock.writeLock().unlock();
 }
@@ -717,26 +713,45 @@ public class NetworkTopology {
 r.setSeed(seed);
   }
 
-  /** randomly choose one node from scope
-   * if scope starts with ~, choose one from the all nodes except for the
-   * ones in scope; otherwise, choose one from scope
+  /**
+   * Randomly choose a node.
+   *
* @param scope range of nodes from which a node will be chosen
* @return the chosen node
+   *
+   * @see #chooseRandom(String, Collection)
*/
-  public Node chooseRandom(String scope) {
+  public Node chooseRandom(final String scope) {
+return chooseRandom(scope, null);
+  }
+
+  /**
+   * Randomly choose one node from scope.
+   *
+   * If scope starts with ~, choose one from the all nodes except for the
+   * ones in scope; otherwise, choose one from scope.
+   * If excludedNodes is given, choose a node that's not in excludedNodes.
+   *
+   * @param scope range of nodes from which a node will be chosen
+   * @param excludedNodes nodes to be excluded from
+   * @return the chosen node
+   */
+  public Node chooseRandom(final String scope,
+  final Collection<Node> excludedNodes) {
 netlock.readLock().lock(

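A side note on the logging changes in this hunk: moving from commons-logging to SLF4J makes the explicit isDebugEnabled() guards unnecessary, because the "{}" placeholder is only substituted after the level check passes. A standalone before/after sketch (the class here is illustrative, not code from the patch):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class LoggingStyleSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(LoggingStyleSketch.class);

  void topologyChanged(Object topology) {
    // commons-logging style: guard needed so the message string is not
    // built when DEBUG is off.
    // if (LOG.isDebugEnabled()) {
    //   LOG.debug("NetworkTopology became:\n" + topology.toString());
    // }

    // SLF4J style: no guard; formatting happens only if DEBUG is enabled.
    LOG.debug("NetworkTopology became:\n{}", topology);
  }
}

One nuance: the patch passes this.toString() as the argument, which still builds the string eagerly; passing the object itself, as above, defers even the toString() call until the level check succeeds.
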
hadoop git commit: HDFS-10320. Rack failures may result in NN terminate. (Xiao Chen via mingma)

2016-05-04 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 9e37fe3b7 -> 1268cf5fb


HDFS-10320. Rack failures may result in NN terminate. (Xiao Chen via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1268cf5f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1268cf5f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1268cf5f

Branch: refs/heads/trunk
Commit: 1268cf5fbe4458fa75ad0662512d352f9e8d3470
Parents: 9e37fe3
Author: Ming Ma <min...@apache.org>
Authored: Wed May 4 17:02:26 2016 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Wed May 4 17:02:26 2016 -0700

--
 .../org/apache/hadoop/net/NetworkTopology.java  | 109 +--
 .../AvailableSpaceBlockPlacementPolicy.java |  11 +-
 .../BlockPlacementPolicyDefault.java|  84 +++---
 .../web/resources/NamenodeWebHdfsMethods.java   |  13 +--
 .../apache/hadoop/net/TestNetworkTopology.java  |  75 -
 5 files changed, 196 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1268cf5f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
index e1d2968..1e23ff6 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
@@ -29,13 +29,13 @@ import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import com.google.common.annotations.VisibleForTesting;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.util.ReflectionUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
@@ -54,8 +54,8 @@ import com.google.common.collect.Lists;
 public class NetworkTopology {
   public final static String DEFAULT_RACK = "/default-rack";
   public final static int DEFAULT_HOST_LEVEL = 2;
-  public static final Log LOG =
-LogFactory.getLog(NetworkTopology.class);
+  public static final Logger LOG =
+  LoggerFactory.getLogger(NetworkTopology.class);
 
   public static class InvalidTopologyException extends RuntimeException {
 private static final long serialVersionUID = 1L;
@@ -442,9 +442,7 @@ public class NetworkTopology {
   }
 }
   }
-  if(LOG.isDebugEnabled()) {
-LOG.debug("NetworkTopology became:\n" + this.toString());
-  }
+  LOG.debug("NetworkTopology became:\n{}", this.toString());
 } finally {
   netlock.writeLock().unlock();
 }
@@ -517,9 +515,7 @@ public class NetworkTopology {
   numOfRacks--;
 }
   }
-  if(LOG.isDebugEnabled()) {
-LOG.debug("NetworkTopology became:\n" + this.toString());
-  }
+  LOG.debug("NetworkTopology became:\n{}", this.toString());
 } finally {
   netlock.writeLock().unlock();
 }
@@ -717,26 +713,45 @@ public class NetworkTopology {
 r.setSeed(seed);
   }
 
-  /** randomly choose one node from scope
-   * if scope starts with ~, choose one from the all nodes except for the
-   * ones in scope; otherwise, choose one from scope
+  /**
+   * Randomly choose a node.
+   *
* @param scope range of nodes from which a node will be chosen
* @return the chosen node
+   *
+   * @see #chooseRandom(String, Collection)
*/
-  public Node chooseRandom(String scope) {
+  public Node chooseRandom(final String scope) {
+return chooseRandom(scope, null);
+  }
+
+  /**
+   * Randomly choose one node from scope.
+   *
+   * If scope starts with ~, choose one from the all nodes except for the
+   * ones in scope; otherwise, choose one from scope.
+   * If excludedNodes is given, choose a node that's not in excludedNodes.
+   *
+   * @param scope range of nodes from which a node will be chosen
+   * @param excludedNodes nodes to be excluded from
+   * @return the chosen node
+   */
+  public Node chooseRandom(final String scope,
+  final Collection<Node> excludedNodes) {
 netlock.readLock().lock();
 try {
   if (scope.startsWith("~")) {
-return ch

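The chooseRandom path shown above runs under netlock.readLock(), with the matching write lock taken by the add/remove paths earlier in the diff. That discipline reduces to the standard ReadWriteLock idiom; a self-contained sketch of it (the names and guarded state are stand-ins, not the real class):

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class TopologyLockSketch {
  private final ReadWriteLock netlock = new ReentrantReadWriteLock();
  private int numOfRacks; // stand-in for the guarded topology state

  int readRackCount() {
    netlock.readLock().lock();
    try {
      // Many readers may hold the read lock concurrently.
      return numOfRacks;
    } finally {
      netlock.readLock().unlock(); // always released, even on exceptions
    }
  }

  void addRack() {
    netlock.writeLock().lock();
    try {
      // Writers are exclusive: no readers or other writers run here.
      numOfRacks++;
    } finally {
      netlock.writeLock().unlock();
    }
  }
}
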
[1/2] hadoop git commit: HADOOP-12789. log classpath of ApplicationClassLoader at INFO level. (Sangjin Lee via mingma)

2016-03-07 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.6 80034621e -> d61a2fd1d


HADOOP-12789. log classpath of ApplicationClassLoader at INFO level. (Sangjin 
Lee via mingma)

(cherry picked from commit 49eedc7ff02ea61764f416f0e2ddf81370aec5fb)
(cherry picked from commit f01f1940c4006eeffae679f822463d156c790c04)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5579ee37
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5579ee37
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5579ee37

Branch: refs/heads/branch-2.6
Commit: 5579ee37e377138a124c8d6251480848310b339b
Parents: 8003462
Author: Ming Ma <min...@apache.org>
Authored: Mon Mar 7 20:26:19 2016 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Mon Mar 7 20:52:12 2016 -0800

--
 .../java/org/apache/hadoop/util/ApplicationClassLoader.java | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5579ee37/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
index b18997c..bec88bc 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
@@ -96,10 +96,6 @@ public class ApplicationClassLoader extends URLClassLoader {
   public ApplicationClassLoader(URL[] urls, ClassLoader parent,
    List<String> systemClasses) {
 super(urls, parent);
-if (LOG.isDebugEnabled()) {
-  LOG.debug("urls: " + Arrays.toString(urls));
-  LOG.debug("system classes: " + systemClasses);
-}
 this.parent = parent;
 if (parent == null) {
   throw new IllegalArgumentException("No parent classloader!");
@@ -108,6 +104,7 @@ public class ApplicationClassLoader extends URLClassLoader {
 this.systemClasses = (systemClasses == null || systemClasses.isEmpty()) ?
 Arrays.asList(StringUtils.getTrimmedStrings(SYSTEM_CLASSES_DEFAULT)) :
 systemClasses;
+LOG.info("classpath: " + Arrays.toString(urls));
 LOG.info("system classes: " + this.systemClasses);
   }
 
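The practical effect of this change is that the classloader's URL list, previously visible only with DEBUG logging enabled, now appears in ordinary INFO-level logs, which is usually all you have when diagnosing classpath-isolation problems after the fact. A hypothetical invocation (the jar paths are made up; the constructor signature is the one shown in the diff):

import java.net.URL;
import org.apache.hadoop.util.ApplicationClassLoader;

public class ClassLoaderLogSketch {
  public static void main(String[] args) throws Exception {
    URL[] urls = {
        new URL("file:/tmp/app/job.jar"),
        new URL("file:/tmp/app/lib/")
    };
    // Passing null for systemClasses falls back to SYSTEM_CLASSES_DEFAULT,
    // per the constructor body quoted above.
    ApplicationClassLoader loader = new ApplicationClassLoader(
        urls, ClassLoaderLogSketch.class.getClassLoader(), null);
    // After this patch the constructor itself logs, at INFO, lines like:
    //   classpath: [file:/tmp/app/job.jar, file:/tmp/app/lib/]
    //   system classes: [...]
    loader.close();
  }
}
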



[2/2] hadoop git commit: Update CHANGES.txt for HADOOP-12789

2016-03-07 Thread mingma
Update CHANGES.txt for HADOOP-12789

(cherry picked from commit d6e40fd270c7b552b02c407e2f2dba7deb97177d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d61a2fd1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d61a2fd1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d61a2fd1

Branch: refs/heads/branch-2.6
Commit: d61a2fd1d03de7e63c2da5260051c8a392e4e64b
Parents: 5579ee3
Author: Ming Ma <min...@apache.org>
Authored: Mon Mar 7 20:50:09 2016 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Mon Mar 7 20:52:25 2016 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d61a2fd1/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index a6fb74d..5782165 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -14,6 +14,9 @@ Release 2.6.5 - UNRELEASED
 HADOOP-12800. Copy docker directory from 2.8 to 2.7/2.6 repos to enable
 pre-commit Jenkins runs (zhz)
 
+HADOOP-12789. log classpath of ApplicationClassLoader at INFO level
+(Sangjin Lee via mingma)
+
   OPTIMIZATIONS
 
   BUG FIXES



[2/2] hadoop git commit: Update CHANGES.txt for HADOOP-12789

2016-03-07 Thread mingma
Update CHANGES.txt for HADOOP-12789


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d6e40fd2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d6e40fd2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d6e40fd2

Branch: refs/heads/branch-2.7
Commit: d6e40fd270c7b552b02c407e2f2dba7deb97177d
Parents: f01f194
Author: Ming Ma <min...@apache.org>
Authored: Mon Mar 7 20:50:09 2016 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Mon Mar 7 20:50:09 2016 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d6e40fd2/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 29bd08e..e3efbbe 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -897,6 +897,9 @@ Release 2.6.5 - UNRELEASED
 HADOOP-12800. Copy docker directory from 2.8 to 2.7/2.6 repos to enable
 pre-commit Jenkins runs (zhz)
 
+HADOOP-12789. log classpath of ApplicationClassLoader at INFO level
+(Sangjin Lee via mingma)
+
   OPTIMIZATIONS
 
   BUG FIXES



[1/2] hadoop git commit: HADOOP-12789. log classpath of ApplicationClassLoader at INFO level. (Sangjin Lee via mingma)

2016-03-07 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 7f43eb954 -> d6e40fd27


HADOOP-12789. log classpath of ApplicationClassLoader at INFO level. (Sangjin 
Lee via mingma)

(cherry picked from commit 49eedc7ff02ea61764f416f0e2ddf81370aec5fb)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f01f1940
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f01f1940
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f01f1940

Branch: refs/heads/branch-2.7
Commit: f01f1940c4006eeffae679f822463d156c790c04
Parents: 7f43eb9
Author: Ming Ma <min...@apache.org>
Authored: Mon Mar 7 20:26:19 2016 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Mon Mar 7 20:43:07 2016 -0800

--
 .../java/org/apache/hadoop/util/ApplicationClassLoader.java | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f01f1940/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
index 6d37c28..8c1601a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
@@ -94,10 +94,6 @@ public class ApplicationClassLoader extends URLClassLoader {
   public ApplicationClassLoader(URL[] urls, ClassLoader parent,
    List<String> systemClasses) {
 super(urls, parent);
-if (LOG.isDebugEnabled()) {
-  LOG.debug("urls: " + Arrays.toString(urls));
-  LOG.debug("system classes: " + systemClasses);
-}
 this.parent = parent;
 if (parent == null) {
   throw new IllegalArgumentException("No parent classloader!");
@@ -106,6 +102,7 @@ public class ApplicationClassLoader extends URLClassLoader {
 this.systemClasses = (systemClasses == null || systemClasses.isEmpty()) ?
 Arrays.asList(StringUtils.getTrimmedStrings(SYSTEM_CLASSES_DEFAULT)) :
 systemClasses;
+LOG.info("classpath: " + Arrays.toString(urls));
 LOG.info("system classes: " + this.systemClasses);
   }
 



hadoop git commit: HADOOP-12789. log classpath of ApplicationClassLoader at INFO level. (Sangjin Lee via mingma)

2016-03-07 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 d84d54e26 -> 47f884e06


HADOOP-12789. log classpath of ApplicationClassLoader at INFO level. (Sangjin 
Lee via mingma)

(cherry picked from commit 49eedc7ff02ea61764f416f0e2ddf81370aec5fb)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/47f884e0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/47f884e0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/47f884e0

Branch: refs/heads/branch-2.8
Commit: 47f884e06af98764cc1f8a6bde690396f1176e15
Parents: d84d54e
Author: Ming Ma <min...@apache.org>
Authored: Mon Mar 7 20:26:19 2016 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Mon Mar 7 20:32:32 2016 -0800

--
 .../java/org/apache/hadoop/util/ApplicationClassLoader.java | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/47f884e0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
index 6d37c28..8c1601a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
@@ -94,10 +94,6 @@ public class ApplicationClassLoader extends URLClassLoader {
   public ApplicationClassLoader(URL[] urls, ClassLoader parent,
    List<String> systemClasses) {
 super(urls, parent);
-if (LOG.isDebugEnabled()) {
-  LOG.debug("urls: " + Arrays.toString(urls));
-  LOG.debug("system classes: " + systemClasses);
-}
 this.parent = parent;
 if (parent == null) {
   throw new IllegalArgumentException("No parent classloader!");
@@ -106,6 +102,7 @@ public class ApplicationClassLoader extends URLClassLoader {
 this.systemClasses = (systemClasses == null || systemClasses.isEmpty()) ?
 Arrays.asList(StringUtils.getTrimmedStrings(SYSTEM_CLASSES_DEFAULT)) :
 systemClasses;
+LOG.info("classpath: " + Arrays.toString(urls));
 LOG.info("system classes: " + this.systemClasses);
   }
 



hadoop git commit: HADOOP-12789. log classpath of ApplicationClassLoader at INFO level. (Sangjin Lee via mingma)

2016-03-07 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 fe0009a2b -> aae39ff97


HADOOP-12789. log classpath of ApplicationClassLoader at INFO level. (Sangjin 
Lee via mingma)

(cherry picked from commit 49eedc7ff02ea61764f416f0e2ddf81370aec5fb)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aae39ff9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aae39ff9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aae39ff9

Branch: refs/heads/branch-2
Commit: aae39ff9739ef1415bd91802ce0f9402b9c70919
Parents: fe0009a
Author: Ming Ma <min...@apache.org>
Authored: Mon Mar 7 20:26:19 2016 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Mon Mar 7 20:27:10 2016 -0800

--
 .../java/org/apache/hadoop/util/ApplicationClassLoader.java | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aae39ff9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
index 6d37c28..8c1601a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
@@ -94,10 +94,6 @@ public class ApplicationClassLoader extends URLClassLoader {
   public ApplicationClassLoader(URL[] urls, ClassLoader parent,
    List<String> systemClasses) {
 super(urls, parent);
-if (LOG.isDebugEnabled()) {
-  LOG.debug("urls: " + Arrays.toString(urls));
-  LOG.debug("system classes: " + systemClasses);
-}
 this.parent = parent;
 if (parent == null) {
   throw new IllegalArgumentException("No parent classloader!");
@@ -106,6 +102,7 @@ public class ApplicationClassLoader extends URLClassLoader {
 this.systemClasses = (systemClasses == null || systemClasses.isEmpty()) ?
 Arrays.asList(StringUtils.getTrimmedStrings(SYSTEM_CLASSES_DEFAULT)) :
 systemClasses;
+LOG.info("classpath: " + Arrays.toString(urls));
 LOG.info("system classes: " + this.systemClasses);
   }
 



hadoop git commit: HADOOP-12789. log classpath of ApplicationClassLoader at INFO level. (Sangjin Lee via mingma)

2016-03-07 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 352d299cf -> 49eedc7ff


HADOOP-12789. log classpath of ApplicationClassLoader at INFO level. (Sangjin 
Lee via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/49eedc7f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/49eedc7f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/49eedc7f

Branch: refs/heads/trunk
Commit: 49eedc7ff02ea61764f416f0e2ddf81370aec5fb
Parents: 352d299
Author: Ming Ma <min...@apache.org>
Authored: Mon Mar 7 20:26:19 2016 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Mon Mar 7 20:26:19 2016 -0800

--
 .../java/org/apache/hadoop/util/ApplicationClassLoader.java | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/49eedc7f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
index 6d37c28..8c1601a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
@@ -94,10 +94,6 @@ public class ApplicationClassLoader extends URLClassLoader {
   public ApplicationClassLoader(URL[] urls, ClassLoader parent,
    List<String> systemClasses) {
 super(urls, parent);
-if (LOG.isDebugEnabled()) {
-  LOG.debug("urls: " + Arrays.toString(urls));
-  LOG.debug("system classes: " + systemClasses);
-}
 this.parent = parent;
 if (parent == null) {
   throw new IllegalArgumentException("No parent classloader!");
@@ -106,6 +102,7 @@ public class ApplicationClassLoader extends URLClassLoader {
 this.systemClasses = (systemClasses == null || systemClasses.isEmpty()) ?
 Arrays.asList(StringUtils.getTrimmedStrings(SYSTEM_CLASSES_DEFAULT)) :
 systemClasses;
+LOG.info("classpath: " + Arrays.toString(urls));
 LOG.info("system classes: " + this.systemClasses);
   }
 



hadoop git commit: YARN-4720. Skip unnecessary NN operations in log aggregation. (Jun Gong via mingma)

2016-02-26 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 cf8bcc73c -> 6f5ca1b29


YARN-4720. Skip unnecessary NN operations in log aggregation. (Jun Gong via 
mingma)

(cherry picked from commit 7f3139e54da2c496327446a5eac43f8421fc8839)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6f5ca1b2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6f5ca1b2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6f5ca1b2

Branch: refs/heads/branch-2.8
Commit: 6f5ca1b29308472d2118dd1f7cac7f321f583e65
Parents: cf8bcc7
Author: Ming Ma <min...@apache.org>
Authored: Fri Feb 26 08:40:05 2016 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Fri Feb 26 08:49:51 2016 -0800

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../logaggregation/AppLogAggregatorImpl.java| 68 +---
 .../TestLogAggregationService.java  | 59 +
 3 files changed, 106 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6f5ca1b2/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 00f7aaa..c134f08 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -645,6 +645,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-4207. Add a non-judgemental YARN app completion status. (Rich Haase 
via sseth)
 
+YARN-4720. Skip unnecessary NN operations in log aggregation.
+    (Jun Gong via mingma)
+
   BUG FIXES
 
 YARN-4680. TimerTasks leak in ATS V1.5 Writer. (Xuan Gong via gtcarrera9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6f5ca1b2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
index 82a1ad4..da7fc14 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
@@ -127,6 +127,9 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
   // This variable is only for testing
   private final AtomicBoolean waiting = new AtomicBoolean(false);
 
+  // This variable is only for testing
+  private int logAggregationTimes = 0;
+
   private boolean renameTemporaryLogFileFailed = false;
 
   private final Map<ContainerId, ContainerLogAggregator> 
containerLogAggregators =
@@ -311,7 +314,15 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
 }
 
 LogWriter writer = null;
+String diagnosticMessage = "";
+boolean logAggregationSucceedInThisCycle = true;
 try {
+  if (pendingContainerInThisCycle.isEmpty()) {
+return;
+  }
+
+  logAggregationTimes++;
+
   try {
 writer =
 new LogWriter(this.conf, this.remoteNodeTmpLogFileForApp,
@@ -321,6 +332,7 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
 writer.writeApplicationOwner(this.userUgi.getShortUserName());
 
   } catch (IOException e1) {
+logAggregationSucceedInThisCycle = false;
 LOG.error("Cannot create writer for app " + this.applicationId
 + ". Skip log upload this time. ", e1);
 return;
@@ -369,20 +381,16 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
 remoteNodeLogFileForApp.getName() + "_"
 + currentTime);
 
-  String diagnosticMessage = "";
-  boolean logAggregationSucceedInThisCycle = true;
   final boolean rename = uploadedLogsInThisCycle;
   try {
 userUgi.doAs(new PrivilegedExceptionAction<Object>() {
   @Override
   public Object run() throws Exception {
 FileSystem remoteFS = remoteNodeLogFileForApp.getFileSystem(conf);
-if (remoteFS.exists(remoteNodeTmpLogFileForApp)) {
-  if (rename) {
-

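The heart of this patch is the early return: when a cycle has no pending container logs, the aggregator now bails out before creating a LogWriter or touching the remote filesystem, so no NameNode operations are issued for that cycle. A simplified standalone sketch of the guard pattern (the type and field names are stand-ins for the real AppLogAggregatorImpl members):

import java.util.HashSet;
import java.util.Set;

class UploadCycleSketch {
  // Test-only counter, mirroring the logAggregationTimes field added above.
  private int logAggregationTimes = 0;

  void uploadLogsForContainers(Set<String> pendingContainersInThisCycle) {
    if (pendingContainersInThisCycle.isEmpty()) {
      // Nothing to upload: skip writer creation, upload and rename,
      // i.e. skip every remote (NameNode) operation this cycle.
      return;
    }
    logAggregationTimes++;
    // ... create the writer, upload each container's logs, then rename
    // the temporary file into place, as in the real method ...
  }

  public static void main(String[] args) {
    UploadCycleSketch s = new UploadCycleSketch();
    s.uploadLogsForContainers(new HashSet<String>()); // no-op cycle
  }
}
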
hadoop git commit: YARN-4720. Skip unnecessary NN operations in log aggregation. (Jun Gong via mingma)

2016-02-26 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 b1d497a7d -> 1656bcec5


YARN-4720. Skip unnecessary NN operations in log aggregation. (Jun Gong via 
mingma)

(cherry picked from commit 7f3139e54da2c496327446a5eac43f8421fc8839)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1656bcec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1656bcec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1656bcec

Branch: refs/heads/branch-2
Commit: 1656bcec5f5ad949c97e66e51ac38bf26e8093f3
Parents: b1d497a
Author: Ming Ma <min...@apache.org>
Authored: Fri Feb 26 08:40:05 2016 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Fri Feb 26 08:43:14 2016 -0800

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../logaggregation/AppLogAggregatorImpl.java| 68 +---
 .../TestLogAggregationService.java  | 59 +
 3 files changed, 106 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1656bcec/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index cf4d5fd..863fa10 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -826,6 +826,9 @@ Release 2.8.0 - UNRELEASED
 YARN-4066. Large number of queues choke fair scheduler.
 (Johan Gustavsson via kasha)
 
+YARN-4720. Skip unnecessary NN operations in log aggregation.
+    (Jun Gong via mingma)
+
   BUG FIXES
 
 YARN-4680. TimerTasks leak in ATS V1.5 Writer. (Xuan Gong via gtcarrera9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1656bcec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
index 82a1ad4..da7fc14 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
@@ -127,6 +127,9 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
   // This variable is only for testing
   private final AtomicBoolean waiting = new AtomicBoolean(false);
 
+  // This variable is only for testing
+  private int logAggregationTimes = 0;
+
   private boolean renameTemporaryLogFileFailed = false;
 
   private final Map<ContainerId, ContainerLogAggregator> 
containerLogAggregators =
@@ -311,7 +314,15 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
 }
 
 LogWriter writer = null;
+String diagnosticMessage = "";
+boolean logAggregationSucceedInThisCycle = true;
 try {
+  if (pendingContainerInThisCycle.isEmpty()) {
+return;
+  }
+
+  logAggregationTimes++;
+
   try {
 writer =
 new LogWriter(this.conf, this.remoteNodeTmpLogFileForApp,
@@ -321,6 +332,7 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
 writer.writeApplicationOwner(this.userUgi.getShortUserName());
 
   } catch (IOException e1) {
+logAggregationSucceedInThisCycle = false;
 LOG.error("Cannot create writer for app " + this.applicationId
 + ". Skip log upload this time. ", e1);
 return;
@@ -369,20 +381,16 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
 remoteNodeLogFileForApp.getName() + "_"
 + currentTime);
 
-  String diagnosticMessage = "";
-  boolean logAggregationSucceedInThisCycle = true;
   final boolean rename = uploadedLogsInThisCycle;
   try {
 userUgi.doAs(new PrivilegedExceptionAction<Object>() {
   @Override
   public Object run() throws Exception {
 FileSystem remoteFS = remoteNodeLogFileForApp.getFileSystem(conf);
-if (remoteFS.exists(remoteNodeTmpLogFileForApp)) {
-  if (rename) {
-

[1/2] hadoop git commit: YARN-4720. Skip unnecessary NN operations in log aggregation. (Jun Gong via mingma)

2016-02-26 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk f0de733ca -> 1aa06978e


YARN-4720. Skip unnecessary NN operations in log aggregation. (Jun Gong via 
mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7f3139e5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7f3139e5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7f3139e5

Branch: refs/heads/trunk
Commit: 7f3139e54da2c496327446a5eac43f8421fc8839
Parents: d7fdec1
Author: Ming Ma <min...@apache.org>
Authored: Fri Feb 26 08:40:05 2016 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Fri Feb 26 08:40:05 2016 -0800

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../logaggregation/AppLogAggregatorImpl.java| 68 +---
 .../TestLogAggregationService.java  | 59 +
 3 files changed, 106 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f3139e5/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 8708e0c..4ec6e2a 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -884,6 +884,9 @@ Release 2.8.0 - UNRELEASED
 YARN-4066. Large number of queues choke fair scheduler.
 (Johan Gustavsson via kasha)
 
+YARN-4720. Skip unnecessary NN operations in log aggregation.
+    (Jun Gong via mingma)
+
   BUG FIXES
 
 YARN-3197. Confusing log generated by CapacityScheduler. (Varun Saxena 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f3139e5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
index 82a1ad4..da7fc14 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
@@ -127,6 +127,9 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
   // This variable is only for testing
   private final AtomicBoolean waiting = new AtomicBoolean(false);
 
+  // This variable is only for testing
+  private int logAggregationTimes = 0;
+
   private boolean renameTemporaryLogFileFailed = false;
 
   private final Map<ContainerId, ContainerLogAggregator> 
containerLogAggregators =
@@ -311,7 +314,15 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
 }
 
 LogWriter writer = null;
+String diagnosticMessage = "";
+boolean logAggregationSucceedInThisCycle = true;
 try {
+  if (pendingContainerInThisCycle.isEmpty()) {
+return;
+  }
+
+  logAggregationTimes++;
+
   try {
 writer =
 new LogWriter(this.conf, this.remoteNodeTmpLogFileForApp,
@@ -321,6 +332,7 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
 writer.writeApplicationOwner(this.userUgi.getShortUserName());
 
   } catch (IOException e1) {
+logAggregationSucceedInThisCycle = false;
 LOG.error("Cannot create writer for app " + this.applicationId
 + ". Skip log upload this time. ", e1);
 return;
@@ -369,20 +381,16 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
 remoteNodeLogFileForApp.getName() + "_"
 + currentTime);
 
-  String diagnosticMessage = "";
-  boolean logAggregationSucceedInThisCycle = true;
   final boolean rename = uploadedLogsInThisCycle;
   try {
 userUgi.doAs(new PrivilegedExceptionAction<Object>() {
   @Override
   public Object run() throws Exception {
 FileSystem remoteFS = remoteNodeLogFileForApp.getFileSystem(conf);
-if (remoteFS.exists(remoteNodeTmpLogFileForApp)) {
-  if (rename) {
-remoteFS.rename(remoteNodeTmpLogFileForApp, renamedPath);
-  } el

[2/2] hadoop git commit: Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop into trunk

2016-02-26 Thread mingma
Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop into 
trunk


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1aa06978
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1aa06978
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1aa06978

Branch: refs/heads/trunk
Commit: 1aa06978e9b53cf6e5222b5f1edb051924669fa9
Parents: 7f3139e f0de733
Author: Ming Ma 
Authored: Fri Feb 26 08:40:44 2016 -0800
Committer: Ming Ma 
Committed: Fri Feb 26 08:40:44 2016 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--




hadoop git commit: HDFS-9430 Remove waitForLoadingFSImage since checkNNStartup has ensured image loaded and namenode started. (Brahma Reddy Battula via mingma)

2015-12-04 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk e84d6ca2d -> 3fa33b5c2


HDFS-9430 Remove waitForLoadingFSImage since checkNNStartup has ensured image 
loaded and namenode started. (Brahma Reddy Battula via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3fa33b5c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3fa33b5c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3fa33b5c

Branch: refs/heads/trunk
Commit: 3fa33b5c2c289ceaced30c6c5451f3569110459d
Parents: e84d6ca
Author: Ming Ma <min...@apache.org>
Authored: Fri Dec 4 09:47:57 2015 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Fri Dec 4 09:47:57 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../hdfs/server/namenode/FSNamesystem.java  | 38 
 2 files changed, 3 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3fa33b5c/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 40fdc58..17cbe29 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2439,6 +2439,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9484. NNThroughputBenchmark$BlockReportStats should not send empty
 block reports. (Mingliang Liu via shv)
 
+HDFS-9430. Remove waitForLoadingFSImage since checkNNStartup has ensured
+image loaded and namenode started. (Brahma Reddy Battula via mingma)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3fa33b5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 6af7265..9c9d9f5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -564,25 +564,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   }
 
   /**
-   * Block until the object is imageLoaded to be used.
-   */
-  void waitForLoadingFSImage() {
-if (!imageLoaded) {
-  writeLock();
-  try {
-while (!imageLoaded) {
-  try {
-cond.await(5000, TimeUnit.MILLISECONDS);
-  } catch (InterruptedException ignored) {
-  }
-}
-  } finally {
-writeUnlock();
-  }
-}
-  }
-
-  /**
* Clear all loaded data
*/
   void clear() {
@@ -1802,7 +1783,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
*/
   void concat(String target, String [] srcs, boolean logRetryCache)
   throws IOException {
-waitForLoadingFSImage();
 HdfsFileStatus stat = null;
 boolean success = false;
 writeLock();
@@ -1899,7 +1879,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 if (!FileSystem.areSymlinksEnabled()) {
   throw new UnsupportedOperationException("Symlinks not supported");
 }
-waitForLoadingFSImage();
 HdfsFileStatus auditStat = null;
 writeLock();
 try {
@@ -1933,7 +1912,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   boolean setReplication(final String src, final short replication)
   throws IOException {
 boolean success = false;
-waitForLoadingFSImage();
 checkOperation(OperationCategory.WRITE);
 writeLock();
 try {
@@ -1961,7 +1939,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
*/
   void setStoragePolicy(String src, String policyName) throws IOException {
 HdfsFileStatus auditStat;
-waitForLoadingFSImage();
 checkOperation(OperationCategory.WRITE);
 writeLock();
 try {
@@ -1988,7 +1965,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
*/
   BlockStoragePolicy getStoragePolicy(String src) throws IOException {
 checkOperation(OperationCategory.READ);
-waitForLoadingFSImage();
 readLock();
 try {
   checkOperation(OperationCategory.READ);
@@ -2003,7 +1979,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
*/
   BlockStoragePolicy[] getStoragePolicies() throws IOException {
 checkOperation(OperationCategory.READ);
-waitForLoadingFSImage();
 readLock();
 tr

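For context on why the deleted helper was safe to remove: waitForLoadingFSImage() blocked callers on a condition variable until the FSImage finished loading, but every RPC entry into the NameNode already passes through checkNNStartup(), which rejects requests outright until startup (including image loading) completes. A simplified analogue of that guard, not the actual NameNodeRpcServer code:

import java.io.IOException;

class StartupGuardSketch {
  private volatile boolean started = false;

  // Every RPC handler calls this first; FSNamesystem methods therefore
  // can no longer run before the image is loaded, which is what made
  // the per-method waitForLoadingFSImage() calls redundant.
  void checkNNStartup() throws IOException {
    if (!started) {
      throw new IOException("NameNode still not started");
    }
  }

  void startupComplete() {
    started = true;
  }
}
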
hadoop git commit: HDFS-9430 Remove waitForLoadingFSImage since checkNNStartup has ensured image loaded and namenode started. (Brahma Reddy Battula via mingma)

2015-12-04 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 8a97ca4b0 -> a68faf1a0


HDFS-9430 Remove waitForLoadingFSImage since checkNNStartup has ensured image 
loaded and namenode started. (Brahma Reddy Battula via mingma)

(cherry picked from commit 3fa33b5c2c289ceaced30c6c5451f3569110459d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a68faf1a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a68faf1a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a68faf1a

Branch: refs/heads/branch-2
Commit: a68faf1a02c606d3e2a8fa04ecea8f5d9687d6ff
Parents: 8a97ca4
Author: Ming Ma <min...@apache.org>
Authored: Fri Dec 4 09:47:57 2015 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Fri Dec 4 09:55:02 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../hdfs/server/namenode/FSNamesystem.java  | 36 
 2 files changed, 3 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a68faf1a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index c43b2c4..aeffe63 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1574,6 +1574,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9484. NNThroughputBenchmark$BlockReportStats should not send empty
 block reports. (Mingliang Liu via shv)
 
+HDFS-9430. Remove waitForLoadingFSImage since checkNNStartup has ensured
+image loaded and namenode started. (Brahma Reddy Battula via mingma)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a68faf1a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index f0737ad..ac17275 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -553,25 +553,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   }
 
   /**
-   * Block until the object is imageLoaded to be used.
-   */
-  void waitForLoadingFSImage() {
-if (!imageLoaded) {
-  writeLock();
-  try {
-while (!imageLoaded) {
-  try {
-cond.await(5000, TimeUnit.MILLISECONDS);
-  } catch (InterruptedException ignored) {
-  }
-}
-  } finally {
-writeUnlock();
-  }
-}
-  }
-
-  /**
* Clear all loaded data
*/
   void clear() {
@@ -1800,7 +1781,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
*/
   void concat(String target, String [] srcs, boolean logRetryCache)
   throws IOException {
-waitForLoadingFSImage();
 HdfsFileStatus stat = null;
 boolean success = false;
 writeLock();
@@ -1885,7 +1865,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   boolean setReplication(final String src, final short replication)
   throws IOException {
 boolean success = false;
-waitForLoadingFSImage();
 checkOperation(OperationCategory.WRITE);
 writeLock();
 try {
@@ -1960,7 +1939,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
*/
   void setStoragePolicy(String src, String policyName) throws IOException {
 HdfsFileStatus auditStat;
-waitForLoadingFSImage();
 checkOperation(OperationCategory.WRITE);
 writeLock();
 try {
@@ -1987,7 +1965,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
*/
   BlockStoragePolicy getStoragePolicy(String src) throws IOException {
 checkOperation(OperationCategory.READ);
-waitForLoadingFSImage();
 readLock();
 try {
   checkOperation(OperationCategory.READ);
@@ -2002,7 +1979,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
*/
   BlockStoragePolicy[] getStoragePolicies() throws IOException {
 checkOperation(OperationCategory.READ);
-waitForLoadingFSImage();
 readLock();
 try {
   checkOperation(OperationCategory.READ);
@@ -2119,7 +2095,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 }
 
 FSPermissionChecker pc = getPermissionChecker();
-waitForLoadingFSImage();
 
 /**

hadoop git commit: HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess replicas. (Xiao Chen via mingma)

2015-11-24 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1777608fa -> 0e54b164a


HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess replicas. 
(Xiao Chen via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0e54b164
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0e54b164
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0e54b164

Branch: refs/heads/trunk
Commit: 0e54b164a8d8acf09aca8712116bf7a554cb4846
Parents: 1777608
Author: Ming Ma <min...@apache.org>
Authored: Tue Nov 24 10:30:24 2015 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Tue Nov 24 10:30:24 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../BlockPlacementPolicyDefault.java| 32 ++--
 .../BlockPlacementPolicyRackFaultTolerant.java  |  8 ++
 .../BlockPlacementPolicyWithNodeGroup.java  |  3 +-
 .../BlockPlacementPolicyWithUpgradeDomain.java  | 19 ++---
 .../blockmanagement/TestReplicationPolicy.java  | 82 
 .../TestReplicationPolicyWithNodeGroup.java |  6 +-
 .../TestReplicationPolicyWithUpgradeDomain.java | 32 
 .../hdfs/server/namenode/ha/TestDNFencing.java  |  4 +-
 9 files changed, 153 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0e54b164/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index d39ed3f..b441b35 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1676,6 +1676,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-7988. Replace usage of ExactSizeInputStream with LimitInputStream.
 (Walter Su via wheat9)
 
+HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess
+replicas. (Xiao Chen via mingma)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0e54b164/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
index 13b17e3..08e7851 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
@@ -916,7 +916,8 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
   public DatanodeStorageInfo chooseReplicaToDelete(
   Collection<DatanodeStorageInfo> moreThanOne,
   Collection<DatanodeStorageInfo> exactlyOne,
-  final List<StorageType> excessTypes) {
+  final List<StorageType> excessTypes,
+  Map<String, List<DatanodeStorageInfo>> rackMap) {
 long oldestHeartbeat =
   monotonicNow() - heartbeatInterval * tolerateHeartbeatMultiplier;
 DatanodeStorageInfo oldestHeartbeatStorage = null;
@@ -926,7 +927,7 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
 // Pick the node with the oldest heartbeat or with the least free space,
 // if all hearbeats are within the tolerable heartbeat interval
 for(DatanodeStorageInfo storage : pickupReplicaSet(moreThanOne,
-exactlyOne)) {
+exactlyOne, rackMap)) {
   if (!excessTypes.contains(storage.getStorageType())) {
 continue;
   }
@@ -991,7 +992,8 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
   moreThanOne, exactlyOne, excessTypes)) {
 cur = delNodeHintStorage;
   } else { // regular excessive replica removal
-cur = chooseReplicaToDelete(moreThanOne, exactlyOne, excessTypes);
+cur = chooseReplicaToDelete(moreThanOne, exactlyOne, excessTypes,
+rackMap);
   }
   firstOne = false;
   if (cur == null) {
@@ -1044,16 +1046,34 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
 splitNodesWithRack(locs, rackMap, moreThanOne, exactlyOne);
 return notReduceNumOfGroups(moreThanOne, source, target);
   }
+
   /**
* Pick up replica node set for deleting replica as over-replicated. 
* First set contains replica nodes on rack with more than one
* replica while second set contains remaining replica nodes.
-   * So pick up first set if not empty. If first is empty, then pick second.
+   * If only 1 rack, pick all. If 2 racks, pi

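The truncated javadoc above describes the new rack-aware selection in pickupReplicaSet(). The bookkeeping behind it is the rackMap split threaded through the new parameter: replicas are grouped by rack, racks holding more than one replica feed the "moreThanOne" set, and deletion candidates are preferred from that set so the surviving replicas keep their rack diversity. A much-simplified sketch of that split (plain strings stand in for DatanodeStorageInfo; this shows the idea, not the exact HDFS-9314 policy):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ReplicaSplitSketch {
  static void splitByRack(Map<String, List<String>> rackMap,
      List<String> moreThanOne, List<String> exactlyOne) {
    for (Map.Entry<String, List<String>> e : rackMap.entrySet()) {
      if (e.getValue().size() > 1) {
        moreThanOne.addAll(e.getValue()); // over-represented rack
      } else {
        exactlyOne.addAll(e.getValue());  // sole replica on its rack
      }
    }
  }

  public static void main(String[] args) {
    Map<String, List<String>> rackMap = new HashMap<String, List<String>>();
    rackMap.put("/rack1", Arrays.asList("dn1", "dn2"));
    rackMap.put("/rack2", Arrays.asList("dn3"));
    List<String> more = new ArrayList<String>();
    List<String> one = new ArrayList<String>();
    splitByRack(rackMap, more, one);
    // Deleting from "more" ([dn1, dn2]) keeps a replica on both racks.
    System.out.println("prefer deleting from: " + more);
  }
}
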
hadoop git commit: HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess replicas. (Xiao Chen via mingma)

2015-11-24 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c79f01775 -> 85d04dc46


HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess replicas. 
(Xiao Chen via mingma)

(cherry picked from commit 0e54b164a8d8acf09aca8712116bf7a554cb4846)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/85d04dc4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/85d04dc4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/85d04dc4

Branch: refs/heads/branch-2
Commit: 85d04dc46494c5b627920bbc021f0515af8f753e
Parents: c79f017
Author: Ming Ma <min...@apache.org>
Authored: Tue Nov 24 10:30:24 2015 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Tue Nov 24 10:31:23 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../BlockPlacementPolicyDefault.java| 32 ++--
 .../BlockPlacementPolicyRackFaultTolerant.java  |  8 ++
 .../BlockPlacementPolicyWithNodeGroup.java  |  3 +-
 .../BlockPlacementPolicyWithUpgradeDomain.java  | 19 ++---
 .../blockmanagement/TestReplicationPolicy.java  | 82 
 .../TestReplicationPolicyWithNodeGroup.java |  6 +-
 .../TestReplicationPolicyWithUpgradeDomain.java | 32 
 .../hdfs/server/namenode/ha/TestDNFencing.java  |  4 +-
 9 files changed, 153 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/85d04dc4/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 421240b..5cacca3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -811,6 +811,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-7988. Replace usage of ExactSizeInputStream with LimitInputStream.
 (Walter Su via wheat9)
 
+HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess
+replicas. (Xiao Chen via mingma)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/85d04dc4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
index 13b17e3..08e7851 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
@@ -916,7 +916,8 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
   public DatanodeStorageInfo chooseReplicaToDelete(
   Collection<DatanodeStorageInfo> moreThanOne,
   Collection<DatanodeStorageInfo> exactlyOne,
-  final List<StorageType> excessTypes) {
+  final List<StorageType> excessTypes,
+  Map<String, List<DatanodeStorageInfo>> rackMap) {
 long oldestHeartbeat =
   monotonicNow() - heartbeatInterval * tolerateHeartbeatMultiplier;
 DatanodeStorageInfo oldestHeartbeatStorage = null;
@@ -926,7 +927,7 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
 // Pick the node with the oldest heartbeat or with the least free space,
 // if all hearbeats are within the tolerable heartbeat interval
 for(DatanodeStorageInfo storage : pickupReplicaSet(moreThanOne,
-exactlyOne)) {
+exactlyOne, rackMap)) {
   if (!excessTypes.contains(storage.getStorageType())) {
 continue;
   }
@@ -991,7 +992,8 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
   moreThanOne, exactlyOne, excessTypes)) {
 cur = delNodeHintStorage;
   } else { // regular excessive replica removal
-cur = chooseReplicaToDelete(moreThanOne, exactlyOne, excessTypes);
+cur = chooseReplicaToDelete(moreThanOne, exactlyOne, excessTypes,
+rackMap);
   }
   firstOne = false;
   if (cur == null) {
@@ -1044,16 +1046,34 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
 splitNodesWithRack(locs, rackMap, moreThanOne, exactlyOne);
 return notReduceNumOfGroups(moreThanOne, source, target);
   }
+
   /**
* Pick up replica node set for deleting replica as over-replicated. 
* First set contains replica nodes on rack with more than one
* replica while second set contains remaining replica nodes.
-   * So pick up first set if not empty. If firs

hadoop git commit: HDFS-8056. Decommissioned dead nodes should continue to be counted as dead after NN restart. (mingma)

2015-11-19 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk ac1aa6c81 -> 1c4951a7a


HDFS-8056. Decommissioned dead nodes should continue to be counted as dead 
after NN restart. (mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1c4951a7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1c4951a7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1c4951a7

Branch: refs/heads/trunk
Commit: 1c4951a7a09433fbbcfe26f243d6c2d8043c71bb
Parents: ac1aa6c
Author: Ming Ma <min...@apache.org>
Authored: Thu Nov 19 10:04:01 2015 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Thu Nov 19 10:04:01 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/blockmanagement/DatanodeManager.java |  5 ++-
 .../apache/hadoop/hdfs/TestDecommission.java| 35 
 .../blockmanagement/TestHostFileManager.java|  2 +-
 4 files changed, 43 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1c4951a7/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index d857c58..04bb1e1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1657,6 +1657,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9252. Change TestFileTruncate to use FsDatasetTestUtils to get block
 file size and genstamp. (Lei (Eddy) Xu via cmccabe)
 
+HDFS-8056. Decommissioned dead nodes should continue to be counted as dead
+after NN restart. (mingma)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1c4951a7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index 3406cf4..d35b237 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -1272,7 +1272,7 @@ public class DatanodeManager {
 
 if (listDeadNodes) {
   for (InetSocketAddress addr : includedNodes) {
-if (foundNodes.matchedBy(addr) || excludedNodes.match(addr)) {
+if (foundNodes.matchedBy(addr)) {
   continue;
 }
 // The remaining nodes are ones that are referenced by the hosts
@@ -1289,6 +1289,9 @@ public class DatanodeManager {
 addr.getPort() == 0 ? defaultXferPort : addr.getPort(),
 defaultInfoPort, defaultInfoSecurePort, defaultIpcPort));
 setDatanodeDead(dn);
+if (excludedNodes.match(addr)) {
+  dn.setDecommissioned();
+}
 nodes.add(dn);
   }
 }
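
As an illustration of the fixed behavior (not part of the patch): an admin-side check might look like the following, assuming an existing DFSClient handle. DatanodeReportType.DEAD and DatanodeInfo#isDecommissioned are the public APIs involved:

    // Sketch: after a NameNode restart, a node that is dead and listed in
    // the excludes file should now surface as dead *and* decommissioned.
    DatanodeInfo[] dead = client.datanodeReport(DatanodeReportType.DEAD);
    for (DatanodeInfo dn : dead) {
      if (dn.isDecommissioned()) {
        System.out.println(dn.getXferAddr() + " remained DECOMMISSIONED");
      }
    }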

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1c4951a7/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
index d648bca..0b70e24 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
@@ -924,6 +924,41 @@ public class TestDecommission {
   }
 
   /**
+   * Tests dead node count after restart of namenode
+   **/
+  @Test(timeout=360000)
+  public void testDeadNodeCountAfterNamenodeRestart() throws Exception {
+LOG.info("Starting test testDeadNodeCountAfterNamenodeRestart");
+int numNamenodes = 1;
+int numDatanodes = 2;
+
+startCluster(numNamenodes, numDatanodes, conf);
+
+DFSClient client = getDfsClient(cluster.getNameNode(), conf);
+DatanodeInfo[] info = client.datanodeReport(DatanodeReportType.LIVE);
+DatanodeInfo excludedDatanode = info[0];
+String excludedDatanodeName = info[0].getXferAddr();
+
+writeConfigFile(hostsFile, new ArrayList<String>(Arrays.asList(
+excludedDatanodeName, info[1].getXferAddr())));
+decommissionNode(0, excludedDatanode.getDatanodeUuid(), null,
+AdminStates.DECOMMISSIONED);
+
+cluster.stopData

hadoop git commit: HDFS-8056. Decommissioned dead nodes should continue to be counted as dead after NN restart. (mingma)

2015-11-19 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c74e42b4a -> 5a3db2156


HDFS-8056. Decommissioned dead nodes should continue to be counted as dead 
after NN restart. (mingma)

(cherry picked from commit 1c4951a7a09433fbbcfe26f243d6c2d8043c71bb)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5a3db215
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5a3db215
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5a3db215

Branch: refs/heads/branch-2
Commit: 5a3db21563d1f7303d263efcf5631c447bda4307
Parents: c74e42b
Author: Ming Ma <min...@apache.org>
Authored: Thu Nov 19 10:04:01 2015 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Thu Nov 19 10:04:40 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/blockmanagement/DatanodeManager.java |  5 ++-
 .../apache/hadoop/hdfs/TestDecommission.java| 35 
 .../blockmanagement/TestHostFileManager.java|  2 +-
 4 files changed, 43 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5a3db215/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index dcf0ecf..54fefb4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -798,6 +798,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9252. Change TestFileTruncate to use FsDatasetTestUtils to get block
 file size and genstamp. (Lei (Eddy) Xu via cmccabe)
 
+HDFS-8056. Decommissioned dead nodes should continue to be counted as dead
+after NN restart. (mingma)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5a3db215/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index 870e5ad..49f4100 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -1267,7 +1267,7 @@ public class DatanodeManager {
 
 if (listDeadNodes) {
   for (InetSocketAddress addr : includedNodes) {
-if (foundNodes.matchedBy(addr) || excludedNodes.match(addr)) {
+if (foundNodes.matchedBy(addr)) {
   continue;
 }
 // The remaining nodes are ones that are referenced by the hosts
@@ -1284,6 +1284,9 @@ public class DatanodeManager {
 addr.getPort() == 0 ? defaultXferPort : addr.getPort(),
 defaultInfoPort, defaultInfoSecurePort, defaultIpcPort));
 setDatanodeDead(dn);
+if (excludedNodes.match(addr)) {
+  dn.setDecommissioned();
+}
 nodes.add(dn);
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5a3db215/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
index 0dce5d3..396e50c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
@@ -925,6 +925,41 @@ public class TestDecommission {
   }
 
   /**
+   * Tests dead node count after restart of namenode
+   **/
+  @Test(timeout=360000)
+  public void testDeadNodeCountAfterNamenodeRestart() throws Exception {
+LOG.info("Starting test testDeadNodeCountAfterNamenodeRestart");
+int numNamenodes = 1;
+int numDatanodes = 2;
+
+startCluster(numNamenodes, numDatanodes, conf);
+
+DFSClient client = getDfsClient(cluster.getNameNode(), conf);
+DatanodeInfo[] info = client.datanodeReport(DatanodeReportType.LIVE);
+DatanodeInfo excludedDatanode = info[0];
+String excludedDatanodeName = info[0].getXferAddr();
+
+writeConfigFile(hostsFile, new ArrayList<String>(Arrays.asList(
+excludedDatanodeName, info[1].getXferAddr())));
+decommissionNode(0, excludedDatanod

hadoop git commit: HDFS-9413. getContentSummary() on standby should throw StandbyException. (Brahma Reddy Battula via mingma)

2015-11-16 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 855d52927 -> 02653add9


HDFS-9413. getContentSummary() on standby should throw StandbyException. 
(Brahma Reddy Battula via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/02653add
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/02653add
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/02653add

Branch: refs/heads/trunk
Commit: 02653add98f34deedc27f4da2254d25e83e55b58
Parents: 855d529
Author: Ming Ma <min...@apache.org>
Authored: Mon Nov 16 09:32:40 2015 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Mon Nov 16 09:32:40 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  |  3 +++
 .../hadoop/hdfs/server/namenode/FSNamesystem.java|  2 ++
 .../hdfs/server/namenode/ha/TestQuotasWithHA.java| 15 +++
 3 files changed, 20 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/02653add/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index c91522b..8285a99 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2302,6 +2302,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9410. Some tests should always reset sysout and syserr.
 (Xiao Chen via waltersu4549)
 
+HDFS-9413. getContentSummary() on standby should throw StandbyException.
+(Brahma Reddy Battula via mingma)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/02653add/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 316b7de..e86ff96 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -2948,9 +2948,11 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
* or null if file not found
*/
   ContentSummary getContentSummary(final String src) throws IOException {
+checkOperation(OperationCategory.READ);
 readLock();
 boolean success = true;
 try {
+  checkOperation(OperationCategory.READ);
   return FSDirStatAndListingOp.getContentSummary(dir, src);
 } catch (AccessControlException ace) {
   success = false;
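
The two checkOperation calls are intentional: the first fails fast with StandbyException before the thread blocks on the namesystem lock, and the second re-validates after the lock is acquired, since an HA state transition can happen while the thread waits. The general read-path shape, as a sketch rather than the full method:

    checkOperation(OperationCategory.READ);   // fail fast before blocking
    readLock();
    try {
      checkOperation(OperationCategory.READ); // re-check; failover may have
                                              // occurred while waiting
      // ... perform the read under the lock ...
    } finally {
      readUnlock();
    }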

http://git-wip-us.apache.org/repos/asf/hadoop/blob/02653add/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestQuotasWithHA.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestQuotasWithHA.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestQuotasWithHA.java
index 6ceecc7..db67d95 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestQuotasWithHA.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestQuotasWithHA.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs.server.namenode.ha;
 
+
 import static org.junit.Assert.assertEquals;
 
 import java.io.IOException;
@@ -34,6 +35,7 @@ import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.MiniDFSNNTopology;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.ipc.StandbyException;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -130,4 +132,17 @@ public class TestQuotasWithHA {
 assertEquals(1, cs.getDirectoryCount());
 assertEquals(0, cs.getFileCount());
   }
+
+  /**
+   * Test that getContentSummary on Standby should throw
+   * StandbyException.
+   */
+  @Test(expected = StandbyException.class)
+  public void testgetContentSummaryOnStandby() throws Exception {
+Configuration nn1conf = cluster.getConfiguration(1);
+// just reset the standby reads to the default, i.e. false, on standby.
+HAUtil.setAllowStandbyReads(nn1conf, false);
+cluster.restartNameNode(1);
+cluster.getNameNodeRpc(1).getContentSummary("/");
+  }
 }



hadoop git commit: HDFS-9413. getContentSummary() on standby should throw StandbyException. (Brahma Reddy Battula via mingma)

2015-11-16 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 1d107d805 -> 42b55ff23


HDFS-9413. getContentSummary() on standby should throw StandbyException. 
(Brahma Reddy Battula via mingma)

(cherry picked from commit 02653add98f34deedc27f4da2254d25e83e55b58)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/42b55ff2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/42b55ff2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/42b55ff2

Branch: refs/heads/branch-2
Commit: 42b55ff23e5fa30e00a6b56df92777a1899a1952
Parents: 1d107d8
Author: Ming Ma <min...@apache.org>
Authored: Mon Nov 16 09:32:40 2015 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Mon Nov 16 09:34:03 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  |  3 +++
 .../hadoop/hdfs/server/namenode/FSNamesystem.java|  2 ++
 .../hdfs/server/namenode/ha/TestQuotasWithHA.java| 15 +++
 3 files changed, 20 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/42b55ff2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 5a4f567..c5c94a5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1450,6 +1450,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9410. Some tests should always reset sysout and syserr.
 (Xiao Chen via waltersu4549)
 
+HDFS-9413. getContentSummary() on standby should throw StandbyException.
+(Brahma Reddy Battula via mingma)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/42b55ff2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index d438ece..9ef7f4e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -2914,9 +2914,11 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
* or null if file not found
*/
   ContentSummary getContentSummary(final String src) throws IOException {
+checkOperation(OperationCategory.READ);
 readLock();
 boolean success = true;
 try {
+  checkOperation(OperationCategory.READ);
   return FSDirStatAndListingOp.getContentSummary(dir, src);
 } catch (AccessControlException ace) {
   success = false;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/42b55ff2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestQuotasWithHA.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestQuotasWithHA.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestQuotasWithHA.java
index 6ceecc7..db67d95 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestQuotasWithHA.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestQuotasWithHA.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs.server.namenode.ha;
 
+
 import static org.junit.Assert.assertEquals;
 
 import java.io.IOException;
@@ -34,6 +35,7 @@ import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.MiniDFSNNTopology;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.ipc.StandbyException;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -130,4 +132,17 @@ public class TestQuotasWithHA {
 assertEquals(1, cs.getDirectoryCount());
 assertEquals(0, cs.getFileCount());
   }
+
+  /**
+   * Test that getContentSummary on Standby should throw
+   * StandbyException.
+   */
+  @Test(expected = StandbyException.class)
+  public void testgetContentSummaryOnStandby() throws Exception {
+Configuration nn1conf = cluster.getConfiguration(1);
+// just reset the standby reads to the default, i.e. false, on standby.
+HAUtil.setAllowStandbyReads(nn1conf, false);
+cluster.restartNameNode(1);
+cluster.getNameNodeRpc(1).getContentSummary("/");
+  }
 }



hadoop git commit: HDFS-9313. Possible NullPointerException in BlockManager if no excess replica can be chosen. (mingma)

2015-11-02 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8e05dbf2b -> d565480da


HDFS-9313. Possible NullPointerException in BlockManager if no excess replica 
can be chosen. (mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d565480d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d565480d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d565480d

Branch: refs/heads/trunk
Commit: d565480da2f646b40c3180e1ccb2935c9863dfef
Parents: 8e05dbf
Author: Ming Ma <min...@apache.org>
Authored: Mon Nov 2 19:36:37 2015 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Mon Nov 2 19:36:37 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../blockmanagement/BlockPlacementPolicy.java   |  8 +++--
 .../BlockPlacementPolicyDefault.java|  6 
 .../blockmanagement/TestReplicationPolicy.java  | 31 
 4 files changed, 45 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d565480d/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 19ea5c1..879c015 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2216,6 +2216,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9329. TestBootstrapStandby#testRateThrottling is flaky because fsimage
 size is smaller than IO buffer size. (zhz)
 
+HDFS-9313. Possible NullPointerException in BlockManager if no excess
+replica can be chosen. (mingma)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d565480d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
index be169c3..526a5d7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
@@ -23,8 +23,6 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.StorageType;
@@ -33,13 +31,17 @@ import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.net.NetworkTopology;
 import org.apache.hadoop.net.Node;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /** 
  * This interface is used for choosing the desired number of targets
  * for placing block replicas.
  */
 @InterfaceAudience.Private
 public abstract class BlockPlacementPolicy {
-  static final Log LOG = LogFactory.getLog(BlockPlacementPolicy.class);
+  static final Logger LOG = LoggerFactory.getLogger(
+  BlockPlacementPolicy.class);
 
   @InterfaceAudience.Private
   public static class NotEnoughReplicasException extends Exception {
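
Beyond dropping the commons-logging imports, the switch to slf4j enables the parameterized messages used by the warning added below: arguments are only formatted when the level is actually enabled. A small comparison, assuming a LOG obtained from LoggerFactory as above:

    // slf4j defers message construction until WARN is enabled:
    LOG.warn("No excess replica can be found. excessTypes: {}. moreThanOne: {}.",
        excessTypes, moreThanOne);
    // the commons-logging style always paid for the concatenation:
    // LOG.warn("No excess replica ... " + excessTypes + " ... " + moreThanOne);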

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d565480d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
index d9b8d60..2723ed9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
@@ -981,6 +981,12 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
 excessTypes);
   }
   firstOne = false;
+  if (cur == null) {
+LOG.warn("No excess replica can be found. excessTypes: {}." +
+" moreThanOne: {}. exactlyOne: {}.", excessTypes, moreThanOne,
+exactlyOne);
+break;
+  }
 
   // adjust rackmap, moreThanOne, and exactlyOne
   adjustSetsWit

hadoop git commit: HDFS-9313. Possible NullPointerException in BlockManager if no excess replica can be chosen. (mingma)

2015-11-02 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c6b9c8d69 -> fd9a7f84b


HDFS-9313. Possible NullPointerException in BlockManager if no excess replica 
can be chosen. (mingma)

(cherry picked from commit d565480da2f646b40c3180e1ccb2935c9863dfef)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fd9a7f84
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fd9a7f84
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fd9a7f84

Branch: refs/heads/branch-2
Commit: fd9a7f84bcf6664443568a16df1d1101393e6901
Parents: c6b9c8d
Author: Ming Ma <min...@apache.org>
Authored: Mon Nov 2 19:36:37 2015 -0800
Committer: Ming Ma <min...@apache.org>
Committed: Mon Nov 2 19:37:38 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../blockmanagement/BlockPlacementPolicy.java   |  8 +++--
 .../BlockPlacementPolicyDefault.java|  6 
 .../blockmanagement/TestReplicationPolicy.java  | 31 
 4 files changed, 45 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fd9a7f84/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 6da1264..8d2ddf5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1373,6 +1373,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9329. TestBootstrapStandby#testRateThrottling is flaky because fsimage
 size is smaller than IO buffer size. (zhz)
 
+HDFS-9313. Possible NullPointerException in BlockManager if no excess
+replica can be chosen. (mingma)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fd9a7f84/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
index 69a49aa..e75efa0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
@@ -23,8 +23,6 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.StorageType;
@@ -35,13 +33,17 @@ import org.apache.hadoop.net.NetworkTopology;
 import org.apache.hadoop.net.Node;
 import org.apache.hadoop.util.ReflectionUtils;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /** 
  * This interface is used for choosing the desired number of targets
  * for placing block replicas.
  */
 @InterfaceAudience.Private
 public abstract class BlockPlacementPolicy {
-  static final Log LOG = LogFactory.getLog(BlockPlacementPolicy.class);
+  static final Logger LOG = LoggerFactory.getLogger(
+  BlockPlacementPolicy.class);
 
   @InterfaceAudience.Private
   public static class NotEnoughReplicasException extends Exception {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fd9a7f84/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
index d9b8d60..2723ed9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
@@ -981,6 +981,12 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
 excessTypes);
   }
   firstOne = false;
+  if (cur == null) {
+LOG.warn("No excess replica can be found. excessTypes: {}." +
+" moreThanOne: {}. exactlyOne: {}.", excessTypes, moreThanOne,
+exactlyOne);
+break;
+  }
 
  

hadoop git commit: HDFS-9259. Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario. (Mingliang Liu via mingma)

2015-10-27 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk c28e16b40 -> aa09880ab


HDFS-9259. Make SO_SNDBUF size configurable at DFSClient side for hdfs write 
scenario. (Mingliang Liu via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aa09880a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aa09880a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aa09880a

Branch: refs/heads/trunk
Commit: aa09880ab85f3c35c12373976e7b03f3140b65c8
Parents: c28e16b
Author: Ming Ma <min...@apache.org>
Authored: Tue Oct 27 09:28:40 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Tue Oct 27 09:28:40 2015 -0700

--
 .../org/apache/hadoop/hdfs/DataStreamer.java|  4 +-
 .../hdfs/client/HdfsClientConfigKeys.java   |  5 +
 .../hadoop/hdfs/client/impl/DfsClientConf.java  | 12 +++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../src/main/resources/hdfs-default.xml | 12 +++
 .../hadoop/hdfs/TestDFSClientSocketSize.java| 96 
 6 files changed, 131 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa09880a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
index b0c5be6..03c2c52 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
@@ -131,7 +131,9 @@ class DataStreamer extends Daemon {
 NetUtils.connect(sock, isa, client.getRandomLocalInterfaceAddr(),
 conf.getSocketTimeout());
 sock.setSoTimeout(timeout);
-sock.setSendBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
+if (conf.getSocketSendBufferSize() > 0) {
+  sock.setSendBufferSize(conf.getSocketSendBufferSize());
+}
 LOG.debug("Send buf size {}", sock.getSendBufferSize());
 return sock;
   }
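
A client-side usage sketch, using the key this patch adds to hdfs-default.xml: a positive value is handed to Socket#setSendBufferSize, while a value of zero or less now skips the call so the OS can auto-tune the buffer:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class SendBufferExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // 0 or negative leaves the buffer to OS auto-tuning; a positive
        // value fixes the send buffer size (bytes) for write pipelines.
        conf.setInt("dfs.client.socket.send.buffer.size", 0);
        FileSystem fs = FileSystem.get(conf);
        System.out.println("fs = " + fs.getUri());
      }
    }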

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa09880a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index 17c3654..fcfd49c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdfs.client;
 
 import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 
 import java.util.concurrent.TimeUnit;
 
@@ -58,6 +59,10 @@ public interface HdfsClientConfigKeys {
   String  DFS_CLIENT_WRITE_PACKET_SIZE_KEY = "dfs.client-write-packet-size";
   int DFS_CLIENT_WRITE_PACKET_SIZE_DEFAULT = 64*1024;
   String  DFS_CLIENT_SOCKET_TIMEOUT_KEY = "dfs.client.socket-timeout";
+  String  DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_KEY =
+  "dfs.client.socket.send.buffer.size";
+  int DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_DEFAULT =
+  HdfsConstants.DEFAULT_DATA_SOCKET_SIZE;
   String  DFS_CLIENT_SOCKET_CACHE_CAPACITY_KEY =
   "dfs.client.socketcache.capacity";
   int DFS_CLIENT_SOCKET_CACHE_CAPACITY_DEFAULT = 16;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa09880a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
index 43fba7b..194f3ba 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
@@ -56,6 +56,8 @@ import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_SOCK
 import static 
org.apache.hadoop.hdfs.client.HdfsC

hadoop git commit: HDFS-9259. Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario. (Mingliang Liu via mingma)

2015-10-27 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c5bf1cb7a -> 2c335a843


HDFS-9259. Make SO_SNDBUF size configurable at DFSClient side for hdfs write 
scenario. (Mingliang Liu via mingma)

(cherry picked from commit aa09880ab85f3c35c12373976e7b03f3140b65c8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2c335a84
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2c335a84
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2c335a84

Branch: refs/heads/branch-2
Commit: 2c335a8434d94528d1923fffae35356447a5a5dc
Parents: c5bf1cb
Author: Ming Ma <min...@apache.org>
Authored: Tue Oct 27 09:28:40 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Tue Oct 27 09:29:37 2015 -0700

--
 .../org/apache/hadoop/hdfs/DataStreamer.java|  4 +-
 .../hdfs/client/HdfsClientConfigKeys.java   |  5 +
 .../hadoop/hdfs/client/impl/DfsClientConf.java  | 12 +++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../src/main/resources/hdfs-default.xml | 12 +++
 .../hadoop/hdfs/TestDFSClientSocketSize.java| 96 
 6 files changed, 131 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2c335a84/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
index 5bb7837..9b140be 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
@@ -139,7 +139,9 @@ class DataStreamer extends Daemon {
 NetUtils.connect(sock, isa, client.getRandomLocalInterfaceAddr(),
 conf.getSocketTimeout());
 sock.setSoTimeout(timeout);
-sock.setSendBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
+if (conf.getSocketSendBufferSize() > 0) {
+  sock.setSendBufferSize(conf.getSocketSendBufferSize());
+}
 LOG.debug("Send buf size {}", sock.getSendBufferSize());
 return sock;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2c335a84/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index 2e72769..992cf3a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdfs.client;
 
 import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 
 import java.util.concurrent.TimeUnit;
 
@@ -62,6 +63,10 @@ public interface HdfsClientConfigKeys {
   String  DFS_CLIENT_WRITE_PACKET_SIZE_KEY = "dfs.client-write-packet-size";
   int DFS_CLIENT_WRITE_PACKET_SIZE_DEFAULT = 64*1024;
   String  DFS_CLIENT_SOCKET_TIMEOUT_KEY = "dfs.client.socket-timeout";
+  String  DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_KEY =
+  "dfs.client.socket.send.buffer.size";
+  int DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_DEFAULT =
+  HdfsConstants.DEFAULT_DATA_SOCKET_SIZE;
   String  DFS_CLIENT_SOCKET_CACHE_CAPACITY_KEY =
   "dfs.client.socketcache.capacity";
   int DFS_CLIENT_SOCKET_CACHE_CAPACITY_DEFAULT = 16;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2c335a84/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
index 15387bb..7f3ae04 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
@@ -55,6 +55,8 @@ import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_SOCK
 import static 
org.apache.hadoop.hdfs.client.HdfsC

hadoop git commit: YARN-2913. Fair scheduler should have ability to set MaxResourceDefault for each queue. (Siqi Li via mingma)

2015-10-23 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 de739cabb -> 4bb7e68eb


YARN-2913. Fair scheduler should have ability to set MaxResourceDefault for 
each queue. (Siqi Li via mingma)

(cherry picked from commit 934d96a334598fcf0e5aba2043ff539469025f69)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4bb7e68e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4bb7e68e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4bb7e68e

Branch: refs/heads/branch-2
Commit: 4bb7e68eb6b601b424b46b1d0e8c92767959c733
Parents: de739ca
Author: Ming Ma <min...@apache.org>
Authored: Fri Oct 23 08:36:33 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Fri Oct 23 08:37:46 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../scheduler/fair/AllocationConfiguration.java | 26 +---
 .../fair/AllocationFileLoaderService.java   | 15 ---
 .../fair/TestAllocationFileLoaderService.java   | 18 +-
 .../src/site/markdown/FairScheduler.md  |  3 +++
 5 files changed, 56 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4bb7e68e/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index eb0e54d..92b0143 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -479,6 +479,9 @@ Release 2.8.0 - UNRELEASED
YARN-4243. Add retry on establishing ZooKeeper connection in 
 EmbeddedElectorService#serviceInit. (Xuan Gong via junping_du)
 
+YARN-2913. Fair scheduler should have ability to set MaxResourceDefault for
+each queue. (Siqi Li via mingma)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4bb7e68e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
index 0ea7314..bf4eae8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
@@ -29,6 +29,8 @@ import org.apache.hadoop.yarn.api.records.QueueACL;
 import org.apache.hadoop.yarn.api.records.Resource;
 import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSchedulerConfiguration;
 import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceWeights;
+import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
+import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.Resources;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -36,7 +38,8 @@ import com.google.common.annotations.VisibleForTesting;
 public class AllocationConfiguration extends ReservationSchedulerConfiguration 
{
   private static final AccessControlList EVERYBODY_ACL = new 
AccessControlList("*");
   private static final AccessControlList NOBODY_ACL = new AccessControlList(" 
");
-  
+  private static final ResourceCalculator RESOURCE_CALCULATOR =
+  new DefaultResourceCalculator();
   // Minimum resource allocation for each queue
   private final Map<String, Resource> minQueueResources;
   // Maximum amount of resources per queue
@@ -53,6 +56,7 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
   final Map<String, Integer> userMaxApps;
   private final int userMaxAppsDefault;
   private final int queueMaxAppsDefault;
+  private final Resource queueMaxResourcesDefault;
 
   // Maximum resource share for each leaf queue that can be used to run AMs
   final Map<String, Float> queueMaxAMShares;
@@ -99,7 +103,8 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
   Map<String, Integer> queueMaxApps, Map<String, Integer> userMaxApps,
   Ma

hadoop git commit: YARN-2913. Fair scheduler should have ability to set MaxResourceDefault for each queue. (Siqi Li via mingma)

2015-10-23 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk f8adeb712 -> 934d96a33


YARN-2913. Fair scheduler should have ability to set MaxResourceDefault for 
each queue. (Siqi Li via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/934d96a3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/934d96a3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/934d96a3

Branch: refs/heads/trunk
Commit: 934d96a334598fcf0e5aba2043ff539469025f69
Parents: f8adeb7
Author: Ming Ma <min...@apache.org>
Authored: Fri Oct 23 08:36:33 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Fri Oct 23 08:36:33 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../scheduler/fair/AllocationConfiguration.java | 26 +---
 .../fair/AllocationFileLoaderService.java   | 15 ---
 .../fair/TestAllocationFileLoaderService.java   | 18 +-
 .../src/site/markdown/FairScheduler.md  |  3 +++
 5 files changed, 56 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/934d96a3/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 53bf85a..125ff94 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -531,6 +531,9 @@ Release 2.8.0 - UNRELEASED
YARN-4243. Add retry on establishing ZooKeeper connection in 
 EmbeddedElectorService#serviceInit. (Xuan Gong via junping_du)
 
+YARN-2913. Fair scheduler should have ability to set MaxResourceDefault for
+each queue. (Siqi Li via mingma)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/934d96a3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
index 0ea7314..bf4eae8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
@@ -29,6 +29,8 @@ import org.apache.hadoop.yarn.api.records.QueueACL;
 import org.apache.hadoop.yarn.api.records.Resource;
 import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSchedulerConfiguration;
 import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceWeights;
+import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
+import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.Resources;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -36,7 +38,8 @@ import com.google.common.annotations.VisibleForTesting;
 public class AllocationConfiguration extends ReservationSchedulerConfiguration 
{
   private static final AccessControlList EVERYBODY_ACL = new 
AccessControlList("*");
   private static final AccessControlList NOBODY_ACL = new AccessControlList(" 
");
-  
+  private static final ResourceCalculator RESOURCE_CALCULATOR =
+  new DefaultResourceCalculator();
   // Minimum resource allocation for each queue
   private final Map<String, Resource> minQueueResources;
   // Maximum amount of resources per queue
@@ -53,6 +56,7 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
   final Map<String, Integer> userMaxApps;
   private final int userMaxAppsDefault;
   private final int queueMaxAppsDefault;
+  private final Resource queueMaxResourcesDefault;
 
   // Maximum resource share for each leaf queue that can be used to run AMs
   final Map<String, Float> queueMaxAMShares;
@@ -99,7 +103,8 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
   Map<String, Integer> queueMaxApps, Map<String, Integer> userMaxApps,
   Map<String, ResourceWeights> queueWeights,
   Map<String, Float
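
For queue administrators, the new default is set in the allocation file. A sketch in the style of TestAllocationFileLoaderService, which writes the file from Java; the element name queueMaxResourcesDefault is an assumption based on the FairScheduler.md change in this patch:

    import java.io.FileWriter;
    import java.io.PrintWriter;

    public class AllocFileSketch {
      public static void main(String[] args) throws Exception {
        PrintWriter out = new PrintWriter(new FileWriter("fair-scheduler.xml"));
        out.println("<?xml version=\"1.0\"?>");
        out.println("<allocations>");
        // assumed element name: caps any queue with no explicit maxResources
        out.println("  <queueMaxResourcesDefault>40960 mb,16 vcores"
            + "</queueMaxResourcesDefault>");
        out.println("  <queue name=\"adhoc\"/>");  // inherits the default cap
        out.println("</allocations>");
        out.close();
      }
    }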

hadoop git commit: HDFS-8647. Abstract BlockManager's rack policy into BlockPlacementPolicy. (Brahma Reddy Battula via mingma)

2015-10-21 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 e2f027ddd -> 39e7548cd


HDFS-8647. Abstract BlockManager's rack policy into BlockPlacementPolicy. 
(Brahma Reddy Battula via mingma)

(cherry picked from commit e27c2ae8bafc94f18eb38f5d839dcef5652d424e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/39e7548c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/39e7548c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/39e7548c

Branch: refs/heads/branch-2
Commit: 39e7548cd9d12e7be8c721a8bba56475cc8fcc23
Parents: e2f027d
Author: Ming Ma <min...@apache.org>
Authored: Wed Oct 21 08:06:58 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Wed Oct 21 08:24:02 2015 -0700

--
 .../org/apache/hadoop/net/NetworkTopology.java  |  34 +++-
 .../net/NetworkTopologyWithNodeGroup.java   |   2 +-
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../server/blockmanagement/BlockManager.java| 182 +--
 .../blockmanagement/BlockPlacementPolicy.java   |  50 +++--
 .../BlockPlacementPolicyDefault.java| 112 ++--
 .../BlockPlacementPolicyRackFaultTolerant.java  |  18 ++
 .../BlockPlacementPolicyWithUpgradeDomain.java  |  11 +-
 .../server/blockmanagement/DatanodeManager.java |   8 -
 .../hdfs/server/namenode/NamenodeFsck.java  |   2 +-
 .../hdfs/server/balancer/TestBalancer.java  |   9 +-
 .../blockmanagement/TestBlockManager.java   |   8 +-
 .../blockmanagement/TestReplicationPolicy.java  |  79 +++-
 .../TestReplicationPolicyWithNodeGroup.java |  12 +-
 .../TestReplicationPolicyWithUpgradeDomain.java |  25 ++-
 .../hdfs/server/namenode/ha/TestDNFencing.java  |   8 +-
 16 files changed, 328 insertions(+), 235 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/39e7548c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
index fe6e439..b637da1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
@@ -54,9 +54,9 @@ import com.google.common.collect.Lists;
 public class NetworkTopology {
   public final static String DEFAULT_RACK = "/default-rack";
   public final static int DEFAULT_HOST_LEVEL = 2;
-  public static final Log LOG = 
+  public static final Log LOG =
 LogFactory.getLog(NetworkTopology.class);
-
+
   public static class InvalidTopologyException extends RuntimeException {
 private static final long serialVersionUID = 1L;
 public InvalidTopologyException(String msg) {
@@ -379,6 +379,13 @@ public class NetworkTopology {
   private int depthOfAllLeaves = -1;
   /** rack counter */
   protected int numOfRacks = 0;
+
+  /**
+   * Whether or not this cluster has ever consisted of more than 1 rack,
+   * according to the NetworkTopology.
+   */
+  private boolean clusterEverBeenMultiRack = false;
+
   /** the lock used to manage access */
   protected ReadWriteLock netlock = new ReentrantReadWriteLock();
 
@@ -417,7 +424,7 @@ public class NetworkTopology {
   if (clusterMap.add(node)) {
 LOG.info("Adding a new node: "+NodeBase.getPath(node));
 if (rack == null) {
-  numOfRacks++;
+  incrementRacks();
 }
 if (!(node instanceof InnerNode)) {
   if (depthOfAllLeaves == -1) {
@@ -432,7 +439,14 @@ public class NetworkTopology {
   netlock.writeLock().unlock();
 }
   }
-  
+
+  protected void incrementRacks() {
+numOfRacks++;
+if (!clusterEverBeenMultiRack && numOfRacks > 1) {
+  clusterEverBeenMultiRack = true;
+}
+  }
+
   /**
* Return a reference to the node given its string representation.
* Default implementation delegates to {@link #getNode(String)}.
@@ -540,10 +554,18 @@ public class NetworkTopology {
   netlock.readLock().unlock();
 }
   }
-  
+
+  /**
+   * @return true if this cluster has ever consisted of multiple racks, even if
+   * it is not now a multi-rack cluster.
+   */
+  public boolean hasClusterEverBeenMultiRack() {
+return clusterEverBeenMultiRack;
+  }
+
   /** Given a string representation of a rack for a specific network
*  location
-   * 
+   *
* To be overridden in subclasses for specific NetworkTopology 
* implementations, as alternative to overriding the full 
* {@link #getRack(String)} method.

http://git-wip-us.apache.org/repos/

hadoop git commit: HDFS-8647. Abstract BlockManager's rack policy into BlockPlacementPolicy. (Brahma Reddy Battula via mingma)

2015-10-21 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk b37c41fd6 -> e27c2ae8b


HDFS-8647. Abstract BlockManager's rack policy into BlockPlacementPolicy. 
(Brahma Reddy Battula via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e27c2ae8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e27c2ae8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e27c2ae8

Branch: refs/heads/trunk
Commit: e27c2ae8bafc94f18eb38f5d839dcef5652d424e
Parents: b37c41f
Author: Ming Ma <min...@apache.org>
Authored: Wed Oct 21 08:06:58 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Wed Oct 21 08:06:58 2015 -0700

--
 .../org/apache/hadoop/net/NetworkTopology.java  |  34 +++-
 .../net/NetworkTopologyWithNodeGroup.java   |   2 +-
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../server/blockmanagement/BlockManager.java| 175 ---
 .../blockmanagement/BlockPlacementPolicy.java   |  52 +++---
 .../BlockPlacementPolicyDefault.java| 112 ++--
 .../BlockPlacementPolicyRackFaultTolerant.java  |  18 ++
 .../BlockPlacementPolicyWithUpgradeDomain.java  |  11 +-
 .../server/blockmanagement/DatanodeManager.java |   8 -
 .../hdfs/server/namenode/NamenodeFsck.java  |   5 +-
 .../hdfs/server/balancer/TestBalancer.java  |   9 +-
 .../blockmanagement/TestBlockManager.java   |   8 +-
 .../blockmanagement/TestReplicationPolicy.java  |  79 -
 .../TestReplicationPolicyWithNodeGroup.java |  12 +-
 .../TestReplicationPolicyWithUpgradeDomain.java |  25 ++-
 .../hdfs/server/namenode/ha/TestDNFencing.java  |   8 +-
 16 files changed, 316 insertions(+), 245 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e27c2ae8/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
index fe6e439..b637da1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
@@ -54,9 +54,9 @@ import com.google.common.collect.Lists;
 public class NetworkTopology {
   public final static String DEFAULT_RACK = "/default-rack";
   public final static int DEFAULT_HOST_LEVEL = 2;
-  public static final Log LOG = 
+  public static final Log LOG =
 LogFactory.getLog(NetworkTopology.class);
-
+
   public static class InvalidTopologyException extends RuntimeException {
 private static final long serialVersionUID = 1L;
 public InvalidTopologyException(String msg) {
@@ -379,6 +379,13 @@ public class NetworkTopology {
   private int depthOfAllLeaves = -1;
   /** rack counter */
   protected int numOfRacks = 0;
+
+  /**
+   * Whether or not this cluster has ever consisted of more than 1 rack,
+   * according to the NetworkTopology.
+   */
+  private boolean clusterEverBeenMultiRack = false;
+
   /** the lock used to manage access */
   protected ReadWriteLock netlock = new ReentrantReadWriteLock();
 
@@ -417,7 +424,7 @@ public class NetworkTopology {
   if (clusterMap.add(node)) {
 LOG.info("Adding a new node: "+NodeBase.getPath(node));
 if (rack == null) {
-  numOfRacks++;
+  incrementRacks();
 }
 if (!(node instanceof InnerNode)) {
   if (depthOfAllLeaves == -1) {
@@ -432,7 +439,14 @@ public class NetworkTopology {
   netlock.writeLock().unlock();
 }
   }
-  
+
+  protected void incrementRacks() {
+numOfRacks++;
+if (!clusterEverBeenMultiRack && numOfRacks > 1) {
+  clusterEverBeenMultiRack = true;
+}
+  }
+
   /**
* Return a reference to the node given its string representation.
* Default implementation delegates to {@link #getNode(String)}.
@@ -540,10 +554,18 @@ public class NetworkTopology {
   netlock.readLock().unlock();
 }
   }
-  
+
+  /**
+   * @return true if this cluster has ever consisted of multiple racks, even if
+   * it is not now a multi-rack cluster.
+   */
+  public boolean hasClusterEverBeenMultiRack() {
+return clusterEverBeenMultiRack;
+  }
+
   /** Given a string representation of a rack for a specific network
*  location
-   * 
+   *
* To be overridden in subclasses for specific NetworkTopology 
* implementations, as alternative to overriding the full 
* {@link #getRack(String)} method.
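
A sketch of how a replication check might consume the new flag; the helper names are hypothetical, with the real consumers being BlockManager and the BlockPlacementPolicy implementations touched by this patch. The point of the flag: once the topology has ever spanned more than one rack, rack spread keeps being enforced even if the cluster transiently collapses back to a single rack.

    // Sketch with hypothetical helpers around the real NetworkTopology API.
    boolean requireRackSpread = clusterMap.hasClusterEverBeenMultiRack();
    if (requireRackSpread && racksWithReplicas(block) < 2) {
      // hypothetical: queue the block for rack-policy repair
      markMisreplicatedForRackPolicy(block);
    }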

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e27c2ae8/hadoop-common-project/had

hadoop git commit: HDFS-9142. Separating Configuration object for namenode(s) in MiniDFSCluster. (Siqi Li via mingma)

2015-10-09 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk c11fc8a1b -> de8efc65a


HDFS-9142. Separating Configuration object for namenode(s) in MiniDFSCluster. 
(Siqi Li via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/de8efc65
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/de8efc65
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/de8efc65

Branch: refs/heads/trunk
Commit: de8efc65a455c10ae7280b5982c48f9aca84c9d4
Parents: c11fc8a
Author: Ming Ma <min...@apache.org>
Authored: Fri Oct 9 11:10:46 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Fri Oct 9 11:10:46 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 88 +---
 .../apache/hadoop/hdfs/TestMiniDFSCluster.java  | 47 +++
 3 files changed, 106 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/de8efc65/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 7dbdfa8..ce8d79a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1998,6 +1998,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9137. DeadLock between DataNode#refreshVolumes and
 BPOfferService#registrationSucceeded. (Uma Maheswara Rao G via yliu)
 
+HDFS-9142. Separating Configuration object for namenode(s) in
+MiniDFSCluster. (Siqi Li via mingma)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/de8efc65/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index 71a4bd2..2473dde 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -40,6 +40,7 @@ import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SAFEMODE_EXTENSION_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SERVICE_RPC_ADDRESS_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMESERVICES;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMESERVICE_ID;
@@ -890,6 +891,44 @@ public class MiniDFSCluster {
           format, operation, clusterId, nnCounter);
       nnCounter += nameservice.getNNs().size();
     }
+
+    for (NameNodeInfo nn : namenodes.values()) {
+      Configuration nnConf = nn.conf;
+      for (NameNodeInfo nnInfo : namenodes.values()) {
+        if (nn.equals(nnInfo)) {
+          continue;
+        }
+        copyKeys(conf, nnConf, nnInfo.nameserviceId, nnInfo.nnId);
+      }
+    }
+  }
+
+  private static void copyKeys(Configuration srcConf, Configuration destConf,
+      String nameserviceId, String nnId) {
+    String key = DFSUtil.addKeySuffixes(DFS_NAMENODE_RPC_ADDRESS_KEY,
+        nameserviceId, nnId);
+    destConf.set(key, srcConf.get(key));
+
+    key = DFSUtil.addKeySuffixes(DFS_NAMENODE_HTTP_ADDRESS_KEY,
+        nameserviceId, nnId);
+    String val = srcConf.get(key);
+    if (val != null) {
+      destConf.set(key, srcConf.get(key));
+    }
+
+    key = DFSUtil.addKeySuffixes(DFS_NAMENODE_HTTPS_ADDRESS_KEY,
+        nameserviceId, nnId);
+    val = srcConf.get(key);
+    if (val != null) {
+      destConf.set(key, srcConf.get(key));
+    }
+
+    key = DFSUtil.addKeySuffixes(DFS_NAMENODE_SERVICE_RPC_ADDRESS_KEY,
+        nameserviceId, nnId);
+    val = srcConf.get(key);
+    if (val != null) {
+      destConf.set(key, srcConf.get(key));
+    }
   }
 
   /**
@@ -972,16 +1011,13 @@
     // create all the namenodes in the namespace
     nnIndex = nnCounter;
     for (NNConf nn : nameservice.getNNs()) {
-      initNameNodeConf(conf, nsId, nsCounter, nn.getNnId(), manageNameDfsDirs,
+      Configuration hdfsConf = new Configuration(conf);
+      initNameNodeConf(hdfsConf, nsId, nsCounter, nn.getNnId(), manageNameDfsDirs,
           enableManagedDfsDirsRedundancy, nnIndex++);
-   
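The diff above is easy to misread in this form: each namenode now receives its own Configuration copy, and copyKeys then propagates the suffixed address keys of every other namenode into that copy, so namenodes with private configurations can still resolve one another. A minimal, self-contained sketch of the propagation step under those assumptions (the suffixed key is spelled out inline instead of going through DFSUtil.addKeySuffixes, and the key/address values are made up):

import org.apache.hadoop.conf.Configuration;

/** Sketch of the copyKeys idea; illustrative only, not MiniDFSCluster code. */
public class CopyKeysSketch {
  static void copyIfSet(Configuration src, Configuration dst, String key) {
    String val = src.get(key);
    if (val != null) {
      dst.set(key, val); // propagate only keys the source actually defines
    }
  }

  public static void main(String[] args) {
    Configuration src = new Configuration(false);
    src.set("dfs.namenode.rpc-address.ns1.nn2", "127.0.0.1:8020");

    // nn1's private copy still learns where nn2 lives:
    Configuration nn1Conf = new Configuration(false);
    copyIfSet(src, nn1Conf, "dfs.namenode.rpc-address.ns1.nn2");
    System.out.println(nn1Conf.get("dfs.namenode.rpc-address.ns1.nn2"));
  }
}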

hadoop git commit: HDFS-9142. Separating Configuration object for namenode(s) in MiniDFSCluster. (Siqi Li via mingma)

2015-10-09 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 61988f801 -> 02380e015


HDFS-9142. Separating Configuration object for namenode(s) in MiniDFSCluster. 
(Siqi Li via mingma)

(cherry picked from commit de8efc65a455c10ae7280b5982c48f9aca84c9d4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/02380e01
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/02380e01
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/02380e01

Branch: refs/heads/branch-2
Commit: 02380e015642a3a6b645616313bb4ef48ccb4c8d
Parents: 61988f8
Author: Ming Ma <min...@apache.org>
Authored: Fri Oct 9 11:10:46 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Fri Oct 9 11:41:54 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 77 ++--
 .../apache/hadoop/hdfs/TestMiniDFSCluster.java  | 47 
 3 files changed, 103 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/02380e01/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 290c5ef..a4d7662 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1178,6 +1178,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9137. DeadLock between DataNode#refreshVolumes and
 BPOfferService#registrationSucceeded. (Uma Maheswara Rao G via yliu)
 
+HDFS-9142. Separating Configuration object for namenode(s) in
+MiniDFSCluster. (Siqi Li via mingma)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/02380e01/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index dc4c2f3..5033d57 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -40,6 +40,7 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SAFEMODE_EXTENSION_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SERVICE_RPC_ADDRESS_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMESERVICES;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMESERVICE_ID;
@@ -850,6 +851,44 @@ public class MiniDFSCluster {
         shutdown();
       }
     }
+
+    for (NameNodeInfo nn : nameNodes) {
+      Configuration nnConf = nn.conf;
+      for (NameNodeInfo nnInfo : nameNodes) {
+        if (nn.equals(nnInfo)) {
+          continue;
+        }
+        copyKeys(conf, nnConf, nnInfo.nameserviceId, nnInfo.nnId);
+      }
+    }
+  }
+
+  private static void copyKeys(Configuration srcConf, Configuration destConf,
+      String nameserviceId, String nnId) {
+    String key = DFSUtil.addKeySuffixes(DFS_NAMENODE_RPC_ADDRESS_KEY,
+        nameserviceId, nnId);
+    destConf.set(key, srcConf.get(key));
+
+    key = DFSUtil.addKeySuffixes(DFS_NAMENODE_HTTP_ADDRESS_KEY,
+        nameserviceId, nnId);
+    String val = srcConf.get(key);
+    if (val != null) {
+      destConf.set(key, srcConf.get(key));
+    }
+
+    key = DFSUtil.addKeySuffixes(DFS_NAMENODE_HTTPS_ADDRESS_KEY,
+        nameserviceId, nnId);
+    val = srcConf.get(key);
+    if (val != null) {
+      destConf.set(key, srcConf.get(key));
+    }
+
+    key = DFSUtil.addKeySuffixes(DFS_NAMENODE_SERVICE_RPC_ADDRESS_KEY,
+        nameserviceId, nnId);
+    val = srcConf.get(key);
+    if (val != null) {
+      destConf.set(key, srcConf.get(key));
+    }
   }
   
   /**
@@ -985,15 +1024,13 @@
 
       // Start all Namenodes
       for (NNConf nn : nameservice.getNNs()) {
-        initNameNodeConf(conf, nsId, nn.getNnId(), manageNameDfsDirs,
+        Configuration hdfsConf = new Configuration(conf);
+        initNameNodeConf(hdfsConf, nsId, nn.getNnId(), manageNameDfsDirs,
             enableManagedDfsDirsRedundancy, nnCounter);
-        createNameNode(nnCounter, conf, numDataNodes, f

svn commit: r1706301 - /hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml

2015-10-01 Thread mingma
Author: mingma
Date: Thu Oct  1 18:13:08 2015
New Revision: 1706301

URL: http://svn.apache.org/viewvc?rev=1706301&view=rev
Log:
Added sjlee to committers list

Modified:
hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml

Modified: hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml
URL: http://svn.apache.org/viewvc/hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml?rev=1706301&r1=1706300&r2=1706301&view=diff
==
--- hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml (original)
+++ hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml Thu Oct  1 18:13:08 2015
@@ -1084,6 +1084,14 @@
     </tr>
 
     <tr>
+      <td>sjlee</td>
+      <td>Sangjin Lee</td>
+      <td>Twitter</td>
+      <td></td>
+      <td>-8</td>
+    </tr>
+
+    <tr>
       <td>sradia</td>
       <td><a href="http://people.apache.org/~sradia">Sanjay Radia</a></td>
       <td>Hortonworks</td>




hadoop git commit: HADOOP-12440. TestRPC#testRPCServerShutdown did not produce the desired thread states before shutting down. (Xiao Chen via mingma)

2015-09-28 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 9735afe96 -> 5c3b663bf


HADOOP-12440. TestRPC#testRPCServerShutdown did not produce the desired thread 
states before shutting down. (Xiao Chen via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5c3b663b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5c3b663b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5c3b663b

Branch: refs/heads/trunk
Commit: 5c3b663bf95551d1cf36a2a39849e0676893fa1d
Parents: 9735afe
Author: Ming Ma <min...@apache.org>
Authored: Mon Sep 28 18:12:51 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Mon Sep 28 18:12:51 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt  | 3 +++
 .../src/test/java/org/apache/hadoop/ipc/TestRPC.java | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5c3b663b/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 07463f4..2af6580 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1095,6 +1095,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11918. Listing an empty s3a root directory throws FileNotFound.
 (Lei (Eddy) Xu via cnauroth)
 
+HADOOP-12440. TestRPC#testRPCServerShutdown did not produce the desired
+thread states before shutting down. (Xiao Chen via mingma)
+
   OPTIMIZATIONS
 
 HADOOP-12051. ProtobufRpcEngine.invoke() should use Exception.toString()

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5c3b663b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
index d36a671..5711587 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
@@ -1060,8 +1060,8 @@ public class TestRPC {
         }));
       }
       while (server.getCallQueueLen() != 1
-          && countThreads(CallQueueManager.class.getName()) != 1
-          && countThreads(TestProtocol.class.getName()) != 1) {
+          || countThreads(CallQueueManager.class.getName()) != 1
+          || countThreads(TestImpl.class.getName()) != 1) {
         Thread.sleep(100);
       }
     } finally {
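The fix above hinges on De Morgan's law: the test must keep waiting while any of the conditions is still unmet, so the negated all-conditions-hold check joins the != tests with ||, not &&; with && the loop exited as soon as a single condition matched, letting the shutdown race past the others. A stand-alone sketch of the corrected wait pattern (queueLen and workers are stand-ins, not the TestRPC helpers):

import java.util.concurrent.atomic.AtomicInteger;

/** Sketch of waiting until ALL conditions hold before proceeding. */
public class WaitLoopSketch {
  static final AtomicInteger queueLen = new AtomicInteger();
  static final AtomicInteger workers = new AtomicInteger();

  public static void main(String[] args) throws InterruptedException {
    new Thread(() -> { queueLen.set(1); workers.set(1); }).start();
    // Loop while ANY target is unmet; with && this would stop as soon
    // as ONE condition matched.
    while (queueLen.get() != 1 || workers.get() != 1) {
      Thread.sleep(100);
    }
    System.out.println("all conditions reached");
  }
}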



hadoop git commit: HADOOP-12440. TestRPC#testRPCServerShutdown did not produce the desired thread states before shutting down. (Xiao Chen via mingma)

2015-09-28 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 22f250147 -> ac7d85efb


HADOOP-12440. TestRPC#testRPCServerShutdown did not produce the desired thread 
states before shutting down. (Xiao Chen via mingma)

(cherry picked from commit 5c3b663bf95551d1cf36a2a39849e0676893fa1d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ac7d85ef
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ac7d85ef
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ac7d85ef

Branch: refs/heads/branch-2
Commit: ac7d85efb2fdf14542a0840c939a308618cc8985
Parents: 22f2501
Author: Ming Ma <min...@apache.org>
Authored: Mon Sep 28 18:12:51 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Mon Sep 28 18:14:29 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt  | 3 +++
 .../src/test/java/org/apache/hadoop/ipc/TestRPC.java | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ac7d85ef/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 51a23b8..2c0f708 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -590,6 +590,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11918. Listing an empty s3a root directory throws FileNotFound.
 (Lei (Eddy) Xu via cnauroth)
 
+HADOOP-12440. TestRPC#testRPCServerShutdown did not produce the desired
+thread states before shutting down. (Xiao Chen via mingma)
+
   OPTIMIZATIONS
 
 HADOOP-12051. ProtobufRpcEngine.invoke() should use Exception.toString()

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ac7d85ef/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
index f85e6a6..75f4695 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
@@ -1056,8 +1056,8 @@ public class TestRPC {
         }));
       }
       while (server.getCallQueueLen() != 1
-          && countThreads(CallQueueManager.class.getName()) != 1
-          && countThreads(TestProtocol.class.getName()) != 1) {
+          || countThreads(CallQueueManager.class.getName()) != 1
+          || countThreads(TestImpl.class.getName()) != 1) {
         Thread.sleep(100);
       }
     } finally {



hadoop git commit: HDFS-9008. Balancer#Parameters class could use a builder pattern. (Chris Trezzo via mingma)

2015-09-15 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 73e3a49eb -> 083b44c13


HDFS-9008. Balancer#Parameters class could use a builder pattern. (Chris Trezzo 
via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/083b44c1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/083b44c1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/083b44c1

Branch: refs/heads/trunk
Commit: 083b44c136ea5aba660fcd1dddbb2d21513b4456
Parents: 73e3a49
Author: Ming Ma <min...@apache.org>
Authored: Tue Sep 15 10:16:02 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Tue Sep 15 10:16:02 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hadoop/hdfs/server/balancer/Balancer.java   | 134 --
 .../server/balancer/BalancerParameters.java | 168 +
 .../hdfs/server/balancer/TestBalancer.java  | 180 ++-
 .../balancer/TestBalancerWithHANameNodes.java   |   4 +-
 .../TestBalancerWithMultipleNameNodes.java  |  26 ++-
 .../balancer/TestBalancerWithNodeGroup.java |   4 +-
 7 files changed, 317 insertions(+), 202 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/083b44c1/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index c49432d..fef8ee5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -915,6 +915,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9065. Include commas on # of files, blocks, total filesystem objects
 in NN Web UI. (Daniel Templeton via wheat9)
 
+HDFS-9008. Balancer#Parameters class could use a builder pattern.
+(Chris Trezzo via mingma)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/083b44c1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
index 259b280..f3f3d6f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
@@ -243,7 +243,8 @@ public class Balancer {
    * namenode as a client and a secondary namenode and retry proxies
    * when connection fails.
    */
-  Balancer(NameNodeConnector theblockpool, Parameters p, Configuration conf) {
+  Balancer(NameNodeConnector theblockpool, BalancerParameters p,
+      Configuration conf) {
     final long movedWinWidth = getLong(conf,
         DFSConfigKeys.DFS_BALANCER_MOVEDWINWIDTH_KEY,
         DFSConfigKeys.DFS_BALANCER_MOVEDWINWIDTH_DEFAULT);
@@ -265,13 +266,15 @@ public class Balancer {
         DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_DEFAULT);
 
     this.nnc = theblockpool;
-    this.dispatcher = new Dispatcher(theblockpool, p.includedNodes,
-        p.excludedNodes, movedWinWidth, moverThreads, dispatcherThreads,
-        maxConcurrentMovesPerNode, getBlocksSize, getBlocksMinBlockSize, conf);
-    this.threshold = p.threshold;
-    this.policy = p.policy;
-    this.sourceNodes = p.sourceNodes;
-    this.runDuringUpgrade = p.runDuringUpgrade;
+    this.dispatcher =
+        new Dispatcher(theblockpool, p.getIncludedNodes(),
+            p.getExcludedNodes(), movedWinWidth, moverThreads,
+            dispatcherThreads, maxConcurrentMovesPerNode, getBlocksSize,
+            getBlocksMinBlockSize, conf);
+    this.threshold = p.getThreshold();
+    this.policy = p.getBalancingPolicy();
+    this.sourceNodes = p.getSourceNodes();
+    this.runDuringUpgrade = p.getRunDuringUpgrade();
 
     this.maxSizeToMove = getLong(conf,
         DFSConfigKeys.DFS_BALANCER_MAX_SIZE_TO_MOVE_KEY,
@@ -629,7 +632,7 @@ public class Balancer {
    * for each namenode,
    * execute a {@link Balancer} to work through all datanodes once.  
    */
-  static int run(Collection<URI> namenodes, final Parameters p,
+  static int run(Collection<URI> namenodes, final BalancerParameters p,
       Configuration conf) throws IOException, InterruptedException {
     final long sleeptime =
         conf.getLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY,
@@ -638,24 +641,25 @@
         DFSConfigKeys.DFS_NAMENODE_REPLICATION_INTERVAL_DEFAULT) * 1000;
     LOG.info("namenodes  = " + namenodes);
     LOG
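What the getter renames imply on the construction side is a builder: callers no longer thread a long positional argument list through new Parameters(...). A compilable toy version of the pattern (field names mirror the getters visible in the diff; this is an illustration, not the committed BalancerParameters class):

/** Toy builder mirroring the BalancerParameters style of this change. */
public class ParamsSketch {
  private final double threshold;
  private final boolean runDuringUpgrade;

  private ParamsSketch(Builder b) {
    this.threshold = b.threshold;
    this.runDuringUpgrade = b.runDuringUpgrade;
  }

  double getThreshold() { return threshold; }
  boolean getRunDuringUpgrade() { return runDuringUpgrade; }

  static class Builder {
    private double threshold = 10.0;          // defaults replace the long
    private boolean runDuringUpgrade = false; // positional constructor
    Builder setThreshold(double t) { threshold = t; return this; }
    Builder setRunDuringUpgrade(boolean r) { runDuringUpgrade = r; return this; }
    ParamsSketch build() { return new ParamsSketch(this); }
  }

  public static void main(String[] args) {
    ParamsSketch p = new Builder().setThreshold(5.0).build();
    System.out.println(p.getThreshold() + " " + p.getRunDuringUpgrade());
  }
}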

hadoop git commit: HDFS-9008. Balancer#Parameters class could use a builder pattern. (Chris Trezzo via mingma)

2015-09-15 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 df714e25a -> 3531823fc


HDFS-9008. Balancer#Parameters class could use a builder pattern. (Chris Trezzo 
via mingma)

(cherry picked from commit 083b44c136ea5aba660fcd1dddbb2d21513b4456)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3531823f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3531823f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3531823f

Branch: refs/heads/branch-2
Commit: 3531823fcc4966e0938cd824f117f3ecec76e47d
Parents: df714e2
Author: Ming Ma <min...@apache.org>
Authored: Tue Sep 15 10:16:02 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Tue Sep 15 10:18:44 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hadoop/hdfs/server/balancer/Balancer.java   | 134 --
 .../server/balancer/BalancerParameters.java | 168 +
 .../hdfs/server/balancer/TestBalancer.java  | 180 ++-
 .../balancer/TestBalancerWithHANameNodes.java   |   4 +-
 .../TestBalancerWithMultipleNameNodes.java  |  26 ++-
 .../balancer/TestBalancerWithNodeGroup.java |   4 +-
 7 files changed, 317 insertions(+), 202 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3531823f/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 5c334f2..891aefc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -567,6 +567,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9065. Include commas on # of files, blocks, total filesystem objects
 in NN Web UI. (Daniel Templeton via wheat9)
 
+HDFS-9008. Balancer#Parameters class could use a builder pattern.
+(Chris Trezzo via mingma)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3531823f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
index c4a4edc..dcae922 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
@@ -244,7 +244,8 @@ public class Balancer {
    * namenode as a client and a secondary namenode and retry proxies
    * when connection fails.
    */
-  Balancer(NameNodeConnector theblockpool, Parameters p, Configuration conf) {
+  Balancer(NameNodeConnector theblockpool, BalancerParameters p,
+      Configuration conf) {
     final long movedWinWidth = getLong(conf,
         DFSConfigKeys.DFS_BALANCER_MOVEDWINWIDTH_KEY,
         DFSConfigKeys.DFS_BALANCER_MOVEDWINWIDTH_DEFAULT);
@@ -266,13 +267,15 @@ public class Balancer {
         DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_DEFAULT);
 
     this.nnc = theblockpool;
-    this.dispatcher = new Dispatcher(theblockpool, p.includedNodes,
-        p.excludedNodes, movedWinWidth, moverThreads, dispatcherThreads,
-        maxConcurrentMovesPerNode, getBlocksSize, getBlocksMinBlockSize, conf);
-    this.threshold = p.threshold;
-    this.policy = p.policy;
-    this.sourceNodes = p.sourceNodes;
-    this.runDuringUpgrade = p.runDuringUpgrade;
+    this.dispatcher =
+        new Dispatcher(theblockpool, p.getIncludedNodes(),
+            p.getExcludedNodes(), movedWinWidth, moverThreads,
+            dispatcherThreads, maxConcurrentMovesPerNode, getBlocksSize,
+            getBlocksMinBlockSize, conf);
+    this.threshold = p.getThreshold();
+    this.policy = p.getBalancingPolicy();
+    this.sourceNodes = p.getSourceNodes();
+    this.runDuringUpgrade = p.getRunDuringUpgrade();
 
     this.maxSizeToMove = getLong(conf,
         DFSConfigKeys.DFS_BALANCER_MAX_SIZE_TO_MOVE_KEY,
@@ -630,7 +633,7 @@ public class Balancer {
    * for each namenode,
    * execute a {@link Balancer} to work through all datanodes once.  
    */
-  static int run(Collection<URI> namenodes, final Parameters p,
+  static int run(Collection<URI> namenodes, final BalancerParameters p,
       Configuration conf) throws IOException, InterruptedException {
     final long sleeptime =
         conf.getLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY,
@@ -639,24 +642,25 @@
         DFSConfigKeys.DFS_NAMENODE_REPLICATION_INTERVAL_

hadoop git commit: HDFS-8890. Allow admin to specify which blockpools the balancer should run on. (Chris Trezzo via mingma)

2015-09-02 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk de928d566 -> d31a41c35


HDFS-8890. Allow admin to specify which blockpools the balancer should run on. 
(Chris Trezzo via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d31a41c3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d31a41c3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d31a41c3

Branch: refs/heads/trunk
Commit: d31a41c35927f02f2fb40d19380b5df4bb2b6d57
Parents: de928d5
Author: Ming Ma <min...@apache.org>
Authored: Wed Sep 2 15:55:42 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Wed Sep 2 15:55:42 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hadoop/hdfs/server/balancer/Balancer.java   |  82 ++---
 .../src/site/markdown/HDFSCommands.md   |   2 +
 .../hdfs/server/balancer/TestBalancer.java  |  43 -
 .../TestBalancerWithMultipleNameNodes.java  | 179 ---
 5 files changed, 253 insertions(+), 56 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d31a41c3/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 7a685f5..e68c011 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -880,6 +880,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-328. Improve fs -setrep error message for invalid replication factors.
 (Daniel Templeton via wang)
 
+HDFS-8890. Allow admin to specify which blockpools the balancer should run
+on. (Chris Trezzo via mingma)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d31a41c3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
index fe6e4c3..259b280 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
@@ -179,6 +179,8 @@ public class Balancer {
       + "\tExcludes the specified datanodes."
       + "\n\t[-include [-f <hosts-file> | <comma-separated list of hosts>]]"
       + "\tIncludes only the specified datanodes."
+      + "\n\t[-blockpools <comma-separated list of blockpool ids>]"
+      + "\tThe balancer will only run on blockpools included in this list."
       + "\n\t[-idleiterations <idleiterations>]"
       + "\tNumber of consecutive idle iterations (-1 for Infinite) before "
       + "exit."
@@ -652,22 +654,27 @@
         done = true;
         Collections.shuffle(connectors);
         for(NameNodeConnector nnc : connectors) {
-          final Balancer b = new Balancer(nnc, p, conf);
-          final Result r = b.runOneIteration();
-          r.print(iteration, System.out);
-
-          // clean all lists
-          b.resetData(conf);
-          if (r.exitStatus == ExitStatus.IN_PROGRESS) {
-            done = false;
-          } else if (r.exitStatus != ExitStatus.SUCCESS) {
-            //must be an error statue, return.
-            return r.exitStatus.getExitCode();
-          }
-        }
+          if (p.blockpools.size() == 0
+              || p.blockpools.contains(nnc.getBlockpoolID())) {
+            final Balancer b = new Balancer(nnc, p, conf);
+            final Result r = b.runOneIteration();
+            r.print(iteration, System.out);
+
+            // clean all lists
+            b.resetData(conf);
+            if (r.exitStatus == ExitStatus.IN_PROGRESS) {
+              done = false;
+            } else if (r.exitStatus != ExitStatus.SUCCESS) {
+              // must be an error statue, return.
+              return r.exitStatus.getExitCode();
+            }
 
-        if (!done) {
-          Thread.sleep(sleeptime);
+            if (!done) {
+              Thread.sleep(sleeptime);
+            }
+          } else {
+            LOG.info("Skipping blockpool " + nnc.getBlockpoolID());
+          }
         }
       }
     } finally {
@@ -699,12 +706,12 @@
   }
 
   static class Parameters {
-    static final Parameters DEFAULT = new Parameters(
-        BalancingPolicy.Node.INSTANCE, 10.0,
-        NameNodeConnector.DEFAULT_MAX_IDLE_ITERATIONS,
-        Collections.emptySet(), Collections.emptySet
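The heart of the change is the filter at the top of the per-connector loop: an empty -blockpools list preserves the old balance-everything behavior, otherwise only the listed blockpools are balanced and the rest are skipped with a log line. A self-contained sketch of that predicate (the blockpool IDs below are made up for illustration):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** Sketch of the -blockpools filter; not the Balancer.run loop itself. */
public class BlockpoolFilterSketch {
  static boolean shouldBalance(Set<String> requested, String blockpoolId) {
    // An empty set means "balance every blockpool", matching the
    // p.blockpools.size() == 0 check in the diff above.
    return requested.isEmpty() || requested.contains(blockpoolId);
  }

  public static void main(String[] args) {
    Set<String> requested = new HashSet<>(Arrays.asList("BP-1", "BP-2"));
    System.out.println(shouldBalance(requested, "BP-1")); // true
    System.out.println(shouldBalance(requested, "BP-3")); // false: skipped
  }
}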

hadoop git commit: HDFS-8890. Allow admin to specify which blockpools the balancer should run on. (Chris Trezzo via mingma)

2015-09-02 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 1d56325a8 -> f81d12668


HDFS-8890. Allow admin to specify which blockpools the balancer should run on. 
(Chris Trezzo via mingma)

(cherry picked from commit d31a41c35927f02f2fb40d19380b5df4bb2b6d57)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f81d1266
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f81d1266
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f81d1266

Branch: refs/heads/branch-2
Commit: f81d12668f6c5d8b2d1689ccac2c6ecf91a4eee6
Parents: 1d56325
Author: Ming Ma <min...@apache.org>
Authored: Wed Sep 2 15:55:42 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Wed Sep 2 15:57:55 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hadoop/hdfs/server/balancer/Balancer.java   |  82 ++---
 .../src/site/markdown/HDFSCommands.md   |   2 +
 .../hdfs/server/balancer/TestBalancer.java  |  43 -
 .../TestBalancerWithMultipleNameNodes.java  | 179 ---
 5 files changed, 253 insertions(+), 56 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f81d1266/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index def33d3..9553ec3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -535,6 +535,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-328. Improve fs -setrep error message for invalid replication factors.
 (Daniel Templeton via wang)
 
+HDFS-8890. Allow admin to specify which blockpools the balancer should run
+on. (Chris Trezzo via mingma)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f81d1266/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
index 9d3ddd4..c4a4edc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
@@ -180,6 +180,8 @@ public class Balancer {
       + "\tExcludes the specified datanodes."
       + "\n\t[-include [-f <hosts-file> | <comma-separated list of hosts>]]"
       + "\tIncludes only the specified datanodes."
+      + "\n\t[-blockpools <comma-separated list of blockpool ids>]"
+      + "\tThe balancer will only run on blockpools included in this list."
       + "\n\t[-idleiterations <idleiterations>]"
       + "\tNumber of consecutive idle iterations (-1 for Infinite) before "
       + "exit."
@@ -653,22 +655,27 @@
         done = true;
         Collections.shuffle(connectors);
         for(NameNodeConnector nnc : connectors) {
-          final Balancer b = new Balancer(nnc, p, conf);
-          final Result r = b.runOneIteration();
-          r.print(iteration, System.out);
-
-          // clean all lists
-          b.resetData(conf);
-          if (r.exitStatus == ExitStatus.IN_PROGRESS) {
-            done = false;
-          } else if (r.exitStatus != ExitStatus.SUCCESS) {
-            //must be an error statue, return.
-            return r.exitStatus.getExitCode();
-          }
-        }
+          if (p.blockpools.size() == 0
+              || p.blockpools.contains(nnc.getBlockpoolID())) {
+            final Balancer b = new Balancer(nnc, p, conf);
+            final Result r = b.runOneIteration();
+            r.print(iteration, System.out);
+
+            // clean all lists
+            b.resetData(conf);
+            if (r.exitStatus == ExitStatus.IN_PROGRESS) {
+              done = false;
+            } else if (r.exitStatus != ExitStatus.SUCCESS) {
+              // must be an error statue, return.
+              return r.exitStatus.getExitCode();
+            }
 
-        if (!done) {
-          Thread.sleep(sleeptime);
+            if (!done) {
+              Thread.sleep(sleeptime);
+            }
+          } else {
+            LOG.info("Skipping blockpool " + nnc.getBlockpoolID());
+          }
         }
       }
     } finally {
@@ -700,12 +707,12 @@
   }
 
   static class Parameters {
-    static final Parameters DEFAULT = new Parameters(
-        BalancingPolicy.Node.INSTANCE, 10.0,
-   

hadoop git commit: HDFS-7314. When the DFSClient lease cannot be renewed, abort open-for-write files rather than the entire DFSClient. (mingma)

2015-07-16 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 e97a14ae6 -> 516bbf1c2


HDFS-7314. When the DFSClient lease cannot be renewed, abort open-for-write 
files rather than the entire DFSClient. (mingma)

(cherry picked from commit fbd88f1062f3c4b208724d208e3f501eb196dfab)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/516bbf1c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/516bbf1c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/516bbf1c

Branch: refs/heads/branch-2
Commit: 516bbf1c20547dc513126df0d9f0934bb65c10c7
Parents: e97a14a
Author: Ming Ma <min...@apache.org>
Authored: Thu Jul 16 12:33:57 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Thu Jul 16 12:55:29 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  | 16 +
 .../hadoop/hdfs/client/impl/LeaseRenewer.java   | 12 +++-
 .../hadoop/hdfs/TestDFSClientRetries.java   | 66 +++-
 4 files changed, 79 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/516bbf1c/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 8988a7d..bc01dde 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -383,6 +383,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8742. Inotify: Support event for OP_TRUNCATE.
 (Surendra Singh Lilhore via aajisaka)
 
+HDFS-7314. When the DFSClient lease cannot be renewed, abort open-for-write
+files rather than the entire DFSClient. (mingma)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/516bbf1c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 0ebe488..11a5e9d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -569,23 +569,9 @@ public class DFSClient implements java.io.Closeable, RemotePeerFactory,
   void closeConnectionToNamenode() {
     RPC.stopProxy(namenode);
   }
-  
-  /** Abort and release resources held.  Ignore all errors. */
-  public void abort() {
-    clientRunning = false;
-    closeAllFilesBeingWritten(true);
-    try {
-      // remove reference to this client and stop the renewer,
-      // if there is no more clients under the renewer.
-      getLeaseRenewer().closeClient(this);
-    } catch (IOException ioe) {
-      LOG.info("Exception occurred while aborting the client " + ioe);
-    }
-    closeConnectionToNamenode();
-  }
 
   /** Close/abort all files being written. */
-  private void closeAllFilesBeingWritten(final boolean abort) {
+  public void closeAllFilesBeingWritten(final boolean abort) {
     for(;;) {
       final long inodeId;
       final DFSOutputStream out;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/516bbf1c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
index 99323bb..c689b73 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
@@ -215,6 +215,12 @@ public class LeaseRenewer {
     return renewal;
   }
 
+  /** Used for testing only. */
+  @VisibleForTesting
+  public synchronized void setRenewalTime(final long renewal) {
+    this.renewal = renewal;
+  }
+
   /** Add a client. */
   private synchronized void addClient(final DFSClient dfsc) {
     for(DFSClient c : dfsclients) {
@@ -453,8 +459,12 @@ public class LeaseRenewer {
           + (elapsed/1000) + " seconds.  Aborting ...", ie);
       synchronized (this) {
         while (!dfsclients.isEmpty()) {
-          dfsclients.get(0).abort();
+          DFSClient dfsClient = dfsclients.get(0);
+          dfsClient.closeAllFilesBeingWritten(true);
+          closeClient(dfsClient

hadoop git commit: HDFS-7314. When the DFSClient lease cannot be renewed, abort open-for-write files rather than the entire DFSClient. (mingma)

2015-07-16 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1ba2986de -> fbd88f106


HDFS-7314. When the DFSClient lease cannot be renewed, abort open-for-write 
files rather than the entire DFSClient. (mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fbd88f10
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fbd88f10
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fbd88f10

Branch: refs/heads/trunk
Commit: fbd88f1062f3c4b208724d208e3f501eb196dfab
Parents: 1ba2986
Author: Ming Ma <min...@apache.org>
Authored: Thu Jul 16 12:33:57 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Thu Jul 16 12:33:57 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  | 16 +
 .../hadoop/hdfs/client/impl/LeaseRenewer.java   | 12 +++-
 .../hadoop/hdfs/TestDFSClientRetries.java   | 66 +++-
 4 files changed, 79 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fbd88f10/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 8f6dd41..c6685e1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -165,6 +165,9 @@ Trunk (Unreleased)
 HDFS-5033. Bad error message for fs -put/copyFromLocal if user
 doesn't have permissions to read the source (Darrell Taylor via aw)
 
+HDFS-7314. When the DFSClient lease cannot be renewed, abort open-for-write
+files rather than the entire DFSClient. (mingma)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fbd88f10/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 6629a83..6f9e613 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -567,23 +567,9 @@ public class DFSClient implements java.io.Closeable, RemotePeerFactory,
   void closeConnectionToNamenode() {
     RPC.stopProxy(namenode);
   }
-  
-  /** Abort and release resources held.  Ignore all errors. */
-  public void abort() {
-    clientRunning = false;
-    closeAllFilesBeingWritten(true);
-    try {
-      // remove reference to this client and stop the renewer,
-      // if there is no more clients under the renewer.
-      getLeaseRenewer().closeClient(this);
-    } catch (IOException ioe) {
-      LOG.info("Exception occurred while aborting the client " + ioe);
-    }
-    closeConnectionToNamenode();
-  }
 
   /** Close/abort all files being written. */
-  private void closeAllFilesBeingWritten(final boolean abort) {
+  public void closeAllFilesBeingWritten(final boolean abort) {
     for(;;) {
       final long inodeId;
       final DFSOutputStream out;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fbd88f10/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
index 99323bb..c689b73 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
@@ -215,6 +215,12 @@ public class LeaseRenewer {
     return renewal;
   }
 
+  /** Used for testing only. */
+  @VisibleForTesting
+  public synchronized void setRenewalTime(final long renewal) {
+    this.renewal = renewal;
+  }
+
   /** Add a client. */
   private synchronized void addClient(final DFSClient dfsc) {
     for(DFSClient c : dfsclients) {
@@ -453,8 +459,12 @@ public class LeaseRenewer {
          + (elapsed/1000) + " seconds.  Aborting ...", ie);
      synchronized (this) {
        while (!dfsclients.isEmpty()) {
-          dfsclients.get(0).abort();
+          DFSClient dfsClient = dfsclients.get(0);
+          dfsClient.closeAllFilesBeingWritten(true);
+          closeClient(dfsClient);
        }
+        //Expire the current LeaseRenewer thread.
+        emptyTime = 0
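The behavioral shift is worth spelling out: previously a failed renewal called DFSClient#abort(), tearing down the whole client; now the renewer aborts only the open-for-write streams, detaches each client, and expires itself, leaving the DFSClient usable. A toy model of the new failure path (class and method names are stand-ins for the LeaseRenewer internals shown above):

import java.util.ArrayList;
import java.util.List;

/** Toy model of abort-files-not-client on lease renewal failure. */
public class LeaseExpirySketch {
  static class Client {
    boolean filesAborted;
    void closeAllFilesBeingWritten(boolean abort) { filesAborted = abort; }
  }

  private final List<Client> clients = new ArrayList<>();

  void onRenewalFailure() {
    while (!clients.isEmpty()) {
      Client c = clients.remove(0);
      c.closeAllFilesBeingWritten(true); // abort only the open streams
      // the client object itself stays usable for reads and new writes
    }
  }

  public static void main(String[] args) {
    LeaseExpirySketch renewer = new LeaseExpirySketch();
    Client c = new Client();
    renewer.clients.add(c);
    renewer.onRenewalFailure();
    System.out.println(c.filesAborted); // true, but no whole-client abort
  }
}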

[1/2] hadoop git commit: HADOOP-12107. long running apps may have a huge number of StatisticsData instances under FileSystem (Sangjin Lee via Ming Ma)

2015-07-10 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c1447e654 -> b169889f0


HADOOP-12107. long running apps may have a huge number of StatisticsData 
instances under FileSystem (Sangjin Lee via Ming Ma)

(cherry picked from commit 8e1bdc17d9134e01115ae7c929503d8ac0325207)

Conflicts:
hadoop-common-project/hadoop-common/CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5e6bbe60
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5e6bbe60
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5e6bbe60

Branch: refs/heads/branch-2
Commit: 5e6bbe603157f3fcc9b5e7baaab5b943ba3df0da
Parents: c1447e6
Author: Ming Ma <min...@apache.org>
Authored: Mon Jun 29 14:37:38 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Fri Jul 10 08:32:27 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../java/org/apache/hadoop/fs/FileSystem.java   | 140 +--
 .../apache/hadoop/fs/FCStatisticsBaseTest.java  |  56 +++-
 3 files changed, 155 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e6bbe60/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 1b8c55d..42dbb55 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -539,6 +539,9 @@ Release 2.7.1 - 2015-07-06
 that method doesn't modify the FsPermission (Bibin A Chundatt via Colin P.
 McCabe)
 
+HADOOP-12107. long running apps may have a huge number of StatisticsData
+instances under FileSystem (Sangjin Lee via Ming Ma)
+
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e6bbe60/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index 3a589de..b94f65c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -20,7 +20,8 @@ package org.apache.hadoop.fs;
 import java.io.Closeable;
 import java.io.FileNotFoundException;
 import java.io.IOException;
-import java.lang.ref.WeakReference;
+import java.lang.ref.PhantomReference;
+import java.lang.ref.ReferenceQueue;
 import java.net.URI;
 import java.net.URISyntaxException;
 import java.security.PrivilegedExceptionAction;
@@ -32,7 +33,6 @@ import java.util.HashMap;
 import java.util.HashSet;
 import java.util.IdentityHashMap;
 import java.util.Iterator;
-import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.NoSuchElementException;
@@ -2918,16 +2918,6 @@ public abstract class FileSystem extends Configured implements Closeable {
       volatile int readOps;
       volatile int largeReadOps;
       volatile int writeOps;
-      /**
-       * Stores a weak reference to the thread owning this StatisticsData.
-       * This allows us to remove StatisticsData objects that pertain to
-       * threads that no longer exist.
-       */
-      final WeakReference<Thread> owner;
-
-      StatisticsData(WeakReference<Thread> owner) {
-        this.owner = owner;
-      }
 
       /**
        * Add another StatisticsData object to this one.
@@ -2998,17 +2988,37 @@ public abstract class FileSystem extends Configured implements Closeable {
      * Thread-local data.
      */
     private final ThreadLocal<StatisticsData> threadData;
-
+
     /**
-     * List of all thread-local data areas.  Protected by the Statistics lock.
+     * Set of all thread-local data areas.  Protected by the Statistics lock.
+     * The references to the statistics data are kept using phantom references
+     * to the associated threads. Proper clean-up is performed by the cleaner
+     * thread when the threads are garbage collected.
      */
-    private LinkedList<StatisticsData> allData;
+    private final Set<StatisticsDataReference> allData;
+
+    /**
+     * Global reference queue and a cleaner thread that manage statistics data
+     * references from all filesystem instances.
+     */
+    private static final ReferenceQueue<Thread> STATS_DATA_REF_QUEUE;
+    private static final Thread STATS_DATA_CLEANER;
+
+    static {
+      STATS_DATA_REF_QUEUE = new ReferenceQueue<Thread>();
+      // start a single daemon cleaner thread
+      STATS_DATA_CLEANER = new 
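The mechanism replacing the per-thread WeakReference bookkeeping is a classic phantom-reference cleaner: each per-thread data object is registered against its owning thread on a shared ReferenceQueue, and a single daemon thread blocks on the queue and releases the data once the owner has been garbage collected. A minimal sketch of the pattern (simplified; the real cleaner also merges the data back into its FileSystem.Statistics instance):

import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

/** Sketch of phantom-reference cleanup; names are illustrative. */
public class PhantomCleanerSketch {
  static final ReferenceQueue<Thread> QUEUE = new ReferenceQueue<>();

  static class DataRef extends PhantomReference<Thread> {
    final long[] perThreadData; // payload to release when the owner dies
    DataRef(Thread owner, long[] data) {
      super(owner, QUEUE);      // enqueued after the owner is collected
      this.perThreadData = data;
    }
  }

  public static void main(String[] args) {
    Thread cleaner = new Thread(() -> {
      while (true) {
        try {
          Reference<? extends Thread> ref = QUEUE.remove(); // blocks
          ((DataRef) ref).perThreadData[0] = 0;             // release payload
          ref.clear();
        } catch (InterruptedException e) {
          return;
        }
      }
    });
    cleaner.setDaemon(true); // mirrors the single daemon cleaner above
    cleaner.start();
  }
}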

[2/2] hadoop git commit: YARN-3445. Cache runningApps in RMNode for getting running apps on given NodeId. (Junping Du via mingma)

2015-07-10 Thread mingma
YARN-3445. Cache runningApps in RMNode for getting running apps on given 
NodeId. (Junping Du via mingma)

(cherry picked from commit 08244264c0583472b9c4e16591cfde72c6db62a2)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b169889f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b169889f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b169889f

Branch: refs/heads/branch-2
Commit: b169889f01309757197a8a27b6244a87c77a3ce3
Parents: 5e6bbe6
Author: Ming Ma <min...@apache.org>
Authored: Fri Jul 10 08:30:10 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Fri Jul 10 08:34:01 2015 -0700

--
 .../hadoop/yarn/sls/nodemanager/NodeInfo.java   |  8 +++-
 .../yarn/sls/scheduler/RMNodeWrapper.java   |  5 +++
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../server/resourcemanager/rmnode/RMNode.java   |  2 +
 .../resourcemanager/rmnode/RMNodeImpl.java  | 43 
 .../yarn/server/resourcemanager/MockNodes.java  |  5 +++
 .../resourcemanager/TestRMNodeTransitions.java  | 36 ++--
 7 files changed, 91 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b169889f/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
--
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
index ee6eb7b..440779c 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
@@ -62,7 +62,8 @@ public class NodeInfo {
     private NodeState state;
     private List<ContainerId> toCleanUpContainers;
     private List<ApplicationId> toCleanUpApplications;
-
+    private List<ApplicationId> runningApplications;
+
     public FakeRMNodeImpl(NodeId nodeId, String nodeAddr, String httpAddress,
         Resource perNode, String rackName, String healthReport,
         int cmdPort, String hostName, NodeState state) {
@@ -77,6 +78,7 @@ public class NodeInfo {
       this.state = state;
       toCleanUpApplications = new ArrayList<ApplicationId>();
       toCleanUpContainers = new ArrayList<ContainerId>();
+      runningApplications = new ArrayList<ApplicationId>();
     }
 
     public NodeId getNodeID() {
@@ -135,6 +137,10 @@ public class NodeInfo {
       return toCleanUpApplications;
     }
 
+    public List<ApplicationId> getRunningApps() {
+      return runningApplications;
+    }
+
     public void updateNodeHeartbeatResponseForCleanup(
         NodeHeartbeatResponse response) {
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b169889f/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
--
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
index b64be1b..a6633ae 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
@@ -119,6 +119,11 @@ public class RMNodeWrapper implements RMNode {
   }
 
   @Override
+  public ListApplicationId getRunningApps() {
+return node.getRunningApps();
+  }
+
+  @Override
   public void updateNodeHeartbeatResponseForCleanup(
   NodeHeartbeatResponse nodeHeartbeatResponse) {
 node.updateNodeHeartbeatResponseForCleanup(nodeHeartbeatResponse);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b169889f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 2331b67..74705f2 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1637,6 +1637,9 @@ Release 2.6.0 - 2014-11-18
 YARN-2811. In Fair Scheduler, reservation fulfillments shouldn't ignore max
 share (Siqi Li via Sandy Ryza)
 
+YARN-3445. Cache runningApps in RMNode for getting running apps on given
+NodeId. (Junping Du via mingma)
+
   IMPROVEMENTS
 
 YARN-2242. Improve exception information on AM launch crashes. (Li Lu 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b169889f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java

hadoop git commit: YARN-3445. Cache runningApps in RMNode for getting running apps on given NodeId. (Junping Du via mingma)

2015-07-10 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk b48908033 -> 08244264c


YARN-3445. Cache runningApps in RMNode for getting running apps on given 
NodeId. (Junping Du via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/08244264
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/08244264
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/08244264

Branch: refs/heads/trunk
Commit: 08244264c0583472b9c4e16591cfde72c6db62a2
Parents: b489080
Author: Ming Ma <min...@apache.org>
Authored: Fri Jul 10 08:30:10 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Fri Jul 10 08:30:10 2015 -0700

--
 .../hadoop/yarn/sls/nodemanager/NodeInfo.java   |  8 +++-
 .../yarn/sls/scheduler/RMNodeWrapper.java   |  5 +++
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../server/resourcemanager/rmnode/RMNode.java   |  2 +
 .../resourcemanager/rmnode/RMNodeImpl.java  | 43 
 .../yarn/server/resourcemanager/MockNodes.java  |  5 +++
 .../resourcemanager/TestRMNodeTransitions.java  | 36 ++--
 7 files changed, 91 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/08244264/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
--
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
index ee6eb7b..440779c 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
@@ -62,7 +62,8 @@ public class NodeInfo {
     private NodeState state;
     private List<ContainerId> toCleanUpContainers;
     private List<ApplicationId> toCleanUpApplications;
-
+    private List<ApplicationId> runningApplications;
+
     public FakeRMNodeImpl(NodeId nodeId, String nodeAddr, String httpAddress,
         Resource perNode, String rackName, String healthReport,
         int cmdPort, String hostName, NodeState state) {
@@ -77,6 +78,7 @@ public class NodeInfo {
       this.state = state;
       toCleanUpApplications = new ArrayList<ApplicationId>();
       toCleanUpContainers = new ArrayList<ContainerId>();
+      runningApplications = new ArrayList<ApplicationId>();
     }
 
     public NodeId getNodeID() {
@@ -135,6 +137,10 @@ public class NodeInfo {
       return toCleanUpApplications;
     }
 
+    public List<ApplicationId> getRunningApps() {
+      return runningApplications;
+    }
+
     public void updateNodeHeartbeatResponseForCleanup(
         NodeHeartbeatResponse response) {
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/08244264/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
--
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
index b64be1b..a6633ae 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
@@ -119,6 +119,11 @@ public class RMNodeWrapper implements RMNode {
   }
 
   @Override
+  public ListApplicationId getRunningApps() {
+return node.getRunningApps();
+  }
+
+  @Override
   public void updateNodeHeartbeatResponseForCleanup(
   NodeHeartbeatResponse nodeHeartbeatResponse) {
 node.updateNodeHeartbeatResponseForCleanup(nodeHeartbeatResponse);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/08244264/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 2a9ff98..db000d7 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1678,6 +1678,9 @@ Release 2.6.0 - 2014-11-18
 YARN-2811. In Fair Scheduler, reservation fulfillments shouldn't ignore max
 share (Siqi Li via Sandy Ryza)
 
+YARN-3445. Cache runningApps in RMNode for getting running apps on given
+NodeId. (Junping Du via mingma)
+
   IMPROVEMENTS
 
 YARN-2197. Add a link to YARN CHANGES.txt in the left side of doc

http://git-wip-us.apache.org/repos/asf/hadoop/blob/08244264/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java

[2/2] hadoop git commit: Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop into trunk

2015-06-29 Thread mingma
Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop into 
trunk


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/34ee0b9b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/34ee0b9b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/34ee0b9b

Branch: refs/heads/trunk
Commit: 34ee0b9b4797093f5aeb0d55f32cf1b74b02e1c2
Parents: 8e1bdc1 4672315
Author: Ming Ma <min...@apache.org>
Authored: Mon Jun 29 14:37:44 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Mon Jun 29 14:37:44 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../api/records/impl/pb/SerializedExceptionPBImpl.java  |  2 +-
 .../records/impl/pb/TestSerializedExceptionPBImpl.java  | 12 ++--
 3 files changed, 14 insertions(+), 3 deletions(-)
--




[1/2] hadoop git commit: HADOOP-12107. long running apps may have a huge number of StatisticsData instances under FileSystem (Sangjin Lee via Ming Ma)

2015-06-29 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4672315e2 -> 34ee0b9b4


HADOOP-12107. long running apps may have a huge number of StatisticsData 
instances under FileSystem (Sangjin Lee via Ming Ma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8e1bdc17
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8e1bdc17
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8e1bdc17

Branch: refs/heads/trunk
Commit: 8e1bdc17d9134e01115ae7c929503d8ac0325207
Parents: 460e98f
Author: Ming Ma <min...@apache.org>
Authored: Mon Jun 29 14:37:38 2015 -0700
Committer: Ming Ma <min...@apache.org>
Committed: Mon Jun 29 14:37:38 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../java/org/apache/hadoop/fs/FileSystem.java   | 140 +--
 .../apache/hadoop/fs/FCStatisticsBaseTest.java  |  56 +++-
 3 files changed, 155 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8e1bdc17/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index a9b44e3..50192ae 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -490,6 +490,9 @@ Trunk (Unreleased)
 HADOOP-11347. RawLocalFileSystem#mkdir and create should honor umask (Varun
 Saxena via Colin P. McCabe)
 
+HADOOP-12107. long running apps may have a huge number of StatisticsData
+instances under FileSystem (Sangjin Lee via Ming Ma)
+
   OPTIMIZATIONS
 
 HADOOP-7761. Improve the performance of raw comparisons. (todd)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8e1bdc17/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index 3f9e3bd..1d7bc87 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -20,7 +20,8 @@ package org.apache.hadoop.fs;
 import java.io.Closeable;
 import java.io.FileNotFoundException;
 import java.io.IOException;
-import java.lang.ref.WeakReference;
+import java.lang.ref.PhantomReference;
+import java.lang.ref.ReferenceQueue;
 import java.net.URI;
 import java.net.URISyntaxException;
 import java.security.PrivilegedExceptionAction;
@@ -32,7 +33,6 @@ import java.util.HashMap;
 import java.util.HashSet;
 import java.util.IdentityHashMap;
 import java.util.Iterator;
-import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.NoSuchElementException;
@@ -2920,16 +2920,6 @@ public abstract class FileSystem extends Configured implements Closeable {
       volatile int readOps;
       volatile int largeReadOps;
       volatile int writeOps;
-      /**
-       * Stores a weak reference to the thread owning this StatisticsData.
-       * This allows us to remove StatisticsData objects that pertain to
-       * threads that no longer exist.
-       */
-      final WeakReference<Thread> owner;
-
-      StatisticsData(WeakReference<Thread> owner) {
-        this.owner = owner;
-      }
 
       /**
        * Add another StatisticsData object to this one.
@@ -3000,17 +2990,37 @@ public abstract class FileSystem extends Configured implements Closeable {
      * Thread-local data.
      */
     private final ThreadLocal<StatisticsData> threadData;
-
+
     /**
-     * List of all thread-local data areas.  Protected by the Statistics lock.
+     * Set of all thread-local data areas.  Protected by the Statistics lock.
+     * The references to the statistics data are kept using phantom references
+     * to the associated threads. Proper clean-up is performed by the cleaner
+     * thread when the threads are garbage collected.
      */
-    private LinkedList<StatisticsData> allData;
+    private final Set<StatisticsDataReference> allData;
+
+    /**
+     * Global reference queue and a cleaner thread that manage statistics data
+     * references from all filesystem instances.
+     */
+    private static final ReferenceQueue<Thread> STATS_DATA_REF_QUEUE;
+    private static final Thread STATS_DATA_CLEANER;
+
+    static {
+      STATS_DATA_REF_QUEUE = new ReferenceQueue<Thread>();
+      // start a single daemon cleaner thread
+      STATS_DATA_CLEANER = new Thread(new StatisticsDataReferenceCleaner());
+      STATS_DATA_CLEANER.
+