hadoop git commit: YARN-3980. Plumb resource-utilization info in node heartbeat through to the scheduler. (Inigo Goiri via kasha)

2015-11-24 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/trunk f80dc6f49 -> 52948bb20


YARN-3980. Plumb resource-utilization info in node heartbeat through to the 
scheduler. (Inigo Goiri via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/52948bb2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/52948bb2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/52948bb2

Branch: refs/heads/trunk
Commit: 52948bb20bd1446164df1d3920c46c96dad750ae
Parents: f80dc6f
Author: Karthik Kambatla 
Authored: Tue Nov 24 10:05:12 2015 +0530
Committer: Karthik Kambatla 
Committed: Tue Nov 24 13:47:17 2015 +0530

--
 .../hadoop/yarn/sls/nodemanager/NodeInfo.java   |  11 +
 .../yarn/sls/scheduler/RMNodeWrapper.java   |  11 +
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../impl/pb/ResourceUtilizationPBImpl.java  |   2 +-
 .../nodemanager/NodeStatusUpdaterImpl.java  |   3 +-
 .../resourcemanager/ResourceTrackerService.java |   5 +-
 .../server/resourcemanager/rmnode/RMNode.java   |  15 +-
 .../resourcemanager/rmnode/RMNodeImpl.java  |  55 +
 .../rmnode/RMNodeStatusEvent.java   |  53 ++--
 .../scheduler/SchedulerNode.java|  38 +++
 .../scheduler/capacity/CapacityScheduler.java   |   5 +
 .../scheduler/fair/FairScheduler.java   |   5 +
 .../scheduler/fifo/FifoScheduler.java   |   4 +
 .../yarn/server/resourcemanager/MockNodes.java  |  28 ++-
 .../resourcemanager/TestRMNodeTransitions.java  |  16 +-
 .../TestRMAppLogAggregationStatus.java  |  32 +--
 .../webapp/TestRMWebServicesNodes.java  |   6 +-
 .../hadoop/yarn/server/MiniYARNCluster.java |  54 +++-
 .../TestMiniYarnClusterNodeUtilization.java | 245 +++
 19 files changed, 522 insertions(+), 69 deletions(-)
--
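For context, a minimal sketch (not part of this patch) of the utilization plumbing: the interface below mirrors the two RMNode accessors added in the diff, while the UtilizationTracker class and its field names are illustrative assumptions about how a scheduler-side consumer might read the values carried by the node heartbeat.

// Illustrative sketch only: the accessors mirror those added by YARN-3980,
// but UtilizationTracker and its fields are assumptions, not patch code.
import org.apache.hadoop.yarn.server.api.records.ResourceUtilization;

interface UtilizationAwareNode {
  // Utilization aggregated over the containers running on the node.
  ResourceUtilization getAggregatedContainersUtilization();
  // Utilization of the node as a whole (containers plus everything else).
  ResourceUtilization getNodeUtilization();
}

class UtilizationTracker {
  private ResourceUtilization containersUtilization;
  private ResourceUtilization nodeUtilization;

  // Invoked when a node status update (heartbeat) reaches the scheduler.
  void updateFrom(UtilizationAwareNode node) {
    containersUtilization = node.getAggregatedContainersUtilization();
    nodeUtilization = node.getNodeUtilization();
  }
}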


http://git-wip-us.apache.org/repos/asf/hadoop/blob/52948bb2/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
--
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
index dae2ce7..f5943a8 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
@@ -35,6 +35,7 @@ import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.NodeState;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatResponse;
+import org.apache.hadoop.yarn.server.api.records.ResourceUtilization;
 import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode
@@ -188,6 +189,16 @@ public class NodeInfo {
   // TODO Auto-generated method stub
   return null;
 }
+
+@Override
+public ResourceUtilization getAggregatedContainersUtilization() {
+  return null;
+}
+
+@Override
+public ResourceUtilization getNodeUtilization() {
+  return null;
+}
   }
 
   public static RMNode newNodeInfo(String rackName, String hostName,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/52948bb2/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
--
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
index 8c65ccc..e778188 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.NodeState;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatResponse;
+import org.apache.hadoop.yarn.server.api.records.ResourceUtilization;
 import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode
@@ -176,4 +177,14 @@ public class RMNodeWrapper implements RMNode {
 // TODO Auto-generated method stub
 return null;
   }
+
+  @Override
+  public ResourceUtilization getAggregatedContainersUtilization() {
+    return null;
+  }
+
+  @Override
+  public ResourceUtilization getNodeUtilization() {
+    return null;
+  }
 }

hadoop git commit: YARN-3980. Plumb resource-utilization info in node heartbeat through to the scheduler. (Inigo Goiri via kasha)

2015-11-24 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 52c889ac1 -> 49ed955c9


YARN-3980. Plumb resource-utilization info in node heartbeat through to the 
scheduler. (Inigo Goiri via kasha)

(cherry picked from commit 52948bb20bd1446164df1d3920c46c96dad750ae)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/49ed955c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/49ed955c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/49ed955c

Branch: refs/heads/branch-2
Commit: 49ed955c91a3c495f40a59b8b13371bcebce473d
Parents: 52c889a
Author: Karthik Kambatla 
Authored: Tue Nov 24 10:05:12 2015 +0530
Committer: Karthik Kambatla 
Committed: Tue Nov 24 13:52:29 2015 +0530

--
 .../hadoop/yarn/sls/nodemanager/NodeInfo.java   |  11 +
 .../yarn/sls/scheduler/RMNodeWrapper.java   |  11 +
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../impl/pb/ResourceUtilizationPBImpl.java  |   2 +-
 .../nodemanager/NodeStatusUpdaterImpl.java  |   3 +-
 .../resourcemanager/ResourceTrackerService.java |   5 +-
 .../server/resourcemanager/rmnode/RMNode.java   |  15 +-
 .../resourcemanager/rmnode/RMNodeImpl.java  |  55 +
 .../rmnode/RMNodeStatusEvent.java   |  53 ++--
 .../scheduler/SchedulerNode.java|  38 +++
 .../scheduler/capacity/CapacityScheduler.java   |   5 +
 .../scheduler/fair/FairScheduler.java   |   5 +
 .../scheduler/fifo/FifoScheduler.java   |   4 +
 .../yarn/server/resourcemanager/MockNodes.java  |  28 ++-
 .../resourcemanager/TestRMNodeTransitions.java  |  16 +-
 .../TestRMAppLogAggregationStatus.java  |  32 +--
 .../webapp/TestRMWebServicesNodes.java  |   6 +-
 .../hadoop/yarn/server/MiniYARNCluster.java |  54 +++-
 .../TestMiniYarnClusterNodeUtilization.java | 245 +++
 19 files changed, 522 insertions(+), 69 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/49ed955c/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
--
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
index dae2ce7..f5943a8 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
@@ -35,6 +35,7 @@ import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.NodeState;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatResponse;
+import org.apache.hadoop.yarn.server.api.records.ResourceUtilization;
 import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode
@@ -188,6 +189,16 @@ public class NodeInfo {
   // TODO Auto-generated method stub
   return null;
 }
+
+@Override
+public ResourceUtilization getAggregatedContainersUtilization() {
+  return null;
+}
+
+@Override
+public ResourceUtilization getNodeUtilization() {
+  return null;
+}
   }
 
   public static RMNode newNodeInfo(String rackName, String hostName,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49ed955c/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
--
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
index 8c65ccc..e778188 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.NodeState;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatResponse;
+import org.apache.hadoop.yarn.server.api.records.ResourceUtilization;
 import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode
@@ -176,4 +177,14 @@ public class RMNodeWrapper implements RMNode {
 // TODO Auto-generated method stub
 

hadoop git commit: CHANGES.txt: add YARN-4367 to 2.8.0.

2015-11-24 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/trunk 52948bb20 -> fb0f09e46


CHANGES.txt: add YARN-4367 to 2.8.0.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fb0f09e4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fb0f09e4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fb0f09e4

Branch: refs/heads/trunk
Commit: fb0f09e46b456789ec1c7470873b6de231430773
Parents: 52948bb
Author: Tsuyoshi Ozawa 
Authored: Tue Nov 24 17:39:34 2015 +0900
Committer: Tsuyoshi Ozawa 
Committed: Tue Nov 24 17:39:34 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb0f09e4/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0532e1d..24807d7 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1059,6 +1059,8 @@ Release 2.8.0 - UNRELEASED
 YARN-4345. yarn rmadmin -updateNodeResource doesn't work (Junping Du via
 jlowe)
 
+YARN-4367. SLS webapp doesn't load. (kasha).
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES



hadoop git commit: YARN-4298. Fix findbugs warnings in hadoop-yarn-common. Contributed by Sunil G.

2015-11-24 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk fb0f09e46 -> 8c0133f3d


YARN-4298. Fix findbugs warnings in hadoop-yarn-common. Contributed by Sunil G.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8c0133f3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8c0133f3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8c0133f3

Branch: refs/heads/trunk
Commit: 8c0133f3d4c85733c1e0389d1ee39f1eab58c0b9
Parents: fb0f09e
Author: Akira Ajisaka 
Authored: Tue Nov 24 18:40:33 2015 +0900
Committer: Akira Ajisaka 
Committed: Tue Nov 24 18:40:33 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt  | 3 +++
 .../yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8c0133f3/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 24807d7..6558e40 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1061,6 +1061,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-4367. SLS webapp doesn't load. (kasha).
 
+YARN-4298. Fix findbugs warnings in hadoop-yarn-common.
+(Sunil G via aajisaka)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8c0133f3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
index bd460f6..da87465 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
@@ -386,7 +386,7 @@ public class AllocateResponsePBImpl extends 
AllocateResponse {
   }
 
   @Override
-  public Priority getApplicationPriority() {
+  public synchronized Priority getApplicationPriority() {
 AllocateResponseProtoOrBuilder p = viaProto ? proto : builder;
 if (this.appPriority != null) {
   return this.appPriority;
@@ -399,7 +399,7 @@ public class AllocateResponsePBImpl extends 
AllocateResponse {
   }
 
   @Override
-  public void setApplicationPriority(Priority priority) {
+  public synchronized void setApplicationPriority(Priority priority) {
 maybeInitBuilder();
 if (priority == null)
   builder.clearApplicationPriority();
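A minimal sketch of the pattern behind this kind of findbugs warning (inconsistent synchronization: a field guarded by a lock in some methods but not in others). The class and field names below are illustrative stand-ins, not the actual AllocateResponsePBImpl code.

class LazyRecord {
  private Object cached;                // lazily materialized from a proto/builder

  synchronized Object get() {           // reads take the same lock as writes
    if (cached == null) {
      cached = new Object();            // stand-in for building from the proto
    }
    return cached;
  }

  synchronized void set(Object value) { // writes take the same lock as reads
    cached = value;
  }
}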



hadoop git commit: YARN-4298. Fix findbugs warnings in hadoop-yarn-common. Contributed by Sunil G.

2015-11-24 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 49ed955c9 -> ac0ddc4fe


YARN-4298. Fix findbugs warnings in hadoop-yarn-common. Contributed by Sunil G.

(cherry picked from commit 8c0133f3d4c85733c1e0389d1ee39f1eab58c0b9)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ac0ddc4f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ac0ddc4f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ac0ddc4f

Branch: refs/heads/branch-2
Commit: ac0ddc4fec907d19a95c5e5305bb27a82d0fa085
Parents: 49ed955
Author: Akira Ajisaka 
Authored: Tue Nov 24 18:40:33 2015 +0900
Committer: Akira Ajisaka 
Committed: Tue Nov 24 18:40:56 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt  | 3 +++
 .../yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ac0ddc4f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index e17883d..d2ce060 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1009,6 +1009,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-4367. SLS webapp doesn't load. (kasha)
 
+YARN-4298. Fix findbugs warnings in hadoop-yarn-common.
+(Sunil G via aajisaka)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ac0ddc4f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
index bd460f6..da87465 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
@@ -386,7 +386,7 @@ public class AllocateResponsePBImpl extends 
AllocateResponse {
   }
 
   @Override
-  public Priority getApplicationPriority() {
+  public synchronized Priority getApplicationPriority() {
 AllocateResponseProtoOrBuilder p = viaProto ? proto : builder;
 if (this.appPriority != null) {
   return this.appPriority;
@@ -399,7 +399,7 @@ public class AllocateResponsePBImpl extends 
AllocateResponse {
   }
 
   @Override
-  public void setApplicationPriority(Priority priority) {
+  public synchronized void setApplicationPriority(Priority priority) {
 maybeInitBuilder();
 if (priority == null)
   builder.clearApplicationPriority();



hadoop git commit: YARN-4387. Fix typo in FairScheduler log message. Contributed by Xin Wang.

2015-11-24 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8c0133f3d -> 28dfe721b


YARN-4387. Fix typo in FairScheduler log message. Contributed by Xin Wang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/28dfe721
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/28dfe721
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/28dfe721

Branch: refs/heads/trunk
Commit: 28dfe721b86ccbaf2ddcfb7e709b226ac766803a
Parents: 8c0133f
Author: Tsuyoshi Ozawa 
Authored: Tue Nov 24 17:35:38 2015 +0900
Committer: Tsuyoshi Ozawa 
Committed: Tue Nov 24 19:24:01 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt| 2 ++
 .../yarn/server/resourcemanager/scheduler/fair/FairScheduler.java  | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/28dfe721/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 6558e40..204c338 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1064,6 +1064,8 @@ Release 2.8.0 - UNRELEASED
 YARN-4298. Fix findbugs warnings in hadoop-yarn-common.
 (Sunil G via aajisaka)
 
+YARN-4387. Fix typo in FairScheduler log message. (Xin Wang via ozawa)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/28dfe721/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
index f1839f5..04977a6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
@@ -500,7 +500,7 @@ public class FairScheduler extends
 // containers on the RMNode (see SchedulerNode.releaseContainer()).
 completedContainer(container, status, RMContainerEventType.KILL);
 LOG.info("Killing container" + container +
-" (after waiting for premption for " +
+" (after waiting for preemption for " +
 (getClock().getTime() - time) + "ms)");
   }
 } else {



hadoop git commit: YARN-4387. Fix typo in FairScheduler log message. Contributed by Xin Wang.

2015-11-24 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ac0ddc4fe -> 9e458c3e9


YARN-4387. Fix typo in FairScheduler log message. Contributed by Xin Wang.

(cherry picked from commit 28dfe721b86ccbaf2ddcfb7e709b226ac766803a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9e458c3e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9e458c3e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9e458c3e

Branch: refs/heads/branch-2
Commit: 9e458c3e9e5d6ae7850b3d56309157e2213a6603
Parents: ac0ddc4
Author: Tsuyoshi Ozawa 
Authored: Tue Nov 24 17:35:38 2015 +0900
Committer: Tsuyoshi Ozawa 
Committed: Tue Nov 24 22:10:31 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt| 2 ++
 .../yarn/server/resourcemanager/scheduler/fair/FairScheduler.java  | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9e458c3e/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index d2ce060..fa2049a 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1012,6 +1012,8 @@ Release 2.8.0 - UNRELEASED
 YARN-4298. Fix findbugs warnings in hadoop-yarn-common.
 (Sunil G via aajisaka)
 
+YARN-4387. Fix typo in FairScheduler log message. (Xin Wang via ozawa)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9e458c3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
index f1839f5..04977a6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
@@ -500,7 +500,7 @@ public class FairScheduler extends
 // containers on the RMNode (see SchedulerNode.releaseContainer()).
 completedContainer(container, status, RMContainerEventType.KILL);
 LOG.info("Killing container" + container +
-" (after waiting for premption for " +
+" (after waiting for preemption for " +
 (getClock().getTime() - time) + "ms)");
   }
 } else {



hadoop git commit: HDFS-9289. Make DataStreamer#block thread safe and verify genStamp in commitBlock. Contributed by Chang Li.

2015-11-24 Thread zhz
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.6 b9a6f9aa1 -> 238458b25


HDFS-9289. Make DataStreamer#block thread safe and verify genStamp in 
commitBlock. Contributed by Chang Li.

Conflicts:
hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java

Change-Id: Ibd44ff1bf92bad7262db724990a6a64c1975ffb6


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/238458b2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/238458b2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/238458b2

Branch: refs/heads/branch-2.6
Commit: 238458b25921a652eefead2cebd797c1b9de0343
Parents: b9a6f9a
Author: Kihwal Lee 
Authored: Wed Nov 4 12:10:59 2015 -0600
Committer: Zhe Zhang 
Committed: Tue Nov 24 09:44:50 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |  2 +-
 .../BlockInfoUnderConstruction.java |  2 +-
 .../server/blockmanagement/BlockManager.java|  4 +
 .../org/apache/hadoop/hdfs/DFSTestUtil.java | 67 +
 .../TestCommitBlockWithInvalidGenStamp.java | 98 
 6 files changed, 174 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/238458b2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index cc7bae8..5e683e3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -20,6 +20,9 @@ Release 2.6.3 - UNRELEASED
 
 HDFS-9083. Replication violates block placement policy (Rushabh Shah)
 
+HDFS-9289. Make DataStreamer#block thread safe and verify genStamp in
+commitBlock. (Chang Li via zhz)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/238458b2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index 92dbc8e..21e4d4e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -363,7 +363,7 @@ public class DFSOutputStream extends FSOutputSummer
   //
   class DataStreamer extends Daemon {
 private volatile boolean streamerClosed = false;
-private ExtendedBlock block; // its length is number of bytes acked
+private volatile ExtendedBlock block; // its length is number of bytes 
acked
 private Token accessToken;
 private DataOutputStream blockStream;
 private DataInputStream blockReplyStream;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/238458b2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
index dd3593f..703373e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
@@ -262,7 +262,7 @@ public class BlockInfoUnderConstruction extends BlockInfo {
   throw new IOException("Trying to commit inconsistent block: id = "
   + block.getBlockId() + ", expected id = " + getBlockId());
 blockUCState = BlockUCState.COMMITTED;
-this.set(getBlockId(), block.getNumBytes(), block.getGenerationStamp());
+this.setNumBytes(block.getNumBytes());
 // Sort out invalid replicas.
 setGenerationStampAndVerifyReplicas(block.getGenerationStamp());
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/238458b2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoo
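A sketch of the two ideas in this fix, using simplified stand-in types and names (not the DFSOutputStream or BlockManager code): a field handed between threads is declared volatile so the latest reference is always visible, and a commit refuses a block whose generation stamp does not match the expected one.

class BlockCommitSketch {
  private volatile long blockLength;    // written by one thread, read by another

  void onAck(long ackedBytes) {         // streamer thread
    blockLength = ackedBytes;
  }

  long length() {                       // client-facing thread
    return blockLength;
  }

  void commit(long expectedGenStamp, long reportedGenStamp) throws java.io.IOException {
    if (expectedGenStamp != reportedGenStamp) {
      throw new java.io.IOException("Refusing to commit block: expected generation stamp "
          + expectedGenStamp + " but got " + reportedGenStamp);
    }
  }
}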

[2/2] hadoop git commit: HDFS-6101. TestReplaceDatanodeOnFailure fails occasionally. Contributed by Wei-Chiu Chuang.

2015-11-24 Thread cnauroth
HDFS-6101. TestReplaceDatanodeOnFailure fails occasionally. Contributed by 
Wei-Chiu Chuang.

(cherry picked from commit 1777608fa075a807c645619fda87cb8de1b0350c)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c79f0177
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c79f0177
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c79f0177

Branch: refs/heads/branch-2
Commit: c79f01775269594f8624264623c8b9a747edbb60
Parents: 9e458c3
Author: cnauroth 
Authored: Tue Nov 24 09:39:21 2015 -0800
Committer: cnauroth 
Committed: Tue Nov 24 09:39:37 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hdfs/TestReplaceDatanodeOnFailure.java  | 64 +++-
 2 files changed, 53 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c79f0177/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 6955092..421240b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1510,6 +1510,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9433. DFS getEZForPath API on a non-existent file should throw 
FileNotFoundException
 (Rakesh R via umamahesh)
 
+HDFS-6101. TestReplaceDatanodeOnFailure fails occasionally.
+(Wei-Chiu Chuang via cnauroth)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c79f0177/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
index 76d592c..45acd12 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
@@ -17,10 +17,14 @@
  */
 package org.apache.hadoop.hdfs;
 
+import com.google.common.base.Supplier;
+
 import java.io.IOException;
 import java.util.Arrays;
+import java.util.concurrent.TimeoutException;
 
 import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
@@ -42,7 +46,7 @@ import org.junit.Test;
  * data can be read by another client.
  */
 public class TestReplaceDatanodeOnFailure {
-  static final Log LOG = AppendTestUtil.LOG;
+  static final Log LOG = LogFactory.getLog(TestReplaceDatanodeOnFailure.class);
 
   static final String DIR = "/" + 
TestReplaceDatanodeOnFailure.class.getSimpleName() + "/";
   static final short REPLICATION = 3;
@@ -114,7 +118,8 @@ public class TestReplaceDatanodeOnFailure {
   @Test
   public void testReplaceDatanodeOnFailure() throws Exception {
 final Configuration conf = new HdfsConfiguration();
-
+// do not consider load factor when selecting a data node
+conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REPLICATION_CONSIDERLOAD_KEY, 
false);
 //always replace a datanode
 ReplaceDatanodeOnFailure.write(Policy.ALWAYS, true, conf);
 
@@ -124,31 +129,40 @@ public class TestReplaceDatanodeOnFailure {
 ).racks(racks).numDataNodes(REPLICATION).build();
 
 try {
+  cluster.waitActive();
   final DistributedFileSystem fs = cluster.getFileSystem();
   final Path dir = new Path(DIR);
-  
-  final SlowWriter[] slowwriters = new SlowWriter[10];
+  final int NUM_WRITERS = 10;
+  final int FIRST_BATCH = 5;
+  final SlowWriter[] slowwriters = new SlowWriter[NUM_WRITERS];
   for(int i = 1; i <= slowwriters.length; i++) {
 //create slow writers in different speed
 slowwriters[i - 1] = new SlowWriter(fs, new Path(dir, "file" + i), 
i*200L);
   }
 
-  for(SlowWriter s : slowwriters) {
-s.start();
+  for(int i = 0; i < FIRST_BATCH; i++) {
+slowwriters[i].start();
   }
 
   // Let slow writers write something.
-  // Some of them are too slow and will be not yet started. 
-  sleepSeconds(1);
+  // Some of them are too slow and will be not yet started.
+  sleepSeconds(3);
 
   //start new datanodes
   cluster.startDataNodes(conf, 2, true, null, new String[]{RACK1, RACK1});
+  cluster.waitActive();
+  // wait for first block reports for up to 10 seconds
+  cluster.waitFirstBRCompleted(0, 1000

[1/2] hadoop git commit: HDFS-6101. TestReplaceDatanodeOnFailure fails occasionally. Contributed by Wei-Chiu Chuang.

2015-11-24 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 9e458c3e9 -> c79f01775
  refs/heads/trunk 28dfe721b -> 1777608fa


HDFS-6101. TestReplaceDatanodeOnFailure fails occasionally. Contributed by 
Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1777608f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1777608f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1777608f

Branch: refs/heads/trunk
Commit: 1777608fa075a807c645619fda87cb8de1b0350c
Parents: 28dfe72
Author: cnauroth 
Authored: Tue Nov 24 09:39:21 2015 -0800
Committer: cnauroth 
Committed: Tue Nov 24 09:39:21 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hdfs/TestReplaceDatanodeOnFailure.java  | 64 +++-
 2 files changed, 53 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1777608f/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index ca2ed15..d39ed3f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2368,6 +2368,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9433. DFS getEZForPath API on a non-existent file should throw 
FileNotFoundException
 (Rakesh R via umamahesh)
 
+HDFS-6101. TestReplaceDatanodeOnFailure fails occasionally.
+(Wei-Chiu Chuang via cnauroth)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1777608f/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
index d351020..bbc447c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
@@ -17,10 +17,14 @@
  */
 package org.apache.hadoop.hdfs;
 
+import com.google.common.base.Supplier;
+
 import java.io.IOException;
 import java.util.Arrays;
+import java.util.concurrent.TimeoutException;
 
 import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
@@ -41,7 +45,7 @@ import org.junit.Test;
  * This class tests that data nodes are correctly replaced on failure.
  */
 public class TestReplaceDatanodeOnFailure {
-  static final Log LOG = AppendTestUtil.LOG;
+  static final Log LOG = LogFactory.getLog(TestReplaceDatanodeOnFailure.class);
 
   static final String DIR = "/" + 
TestReplaceDatanodeOnFailure.class.getSimpleName() + "/";
   static final short REPLICATION = 3;
@@ -113,7 +117,8 @@ public class TestReplaceDatanodeOnFailure {
   @Test
   public void testReplaceDatanodeOnFailure() throws Exception {
 final Configuration conf = new HdfsConfiguration();
-
+// do not consider load factor when selecting a data node
+conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REPLICATION_CONSIDERLOAD_KEY, 
false);
 //always replace a datanode
 ReplaceDatanodeOnFailure.write(Policy.ALWAYS, true, conf);
 
@@ -123,31 +128,40 @@ public class TestReplaceDatanodeOnFailure {
 ).racks(racks).numDataNodes(REPLICATION).build();
 
 try {
+  cluster.waitActive();
   final DistributedFileSystem fs = cluster.getFileSystem();
   final Path dir = new Path(DIR);
-  
-  final SlowWriter[] slowwriters = new SlowWriter[10];
+  final int NUM_WRITERS = 10;
+  final int FIRST_BATCH = 5;
+  final SlowWriter[] slowwriters = new SlowWriter[NUM_WRITERS];
   for(int i = 1; i <= slowwriters.length; i++) {
 //create slow writers in different speed
 slowwriters[i - 1] = new SlowWriter(fs, new Path(dir, "file" + i), 
i*200L);
   }
 
-  for(SlowWriter s : slowwriters) {
-s.start();
+  for(int i = 0; i < FIRST_BATCH; i++) {
+slowwriters[i].start();
   }
 
   // Let slow writers write something.
-  // Some of them are too slow and will be not yet started. 
-  sleepSeconds(1);
+  // Some of them are too slow and will be not yet started.
+  sleepSeconds(3);
 
   //start new datanodes
   cluster.startDataNodes(conf, 2, true, null, new String[]{RACK1, RACK1});
+  cluster.waitActive();
+  // wait for
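An illustrative condition-polling helper in the spirit of this change; the real test relies on cluster.waitActive()/waitFirstBRCompleted() and (judging by the Supplier import) GenericTestUtils-style waiting, not this class, which is only a sketch of replacing fixed sleeps with polling under a timeout.

import java.util.concurrent.TimeoutException;

final class WaitUtil {
  static void waitFor(java.util.function.BooleanSupplier condition,
                      long intervalMs, long timeoutMs)
      throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.getAsBoolean()) {           // poll instead of a fixed sleep
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException("condition not met within " + timeoutMs + " ms");
      }
      Thread.sleep(intervalMs);
    }
  }
}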

hadoop git commit: HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess replicas. (Xiao Chen via mingma)

2015-11-24 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1777608fa -> 0e54b164a


HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess replicas. 
(Xiao Chen via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0e54b164
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0e54b164
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0e54b164

Branch: refs/heads/trunk
Commit: 0e54b164a8d8acf09aca8712116bf7a554cb4846
Parents: 1777608
Author: Ming Ma 
Authored: Tue Nov 24 10:30:24 2015 -0800
Committer: Ming Ma 
Committed: Tue Nov 24 10:30:24 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../BlockPlacementPolicyDefault.java| 32 ++--
 .../BlockPlacementPolicyRackFaultTolerant.java  |  8 ++
 .../BlockPlacementPolicyWithNodeGroup.java  |  3 +-
 .../BlockPlacementPolicyWithUpgradeDomain.java  | 19 ++---
 .../blockmanagement/TestReplicationPolicy.java  | 82 
 .../TestReplicationPolicyWithNodeGroup.java |  6 +-
 .../TestReplicationPolicyWithUpgradeDomain.java | 32 
 .../hdfs/server/namenode/ha/TestDNFencing.java  |  4 +-
 9 files changed, 153 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0e54b164/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index d39ed3f..b441b35 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1676,6 +1676,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-7988. Replace usage of ExactSizeInputStream with LimitInputStream.
 (Walter Su via wheat9)
 
+HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess
+replicas. (Xiao Chen via mingma)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0e54b164/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
index 13b17e3..08e7851 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
@@ -916,7 +916,8 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
   public DatanodeStorageInfo chooseReplicaToDelete(
   Collection moreThanOne,
   Collection exactlyOne,
-  final List excessTypes) {
+  final List excessTypes,
+  Map> rackMap) {
 long oldestHeartbeat =
   monotonicNow() - heartbeatInterval * tolerateHeartbeatMultiplier;
 DatanodeStorageInfo oldestHeartbeatStorage = null;
@@ -926,7 +927,7 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
 // Pick the node with the oldest heartbeat or with the least free space,
 // if all hearbeats are within the tolerable heartbeat interval
 for(DatanodeStorageInfo storage : pickupReplicaSet(moreThanOne,
-exactlyOne)) {
+exactlyOne, rackMap)) {
   if (!excessTypes.contains(storage.getStorageType())) {
 continue;
   }
@@ -991,7 +992,8 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
   moreThanOne, exactlyOne, excessTypes)) {
 cur = delNodeHintStorage;
   } else { // regular excessive replica removal
-cur = chooseReplicaToDelete(moreThanOne, exactlyOne, excessTypes);
+cur = chooseReplicaToDelete(moreThanOne, exactlyOne, excessTypes,
+rackMap);
   }
   firstOne = false;
   if (cur == null) {
@@ -1044,16 +1046,34 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
 splitNodesWithRack(locs, rackMap, moreThanOne, exactlyOne);
 return notReduceNumOfGroups(moreThanOne, source, target);
   }
+
   /**
* Pick up replica node set for deleting replica as over-replicated. 
* First set contains replica nodes on rack with more than one
* replica while second set contains remaining replica nodes.
-   * So pick up first set if not empty. If first is empty, then pick second.
+   * If only 1 rack, pick all. If 2 racks, pick all that have more than
+   * 1 replicas on the same rack; if no such 
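A simplified sketch of the rack-grouping step that pickupReplicaSet builds on: group replica locations by rack, prefer deleting from racks that already hold more than one replica, and fall back to all replicas otherwise. The types and names here are stand-ins, not the BlockPlacementPolicyDefault implementation, and the sketch does not reproduce the refined two-rack rule described in the (truncated) javadoc above.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class ExcessReplicaSketch {
  static List<String> candidates(Map<String, String> replicaToRack) {
    Map<String, List<String>> byRack = new HashMap<>();
    for (Map.Entry<String, String> e : replicaToRack.entrySet()) {
      byRack.computeIfAbsent(e.getValue(), r -> new ArrayList<>()).add(e.getKey());
    }
    List<String> moreThanOneOnRack = new ArrayList<>();
    for (List<String> onSameRack : byRack.values()) {
      if (onSameRack.size() > 1) {
        moreThanOneOnRack.addAll(onSameRack);     // racks already holding >1 replica
      }
    }
    // If every rack holds exactly one replica, all of them are candidates.
    return moreThanOneOnRack.isEmpty()
        ? new ArrayList<>(replicaToRack.keySet()) : moreThanOneOnRack;
  }
}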

hadoop git commit: HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess replicas. (Xiao Chen via mingma)

2015-11-24 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c79f01775 -> 85d04dc46


HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess replicas. 
(Xiao Chen via mingma)

(cherry picked from commit 0e54b164a8d8acf09aca8712116bf7a554cb4846)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/85d04dc4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/85d04dc4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/85d04dc4

Branch: refs/heads/branch-2
Commit: 85d04dc46494c5b627920bbc021f0515af8f753e
Parents: c79f017
Author: Ming Ma 
Authored: Tue Nov 24 10:30:24 2015 -0800
Committer: Ming Ma 
Committed: Tue Nov 24 10:31:23 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../BlockPlacementPolicyDefault.java| 32 ++--
 .../BlockPlacementPolicyRackFaultTolerant.java  |  8 ++
 .../BlockPlacementPolicyWithNodeGroup.java  |  3 +-
 .../BlockPlacementPolicyWithUpgradeDomain.java  | 19 ++---
 .../blockmanagement/TestReplicationPolicy.java  | 82 
 .../TestReplicationPolicyWithNodeGroup.java |  6 +-
 .../TestReplicationPolicyWithUpgradeDomain.java | 32 
 .../hdfs/server/namenode/ha/TestDNFencing.java  |  4 +-
 9 files changed, 153 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/85d04dc4/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 421240b..5cacca3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -811,6 +811,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-7988. Replace usage of ExactSizeInputStream with LimitInputStream.
 (Walter Su via wheat9)
 
+HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess
+replicas. (Xiao Chen via mingma)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/85d04dc4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
index 13b17e3..08e7851 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
@@ -916,7 +916,8 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
   public DatanodeStorageInfo chooseReplicaToDelete(
   Collection moreThanOne,
   Collection exactlyOne,
-  final List excessTypes) {
+  final List excessTypes,
+  Map> rackMap) {
 long oldestHeartbeat =
   monotonicNow() - heartbeatInterval * tolerateHeartbeatMultiplier;
 DatanodeStorageInfo oldestHeartbeatStorage = null;
@@ -926,7 +927,7 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
 // Pick the node with the oldest heartbeat or with the least free space,
 // if all hearbeats are within the tolerable heartbeat interval
 for(DatanodeStorageInfo storage : pickupReplicaSet(moreThanOne,
-exactlyOne)) {
+exactlyOne, rackMap)) {
   if (!excessTypes.contains(storage.getStorageType())) {
 continue;
   }
@@ -991,7 +992,8 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
   moreThanOne, exactlyOne, excessTypes)) {
 cur = delNodeHintStorage;
   } else { // regular excessive replica removal
-cur = chooseReplicaToDelete(moreThanOne, exactlyOne, excessTypes);
+cur = chooseReplicaToDelete(moreThanOne, exactlyOne, excessTypes,
+rackMap);
   }
   firstOne = false;
   if (cur == null) {
@@ -1044,16 +1046,34 @@ public class BlockPlacementPolicyDefault extends 
BlockPlacementPolicy {
 splitNodesWithRack(locs, rackMap, moreThanOne, exactlyOne);
 return notReduceNumOfGroups(moreThanOne, source, target);
   }
+
   /**
* Pick up replica node set for deleting replica as over-replicated. 
* First set contains replica nodes on rack with more than one
* replica while second set contains remaining replica nodes.
-   * So pick up first set if not empty. If first is empty, then pick second.
+   * If only 1 rack, pick all. If 2 racks, p

hadoop git commit: HADOOP-12294. Correct CHANGES.txt description.

2015-11-24 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0e54b164a -> 0e1c12c17


HADOOP-12294. Correct CHANGES.txt description.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0e1c12c1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0e1c12c1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0e1c12c1

Branch: refs/heads/trunk
Commit: 0e1c12c1740132778fb29e41fc47d374fb87021e
Parents: 0e54b16
Author: cnauroth 
Authored: Tue Nov 24 10:51:56 2015 -0800
Committer: cnauroth 
Committed: Tue Nov 24 10:51:56 2015 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0e1c12c1/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 873cd64..1f384f2 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -35,8 +35,8 @@ Trunk (Unreleased)
 HADOOP-10787 Rename/remove non-HADOOP_*, etc from the shell scripts.
 (aw via vvasudev)
 
-HADOOP-12294. Remove the support of the deprecated dfs.umask.
-(Chang Li vha wheat9)
+HADOOP-12294. Throw an Exception when fs.permissions.umask-mode is
+misconfigured (Chang Li vha wheat9)
 
   NEW FEATURES
 



hadoop git commit: Revert "HADOOP-11361. Fix a race condition in MetricsSourceAdapter.updateJmxCache. Contributed by Brahma Reddy Battula."

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0e1c12c17 -> 17b1a5482


Revert "HADOOP-11361. Fix a race condition in 
MetricsSourceAdapter.updateJmxCache. Contributed by Brahma Reddy Battula."

This reverts commit 4356e8a5ef0ac6d11a34704b80ef360a710e623a.

Conflicts:

hadoop-common-project/hadoop-common/CHANGES.txt

hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/17b1a548
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/17b1a548
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/17b1a548

Branch: refs/heads/trunk
Commit: 17b1a5482b32dab82225e8233648990bc77674ba
Parents: 0e1c12c
Author: Jason Lowe 
Authored: Tue Nov 24 19:12:04 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 19:12:04 2015 +

--
 hadoop-common-project/hadoop-common/CHANGES.txt|  3 ---
 .../hadoop/metrics2/impl/MetricsSourceAdapter.java | 17 ++---
 2 files changed, 10 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/17b1a548/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 1f384f2..cc1827b 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1492,9 +1492,6 @@ Release 2.7.3 - UNRELEASED
 HADOOP-12374. Updated expunge command description.
 (WeiWei Yang via eyang)
 
-HADOOP-11361. Fix a race condition in MetricsSourceAdapter.updateJmxCache.
-(Brahma Reddy Battula via ozawa)
-
 HADOOP-12348. MetricsSystemImpl creates MetricsSourceAdapter with wrong
 time unit parameter. (zxu via rkanter)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/17b1a548/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
index a3665b3..cbba014 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
@@ -158,7 +158,7 @@ class MetricsSourceAdapter implements DynamicMBean {
 
   private void updateJmxCache() {
 boolean getAllMetrics = false;
-synchronized (this) {
+synchronized(this) {
   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
 // temporarilly advance the expiry while updating the cache
 jmxCacheTS = Time.now() + jmxCacheTTL;
@@ -169,21 +169,24 @@ class MetricsSourceAdapter implements DynamicMBean {
   getAllMetrics = true;
   lastRecsCleared = false;
 }
-  } else {
+  }
+  else {
 return;
   }
+}
 
-  if (getAllMetrics) {
-MetricsCollectorImpl builder = new MetricsCollectorImpl();
-getMetrics(builder, true);
-  }
+if (getAllMetrics) {
+  MetricsCollectorImpl builder = new MetricsCollectorImpl();
+  getMetrics(builder, true);
+}
 
+synchronized(this) {
   updateAttrCache();
   if (getAllMetrics) {
 updateInfoCache();
   }
   jmxCacheTS = Time.now();
-  lastRecs = null; // in case regular interval update is not running
+  lastRecs = null;  // in case regular interval update is not running
   lastRecsCleared = true;
 }
   }
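A sketch of the control flow this revert restores, with simplified names: check and temporarily bump the TTL under the lock, run the expensive metrics fetch outside the lock, then re-acquire the lock to publish the refreshed cache. This is only an outline of the shape of updateJmxCache(), not the actual MetricsSourceAdapter code.

class JmxCacheSketch {
  private final Object lock = new Object();
  private long cacheTimestamp;
  private final long cacheTtlMs = 10_000L;

  void updateCache() {
    synchronized (lock) {
      if (System.currentTimeMillis() - cacheTimestamp < cacheTtlMs) {
        return;                                   // cache still fresh
      }
      // Temporarily advance the expiry so concurrent callers skip the refresh.
      cacheTimestamp = System.currentTimeMillis() + cacheTtlMs;
    }
    Object metrics = fetchMetrics();              // expensive call, outside the lock
    synchronized (lock) {
      publish(metrics);                           // update attribute/info caches
      cacheTimestamp = System.currentTimeMillis();
    }
  }

  private Object fetchMetrics() { return new Object(); }
  private void publish(Object metrics) { }
}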



hadoop git commit: Revert "HADOOP-11361. Fix a race condition in MetricsSourceAdapter.updateJmxCache. Contributed by Brahma Reddy Battula."

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 85d04dc46 -> 792313011


Revert "HADOOP-11361. Fix a race condition in 
MetricsSourceAdapter.updateJmxCache. Contributed by Brahma Reddy Battula."

This reverts commit 4356e8a5ef0ac6d11a34704b80ef360a710e623a.

Conflicts:

hadoop-common-project/hadoop-common/CHANGES.txt

hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
(cherry picked from commit 17b1a5482b32dab82225e8233648990bc77674ba)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/79231301
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/79231301
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/79231301

Branch: refs/heads/branch-2
Commit: 7923130114c1b9a8e8086809065b9f6cc9860df3
Parents: 85d04dc
Author: Jason Lowe 
Authored: Tue Nov 24 19:12:04 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 19:13:26 2015 +

--
 hadoop-common-project/hadoop-common/CHANGES.txt|  3 ---
 .../hadoop/metrics2/impl/MetricsSourceAdapter.java | 17 ++---
 2 files changed, 10 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/79231301/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index a300c50..66b8870 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -871,9 +871,6 @@ Release 2.7.3 - UNRELEASED
 HADOOP-12374. Updated expunge command description.
 (WeiWei Yang via eyang)
 
-HADOOP-11361. Fix a race condition in MetricsSourceAdapter.updateJmxCache.
-(Brahma Reddy Battula via ozawa)
-
 HADOOP-12348. MetricsSystemImpl creates MetricsSourceAdapter with wrong
 time unit parameter. (zxu via rkanter)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/79231301/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
index 706ef7e..d56ee53 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
@@ -158,7 +158,7 @@ class MetricsSourceAdapter implements DynamicMBean {
 
   private void updateJmxCache() {
 boolean getAllMetrics = false;
-synchronized (this) {
+synchronized(this) {
   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
 // temporarilly advance the expiry while updating the cache
 jmxCacheTS = Time.now() + jmxCacheTTL;
@@ -169,21 +169,24 @@ class MetricsSourceAdapter implements DynamicMBean {
   getAllMetrics = true;
   lastRecsCleared = false;
 }
-  } else {
+  }
+  else {
 return;
   }
+}
 
-  if (getAllMetrics) {
-MetricsCollectorImpl builder = new MetricsCollectorImpl();
-getMetrics(builder, true);
-  }
+if (getAllMetrics) {
+  MetricsCollectorImpl builder = new MetricsCollectorImpl();
+  getMetrics(builder, true);
+}
 
+synchronized(this) {
   updateAttrCache();
   if (getAllMetrics) {
 updateInfoCache();
   }
   jmxCacheTS = Time.now();
-  lastRecs = null; // in case regular interval update is not running
+  lastRecs = null;  // in case regular interval update is not running
   lastRecsCleared = true;
 }
   }



hadoop git commit: Revert "HADOOP-11361. Fix a race condition in MetricsSourceAdapter.updateJmxCache. Contributed by Brahma Reddy Battula."

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 9bd9544bd -> 2946621a5


Revert "HADOOP-11361. Fix a race condition in 
MetricsSourceAdapter.updateJmxCache. Contributed by Brahma Reddy Battula."

This reverts commit 4356e8a5ef0ac6d11a34704b80ef360a710e623a.

Conflicts:

hadoop-common-project/hadoop-common/CHANGES.txt

hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
(cherry picked from commit 17b1a5482b32dab82225e8233648990bc77674ba)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2946621a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2946621a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2946621a

Branch: refs/heads/branch-2.7
Commit: 2946621a531bd71e2408951ab72ecaf5f9fea3f0
Parents: 9bd9544
Author: Jason Lowe 
Authored: Tue Nov 24 19:12:04 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 19:18:37 2015 +

--
 hadoop-common-project/hadoop-common/CHANGES.txt|  3 ---
 .../hadoop/metrics2/impl/MetricsSourceAdapter.java | 17 ++---
 2 files changed, 10 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2946621a/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 7065abc..3ea617b 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -25,9 +25,6 @@ Release 2.7.3 - UNRELEASED
 HADOOP-12374. Updated expunge command description.
 (WeiWei Yang via eyang)
 
-HADOOP-11361. Fix a race condition in MetricsSourceAdapter.updateJmxCache.
-(Brahma Reddy Battula via ozawa)
-
 HADOOP-12348. MetricsSystemImpl creates MetricsSourceAdapter with wrong
 time unit parameter. (zxu via rkanter)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2946621a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
index 706ef7e..d56ee53 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
@@ -158,7 +158,7 @@ class MetricsSourceAdapter implements DynamicMBean {
 
   private void updateJmxCache() {
 boolean getAllMetrics = false;
-synchronized (this) {
+synchronized(this) {
   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
 // temporarilly advance the expiry while updating the cache
 jmxCacheTS = Time.now() + jmxCacheTTL;
@@ -169,21 +169,24 @@ class MetricsSourceAdapter implements DynamicMBean {
   getAllMetrics = true;
   lastRecsCleared = false;
 }
-  } else {
+  }
+  else {
 return;
   }
+}
 
-  if (getAllMetrics) {
-MetricsCollectorImpl builder = new MetricsCollectorImpl();
-getMetrics(builder, true);
-  }
+if (getAllMetrics) {
+  MetricsCollectorImpl builder = new MetricsCollectorImpl();
+  getMetrics(builder, true);
+}
 
+synchronized(this) {
   updateAttrCache();
   if (getAllMetrics) {
 updateInfoCache();
   }
   jmxCacheTS = Time.now();
-  lastRecs = null; // in case regular interval update is not running
+  lastRecs = null;  // in case regular interval update is not running
   lastRecsCleared = true;
 }
   }
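
The JIRA title names a check-then-act race: a TTL-guarded cache is only safe if
the expiry check and the cache rebuild happen under one lock, otherwise two
callers can both observe an expired cache and rebuild it concurrently. A
minimal, standalone sketch of that single-lock pattern, with illustrative names
rather than the MetricsSourceAdapter internals:

import java.util.concurrent.TimeUnit;

// Illustration only: a TTL-guarded cache where the expiry check and the rebuild
// run under the same lock, so two callers can never rebuild concurrently.
class TtlGuardedCache {
  private final long ttlMs = TimeUnit.SECONDS.toMillis(10);
  private long lastRefreshMs = 0L;
  private String cached = "not yet built";

  synchronized String get() {
    long now = System.currentTimeMillis();
    if (now - lastRefreshMs >= ttlMs) {
      cached = rebuild();          // still holding the lock while rebuilding
      lastRefreshMs = now;
    }
    return cached;
  }

  private String rebuild() {
    return "snapshot@" + System.currentTimeMillis();
  }
}

The trade-off is lock hold time: keeping the rebuild inside the lock is the
simplest form to reason about, while moving it outside requires extra care that
the separate critical sections still agree on what has been refreshed.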



hadoop git commit: HDFS-5165. Remove the TotalFiles metrics. Contributed by Akira AJISAKA.

2015-11-24 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/trunk 17b1a5482 -> db4cab21f


HDFS-5165. Remove the TotalFiles metrics. Contributed by Akira AJISAKA.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/db4cab21
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/db4cab21
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/db4cab21

Branch: refs/heads/trunk
Commit: db4cab21f4c661f68d6a6dec50aae00d75168486
Parents: 17b1a54
Author: Haohui Mai 
Authored: Tue Nov 24 11:41:57 2015 -0800
Committer: Haohui Mai 
Committed: Tue Nov 24 11:41:57 2015 -0800

--
 .../hadoop-common/src/site/markdown/Metrics.md|  1 -
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   |  2 ++
 .../apache/hadoop/hdfs/server/namenode/FSNamesystem.java  |  8 
 .../hadoop/hdfs/server/namenode/NameNodeMXBean.java   | 10 --
 4 files changed, 2 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/db4cab21/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
--
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
index 9e02ffa..a91bbad 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
@@ -231,7 +231,6 @@ Each metrics record contains tags such as HAState and 
Hostname as additional inf
 | `MillisSinceLastLoadedEdits` | (HA-only) Time in milliseconds since the last 
time standby NameNode load edit log. In active NameNode, set to 0 |
 | `BlockCapacity` | Current number of block capacity |
 | `StaleDataNodes` | Current number of DataNodes marked stale due to delayed 
heartbeat |
-| `TotalFiles` | Deprecated: Use FilesTotal instead |
 | `MissingReplOneBlocks` | Current number of missing blocks with replication 
factor 1 |
 | `NumFilesUnderConstruction` | Current number of files under construction |
 | `NumActiveClients` | Current number of active clients holding lease |

http://git-wip-us.apache.org/repos/asf/hadoop/blob/db4cab21/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index b441b35..639b8f3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -44,6 +44,8 @@ Trunk (Unreleased)
 
 HDFS-9278. Fix preferredBlockSize typo in OIV XML output. (Nicole Pazmany 
via wang)
 
+HDFS-5165. Remove the TotalFiles metrics. (Akira Ajisaka via wheat9)
+
   NEW FEATURES
 
 HDFS-3125. Add JournalService to enable Journal Daemon. (suresh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/db4cab21/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 8d77630..0559288 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -6001,14 +6001,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 return getBlocksTotal();
   }
 
-  /** @deprecated Use {@link #getFilesTotal()} instead. */
-  @Deprecated
-  @Override // NameNodeMXBean
-  @Metric
-  public long getTotalFiles() {
-return getFilesTotal();
-  }
-
   @Override // NameNodeMXBean
   public long getNumberOfMissingBlocks() {
 return getMissingBlocksCount();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/db4cab21/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
index 3f78155..c0d256a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
@@ -139,16 +139,6 @@ public interface NameNodeMXBean {
   public long getTotalBlocks();
   
   /**
-   * Gets the total
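
Dashboards or scripts that still read the removed TotalFiles attribute can
switch to FilesTotal, which the NameNode serves from its /jmx servlet. A
minimal sketch of such a probe; the host, port (50070) and bean name below are
assumptions based on common 2.x defaults, not anything introduced by this
change:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Illustration only: fetch the FilesTotal metric from the NameNode /jmx servlet.
// The host, port and bean name are assumptions (common 2.x defaults).
public class FilesTotalProbe {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://namenode.example.com:50070/jmx"
        + "?qry=Hadoop:service=NameNode,name=FSNamesystem");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        if (line.contains("\"FilesTotal\"")) {  // crude scan of the JSON reply
          System.out.println(line.trim());
        }
      }
    } finally {
      conn.disconnect();
    }
  }
}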

hadoop git commit: HDFS-9368. Implement reads with implicit offset state in libhdfs++. Contributed by James Clampffer.

2015-11-24 Thread jhc
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-8707 68f436689 -> ebfb48b88


HDFS-9368. Implement reads with implicit offset state in libhdfs++.  
Contributed by James Clampffer.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ebfb48b8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ebfb48b8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ebfb48b8

Branch: refs/heads/HDFS-8707
Commit: ebfb48b88bac2fd56571dd51fcc195db5598f1b1
Parents: 68f4366
Author: James 
Authored: Tue Nov 24 15:30:08 2015 -0500
Committer: James 
Committed: Tue Nov 24 15:30:08 2015 -0500

--
 .../native/libhdfspp/lib/bindings/c/hdfs.cc | 97 +---
 .../native/libhdfspp/lib/bindings/c/hdfs_cpp.cc | 62 -
 .../native/libhdfspp/lib/bindings/c/hdfs_cpp.h  | 22 -
 .../main/native/libhdfspp/lib/fs/filesystem.h   |  2 +-
 .../main/native/libhdfspp/lib/fs/inputstream.cc |  2 +
 5 files changed, 167 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ebfb48b8/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/bindings/c/hdfs.cc
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/bindings/c/hdfs.cc
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/bindings/c/hdfs.cc
index 9b985a9..853e9d3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/bindings/c/hdfs.cc
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/bindings/c/hdfs.cc
@@ -58,6 +58,43 @@ static void ReportError(int errnum, std::string msg) {
 #endif
 }
 
+/* Convert Status wrapped error into appropriate errno and return code */
+static int Error(const Status &stat) {
+  int code = stat.code();
+  switch (code) {
+case Status::Code::kOk:
+  return 0;
+case Status::Code::kInvalidArgument:
+  ReportError(EINVAL, "Invalid argument");
+  break;
+case Status::Code::kResourceUnavailable:
+  ReportError(EAGAIN, "Resource temporarily unavailable");
+  break;
+case Status::Code::kUnimplemented:
+  ReportError(ENOSYS, "Function not implemented");
+  break;
+case Status::Code::kException:
+  ReportError(EINTR, "Exception raised");
+  break;
+default:
+  ReportError(ENOSYS, "Error: unrecognised code");
+  }
+  return -1;
+}
+
+/* return false on failure */
+bool CheckSystemAndHandle(hdfsFS fs, hdfsFile file) {
+  if (!fs) {
+ReportError(ENODEV, "Cannot perform FS operations with null FS handle.");
+return false;
+  }
+  if (!file) {
+ReportError(EBADF, "Cannot perform FS operations with null File handle.");
+return false;
+  }
+  return true;
+}
+
 /**
  * C API implementations
  **/
@@ -110,28 +147,66 @@ hdfsFile hdfsOpenFile(hdfsFS fs, const char *path, int 
flags, int bufferSize,
 }
 
 int hdfsCloseFile(hdfsFS fs, hdfsFile file) {
-  if (!fs) {
-ReportError(ENODEV, "Cannot perform FS operations with null FS handle.");
-return -1;
-  }
-  if (!file) {
-ReportError(EBADF, "Cannot perform FS operations with null File handle.");
+  if (!CheckSystemAndHandle(fs, file)) {
 return -1;
   }
+
   delete file;
   return 0;
 }
 
 tSize hdfsPread(hdfsFS fs, hdfsFile file, tOffset position, void *buffer,
 tSize length) {
-  if (!fs) {
-ReportError(ENODEV, "Cannot perform FS operations with null FS handle.");
+  if (!CheckSystemAndHandle(fs, file)) {
 return -1;
   }
-  if (!file) {
-ReportError(EBADF, "Cannot perform FS operations with null File handle.");
+
+  size_t len = length;
+  Status stat = file->get_impl()->Pread(buffer, &len, position);
+  if (!stat.ok()) {
+return Error(stat);
+  }
+  return (tSize)len;
+}
+
+tSize hdfsRead(hdfsFS fs, hdfsFile file, void *buffer, tSize length) {
+  if (!CheckSystemAndHandle(fs, file)) {
+return -1;
+  }
+
+  size_t len = length;
+  Status stat = file->get_impl()->Read(buffer, &len);
+  if (!stat.ok()) {
+return Error(stat);
+  }
+
+  return (tSize)len;
+}
+
+int hdfsSeek(hdfsFS fs, hdfsFile file, tOffset desiredPos) {
+  if (!CheckSystemAndHandle(fs, file)) {
 return -1;
   }
 
-  return file->get_impl()->Pread(buffer, length, position);
+  off_t desired = desiredPos;
+  Status stat = file->get_impl()->Seek(&desired, std::ios_base::beg);
+  if (!stat.ok()) {
+return Error(stat);
+  }
+
+  return (int)desired;
+}
+
+tOffset hdfsTell(hdfsFS fs, hdfsFile file) {
+  if (!CheckSystemAndHandle(fs, file)) {
+return -1;
+  }
+
+  ssize_t offset = 0;
+  Status stat = file->get_impl()->Seek(&offset, std::ios_base::cur);
+  if (!stat.ok()) {
+return Error(stat);
+  }
+
+  return offset;
 }

http://git-

hadoop git commit: HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30 seconds for printing info log messages.


2015-11-24 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.6 238458b25 -> 09dbb1156


HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30 seconds 
for printing info log messages.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/09dbb115
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/09dbb115
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/09dbb115

Branch: refs/heads/branch-2.6
Commit: 09dbb11566410f2e9101555637ad81c7acc7edfd
Parents: 238458b
Author: Tsz-Wo Nicholas Sze 
Authored: Thu Nov 19 14:54:01 2015 -0800
Committer: Tsz-Wo Nicholas Sze 
Committed: Tue Nov 24 12:40:24 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 +++
 .../hadoop/hdfs/server/blockmanagement/BlockManager.java | 8 +++-
 2 files changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/09dbb115/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 5e683e3..fb6df6e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -23,6 +23,9 @@ Release 2.6.3 - UNRELEASED
 HDFS-9289. Make DataStreamer#block thread safe and verify genStamp in
 commitBlock. (Chang Li via zhz)
 
+HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30
+seconds for printing info log messags.  (szetszwo)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/09dbb115/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index feaf843..8dbee81 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -2853,11 +2853,9 @@ public class BlockManager {
 for(DatanodeStorageInfo storage : blocksMap.getStorages(block, 
State.NORMAL)) {
   final DatanodeDescriptor cur = storage.getDatanodeDescriptor();
   if (storage.areBlockContentsStale()) {
-LOG.info("BLOCK* processOverReplicatedBlock: " +
-"Postponing processing of over-replicated " +
-block + " since storage + " + storage
-+ "datanode " + cur + " does not yet have up-to-date " +
-"block information.");
+LOG.trace("BLOCK* processOverReplicatedBlock: Postponing {}"
++ " since storage {} does not yet have up-to-date information.",
+block, storage);
 postponeBlock(block);
 return;
   }
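
Two things in the hunk account for the speedup: the message drops from info to
trace, and it uses the parameterized form, so the formatted string (including
the toString() of the block and storage arguments) is only built when trace is
actually enabled. A standalone sketch of that deferred-formatting idea with the
slf4j API; the class and values are illustrative:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustration only: with the parameterized form, slf4j skips both the string
// concatenation and the toString() of the arguments unless trace is enabled.
public class DeferredTraceLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(DeferredTraceLogging.class);

  static void postpone(Object block, Object storage) {
    // Cheap when trace is off: no message is formatted, nothing is concatenated.
    LOG.trace("Postponing {} since storage {} is not up to date", block, storage);
  }

  public static void main(String[] args) {
    postpone("blk_1073741825", "DS-storage-1");
  }
}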



hadoop git commit: HADOOP-11954. Solaris does not support RLIMIT_MEMLOCK as in Linux (Alan Burlison via aw)

2015-11-24 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk 56493cda0 -> e8a87d739


HADOOP-11954. Solaris does not support RLIMIT_MEMLOCK as in Linux (Alan 
Burlison via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e8a87d73
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e8a87d73
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e8a87d73

Branch: refs/heads/trunk
Commit: e8a87d739f0ffa98fdf87d15440d1e2528080d0c
Parents: 56493cd
Author: Allen Wittenauer 
Authored: Tue Nov 24 12:44:58 2015 -0800
Committer: Allen Wittenauer 
Committed: Tue Nov 24 12:45:08 2015 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt| 3 +++
 .../main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c   | 6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e8a87d73/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index cc1827b..70295be 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1359,6 +1359,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-9822. Create constant MAX_CAPACITY in RetryCache rather than
 hard-coding 16 in RetryCache constructor. (Tsuyoshi Ozawa via wheat9)
 
+HADOOP-11954. Solaris does not support RLIMIT_MEMLOCK as in Linux
+(Alan Burlison via aw)
+
   OPTIMIZATIONS
 
 HADOOP-12051. ProtobufRpcEngine.invoke() should use Exception.toString()

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e8a87d73/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
index bd7784e..a7d4b55 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
+++ 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
@@ -1255,9 +1255,7 @@ JNIEXPORT jlong JNICALL
 Java_org_apache_hadoop_io_nativeio_NativeIO_getMemlockLimit0(
 JNIEnv *env, jclass clazz)
 {
-#ifdef WINDOWS
-  return 0;
-#else
+#ifdef RLIMIT_MEMLOCK
   struct rlimit rlim;
   int rc = getrlimit(RLIMIT_MEMLOCK, &rlim);
   if (rc != 0) {
@@ -1266,6 +1264,8 @@ JNIEnv *env, jclass clazz)
   }
   return (rlim.rlim_cur == RLIM_INFINITY) ?
 INT64_MAX : rlim.rlim_cur;
+#else
+  return 0;
 #endif
 }
 



hadoop git commit: Move HDFS-9434 to 2.6.3 in CHANGES.txt.

2015-11-24 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 792313011 -> 15ecbc95b


Move HDFS-9434 to 2.6.3 in CHANGES.txt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/15ecbc95
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/15ecbc95
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/15ecbc95

Branch: refs/heads/branch-2
Commit: 15ecbc95bff4c88e5382772dd9204f720c11990f
Parents: 7923130
Author: Tsz-Wo Nicholas Sze 
Authored: Tue Nov 24 12:42:57 2015 -0800
Committer: Tsz-Wo Nicholas Sze 
Committed: Tue Nov 24 12:44:37 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/15ecbc95/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 5cacca3..9b64fae 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1484,9 +1484,6 @@ Release 2.8.0 - UNRELEASED
 HDFS-9400. TestRollingUpgradeRollback fails on branch-2.
 (Brahma Reddy Battula via cnauroth)
 
-HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30
-seconds for printing info log messags.  (szetszwo)
-
 HDFS-9443. Disabling HDFS client socket cache causes logging message
 printed to console for CLI commands. (Chris Nauroth via wheat9)
 
@@ -2672,6 +2669,9 @@ Release 2.6.3 - UNRELEASED
 HDFS-9431. DistributedFileSystem#concat fails if the target path is
 relative. (Kazuho Fujii via aajisaka)
 
+HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30
+seconds for printing info log messags.  (szetszwo)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES



hadoop git commit: HDFS-8855. Webhdfs client leaks active NameNode connections. Contributed by Xiaobing Zhou.

2015-11-24 Thread xyao
Repository: hadoop
Updated Branches:
  refs/heads/trunk e8a87d739 -> fe5624b85


HDFS-8855. Webhdfs client leaks active NameNode connections. Contributed by 
Xiaobing Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fe5624b8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fe5624b8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fe5624b8

Branch: refs/heads/trunk
Commit: fe5624b85d71720ae9da90a01cad9a3d1ea41160
Parents: e8a87d7
Author: Xiaoyu Yao 
Authored: Tue Nov 24 12:41:08 2015 -0800
Committer: Xiaoyu Yao 
Committed: Tue Nov 24 12:47:57 2015 -0800

--
 .../org/apache/hadoop/security/token/Token.java |  10 +-
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   4 +
 .../server/datanode/web/DatanodeHttpServer.java |   4 +-
 .../web/webhdfs/DataNodeUGIProvider.java| 106 +++--
 .../src/main/resources/hdfs-default.xml |   8 +
 .../web/webhdfs/TestDataNodeUGIProvider.java| 235 +++
 7 files changed, 350 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe5624b8/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
index 2420155..f8b7355 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
@@ -19,6 +19,8 @@
 package org.apache.hadoop.security.token;
 
 import com.google.common.collect.Maps;
+import com.google.common.primitives.Bytes;
+
 import org.apache.commons.codec.binary.Base64;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -32,6 +34,7 @@ import java.io.*;
 import java.util.Arrays;
 import java.util.Map;
 import java.util.ServiceLoader;
+import java.util.UUID;
 
 /**
  * The client-side form of the token.
@@ -337,7 +340,12 @@ public class Token implements 
Writable {
 identifierToString(buffer);
 return buffer.toString();
   }
-  
+
+  public String buildCacheKey() {
+return UUID.nameUUIDFromBytes(
+Bytes.concat(kind.getBytes(), identifier, password)).toString();
+  }
+
   private static ServiceLoader renewers =
   ServiceLoader.load(TokenRenewer.class);
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe5624b8/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 95dfbcf..7d9df2e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2373,6 +2373,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-6101. TestReplaceDatanodeOnFailure fails occasionally.
 (Wei-Chiu Chuang via cnauroth)
 
+HDFS-8855. Webhdfs client leaks active NameNode connections.
+(Xiaobing Zhou via xyao)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe5624b8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 54e0d10..6986896 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -70,6 +70,10 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String  DFS_WEBHDFS_NETTY_HIGH_WATERMARK =
   "dfs.webhdfs.netty.high.watermark";
   public static final int  DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT = 65535;
+  public static final String  DFS_WEBHDFS_UGI_EXPIRE_AFTER_ACCESS_KEY =
+  "dfs.webhdfs.ugi.expire.after.access";
+  public static final int DFS_WEBHDFS_UGI_EXPIRE_AFTER_ACCESS_DEFAULT =
+  10*60*1000; //10 minutes
 
   // HA related configuration
   public static final String  DFS_DATANODE_RESTART_REPLICA_EXPIRY_KEY = 
"dfs.datanode.restart.replica.expiration";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe5624b8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
---
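
The new dfs.webhdfs.ugi.expire.after.access key above pairs with a UGI cache
keyed by the token-derived string from buildCacheKey(), so repeated WebHDFS
requests can reuse cached state instead of re-creating it on every call. A
minimal sketch of that style of expire-after-access cache with Guava; the value
type and loader are placeholders, not the DataNodeUGIProvider internals:

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

// Illustration only: cache expensive per-token state under a stable string key
// (e.g. the value of Token#buildCacheKey()) and drop entries that have not been
// touched for ten minutes. The String value type is a placeholder.
public class ExpireAfterAccessSketch {
  private final LoadingCache<String, String> cache = CacheBuilder.newBuilder()
      .expireAfterAccess(10, TimeUnit.MINUTES)
      .build(new CacheLoader<String, String>() {
        @Override
        public String load(String cacheKey) {
          // Stand-in for the expensive construction the cache is meant to avoid.
          return "state-for-" + cacheKey;
        }
      });

  String get(String cacheKey) throws Exception {
    return cache.get(cacheKey);
  }

  public static void main(String[] args) throws Exception {
    ExpireAfterAccessSketch sketch = new ExpireAfterAccessSketch();
    System.out.println(sketch.get("token-cache-key"));
  }
}

expireAfterAccess matches the behaviour suggested by the key name and its
10-minute default: an entry survives as long as it keeps being used and is
dropped after ten minutes without access.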

hadoop git commit: HADOOP-11954. Solaris does not support RLIMIT_MEMLOCK as in Linux (Alan Burlison via aw)

2015-11-24 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 15ecbc95b -> 38f8952c4


HADOOP-11954. Solaris does not support RLIMIT_MEMLOCK as in Linux (Alan 
Burlison via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/38f8952c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/38f8952c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/38f8952c

Branch: refs/heads/branch-2
Commit: 38f8952c4da851d4d914e7594afd2287635a9876
Parents: 15ecbc9
Author: Allen Wittenauer 
Authored: Tue Nov 24 12:44:58 2015 -0800
Committer: Allen Wittenauer 
Committed: Tue Nov 24 12:48:02 2015 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt| 3 +++
 .../main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c   | 6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/38f8952c/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 66b8870..2720943 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -735,6 +735,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-9822. Create constant MAX_CAPACITY in RetryCache rather than
 hard-coding 16 in RetryCache constructor. (Tsuyoshi Ozawa via wheat9)
 
+HADOOP-11954. Solaris does not support RLIMIT_MEMLOCK as in Linux
+(Alan Burlison via aw)
+
   OPTIMIZATIONS
 
 HADOOP-12051. ProtobufRpcEngine.invoke() should use Exception.toString()

http://git-wip-us.apache.org/repos/asf/hadoop/blob/38f8952c/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
index bd7784e..a7d4b55 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
+++ 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
@@ -1255,9 +1255,7 @@ JNIEXPORT jlong JNICALL
 Java_org_apache_hadoop_io_nativeio_NativeIO_getMemlockLimit0(
 JNIEnv *env, jclass clazz)
 {
-#ifdef WINDOWS
-  return 0;
-#else
+#ifdef RLIMIT_MEMLOCK
   struct rlimit rlim;
   int rc = getrlimit(RLIMIT_MEMLOCK, &rlim);
   if (rc != 0) {
@@ -1266,6 +1264,8 @@ JNIEnv *env, jclass clazz)
   }
   return (rlim.rlim_cur == RLIM_INFINITY) ?
 INT64_MAX : rlim.rlim_cur;
+#else
+  return 0;
 #endif
 }
 



[2/2] hadoop git commit: Move HDFS-9434 to 2.6.3 in CHANGES.txt.

2015-11-24 Thread szetszwo
Move HDFS-9434 to 2.6.3 in CHANGES.txt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/af811030
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/af811030
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/af811030

Branch: refs/heads/branch-2.7
Commit: af811030e5a7294c5b7811a0f40ff78ae1529af4
Parents: 09f4a4e
Author: Tsz-Wo Nicholas Sze 
Authored: Tue Nov 24 12:42:57 2015 -0800
Committer: Tsz-Wo Nicholas Sze 
Committed: Tue Nov 24 12:49:34 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/af811030/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index a5391de..89aede2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1,8 +1,5 @@
 Hadoop HDFS Change Log
 
-HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30
-seconds for printing info log messags.  (szetszwo)
-
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES
@@ -1169,6 +1166,9 @@ Release 2.6.3 - UNRELEASED
 HDFS-9431. DistributedFileSystem#concat fails if the target path is
 relative. (Kazuho Fujii via aajisaka)
 
+HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30
+seconds for printing info log messags.  (szetszwo)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES



[1/2] hadoop git commit: HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30 seconds for printing info log messages.

2015-11-24 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 2946621a5 -> af811030e


HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30 seconds 
for printing info log messages.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/09f4a4ee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/09f4a4ee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/09f4a4ee

Branch: refs/heads/branch-2.7
Commit: 09f4a4eefd461bc418addd3862aec83dbf7165d3
Parents: 2946621
Author: Tsz-Wo Nicholas Sze 
Authored: Thu Nov 19 14:54:01 2015 -0800
Committer: Tsz-Wo Nicholas Sze 
Committed: Tue Nov 24 12:47:50 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 +++
 .../hadoop/hdfs/server/blockmanagement/BlockManager.java | 8 +++-
 2 files changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/09f4a4ee/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 5e11af9..a5391de 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1,5 +1,8 @@
 Hadoop HDFS Change Log
 
+HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30
+seconds for printing info log messags.  (szetszwo)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/09f4a4ee/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 63a7aed..81367ea 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -2884,11 +2884,9 @@ public class BlockManager {
 for(DatanodeStorageInfo storage : blocksMap.getStorages(block, 
State.NORMAL)) {
   final DatanodeDescriptor cur = storage.getDatanodeDescriptor();
   if (storage.areBlockContentsStale()) {
-LOG.info("BLOCK* processOverReplicatedBlock: " +
-"Postponing processing of over-replicated " +
-block + " since storage + " + storage
-+ "datanode " + cur + " does not yet have up-to-date " +
-"block information.");
+LOG.trace("BLOCK* processOverReplicatedBlock: Postponing {}"
++ " since storage {} does not yet have up-to-date information.",
+block, storage);
 postponeBlock(block);
 return;
   }



hadoop git commit: Move HDFS-9434 to 2.6.3 in CHANGES.txt.

2015-11-24 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/trunk db4cab21f -> 56493cda0


Move HDFS-9434 to 2.6.3 in CHANGES.txt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/56493cda
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/56493cda
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/56493cda

Branch: refs/heads/trunk
Commit: 56493cda04e30ab737fc6cecc8c43a87d5b006b7
Parents: db4cab2
Author: Tsz-Wo Nicholas Sze 
Authored: Tue Nov 24 12:42:57 2015 -0800
Committer: Tsz-Wo Nicholas Sze 
Committed: Tue Nov 24 12:42:57 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/56493cda/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 639b8f3..95dfbcf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2344,9 +2344,6 @@ Release 2.8.0 - UNRELEASED
 HDFS-9400. TestRollingUpgradeRollback fails on branch-2.
 (Brahma Reddy Battula via cnauroth)
 
-HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30
-seconds for printing info log messags.  (szetszwo)
-
 HDFS-9443. Disabling HDFS client socket cache causes logging message
 printed to console for CLI commands. (Chris Nauroth via wheat9)
 
@@ -3515,6 +3512,9 @@ Release 2.6.3 - UNRELEASED
 HDFS-9431. DistributedFileSystem#concat fails if the target path is
 relative. (Kazuho Fujii via aajisaka)
 
+HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30
+seconds for printing info log messags.  (szetszwo)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES



hadoop git commit: Move HDFS-9434 to 2.6.3 in CHANGES.txt.

2015-11-24 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7.2 1752fece9 -> 4e8799fff


Move HDFS-9434 to 2.6.3 in CHANGES.txt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4e8799ff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4e8799ff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4e8799ff

Branch: refs/heads/branch-2.7.2
Commit: 4e8799fff4543868d7ebf218ae73f5ae1f807c38
Parents: 1752fec
Author: Tsz-Wo Nicholas Sze 
Authored: Tue Nov 24 12:42:57 2015 -0800
Committer: Tsz-Wo Nicholas Sze 
Committed: Tue Nov 24 12:56:58 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e8799ff/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index f55504b..d2e1b7d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -95,9 +95,6 @@ Release 2.7.2 - 2015-11-11
 HDFS-9413. getContentSummary() on standby should throw StandbyException.
 (Brahma Reddy Battula via mingma)
 
-HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30
-seconds for printing info log messags.  (szetszwo)
-
 Release 2.7.1 - 2015-07-06
 
   INCOMPATIBLE CHANGES
@@ -1138,6 +1135,9 @@ Release 2.6.3 - UNRELEASED
 
   BUG FIXES
 
+HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30
+seconds for printing info log messags.  (szetszwo)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES



hadoop git commit: HDFS-8335. FSNamesystem should construct FSPermissionChecker only if permission is enabled. Contributed by Gabor Liptak.

2015-11-24 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 38f8952c4 -> b07c91a16


HDFS-8335. FSNamesystem should construct FSPermissionChecker only if permission 
is enabled. Contributed by Gabor Liptak.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b07c91a1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b07c91a1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b07c91a1

Branch: refs/heads/branch-2
Commit: b07c91a16b8c95f55b73cd59352428c9ffe6e990
Parents: 38f8952
Author: Haohui Mai 
Authored: Tue Nov 24 13:07:26 2015 -0800
Committer: Haohui Mai 
Committed: Tue Nov 24 13:07:45 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  6 +
 .../server/namenode/FSDirStatAndListingOp.java  | 25 +---
 2 files changed, 23 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b07c91a1/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 9b64fae..097c8e8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -867,6 +867,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-9318. considerLoad factor can be improved. (Kuhu Shukla via kihwal)
 
+HDFS-8335. FSNamesystem should construct FSPermissionChecker only if
+permission is enabled. (Gabor Liptak via wheat9)
+
   BUG FIXES
 
 HDFS-8091: ACLStatus and XAttributes should be presented to
@@ -1513,6 +1516,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-6101. TestReplaceDatanodeOnFailure fails occasionally.
 (Wei-Chiu Chuang via cnauroth)
 
+HDFS-8335. FSNamesystem should construct FSPermissionChecker only if
+permission is enabled. (Gabor Liptak via wheat9)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b07c91a1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
index 171e2f1..c5c2fb4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
@@ -50,12 +50,17 @@ import static org.apache.hadoop.util.Time.now;
 class FSDirStatAndListingOp {
   static DirectoryListing getListingInt(FSDirectory fsd, final String srcArg,
   byte[] startAfter, boolean needLocation) throws IOException {
-FSPermissionChecker pc = fsd.getPermissionChecker();
 byte[][] pathComponents = FSDirectory
 .getPathComponentsForReservedPath(srcArg);
 final String startAfterString = new String(startAfter, Charsets.UTF_8);
-final String src = fsd.resolvePath(pc, srcArg, pathComponents);
-final INodesInPath iip = fsd.getINodesInPath(src, true);
+String src = null;
+
+if (fsd.isPermissionEnabled()) {
+  FSPermissionChecker pc = fsd.getPermissionChecker();
+  src = fsd.resolvePath(pc, srcArg, pathComponents);
+} else {
+  src = FSDirectory.resolvePath(srcArg, pathComponents, fsd);
+}
 
 // Get file name when startAfter is an INodePath
 if (FSDirectory.isReservedName(startAfterString)) {
@@ -72,8 +77,10 @@ class FSDirStatAndListingOp {
   }
 }
 
+final INodesInPath iip = fsd.getINodesInPath(src, true);
 boolean isSuperUser = true;
 if (fsd.isPermissionEnabled()) {
+  FSPermissionChecker pc = fsd.getPermissionChecker();
   if (iip.getLastINode() != null && iip.getLastINode().isDirectory()) {
 fsd.checkPathAccess(pc, iip, FsAction.READ_EXECUTE);
   } else {
@@ -101,15 +108,17 @@ class FSDirStatAndListingOp {
 if (!DFSUtil.isValidName(src)) {
   throw new InvalidPathException("Invalid file name: " + src);
 }
-FSPermissionChecker pc = fsd.getPermissionChecker();
 byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
-src = fsd.resolvePath(pc, src, pathComponents);
-final INodesInPath iip = fsd.getINodesInPath(src, resolveLink);
 if (fsd.isPermissionEnabled()) {
+  FSPermissionChecker pc = fsd.getPermissionChecker();
+  src = fsd.resolvePath(pc, src, pathComponents);
+  final INodesInPath iip = fsd.getINodesInPath(src, resolveLink);
   fsd.checkPermission(pc, iip, false, null, null, null, null, false);
+} 

hadoop git commit: HDFS-8335. FSNamesystem should construct FSPermissionChecker only if permission is enabled. Contributed by Gabor Liptak.

2015-11-24 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/trunk fe5624b85 -> 977e0b3c4


HDFS-8335. FSNamesystem should construct FSPermissionChecker only if permission 
is enabled. Contributed by Gabor Liptak.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/977e0b3c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/977e0b3c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/977e0b3c

Branch: refs/heads/trunk
Commit: 977e0b3c4ce76746a3d8590d2d790fdc96c86ca5
Parents: fe5624b
Author: Haohui Mai 
Authored: Tue Nov 24 13:07:26 2015 -0800
Committer: Haohui Mai 
Committed: Tue Nov 24 13:14:49 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  6 +
 .../server/namenode/FSDirStatAndListingOp.java  | 25 +---
 2 files changed, 23 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/977e0b3c/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 7d9df2e..92897b9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1734,6 +1734,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-9318. considerLoad factor can be improved. (Kuhu Shukla via kihwal)
 
+HDFS-8335. FSNamesystem should construct FSPermissionChecker only if
+permission is enabled. (Gabor Liptak via wheat9)
+
   BUG FIXES
 
 HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs.
@@ -2376,6 +2379,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8855. Webhdfs client leaks active NameNode connections.
 (Xiaobing Zhou via xyao)
 
+HDFS-8335. FSNamesystem should construct FSPermissionChecker only if
+permission is enabled. (Gabor Liptak via wheat9)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/977e0b3c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
index a1ac1a7..d8baa6b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
@@ -52,12 +52,17 @@ import static org.apache.hadoop.util.Time.now;
 class FSDirStatAndListingOp {
   static DirectoryListing getListingInt(FSDirectory fsd, final String srcArg,
   byte[] startAfter, boolean needLocation) throws IOException {
-FSPermissionChecker pc = fsd.getPermissionChecker();
 byte[][] pathComponents = FSDirectory
 .getPathComponentsForReservedPath(srcArg);
 final String startAfterString = new String(startAfter, Charsets.UTF_8);
-final String src = fsd.resolvePath(pc, srcArg, pathComponents);
-final INodesInPath iip = fsd.getINodesInPath(src, true);
+String src = null;
+
+if (fsd.isPermissionEnabled()) {
+  FSPermissionChecker pc = fsd.getPermissionChecker();
+  src = fsd.resolvePath(pc, srcArg, pathComponents);
+} else {
+  src = FSDirectory.resolvePath(srcArg, pathComponents, fsd);
+}
 
 // Get file name when startAfter is an INodePath
 if (FSDirectory.isReservedName(startAfterString)) {
@@ -74,8 +79,10 @@ class FSDirStatAndListingOp {
   }
 }
 
+final INodesInPath iip = fsd.getINodesInPath(src, true);
 boolean isSuperUser = true;
 if (fsd.isPermissionEnabled()) {
+  FSPermissionChecker pc = fsd.getPermissionChecker();
   if (iip.getLastINode() != null && iip.getLastINode().isDirectory()) {
 fsd.checkPathAccess(pc, iip, FsAction.READ_EXECUTE);
   } else {
@@ -103,15 +110,17 @@ class FSDirStatAndListingOp {
 if (!DFSUtil.isValidName(src)) {
   throw new InvalidPathException("Invalid file name: " + src);
 }
-FSPermissionChecker pc = fsd.getPermissionChecker();
 byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
-src = fsd.resolvePath(pc, src, pathComponents);
-final INodesInPath iip = fsd.getINodesInPath(src, resolveLink);
 if (fsd.isPermissionEnabled()) {
+  FSPermissionChecker pc = fsd.getPermissionChecker();
+  src = fsd.resolvePath(pc, src, pathComponents);
+  final INodesInPath iip = fsd.getINodesInPath(src, resolveLink);
   fsd.checkPermission(pc, iip, false, null, null, null, null, false);
+} e
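
The structural change in both hunks is the same: FSPermissionChecker is no
longer constructed unconditionally at the top of the operation, only inside the
isPermissionEnabled() branches that actually use it, so clusters running with
permissions disabled skip that work entirely. A stripped-down sketch of the
guard, with placeholder names rather than the FSDirectory API:

// Illustration only: construct the permission checker lazily, on the branches
// that actually enforce permissions. All names are placeholders, not the
// FSDirectory/FSPermissionChecker API.
class LazyPermissionCheckSketch {
  private final boolean permissionEnabled;

  LazyPermissionCheckSketch(boolean permissionEnabled) {
    this.permissionEnabled = permissionEnabled;
  }

  String resolveForListing(String path) {
    if (permissionEnabled) {
      Checker pc = new Checker();   // built only when it will actually be used
      return pc.resolveAndCheck(path);
    }
    return path;                    // permissions off: no checker is allocated
  }

  private static final class Checker {
    String resolveAndCheck(String path) {
      return path;                  // stand-in for path resolution + access check
    }
  }
}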

hadoop git commit: MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly misleading. Contributed by Nathan Roberts

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 977e0b3c4 -> cab3c7c88


MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly 
misleading. Contributed by Nathan Roberts


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cab3c7c8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cab3c7c8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cab3c7c8

Branch: refs/heads/trunk
Commit: cab3c7c8892ad33a7eb0955b01e99872ab95e192
Parents: 977e0b3
Author: Jason Lowe 
Authored: Tue Nov 24 22:01:03 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 22:01:03 2015 +

--
 hadoop-mapreduce-project/CHANGES.txt | 6 ++
 .../org/apache/hadoop/mapreduce/JobCounter.properties| 8 
 2 files changed, 10 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cab3c7c8/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index d22edd0..3cbacc4 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -654,6 +654,9 @@ Release 2.7.3 - UNRELEASED
 MAPREDUCE-6377. JHS sorting on state column not working in webUi.
 (zhihai xu via devaraj)
 
+MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly
+misleading (Nathan Roberts via jlowe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES
@@ -959,6 +962,9 @@ Release 2.6.3 - UNRELEASED
 position/key information for uncompressed input sometimes. (Zhihai Xu via
 jlowe)
 
+MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly
+misleading (Nathan Roberts via jlowe)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cab3c7c8/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
index 1154784..857f31e 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
@@ -27,10 +27,10 @@ SLOTS_MILLIS_MAPS.name=Total time spent by all 
maps in occupied slot
 SLOTS_MILLIS_REDUCES.name= Total time spent by all reduces in occupied 
slots (ms)
 MILLIS_MAPS.name=  Total time spent by all map tasks (ms)
 MILLIS_REDUCES.name=   Total time spent by all reduce tasks (ms)
-MB_MILLIS_MAPS.name=   Total megabyte-seconds taken by all map 
tasks
-MB_MILLIS_REDUCES.name=Total megabyte-seconds taken by all reduce 
tasks
-VCORES_MILLIS_MAPS.name=   Total vcore-seconds taken by all map tasks
-VCORES_MILLIS_REDUCES.name=Total vcore-seconds taken by all reduce 
tasks
+MB_MILLIS_MAPS.name=   Total megabyte-milliseconds taken by all 
map tasks
+MB_MILLIS_REDUCES.name=Total megabyte-milliseconds taken by all 
reduce tasks
+VCORES_MILLIS_MAPS.name=   Total vcore-milliseconds taken by all map 
tasks
+VCORES_MILLIS_REDUCES.name=Total vcore-milliseconds taken by all 
reduce tasks
 FALLOW_SLOTS_MILLIS_MAPS.name= Total time spent by all maps waiting after 
reserving slots (ms)
 FALLOW_SLOTS_MILLIS_REDUCES.name=  Total time spent by all reduces waiting 
after reserving slots (ms)
 TASKS_REQ_PREEMPT.name=Tasks that have been asked to preempt
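
The rename is about units: the counter value is resource size multiplied by
task wall-clock time in milliseconds, so reading it as megabyte-seconds
overstates usage by a factor of one thousand. A small worked example with
illustrative numbers:

// Illustration only: the MB_MILLIS_* / VCORES_MILLIS_* counters are products of
// resource size and task wall-clock time measured in MILLIseconds.
public class MegabyteMillisecondsExample {
  public static void main(String[] args) {
    long containerMb = 2048L;                  // a 2 GB map task...
    long runtimeMs = 30_000L;                  // ...that ran for 30 seconds
    long mbMillis = containerMb * runtimeMs;   // 61,440,000 MB-ms
    long mbSeconds = mbMillis / 1000L;         // equivalent to 61,440 MB-s
    System.out.println(mbMillis + " MB-ms == " + mbSeconds + " MB-s");
  }
}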



hadoop git commit: MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly misleading. Contributed by Nathan Roberts (cherry picked from commit cab3c7c8892ad33a7eb0955b01e99872ab95e192)

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 b07c91a16 -> ef3e01a1e


MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly 
misleading. Contributed by Nathan Roberts
(cherry picked from commit cab3c7c8892ad33a7eb0955b01e99872ab95e192)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ef3e01a1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ef3e01a1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ef3e01a1

Branch: refs/heads/branch-2
Commit: ef3e01a1e40d4e1d234b67e7ff5995e73145cff5
Parents: b07c91a
Author: Jason Lowe 
Authored: Tue Nov 24 22:01:03 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 22:01:53 2015 +

--
 hadoop-mapreduce-project/CHANGES.txt | 6 ++
 .../org/apache/hadoop/mapreduce/JobCounter.properties| 8 
 2 files changed, 10 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef3e01a1/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index a668b12..e84be3a 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -364,6 +364,9 @@ Release 2.7.3 - UNRELEASED
 MAPREDUCE-6377. JHS sorting on state column not working in webUi.
 (zhihai xu via devaraj)
 
+MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly
+misleading (Nathan Roberts via jlowe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES
@@ -666,6 +669,9 @@ Release 2.6.3 - UNRELEASED
 position/key information for uncompressed input sometimes. (Zhihai Xu via
 jlowe)
 
+MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly
+misleading (Nathan Roberts via jlowe)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef3e01a1/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
index 7a493a8..774002b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
@@ -27,9 +27,9 @@ SLOTS_MILLIS_MAPS.name=Total time spent by all 
maps in occupied slot
 SLOTS_MILLIS_REDUCES.name= Total time spent by all reduces in occupied 
slots (ms)
 MILLIS_MAPS.name=  Total time spent by all map tasks (ms)
 MILLIS_REDUCES.name=   Total time spent by all reduce tasks (ms)
-MB_MILLIS_MAPS.name=   Total megabyte-seconds taken by all map 
tasks
-MB_MILLIS_REDUCES.name=Total megabyte-seconds taken by all reduce 
tasks
-VCORES_MILLIS_MAPS.name=   Total vcore-seconds taken by all map tasks
-VCORES_MILLIS_REDUCES.name=Total vcore-seconds taken by all reduce 
tasks
+MB_MILLIS_MAPS.name=   Total megabyte-milliseconds taken by all 
map tasks
+MB_MILLIS_REDUCES.name=Total megabyte-milliseconds taken by all 
reduce tasks
+VCORES_MILLIS_MAPS.name=   Total vcore-milliseconds taken by all map 
tasks
+VCORES_MILLIS_REDUCES.name=Total vcore-milliseconds taken by all 
reduce tasks
 FALLOW_SLOTS_MILLIS_MAPS.name= Total time spent by all maps waiting after 
reserving slots (ms)
 FALLOW_SLOTS_MILLIS_REDUCES.name=  Total time spent by all reduces waiting 
after reserving slots (ms)



hadoop git commit: MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly misleading. Contributed by Nathan Roberts (cherry picked from commit cab3c7c8892ad33a7eb0955b01e99872ab95e192)

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 af811030e -> 4ba5667ae


MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly 
misleading. Contributed by Nathan Roberts
(cherry picked from commit cab3c7c8892ad33a7eb0955b01e99872ab95e192)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4ba5667a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4ba5667a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4ba5667a

Branch: refs/heads/branch-2.7
Commit: 4ba5667aed5b5c677eca5b0ed7405fb63452315f
Parents: af81103
Author: Jason Lowe 
Authored: Tue Nov 24 22:01:03 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 22:02:21 2015 +

--
 hadoop-mapreduce-project/CHANGES.txt | 6 ++
 .../org/apache/hadoop/mapreduce/JobCounter.properties| 8 
 2 files changed, 10 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ba5667a/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index d9aab2a..7220b3a 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -20,6 +20,9 @@ Release 2.7.3 - UNRELEASED
 MAPREDUCE-6377. JHS sorting on state column not working in webUi.
 (zhihai xu via devaraj)
 
+MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly
+misleading (Nathan Roberts via jlowe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES
@@ -322,6 +325,9 @@ Release 2.6.3 - UNRELEASED
 position/key information for uncompressed input sometimes. (Zhihai Xu via
 jlowe)
 
+MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly
+misleading (Nathan Roberts via jlowe)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ba5667a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
index 7a493a8..774002b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
@@ -27,9 +27,9 @@ SLOTS_MILLIS_MAPS.name=Total time spent by all 
maps in occupied slot
 SLOTS_MILLIS_REDUCES.name= Total time spent by all reduces in occupied 
slots (ms)
 MILLIS_MAPS.name=  Total time spent by all map tasks (ms)
 MILLIS_REDUCES.name=   Total time spent by all reduce tasks (ms)
-MB_MILLIS_MAPS.name=   Total megabyte-seconds taken by all map 
tasks
-MB_MILLIS_REDUCES.name=Total megabyte-seconds taken by all reduce 
tasks
-VCORES_MILLIS_MAPS.name=   Total vcore-seconds taken by all map tasks
-VCORES_MILLIS_REDUCES.name=Total vcore-seconds taken by all reduce 
tasks
+MB_MILLIS_MAPS.name=   Total megabyte-milliseconds taken by all 
map tasks
+MB_MILLIS_REDUCES.name=Total megabyte-milliseconds taken by all 
reduce tasks
+VCORES_MILLIS_MAPS.name=   Total vcore-milliseconds taken by all map 
tasks
+VCORES_MILLIS_REDUCES.name=Total vcore-milliseconds taken by all 
reduce tasks
 FALLOW_SLOTS_MILLIS_MAPS.name= Total time spent by all maps waiting after 
reserving slots (ms)
 FALLOW_SLOTS_MILLIS_REDUCES.name=  Total time spent by all reduces waiting 
after reserving slots (ms)



hadoop git commit: MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly misleading. Contributed by Nathan Roberts (cherry picked from commit cab3c7c8892ad33a7eb0955b01e99872ab95e192)

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.6 09dbb1156 -> 5074d836b


MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly 
misleading. Contributed by Nathan Roberts
(cherry picked from commit cab3c7c8892ad33a7eb0955b01e99872ab95e192)

Conflicts:

hadoop-mapreduce-project/CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5074d836
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5074d836
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5074d836

Branch: refs/heads/branch-2.6
Commit: 5074d836b08a99361ed8fa348dadfc6746f4b57c
Parents: 09dbb11
Author: Jason Lowe 
Authored: Tue Nov 24 22:03:39 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 22:03:39 2015 +

--
 hadoop-mapreduce-project/CHANGES.txt | 3 +++
 .../org/apache/hadoop/mapreduce/JobCounter.properties| 8 
 2 files changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5074d836/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index b737ecb..a395c8a 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -32,6 +32,9 @@ Release 2.6.3 - UNRELEASED
 position/key information for uncompressed input sometimes. (Zhihai Xu via
 jlowe)
 
+MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly
+misleading (Nathan Roberts via jlowe)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5074d836/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
index 7a493a8..774002b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/org/apache/hadoop/mapreduce/JobCounter.properties
@@ -27,9 +27,9 @@ SLOTS_MILLIS_MAPS.name=Total time spent by all 
maps in occupied slot
 SLOTS_MILLIS_REDUCES.name= Total time spent by all reduces in occupied 
slots (ms)
 MILLIS_MAPS.name=  Total time spent by all map tasks (ms)
 MILLIS_REDUCES.name=   Total time spent by all reduce tasks (ms)
-MB_MILLIS_MAPS.name=   Total megabyte-seconds taken by all map 
tasks
-MB_MILLIS_REDUCES.name=Total megabyte-seconds taken by all reduce 
tasks
-VCORES_MILLIS_MAPS.name=   Total vcore-seconds taken by all map tasks
-VCORES_MILLIS_REDUCES.name=Total vcore-seconds taken by all reduce 
tasks
+MB_MILLIS_MAPS.name=   Total megabyte-milliseconds taken by all 
map tasks
+MB_MILLIS_REDUCES.name=Total megabyte-milliseconds taken by all 
reduce tasks
+VCORES_MILLIS_MAPS.name=   Total vcore-milliseconds taken by all map 
tasks
+VCORES_MILLIS_REDUCES.name=Total vcore-milliseconds taken by all 
reduce tasks
 FALLOW_SLOTS_MILLIS_MAPS.name= Total time spent by all maps waiting after 
reserving slots (ms)
 FALLOW_SLOTS_MILLIS_REDUCES.name=  Total time spent by all reduces waiting 
after reserving slots (ms)



hadoop git commit: MAPREDUCE-5870. Support for passing Job priority through Application Submission Context in Mapreduce Side. Contributed by Sunil G

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk cab3c7c88 -> f634505d4


MAPREDUCE-5870. Support for passing Job priority through Application Submission 
Context in Mapreduce Side. Contributed by Sunil G


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f634505d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f634505d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f634505d

Branch: refs/heads/trunk
Commit: f634505d48d97e4d461980d68a0cbdf87223646d
Parents: cab3c7c
Author: Jason Lowe 
Authored: Tue Nov 24 22:07:26 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 22:07:26 2015 +

--
 hadoop-mapreduce-project/CHANGES.txt|  3 +
 .../v2/app/local/LocalContainerAllocator.java   |  6 ++
 .../mapreduce/v2/app/job/impl/TestJobImpl.java  |  2 +-
 .../app/local/TestLocalContainerAllocator.java  |  5 +-
 .../apache/hadoop/mapreduce/TypeConverter.java  | 48 +-
 .../hadoop/mapreduce/TestTypeConverter.java |  4 +
 .../java/org/apache/hadoop/mapred/JobConf.java  | 92 ++--
 .../org/apache/hadoop/mapred/JobPriority.java   |  6 ++
 .../org/apache/hadoop/mapred/JobStatus.java |  2 +-
 .../java/org/apache/hadoop/mapreduce/Job.java   | 67 --
 .../apache/hadoop/mapreduce/JobPriority.java|  7 +-
 .../org/apache/hadoop/mapreduce/tools/CLI.java  | 29 --
 .../org/apache/hadoop/mapred/TestJobConf.java   | 47 +-
 .../org/apache/hadoop/mapreduce/TestJob.java|  2 +-
 .../org/apache/hadoop/mapred/YARNRunner.java| 19 +++-
 .../apache/hadoop/mapred/TestYARNRunner.java| 25 ++
 .../hadoop/mapreduce/TestMRJobClient.java   |  5 +-
 .../apache/hadoop/mapreduce/v2/TestMRJobs.java  | 63 ++
 18 files changed, 401 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f634505d/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 3cbacc4..ef4bb18 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -434,6 +434,9 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-6499. Add elapsed time for retired job in JobHistoryServer WebUI.
 (Lin Yiqun via aajisaka)
 
+MAPREDUCE-5870. Support for passing Job priority through Application
+Submission Context in Mapreduce Side (Sunil G via jlowe)
+
   OPTIMIZATIONS
 
 MAPREDUCE-6376. Add avro binary support for jhist files (Ray Chiang via

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f634505d/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
index aed1023..7437357 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
@@ -43,6 +43,7 @@ import 
org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
 import org.apache.hadoop.yarn.api.records.Container;
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.ResourceRequest;
 import org.apache.hadoop.yarn.api.records.Token;
 import org.apache.hadoop.yarn.client.ClientRMProxy;
@@ -146,6 +147,11 @@ public class LocalContainerAllocator extends RMCommunicator
   if (token != null) {
 updateAMRMToken(token);
   }
+  Priority priorityFromResponse = Priority.newInstance(allocateResponse
+  .getApplicationPriority().getPriority());
+
+  // Update the job priority to Job directly.
+  getJob().setJobPriority(priorityFromResponse);
 }
   }
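
For context, a hedged sketch of the kind of conversion this patch routes through TypeConverter: a MapReduce JobPriority becomes a YARN Priority that can be carried on the ApplicationSubmissionContext. The integer values in the switch are illustrative assumptions, not the exact mapping applied by the patch.

import org.apache.hadoop.mapred.JobPriority;
import org.apache.hadoop.yarn.api.records.Priority;

public class JobPriorityMappingSketch {
  // Map an MR-level priority onto a YARN Priority (illustrative values).
  static Priority toYarnPriority(JobPriority mrPriority) {
    switch (mrPriority) {
      case VERY_HIGH: return Priority.newInstance(5);
      case HIGH:      return Priority.newInstance(4);
      case NORMAL:    return Priority.newInstance(3);
      case LOW:       return Priority.newInstance(2);
      case VERY_LOW:  return Priority.newInstance(1);
      default:        return Priority.newInstance(0);
    }
  }

  public static void main(String[] args) {
    System.out.println(toYarnPriority(JobPriority.HIGH));
  }
}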
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f634505d/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/te

hadoop git commit: Revert "HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30 seconds for printing info log messags."

2015-11-24 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.6 5074d836b -> 83fcbd49a


Revert "HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30 
seconds for printing info log messages."

This reverts commit 09dbb11566410f2e9101555637ad81c7acc7edfd.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/83fcbd49
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/83fcbd49
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/83fcbd49

Branch: refs/heads/branch-2.6
Commit: 83fcbd49a7508b015a9f91a8f94215b92db9ea2f
Parents: 5074d83
Author: Tsz-Wo Nicholas Sze 
Authored: Tue Nov 24 14:08:45 2015 -0800
Committer: Tsz-Wo Nicholas Sze 
Committed: Tue Nov 24 14:08:45 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 ---
 .../hadoop/hdfs/server/blockmanagement/BlockManager.java | 8 +---
 2 files changed, 5 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/83fcbd49/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index fb6df6e..5e683e3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -23,9 +23,6 @@ Release 2.6.3 - UNRELEASED
 HDFS-9289. Make DataStreamer#block thread safe and verify genStamp in
 commitBlock. (Chang Li via zhz)
 
-HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30
-seconds for printing info log messags.  (szetszwo)
-
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/83fcbd49/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 8dbee81..feaf843 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -2853,9 +2853,11 @@ public class BlockManager {
 for(DatanodeStorageInfo storage : blocksMap.getStorages(block, 
State.NORMAL)) {
   final DatanodeDescriptor cur = storage.getDatanodeDescriptor();
   if (storage.areBlockContentsStale()) {
-LOG.trace("BLOCK* processOverReplicatedBlock: Postponing {}"
-+ " since storage {} does not yet have up-to-date information.",
-block, storage);
+LOG.info("BLOCK* processOverReplicatedBlock: " +
+"Postponing processing of over-replicated " +
+block + " since storage + " + storage
++ "datanode " + cur + " does not yet have up-to-date " +
+"block information.");
 postponeBlock(block);
 return;
   }



hadoop git commit: MAPREDUCE-5870. Support for passing Job priority through Application Submission Context in Mapreduce Side. Contributed by Sunil G (cherry picked from commit f634505d48d97e4d461980d6

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ef3e01a1e -> 2f1821850


MAPREDUCE-5870. Support for passing Job priority through Application Submission 
Context in Mapreduce Side. Contributed by Sunil G
(cherry picked from commit f634505d48d97e4d461980d68a0cbdf87223646d)

Conflicts:


hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobConf.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2f182185
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2f182185
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2f182185

Branch: refs/heads/branch-2
Commit: 2f18218508524b01a3e19b90950f63920f350c6f
Parents: ef3e01a
Author: Jason Lowe 
Authored: Tue Nov 24 22:15:37 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 22:15:37 2015 +

--
 hadoop-mapreduce-project/CHANGES.txt|  3 +
 .../v2/app/local/LocalContainerAllocator.java   |  6 ++
 .../mapreduce/v2/app/job/impl/TestJobImpl.java  |  2 +-
 .../app/local/TestLocalContainerAllocator.java  |  5 +-
 .../apache/hadoop/mapreduce/TypeConverter.java  | 48 +-
 .../hadoop/mapreduce/TestTypeConverter.java |  4 +
 .../java/org/apache/hadoop/mapred/JobConf.java  | 92 ++--
 .../org/apache/hadoop/mapred/JobPriority.java   |  6 ++
 .../org/apache/hadoop/mapred/JobStatus.java |  2 +-
 .../java/org/apache/hadoop/mapreduce/Job.java   | 67 --
 .../apache/hadoop/mapreduce/JobPriority.java|  7 +-
 .../org/apache/hadoop/mapreduce/tools/CLI.java  | 29 --
 .../org/apache/hadoop/mapred/TestJobConf.java   | 47 +-
 .../org/apache/hadoop/mapreduce/TestJob.java|  2 +-
 .../org/apache/hadoop/mapred/YARNRunner.java| 19 +++-
 .../apache/hadoop/mapred/TestYARNRunner.java| 25 ++
 .../hadoop/mapreduce/TestMRJobClient.java   |  5 +-
 .../apache/hadoop/mapreduce/v2/TestMRJobs.java  | 63 ++
 18 files changed, 401 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f182185/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index e84be3a..324f3a4 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -139,6 +139,9 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-6499. Add elapsed time for retired job in JobHistoryServer WebUI.
 (Lin Yiqun via aajisaka)
 
+MAPREDUCE-5870. Support for passing Job priority through Application
+Submission Context in Mapreduce Side (Sunil G via jlowe)
+
   OPTIMIZATIONS
 
 MAPREDUCE-6376. Add avro binary support for jhist files (Ray Chiang via

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f182185/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
index aed1023..7437357 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
@@ -43,6 +43,7 @@ import 
org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
 import org.apache.hadoop.yarn.api.records.Container;
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.ResourceRequest;
 import org.apache.hadoop.yarn.api.records.Token;
 import org.apache.hadoop.yarn.client.ClientRMProxy;
@@ -146,6 +147,11 @@ public class LocalContainerAllocator extends RMCommunicator
   if (token != null) {
 updateAMRMToken(token);
   }
+  Priority priorityFromResponse = Priority.newInstance(allocateResponse
+  .getApplicationPriority().getPriority());
+
+  // Update the job priority to Job directly.
+  getJob().setJobPriority(priorityFromResponse);
 }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f182185/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/jav

hadoop git commit: YARN-4132. Separate configs for nodemanager to resourcemanager connection timeout and retries. Contributed by Chang Li (cherry picked from commit 4ac6799d4a8b071e0d367c2d709e84d8ea0

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 2f1821850 -> 9e54433c6


YARN-4132. Separate configs for nodemanager to resourcemanager connection 
timeout and retries. Contributed by Chang Li
(cherry picked from commit 4ac6799d4a8b071e0d367c2d709e84d8ea06942d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9e54433c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9e54433c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9e54433c

Branch: refs/heads/branch-2
Commit: 9e54433c6c0911f7f61f35c6bff75868bf029b1a
Parents: 2f18218
Author: Jason Lowe 
Authored: Tue Nov 24 22:35:37 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 22:36:19 2015 +

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../hadoop/yarn/conf/YarnConfiguration.java | 18 -
 .../org/apache/hadoop/yarn/client/RMProxy.java  | 39 -
 .../src/main/resources/yarn-default.xml | 20 +
 .../hadoop/yarn/server/api/ServerRMProxy.java   | 22 +-
 .../nodemanager/TestNodeStatusUpdater.java  | 83 
 6 files changed, 181 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9e54433c/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index fa2049a..fee03c5 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -519,6 +519,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3980. Plumb resource-utilization info in node heartbeat through to 
the 
 scheduler. (Inigo Goiri via kasha)
 
+YARN-4132. Separate configs for nodemanager to resourcemanager connection
+timeout and retries (Chang Li via jlowe)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9e54433c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 14c1ffc..5a36bd1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2050,7 +2050,23 @@ public class YarnConfiguration extends Configuration {
   public static final String YARN_HTTP_POLICY_KEY = YARN_PREFIX + 
"http.policy";
   public static final String YARN_HTTP_POLICY_DEFAULT = 
HttpConfig.Policy.HTTP_ONLY
   .name();
-  
+
+  /**
+   * Max time to wait for NM to connect to RM.
+   * When not set, proxy will fall back to use value of
+   * RESOURCEMANAGER_CONNECT_MAX_WAIT_MS.
+   */
+  public static final String NM_RESOURCEMANAGER_CONNECT_MAX_WAIT_MS =
+  YARN_PREFIX + "nodemanager.resourcemanager.connect.max-wait.ms";
+
+  /**
+   * Time interval between each NM attempt to connect to RM.
+   * When not set, proxy will fall back to use value of
+   * RESOURCEMANAGER_CONNECT_RETRY_INTERVAL_MS.
+   */
+  public static final String NM_RESOURCEMANAGER_CONNECT_RETRY_INTERVAL_MS =
+  YARN_PREFIX + "nodemanager.resourcemanager.connect.retry-interval.ms";
+
   /**
* Node-labels configurations
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9e54433c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
index 23e1691..3779ce5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
@@ -88,7 +88,32 @@ public class RMProxy {
 YarnConfiguration conf = (configuration instanceof YarnConfiguration)
 ? (YarnConfiguration) configuration
 : new YarnConfiguration(configuration);
-RetryPolicy retryPolicy = createRetryPolicy(conf);
+RetryPolicy retryPolicy =
+createRetryPolicy(conf);
+return createRMProxy(conf, protocol, instance, retryPolicy);
+  }
+
+  /**
+   * Create a proxy for the specified protocol. F

hadoop git commit: YARN-4132. Separate configs for nodemanager to resourcemanager connection timeout and retries. Contributed by Chang Li

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk f634505d4 -> 4ac6799d4


YARN-4132. Separate configs for nodemanager to resourcemanager connection 
timeout and retries. Contributed by Chang Li


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4ac6799d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4ac6799d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4ac6799d

Branch: refs/heads/trunk
Commit: 4ac6799d4a8b071e0d367c2d709e84d8ea06942d
Parents: f634505
Author: Jason Lowe 
Authored: Tue Nov 24 22:35:37 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 22:35:37 2015 +

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../hadoop/yarn/conf/YarnConfiguration.java | 18 -
 .../org/apache/hadoop/yarn/client/RMProxy.java  | 39 -
 .../src/main/resources/yarn-default.xml | 20 +
 .../hadoop/yarn/server/api/ServerRMProxy.java   | 22 +-
 .../nodemanager/TestNodeStatusUpdater.java  | 83 
 6 files changed, 181 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ac6799d/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 204c338..4483589 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -571,6 +571,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3980. Plumb resource-utilization info in node heartbeat through to 
the 
 scheduler. (Inigo Goiri via kasha)
 
+YARN-4132. Separate configs for nodemanager to resourcemanager connection
+timeout and retries (Chang Li via jlowe)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ac6799d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 18e6082..f493fd3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2050,7 +2050,23 @@ public class YarnConfiguration extends Configuration {
   public static final String YARN_HTTP_POLICY_KEY = YARN_PREFIX + 
"http.policy";
   public static final String YARN_HTTP_POLICY_DEFAULT = 
HttpConfig.Policy.HTTP_ONLY
   .name();
-  
+
+  /**
+   * Max time to wait for NM to connect to RM.
+   * When not set, proxy will fall back to use value of
+   * RESOURCEMANAGER_CONNECT_MAX_WAIT_MS.
+   */
+  public static final String NM_RESOURCEMANAGER_CONNECT_MAX_WAIT_MS =
+  YARN_PREFIX + "nodemanager.resourcemanager.connect.max-wait.ms";
+
+  /**
+   * Time interval between each NM attempt to connect to RM.
+   * When not set, proxy will fall back to use value of
+   * RESOURCEMANAGER_CONNECT_RETRY_INTERVAL_MS.
+   */
+  public static final String NM_RESOURCEMANAGER_CONNECT_RETRY_INTERVAL_MS =
+  YARN_PREFIX + "nodemanager.resourcemanager.connect.retry-interval.ms";
+
   /**
* Node-labels configurations
*/
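
A short usage sketch of the two new settings, assuming a deployment that only wants NodeManager-specific overrides; the property names come directly from the constants above, while the numeric values are arbitrary examples.

import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class NmRmConnectConfigSketch {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // NM-specific overrides added by YARN-4132. When unset, the NM proxy
    // falls back to the generic yarn.resourcemanager.connect.* settings.
    conf.setLong("yarn.nodemanager.resourcemanager.connect.max-wait.ms", 300000L);
    conf.setLong(
        "yarn.nodemanager.resourcemanager.connect.retry-interval.ms", 10000L);
    System.out.println(
        conf.get("yarn.nodemanager.resourcemanager.connect.max-wait.ms"));
  }
}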

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ac6799d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
index 23e1691..3779ce5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
@@ -88,7 +88,32 @@ public class RMProxy {
 YarnConfiguration conf = (configuration instanceof YarnConfiguration)
 ? (YarnConfiguration) configuration
 : new YarnConfiguration(configuration);
-RetryPolicy retryPolicy = createRetryPolicy(conf);
+RetryPolicy retryPolicy =
+createRetryPolicy(conf);
+return createRMProxy(conf, protocol, instance, retryPolicy);
+  }
+
+  /**
+   * Create a proxy for the specified protocol. For non-HA,
+   * this is a direct connection to the ResourceManager address

hadoop git commit: HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30 seconds for printing info log messages.

2015-11-24 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.6 83fcbd49a -> d2518fd56


HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30 seconds 
for printing info log messages.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d2518fd5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d2518fd5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d2518fd5

Branch: refs/heads/branch-2.6
Commit: d2518fd5627220736e691b4a55638fc1f57d612e
Parents: 83fcbd4
Author: Tsz-Wo Nicholas Sze 
Authored: Tue Nov 24 14:34:51 2015 -0800
Committer: Tsz-Wo Nicholas Sze 
Committed: Tue Nov 24 14:34:51 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   |  3 +++
 .../hadoop/hdfs/server/blockmanagement/BlockManager.java  | 10 +-
 2 files changed, 8 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2518fd5/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 5e683e3..fb6df6e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -23,6 +23,9 @@ Release 2.6.3 - UNRELEASED
 HDFS-9289. Make DataStreamer#block thread safe and verify genStamp in
 commitBlock. (Chang Li via zhz)
 
+HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30
+seconds for printing info log messags.  (szetszwo)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2518fd5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index feaf843..123a706 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -2853,11 +2853,11 @@ public class BlockManager {
 for(DatanodeStorageInfo storage : blocksMap.getStorages(block, 
State.NORMAL)) {
   final DatanodeDescriptor cur = storage.getDatanodeDescriptor();
   if (storage.areBlockContentsStale()) {
-LOG.info("BLOCK* processOverReplicatedBlock: " +
-"Postponing processing of over-replicated " +
-block + " since storage + " + storage
-+ "datanode " + cur + " does not yet have up-to-date " +
-"block information.");
+if (LOG.isTraceEnabled()) {
+  LOG.trace("BLOCK* processOverReplicatedBlock: Postponing " + block
+  + " since storage " + storage
+  + " does not yet have up-to-date information.");
+}
 postponeBlock(block);
 return;
   }
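
The fix applies the standard guarded-logging idiom: the message string is only built when trace logging is enabled, so a recommission involving hundreds of thousands of blocks no longer pays for info-level string concatenation in this loop. A standalone sketch of the idiom, with a generic logger and made-up arguments, follows.

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class GuardedTraceLogging {
  private static final Log LOG = LogFactory.getLog(GuardedTraceLogging.class);

  static void process(Object block, Object storage) {
    // Concatenation happens only when trace output is actually wanted.
    if (LOG.isTraceEnabled()) {
      LOG.trace("Postponing " + block + " since storage " + storage
          + " does not yet have up-to-date information.");
    }
  }

  public static void main(String[] args) {
    process("blk_1", "storage-1");
  }
}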



hadoop git commit: HDFS-9359. Test libhdfs++ with existing libhdfs tests. Contributed by Stephen Walkauskas.

2015-11-24 Thread jhc
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-8707 ebfb48b88 -> 8b7769557


HDFS-9359.  Test libhdfs++ with existing libhdfs tests.  Contributed by Stephen 
Walkauskas.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8b776955
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8b776955
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8b776955

Branch: refs/heads/HDFS-8707
Commit: 8b77695572d490c548782c798b2cb2fc72bd11f2
Parents: ebfb48b
Author: James 
Authored: Tue Nov 24 17:56:26 2015 -0500
Committer: James 
Committed: Tue Nov 24 17:56:26 2015 -0500

--
 .../main/native/libhdfspp/tests/CMakeLists.txt  |  22 ++
 .../src/main/native/libhdfspp/tests/hdfs_shim.c | 381 +++
 .../native/libhdfspp/tests/libhdfs_wrapper.c|  22 ++
 .../native/libhdfspp/tests/libhdfs_wrapper.h|  28 ++
 .../libhdfspp/tests/libhdfs_wrapper_defines.h   |  92 +
 .../libhdfspp/tests/libhdfs_wrapper_undefs.h|  92 +
 .../native/libhdfspp/tests/libhdfspp_wrapper.cc |  22 ++
 .../native/libhdfspp/tests/libhdfspp_wrapper.h  |  28 ++
 .../libhdfspp/tests/libhdfspp_wrapper_defines.h |  92 +
 9 files changed, 779 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8b776955/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/CMakeLists.txt
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/CMakeLists.txt
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/CMakeLists.txt
index 51b6bfe..abd3858 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/CMakeLists.txt
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/CMakeLists.txt
@@ -16,6 +16,24 @@
 # limitations under the License.
 #
 
+# Delegate some functionality to libhdfs, until libhdfspp is complete.
+set (LIBHDFS_SRC_DIR ../../libhdfs)
+set (LIBHDFS_TESTS_DIR ../../libhdfs-tests)
+set (LIBHDFSPP_SRC_DIR ..)
+set (LIBHDFSPP_LIB_DIR ${LIBHDFSPP_SRC_DIR}/lib)
+set (LIBHDFSPP_BINDING_C ${LIBHDFSPP_LIB_DIR}/bindings/c)
+include_directories(
+${GENERATED_JAVAH}
+${CMAKE_CURRENT_SOURCE_DIR}
+${CMAKE_BINARY_DIR}
+${JNI_INCLUDE_DIRS}
+${LIBHDFS_SRC_DIR}/include
+${LIBHDFS_SRC_DIR}/..
+${LIBHDFS_SRC_DIR}
+${OS_DIR}
+)
+add_library(hdfspp_test_shim_static STATIC hdfs_shim.c libhdfs_wrapper.c 
libhdfspp_wrapper.cc ${LIBHDFSPP_BINDING_C}/hdfs_cpp.cc)
+
 add_library(test_common OBJECT mock_connection.cc)
 
 set(PROTOBUF_IMPORT_DIRS ${PROTO_HADOOP_TEST_DIR})
@@ -49,3 +67,7 @@ add_test(bad_datanode bad_datanode_test)
 add_executable(node_exclusion_test node_exclusion_test.cc)
 target_link_libraries(node_exclusion_test fs gmock_main common 
${PROTOBUF_LIBRARIES} ${OPENSSL_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
 add_test(node_exclusion node_exclusion_test)
+
+build_libhdfs_test(libhdfs_threaded hdfspp_test_shim_static expect.c 
test_libhdfs_threaded.c ${OS_DIR}/thread.c)
+link_libhdfs_test(libhdfs_threaded hdfspp_test_shim_static fs reader rpc proto 
common ${PROTOBUF_LIBRARIES} ${OPENSSL_LIBRARIES} native_mini_dfs 
${JAVA_JVM_LIBRARY})
+add_libhdfs_test(libhdfs_threaded hdfspp_test_shim_static)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8b776955/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_shim.c
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_shim.c
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_shim.c
new file mode 100644
index 000..3954440
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_shim.c
@@ -0,0 +1,381 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "libhdfs_wrapper.h"
+#include "libhdfspp_wrapper.h"
+#include "h

hadoop git commit: YARN-4365. FileSystemNodeLabelStore should check for root dir existence on startup. Contributed by Kuhu Shukla

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4ac6799d4 -> f5acf94ec


YARN-4365. FileSystemNodeLabelStore should check for root dir existence on 
startup. Contributed by Kuhu Shukla


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f5acf94e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f5acf94e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f5acf94e

Branch: refs/heads/trunk
Commit: f5acf94ecafb301a0cc8e8f91f19c8bcbc8da701
Parents: 4ac6799
Author: Jason Lowe 
Authored: Tue Nov 24 23:47:10 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 23:47:10 2015 +

--
 hadoop-yarn-project/CHANGES.txt |  6 
 .../nodelabels/FileSystemNodeLabelsStore.java   |  6 ++--
 .../TestFileSystemNodeLabelsStore.java  | 29 
 3 files changed, 39 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5acf94e/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 4483589..155d5c9 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1100,6 +1100,9 @@ Release 2.7.3 - UNRELEASED
 YARN-4344. NMs reconnecting with changed capabilities can lead to wrong
 cluster resource calculations (Varun Vasudev via jlowe)
 
+YARN-4365. FileSystemNodeLabelStore should check for root dir existence on
+startup (Kuhu Shukla via jlowe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES
@@ -1960,6 +1963,9 @@ Release 2.6.3 - UNRELEASED
 YARN-3925. ContainerLogsUtils#getContainerLogFile fails to read container
 log files from full disks. (zhihai xu via jlowe)
 
+YARN-4365. FileSystemNodeLabelStore should check for root dir existence on
+startup (Kuhu Shukla via jlowe)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5acf94e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
index 20dc67c..c9727a2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
@@ -88,7 +88,9 @@ public class FileSystemNodeLabelsStore extends 
NodeLabelsStore {
 setFileSystem(conf);
 
 // mkdir of root dir path
-fs.mkdirs(fsWorkingPath);
+if (!fs.exists(fsWorkingPath)) {
+  fs.mkdirs(fsWorkingPath);
+}
   }
 
   @Override
@@ -96,7 +98,7 @@ public class FileSystemNodeLabelsStore extends 
NodeLabelsStore {
 IOUtils.cleanup(LOG, fs, editlogOs);
   }
 
-  private void setFileSystem(Configuration conf) throws IOException {
+  void setFileSystem(Configuration conf) throws IOException {
 Configuration confCopy = new Configuration(conf);
 confCopy.setBoolean("dfs.client.retry.policy.enabled", true);
 String retryPolicy =
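
Given the Mockito import added to the test below, one plausible way to exercise the new guard is sketched here: with a mocked FileSystem that reports the root path as existing, mkdirs() should never be invoked. This is an illustrative standalone snippet, not the actual test case.

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.mockito.Mockito;

public class RootDirExistenceCheckSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = Mockito.mock(FileSystem.class);
    Path root = new Path("/yarn/node-labels");   // hypothetical store root
    Mockito.when(fs.exists(root)).thenReturn(true);

    // The store's startup logic after YARN-4365, inlined for illustration.
    if (!fs.exists(root)) {
      fs.mkdirs(root);
    }

    Mockito.verify(fs, Mockito.never()).mkdirs(root);
    System.out.println("mkdirs skipped because the root dir already exists");
  }
}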

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5acf94e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
index 4b052c9..4929f95 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
@@ -24,6 +24,8 @@ import java.util.Arrays;
 import java.util.Map;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.yarn.api.records.NodeLabel;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.event.InlineDispatcher;
@@ -33,6 +35,7 @@ import org.junit.Before;
 import org.junit.Test;
 
 import com.google.common.collect.ImmutableMap;
+i

hadoop git commit: YARN-4365. FileSystemNodeLabelStore should check for root dir existence on startup. Contributed by Kuhu Shukla (cherry picked from commit f5acf94ecafb301a0cc8e8f91f19c8bcbc8da701)

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 9e54433c6 -> 3a2f5d632


YARN-4365. FileSystemNodeLabelStore should check for root dir existence on 
startup. Contributed by Kuhu Shukla
(cherry picked from commit f5acf94ecafb301a0cc8e8f91f19c8bcbc8da701)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3a2f5d63
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3a2f5d63
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3a2f5d63

Branch: refs/heads/branch-2
Commit: 3a2f5d6329e80884b795899f0c42c0d61d729e15
Parents: 9e54433
Author: Jason Lowe 
Authored: Tue Nov 24 23:47:10 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 23:47:54 2015 +

--
 hadoop-yarn-project/CHANGES.txt |  6 
 .../nodelabels/FileSystemNodeLabelsStore.java   |  6 ++--
 .../TestFileSystemNodeLabelsStore.java  | 29 
 3 files changed, 39 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3a2f5d63/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index fee03c5..fe1d3f0 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1048,6 +1048,9 @@ Release 2.7.3 - UNRELEASED
 YARN-4344. NMs reconnecting with changed capabilities can lead to wrong
 cluster resource calculations (Varun Vasudev via jlowe)
 
+YARN-4365. FileSystemNodeLabelStore should check for root dir existence on
+startup (Kuhu Shukla via jlowe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES
@@ -1914,6 +1917,9 @@ Release 2.6.3 - UNRELEASED
 YARN-3925. ContainerLogsUtils#getContainerLogFile fails to read container
 log files from full disks. (zhihai xu via jlowe)
 
+YARN-4365. FileSystemNodeLabelStore should check for root dir existence on
+startup (Kuhu Shukla via jlowe)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3a2f5d63/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
index 20dc67c..c9727a2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
@@ -88,7 +88,9 @@ public class FileSystemNodeLabelsStore extends 
NodeLabelsStore {
 setFileSystem(conf);
 
 // mkdir of root dir path
-fs.mkdirs(fsWorkingPath);
+if (!fs.exists(fsWorkingPath)) {
+  fs.mkdirs(fsWorkingPath);
+}
   }
 
   @Override
@@ -96,7 +98,7 @@ public class FileSystemNodeLabelsStore extends 
NodeLabelsStore {
 IOUtils.cleanup(LOG, fs, editlogOs);
   }
 
-  private void setFileSystem(Configuration conf) throws IOException {
+  void setFileSystem(Configuration conf) throws IOException {
 Configuration confCopy = new Configuration(conf);
 confCopy.setBoolean("dfs.client.retry.policy.enabled", true);
 String retryPolicy =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3a2f5d63/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
index 4b052c9..4929f95 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
@@ -24,6 +24,8 @@ import java.util.Arrays;
 import java.util.Map;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.yarn.api.records.NodeLabel;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.event.InlineDispatcher;
@@ -33,6 +35,7 @@ import org.junit.Before;
 

hadoop git commit: YARN-4365. FileSystemNodeLabelStore should check for root dir existence on startup. Contributed by Kuhu Shukla (cherry picked from commit f5acf94ecafb301a0cc8e8f91f19c8bcbc8da701)

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 4ba5667ae -> b68f527b9


YARN-4365. FileSystemNodeLabelStore should check for root dir existence on 
startup. Contributed by Kuhu Shukla
(cherry picked from commit f5acf94ecafb301a0cc8e8f91f19c8bcbc8da701)

Conflicts:


hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b68f527b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b68f527b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b68f527b

Branch: refs/heads/branch-2.7
Commit: b68f527b9fee64e3c92766befe9544f780f1347e
Parents: 4ba5667
Author: Jason Lowe 
Authored: Tue Nov 24 23:52:46 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 23:52:46 2015 +

--
 hadoop-yarn-project/CHANGES.txt |  6 
 .../nodelabels/FileSystemNodeLabelsStore.java   |  6 ++--
 .../TestFileSystemNodeLabelsStore.java  | 29 
 3 files changed, 39 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b68f527b/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index a2573fb..a2e9798 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -31,6 +31,9 @@ Release 2.7.3 - UNRELEASED
 YARN-4344. NMs reconnecting with changed capabilities can lead to wrong
 cluster resource calculations (Varun Vasudev via jlowe)
 
+YARN-4365. FileSystemNodeLabelStore should check for root dir existence on
+startup (Kuhu Shukla via jlowe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES
@@ -917,6 +920,9 @@ Release 2.6.3 - UNRELEASED
 YARN-3925. ContainerLogsUtils#getContainerLogFile fails to read container
 log files from full disks. (zhihai xu via jlowe)
 
+YARN-4365. FileSystemNodeLabelStore should check for root dir existence on
+startup (Kuhu Shukla via jlowe)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b68f527b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
index 6e685ee..7648690 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
@@ -84,7 +84,9 @@ public class FileSystemNodeLabelsStore extends 
NodeLabelsStore {
 setFileSystem(conf);
 
 // mkdir of root dir path
-fs.mkdirs(fsWorkingPath);
+if (!fs.exists(fsWorkingPath)) {
+  fs.mkdirs(fsWorkingPath);
+}
   }
 
   @Override
@@ -97,7 +99,7 @@ public class FileSystemNodeLabelsStore extends 
NodeLabelsStore {
 }
   }
 
-  private void setFileSystem(Configuration conf) throws IOException {
+  void setFileSystem(Configuration conf) throws IOException {
 Configuration confCopy = new Configuration(conf);
 confCopy.setBoolean("dfs.client.retry.policy.enabled", true);
 String retryPolicy =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b68f527b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
index 56da848..be4e70d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
@@ -24,6 +24,8 @@ import java.util.Arrays;
 import java.util.Map;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.eve

hadoop git commit: YARN-4365. FileSystemNodeLabelStore should check for root dir existence on startup. Contributed by Kuhu Shukla (cherry picked from commit f5acf94ecafb301a0cc8e8f91f19c8bcbc8da701)

2015-11-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.6 d2518fd56 -> f61e3320c


YARN-4365. FileSystemNodeLabelStore should check for root dir existence on 
startup. Contributed by Kuhu Shukla
(cherry picked from commit f5acf94ecafb301a0cc8e8f91f19c8bcbc8da701)

Conflicts:

hadoop-yarn-project/CHANGES.txt

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f61e3320
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f61e3320
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f61e3320

Branch: refs/heads/branch-2.6
Commit: f61e3320cbc50870c603671bf4b18d28eee95270
Parents: d2518fd
Author: Jason Lowe 
Authored: Tue Nov 24 23:59:40 2015 +
Committer: Jason Lowe 
Committed: Tue Nov 24 23:59:40 2015 +

--
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../nodelabels/FileSystemNodeLabelsStore.java   |  6 ++--
 .../TestFileSystemNodeLabelsStore.java  | 29 
 3 files changed, 36 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f61e3320/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index fb29938..26c3a19 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -36,6 +36,9 @@ Release 2.6.3 - UNRELEASED
 YARN-3878. AsyncDispatcher can hang while stopping if it is configured for 
 draining events on stop. (Varun Saxena via kasha)
 
+YARN-4365. FileSystemNodeLabelStore should check for root dir existence on
+startup (Kuhu Shukla via jlowe)
+
 Release 2.6.2 - 2015-10-28
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f61e3320/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
index 6e685ee..7648690 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/FileSystemNodeLabelsStore.java
@@ -84,7 +84,9 @@ public class FileSystemNodeLabelsStore extends 
NodeLabelsStore {
 setFileSystem(conf);
 
 // mkdir of root dir path
-fs.mkdirs(fsWorkingPath);
+if (!fs.exists(fsWorkingPath)) {
+  fs.mkdirs(fsWorkingPath);
+}
   }
 
   @Override
@@ -97,7 +99,7 @@ public class FileSystemNodeLabelsStore extends 
NodeLabelsStore {
 }
   }
 
-  private void setFileSystem(Configuration conf) throws IOException {
+  void setFileSystem(Configuration conf) throws IOException {
 Configuration confCopy = new Configuration(conf);
 confCopy.setBoolean("dfs.client.retry.policy.enabled", true);
 String retryPolicy =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f61e3320/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
index 45a2d8d..fd57865 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
@@ -24,6 +24,8 @@ import java.util.Arrays;
 import java.util.Map;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.event.InlineDispatcher;
 import org.junit.After;
@@ -32,6 +34,7 @@ import org.junit.Before;
 import org.junit.Test;
 
 import com.google.common.collect.ImmutableMap;
+import org.mockito.Mockito;
 
 public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase {
   MockNodeLabelManager mgr = null;
@@ -249,4 +252,30

hadoop git commit: HDFS-8807. dfs.datanode.data.dir does not handle spaces between storageType and URI correctly. Contributed by Anu Engineer

2015-11-24 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/trunk f5acf94ec -> 78ec38b2e


HDFS-8807.  dfs.datanode.data.dir does not handle spaces between storageType 
and URI correctly.  Contributed by Anu Engineer


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/78ec38b2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/78ec38b2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/78ec38b2

Branch: refs/heads/trunk
Commit: 78ec38b2ede8bdf3874b2ae051af9580007a9ba1
Parents: f5acf94
Author: Tsz-Wo Nicholas Sze 
Authored: Tue Nov 24 16:01:55 2015 -0800
Committer: Tsz-Wo Nicholas Sze 
Committed: Tue Nov 24 16:01:55 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../hdfs/server/datanode/StorageLocation.java   |  2 +-
 .../hdfs/server/datanode/TestDataDirs.java  | 29 ++--
 3 files changed, 25 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/78ec38b2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 92897b9..db49e54 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1681,6 +1681,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess
 replicas. (Xiao Chen via mingma)
 
+HDFS-8807.  dfs.datanode.data.dir does not handle spaces between
+storageType and URI correctly.  (Anu Engineer via szetszwo)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/78ec38b2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
index 7873459..46e8e8a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
@@ -87,7 +87,7 @@ public class StorageLocation {
 
 if (matcher.matches()) {
   String classString = matcher.group(1);
-  location = matcher.group(2);
+  location = matcher.group(2).trim();
   if (!classString.isEmpty()) {
 storageType =
 StorageType.valueOf(StringUtils.toUpperCase(classString));
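
The behavioral change is the trim() on the captured URI, which lets entries such as "[DISK] /data/1" parse cleanly. A self-contained sketch of that parsing idea follows; the regular expression is a simplification for illustration and not the exact pattern used by StorageLocation.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DataDirParsingSketch {
  // Simplified "[storageType]uri" pattern; the real class also accepts
  // entries without a bracketed prefix.
  private static final Pattern ENTRY = Pattern.compile("^\\[(\\w*)\\](.+)$");

  public static void main(String[] args) {
    for (String raw : "[disk]/dir0, [DISK] /dir1,[ram_disk]/dir4".split(",")) {
      Matcher m = ENTRY.matcher(raw.trim());
      if (m.matches()) {
        String storageType =
            m.group(1).isEmpty() ? "DISK" : m.group(1).toUpperCase();
        String location = m.group(2).trim();   // the trim() added by HDFS-8807
        System.out.println(storageType + " -> " + location);
      }
    }
  }
}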

http://git-wip-us.apache.org/repos/asf/hadoop/blob/78ec38b2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
index 396945e..d41c13e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
@@ -36,7 +36,7 @@ import 
org.apache.hadoop.hdfs.server.datanode.DataNode.DataNodeDiskChecker;
 
 public class TestDataDirs {
 
-  @Test (timeout = 3)
+  @Test(timeout = 3)
   public void testDataDirParsing() throws Throwable {
 Configuration conf = new Configuration();
    List<StorageLocation> locations;
@@ -46,12 +46,16 @@ public class TestDataDirs {
 File dir3 = new File("/dir3");
 File dir4 = new File("/dir4");
 
+File dir5 = new File("/dir5");
+File dir6 = new File("/dir6");
 // Verify that a valid string is correctly parsed, and that storage
-// type is not case-sensitive
-String locations1 = 
"[disk]/dir0,[DISK]/dir1,[sSd]/dir2,[disK]/dir3,[ram_disk]/dir4";
+// type is not case-sensitive and we are able to handle white-space between
+// storage type and URI.
+String locations1 = "[disk]/dir0,[DISK]/dir1,[sSd]/dir2,[disK]/dir3," +
+"[ram_disk]/dir4,[disk]/dir5, [disk] /dir6, [disk] ";
 conf.set(DFS_DATANODE_DATA_DIR_KEY, locations1);
 locations = DataNode.getStorageLocations(conf);
-assertThat(locations.size(), is(5));
+assertThat(locations.size(), is(8));
 assertThat(locations.get(0).getStorageType(), is(StorageType.DISK));
 assertThat(locations.get(0).getUri(), is(dir0.toURI()));
 assertThat(locations.get(1).getStorageType(), is(Storage

hadoop git commit: HDFS-8807. dfs.datanode.data.dir does not handle spaces between storageType and URI correctly. Contributed by Anu Engineer

2015-11-24 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 3a2f5d632 -> 79e55fb47


HDFS-8807.  dfs.datanode.data.dir does not handle spaces between storageType 
and URI correctly.  Contributed by Anu Engineer


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/79e55fb4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/79e55fb4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/79e55fb4

Branch: refs/heads/branch-2
Commit: 79e55fb47f16c9ed3dfb0ff2b103ed920b75cf62
Parents: 3a2f5d6
Author: Tsz-Wo Nicholas Sze 
Authored: Tue Nov 24 16:01:55 2015 -0800
Committer: Tsz-Wo Nicholas Sze 
Committed: Tue Nov 24 16:05:21 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../hdfs/server/datanode/StorageLocation.java   |  2 +-
 .../hdfs/server/datanode/TestDataDirs.java  | 29 ++--
 3 files changed, 25 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/79e55fb4/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 097c8e8..0732893 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -814,6 +814,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9314. Improve BlockPlacementPolicyDefault's picking of excess
 replicas. (Xiao Chen via mingma)
 
+HDFS-8807.  dfs.datanode.data.dir does not handle spaces between
+storageType and URI correctly.  (Anu Engineer via szetszwo)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/79e55fb4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
index 5c8dd85..4cae381 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
@@ -87,7 +87,7 @@ public class StorageLocation {
 
 if (matcher.matches()) {
   String classString = matcher.group(1);
-  location = matcher.group(2);
+  location = matcher.group(2).trim();
   if (!classString.isEmpty()) {
 storageType =
 StorageType.valueOf(StringUtils.toUpperCase(classString));
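
For readers following the HDFS-8807 change above, here is a minimal, self-contained sketch of the parsing behaviour it fixes. It is not the real StorageLocation class; the regex and the DISK default are assumptions modelled on the hunk above, and the class and method names are illustrative only.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch only -- not the actual org.apache.hadoop.hdfs code.
public class DataDirEntryParser {
  // Assumed to be equivalent to the "[storageType]uri" pattern used by StorageLocation.
  private static final Pattern REGEX = Pattern.compile("^\\[(\\w*)\\](.+)$");

  /** Returns {storageType, location} for one dfs.datanode.data.dir entry. */
  public static String[] parse(String rawEntry) {
    String entry = rawEntry.trim();          // ignore whitespace around the whole entry
    String storageType = "DISK";             // assumed default when no [type] prefix is given
    String location = entry;
    Matcher matcher = REGEX.matcher(entry);
    if (matcher.matches()) {
      String classString = matcher.group(1);
      location = matcher.group(2).trim();    // the HDFS-8807 fix: "[disk] /dir6" -> "/dir6"
      if (!classString.isEmpty()) {
        storageType = classString.toUpperCase();
      }
    }
    return new String[] { storageType, location };
  }

  public static void main(String[] args) {
    for (String entry : new String[] {"[disk]/dir0", " [DISK] /dir6 ", "[sSd]/dir2"}) {
      String[] parsed = parse(entry);
      System.out.println(parsed[0] + " " + parsed[1]);
    }
  }
}

With this trim, an entry such as " [disk] /dir6" in dfs.datanode.data.dir resolves to the same location as "[disk]/dir6", which is what the updated TestDataDirs case below asserts.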

http://git-wip-us.apache.org/repos/asf/hadoop/blob/79e55fb4/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
index 396945e..d41c13e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
@@ -36,7 +36,7 @@ import 
org.apache.hadoop.hdfs.server.datanode.DataNode.DataNodeDiskChecker;
 
 public class TestDataDirs {
 
-  @Test (timeout = 3)
+  @Test(timeout = 3)
   public void testDataDirParsing() throws Throwable {
 Configuration conf = new Configuration();
    List<StorageLocation> locations;
@@ -46,12 +46,16 @@ public class TestDataDirs {
 File dir3 = new File("/dir3");
 File dir4 = new File("/dir4");
 
+File dir5 = new File("/dir5");
+File dir6 = new File("/dir6");
 // Verify that a valid string is correctly parsed, and that storage
-// type is not case-sensitive
-String locations1 = 
"[disk]/dir0,[DISK]/dir1,[sSd]/dir2,[disK]/dir3,[ram_disk]/dir4";
+// type is not case-sensitive and we are able to handle white-space between
+// storage type and URI.
+String locations1 = "[disk]/dir0,[DISK]/dir1,[sSd]/dir2,[disK]/dir3," +
+"[ram_disk]/dir4,[disk]/dir5, [disk] /dir6, [disk] ";
 conf.set(DFS_DATANODE_DATA_DIR_KEY, locations1);
 locations = DataNode.getStorageLocations(conf);
-assertThat(locations.size(), is(5));
+assertThat(locations.size(), is(8));
 assertThat(locations.get(0).getStorageType(), is(StorageType.DISK));
 assertThat(locations.get(0).getUri(), is(dir0.toURI()));
 assertThat(locations.get(1).getStorageType(), is(Sto

hadoop git commit: YARN-4384. updateNodeResource CLI should not accept negative values for resource. (Junping Du via wangda)

2015-11-24 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/trunk 78ec38b2e -> 23c625ec5


YARN-4384. updateNodeResource CLI should not accept negative values for 
resource. (Junping Du via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/23c625ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/23c625ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/23c625ec

Branch: refs/heads/trunk
Commit: 23c625ec571b01c0a2811728608890da38f86890
Parents: 78ec38b
Author: Wangda Tan 
Authored: Tue Nov 24 16:35:56 2015 -0800
Committer: Wangda Tan 
Committed: Tue Nov 24 16:35:56 2015 -0800

--
 hadoop-yarn-project/CHANGES.txt   |  3 +++
 .../org/apache/hadoop/yarn/client/cli/RMAdminCLI.java | 11 +++
 .../apache/hadoop/yarn/client/cli/TestRMAdminCLI.java | 14 ++
 3 files changed, 28 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c625ec/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 155d5c9..e036335 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1069,6 +1069,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-4387. Fix typo in FairScheduler log message. (Xin Wang via ozawa)
 
+YARN-4384. updateNodeResource CLI should not accept negative values for 
resource.
+(Junping Du via wangda)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c625ec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
index 21ba7a8..a5e53e4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
@@ -418,11 +418,17 @@ public class RMAdminCLI extends HAAdmin {
 
   private int updateNodeResource(String nodeIdStr, int memSize,
   int cores, int overCommitTimeout) throws IOException, YarnException {
+// check resource value first
+if (invalidResourceValue(memSize, cores)) {
+  throw new IllegalArgumentException("Invalid resource value: " + "(" +
+  memSize + "," + cores + ") for updateNodeResource.");
+}
 // Refresh the nodes
 ResourceManagerAdministrationProtocol adminProtocol = 
createAdminProtocol();
 UpdateNodeResourceRequest request =
   recordFactory.newRecordInstance(UpdateNodeResourceRequest.class);
 NodeId nodeId = ConverterUtils.toNodeId(nodeIdStr);
+
 Resource resource = Resources.createResource(memSize, cores);
    Map<NodeId, ResourceOption> resourceMap =
        new HashMap<NodeId, ResourceOption>();
@@ -433,6 +439,11 @@ public class RMAdminCLI extends HAAdmin {
 return 0;
   }
 
+  // Complain about negative values for CPU or memory.
+  private boolean invalidResourceValue(int memValue, int coreValue) {
+return (memValue < 0) || (coreValue < 0);
+  }
+
   private int getGroups(String[] usernames) throws IOException {
 // Get groups users belongs to
 ResourceManagerAdministrationProtocol adminProtocol = 
createAdminProtocol();
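
As a quick illustration of the guard added above (a standalone sketch, not the actual RMAdminCLI code; the class and method names here are made up), the check simply rejects a negative memory or vcore value before any request is built or sent to the ResourceManager:

// Standalone sketch of the YARN-4384 validation; names are illustrative only.
public class NodeResourceArgCheck {

  // Same predicate as the invalidResourceValue() helper in the patch above.
  static boolean invalidResourceValue(int memValue, int coreValue) {
    return (memValue < 0) || (coreValue < 0);
  }

  static void validate(int memSize, int cores) {
    if (invalidResourceValue(memSize, cores)) {
      throw new IllegalArgumentException("Invalid resource value: ("
          + memSize + "," + cores + ") for updateNodeResource.");
    }
  }

  public static void main(String[] args) {
    validate(2048, 2);    // accepted
    validate(-2048, 2);   // throws IllegalArgumentException, like the new CLI test expects
  }
}

On the command line this corresponds to something like "yarn rmadmin -updateNodeResource 0.0.0.0:0 -2048 2", which the new TestRMAdminCLI case below expects to fail with exit code -1.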

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c625ec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
index 085cf02..f01441d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
@@ -219,6 +219,20 @@ public class TestRMAdminCLI {
   }
 
   @Test(timeout=500)
+  public void testUpdateNodeResourceWithInvalidValue() throws Exception {
+String nodeIdStr = "0.0.0.0:0";
+int memSize = -2048;
+int cores = 2;
+String[] args = { "-updateNodeResource", nodeIdStr,
+Integer.toString(memSize), Integer.toString(cores) };
+// execution of the command line is expected to fail
+assertEquals(-1, rmAdmi

hadoop git commit: YARN-4384. updateNodeResource CLI should not accept negative values for resource. (Junping Du via wangda)

2015-11-24 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 79e55fb47 -> 7b9f7eaf5


YARN-4384. updateNodeResource CLI should not accept negative values for 
resource. (Junping Du via wangda)

(cherry picked from commit 23c625ec571b01c0a2811728608890da38f86890)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7b9f7eaf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7b9f7eaf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7b9f7eaf

Branch: refs/heads/branch-2
Commit: 7b9f7eaf5c2be6589f3ed1fbf13a64b2b61f2a1b
Parents: 79e55fb
Author: Wangda Tan 
Authored: Tue Nov 24 16:35:56 2015 -0800
Committer: Wangda Tan 
Committed: Tue Nov 24 16:36:30 2015 -0800

--
 hadoop-yarn-project/CHANGES.txt   |  3 +++
 .../org/apache/hadoop/yarn/client/cli/RMAdminCLI.java | 11 +++
 .../apache/hadoop/yarn/client/cli/TestRMAdminCLI.java | 14 ++
 3 files changed, 28 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b9f7eaf/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index fe1d3f0..feef3a3 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1017,6 +1017,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-4387. Fix typo in FairScheduler log message. (Xin Wang via ozawa)
 
+YARN-4384. updateNodeResource CLI should not accept negative values for 
resource.
+(Junping Du via wangda)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b9f7eaf/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
index 21ba7a8..a5e53e4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
@@ -418,11 +418,17 @@ public class RMAdminCLI extends HAAdmin {
 
   private int updateNodeResource(String nodeIdStr, int memSize,
   int cores, int overCommitTimeout) throws IOException, YarnException {
+// check resource value first
+if (invalidResourceValue(memSize, cores)) {
+  throw new IllegalArgumentException("Invalid resource value: " + "(" +
+  memSize + "," + cores + ") for updateNodeResource.");
+}
 // Refresh the nodes
 ResourceManagerAdministrationProtocol adminProtocol = 
createAdminProtocol();
 UpdateNodeResourceRequest request =
   recordFactory.newRecordInstance(UpdateNodeResourceRequest.class);
 NodeId nodeId = ConverterUtils.toNodeId(nodeIdStr);
+
 Resource resource = Resources.createResource(memSize, cores);
    Map<NodeId, ResourceOption> resourceMap =
        new HashMap<NodeId, ResourceOption>();
@@ -433,6 +439,11 @@ public class RMAdminCLI extends HAAdmin {
 return 0;
   }
 
+  // Complain about negative values for CPU or memory.
+  private boolean invalidResourceValue(int memValue, int coreValue) {
+return (memValue < 0) || (coreValue < 0);
+  }
+
   private int getGroups(String[] usernames) throws IOException {
 // Get groups users belongs to
 ResourceManagerAdministrationProtocol adminProtocol = 
createAdminProtocol();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b9f7eaf/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
index 085cf02..f01441d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
@@ -219,6 +219,20 @@ public class TestRMAdminCLI {
   }
 
   @Test(timeout=500)
+  public void testUpdateNodeResourceWithInvalidValue() throws Exception {
+String nodeIdStr = "0.0.0.0:0";
+int memSize = -2048;
+int cores = 2;
+String[] args = { "-updateNodeResource", nodeIdStr,
+Integer.toString(memSize), Integer.toString(cores) };
+// exe

hadoop git commit: HADOOP-12415. Add io.netty dependency to hadoop-nfs and to hadoop-hdfs, needed to build Bigtop successfully, see BIGTOP-2049

2015-11-24 Thread cos
Repository: hadoop
Updated Branches:
  refs/heads/trunk 23c625ec5 -> b4c6b511e


HADOOP-12415. Add io.netty dependency to hadoop-nfs and to hadoop-hdfs, needed 
to build Bigtop successfully, see BIGTOP-2049

Signed-off-by: Konstantin Boudnik 


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b4c6b511
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b4c6b511
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b4c6b511

Branch: refs/heads/trunk
Commit: b4c6b511e705230dfe4580288addacb81f5c5c5f
Parents: 23c625e
Author: Tom Zeng 
Authored: Mon Sep 21 09:32:25 2015 -0700
Committer: Konstantin Boudnik 
Committed: Tue Nov 24 18:36:34 2015 -0800

--
 hadoop-common-project/hadoop-nfs/pom.xml | 5 +
 hadoop-hdfs-project/hadoop-hdfs/pom.xml  | 5 +
 2 files changed, 10 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b4c6b511/hadoop-common-project/hadoop-nfs/pom.xml
--
diff --git a/hadoop-common-project/hadoop-nfs/pom.xml 
b/hadoop-common-project/hadoop-nfs/pom.xml
index e8156d9..16e53f3 100644
--- a/hadoop-common-project/hadoop-nfs/pom.xml
+++ b/hadoop-common-project/hadoop-nfs/pom.xml
@@ -82,6 +82,11 @@
       <scope>runtime</scope>
     </dependency>
     <dependency>
+      <groupId>io.netty</groupId>
+      <artifactId>netty</artifactId>
+      <scope>compile</scope>
+    </dependency>
+    <dependency>
       <groupId>com.google.guava</groupId>
       <artifactId>guava</artifactId>
     </dependency>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b4c6b511/hadoop-hdfs-project/hadoop-hdfs/pom.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
index c8a0dc6..cffc11b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
@@ -176,6 +176,11 @@
     </dependency>
     <dependency>
       <groupId>io.netty</groupId>
+      <artifactId>netty</artifactId>
+      <scope>compile</scope>
+    </dependency>
+    <dependency>
+      <groupId>io.netty</groupId>
       <artifactId>netty-all</artifactId>
       <scope>compile</scope>
     </dependency>



hadoop git commit: HADOOP-12415. Add io.netty dependency to hadoop-nfs and to hadoop-hdfs, needed to build Bigtop successfully, see BIGTOP-2049

2015-11-24 Thread cos
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7b9f7eaf5 -> d6f7d2292


HADOOP-12415. Add io.netty dependency to hadoop-nfs and to hadoop-hdfs, needed 
to build Bigtop successfully, see BIGTOP-2049

Signed-off-by: Konstantin Boudnik 
(cherry picked from commit b4c6b511e705230dfe4580288addacb81f5c5c5f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d6f7d229
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d6f7d229
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d6f7d229

Branch: refs/heads/branch-2
Commit: d6f7d22929d556783fd682227d86d5bf9aba7275
Parents: 7b9f7ea
Author: Tom Zeng 
Authored: Mon Sep 21 09:32:25 2015 -0700
Committer: Konstantin Boudnik 
Committed: Tue Nov 24 18:39:01 2015 -0800

--
 hadoop-common-project/hadoop-nfs/pom.xml | 5 +
 hadoop-hdfs-project/hadoop-hdfs/pom.xml  | 5 +
 2 files changed, 10 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d6f7d229/hadoop-common-project/hadoop-nfs/pom.xml
--
diff --git a/hadoop-common-project/hadoop-nfs/pom.xml 
b/hadoop-common-project/hadoop-nfs/pom.xml
index f101669..932c23e 100644
--- a/hadoop-common-project/hadoop-nfs/pom.xml
+++ b/hadoop-common-project/hadoop-nfs/pom.xml
@@ -82,6 +82,11 @@
       <scope>runtime</scope>
     </dependency>
     <dependency>
+      <groupId>io.netty</groupId>
+      <artifactId>netty</artifactId>
+      <scope>compile</scope>
+    </dependency>
+    <dependency>
       <groupId>com.google.guava</groupId>
       <artifactId>guava</artifactId>
     </dependency>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d6f7d229/hadoop-hdfs-project/hadoop-hdfs/pom.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
index 0d1856b..b4c3c7e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
@@ -175,6 +175,11 @@
     </dependency>
     <dependency>
       <groupId>io.netty</groupId>
+      <artifactId>netty</artifactId>
+      <scope>compile</scope>
+    </dependency>
+    <dependency>
+      <groupId>io.netty</groupId>
       <artifactId>netty-all</artifactId>
       <scope>compile</scope>
     </dependency>



hadoop git commit: MAPREDUCE-6553. Replace '\u2b05' with '<-' in rendering job configuration. Contributed by Gabor Liptak.

2015-11-24 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk b4c6b511e -> 5ec44fc35


MAPREDUCE-6553. Replace '\u2b05' with '<-' in rendering job configuration. 
Contributed by Gabor Liptak.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5ec44fc3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5ec44fc3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5ec44fc3

Branch: refs/heads/trunk
Commit: 5ec44fc35c129d266edaebbc6fa17754a89b2ad6
Parents: b4c6b51
Author: Akira Ajisaka 
Authored: Wed Nov 25 16:30:23 2015 +0900
Committer: Akira Ajisaka 
Committed: Wed Nov 25 16:30:23 2015 +0900

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java | 3 +--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ec44fc3/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index ef4bb18..170ae7b 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -642,6 +642,9 @@ Release 2.8.0 - UNRELEASED
MAPREDUCE-6533. testDetermineCacheVisibilities of
TestClientDistributedCacheManager is broken (Chang Li via jlowe)
 
+   MAPREDUCE-6553. Replace '\u2b05' with '<-' in rendering job configuration.
+   (Gabor Liptak via aajisaka)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ec44fc3/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
index fd2edb4..4cb79bf 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
@@ -90,8 +90,7 @@ public class ConfBlock extends HtmlBlock {
 boolean first = true;
 for(int i = (sources.length  - 2); i >= 0; i--) {
   if(!first) {
-// \u2B05 is an arrow <--
-buffer.append(" \u2B05 ");
+buffer.append(" <- ");
   }
   first = false;
   buffer.append(sources[i]);
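
To make the rendering change concrete, here is a small self-contained sketch of the loop above (illustrative names, not the actual ConfBlock class): it joins a configuration property's source chain with a plain " <- " separator instead of the \u2B05 arrow character.

// Illustrative sketch only -- mirrors the patched loop in ConfBlock, outside the webapp.
public class ConfSourceChain {

  /** Joins every source except the last one (matching the length - 2 bound above) with " <- ". */
  static String format(String[] sources) {
    StringBuilder buffer = new StringBuilder();
    boolean first = true;
    for (int i = sources.length - 2; i >= 0; i--) {
      if (!first) {
        buffer.append(" <- ");
      }
      first = false;
      buffer.append(sources[i]);
    }
    return buffer.toString();
  }

  public static void main(String[] args) {
    String[] sources = {"mapred-default.xml", "mapred-site.xml", "job.xml"};
    // Prints: mapred-site.xml <- mapred-default.xml
    System.out.println(format(sources));
  }
}

Using a plain ASCII separator means the rendered configuration page no longer depends on how the browser font displays the U+2B05 arrow.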



hadoop git commit: MAPREDUCE-6553. Replace '\u2b05' with '<-' in rendering job configuration. Contributed by Gabor Liptak.

2015-11-24 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 d6f7d2292 -> c6cf7910a


MAPREDUCE-6553. Replace '\u2b05' with '<-' in rendering job configuration. 
Contributed by Gabor Liptak.

(cherry picked from commit 5ec44fc35c129d266edaebbc6fa17754a89b2ad6)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c6cf7910
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c6cf7910
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c6cf7910

Branch: refs/heads/branch-2
Commit: c6cf7910a461618433a0c23e2a7aa06d53b2a4c1
Parents: d6f7d22
Author: Akira Ajisaka 
Authored: Wed Nov 25 16:30:23 2015 +0900
Committer: Akira Ajisaka 
Committed: Wed Nov 25 16:33:18 2015 +0900

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java | 3 +--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c6cf7910/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 324f3a4..42139ff 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -352,6 +352,9 @@ Release 2.8.0 - UNRELEASED
 
 MAPREDUCE-6540. TestMRTimelineEventHandling fails (sjlee)
 
+MAPREDUCE-6553. Replace '\u2b05' with '<-' in rendering job configuration.
+(Gabor Liptak via aajisaka)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c6cf7910/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
index fd2edb4..4cb79bf 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
@@ -90,8 +90,7 @@ public class ConfBlock extends HtmlBlock {
 boolean first = true;
 for(int i = (sources.length  - 2); i >= 0; i--) {
   if(!first) {
-// \u2B05 is an arrow <--
-buffer.append(" \u2B05 ");
+buffer.append(" <- ");
   }
   first = false;
   buffer.append(sources[i]);



hadoop git commit: Fix indents in the 2.8.0 section of MapReduce CHANGES.txt.

2015-11-24 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5ec44fc35 -> 177975e96


Fix indents in the 2.8.0 section of MapReduce CHANGES.txt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/177975e9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/177975e9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/177975e9

Branch: refs/heads/trunk
Commit: 177975e962284a2bbd954ebb32bf8f6da5fc975e
Parents: 5ec44fc
Author: Akira Ajisaka 
Authored: Wed Nov 25 16:39:49 2015 +0900
Committer: Akira Ajisaka 
Committed: Wed Nov 25 16:42:22 2015 +0900

--
 hadoop-mapreduce-project/CHANGES.txt | 52 +++
 1 file changed, 26 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/177975e9/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 170ae7b..1631668 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -307,17 +307,17 @@ Release 2.8.0 - UNRELEASED
 
   NEW FEATURES
 
-   MAPREDUCE-6284. Add Task Attempt State API to MapReduce Application
-   Master REST API. (Ryu Kobayashi via ozawa)
+MAPREDUCE-6284. Add Task Attempt State API to MapReduce Application
+Master REST API. (Ryu Kobayashi via ozawa)
 
-   MAPREDUCE-6364. Add a "Kill" link to Task Attempts page. (Ryu Kobayashi
-   via ozawa)
+MAPREDUCE-6364. Add a "Kill" link to Task Attempts page. (Ryu Kobayashi
+via ozawa)
 
-   MAPREDUCE-6304. Specifying node labels when submitting MR jobs.
-   (Naganarasimha G R via wangda)
+MAPREDUCE-6304. Specifying node labels when submitting MR jobs.
+(Naganarasimha G R via wangda)
 
-   MAPREDUCE-6415. Create a tool to combine aggregated logs into HAR files. 
-   (Robert Kanter via kasha)
+MAPREDUCE-6415. Create a tool to combine aggregated logs into HAR files.
+(Robert Kanter via kasha)
 
   IMPROVEMENTS
 
@@ -617,33 +617,33 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-6484. Yarn Client uses local address instead of RM address as
 token renewer in a secure cluster when RM HA is enabled. (Zhihai Xu)
 
-   MAPREDUCE-6480. archive-logs tool may miss applications (rkanter)
+MAPREDUCE-6480. archive-logs tool may miss applications (rkanter)
 
-   MAPREDUCE-6494. Permission issue when running archive-logs tool as
-   different users (rkanter)
+MAPREDUCE-6494. Permission issue when running archive-logs tool as
+different users (rkanter)
 
-   MAPREDUCE-6485. Create a new task attempt with failed map task priority
-   if in-progress attempts are unassigned. (Xianyin Xin via rohithsharmaks)
+MAPREDUCE-6485. Create a new task attempt with failed map task priority
+if in-progress attempts are unassigned. (Xianyin Xin via rohithsharmaks)
 
-   MAPREDUCE-6503. archive-logs tool should use HADOOP_PREFIX instead
-   of HADOOP_HOME (rkanter)
+MAPREDUCE-6503. archive-logs tool should use HADOOP_PREFIX instead
+of HADOOP_HOME (rkanter)
 
-   MAPREDUCE-6302. Preempt reducers after a configurable timeout irrespective 
-   of headroom. (kasha)
+MAPREDUCE-6302. Preempt reducers after a configurable timeout irrespective
+of headroom. (kasha)
 
-   MAPREDUCE-6495. Docs for archive-logs tool (rkanter)
+MAPREDUCE-6495. Docs for archive-logs tool (rkanter)
 
-   MAPREDUCE-6508. TestNetworkedJob fails consistently due to delegation 
-   token changes on RM. (Akira AJISAKA via junping_du)
+MAPREDUCE-6508. TestNetworkedJob fails consistently due to delegation
+token changes on RM. (Akira AJISAKA via junping_du)
 
-   MAPREDUCE-6515. Update Application priority in AM side from AM-RM heartbeat
-   (Sunil G via jlowe)
+MAPREDUCE-6515. Update Application priority in AM side from AM-RM heartbeat
+(Sunil G via jlowe)
 
-   MAPREDUCE-6533. testDetermineCacheVisibilities of
-   TestClientDistributedCacheManager is broken (Chang Li via jlowe)
+MAPREDUCE-6533. testDetermineCacheVisibilities of
+TestClientDistributedCacheManager is broken (Chang Li via jlowe)
 
-   MAPREDUCE-6553. Replace '\u2b05' with '<-' in rendering job configuration.
-   (Gabor Liptak via aajisaka)
+MAPREDUCE-6553. Replace '\u2b05' with '<-' in rendering job configuration.
+(Gabor Liptak via aajisaka)
 
 Release 2.7.3 - UNRELEASED
 



hadoop git commit: Fix indents in the 2.8.0 section of MapReduce CHANGES.txt.

2015-11-24 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c6cf7910a -> 5794dc83b


Fix indents in the 2.8.0 section of MapReduce CHANGES.txt.

(cherry picked from commit 177975e962284a2bbd954ebb32bf8f6da5fc975e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5794dc83
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5794dc83
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5794dc83

Branch: refs/heads/branch-2
Commit: 5794dc83b05fee6fab222c983a1c440eddf923df
Parents: c6cf791
Author: Akira Ajisaka 
Authored: Wed Nov 25 16:39:49 2015 +0900
Committer: Akira Ajisaka 
Committed: Wed Nov 25 16:44:25 2015 +0900

--
 hadoop-mapreduce-project/CHANGES.txt | 48 +++
 1 file changed, 24 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5794dc83/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 42139ff..8046e88 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -6,17 +6,17 @@ Release 2.8.0 - UNRELEASED
 
   NEW FEATURES
 
-   MAPREDUCE-6284. Add Task Attempt State API to MapReduce Application
-   Master REST API. (Ryu Kobayashi via ozawa)
+MAPREDUCE-6284. Add Task Attempt State API to MapReduce Application
+Master REST API. (Ryu Kobayashi via ozawa)
 
-   MAPREDUCE-6364. Add a "Kill" link to Task Attempts page. (Ryu Kobayashi
-   via ozawa)
+MAPREDUCE-6364. Add a "Kill" link to Task Attempts page. (Ryu Kobayashi
+via ozawa)
 
-   MAPREDUCE-6304. Specifying node labels when submitting MR jobs.
-   (Naganarasimha G R via wangda)
+MAPREDUCE-6304. Specifying node labels when submitting MR jobs.
+(Naganarasimha G R via wangda)
 
-   MAPREDUCE-6415. Create a tool to combine aggregated logs into HAR files. 
-   (Robert Kanter via kasha)
+MAPREDUCE-6415. Create a tool to combine aggregated logs into HAR files.
+(Robert Kanter via kasha)
 
   IMPROVEMENTS
 
@@ -325,30 +325,30 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-6484. Yarn Client uses local address instead of RM address as
 token renewer in a secure cluster when RM HA is enabled. (Zhihai Xu)
 
-   MAPREDUCE-6480. archive-logs tool may miss applications (rkanter)
+MAPREDUCE-6480. archive-logs tool may miss applications (rkanter)
 
-   MAPREDUCE-6494. Permission issue when running archive-logs tool as
-   different users (rkanter)
+MAPREDUCE-6494. Permission issue when running archive-logs tool as
+different users (rkanter)
 
-   MAPREDUCE-6485. Create a new task attempt with failed map task priority
-   if in-progress attempts are unassigned. (Xianyin Xin via rohithsharmaks)
+MAPREDUCE-6485. Create a new task attempt with failed map task priority
+if in-progress attempts are unassigned. (Xianyin Xin via rohithsharmaks)
 
-   MAPREDUCE-6503. archive-logs tool should use HADOOP_PREFIX instead
-   of HADOOP_HOME (rkanter)
+MAPREDUCE-6503. archive-logs tool should use HADOOP_PREFIX instead
+of HADOOP_HOME (rkanter)
 
-   MAPREDUCE-6302. Preempt reducers after a configurable timeout irrespective 
-   of headroom. (kasha)
+MAPREDUCE-6302. Preempt reducers after a configurable timeout irrespective
+of headroom. (kasha)
 
-   MAPREDUCE-6495. Docs for archive-logs tool (rkanter)
+MAPREDUCE-6495. Docs for archive-logs tool (rkanter)
 
-   MAPREDUCE-6508. TestNetworkedJob fails consistently due to delegation 
-   token changes on RM. (Akira AJISAKA via junping_du)
+MAPREDUCE-6508. TestNetworkedJob fails consistently due to delegation
+token changes on RM. (Akira AJISAKA via junping_du)
 
-   MAPREDUCE-6515. Update Application priority in AM side from AM-RM heartbeat
-   (Sunil G via jlowe)
+MAPREDUCE-6515. Update Application priority in AM side from AM-RM heartbeat
+(Sunil G via jlowe)
 
-   MAPREDUCE-6533. testDetermineCacheVisibilities of
-   TestClientDistributedCacheManager is broken (Chang Li via jlowe)
+MAPREDUCE-6533. testDetermineCacheVisibilities of
+TestClientDistributedCacheManager is broken (Chang Li via jlowe)
 
 MAPREDUCE-6540. TestMRTimelineEventHandling fails (sjlee)