[2/2] hadoop git commit: YARN-8113. Update placement constraints doc with application namespaces and inter-app constraints. Contributed by Weiwei Yang.

2018-05-02 Thread kkaranasos
YARN-8113. Update placement constraints doc with application namespaces and 
inter-app constraints. Contributed by Weiwei Yang.

(cherry picked from commit 3b34fca4b5d67a2685852f30bb61e7c408a0e886)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/62ad9d51
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/62ad9d51
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/62ad9d51

Branch: refs/heads/branch-3.1
Commit: 62ad9d512d70247c11b0db62d9385eb8444cad15
Parents: 6fce887
Author: Konstantinos Karanasos 
Authored: Wed May 2 11:48:35 2018 -0700
Committer: Konstantinos Karanasos 
Committed: Wed May 2 11:51:45 2018 -0700

--
 .../site/markdown/PlacementConstraints.md.vm| 67 +++-
 1 file changed, 52 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/62ad9d51/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
index cb34c3f..4ac1683 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
@@ -28,7 +28,7 @@ YARN allows applications to specify placement constraints in 
the form of data lo
 
 For example, it may be beneficial to co-locate the allocations of a job on the 
same rack (*affinity* constraints) to reduce network costs, spread allocations 
across machines (*anti-affinity* constraints) to minimize resource 
interference, or allow up to a specific number of allocations in a node group 
(*cardinality* constraints) to strike a balance between the two. Placement 
decisions also affect resilience. For example, allocations placed within the 
same cluster upgrade domain would go offline simultaneously.
 
-The applications can specify constraints without requiring knowledge of the 
underlying topology of the cluster (e.g., one does not need to specify the 
specific node or rack where their containers should be placed with constraints) 
or the other applications deployed. Currently **intra-application** constraints 
are supported, but the design that is followed is generic and support for 
constraints across applications will soon be added. Moreover, all constraints 
at the moment are **hard**, that is, if the constraints for a container cannot 
be satisfied due to the current cluster condition or conflicting constraints, 
the container request will remain pending or get will get rejected.
+The applications can specify constraints without requiring knowledge of the 
underlying topology of the cluster (e.g., one does not need to specify the 
specific node or rack where their containers should be placed with constraints) 
or the other applications deployed. Currently, all constraints are **hard**, 
that is, if a constraint for a container cannot be satisfied due to the current 
cluster condition or conflicting constraints, the container request will remain 
pending or get rejected.
 
 Note that in this document we use the notion of “allocation” to refer to a 
unit of resources (e.g., CPU and memory) that gets allocated in a node. In the 
current implementation of YARN, an allocation corresponds to a single 
container. However, in case an application uses an allocation to spawn more 
than one containers, an allocation could correspond to multiple containers.
 
@@ -65,15 +65,19 @@ $ yarn 
org.apache.hadoop.yarn.applications.distributedshell.Client -jar share/ha
 where **PlacementSpec** is of the form:
 
 ```
-PlacementSpec => "" | KeyVal;PlacementSpec
-KeyVal=> SourceTag=Constraint
-SourceTag => String
-Constraint=> NumContainers | NumContainers,"IN",Scope,TargetTag | 
NumContainers,"NOTIN",Scope,TargetTag | 
NumContainers,"CARDINALITY",Scope,TargetTag,MinCard,MaxCard
-NumContainers => int
-Scope => "NODE" | "RACK"
-TargetTag => String
-MinCard   => int
-MaxCard   => int
+PlacementSpec => "" | KeyVal;PlacementSpec
+KeyVal=> SourceTag=ConstraintExpr
+SourceTag => String
+ConstraintExpr=> NumContainers | NumContainers, Constraint
+Constraint=> SingleConstraint | CompositeConstraint
+SingleConstraint  => "IN",Scope,TargetTag | "NOTIN",Scope,TargetTag | 
"CARDINALITY",Scope,TargetTag,MinCard,MaxCard
+CompositeConstraint   => AND(ConstraintList) | OR(ConstraintList)
+ConstraintList=> Constraint | Constraint:ConstraintList
+NumContainers => int
+Scope => "NODE" | "RACK"
+TargetTag => String
+MinCard   => int
+MaxCard   => int
```
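To make the new grammar concrete, here is a hypothetical PlacementSpec built only from the productions above (the source tags zk, hbase and spark are illustrative, not part of the syntax):

```
zk=3,NOTIN,NODE,zk;hbase=5,CARDINALITY,NODE,zk,0,3;spark=7,IN,RACK,hbase
```

Read left to right: three zk containers with no two on the same node, five hbase containers restricted to nodes holding at most three zk allocations, and seven spark containers placed on racks that already host hbase.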

[1/2] hadoop git commit: YARN-8113. Update placement constraints doc with application namespaces and inter-app constraints. Contributed by Weiwei Yang.

2018-05-02 Thread kkaranasos
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 6fce88765 -> 62ad9d512
  refs/heads/trunk 883f68222 -> 3b34fca4b


YARN-8113. Update placement constraints doc with application namespaces and 
inter-app constraints. Contributed by Weiwei Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3b34fca4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3b34fca4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3b34fca4

Branch: refs/heads/trunk
Commit: 3b34fca4b5d67a2685852f30bb61e7c408a0e886
Parents: 883f682
Author: Konstantinos Karanasos 
Authored: Wed May 2 11:48:35 2018 -0700
Committer: Konstantinos Karanasos 
Committed: Wed May 2 11:49:56 2018 -0700

--
 .../site/markdown/PlacementConstraints.md.vm| 67 +++-
 1 file changed, 52 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b34fca4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
index cb34c3f..4ac1683 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
@@ -28,7 +28,7 @@ YARN allows applications to specify placement constraints in 
the form of data lo
 
 For example, it may be beneficial to co-locate the allocations of a job on the 
same rack (*affinity* constraints) to reduce network costs, spread allocations 
across machines (*anti-affinity* constraints) to minimize resource 
interference, or allow up to a specific number of allocations in a node group 
(*cardinality* constraints) to strike a balance between the two. Placement 
decisions also affect resilience. For example, allocations placed within the 
same cluster upgrade domain would go offline simultaneously.
 
-The applications can specify constraints without requiring knowledge of the 
underlying topology of the cluster (e.g., one does not need to specify the 
specific node or rack where their containers should be placed with constraints) 
or the other applications deployed. Currently **intra-application** constraints 
are supported, but the design that is followed is generic and support for 
constraints across applications will soon be added. Moreover, all constraints 
at the moment are **hard**, that is, if the constraints for a container cannot 
be satisfied due to the current cluster condition or conflicting constraints, 
the container request will remain pending or get will get rejected.
+The applications can specify constraints without requiring knowledge of the 
underlying topology of the cluster (e.g., one does not need to specify the 
specific node or rack where their containers should be placed with constraints) 
or the other applications deployed. Currently, all constraints are **hard**, 
that is, if a constraint for a container cannot be satisfied due to the current 
cluster condition or conflicting constraints, the container request will remain 
pending or get rejected.
 
 Note that in this document we use the notion of “allocation” to refer to a 
unit of resources (e.g., CPU and memory) that gets allocated in a node. In the 
current implementation of YARN, an allocation corresponds to a single 
container. However, in case an application uses an allocation to spawn more 
than one containers, an allocation could correspond to multiple containers.
 
@@ -65,15 +65,19 @@ $ yarn 
org.apache.hadoop.yarn.applications.distributedshell.Client -jar share/ha
 where **PlacementSpec** is of the form:
 
 ```
-PlacementSpec => "" | KeyVal;PlacementSpec
-KeyVal=> SourceTag=Constraint
-SourceTag => String
-Constraint=> NumContainers | NumContainers,"IN",Scope,TargetTag | 
NumContainers,"NOTIN",Scope,TargetTag | 
NumContainers,"CARDINALITY",Scope,TargetTag,MinCard,MaxCard
-NumContainers => int
-Scope => "NODE" | "RACK"
-TargetTag => String
-MinCard   => int
-MaxCard   => int
+PlacementSpec => "" | KeyVal;PlacementSpec
+KeyVal=> SourceTag=ConstraintExpr
+SourceTag => String
+ConstraintExpr=> NumContainers | NumContainers, Constraint
+Constraint=> SingleConstraint | CompositeConstraint
+SingleConstraint  => "IN",Scope,TargetTag | "NOTIN",Scope,TargetTag | 
"CARDINALITY",Scope,TargetTag,MinCard,MaxCard
+CompositeConstraint   => AND(ConstraintList) | OR(ConstraintList)
+ConstraintList=> Constraint | Constraint:ConstraintList
+NumContainers => int
+Scope => "NODE" | "RACK"
+TargetTag => String
+MinCard   => int
+MaxCard   => int
```
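The CompositeConstraint production is what this grammar adds over the old one. A hypothetical spec exercising it, derived purely from the productions above:

```
zk=5,AND(IN,RACK,hbase:NOTIN,NODE,zk)
```

This asks for five zk containers, each on a rack that hosts hbase while avoiding nodes that already run zk; OR(...) composes alternatives the same way, with : separating the members of a ConstraintList.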

[2/2] hadoop git commit: YARN-8195. Fix constraint cardinality check in the presence of multiple target allocation tags. Contributed by Weiwei Yang.

2018-04-30 Thread kkaranasos
YARN-8195. Fix constraint cardinality check in the presence of multiple target 
allocation tags. Contributed by Weiwei Yang.

(cherry picked from commit 9b0955545174abe16fd81240db30f175145ee89b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9d296709
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9d296709
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9d296709

Branch: refs/heads/branch-3.1
Commit: 9d2967098d07aa39d422b31b620481a296ac7376
Parents: ce62991
Author: Konstantinos Karanasos 
Authored: Mon Apr 30 11:54:30 2018 -0700
Committer: Konstantinos Karanasos 
Committed: Mon Apr 30 11:55:26 2018 -0700

--
 .../constraint/PlacementConstraintsUtil.java|  8 +-
 .../TestPlacementConstraintsUtil.java   | 88 
 2 files changed, 92 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d296709/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
index efa7b65..f47e1d4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
@@ -91,20 +91,20 @@ public final class PlacementConstraintsUtil {
 if (sc.getScope().equals(PlacementConstraints.NODE)) {
   if (checkMinCardinality) {
 minScopeCardinality = tm.getNodeCardinalityByOp(node.getNodeID(),
-allocationTags, Long::max);
+allocationTags, Long::min);
   }
   if (checkMaxCardinality) {
 maxScopeCardinality = tm.getNodeCardinalityByOp(node.getNodeID(),
-allocationTags, Long::min);
+allocationTags, Long::max);
   }
 } else if (sc.getScope().equals(PlacementConstraints.RACK)) {
   if (checkMinCardinality) {
 minScopeCardinality = tm.getRackCardinalityByOp(node.getRackName(),
-allocationTags, Long::max);
+allocationTags, Long::min);
   }
   if (checkMaxCardinality) {
 maxScopeCardinality = tm.getRackCardinalityByOp(node.getRackName(),
-allocationTags, Long::min);
+allocationTags, Long::max);
   }
 }
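The operator swap is the substance of the fix: with several target tags, a minimum-cardinality check must be driven by the least-frequent tag, and a maximum-cardinality check by the most-frequent one. Below is a minimal, self-contained sketch of that aggregation (hypothetical class, tags and counts; not the actual YARN code):

```java
import java.util.Map;
import java.util.Set;
import java.util.function.LongBinaryOperator;

public class CardinalitySketch {
  // Fold the per-tag counts with the given operator, in the spirit of
  // tm.getNodeCardinalityByOp(nodeId, allocationTags, op) above.
  static long cardinalityByOp(Map<String, Long> counts, Set<String> tags,
      LongBinaryOperator op) {
    return tags.stream()
        .mapToLong(t -> counts.getOrDefault(t, 0L))
        .reduce(op)
        .orElse(0L);
  }

  public static void main(String[] args) {
    // Allocation-tag counts on one node (illustrative).
    Map<String, Long> node = Map.of("hbase-m", 2L, "hbase-rs", 5L);
    Set<String> targets = Set.of("hbase-m", "hbase-rs");

    long minSide = cardinalityByOp(node, targets, Long::min); // 2
    long maxSide = cardinalityByOp(node, targets, Long::max); // 5

    // CARDINALITY constraint with minCard = 1, maxCard = 4:
    System.out.println(minSide >= 1 && maxSide <= 4); // false
  }
}
```

With the operators the old way round, the max-cardinality side would read the rarest tag's count (2 <= 4) and wrongly admit a node on which hbase-rs already appears five times.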
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d296709/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
index 3248450..dc61981 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
@@ -42,6 +42,7 @@ import java.util.concurrent.ConcurrentMap;
 import java.util.stream.Collectors;
 import java.util.stream.Stream;
 import java.util.concurrent.atomic.AtomicLong;
+import com.google.common.collect.ImmutableMap;
 
 import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
@@ -228,6 +229,93 @@ public class TestPlacementConstraintsUtil {
   }
 
   @Test
+  public void testMultiTagsPlacementConstraints()
+  throws InvalidAllocationTagsQueryException {
+

[1/2] hadoop git commit: YARN-8195. Fix constraint cardinality check in the presence of multiple target allocation tags. Contributed by Weiwei Yang.

2018-04-30 Thread kkaranasos
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 ce6299164 -> 9d2967098
  refs/heads/trunk 3d43474f7 -> 9b0955545


YARN-8195. Fix constraint cardinality check in the presence of multiple target 
allocation tags. Contributed by Weiwei Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9b095554
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9b095554
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9b095554

Branch: refs/heads/trunk
Commit: 9b0955545174abe16fd81240db30f175145ee89b
Parents: 3d43474
Author: Konstantinos Karanasos 
Authored: Mon Apr 30 11:54:30 2018 -0700
Committer: Konstantinos Karanasos 
Committed: Mon Apr 30 11:54:30 2018 -0700

--
 .../constraint/PlacementConstraintsUtil.java|  8 +-
 .../TestPlacementConstraintsUtil.java   | 88 
 2 files changed, 92 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9b095554/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
index efa7b65..f47e1d4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
@@ -91,20 +91,20 @@ public final class PlacementConstraintsUtil {
 if (sc.getScope().equals(PlacementConstraints.NODE)) {
   if (checkMinCardinality) {
 minScopeCardinality = tm.getNodeCardinalityByOp(node.getNodeID(),
-allocationTags, Long::max);
+allocationTags, Long::min);
   }
   if (checkMaxCardinality) {
 maxScopeCardinality = tm.getNodeCardinalityByOp(node.getNodeID(),
-allocationTags, Long::min);
+allocationTags, Long::max);
   }
 } else if (sc.getScope().equals(PlacementConstraints.RACK)) {
   if (checkMinCardinality) {
 minScopeCardinality = tm.getRackCardinalityByOp(node.getRackName(),
-allocationTags, Long::max);
+allocationTags, Long::min);
   }
   if (checkMaxCardinality) {
 maxScopeCardinality = tm.getRackCardinalityByOp(node.getRackName(),
-allocationTags, Long::min);
+allocationTags, Long::max);
   }
 }
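The same hunk as it landed on trunk; a rack-scope reading of the before/after behavior, with illustrative numbers:

```java
// Rack r1 tag counts (illustrative): spark = 0, zk = 7.
// Constraint: CARDINALITY, RACK, {spark, zk}, minCard = 1, maxCard = 10.
//
// Fixed:   min check aggregates with Long::min -> 0; 0 >= 1 fails,
//          correctly rejecting the rack ("spark" is absent from it).
// Old bug: min check aggregated with Long::max -> 7; 7 >= 1 passed,
//          silently ignoring the missing "spark" tag.
```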
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9b095554/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
index 3248450..dc61981 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
@@ -42,6 +42,7 @@ import java.util.concurrent.ConcurrentMap;
 import java.util.stream.Collectors;
 import java.util.stream.Stream;
 import java.util.concurrent.atomic.AtomicLong;
+import com.google.common.collect.ImmutableMap;
 
 import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
@@ -228,6 +229,93 @@ public class TestPlacementConstraintsUtil {
   }
 
   @Test
+  public void testMultiTagsPlacementConstraints()
+ 

[2/2] hadoop git commit: YARN-8111. Simplify PlacementConstraints API by removing allocationTagToIntraApp. Contributed by Weiwei Yang.

2018-04-20 Thread kkaranasos
YARN-8111. Simplify PlacementConstraints API by removing 
allocationTagToIntraApp. Contributed by Weiwei Yang.

(cherry picked from commit 28e2244390c990877dc2ee2733cf9b8d2c75128e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/18c86a3f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/18c86a3f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/18c86a3f

Branch: refs/heads/branch-3.1
Commit: 18c86a3fb6599baea569ed9d678194e43d2a98a6
Parents: 71b0d52
Author: Konstantinos Karanasos 
Authored: Fri Apr 20 12:24:48 2018 -0700
Committer: Konstantinos Karanasos 
Committed: Fri Apr 20 12:25:08 2018 -0700

--
 .../yarn/api/resource/PlacementConstraints.java | 24 +++--
 .../api/resource/TestPlacementConstraints.java  |  4 ++-
 .../yarn/service/component/Component.java   |  2 +-
 .../yarn/server/resourcemanager/MockAM.java |  2 +-
 ...stSingleConstraintAppPlacementAllocator.java | 36 ++--
 5 files changed, 28 insertions(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/18c86a3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
index 02138bd..d22a6bd 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
@@ -258,15 +258,17 @@ public final class PlacementConstraints {
 
 /**
  * Constructs a target expression on an allocation tag. It is satisfied if
- * there are allocations with one of the given tags.
+ * there are allocations with one of the given tags. The default namespace
+ * for these tags is {@link AllocationTagNamespaceType#SELF}, this only
+ * checks tags within the application.
  *
  * @param allocationTags the set of tags that the attribute should take
  *  values from
  * @return the resulting expression on the allocation tags
  */
 public static TargetExpression allocationTag(String... allocationTags) {
-  return new TargetExpression(TargetType.ALLOCATION_TAG, null,
-  allocationTags);
+  return allocationTagWithNamespace(
+  AllocationTagNamespaceType.SELF.toString(), allocationTags);
 }
 
 /**
@@ -282,22 +284,6 @@ public final class PlacementConstraints {
   return new TargetExpression(TargetType.ALLOCATION_TAG,
   namespace, allocationTags);
 }
-
-/**
- * Constructs a target expression on an allocation tag. It is satisfied if
- * there are allocations with one of the given tags. Comparing to
- * {@link PlacementTargets#allocationTag(String...)}, this only checks tags
- * within the application.
- *
- * @param allocationTags the set of tags that the attribute should take
- *  values from
- * @return the resulting expression on the allocation tags
- */
-public static TargetExpression allocationTagToIntraApp(
-String... allocationTags) {
-  return new TargetExpression(TargetType.ALLOCATION_TAG,
-  AllocationTagNamespaceType.SELF.toString(), allocationTags);
-}
   }
 
   // Creation of compound constraints.
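A sketch of how constraint construction reads once allocationTag defaults to the SELF namespace (factory methods from the PlacementConstraints class above; the "all" keyword is the AllocationTagNamespaceType value for cluster-wide targeting, used here as an assumed namespace string):

```java
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTagWithNamespace;

public class TagNamespaceSketch {
  public static void main(String[] args) {
    // Intra-app anti-affinity: allocationTag(...) now defaults to SELF,
    // so only "hbase" containers of this application are counted.
    PlacementConstraint intraApp =
        targetNotIn(NODE, allocationTag("hbase")).build();

    // Inter-app variant: name the namespace explicitly.
    PlacementConstraint interApp =
        targetNotIn(NODE, allocationTagWithNamespace("all", "hbase")).build();

    System.out.println(intraApp);
    System.out.println(interApp);
  }
}
```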

http://git-wip-us.apache.org/repos/asf/hadoop/blob/18c86a3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraints.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraints.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraints.java
index 2f8cc62..516ecfc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraints.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraints.java
@@ -28,6 +28,7 @@ import static 
org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNot
 import static 
org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;
 import static 

[1/2] hadoop git commit: YARN-8111. Simplify PlacementConstraints API by removing allocationTagToIntraApp. Contributed by Weiwei Yang.

2018-04-20 Thread kkaranasos
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 71b0d5298 -> 18c86a3fb
  refs/heads/trunk 766544c0b -> 28e224439


YARN-8111. Simplify PlacementConstraints API by removing 
allocationTagToIntraApp. Contributed by Weiwei Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/28e22443
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/28e22443
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/28e22443

Branch: refs/heads/trunk
Commit: 28e2244390c990877dc2ee2733cf9b8d2c75128e
Parents: 766544c
Author: Konstantinos Karanasos 
Authored: Fri Apr 20 12:24:48 2018 -0700
Committer: Konstantinos Karanasos 
Committed: Fri Apr 20 12:24:48 2018 -0700

--
 .../yarn/api/resource/PlacementConstraints.java | 24 +++--
 .../api/resource/TestPlacementConstraints.java  |  4 ++-
 .../yarn/service/component/Component.java   |  2 +-
 .../yarn/server/resourcemanager/MockAM.java |  2 +-
 ...stSingleConstraintAppPlacementAllocator.java | 36 ++--
 5 files changed, 28 insertions(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/28e22443/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
index 02138bd..d22a6bd 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraints.java
@@ -258,15 +258,17 @@ public final class PlacementConstraints {
 
 /**
  * Constructs a target expression on an allocation tag. It is satisfied if
- * there are allocations with one of the given tags.
+ * there are allocations with one of the given tags. The default namespace
+ * for these tags is {@link AllocationTagNamespaceType#SELF}, this only
+ * checks tags within the application.
  *
  * @param allocationTags the set of tags that the attribute should take
  *  values from
  * @return the resulting expression on the allocation tags
  */
 public static TargetExpression allocationTag(String... allocationTags) {
-  return new TargetExpression(TargetType.ALLOCATION_TAG, null,
-  allocationTags);
+  return allocationTagWithNamespace(
+  AllocationTagNamespaceType.SELF.toString(), allocationTags);
 }
 
 /**
@@ -282,22 +284,6 @@ public final class PlacementConstraints {
   return new TargetExpression(TargetType.ALLOCATION_TAG,
   namespace, allocationTags);
 }
-
-/**
- * Constructs a target expression on an allocation tag. It is satisfied if
- * there are allocations with one of the given tags. Comparing to
- * {@link PlacementTargets#allocationTag(String...)}, this only checks tags
- * within the application.
- *
- * @param allocationTags the set of tags that the attribute should take
- *  values from
- * @return the resulting expression on the allocation tags
- */
-public static TargetExpression allocationTagToIntraApp(
-String... allocationTags) {
-  return new TargetExpression(TargetType.ALLOCATION_TAG,
-  AllocationTagNamespaceType.SELF.toString(), allocationTags);
-}
   }
 
   // Creation of compound constraints.
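For call sites of the removed method, the migration is mechanical; a hypothetical before/after:

```java
// Before YARN-8111:
//   targetNotIn(NODE, allocationTagToIntraApp("hbase-m"))
// After (allocationTag already defaults to the SELF namespace):
//   targetNotIn(NODE, allocationTag("hbase-m"))
```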

http://git-wip-us.apache.org/repos/asf/hadoop/blob/28e22443/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraints.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraints.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraints.java
index 2f8cc62..516ecfc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraints.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraints.java
@@ -28,6 +28,7 @@ import static 
org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNot
 import static 
org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;
 import static 

[1/2] hadoop git commit: YARN-8013. Support application tags when defining application namespaces for placement constraints. Contributed by Weiwei Yang.

2018-04-04 Thread kkaranasos
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 4db13cb94 -> fa464c12a
  refs/heads/trunk 42cd367c9 -> 7853ec8d2


YARN-8013. Support application tags when defining application namespaces for 
placement constraints. Contributed by Weiwei Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7853ec8d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7853ec8d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7853ec8d

Branch: refs/heads/trunk
Commit: 7853ec8d2fb8731b7f7c28fd87491a0a2d47967e
Parents: 42cd367
Author: Konstantinos Karanasos 
Authored: Wed Apr 4 10:51:58 2018 -0700
Committer: Konstantinos Karanasos 
Committed: Wed Apr 4 10:51:58 2018 -0700

--
 .../api/records/AllocationTagNamespaceType.java |   2 +-
 .../constraint/AllocationTagNamespace.java  | 312 --
 .../scheduler/constraint/AllocationTags.java|  44 ++-
 .../constraint/AllocationTagsManager.java   |  47 ++-
 .../constraint/PlacementConstraintsUtil.java|  41 +--
 .../constraint/TargetApplications.java  |  53 ++-
 .../constraint/TargetApplicationsNamespace.java | 326 +++
 .../SingleConstraintAppPlacementAllocator.java  |  21 --
 .../server/resourcemanager/rmapp/MockRMApp.java |   9 +-
 ...estSchedulingRequestContainerAllocation.java |   5 +-
 .../constraint/TestAllocationTagsManager.java   |  22 +-
 .../constraint/TestAllocationTagsNamespace.java |  89 -
 .../TestPlacementConstraintsUtil.java   | 125 ++-
 13 files changed, 654 insertions(+), 442 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7853ec8d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/AllocationTagNamespaceType.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/AllocationTagNamespaceType.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/AllocationTagNamespaceType.java
index de5492e..f304600 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/AllocationTagNamespaceType.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/AllocationTagNamespaceType.java
@@ -26,7 +26,7 @@ public enum AllocationTagNamespaceType {
   SELF("self"),
   NOT_SELF("not-self"),
   APP_ID("app-id"),
-  APP_LABEL("app-label"),
+  APP_TAG("app-tag"),
   ALL("all");
 
   private String typeKeyword;
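A rough gloss of what each namespace keyword scopes, inferred from the keyword names and this patch's intent (only the renamed app-tag entry is new here):

```java
// self     -> allocation tags of the requesting application (the default)
// not-self -> tags of every application except the requesting one
// app-id   -> tags of one specific application, selected by application ID
// app-tag  -> tags of all applications carrying a given application tag
//             (renamed in this patch from "app-label")
// all      -> tags of all applications in the cluster
```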

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7853ec8d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagNamespace.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagNamespace.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagNamespace.java
deleted file mode 100644
index 7b9f3be..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagNamespace.java
+++ /dev/null
@@ -1,312 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint;
-
-import com.google.common.base.Strings;
-import com.google.common.collect.ImmutableSet;
-import org.apache.hadoop.yarn.api.records.AllocationTagNamespaceType;
-import 

[2/2] hadoop git commit: YARN-8013. Support application tags when defining application namespaces for placement constraints. Contributed by Weiwei Yang.

2018-04-04 Thread kkaranasos
YARN-8013. Support application tags when defining application namespaces for 
placement constraints. Contributed by Weiwei Yang.

(cherry picked from commit 7853ec8d2fb8731b7f7c28fd87491a0a2d47967e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fa464c12
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fa464c12
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fa464c12

Branch: refs/heads/branch-3.1
Commit: fa464c12ad67228ec6dfd20f9217c0630c9adf23
Parents: 4db13cb
Author: Konstantinos Karanasos 
Authored: Wed Apr 4 10:51:58 2018 -0700
Committer: Konstantinos Karanasos 
Committed: Wed Apr 4 10:53:33 2018 -0700

--
 .../api/records/AllocationTagNamespaceType.java |   2 +-
 .../constraint/AllocationTagNamespace.java  | 312 --
 .../scheduler/constraint/AllocationTags.java|  44 ++-
 .../constraint/AllocationTagsManager.java   |  47 ++-
 .../constraint/PlacementConstraintsUtil.java|  41 +--
 .../constraint/TargetApplications.java  |  53 ++-
 .../constraint/TargetApplicationsNamespace.java | 326 +++
 .../SingleConstraintAppPlacementAllocator.java  |  21 --
 .../server/resourcemanager/rmapp/MockRMApp.java |   9 +-
 ...estSchedulingRequestContainerAllocation.java |   5 +-
 .../constraint/TestAllocationTagsManager.java   |  22 +-
 .../constraint/TestAllocationTagsNamespace.java |  89 -
 .../TestPlacementConstraintsUtil.java   | 125 ++-
 13 files changed, 654 insertions(+), 442 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa464c12/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/AllocationTagNamespaceType.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/AllocationTagNamespaceType.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/AllocationTagNamespaceType.java
index de5492e..f304600 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/AllocationTagNamespaceType.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/AllocationTagNamespaceType.java
@@ -26,7 +26,7 @@ public enum AllocationTagNamespaceType {
   SELF("self"),
   NOT_SELF("not-self"),
   APP_ID("app-id"),
-  APP_LABEL("app-label"),
+  APP_TAG("app-tag"),
   ALL("all");
 
   private String typeKeyword;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa464c12/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagNamespace.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagNamespace.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagNamespace.java
deleted file mode 100644
index 7b9f3be..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagNamespace.java
+++ /dev/null
@@ -1,312 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint;
-
-import com.google.common.base.Strings;
-import com.google.common.collect.ImmutableSet;
-import org.apache.hadoop.yarn.api.records.AllocationTagNamespaceType;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-
-import java.util.ArrayList;
-import 

[2/2] hadoop git commit: YARN-7921. Transform a PlacementConstraint to a string expression. Contributed by Weiwei Yang.

2018-02-26 Thread kkaranasos
YARN-7921. Transform a PlacementConstraint to a string expression. Contributed 
by Weiwei Yang.

(cherry picked from commit e85188101c6c74b348a2fb6aa0f4e85c81b4a28c)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eb8765bb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eb8765bb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eb8765bb

Branch: refs/heads/branch-3.1
Commit: eb8765bbe9ddbf435247cc171113646c88c228c0
Parents: 33f8232
Author: Konstantinos Karanasos 
Authored: Mon Feb 26 12:15:16 2018 -0800
Committer: Konstantinos Karanasos 
Committed: Mon Feb 26 12:17:08 2018 -0800

--
 .../yarn/api/resource/PlacementConstraint.java  | 141 ++-
 .../resource/TestPlacementConstraintParser.java | 102 --
 .../TestPlacementConstraintTransformations.java |   7 +
 3 files changed, 235 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/eb8765bb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
index c054cbc..9bb17f4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
@@ -23,6 +23,8 @@ import java.util.Arrays;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
+import java.util.stream.Collectors;
+import java.util.Iterator;
 
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
@@ -45,6 +47,11 @@ public class PlacementConstraint {
 this.constraintExpr = constraintExpr;
   }
 
+  @Override
+  public String toString() {
+return this.constraintExpr.toString();
+  }
+
   /**
* Get the constraint expression of the placement constraint.
*
@@ -226,6 +233,42 @@ public class PlacementConstraint {
 }
 
 @Override
+public String toString() {
+  int max = getMaxCardinality();
+  int min = getMinCardinality();
+  List targetExprList = getTargetExpressions().stream()
+  .map(TargetExpression::toString).collect(Collectors.toList());
+  List targetConstraints = new ArrayList<>();
+  for (String targetExpr : targetExprList) {
+if (min == 0 && max == 0) {
+  // anti-affinity
+  targetConstraints.add(new StringBuilder()
+  .append("notin").append(",")
+  .append(getScope()).append(",")
+  .append(targetExpr)
+  .toString());
+} else if (min == 1 && max == Integer.MAX_VALUE) {
+  // affinity
+  targetConstraints.add(new StringBuilder()
+  .append("in").append(",")
+  .append(getScope()).append(",")
+  .append(targetExpr)
+  .toString());
+} else {
+  // cardinality
+  targetConstraints.add(new StringBuilder()
+  .append("cardinality").append(",")
+  .append(getScope()).append(",")
+  .append(targetExpr).append(",")
+  .append(min).append(",")
+  .append(max)
+  .toString());
+}
+  }
+  return String.join(":", targetConstraints);
+}
+
+@Override
 public  T accept(Visitor visitor) {
   return visitor.visit(this);
 }
@@ -326,6 +369,23 @@ public class PlacementConstraint {
 }
 
 @Override
+public String toString() {
+  StringBuffer sb = new StringBuffer();
+  if (TargetType.ALLOCATION_TAG == this.targetType) {
+// following by a comma separated tags
+sb.append(String.join(",", getTargetValues()));
+  } else if (TargetType.NODE_ATTRIBUTE == this.targetType) {
+// following by a comma separated key value pairs
+if (this.getTargetValues() != null) {
+  String attributeName = this.getTargetKey();
+  String attributeValues = String.join(":", this.getTargetValues());
+  sb.append(attributeName + "=[" + attributeValues + "]");
+}
+  }
+  return sb.toString();
+}
+
+@Override
 public  T accept(Visitor visitor) {
   return visitor.visit(this);
 }
@@ -345,7 +405,16 @@ public class PlacementConstraint {
  * TargetOperator enum helps to specify type.
  */
 

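Putting the two toString() hunks together, a sketch of the strings they produce; the factory methods come from PlacementConstraints, and the expected outputs in the comments are inferred from the branches above rather than taken from the patch:

```java
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.*;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

public class ToStringSketch {
  public static void main(String[] args) {
    // min = 0, max = 0 -> the "notin" branch.
    System.out.println(
        targetNotIn(NODE, allocationTag("hbase-m")).build()); // notin,node,hbase-m

    // min = 1, max = Integer.MAX_VALUE -> the "in" branch.
    System.out.println(
        targetIn(RACK, allocationTag("spark")).build()); // in,rack,spark

    // any other pair -> the "cardinality" branch.
    System.out.println(
        cardinality(NODE, 3, 10, "zk").build()); // cardinality,node,zk,3,10
  }
}
```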
[1/2] hadoop git commit: YARN-7921. Transform a PlacementConstraint to a string expression. Contributed by Weiwei Yang.

2018-02-26 Thread kkaranasos
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 33f82323b -> eb8765bbe
  refs/heads/trunk 451265a83 -> e85188101


YARN-7921. Transform a PlacementConstraint to a string expression. Contributed 
by Weiwei Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e8518810
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e8518810
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e8518810

Branch: refs/heads/trunk
Commit: e85188101c6c74b348a2fb6aa0f4e85c81b4a28c
Parents: 451265a
Author: Konstantinos Karanasos 
Authored: Mon Feb 26 12:15:16 2018 -0800
Committer: Konstantinos Karanasos 
Committed: Mon Feb 26 12:15:16 2018 -0800

--
 .../yarn/api/resource/PlacementConstraint.java  | 141 ++-
 .../resource/TestPlacementConstraintParser.java | 102 --
 .../TestPlacementConstraintTransformations.java |   7 +
 3 files changed, 235 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e8518810/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
index c054cbc..9bb17f4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
@@ -23,6 +23,8 @@ import java.util.Arrays;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
+import java.util.stream.Collectors;
+import java.util.Iterator;
 
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
@@ -45,6 +47,11 @@ public class PlacementConstraint {
 this.constraintExpr = constraintExpr;
   }
 
+  @Override
+  public String toString() {
+return this.constraintExpr.toString();
+  }
+
   /**
* Get the constraint expression of the placement constraint.
*
@@ -226,6 +233,42 @@ public class PlacementConstraint {
 }
 
 @Override
+public String toString() {
+  int max = getMaxCardinality();
+  int min = getMinCardinality();
+  List targetExprList = getTargetExpressions().stream()
+  .map(TargetExpression::toString).collect(Collectors.toList());
+  List targetConstraints = new ArrayList<>();
+  for (String targetExpr : targetExprList) {
+if (min == 0 && max == 0) {
+  // anti-affinity
+  targetConstraints.add(new StringBuilder()
+  .append("notin").append(",")
+  .append(getScope()).append(",")
+  .append(targetExpr)
+  .toString());
+} else if (min == 1 && max == Integer.MAX_VALUE) {
+  // affinity
+  targetConstraints.add(new StringBuilder()
+  .append("in").append(",")
+  .append(getScope()).append(",")
+  .append(targetExpr)
+  .toString());
+} else {
+  // cardinality
+  targetConstraints.add(new StringBuilder()
+  .append("cardinality").append(",")
+  .append(getScope()).append(",")
+  .append(targetExpr).append(",")
+  .append(min).append(",")
+  .append(max)
+  .toString());
+}
+  }
+  return String.join(":", targetConstraints);
+}
+
+@Override
 public  T accept(Visitor visitor) {
   return visitor.visit(this);
 }
@@ -326,6 +369,23 @@ public class PlacementConstraint {
 }
 
 @Override
+public String toString() {
+  StringBuffer sb = new StringBuffer();
+  if (TargetType.ALLOCATION_TAG == this.targetType) {
+// following by a comma separated tags
+sb.append(String.join(",", getTargetValues()));
+  } else if (TargetType.NODE_ATTRIBUTE == this.targetType) {
+// following by a comma separated key value pairs
+if (this.getTargetValues() != null) {
+  String attributeName = this.getTargetKey();
+  String attributeValues = String.join(":", this.getTargetValues());
+  sb.append(attributeName + "=[" + attributeValues + "]");
+}
+  }
+  return sb.toString();
+}
+
+@Override
 public  T accept(Visitor visitor) {
   return visitor.visit(this);
 }
@@ -345,7 +405,16 @@ public class PlacementConstraint {
  * 

[1/4] hadoop git commit: YARN-7920. Simplify configuration for PlacementConstraints. Contributed by Wangda Tan.

2018-02-15 Thread kkaranasos
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 4cdc57f6a -> 41708402a
  refs/heads/trunk 47473952e -> 0b489e564


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b489e56/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md
new file mode 100644
index 000..6af62e7
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md
@@ -0,0 +1,136 @@
+
+
+Placement Constraints
+=====================
+
+
+
+
+Overview
+--------
+
+YARN allows applications to specify placement constraints in the form of data 
locality (preference to specific nodes or racks) or (non-overlapping) node 
labels. This document focuses on more expressive placement constraints in YARN. 
Such constraints can be crucial for the performance and resilience of 
applications, especially those that include long-running containers, such as 
services, machine-learning and streaming workloads.
+
+For example, it may be beneficial to co-locate the allocations of a job on the 
same rack (*affinity* constraints) to reduce network costs, spread allocations 
across machines (*anti-affinity* constraints) to minimize resource 
interference, or allow up to a specific number of allocations in a node group 
(*cardinality* constraints) to strike a balance between the two. Placement 
decisions also affect resilience. For example, allocations placed within the 
same cluster upgrade domain would go offline simultaneously.
+
+The applications can specify constraints without requiring knowledge of the 
underlying topology of the cluster (e.g., one does not need to specify the 
specific node or rack where their containers should be placed with constraints) 
or the other applications deployed. Currently **intra-application** constraints 
are supported, but the design that is followed is generic and support for 
constraints across applications will soon be added. Moreover, all constraints 
at the moment are **hard**, that is, if the constraints for a container cannot 
be satisfied due to the current cluster condition or conflicting constraints, 
the container request will remain pending or get will get rejected.
+
+Note that in this document we use the notion of “allocation” to refer to a 
unit of resources (e.g., CPU and memory) that gets allocated in a node. In the 
current implementation of YARN, an allocation corresponds to a single 
container. However, in case an application uses an allocation to spawn more 
than one containers, an allocation could correspond to multiple containers.
+
+
+Quick Guide
+-----------
+
+We first describe how to enable scheduling with placement constraints and then 
provide examples of how to experiment with this feature using the distributed 
shell, an application that allows to run a given shell command on a set of 
containers.
+
+### Enabling placement constraints
+
+To enable placement constraints, the following property has to be set to 
`placement-processor` or `scheduler` in **conf/yarn-site.xml**:
+
+| Property | Description | Default value |
+|:-------- |:----------- |:------------- |
+| `yarn.resourcemanager.placement-constraints.handler` | Specify which handler 
will be used to process PlacementConstraints. Acceptable values are: 
`placement-processor`, `scheduler`, and `disabled`. | `disabled` |
+
+We now give more details about each of the three placement constraint handlers:
+
+* `placement-processor`: Using this handler, the placement of containers with 
constraints is determined as a pre-processing step before the capacity or the 
fair scheduler is called. Once the placement is decided, the capacity/fair 
scheduler is invoked to perform the actual allocation. The advantage of this 
handler is that it supports all constraint types (affinity, anti-affinity, 
cardinality). Moreover, it considers multiple containers at a time, which 
allows to satisfy more constraints than a container-at-a-time approach can 
achieve. As it sits outside the main scheduler, it can be used by both the 
capacity and fair schedulers. Note that at the moment it does not account for 
task priorities within an application, given that such priorities might be 
conflicting with the placement constraints.
+* `scheduler`: Using this handler, containers with constraints will be placed 
by the main scheduler (as of now, only the capacity scheduler supports 
SchedulingRequests). It currently supports anti-affinity constraints (no 
affinity or cardinality). The advantage of this handler, when compared to the 
`placement-processor`, is that it follows the same ordering rules for queues 
(sorted by utilization, priority), apps (sorted by FIFO/fairness/priority) and 
tasks within 

[4/4] hadoop git commit: YARN-7920. Simplify configuration for PlacementConstraints. Contributed by Wangda Tan.

2018-02-15 Thread kkaranasos
YARN-7920. Simplify configuration for PlacementConstraints. Contributed by 
Wangda Tan.

(cherry picked from commit 0b489e564ce5a50324a530e29c18aa8a75276c50)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/41708402
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/41708402
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/41708402

Branch: refs/heads/branch-3.1
Commit: 41708402a1c1033d8829aad23db9cfe90de77acd
Parents: 4cdc57f
Author: Konstantinos Karanasos 
Authored: Thu Feb 15 14:23:27 2018 -0800
Committer: Konstantinos Karanasos 
Committed: Thu Feb 15 14:25:34 2018 -0800

--
 .../hadoop/yarn/conf/YarnConfiguration.java |  54 ++-
 .../TestAMRMClientPlacementConstraints.java |   3 +-
 .../src/main/resources/yarn-default.xml |  10 +-
 .../ApplicationMasterService.java   |  46 ++-
 .../scheduler/capacity/CapacityScheduler.java   |  13 -
 .../CapacitySchedulerConfiguration.java |   5 -
 .../processor/AbstractPlacementProcessor.java   |  96 +
 .../processor/DisabledPlacementProcessor.java   |  77 
 .../processor/PlacementConstraintProcessor.java | 340 +
 .../processor/PlacementProcessor.java   | 377 ---
 .../processor/SchedulerPlacementProcessor.java  |  55 +++
 ...apacitySchedulerSchedulingRequestUpdate.java |   4 +
 ...estSchedulingRequestContainerAllocation.java |   8 +-
 ...hedulingRequestContainerAllocationAsync.java |   4 +-
 .../scheduler/capacity/TestUtils.java   |   4 +-
 .../constraint/TestPlacementProcessor.java  |  12 +-
 .../src/site/markdown/PlacementConstraints.md   | 136 +++
 .../site/markdown/PlacementConstraints.md.vm| 149 
 18 files changed, 818 insertions(+), 575 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/41708402/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 118f9fb..6677478 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -532,11 +532,57 @@ public class YarnConfiguration extends Configuration {
   public static final String RM_SCHEDULER = 
 RM_PREFIX + "scheduler.class";
 
-  /** Enable rich placement constraints. */
-  public static final String RM_PLACEMENT_CONSTRAINTS_ENABLED =
-  RM_PREFIX + "placement-constraints.enabled";
+  /**
+   * Specify which handler will be used to process PlacementConstraints.
+   * For details on PlacementConstraints, please refer to
+   * {@link org.apache.hadoop.yarn.api.resource.PlacementConstraint}
+   */
+  @Private
+  public static final String RM_PLACEMENT_CONSTRAINTS_HANDLER =
+  RM_PREFIX + "placement-constraints.handler";
+
+  /**
+   * This handler rejects all allocate calls made by an application, if they
+   * contain a {@link org.apache.hadoop.yarn.api.records.SchedulingRequest}.
+   */
+  @Private
+  public static final String DISABLED_RM_PLACEMENT_CONSTRAINTS_HANDLER =
+  "disabled";
 
-  public static final boolean DEFAULT_RM_PLACEMENT_CONSTRAINTS_ENABLED = false;
+  /**
+   * Using this handler, the placement of containers with constraints is
+   * determined as a pre-processing step before the capacity or the fair
+   * scheduler is called. Once the placement is decided, the capacity/fair
+   * scheduler is invoked to perform the actual allocation. The advantage of
+   * this approach is that it supports all constraint types (affinity,
+   * anti-affinity, cardinality). Moreover, it considers multiple containers at
+   * a time, which allows to satisfy more constraints than a 
container-at-a-time
+   * approach can achieve. As it sits outside the main scheduler, it can be 
used
+   * by both the capacity and fair schedulers. Note that at the moment it does
+   * not account for task priorities within an application, given that such
+   * priorities might be conflicting with the placement constraints.
+   */
+  @Private
+  public static final String PROCESSOR_RM_PLACEMENT_CONSTRAINTS_HANDLER =
+  "placement-processor";
+
+  /**
+   * Using this handler, containers with constraints will be placed by the main
+   * scheduler. If the configured RM scheduler
+   * yarn.resourcemanager.scheduler.class
+   * cannot handle placement constraints, 

[3/4] hadoop git commit: YARN-7920. Simplify configuration for PlacementConstraints. Contributed by Wangda Tan.

2018-02-15 Thread kkaranasos
http://git-wip-us.apache.org/repos/asf/hadoop/blob/41708402/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md
new file mode 100644
index 000..6af62e7
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md
@@ -0,0 +1,136 @@
+
+
+Placement Constraints
+=
+
+
+
+
+Overview
+--------
+
+YARN allows applications to specify placement constraints in the form of data 
locality (preference to specific nodes or racks) or (non-overlapping) node 
labels. This document focuses on more expressive placement constraints in YARN. 
Such constraints can be crucial for the performance and resilience of 
applications, especially those that include long-running containers, such as 
services, machine-learning and streaming workloads.
+
+For example, it may be beneficial to co-locate the allocations of a job on the 
same rack (*affinity* constraints) to reduce network costs, spread allocations 
across machines (*anti-affinity* constraints) to minimize resource 
interference, or allow up to a specific number of allocations in a node group 
(*cardinality* constraints) to strike a balance between the two. Placement 
decisions also affect resilience. For example, allocations placed within the 
same cluster upgrade domain would go offline simultaneously.
+
+The applications can specify constraints without requiring knowledge of the 
underlying topology of the cluster (e.g., one does not need to specify the 
specific node or rack where their containers should be placed with constraints) 
or the other applications deployed. Currently **intra-application** constraints 
are supported, but the underlying design is generic, and support for 
constraints across applications will soon be added. Moreover, all constraints 
at the moment are **hard**, that is, if the constraints for a container cannot 
be satisfied due to the current cluster condition or conflicting constraints, 
the container request will remain pending or get rejected.
+
+Note that in this document we use the notion of “allocation” to refer to a 
unit of resources (e.g., CPU and memory) that gets allocated in a node. In the 
current implementation of YARN, an allocation corresponds to a single 
container. However, in case an application uses an allocation to spawn more 
than one container, an allocation could correspond to multiple containers.
+
+
+Quick Guide
+---
+
+We first describe how to enable scheduling with placement constraints and then 
provide examples of how to experiment with this feature using the distributed 
shell, an application that allows running a given shell command on a set of 
containers.
+
+### Enabling placement constraints
+
+To enable placement constraints, the following property has to be set to 
`placement-processor` or `scheduler` in **conf/yarn-site.xml**:
+
+| Property | Description | Default value |
+|:---- |:----------- |:------------- |
+| `yarn.resourcemanager.placement-constraints.handler` | Specify which handler 
will be used to process PlacementConstraints. Acceptable values are: 
`placement-processor`, `scheduler`, and `disabled`. | `disabled` |
+
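
As a programmatic illustration of the table above (the class wrapper is
invented for the sketch; the property name and value come straight from the
table):

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class EnablePlacementConstraints {
      public static void main(String[] args) {
        YarnConfiguration conf = new YarnConfiguration();
        // Equivalent to setting the property in conf/yarn-site.xml.
        conf.set("yarn.resourcemanager.placement-constraints.handler",
            "placement-processor");
        System.out.println(
            conf.get("yarn.resourcemanager.placement-constraints.handler"));
      }
    }
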
+We now give more details about each of the three placement constraint handlers:
+
+* `placement-processor`: Using this handler, the placement of containers with 
constraints is determined as a pre-processing step before the capacity or the 
fair scheduler is called. Once the placement is decided, the capacity/fair 
scheduler is invoked to perform the actual allocation. The advantage of this 
handler is that it supports all constraint types (affinity, anti-affinity, 
cardinality). Moreover, it considers multiple containers at a time, which 
allows satisfying more constraints than a container-at-a-time approach can 
achieve. As it sits outside the main scheduler, it can be used by both the 
capacity and fair schedulers. Note that at the moment it does not account for 
task priorities within an application, given that such priorities might be 
conflicting with the placement constraints.
+* `scheduler`: Using this handler, containers with constraints will be placed 
by the main scheduler (as of now, only the capacity scheduler supports 
SchedulingRequests). It currently supports anti-affinity constraints (no 
affinity or cardinality). The advantage of this handler, when compared to the 
`placement-processor`, is that it follows the same ordering rules for queues 
(sorted by utilization, priority), apps (sorted by FIFO/fairness/priority) and 
tasks within the same app (priority) that are enforced by the existing main 
scheduler.
+* `disabled`: Using this handler, if a 

[2/4] hadoop git commit: YARN-7920. Simplify configuration for PlacementConstraints. Contributed by Wangda Tan.

2018-02-15 Thread kkaranasos
YARN-7920. Simplify configuration for PlacementConstraints. Contributed by 
Wangda Tan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0b489e56
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0b489e56
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0b489e56

Branch: refs/heads/trunk
Commit: 0b489e564ce5a50324a530e29c18aa8a75276c50
Parents: 4747395
Author: Konstantinos Karanasos 
Authored: Thu Feb 15 14:23:27 2018 -0800
Committer: Konstantinos Karanasos 
Committed: Thu Feb 15 14:23:38 2018 -0800

--
 .../hadoop/yarn/conf/YarnConfiguration.java |  54 ++-
 .../TestAMRMClientPlacementConstraints.java |   3 +-
 .../src/main/resources/yarn-default.xml |  10 +-
 .../ApplicationMasterService.java   |  46 ++-
 .../scheduler/capacity/CapacityScheduler.java   |  13 -
 .../CapacitySchedulerConfiguration.java |   5 -
 .../processor/AbstractPlacementProcessor.java   |  96 +
 .../processor/DisabledPlacementProcessor.java   |  77 
 .../processor/PlacementConstraintProcessor.java | 340 +
 .../processor/PlacementProcessor.java   | 377 ---
 .../processor/SchedulerPlacementProcessor.java  |  55 +++
 ...apacitySchedulerSchedulingRequestUpdate.java |   4 +
 ...estSchedulingRequestContainerAllocation.java |   8 +-
 ...hedulingRequestContainerAllocationAsync.java |   4 +-
 .../scheduler/capacity/TestUtils.java   |   4 +-
 .../constraint/TestPlacementProcessor.java  |  12 +-
 .../src/site/markdown/PlacementConstraints.md   | 136 +++
 .../site/markdown/PlacementConstraints.md.vm| 149 
 18 files changed, 818 insertions(+), 575 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b489e56/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 118f9fb..6677478 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -532,11 +532,57 @@ public class YarnConfiguration extends Configuration {
   public static final String RM_SCHEDULER = 
 RM_PREFIX + "scheduler.class";
 
-  /** Enable rich placement constraints. */
-  public static final String RM_PLACEMENT_CONSTRAINTS_ENABLED =
-  RM_PREFIX + "placement-constraints.enabled";
+  /**
+   * Specify which handler will be used to process PlacementConstraints.
+   * For details on PlacementConstraints, please refer to
+   * {@link org.apache.hadoop.yarn.api.resource.PlacementConstraint}
+   */
+  @Private
+  public static final String RM_PLACEMENT_CONSTRAINTS_HANDLER =
+  RM_PREFIX + "placement-constraints.handler";
+
+  /**
+   * This handler rejects all allocate calls made by an application if they
+   * contain a {@link org.apache.hadoop.yarn.api.records.SchedulingRequest}.
+   */
+  @Private
+  public static final String DISABLED_RM_PLACEMENT_CONSTRAINTS_HANDLER =
+  "disabled";
 
-  public static final boolean DEFAULT_RM_PLACEMENT_CONSTRAINTS_ENABLED = false;
+  /**
+   * Using this handler, the placement of containers with constraints is
+   * determined as a pre-processing step before the capacity or the fair
+   * scheduler is called. Once the placement is decided, the capacity/fair
+   * scheduler is invoked to perform the actual allocation. The advantage of
+   * this approach is that it supports all constraint types (affinity,
+   * anti-affinity, cardinality). Moreover, it considers multiple containers at
+   a time, which allows satisfying more constraints than a 
container-at-a-time
+   * approach can achieve. As it sits outside the main scheduler, it can be 
used
+   * by both the capacity and fair schedulers. Note that at the moment it does
+   * not account for task priorities within an application, given that such
+   * priorities might be conflicting with the placement constraints.
+   */
+  @Private
+  public static final String PROCESSOR_RM_PLACEMENT_CONSTRAINTS_HANDLER =
+  "placement-processor";
+
+  /**
+   * Using this handler, containers with constraints will be placed by the main
+   * scheduler. If the configured RM scheduler
+   * yarn.resourcemanager.scheduler.class
+   * cannot handle placement constraints, the corresponding SchedulingRequests
+   * will be rejected. As of now, only 
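
The javadoc above is cut off by the archive. As broader context for what
these handlers receive, the sketch below builds a SchedulingRequest carrying
allocation tags. It is illustrative only: the tag "zk", the sizing, and the
request id are invented, and the builder calls follow the
org.apache.hadoop.yarn.api.records API rather than code from this diff.

    import java.util.Collections;

    import org.apache.hadoop.yarn.api.records.ExecutionType;
    import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.api.records.ResourceSizing;
    import org.apache.hadoop.yarn.api.records.SchedulingRequest;

    public final class SchedulingRequestExample {
      // Builds a request for 3 containers tagged "zk" (1 GB, 1 vcore each).
      public static SchedulingRequest zkRequest() {
        return SchedulingRequest.newBuilder()
            .allocationRequestId(1L)
            .priority(Priority.newInstance(0))
            .executionType(
                ExecutionTypeRequest.newInstance(ExecutionType.GUARANTEED))
            .allocationTags(Collections.singleton("zk"))
            .resourceSizing(
                ResourceSizing.newInstance(3, Resource.newInstance(1024, 1)))
            .build();
      }
    }
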

hadoop git commit: YARN-7778. Merging of placement constraints defined at different levels. Contributed by Weiwei Yang.

2018-02-02 Thread kkaranasos
Repository: hadoop
Updated Branches:
  refs/heads/trunk b6e50fad5 -> 50723889c


YARN-7778. Merging of placement constraints defined at different levels. 
Contributed by Weiwei Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/50723889
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/50723889
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/50723889

Branch: refs/heads/trunk
Commit: 50723889cc29e8dadfa6ab6afbb90ac798d66878
Parents: b6e50fa
Author: Konstantinos Karanasos 
Authored: Fri Feb 2 14:43:54 2018 -0800
Committer: Konstantinos Karanasos 
Committed: Fri Feb 2 14:46:20 2018 -0800

--
 .../MemoryPlacementConstraintManager.java   | 42 ++
 .../constraint/PlacementConstraintManager.java  | 13 
 .../constraint/PlacementConstraintsUtil.java| 24 ++
 .../TestPlacementConstraintManagerService.java  | 82 
 ...stSingleConstraintAppPlacementAllocator.java |  5 ++
 5 files changed, 150 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/50723889/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/MemoryPlacementConstraintManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/MemoryPlacementConstraintManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/MemoryPlacementConstraintManager.java
index ceff6f6..5cb8b99 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/MemoryPlacementConstraintManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/MemoryPlacementConstraintManager.java
@@ -24,6 +24,8 @@ import java.util.Collections;
 import java.util.HashMap;
 import java.util.Map;
 import java.util.Set;
+import java.util.List;
+import java.util.ArrayList;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 import java.util.stream.Collectors;
 import java.util.stream.Stream;
@@ -33,6 +35,7 @@ import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
+import org.apache.hadoop.yarn.api.resource.PlacementConstraints;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -237,6 +240,45 @@ public class MemoryPlacementConstraintManager
   }
 
   @Override
+  public PlacementConstraint getMultilevelConstraint(ApplicationId appId,
+  Set<String> sourceTags, PlacementConstraint schedulingRequestConstraint) {
+List<PlacementConstraint> constraints = new ArrayList<>();
+// Add scheduling request-level constraint.
+if (schedulingRequestConstraint != null) {
+  constraints.add(schedulingRequestConstraint);
+}
+// Add app-level constraint if appId is given.
+if (appId != null && sourceTags != null
+&& !sourceTags.isEmpty()) {
+  constraints.add(getConstraint(appId, sourceTags));
+}
+// Add global constraint.
+if (sourceTags != null && !sourceTags.isEmpty()) {
+  constraints.add(getGlobalConstraint(sourceTags));
+}
+
+// Remove all null or duplicate constraints.
+List<PlacementConstraint.AbstractConstraint> allConstraints =
+constraints.stream()
+.filter(placementConstraint -> placementConstraint != null
+&& placementConstraint.getConstraintExpr() != null)
+.map(PlacementConstraint::getConstraintExpr)
+.distinct()
+.collect(Collectors.toList());
+
+// Compose an AND constraint
+// When merging the request (RC), app (AC) and global (GC) constraints,
+// we compose CC=AND(GC, AC, RC) and return the resulting
+// composite AND constraint. Subsequently we check if CC could
+// be satisfied. This ensures that every level of constraint
+// is satisfied.
+PlacementConstraint.And andConstraint = PlacementConstraints.and(
+allConstraints.toArray(new PlacementConstraint
+.AbstractConstraint[allConstraints.size()]));
+return andConstraint.build();
+  }
+
+  @Override
   public void unregisterApplication(ApplicationId appId) {
 try {
   writeLock.lock();
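
To make the CC=AND(GC, AC, RC) merge above concrete, here is a minimal sketch
using the public PlacementConstraints builder API (the same
PlacementConstraints.and used in the method). The tags, scopes, and
cardinality below are invented for illustration and are not part of this
commit:

    import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
    import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.RACK;
    import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.and;
    import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.maxCardinality;
    import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
    import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

    import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

    public final class MergedConstraintExample {
      public static PlacementConstraint merged() {
        // Request-level constraint (RC): no two "zk" containers on a node.
        PlacementConstraint.AbstractConstraint rc =
            targetNotIn(NODE, allocationTag("zk"));
        // App-level constraint (AC): at most 3 "zk" containers per rack.
        PlacementConstraint.AbstractConstraint ac =
            maxCardinality(RACK, 3, "zk");
        // CC = AND(AC, RC); getMultilevelConstraint would also fold in the
        // global constraint (GC) when one is registered.
        return and(rc, ac).build();
      }
    }
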


hadoop git commit: YARN-7788. Factor out management of temp tags from AllocationTagsManager. (Arun Suresh via kkaranasos)

2018-01-22 Thread kkaranasos
Repository: hadoop
Updated Branches:
  refs/heads/YARN-6592 1612832d8 -> 904e0232e


YARN-7788. Factor out management of temp tags from AllocationTagsManager. (Arun 
Suresh via kkaranasos)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/904e0232
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/904e0232
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/904e0232

Branch: refs/heads/YARN-6592
Commit: 904e0232eec74a462a0070ea889a37bc8da8ab13
Parents: 1612832
Author: Konstantinos Karanasos <kkarana...@apache.org>
Authored: Mon Jan 22 23:51:02 2018 -0800
Committer: Konstantinos Karanasos <kkarana...@apache.org>
Committed: Mon Jan 22 23:51:02 2018 -0800

--
 .../constraint/AllocationTagsManager.java   | 110 +++-
 .../algorithm/DefaultPlacementAlgorithm.java|   8 +-
 .../algorithm/LocalAllocationTagsManager.java   | 167 +++
 .../constraint/TestAllocationTagsManager.java   |  82 -
 .../TestLocalAllocationTagsManager.java | 139 +++
 5 files changed, 336 insertions(+), 170 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/904e0232/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
index 962e548..7ad5e8c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
@@ -24,17 +24,14 @@ import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.SchedulingRequest;
-import org.apache.hadoop.yarn.api.resource.PlacementConstraints;
 import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
 import org.apache.log4j.Logger;
 
 import java.util.HashMap;
-import java.util.HashSet;
 import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
@@ -61,9 +58,6 @@ public class AllocationTagsManager {
   // Application's tags to Rack
   private Map<ApplicationId, TypeToCountedTags> perAppRackMappings =
   new HashMap<>();
-  // Application's Temporary containers mapping
-  private Map<ApplicationId, Map<NodeId, Map<ContainerId, Set<String>>>>
-  appTempMappings = new HashMap<>();
 
   // Global tags to node mapping (used to fast return aggregated tags
   // cardinality across apps)
@@ -76,7 +70,7 @@ public class AllocationTagsManager {
* Currently used both for NodeId to Tag, Count and Rack to Tag, Count
*/
   @VisibleForTesting
-  static class TypeToCountedTags<T> {
+  public static class TypeToCountedTags<T> {
 // Map<Type, Map<Tag, Count>>
 private Map<T, Map<String, Long>> typeToTagsWithCount = new HashMap<>();
 
@@ -214,7 +208,7 @@ public class AllocationTagsManager {
   }
 
   @VisibleForTesting
-  Map<ApplicationId, TypeToCountedTags> getPerAppNodeMappings() {
+  public Map<ApplicationId, TypeToCountedTags> getPerAppNodeMappings() {
 return perAppNodeMappings;
   }
 
@@ -233,12 +227,6 @@ public class AllocationTagsManager {
 return globalRackMapping;
   }
 
-  @VisibleForTesting
-  public Map<NodeId, Map<ContainerId, Set<String>>> getAppTempMappings(
-  ApplicationId applicationId) {
-return appTempMappings.get(applicationId);
-  }
-
   public AllocationTagsManager(RMContext context) {
 ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
 readLock = lock.readLock();
@@ -246,39 +234,6 @@ public class AllocationTagsManager {
 rmContext = context;
   }
 
-  //
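
For intuition, TypeToCountedTags<T> is a two-level counter (type -> tag ->
count). A hypothetical, simplified stand-in for the bookkeeping it performs
might look like this; the names and structure are illustrative, not the
committed code:

    import java.util.HashMap;
    import java.util.Map;

    // Counts allocation tags per "type" (a node or a rack in the real class).
    class TagCounter<T> {
      private final Map<T, Map<String, Long>> typeToTagsWithCount =
          new HashMap<>();

      void addTag(T type, String tag) {
        typeToTagsWithCount.computeIfAbsent(type, k -> new HashMap<>())
            .merge(tag, 1L, Long::sum);
      }

      void removeTag(T type, String tag) {
        Map<String, Long> tags = typeToTagsWithCount.get(type);
        if (tags == null) {
          return;
        }
        // Drop the entry when its count reaches zero.
        tags.computeIfPresent(tag, (k, v) -> v > 1 ? v - 1 : null);
        if (tags.isEmpty()) {
          typeToTagsWithCount.remove(type);
        }
      }

      long getCardinality(T type, String tag) {
        return typeToTagsWithCount.getOrDefault(type, new HashMap<>())
            .getOrDefault(tag, 0L);
      }
    }
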

[35/50] [abbrv] hadoop git commit: HDFS-12893. [READ] Support replication of Provided blocks with non-default topologies.

2017-12-18 Thread kkaranasos
HDFS-12893. [READ] Support replication of Provided blocks with non-default 
topologies.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c89b29bd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c89b29bd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c89b29bd

Branch: refs/heads/YARN-6592
Commit: c89b29bd421152f0e7e16936f18d9e852895c37a
Parents: 0f6aa95
Author: Virajith Jalaparti 
Authored: Fri Dec 8 14:52:48 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:41 2017 -0800

--
 .../server/blockmanagement/BlockManager.java| 30 +++-
 .../blockmanagement/DatanodeStorageInfo.java| 11 +++--
 .../blockmanagement/ProvidedStorageMap.java | 18 ++-
 .../TestNameNodeProvidedImplementation.java | 49 ++--
 4 files changed, 97 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c89b29bd/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 916cbaa..c1cd4db 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -2151,6 +2151,22 @@ public class BlockManager implements BlockStatsMXBean {
   }
 
   /**
+   * Get the associated {@link DatanodeDescriptor} for the storage.
+   * If the storage is of type PROVIDED, one of the nodes that reported
+   * PROVIDED storage is returned. If not, this is equivalent to
+   * {@code storage.getDatanodeDescriptor()}.
+   * @param storage the datanode storage to resolve
+   * @return the associated {@link DatanodeDescriptor}.
+   */
+  private DatanodeDescriptor getDatanodeDescriptorFromStorage(
+  DatanodeStorageInfo storage) {
+if (storage.getStorageType() == StorageType.PROVIDED) {
+  return providedStorageMap.chooseProvidedDatanode();
+}
+return storage.getDatanodeDescriptor();
+  }
+
+  /**
* Parse the data-nodes the block belongs to and choose a certain number
* from them to be the recovery sources.
*
@@ -2198,10 +2214,14 @@ public class BlockManager implements BlockStatsMXBean {
 BitSet bitSet = isStriped ?
 new BitSet(((BlockInfoStriped) block).getTotalBlockNum()) : null;
 for (DatanodeStorageInfo storage : blocksMap.getStorages(block)) {
-  final DatanodeDescriptor node = storage.getDatanodeDescriptor();
+  final DatanodeDescriptor node = 
getDatanodeDescriptorFromStorage(storage);
   final StoredReplicaState state = checkReplicaOnStorage(numReplicas, 
block,
   storage, corruptReplicas.getNodes(block), false);
   if (state == StoredReplicaState.LIVE) {
+if (storage.getStorageType() == StorageType.PROVIDED) {
+  storage = new DatanodeStorageInfo(node, storage.getStorageID(),
+  storage.getStorageType(), storage.getState());
+}
 nodesContainingLiveReplicas.add(storage);
   }
   containingNodes.add(node);
@@ -4338,7 +4358,13 @@ public class BlockManager implements BlockStatsMXBean {
Collection<DatanodeDescriptor> corruptNodes = corruptReplicas
 .getNodes(storedBlock);
 for (DatanodeStorageInfo storage : blocksMap.getStorages(storedBlock)) {
-  final DatanodeDescriptor cur = storage.getDatanodeDescriptor();
+  if (storage.getStorageType() == StorageType.PROVIDED
+  && storage.getState() == State.NORMAL) {
+// assume the policy is satisfied for blocks on PROVIDED storage
+// as long as the storage is in normal state.
+return true;
+  }
+  final DatanodeDescriptor cur = getDatanodeDescriptorFromStorage(storage);
   // Nodes under maintenance should be counted as valid replicas from
   // rack policy point of view.
   if (!cur.isDecommissionInProgress() && !cur.isDecommissioned()

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c89b29bd/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
index 76bf915..3a56ef1 100644
--- 

[40/50] [abbrv] hadoop git commit: YARN-7617. Add a flag in distributed shell to automatically PROMOTE opportunistic containers to guaranteed once they are started. Contributed by Weiwei Yang.

2017-12-18 Thread kkaranasos
YARN-7617. Add a flag in distributed shell to automatically PROMOTE 
opportunistic containers to guaranteed once they are started. Contributed by 
Weiwei Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/92896410
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/92896410
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/92896410

Branch: refs/heads/YARN-6592
Commit: 928964102029e96406f5482e8900802f38164501
Parents: 5e81f32
Author: Weiwei Yang 
Authored: Mon Dec 18 10:07:16 2017 +0800
Committer: Weiwei Yang 
Committed: Mon Dec 18 10:07:16 2017 +0800

--
 .../distributedshell/ApplicationMaster.java | 49 +++-
 .../applications/distributedshell/Client.java   | 11 +
 .../site/markdown/OpportunisticContainers.md.vm |  2 +-
 3 files changed, 59 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/92896410/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
index 926de50..b3fa0ff 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
@@ -93,6 +93,8 @@ import org.apache.hadoop.yarn.api.records.URL;
 import org.apache.hadoop.yarn.api.records.UpdatedContainer;
 import org.apache.hadoop.yarn.api.records.ExecutionType;
 import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
+import org.apache.hadoop.yarn.api.records.UpdateContainerRequest;
+import org.apache.hadoop.yarn.api.records.ContainerUpdateType;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntityGroupId;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
@@ -247,6 +249,8 @@ public class ApplicationMaster {
   // Execution type of the containers.
   // Default GUARANTEED.
   private ExecutionType containerType = ExecutionType.GUARANTEED;
+  // Whether to automatically promote opportunistic containers.
+  private boolean autoPromoteContainers = false;
 
   // Resource profile for the container
   private String containerResourceProfile = "";
@@ -420,6 +424,9 @@ public class ApplicationMaster {
 "Environment for shell script. Specified as env_key=env_val pairs");
 opts.addOption("container_type", true,
 "Container execution type, GUARANTEED or OPPORTUNISTIC");
+opts.addOption("promote_opportunistic_after_start", false,
+"Flag to indicate whether to automatically promote opportunistic"
++ " containers to guaranteed.");
 opts.addOption("container_memory", true,
 "Amount of memory in MB to be requested to run the shell command");
 opts.addOption("container_vcores", true,
@@ -576,6 +583,9 @@ public class ApplicationMaster {
   }
   containerType = ExecutionType.valueOf(containerTypeStr);
 }
+if (cliParser.hasOption("promote_opportunistic_after_start")) {
+  autoPromoteContainers = true;
+}
 containerMemory = Integer.parseInt(cliParser.getOptionValue(
 "container_memory", "-1"));
 containerVirtualCores = Integer.parseInt(cliParser.getOptionValue(
@@ -977,7 +987,24 @@ public class ApplicationMaster {
 
 @Override
 public void onContainersUpdated(
-List<UpdatedContainer> containers) {}
+List<UpdatedContainer> containers) {
+  for (UpdatedContainer container : containers) {
+LOG.info("Container {} updated, updateType={}, resource={}, "
++ "execType={}",
+container.getContainer().getId(),
+container.getUpdateType().toString(),
+container.getContainer().getResource().toString(),
+container.getContainer().getExecutionType());
+
+// TODO Remove this line with finalized updateContainer API.
+// Currently nm client needs to notify the NM to update container
+// execution type via NMClient#updateContainerResource() or
+// 
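
Putting the options together, a distributed shell run that requests
OPPORTUNISTIC containers and auto-promotes them once started could look
roughly like the command below. The jar path and counts are placeholders;
-container_type and -promote_opportunistic_after_start are the options shown
in this diff:

    $ yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
        -jar <path-to-hadoop-yarn-applications-distributedshell-jar> \
        -shell_command sleep -shell_args 30 -num_containers 2 \
        -container_type OPPORTUNISTIC \
        -promote_opportunistic_after_start
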

[28/50] [abbrv] hadoop git commit: HDFS-12712. [9806] Code style cleanup

2017-12-18 Thread kkaranasos
HDFS-12712. [9806] Code style cleanup


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8239e3af
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8239e3af
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8239e3af

Branch: refs/heads/YARN-6592
Commit: 8239e3afb31d3c4485817d4b8b8b195b554acbe7
Parents: 80c3fec
Author: Virajith Jalaparti 
Authored: Fri Dec 15 10:15:15 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:41 2017 -0800

--
 .../hadoop/hdfs/protocol/HdfsConstants.java |   1 -
 .../hadoop/hdfs/protocol/LocatedBlock.java  |  59 +-
 .../hdfs/server/blockmanagement/BlockInfo.java  |   2 +-
 .../server/blockmanagement/BlockManager.java|   5 +-
 .../server/blockmanagement/DatanodeManager.java |   2 +-
 .../blockmanagement/ProvidedStorageMap.java |   4 +-
 .../hadoop/hdfs/server/common/Storage.java  |   6 +-
 .../impl/TextFileRegionAliasMap.java|   2 +-
 .../server/datanode/BlockPoolSliceStorage.java  |   3 +-
 .../hdfs/server/datanode/DataStorage.java   |   4 +-
 .../hdfs/server/datanode/ProvidedReplica.java   |   1 -
 .../hdfs/server/datanode/StorageLocation.java   |  12 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |   6 +-
 .../fsdataset/impl/ProvidedVolumeImpl.java  |  21 +-
 .../hadoop/hdfs/TestBlockStoragePolicy.java |   8 +-
 .../blockmanagement/TestDatanodeManager.java|   5 +-
 .../blockmanagement/TestProvidedStorageMap.java |  12 +-
 .../datanode/TestProvidedReplicaImpl.java   |  13 +-
 .../fsdataset/impl/TestProvidedImpl.java|  64 +-
 hadoop-tools/hadoop-fs2img/pom.xml  |   4 +-
 .../hdfs/server/namenode/FileSystemImage.java   |   3 +-
 .../hdfs/server/namenode/ImageWriter.java   |   7 +-
 .../hdfs/server/namenode/SingleUGIResolver.java |   4 +-
 .../hadoop/hdfs/server/namenode/TreePath.java   |   3 +-
 .../namenode/ITestProvidedImplementation.java   | 927 ++
 .../hdfs/server/namenode/RandomTreeWalk.java|   4 +-
 .../TestNameNodeProvidedImplementation.java | 934 ---
 27 files changed, 1040 insertions(+), 1076 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8239e3af/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index e9e6103..fd7f9e0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -47,7 +47,6 @@ public final class HdfsConstants {
   public static final String WARM_STORAGE_POLICY_NAME = "WARM";
   public static final byte COLD_STORAGE_POLICY_ID = 2;
   public static final String COLD_STORAGE_POLICY_NAME = "COLD";
-  // branch HDFS-9806 XXX temporary until HDFS-7076
   public static final byte PROVIDED_STORAGE_POLICY_ID = 1;
   public static final String PROVIDED_STORAGE_POLICY_NAME = "PROVIDED";
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8239e3af/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
index 5ad0bca..29f1b6d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs.protocol;
 
+import java.io.Serializable;
 import java.util.Arrays;
 import java.util.Comparator;
 import java.util.List;
@@ -40,6 +41,32 @@ import com.google.common.collect.Lists;
 @InterfaceStability.Evolving
 public class LocatedBlock {
 
+  /**
+   * Comparator that ensures that a PROVIDED storage type is greater than any
+   * other storage type. Any other storage types are considered equal.
+   */
+  private static class ProvidedLastComparator
+  implements Comparator<DatanodeInfoWithStorage>, Serializable {
+
+private static final long serialVersionUID = 6441720011443190984L;
+
+@Override
+public int compare(DatanodeInfoWithStorage dns1,
+DatanodeInfoWithStorage dns2) {
+  if 
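
The compare body is truncated by the archive. Going only by the javadoc above
(a PROVIDED storage type is greater than any other; all others are equal), a
body with those semantics could read as follows; this is a sketch, not
necessarily the committed code, and it assumes the imports already present in
LocatedBlock.java:

    @Override
    public int compare(DatanodeInfoWithStorage dns1,
        DatanodeInfoWithStorage dns2) {
      boolean firstProvided = StorageType.PROVIDED.equals(dns1.getStorageType());
      boolean secondProvided = StorageType.PROVIDED.equals(dns2.getStorageType());
      // true sorts after false, so a PROVIDED storage compares greater than
      // any other; two non-PROVIDED (or two PROVIDED) storages compare equal.
      return Boolean.compare(firstProvided, secondProvided);
    }
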

[27/50] [abbrv] hadoop git commit: HDFS-12712. [9806] Code style cleanup

2017-12-18 Thread kkaranasos
http://git-wip-us.apache.org/repos/asf/hadoop/blob/8239e3af/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
deleted file mode 100644
index 1023616..000
--- 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ /dev/null
@@ -1,934 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hdfs.server.namenode;
-
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.io.OutputStreamWriter;
-import java.io.Writer;
-import java.net.InetSocketAddress;
-import java.nio.ByteBuffer;
-import java.nio.channels.Channels;
-import java.nio.channels.ReadableByteChannel;
-import java.nio.file.Files;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.Random;
-import java.util.Set;
-
-import org.apache.commons.io.FileUtils;
-import org.apache.hadoop.fs.BlockLocation;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.FileUtil;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.StorageType;
-import org.apache.hadoop.hdfs.DFSClient;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
-import org.apache.hadoop.hdfs.DFSTestUtil;
-import org.apache.hadoop.hdfs.DistributedFileSystem;
-import org.apache.hadoop.hdfs.HdfsConfiguration;
-import org.apache.hadoop.hdfs.MiniDFSCluster;
-import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
-import org.apache.hadoop.hdfs.protocol.LocatedBlock;
-import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
-import org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap;
-import org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer;
-import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
-import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
-import org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil;
-import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
-import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager;
-import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStatistics;
-import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
-import org.apache.hadoop.hdfs.server.blockmanagement.ProvidedStorageMap;
-import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
-import 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.InMemoryLevelDBAliasMapClient;
-import 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap;
-import org.apache.hadoop.hdfs.server.datanode.DataNode;
-
-import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_REPLICATOR_CLASSNAME_KEY;
-import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY;
-
-import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
-import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl;
-import org.apache.hadoop.hdfs.server.protocol.StorageReport;
-import org.apache.hadoop.net.NodeBase;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.TestName;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import static 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap.fileNameFromBlockPoolID;
-import static org.apache.hadoop.net.NodeBase.PATH_SEPARATOR_STR;
-import static org.junit.Assert.*;
-
-public class TestNameNodeProvidedImplementation {
-
-  @Rule public TestName name = new TestName();
-  public static final Logger LOG =
-  LoggerFactory.getLogger(TestNameNodeProvidedImplementation.class);
-
-  final Random r = new Random();
-  final File fBASE = new 

[22/50] [abbrv] hadoop git commit: HDFS-12809. [READ] Fix the randomized selection of locations in {{ProvidedBlocksBuilder}}.

2017-12-18 Thread kkaranasos
HDFS-12809. [READ] Fix the randomized selection of locations in 
{{ProvidedBlocksBuilder}}.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4d59dabb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4d59dabb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4d59dabb

Branch: refs/heads/YARN-6592
Commit: 4d59dabb7f6ef1d8565bf2bb2d38aeb91bf7f7cc
Parents: 3d3be87
Author: Virajith Jalaparti 
Authored: Mon Nov 27 17:04:20 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:40 2017 -0800

--
 .../blockmanagement/ProvidedStorageMap.java | 112 +++
 .../TestNameNodeProvidedImplementation.java |  26 -
 2 files changed, 61 insertions(+), 77 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4d59dabb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
index 6fec977..c85eb2c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -19,11 +19,12 @@ package org.apache.hadoop.hdfs.server.blockmanagement;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
-import java.util.Map;
 import java.util.NavigableMap;
+import java.util.Random;
 import java.util.Set;
 import java.util.UUID;
 import java.util.concurrent.ConcurrentSkipListMap;
@@ -229,11 +230,8 @@ public class ProvidedStorageMap {
 sids.add(currInfo.getStorageID());
 types.add(storageType);
 if (StorageType.PROVIDED.equals(storageType)) {
-  DatanodeDescriptor dn = chooseProvidedDatanode(excludedUUids);
-  locs.add(
-  new DatanodeInfoWithStorage(
-  dn, currInfo.getStorageID(), currInfo.getStorageType()));
-  excludedUUids.add(dn.getDatanodeUuid());
+  // Provided location will be added to the list of locations after
+  // examining all local locations.
   isProvidedBlock = true;
 } else {
   locs.add(new DatanodeInfoWithStorage(
@@ -245,11 +243,17 @@ public class ProvidedStorageMap {
 
   int numLocations = locs.size();
   if (isProvidedBlock) {
+// add the first datanode here
+DatanodeDescriptor dn = chooseProvidedDatanode(excludedUUids);
+locs.add(
+new DatanodeInfoWithStorage(dn, storageId, StorageType.PROVIDED));
+excludedUUids.add(dn.getDatanodeUuid());
+numLocations++;
 // add more replicas until we reach the defaultReplication
 for (int count = numLocations + 1;
 count <= defaultReplication && count <= providedDescriptor
 .activeProvidedDatanodes(); count++) {
-  DatanodeDescriptor dn = chooseProvidedDatanode(excludedUUids);
+  dn = chooseProvidedDatanode(excludedUUids);
   locs.add(new DatanodeInfoWithStorage(
   dn, storageId, StorageType.PROVIDED));
   sids.add(storageId);
@@ -284,6 +288,9 @@ public class ProvidedStorageMap {
 
private final NavigableMap<String, DatanodeDescriptor> dns =
 new ConcurrentSkipListMap<>();
+// maintain a separate list of the datanodes with provided storage
+// to efficiently choose Datanodes when required.
+private final List<DatanodeDescriptor> dnR = new ArrayList<>();
 public final static String NETWORK_LOCATION = "/REMOTE";
 public final static String NAME = "PROVIDED";
 
@@ -300,8 +307,8 @@ public class ProvidedStorageMap {
 
 DatanodeStorageInfo getProvidedStorage(
 DatanodeDescriptor dn, DatanodeStorage s) {
-  LOG.info("X adding Datanode " + dn.getDatanodeUuid());
   dns.put(dn.getDatanodeUuid(), dn);
+  dnR.add(dn);
   // TODO: maintain separate RPC ident per dn
   return storageMap.get(s.getStorageID());
 }
@@ -315,84 +322,42 @@ public class ProvidedStorageMap {
 }
 
 DatanodeDescriptor choose(DatanodeDescriptor client) {
-  // exact match for now
-  DatanodeDescriptor dn = client != null ?
-  dns.get(client.getDatanodeUuid()) : null;
-  if (null == dn) {
-dn = chooseRandom();
-  }
-  return dn;
+  return choose(client, Collections.emptySet());
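
For intuition, the exclusion-aware random choice that choose(...) delegates
to could be sketched as below, using the dnR list introduced in this diff;
the method body here is illustrative, not the committed code:

    private final Random random = new Random();

    DatanodeDescriptor chooseRandom(Set<String> excludedUUids) {
      // Collect provided-storage datanodes whose UUID is not excluded.
      List<DatanodeDescriptor> candidates = new ArrayList<>();
      for (DatanodeDescriptor dn : dnR) {
        if (!excludedUUids.contains(dn.getDatanodeUuid())) {
          candidates.add(dn);
        }
      }
      if (candidates.isEmpty()) {
        return null;
      }
      return candidates.get(random.nextInt(candidates.size()));
    }
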
 

[43/50] [abbrv] hadoop git commit: HADOOP-13974. S3Guard CLI to support list/purge of pending multipart commits. Contributed by Aaron Fabbri

2017-12-18 Thread kkaranasos
HADOOP-13974. S3Guard CLI to support list/purge of pending multipart commits.
Contributed by Aaron Fabbri


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/35ad9b1d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/35ad9b1d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/35ad9b1d

Branch: refs/heads/YARN-6592
Commit: 35ad9b1dd279b769381ea1625d9bf776c309c5cb
Parents: 94576b1
Author: Steve Loughran 
Authored: Mon Dec 18 21:18:52 2017 +
Committer: Steve Loughran 
Committed: Mon Dec 18 21:19:06 2017 +

--
 .../java/org/apache/hadoop/security/KDiag.java  |  30 +-
 .../java/org/apache/hadoop/fs/s3a/Invoker.java  |   7 +-
 .../apache/hadoop/fs/s3a/MultipartUtils.java| 214 ++
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java |  30 +-
 .../java/org/apache/hadoop/fs/s3a/S3AUtils.java |   3 +-
 .../hadoop/fs/s3a/WriteOperationHelper.java |   5 +-
 .../hadoop/fs/s3a/commit/CommitOperations.java  |   2 +-
 .../fs/s3a/commit/MagicCommitIntegration.java   |   2 +-
 .../hadoop/fs/s3a/s3guard/S3GuardTool.java  | 287 +--
 .../src/site/markdown/tools/hadoop-aws/index.md |   7 +-
 .../site/markdown/tools/hadoop-aws/s3guard.md   |  35 ++-
 .../hadoop/fs/s3a/ITestS3AMultipartUtils.java   | 126 
 .../apache/hadoop/fs/s3a/MockS3AFileSystem.java |   7 +
 .../hadoop/fs/s3a/MultipartTestUtils.java   | 184 
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java  |  21 +-
 .../fs/s3a/commit/AbstractCommitITest.java  |   3 +-
 .../commit/magic/ITestS3AHugeMagicCommits.java  |   2 +-
 .../fs/s3a/s3guard/ITestS3GuardToolLocal.java   | 187 
 18 files changed, 1082 insertions(+), 70 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/35ad9b1d/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KDiag.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KDiag.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KDiag.java
index c8d0b33..b4e535c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KDiag.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KDiag.java
@@ -81,6 +81,11 @@ public class KDiag extends Configured implements Tool, 
Closeable {
* variable. This is what kinit will use by default: {@value}
*/
   public static final String KRB5_CCNAME = "KRB5CCNAME";
+  /**
+   * Location of main kerberos configuration file as passed down via an
+   * environment variable.
+   */
+  public static final String KRB5_CONFIG = "KRB5_CONFIG";
   public static final String JAVA_SECURITY_KRB5_CONF
 = "java.security.krb5.conf";
   public static final String JAVA_SECURITY_KRB5_REALM
@@ -321,14 +326,15 @@ public class KDiag extends Configured implements Tool, 
Closeable {
 
 title("Environment Variables");
 for (String env : new String[]{
-  HADOOP_JAAS_DEBUG,
-  KRB5_CCNAME,
-  HADOOP_USER_NAME,
-  HADOOP_PROXY_USER,
-  HADOOP_TOKEN_FILE_LOCATION,
-  "HADOOP_SECURE_LOG",
-  "HADOOP_OPTS",
-  "HADOOP_CLIENT_OPTS",
+HADOOP_JAAS_DEBUG,
+KRB5_CCNAME,
+KRB5_CONFIG,
+HADOOP_USER_NAME,
+HADOOP_PROXY_USER,
+HADOOP_TOKEN_FILE_LOCATION,
+"HADOOP_SECURE_LOG",
+"HADOOP_OPTS",
+"HADOOP_CLIENT_OPTS",
 }) {
   printEnv(env);
 }
@@ -562,14 +568,14 @@ public class KDiag extends Configured implements Tool, 
Closeable {
 krbPath = jvmKrbPath;
   }
 
-  String krb5name = System.getenv(KRB5_CCNAME);
+  String krb5name = System.getenv(KRB5_CONFIG);
   if (krb5name != null) {
 println("Setting kerberos path from environment variable %s: \"%s\"",
-  KRB5_CCNAME, krb5name);
+KRB5_CONFIG, krb5name);
 krbPath = krb5name;
 if (jvmKrbPath != null) {
   println("Warning - both %s and %s were set - %s takes priority",
-JAVA_SECURITY_KRB5_CONF, KRB5_CCNAME, KRB5_CCNAME);
+  JAVA_SECURITY_KRB5_CONF, KRB5_CONFIG, KRB5_CONFIG);
 }
   }
 
@@ -919,7 +925,7 @@ public class KDiag extends Configured implements Tool, 
Closeable {
   private void dump(File file) throws IOException {
 try (FileInputStream in = new FileInputStream(file)) {
   for (String line : IOUtils.readLines(in)) {
-println(line);
+println("%s", line);
   }
 }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/35ad9b1d/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java

[20/50] [abbrv] hadoop git commit: HDFS-12665. [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb). Contributed by Ewan Higgs.

2017-12-18 Thread kkaranasos
http://git-wip-us.apache.org/repos/asf/hadoop/blob/352f994b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
index 1ef2f2b..faf1f83 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
@@ -28,7 +28,6 @@ import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
 import org.apache.hadoop.hdfs.util.RwLock;
 import org.junit.Before;
 import org.junit.Test;
-
 import java.io.IOException;
 
 import static org.junit.Assert.assertNotNull;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/352f994b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestInMemoryLevelDBAliasMapClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestInMemoryLevelDBAliasMapClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestInMemoryLevelDBAliasMapClient.java
new file mode 100644
index 000..4a9661b
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestInMemoryLevelDBAliasMapClient.java
@@ -0,0 +1,341 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.common.blockaliasmap.impl;
+
+import com.google.common.collect.Lists;
+import com.google.common.io.Files;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.List;
+import java.util.Optional;
+import java.util.Random;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.stream.Collectors;
+
+/**
+ * Tests the {@link InMemoryLevelDBAliasMapClient}.
+ */
+public class TestInMemoryLevelDBAliasMapClient {
+
+  private InMemoryLevelDBAliasMapServer levelDBAliasMapServer;
+  private InMemoryLevelDBAliasMapClient inMemoryLevelDBAliasMapClient;
+  private File tempDir;
+  private Configuration conf;
+
+  @Before
+  public void setUp() throws IOException {
+levelDBAliasMapServer =
+new InMemoryLevelDBAliasMapServer(InMemoryAliasMap::init);
+conf = new Configuration();
+int port = 9876;
+
+conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS,
+"localhost:" + port);
+tempDir = Files.createTempDir();
+conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_LEVELDB_DIR,
+tempDir.getAbsolutePath());
+inMemoryLevelDBAliasMapClient = new InMemoryLevelDBAliasMapClient();
+  }
+
+  @After
+  public void tearDown() throws IOException {
+levelDBAliasMapServer.close();
+inMemoryLevelDBAliasMapClient.close();
+FileUtils.deleteDirectory(tempDir);
+  }
+
+  @Test
+  public void 

[32/50] [abbrv] hadoop git commit: HDFS-12903. [READ] Fix closing streams in ImageWriter

2017-12-18 Thread kkaranasos
HDFS-12903. [READ] Fix closing streams in ImageWriter


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/962b5e72
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/962b5e72
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/962b5e72

Branch: refs/heads/YARN-6592
Commit: 962b5e722ba86d1c012be11280c6b8fb5e0a2043
Parents: 71ec170
Author: Virajith Jalaparti 
Authored: Thu Dec 7 14:21:24 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:41 2017 -0800

--
 .../java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/962b5e72/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
 
b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
index 0abc7a7..c21c282 100644
--- 
a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
+++ 
b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
@@ -183,9 +183,9 @@ public class ImageWriter implements Closeable {
   dirsTmp.deleteOnExit();
   dirsTmpStream = new FileOutputStream(dirsTmp);
   dirs = beginSection(dirsTmpStream);
-} catch (IOException e) {
+} catch (Throwable e) {
   IOUtils.cleanupWithLogger(null, raw, dirsTmpStream);
-  throw e;
+  throw new IOException(e);
 }
 
 try {





[08/50] [abbrv] hadoop git commit: HDFS-11673. [READ] Handle failures of Datanode with PROVIDED storage

2017-12-18 Thread kkaranasos
HDFS-11673. [READ] Handle failures of Datanode with PROVIDED storage


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/546b95f4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/546b95f4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/546b95f4

Branch: refs/heads/YARN-6592
Commit: 546b95f4843f3cbbbdf72d90d202cad551696082
Parents: 55ade54
Author: Virajith Jalaparti 
Authored: Thu Jun 1 16:01:31 2017 -0700
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:38 2017 -0800

--
 .../hdfs/server/blockmanagement/BlockInfo.java  | 12 +++-
 .../server/blockmanagement/BlockManager.java|  5 +-
 .../server/blockmanagement/BlockProvider.java   | 18 +++--
 .../blockmanagement/ProvidedStorageMap.java | 54 +--
 .../blockmanagement/TestProvidedStorageMap.java | 10 ++-
 .../TestNameNodeProvidedImplementation.java | 72 +++-
 6 files changed, 150 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/546b95f4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index e9d235c..eb09b7b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -24,6 +24,7 @@ import java.util.NoSuchElementException;
 
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.BlockType;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
@@ -188,8 +189,15 @@ public abstract class BlockInfo extends Block
 int len = getCapacity();
 for(int idx = 0; idx < len; idx++) {
   DatanodeStorageInfo cur = getStorageInfo(idx);
-  if(cur != null && cur.getDatanodeDescriptor() == dn) {
-return cur;
+  if(cur != null) {
+if (cur.getStorageType() == StorageType.PROVIDED) {
+  //if block resides on provided storage, only match the storage ids
+  if (dn.getStorageInfo(cur.getStorageID()) != null) {
+return cur;
+  }
+} else if (cur.getDatanodeDescriptor() == dn) {
+  return cur;
+}
   }
 }
 return null;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/546b95f4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 0e3eab3..07502c1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1514,6 +1514,7 @@ public class BlockManager implements BlockStatsMXBean {

   /** Remove the blocks associated to the given datanode. */
   void removeBlocksAssociatedTo(final DatanodeDescriptor node) {
+providedStorageMap.removeDatanode(node);
 for (DatanodeStorageInfo storage : node.getStorageInfos()) {
   final Iterator<BlockInfo> it = storage.getBlockIterator();
   //add the BlockInfos to a new collection as the
@@ -2462,7 +2463,7 @@ public class BlockManager implements BlockStatsMXBean {
   // !#! Register DN with provided storage, not with storage owned by DN
   // !#! DN should still have a ref to the DNStorageInfo
   DatanodeStorageInfo storageInfo =
-  providedStorageMap.getStorage(node, storage);
+  providedStorageMap.getStorage(node, storage, context);
 
   if (storageInfo == null) {
 // We handle this for backwards compatibility.
@@ -2589,7 +2590,7 @@ public class BlockManager implements BlockStatsMXBean {
 }
   }
   
-  private Collection<Block> processReport(
+  Collection<Block> processReport(
   final DatanodeStorageInfo storageInfo,
   final BlockListAsLongs report,
   BlockReportContext context) throws IOException {


[29/50] [abbrv] hadoop git commit: HDFS-11640. [READ] Datanodes should use a unique identifier when reading from external stores

2017-12-18 Thread kkaranasos
HDFS-11640. [READ] Datanodes should use a unique identifier when reading from 
external stores


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4531588a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4531588a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4531588a

Branch: refs/heads/YARN-6592
Commit: 4531588a94dcd2b4141b12828cb60ca3b953a58c
Parents: fb996a3
Author: Virajith Jalaparti 
Authored: Wed Dec 6 09:39:56 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:41 2017 -0800

--
 .../hadoop/hdfs/server/common/FileRegion.java   |  7 ++-
 .../impl/TextFileRegionAliasMap.java| 16 --
 .../datanode/FinalizedProvidedReplica.java  | 20 ---
 .../hdfs/server/datanode/ProvidedReplica.java   | 34 ++--
 .../hdfs/server/datanode/ReplicaBuilder.java| 12 -
 .../fsdataset/impl/ProvidedVolumeImpl.java  |  9 
 .../datanode/TestProvidedReplicaImpl.java   |  2 +-
 .../fsdataset/impl/TestProvidedImpl.java| 57 
 .../hadoop/hdfs/server/namenode/FSTreeWalk.java |  6 +--
 .../hdfs/server/namenode/ImageWriter.java   |  2 +-
 .../hadoop/hdfs/server/namenode/TreePath.java   | 40 ++
 .../hdfs/server/namenode/RandomTreeWalk.java|  6 +--
 12 files changed, 174 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4531588a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
index e6f0d0a..b605234 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
@@ -37,8 +37,13 @@ public class FileRegion implements BlockAlias {
 
   public FileRegion(long blockId, Path path, long offset,
   long length, long genStamp) {
+this(blockId, path, offset, length, genStamp, new byte[0]);
+  }
+
+  public FileRegion(long blockId, Path path, long offset,
+long length, long genStamp, byte[] nonce) {
 this(new Block(blockId, length, genStamp),
-new ProvidedStorageLocation(path, offset, length, new byte[0]));
+new ProvidedStorageLocation(path, offset, length, nonce));
   }
 
   public FileRegion(long blockId, Path path, long offset, long length) {

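A minimal sketch of how a caller could attach a nonce (say, an object-store ETag) through the new constructor overload above; the concrete values and the class name are hypothetical:

import java.nio.charset.StandardCharsets;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.server.common.FileRegion;

public class FileRegionNonceSketch {
  public static void main(String[] args) {
    // an ETag-style nonce a datanode can later use to check it is reading
    // the same version of the external file
    byte[] nonce = "etag-9f86d08".getBytes(StandardCharsets.UTF_8);
    FileRegion region = new FileRegion(
        1073741825L,                           // blockId
        new Path("s3a://bucket/data/part-0"),  // external file (hypothetical)
        0L,                                    // offset into that file
        134217728L,                            // block length
        1001L,                                 // generation stamp
        nonce);
    System.out.println(region);
  }
}
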
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4531588a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
index 878a208..150371d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
@@ -26,6 +26,7 @@ import java.io.InputStream;
 import java.io.InputStreamReader;
 import java.io.OutputStream;
 import java.io.OutputStreamWriter;
+import java.nio.charset.Charset;
 import java.util.ArrayList;
 import java.util.Iterator;
 import java.util.Map;
@@ -353,11 +354,16 @@ public class TextFileRegionAliasMap
 return null;
   }
   String[] f = line.split(delim);
-  if (f.length != 5) {
+  if (f.length != 5 && f.length != 6) {
 throw new IOException("Invalid line: " + line);
   }
+  byte[] nonce = new byte[0];
+  if (f.length == 6) {
+nonce = f[5].getBytes(Charset.forName("UTF-8"));
+  }
   return new FileRegion(Long.parseLong(f[0]), new Path(f[1]),
-  Long.parseLong(f[2]), Long.parseLong(f[3]), Long.parseLong(f[4]));
+  Long.parseLong(f[2]), Long.parseLong(f[3]), Long.parseLong(f[4]),
+  nonce);
 }
 
 public InputStream createStream() throws IOException {
@@ -442,7 +448,11 @@ public class TextFileRegionAliasMap
   out.append(psl.getPath().toString()).append(delim);
   out.append(Long.toString(psl.getOffset())).append(delim);
   out.append(Long.toString(psl.getLength())).append(delim);
-  

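For illustration, the extended text format the reader above accepts: six delimiter-separated fields, the sixth (nonce) optional, assuming the map's default comma delimiter; the values are hypothetical:

import java.nio.charset.StandardCharsets;

public class AliasMapLineSketch {
  public static void main(String[] args) {
    // blockId, path, offset, length, genStamp, optional nonce
    String line = "1073741825,hdfs:/data/part-0,0,134217728,1001,etag-9f86d08";
    String[] f = line.split(",");
    byte[] nonce = (f.length == 6)
        ? f[5].getBytes(StandardCharsets.UTF_8)
        : new byte[0];
    System.out.println("nonce bytes: " + nonce.length);
  }
}
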
[39/50] [abbrv] hadoop git commit: HADOOP-15106. FileSystem::open(PathHandle) should throw a specific exception on validation failure

2017-12-18 Thread kkaranasos
HADOOP-15106. FileSystem::open(PathHandle) should throw a specific exception on 
validation failure


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5e81f32d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5e81f32d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5e81f32d

Branch: refs/heads/YARN-6592
Commit: 5e81f32d1155ea96c892099008cfeb50799082eb
Parents: fc7ec80
Author: Chris Douglas 
Authored: Sat Dec 16 10:53:10 2017 -0800
Committer: Chris Douglas 
Committed: Sat Dec 16 10:53:10 2017 -0800

--
 .../java/org/apache/hadoop/fs/FileSystem.java   |  6 +++
 .../hadoop/fs/InvalidPathHandleException.java   | 46 
 .../src/site/markdown/filesystem/filesystem.md  |  2 +-
 .../fs/contract/AbstractContractOpenTest.java   |  7 +--
 .../hadoop/hdfs/DistributedFileSystem.java  |  3 ++
 .../hadoop/hdfs/protocol/HdfsPathHandle.java| 16 +++
 6 files changed, 67 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e81f32d/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index a364921..6b7dead 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -957,6 +957,8 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* resource directly and verify that the resource referenced
* satisfies constraints specified at its construction.
* @param fd PathHandle object returned by the FS authority.
+   * @throws InvalidPathHandleException If {@link PathHandle} constraints are
+   *not satisfied
* @throws IOException IO failure
* @throws UnsupportedOperationException If {@link #open(PathHandle, int)}
*   not overridden by subclass
@@ -973,6 +975,8 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* satisfies constraints specified at its construction.
* @param fd PathHandle object returned by the FS authority.
* @param bufferSize the size of the buffer to use
+   * @throws InvalidPathHandleException If {@link PathHandle} constraints are
+   *not satisfied
* @throws IOException IO failure
* @throws UnsupportedOperationException If not overridden by subclass
*/
@@ -994,6 +998,8 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* the specified constraints.
*/
   public final PathHandle getPathHandle(FileStatus stat, HandleOpt... opt) {
+// method is final with a default so clients calling getPathHandle(stat)
+// get the same semantics for all FileSystem implementations
 if (null == opt || 0 == opt.length) {
   return createPathHandle(stat, HandleOpt.path());
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e81f32d/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/InvalidPathHandleException.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/InvalidPathHandleException.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/InvalidPathHandleException.java
new file mode 100644
index 000..8e26ea7
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/InvalidPathHandleException.java
@@ -0,0 +1,46 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+
+import 

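A hedged usage sketch of the tightened contract: opening a file through a PathHandle and handling the new exception separately from plain I/O failure. HandleOpt.exact() is assumed from the Options API; the other names come from the diff above:

import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.InvalidPathHandleException;
import org.apache.hadoop.fs.Options.HandleOpt;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathHandle;

public class PathHandleSketch {
  static void readViaHandle(FileSystem fs, Path p) throws IOException {
    FileStatus stat = fs.getFileStatus(p);
    // an "exact" handle stays valid only while content and path are unchanged
    PathHandle handle = fs.getPathHandle(stat, HandleOpt.exact());
    try (FSDataInputStream in = fs.open(handle)) {
      in.read();
    } catch (InvalidPathHandleException e) {
      // the referenced entity changed or moved after the handle was created
    }
  }
}
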
[07/50] [abbrv] hadoop git commit: HDFS-11791. [READ] Test for increasing replication of provided files.

2017-12-18 Thread kkaranasos
HDFS-11791. [READ] Test for increasing replication of provided files.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4851f06b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4851f06b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4851f06b

Branch: refs/heads/YARN-6592
Commit: 4851f06bc2df9d2cfc69fc7c4cecf7babcaa7728
Parents: 89b9faf
Author: Virajith Jalaparti 
Authored: Wed May 31 10:29:53 2017 -0700
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:38 2017 -0800

--
 .../TestNameNodeProvidedImplementation.java | 55 
 1 file changed, 55 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4851f06b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index 5062439..e171557 100644
--- 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -23,6 +23,7 @@ import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.OutputStreamWriter;
 import java.io.Writer;
+import java.net.InetSocketAddress;
 import java.nio.ByteBuffer;
 import java.nio.channels.Channels;
 import java.nio.channels.ReadableByteChannel;
@@ -34,10 +35,15 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSClient;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockFormatProvider;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockProvider;
 import org.apache.hadoop.hdfs.server.common.BlockFormat;
@@ -378,4 +384,53 @@ public class TestNameNodeProvidedImplementation {
 assertEquals(1, locations.length);
 assertEquals(2, locations[0].getHosts().length);
   }
+
+  private DatanodeInfo[] getAndCheckBlockLocations(DFSClient client,
+  String filename, int expectedLocations) throws IOException {
+LocatedBlocks locatedBlocks = client.getLocatedBlocks(
+filename, 0, baseFileLen);
+//given the start and length in the above call,
+//only one LocatedBlock in LocatedBlocks
+assertEquals(1, locatedBlocks.getLocatedBlocks().size());
+LocatedBlock locatedBlock = locatedBlocks.getLocatedBlocks().get(0);
+assertEquals(expectedLocations, locatedBlock.getLocations().length);
+return locatedBlock.getLocations();
+  }
+
+  /**
+   * Tests setting replication of provided files.
+   * @throws Exception
+   */
+  @Test
+  public void testSetReplicationForProvidedFiles() throws Exception {
+createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
+FixedBlockResolver.class);
+startCluster(NNDIRPATH, 2, null,
+new StorageType[][] {
+{StorageType.PROVIDED},
+{StorageType.DISK}},
+false);
+
+String filename = "/" + filePrefix + (numFiles - 1) + fileSuffix;
+Path file = new Path(filename);
+FileSystem fs = cluster.getFileSystem();
+
+//set the replication to 2, and test that the file has
+//the required replication.
+fs.setReplication(file, (short) 2);
+DFSTestUtil.waitForReplication((DistributedFileSystem) fs,
+file, (short) 2, 1);
+DFSClient client = new DFSClient(new InetSocketAddress("localhost",
+cluster.getNameNodePort()), cluster.getConfiguration(0));
+getAndCheckBlockLocations(client, filename, 2);
+
+//set the replication back to 1
+fs.setReplication(file, (short) 1);
+DFSTestUtil.waitForReplication((DistributedFileSystem) fs,
+file, (short) 1, 1);
+//the only replica left should be the PROVIDED datanode
+DatanodeInfo[] infos = getAndCheckBlockLocations(client, filename, 1);
+assertEquals(cluster.getDataNodes().get(0).getDatanodeUuid(),
+infos[0].getDatanodeUuid());
+  }
 }



[36/50] [abbrv] hadoop git commit: Revert "HDFS-12903. [READ] Fix closing streams in ImageWriter"

2017-12-18 Thread kkaranasos
Revert "HDFS-12903. [READ] Fix closing streams in ImageWriter"

This reverts commit c1bf2654b0e9118985b8518b0254eac4dd302a2f.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e515103a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e515103a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e515103a

Branch: refs/heads/YARN-6592
Commit: e515103a83e12ad4908c0ca0b4b1aa4a87e2a840
Parents: 8239e3a
Author: Chris Douglas 
Authored: Fri Dec 15 17:40:50 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:42 2017 -0800

--
 .../java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e515103a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
 
b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
index 1be5190..14a5f8f 100644
--- 
a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
+++ 
b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
@@ -183,9 +183,9 @@ public class ImageWriter implements Closeable {
   dirsTmp.deleteOnExit();
   dirsTmpStream = new FileOutputStream(dirsTmp);
   dirs = beginSection(dirsTmpStream);
-} catch (Throwable e) {
+} catch (IOException e) {
   IOUtils.cleanupWithLogger(null, raw, dirsTmpStream);
-  throw new IOException(e);
+  throw e;
 }
 
 try {





[09/50] [abbrv] hadoop git commit: HDFS-12289. [READ] HDFS-12091 breaks the tests for provided block reads

2017-12-18 Thread kkaranasos
HDFS-12289. [READ] HDFS-12091 breaks the tests for provided block reads


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aca023b7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aca023b7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aca023b7

Branch: refs/heads/YARN-6592
Commit: aca023b72cdb325ca66d196443218f6107efa1ca
Parents: 2407c9b
Author: Virajith Jalaparti 
Authored: Mon Aug 14 10:29:47 2017 -0700
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:38 2017 -0800

--
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 30 +++-
 .../TestNameNodeProvidedImplementation.java |  4 ++-
 2 files changed, 32 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aca023b7/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index 2d710be..c694854 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -147,6 +147,9 @@ public class MiniDFSCluster implements AutoCloseable {
   GenericTestUtils.SYSPROP_TEST_DATA_DIR;
   /** Configuration option to set the data dir: {@value} */
   public static final String HDFS_MINIDFS_BASEDIR = "hdfs.minidfs.basedir";
+  /** Configuration option to set the provided data dir: {@value} */
+  public static final String HDFS_MINIDFS_BASEDIR_PROVIDED =
+  "hdfs.minidfs.basedir.provided";
   public static final String  DFS_NAMENODE_SAFEMODE_EXTENSION_TESTING_KEY
   = DFS_NAMENODE_SAFEMODE_EXTENSION_KEY + ".testing";
   public static final String  DFS_NAMENODE_DECOMMISSION_INTERVAL_TESTING_KEY
@@ -1397,7 +1400,12 @@ public class MiniDFSCluster implements AutoCloseable {
   if ((storageTypes != null) && (j >= storageTypes.length)) {
 break;
   }
-  File dir = getInstanceStorageDir(dnIndex, j);
+  File dir;
+  if (storageTypes != null && storageTypes[j] == StorageType.PROVIDED) {
+dir = getProvidedStorageDir(dnIndex, j);
+  } else {
+dir = getInstanceStorageDir(dnIndex, j);
+  }
   dir.mkdirs();
   if (!dir.isDirectory()) {
 throw new IOException("Mkdirs failed to create directory for DataNode 
" + dir);
@@ -2847,6 +2855,26 @@ public class MiniDFSCluster implements AutoCloseable {
   }
 
   /**
+   * Get a storage directory for PROVIDED storages.
+   * The PROVIDED directory to return can be set by using the configuration
+   * parameter {@link #HDFS_MINIDFS_BASEDIR_PROVIDED}. If this parameter is
+   * not set, this function behaves exactly the same as
+   * {@link #getInstanceStorageDir(int, int)}. Currently, the two parameters
+   * are ignored as only one PROVIDED storage is supported in HDFS-9806.
+   *
+   * @param dnIndex datanode index (starts from 0)
+   * @param dirIndex directory index
+   * @return Storage directory
+   */
+  public File getProvidedStorageDir(int dnIndex, int dirIndex) {
+String base = conf.get(HDFS_MINIDFS_BASEDIR_PROVIDED, null);
+if (base == null) {
+  return getInstanceStorageDir(dnIndex, dirIndex);
+}
+return new File(base);
+  }
+
+  /**
* Get a storage directory for a datanode.
* 
* /data/data<2*dnIndex + 1>

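A minimal sketch of pinning the PROVIDED storage directory with the new key; the directory path is hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class ProvidedBasedirSketch {
  static Configuration withProvidedDir() {
    Configuration conf = new HdfsConfiguration();
    // every PROVIDED volume resolves to this single directory; the datanode
    // and directory indices are ignored while HDFS-9806 supports only one
    // PROVIDED storage
    conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR_PROVIDED, "/tmp/provided-dn");
    return conf;
  }
}
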
http://git-wip-us.apache.org/repos/asf/hadoop/blob/aca023b7/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index 60b306f..3f937c4 100644
--- 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -74,7 +74,7 @@ public class TestNameNodeProvidedImplementation {
   final Random r = new Random();
   final File fBASE = new File(MiniDFSCluster.getBaseDirectory());
   final Path BASE = new Path(fBASE.toURI().toString());
-  final Path NAMEPATH = new Path(BASE, "providedDir");;
+  final Path NAMEPATH = new Path(BASE, "providedDir");
   final Path NNDIRPATH = new Path(BASE, "nnDir");
   final Path 

[05/50] [abbrv] hadoop git commit: HDFS-12091. [READ] Check that the replicas served from a ProvidedVolumeImpl belong to the correct external storage

2017-12-18 Thread kkaranasos
HDFS-12091. [READ] Check that the replicas served from a ProvidedVolumeImpl 
belong to the correct external storage


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/663b3c08
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/663b3c08
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/663b3c08

Branch: refs/heads/YARN-6592
Commit: 663b3c08b131ea2db693e1a5d2f5da98242fa854
Parents: 546b95f
Author: Virajith Jalaparti 
Authored: Mon Aug 7 11:35:49 2017 -0700
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:38 2017 -0800

--
 .../hdfs/server/datanode/StorageLocation.java   |  26 +++--
 .../fsdataset/impl/ProvidedVolumeImpl.java  |  67 ++--
 .../fsdataset/impl/TestProvidedImpl.java| 105 ++-
 3 files changed, 173 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/663b3c08/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
index fb7acfd..d72448d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
@@ -64,21 +64,25 @@ public class StorageLocation
 this.storageType = storageType;
 if (uri.getScheme() == null || uri.getScheme().equals("file")) {
   // make sure all URIs that point to a file have the same scheme
-  try {
-File uriFile = new File(uri.getPath());
-String uriStr = uriFile.toURI().normalize().toString();
-if (uriStr.endsWith("/")) {
-  uriStr = uriStr.substring(0, uriStr.length() - 1);
-}
-uri = new URI(uriStr);
-  } catch (URISyntaxException e) {
-throw new IllegalArgumentException(
-"URI: " + uri + " is not in the expected format");
-  }
+  uri = normalizeFileURI(uri);
 }
 baseURI = uri;
   }
 
+  public static URI normalizeFileURI(URI uri) {
+try {
+  File uriFile = new File(uri.getPath());
+  String uriStr = uriFile.toURI().normalize().toString();
+  if (uriStr.endsWith("/")) {
+uriStr = uriStr.substring(0, uriStr.length() - 1);
+  }
+  return new URI(uriStr);
+} catch (URISyntaxException e) {
+  throw new IllegalArgumentException(
+  "URI: " + uri + " is not in the expected format");
+}
+  }
+
   public StorageType getStorageType() {
 return this.storageType;
   }

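For illustration, the behavior of the extracted helper: both inputs below normalize to the same URI (paths hypothetical):

import java.net.URI;
import org.apache.hadoop.hdfs.server.datanode.StorageLocation;

public class NormalizeUriSketch {
  public static void main(String[] args) {
    // trailing slash and missing scheme are normalized away
    URI a = StorageLocation.normalizeFileURI(URI.create("file:///data/dn1/"));
    URI b = StorageLocation.normalizeFileURI(URI.create("/data/dn1"));
    System.out.println(a.equals(b)); // true
  }
}
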
http://git-wip-us.apache.org/repos/asf/hadoop/blob/663b3c08/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
index 421b9cc..5cd28c7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
@@ -41,6 +41,7 @@ import 
org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
 import org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline;
 import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
 import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.ReportCompiler;
+import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
 import org.apache.hadoop.hdfs.server.datanode.checker.VolumeCheckResult;
 import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
 import org.apache.hadoop.hdfs.server.datanode.FileIoProvider;
@@ -64,7 +65,7 @@ import org.apache.hadoop.util.Time;
 public class ProvidedVolumeImpl extends FsVolumeImpl {
 
   static class ProvidedBlockPoolSlice {
-private FsVolumeImpl providedVolume;
+private ProvidedVolumeImpl providedVolume;
 
 private FileRegionProvider provider;
 private Configuration conf;
@@ -89,13 +90,20 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
   return provider;
 }
 
+@VisibleForTesting
+void setFileRegionProvider(FileRegionProvider newProvider) {
+  this.provider = newProvider;
+}
+
   

[11/50] [abbrv] hadoop git commit: HDFS-12778. [READ] Report multiple locations for PROVIDED blocks

2017-12-18 Thread kkaranasos
HDFS-12778. [READ] Report multiple locations for PROVIDED blocks


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3d3be87e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3d3be87e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3d3be87e

Branch: refs/heads/YARN-6592
Commit: 3d3be87e301d9f8ab1a220bc5dbeae0f032c5a86
Parents: 3b1d303
Author: Virajith Jalaparti 
Authored: Tue Nov 21 14:54:57 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:39 2017 -0800

--
 .../blockmanagement/ProvidedStorageMap.java | 149 +++
 .../server/namenode/FixedBlockResolver.java |   3 +-
 .../TestNameNodeProvidedImplementation.java | 127 +++-
 3 files changed, 151 insertions(+), 128 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d3be87e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
index 2bc8faa..6fec977 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -35,7 +35,6 @@ import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
 import org.apache.hadoop.hdfs.protocol.DatanodeID;
-import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfoWithStorage;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
@@ -72,6 +71,7 @@ public class ProvidedStorageMap {
   private final DatanodeStorageInfo providedStorageInfo;
   private boolean providedEnabled;
   private long capacity;
+  private int defaultReplication;
 
   ProvidedStorageMap(RwLock lock, BlockManager bm, Configuration conf)
   throws IOException {
@@ -95,6 +95,8 @@ public class ProvidedStorageMap {
 storageId, State.NORMAL, StorageType.PROVIDED);
 providedDescriptor = new ProvidedDescriptor();
 providedStorageInfo = providedDescriptor.createProvidedStorage(ds);
+this.defaultReplication = conf.getInt(DFSConfigKeys.DFS_REPLICATION_KEY,
+DFSConfigKeys.DFS_REPLICATION_DEFAULT);
 
 this.bm = bm;
 this.lock = lock;
@@ -198,63 +200,72 @@ public class ProvidedStorageMap {
*/
   class ProvidedBlocksBuilder extends LocatedBlockBuilder {
 
-private ShadowDatanodeInfoWithStorage pending;
-private boolean hasProvidedLocations;
-
 ProvidedBlocksBuilder(int maxBlocks) {
   super(maxBlocks);
-  pending = new ShadowDatanodeInfoWithStorage(
-  providedDescriptor, storageId);
-  hasProvidedLocations = false;
+}
+
+private DatanodeDescriptor chooseProvidedDatanode(
+Set<String> excludedUUids) {
+  DatanodeDescriptor dn = providedDescriptor.choose(null, excludedUUids);
+  if (dn == null) {
+dn = providedDescriptor.choose(null);
+  }
+  return dn;
 }
 
 @Override
 LocatedBlock newLocatedBlock(ExtendedBlock eb,
 DatanodeStorageInfo[] storages, long pos, boolean isCorrupt) {
 
-  DatanodeInfoWithStorage[] locs =
-new DatanodeInfoWithStorage[storages.length];
-  String[] sids = new String[storages.length];
-  StorageType[] types = new StorageType[storages.length];
+  List<DatanodeInfoWithStorage> locs = new ArrayList<>();
+  List<String> sids = new ArrayList<>();
+  List<StorageType> types = new ArrayList<>();
+  boolean isProvidedBlock = false;
+  Set<String> excludedUUids = new HashSet<>();
+
   for (int i = 0; i < storages.length; ++i) {
-sids[i] = storages[i].getStorageID();
-types[i] = storages[i].getStorageType();
-if (StorageType.PROVIDED.equals(storages[i].getStorageType())) {
-  locs[i] = pending;
-  hasProvidedLocations = true;
+DatanodeStorageInfo currInfo = storages[i];
+StorageType storageType = currInfo.getStorageType();
+sids.add(currInfo.getStorageID());
+types.add(storageType);
+if (StorageType.PROVIDED.equals(storageType)) {
+  DatanodeDescriptor dn = chooseProvidedDatanode(excludedUUids);
+  locs.add(
+  new DatanodeInfoWithStorage(
+  dn, currInfo.getStorageID(), currInfo.getStorageType()));
+  

[17/50] [abbrv] hadoop git commit: HDFS-12779. [READ] Allow cluster id to be specified to the Image generation tool

2017-12-18 Thread kkaranasos
HDFS-12779. [READ] Allow cluster id to be specified to the Image generation tool


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6cd80b25
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6cd80b25
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6cd80b25

Branch: refs/heads/YARN-6592
Commit: 6cd80b2521e6283036d8c7058d8e452a93ff8e4b
Parents: 90d1b47
Author: Virajith Jalaparti 
Authored: Thu Nov 9 14:09:14 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:39 2017 -0800

--
 .../hdfs/server/protocol/NamespaceInfo.java |  4 
 .../hdfs/server/namenode/FileSystemImage.java   |  4 
 .../hdfs/server/namenode/ImageWriter.java   | 11 -
 .../TestNameNodeProvidedImplementation.java | 24 +++-
 4 files changed, 41 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6cd80b25/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java
index 66ce9ee..433d9b7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java
@@ -160,6 +160,10 @@ public class NamespaceInfo extends StorageInfo {
 return state;
   }
 
+  public void setClusterID(String clusterID) {
+this.clusterID = clusterID;
+  }
+
   @Override
   public String toString(){
 return super.toString() + ";bpid=" + blockPoolID;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6cd80b25/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
 
b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
index 2e57c9f..b66c830 100644
--- 
a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
+++ 
b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
@@ -68,6 +68,7 @@ public class FileSystemImage implements Tool {
 options.addOption("b", "blockclass", true, "Block output class");
 options.addOption("i", "blockidclass", true, "Block resolver class");
 options.addOption("c", "cachedirs", true, "Max active dirents");
+options.addOption("cid", "clusterID", true, "Cluster ID");
 options.addOption("h", "help", false, "Print usage");
 return options;
   }
@@ -112,6 +113,9 @@ public class FileSystemImage implements Tool {
   case "c":
 opts.cache(Integer.parseInt(o.getValue()));
 break;
+  case "cid":
+opts.clusterID(o.getValue());
+break;
   default:
 throw new UnsupportedOperationException("Internal error");
   }

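A hedged sketch of driving the tool with the new option; FileSystemImage implements Tool per the diff, but the remaining argument (the source URI) is hypothetical and only illustrates the usual Tool/CLI conventions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.namenode.FileSystemImage;
import org.apache.hadoop.util.ToolRunner;

public class Fs2ImgCidSketch {
  public static void main(String[] args) throws Exception {
    // pass the new cluster-ID option alongside a (hypothetical) source URI
    String[] toolArgs = {"-cid", "CID-test-cluster", "hdfs://remote/store"};
    System.exit(
        ToolRunner.run(new Configuration(), new FileSystemImage(), toolArgs));
  }
}
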
http://git-wip-us.apache.org/repos/asf/hadoop/blob/6cd80b25/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
 
b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
index 390bb39..9bd8852 100644
--- 
a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
+++ 
b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
@@ -126,13 +126,16 @@ public class ImageWriter implements Closeable {
   throw new IllegalStateException("Incompatible layout " +
   info.getLayoutVersion() + " (expected " + LAYOUT_VERSION);
 }
+// set the cluster id, if given
+if (opts.clusterID.length() > 0) {
+  info.setClusterID(opts.clusterID);
+}
 stor.format(info);
 blockPoolID = info.getBlockPoolID();
   }
   outdir = new Path(tmp, "current");
   out = outfs.create(new Path(outdir, "fsimage_000"));
 } else {
-  // XXX necessary? writing a NNStorage now...
   outdir = null;
   outfs = null;
   out = opts.outStream;
@@ -517,6 +520,7 @@ public class ImageWriter implements Closeable {
 private UGIResolver ugis;
  

[15/50] [abbrv] hadoop git commit: HDFS-12607. [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED

2017-12-18 Thread kkaranasos
HDFS-12607. [READ] Even one dead datanode with PROVIDED storage results in 
ProvidedStorageInfo being marked as FAILED


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/71d0a825
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/71d0a825
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/71d0a825

Branch: refs/heads/YARN-6592
Commit: 71d0a825711387fe06396323a9ca6a5af0ade415
Parents: 98f5ed5
Author: Virajith Jalaparti 
Authored: Mon Nov 6 11:05:59 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:39 2017 -0800

--
 .../blockmanagement/DatanodeDescriptor.java |  6 ++-
 .../blockmanagement/ProvidedStorageMap.java | 40 +---
 .../TestNameNodeProvidedImplementation.java | 40 
 3 files changed, 71 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/71d0a825/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index e3d6582..c17ab4c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -455,8 +455,10 @@ public class DatanodeDescriptor extends DatanodeInfo {
   totalDfsUsed += report.getDfsUsed();
   totalNonDfsUsed += report.getNonDfsUsed();
 
-  if (StorageType.PROVIDED.equals(
-  report.getStorage().getStorageType())) {
+  // for PROVIDED storages, do not call updateStorage() unless
+  // DatanodeStorageInfo already exists!
+  if (StorageType.PROVIDED.equals(report.getStorage().getStorageType())
+  && storageMap.get(report.getStorage().getStorageID()) == null) {
 continue;
   }
   DatanodeStorageInfo storage = updateStorage(report.getStorage());

http://git-wip-us.apache.org/repos/asf/hadoop/blob/71d0a825/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
index a848d50..3d19775 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -66,7 +66,6 @@ public class ProvidedStorageMap {
   // limit to a single provider for now
   private RwLock lock;
   private BlockManager bm;
-  private boolean hasDNs = false;
   private BlockAliasMap<FileRegion> aliasMap;
 
   private final String storageId;
@@ -123,6 +122,11 @@ public class ProvidedStorageMap {
   BlockReportContext context) throws IOException {
 if (providedEnabled && storageId.equals(s.getStorageID())) {
   if (StorageType.PROVIDED.equals(s.getStorageType())) {
+if (providedStorageInfo.getState() == State.FAILED
+&& s.getState() == State.NORMAL) {
+  providedStorageInfo.setState(State.NORMAL);
+  LOG.info("Provided storage transitioning to state " + State.NORMAL);
+}
 processProvidedStorageReport(context);
 dn.injectStorage(providedStorageInfo);
 return providedDescriptor.getProvidedStorage(dn, s);
@@ -135,21 +139,14 @@ public class ProvidedStorageMap {
   private void processProvidedStorageReport(BlockReportContext context)
   throws IOException {
 assert lock.hasWriteLock() : "Not holding write lock";
-if (hasDNs) {
-  return;
-}
-if (providedStorageInfo.getBlockReportCount() == 0) {
+if (providedStorageInfo.getBlockReportCount() == 0
+|| providedDescriptor.activeProvidedDatanodes() == 0) {
   LOG.info("Calling process first blk report from storage: "
   + providedStorageInfo);
   // first pass; periodic refresh should call bm.processReport
   bm.processFirstBlockReport(providedStorageInfo,
   new ProvidedBlockList(aliasMap.getReader(null).iterator()));
-} else {
-  bm.processReport(providedStorageInfo,
-  new 

[03/50] [abbrv] hadoop git commit: HDFS-12605. [READ] TestNameNodeProvidedImplementation#testProvidedDatanodeFailures fails after rebase

2017-12-18 Thread kkaranasos
HDFS-12605. [READ] 
TestNameNodeProvidedImplementation#testProvidedDatanodeFailures fails after 
rebase


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d6a9a899
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d6a9a899
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d6a9a899

Branch: refs/heads/YARN-6592
Commit: d6a9a8997339939b59ce36246225f7cc45b21da5
Parents: 17052c4
Author: Virajith Jalaparti 
Authored: Wed Oct 18 13:53:11 2017 -0700
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:38 2017 -0800

--
 .../hdfs/server/blockmanagement/DatanodeDescriptor.java | 12 
 .../namenode/TestNameNodeProvidedImplementation.java|  6 +++---
 2 files changed, 15 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d6a9a899/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index 28a3d1a..e3d6582 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -489,6 +489,18 @@ public class DatanodeDescriptor extends DatanodeInfo {
 synchronized (storageMap) {
   DatanodeStorageInfo storage = storageMap.get(s.getStorageID());
   if (null == storage) {
+LOG.info("Adding new storage ID {} for DN {}", s.getStorageID(),
+getXferAddr());
+DFSTopologyNodeImpl parent = null;
+if (getParent() instanceof DFSTopologyNodeImpl) {
+  parent = (DFSTopologyNodeImpl) getParent();
+}
+StorageType type = s.getStorageType();
+if (!hasStorageType(type) && parent != null) {
+  // we are about to add a type this node currently does not have,
+  // inform the parent that a new type is added to this datanode
+  parent.childAddStorage(getName(), type);
+}
 storageMap.put(s.getStorageID(), s);
   } else {
 assert storage == s : "found " + storage + " expected " + s;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d6a9a899/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index 3f937c4..d622b9e 100644
--- 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -481,13 +481,13 @@ public class TestNameNodeProvidedImplementation {
   assertEquals(providedDatanode2.getDatanodeUuid(),
   dnInfos[0].getDatanodeUuid());
 
-  //stop the 2nd provided datanode
-  cluster.stopDataNode(1);
+  // stop the 2nd provided datanode
+  MiniDFSCluster.DataNodeProperties providedDNProperties2 =
+  cluster.stopDataNode(0);
   // make NameNode detect that datanode is down
   BlockManagerTestUtil.noticeDeadDatanode(
   cluster.getNameNode(),
   providedDatanode2.getDatanodeId().getXferAddr());
-
   getAndCheckBlockLocations(client, filename, 0);
 
   //restart the provided datanode





[50/50] [abbrv] hadoop git commit: YARN-6593. [API] Introduce Placement Constraint object. (Konstantinos Karanasos via wangda)

2017-12-18 Thread kkaranasos
YARN-6593. [API] Introduce Placement Constraint object. (Konstantinos Karanasos 
via wangda)

Change-Id: Id00edb7185fdf01cce6e40f920cac3585f8cbe9c


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/45b1ca60
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/45b1ca60
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/45b1ca60

Branch: refs/heads/YARN-6592
Commit: 45b1ca602814b03a5df35494f2edc7beab3d3db2
Parents: c7a4dda
Author: Wangda Tan 
Authored: Thu Aug 3 14:03:55 2017 -0700
Committer: Konstantinos Karanasos 
Committed: Mon Dec 18 16:07:00 2017 -0800

--
 .../yarn/api/resource/PlacementConstraint.java  | 567 +++
 .../yarn/api/resource/PlacementConstraints.java | 286 ++
 .../hadoop/yarn/api/resource/package-info.java  |  23 +
 .../src/main/proto/yarn_protos.proto|  55 ++
 .../api/resource/TestPlacementConstraints.java  | 106 
 .../PlacementConstraintFromProtoConverter.java  | 116 
 .../pb/PlacementConstraintToProtoConverter.java | 174 ++
 .../apache/hadoop/yarn/api/pb/package-info.java |  23 +
 .../yarn/api/records/impl/pb/ProtoUtils.java|  27 +
 .../PlacementConstraintTransformations.java | 209 +++
 .../hadoop/yarn/api/resource/package-info.java  |  23 +
 .../TestPlacementConstraintPBConversion.java| 195 +++
 .../TestPlacementConstraintTransformations.java | 183 ++
 13 files changed, 1987 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/45b1ca60/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
new file mode 100644
index 000..f0e3982
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
@@ -0,0 +1,567 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.api.resource;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+
+/**
+ * {@code PlacementConstraint} represents a placement constraint for a resource
+ * allocation.
+ */
+@Public
+@Unstable
+public class PlacementConstraint {
+
+  /**
+   * The constraint expression tree.
+   */
+  private AbstractConstraint constraintExpr;
+
+  public PlacementConstraint(AbstractConstraint constraintExpr) {
+this.constraintExpr = constraintExpr;
+  }
+
+  /**
+   * Get the constraint expression of the placement constraint.
+   *
+   * @return the constraint expression
+   */
+  public AbstractConstraint getConstraintExpr() {
+return constraintExpr;
+  }
+
+  /**
+   * Interface used to enable the elements of the constraint tree to be 
visited.
+   */
+  @Private
+  public interface Visitable {
+/**
+ * Visitor pattern.
+ *
+ * @param visitor visitor to be used
+ * @param <T> defines the type that the visitor will use and the return type
+ *  of the accept.
+ * @return the result of visiting a given object.
+ */
+ <T> T accept(Visitor<T> visitor);
+
+  }
+
+  /**
+   * Visitor API for a constraint tree.
+   *
+   * @param <T> determines the return type of the visit methods.
+   */
+  @Private
+  public interface Visitor<T> {
+T visit(SingleConstraint constraint);
+
+T visit(TargetExpression target);
+
+T visit(TargetConstraint constraint);
+
+T visit(CardinalityConstraint constraint);
+
+T 

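A hedged usage sketch of the new API, assuming the targetNotIn/allocationTag helpers from the added PlacementConstraints class; the allocation tag is hypothetical:

import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

public class ConstraintSketch {
  static PlacementConstraint antiAffinity() {
    // node-scope anti-affinity: place a container only on nodes carrying
    // no allocation tagged "hbase"
    return targetNotIn(NODE, allocationTag("hbase")).build();
  }
}
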
[34/50] [abbrv] hadoop git commit: HDFS-12912. [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-18 Thread kkaranasos
HDFS-12912. [READ] Fix configuration and implementation of LevelDB-based alias 
maps


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/80c3fec3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/80c3fec3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/80c3fec3

Branch: refs/heads/YARN-6592
Commit: 80c3fec3a13c41051daaae42e5c9a9fedf5c7ee7
Parents: c89b29b
Author: Virajith Jalaparti 
Authored: Wed Dec 13 13:39:21 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:41 2017 -0800

--
 .../hdfs/server/aliasmap/InMemoryAliasMap.java  | 42 ++--
 .../aliasmap/InMemoryLevelDBAliasMapServer.java |  9 +++--
 .../impl/LevelDBFileRegionAliasMap.java |  5 +++
 .../src/site/markdown/HdfsProvidedStorage.md|  4 +-
 .../server/aliasmap/ITestInMemoryAliasMap.java  |  9 +++--
 .../server/aliasmap/TestInMemoryAliasMap.java   |  2 +-
 .../impl/TestInMemoryLevelDBAliasMapClient.java |  2 +
 .../impl/TestLevelDbMockAliasMapClient.java |  2 +-
 .../TestNameNodeProvidedImplementation.java |  2 +
 9 files changed, 45 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/80c3fec3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
index 3d9eeea..142a040 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
@@ -59,6 +59,7 @@ public class InMemoryAliasMap implements 
InMemoryAliasMapProtocol,
 
   private final DB levelDb;
   private Configuration conf;
+  private String blockPoolID;
 
   @Override
   public void setConf(Configuration conf) {
@@ -79,32 +80,38 @@ public class InMemoryAliasMap implements 
InMemoryAliasMapProtocol,
 .toString();
   }
 
-  public static @Nonnull InMemoryAliasMap init(Configuration conf)
-  throws IOException {
+  public static @Nonnull InMemoryAliasMap init(Configuration conf,
+  String blockPoolID) throws IOException {
 Options options = new Options();
 options.createIfMissing(true);
 String directory =
 conf.get(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_LEVELDB_DIR);
 LOG.info("Attempting to load InMemoryAliasMap from \"{}\"", directory);
-File path = new File(directory);
-if (!path.exists()) {
+File levelDBpath;
+if (blockPoolID != null) {
+  levelDBpath = new File(directory, blockPoolID);
+} else {
+  levelDBpath = new File(directory);
+}
+if (!levelDBpath.exists()) {
   String error = createPathErrorMessage(directory);
   throw new IOException(error);
 }
-DB levelDb = JniDBFactory.factory.open(path, options);
-InMemoryAliasMap aliasMap = new InMemoryAliasMap(levelDb);
+DB levelDb = JniDBFactory.factory.open(levelDBpath, options);
+InMemoryAliasMap aliasMap = new InMemoryAliasMap(levelDb, blockPoolID);
 aliasMap.setConf(conf);
 return aliasMap;
   }
 
   @VisibleForTesting
-  InMemoryAliasMap(DB levelDb) {
+  InMemoryAliasMap(DB levelDb, String blockPoolID) {
 this.levelDb = levelDb;
+this.blockPoolID = blockPoolID;
   }
 
   @Override
   public IterationResult list(Optional<Block> marker) throws IOException {
-return withIterator((DBIterator iterator) -> {
+try (DBIterator iterator = levelDb.iterator()) {
   Integer batchSize =
   conf.getInt(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_BATCH_SIZE,
   DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_BATCH_SIZE_DEFAULT);
@@ -130,8 +137,7 @@ public class InMemoryAliasMap implements 
InMemoryAliasMapProtocol,
   } else {
 return new IterationResult(batch, Optional.empty());
   }
-
-});
+}
   }
 
   public @Nonnull Optional<ProvidedStorageLocation> read(@Nonnull Block block)
@@ -159,7 +165,7 @@ public class InMemoryAliasMap implements 
InMemoryAliasMapProtocol,
 
   @Override
   public String getBlockPoolId() {
-return null;
+return blockPoolID;
   }
 
   public void close() throws IOException {
@@ -202,21 +208,15 @@ public class InMemoryAliasMap implements 
InMemoryAliasMapProtocol,
 return blockOutputStream.toByteArray();
   }
 
-  private IterationResult withIterator(
-  CheckedFunction<DBIterator, IterationResult> func) throws IOException {
-try (DBIterator iterator = levelDb.iterator()) {
-  return 

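A minimal sketch of initializing the per-block-pool map after this change; the LevelDB directory (which must already exist) is hypothetical:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap;

public class AliasMapInitSketch {
  static InMemoryAliasMap open(String blockPoolID) throws IOException {
    Configuration conf = new HdfsConfiguration();
    // the store now lives under <dir>/<blockPoolID>
    conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_LEVELDB_DIR,
        "/var/lib/hdfs/aliasmap");
    return InMemoryAliasMap.init(conf, blockPoolID);
  }
}
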
[10/50] [abbrv] hadoop git commit: HDFS-12093. [READ] Share remoteFS between ProvidedReplica instances.

2017-12-18 Thread kkaranasos
HDFS-12093. [READ] Share remoteFS between ProvidedReplica instances.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2407c9b9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2407c9b9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2407c9b9

Branch: refs/heads/YARN-6592
Commit: 2407c9b93aabb021b76c802b19c928fb6cbb0a85
Parents: 663b3c0
Author: Virajith Jalaparti 
Authored: Mon Aug 7 14:31:15 2017 -0700
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:38 2017 -0800

--
 .../datanode/FinalizedProvidedReplica.java  |  6 +++--
 .../hdfs/server/datanode/ProvidedReplica.java   | 25 +++-
 .../hdfs/server/datanode/ReplicaBuilder.java| 11 +++--
 .../fsdataset/impl/ProvidedVolumeImpl.java  | 17 +
 .../datanode/TestProvidedReplicaImpl.java   |  2 +-
 5 files changed, 40 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2407c9b9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
index 722d573..e23d6be 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hdfs.server.datanode;
 import java.net.URI;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
 import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
 import org.apache.hadoop.hdfs.server.protocol.ReplicaRecoveryInfo;
@@ -31,8 +32,9 @@ public class FinalizedProvidedReplica extends ProvidedReplica 
{
 
   public FinalizedProvidedReplica(long blockId, URI fileURI,
   long fileOffset, long blockLen, long genStamp,
-  FsVolumeSpi volume, Configuration conf) {
-super(blockId, fileURI, fileOffset, blockLen, genStamp, volume, conf);
+  FsVolumeSpi volume, Configuration conf, FileSystem remoteFS) {
+super(blockId, fileURI, fileOffset, blockLen, genStamp, volume, conf,
+remoteFS);
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2407c9b9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
index 946ab5a..2b3bd13 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
@@ -65,16 +65,23 @@ public abstract class ProvidedReplica extends ReplicaInfo {
* @param volume the volume this block belongs to
*/
   public ProvidedReplica(long blockId, URI fileURI, long fileOffset,
-  long blockLen, long genStamp, FsVolumeSpi volume, Configuration conf) {
+  long blockLen, long genStamp, FsVolumeSpi volume, Configuration conf,
+  FileSystem remoteFS) {
 super(volume, blockId, blockLen, genStamp);
 this.fileURI = fileURI;
 this.fileOffset = fileOffset;
 this.conf = conf;
-try {
-  this.remoteFS = FileSystem.get(fileURI, this.conf);
-} catch (IOException e) {
-  LOG.warn("Failed to obtain filesystem for " + fileURI);
-  this.remoteFS = null;
+if (remoteFS != null) {
+  this.remoteFS = remoteFS;
+} else {
+  LOG.warn(
+  "Creating an reference to the remote FS for provided block " + this);
+  try {
+this.remoteFS = FileSystem.get(fileURI, this.conf);
+  } catch (IOException e) {
+LOG.warn("Failed to obtain filesystem for " + fileURI);
+this.remoteFS = null;
+  }
 }
   }
 
@@ -83,11 +90,7 @@ public abstract class ProvidedReplica extends ReplicaInfo {
 this.fileURI = r.fileURI;
 this.fileOffset = r.fileOffset;
 this.conf = r.conf;
-try {
-  this.remoteFS = FileSystem.newInstance(fileURI, this.conf);
-} catch (IOException e) {
-  this.remoteFS = null;
-}
+this.remoteFS = r.remoteFS;

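The hunks above thread a caller-supplied FileSystem through the replica
constructors so one instance can be shared across replicas, keeping
per-replica creation only as a fallback. A minimal standalone sketch of
that pattern, using a hypothetical holder class rather than the HDFS code:

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Sketch: reuse an injected FileSystem; resolve one from the URI only
// when the caller did not supply an instance, as the diff above does.
class RemoteFsHolder {
  private final FileSystem remoteFS;

  RemoteFsHolder(URI fileURI, Configuration conf, FileSystem suppliedFS)
      throws IOException {
    this.remoteFS = (suppliedFS != null)
        ? suppliedFS
        : FileSystem.get(fileURI, conf); // fallback path
  }

  FileSystem getRemoteFS() {
    return remoteFS;
  }
}
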
[26/50] [abbrv] hadoop git commit: HDFS-12885. Add visibility/stability annotations. Contributed by Chris Douglas

2017-12-18 Thread kkaranasos
HDFS-12885. Add visibility/stability annotations. Contributed by Chris Douglas


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a027055d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a027055d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a027055d

Branch: refs/heads/YARN-6592
Commit: a027055dd2bf5009fe272e9ceb08305bd0a8cc31
Parents: b634053
Author: Virajith Jalaparti 
Authored: Tue Dec 5 09:51:09 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:40 2017 -0800

--
 .../apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java| 4 
 .../org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolPB.java   | 2 ++
 .../hdfs/protocolPB/AliasMapProtocolServerSideTranslatorPB.java | 4 
 .../InMemoryAliasMapProtocolClientSideTranslatorPB.java | 4 
 .../apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java| 4 
 .../hadoop/hdfs/server/aliasmap/InMemoryAliasMapProtocol.java   | 4 
 .../hdfs/server/aliasmap/InMemoryLevelDBAliasMapServer.java | 4 
 .../hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java  | 4 
 .../java/org/apache/hadoop/hdfs/server/common/BlockAlias.java   | 4 
 .../java/org/apache/hadoop/hdfs/server/common/FileRegion.java   | 4 
 .../hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java  | 4 
 .../blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java   | 4 
 .../common/blockaliasmap/impl/LevelDBFileRegionAliasMap.java| 4 
 .../common/blockaliasmap/impl/TextFileRegionAliasMap.java   | 4 
 .../hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java   | 4 
 .../org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java | 4 
 .../hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java | 4 +++-
 .../org/apache/hadoop/hdfs/server/namenode/BlockResolver.java   | 4 
 .../java/org/apache/hadoop/hdfs/server/namenode/FSTreeWalk.java | 4 
 .../org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java | 4 
 .../hdfs/server/namenode/FixedBlockMultiReplicaResolver.java| 4 
 .../apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java  | 4 
 .../org/apache/hadoop/hdfs/server/namenode/FsUGIResolver.java   | 5 +
 .../org/apache/hadoop/hdfs/server/namenode/ImageWriter.java | 4 
 .../apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java   | 4 
 .../apache/hadoop/hdfs/server/namenode/SingleUGIResolver.java   | 4 
 .../java/org/apache/hadoop/hdfs/server/namenode/TreePath.java   | 4 
 .../java/org/apache/hadoop/hdfs/server/namenode/TreeWalk.java   | 5 +
 .../org/apache/hadoop/hdfs/server/namenode/UGIResolver.java | 4 
 .../org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java  | 4 
 30 files changed, 119 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a027055d/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
index eee58ba..861ef8e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
@@ -16,6 +16,8 @@
  */
 package org.apache.hadoop.hdfs.protocol;
 
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.Path;
 
 import javax.annotation.Nonnull;
@@ -25,6 +27,8 @@ import java.util.Arrays;
  * ProvidedStorageLocation is a location in an external storage system
  * containing the data for a block (~Replica).
  */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
 public class ProvidedStorageLocation {
   private final Path path;
   private final long offset;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a027055d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolPB.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolPB.java
index 98b3ee1..4e14fad 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolPB.java
+++ 

[13/50] [abbrv] hadoop git commit: HDFS-12777. [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-12-18 Thread kkaranasos
HDFS-12777. [READ] Reduce memory and CPU footprint for PROVIDED volumes.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e1a28f95
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e1a28f95
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e1a28f95

Branch: refs/heads/YARN-6592
Commit: e1a28f95b8ffcb86300148f10a23b710f8388341
Parents: 6cd80b2
Author: Virajith Jalaparti 
Authored: Fri Nov 10 10:19:33 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:39 2017 -0800

--
 .../hdfs/server/datanode/DirectoryScanner.java  |  4 +
 .../datanode/FinalizedProvidedReplica.java  |  8 ++
 .../hdfs/server/datanode/ProvidedReplica.java   | 77 +++-
 .../hdfs/server/datanode/ReplicaBuilder.java| 37 +-
 .../fsdataset/impl/ProvidedVolumeImpl.java  | 30 +++-
 .../fsdataset/impl/TestProvidedImpl.java| 76 ---
 6 files changed, 196 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e1a28f95/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 3b6d06c..8fb8551 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -530,6 +530,10 @@ public class DirectoryScanner implements Runnable {
   new HashMap();
 
   for (int i = 0; i < volumes.size(); i++) {
+if (volumes.get(i).getStorageType() == StorageType.PROVIDED) {
+  // Disable scanning PROVIDED volumes to keep overhead low
+  continue;
+}
 ReportCompiler reportCompiler =
 new ReportCompiler(datanode, volumes.get(i));
 Future result =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e1a28f95/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
index e23d6be..bcc9a38 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
@@ -21,6 +21,7 @@ import java.net.URI;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
 import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
 import org.apache.hadoop.hdfs.server.protocol.ReplicaRecoveryInfo;
@@ -37,6 +38,13 @@ public class FinalizedProvidedReplica extends 
ProvidedReplica {
 remoteFS);
   }
 
+  public FinalizedProvidedReplica(long blockId, Path pathPrefix,
+  String pathSuffix, long fileOffset, long blockLen, long genStamp,
+  FsVolumeSpi volume, Configuration conf, FileSystem remoteFS) {
+super(blockId, pathPrefix, pathSuffix, fileOffset, blockLen,
+genStamp, volume, conf, remoteFS);
+  }
+
   @Override
   public ReplicaState getState() {
 return ReplicaState.FINALIZED;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e1a28f95/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
index 2b3bd13..8681421 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
@@ -23,6 +23,7 @@ import java.io.InputStream;
 import java.io.OutputStream;
 import java.net.URI;
 
+import com.google.common.annotations.VisibleForTesting;
 import 

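Besides skipping PROVIDED volumes in the DirectoryScanner, the new
(pathPrefix, pathSuffix) constructor above is where much of the footprint
reduction comes from: replicas on the same PROVIDED volume can share one
Path prefix object and store only a short suffix each. A hedged
illustration of the idea (names are ours, not the HDFS implementation):

import org.apache.hadoop.fs.Path;

public class PrefixSuffixSketch {
  public static void main(String[] args) {
    // One shared prefix per volume instead of a full URI per replica.
    Path sharedPrefix = new Path("s3a://bucket/warehouse");
    String[] suffixes = {"blocks/part-0000", "blocks/part-0001"};
    for (String suffix : suffixes) {
      // The absolute path is materialized only when actually needed.
      Path full = new Path(sharedPrefix, suffix);
      System.out.println(full);
    }
  }
}
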
[23/50] [abbrv] hadoop git commit: HDFS-12685. [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-12-18 Thread kkaranasos
HDFS-12685. [READ] FsVolumeImpl exception when scanning Provided storage volume


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cc933cba
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cc933cba
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cc933cba

Branch: refs/heads/YARN-6592
Commit: cc933cba77c147153e463415fc192cee2d53a1ef
Parents: 4d59dab
Author: Virajith Jalaparti 
Authored: Thu Nov 30 10:11:12 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:40 2017 -0800

--
 .../impl/TextFileRegionAliasMap.java|  3 +-
 .../hdfs/server/datanode/DirectoryScanner.java  |  3 +-
 .../server/datanode/fsdataset/FsVolumeSpi.java  | 40 ++--
 .../fsdataset/impl/ProvidedVolumeImpl.java  |  4 +-
 .../fsdataset/impl/TestProvidedImpl.java| 19 ++
 5 files changed, 37 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc933cba/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
index 80f48c1..bd04d60 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
@@ -439,7 +439,8 @@ public class TextFileRegionAliasMap
 
   @Override
   public void refresh() throws IOException {
-//nothing to do;
+throw new UnsupportedOperationException(
+"Refresh not supported by " + getClass());
   }
 
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc933cba/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 8fb8551..ab9743c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -515,7 +515,8 @@ public class DirectoryScanner implements Runnable {
*
* @return a map of sorted arrays of block information
*/
-  private Map getDiskReport() {
+  @VisibleForTesting
+  public Map getDiskReport() {
 ScanInfoPerBlockPool list = new ScanInfoPerBlockPool();
 ScanInfoPerBlockPool[] dirReports = null;
 // First get list of data directories

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc933cba/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
index 15e71f0..20a153d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
@@ -296,8 +296,23 @@ public interface FsVolumeSpi
  */
 public ScanInfo(long blockId, File blockFile, File metaFile,
 FsVolumeSpi vol) {
-  this(blockId, blockFile, metaFile, vol, null,
-  (blockFile != null) ? blockFile.length() : 0);
+  this.blockId = blockId;
+  String condensedVolPath =
+  (vol == null || vol.getBaseURI() == null) ? null :
+  getCondensedPath(new File(vol.getBaseURI()).getAbsolutePath());
+  this.blockSuffix = blockFile == null ? null :
+  getSuffix(blockFile, condensedVolPath);
+  this.blockLength = (blockFile != null) ? blockFile.length() : 0;
+  if (metaFile == null) {
+this.metaSuffix = null;
+  } else if (blockFile == null) {
+this.metaSuffix = getSuffix(metaFile, condensedVolPath);
+  } else {
+this.metaSuffix = getSuffix(metaFile, condensedVolPath + blockSuffix);

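The suffix fields above serve the same space-saving purpose: ScanInfo
records block and meta paths relative to the condensed volume root and
reconstructs absolute paths on demand. A simplified sketch of that
contract (illustrative, not the HDFS helpers):

import java.io.File;

public class ScanSuffixSketch {
  // Strip the volume root from an absolute path, keeping only the suffix.
  static String getSuffix(File file, String volRoot) {
    String abs = file.getAbsolutePath();
    if (abs.startsWith(volRoot)) {
      return abs.substring(volRoot.length());
    }
    throw new IllegalArgumentException(volRoot + " is not a prefix of " + abs);
  }

  public static void main(String[] args) {
    String suffix =
        getSuffix(new File("/data/vol1/bp-1/blk_1001"), "/data/vol1");
    System.out.println(suffix);                          // /bp-1/blk_1001
    System.out.println(new File("/data/vol1" + suffix)); // reconstructed
  }
}
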
[19/50] [abbrv] hadoop git commit: HDFS-12713. [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata. Contributed by Ewan Higgs

2017-12-18 Thread kkaranasos
HDFS-12713. [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS 
metadata and PROVIDED storage metadata. Contributed by Ewan Higgs


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9c35be86
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9c35be86
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9c35be86

Branch: refs/heads/YARN-6592
Commit: 9c35be86e17021202823bfd3c2067ff3b312ce5c
Parents: a027055
Author: Virajith Jalaparti 
Authored: Tue Dec 5 13:46:30 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:40 2017 -0800

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   | 10 +--
 .../AliasMapProtocolServerSideTranslatorPB.java | 10 +++
 ...yAliasMapProtocolClientSideTranslatorPB.java | 17 -
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  2 +-
 .../hdfs/server/aliasmap/InMemoryAliasMap.java  |  7 +-
 .../aliasmap/InMemoryAliasMapProtocol.java  |  7 ++
 .../aliasmap/InMemoryLevelDBAliasMapServer.java | 13 +++-
 .../blockmanagement/ProvidedStorageMap.java |  8 ++-
 .../hadoop/hdfs/server/common/FileRegion.java   | 30 ++--
 .../common/blockaliasmap/BlockAliasMap.java | 14 ++--
 .../impl/InMemoryLevelDBAliasMapClient.java | 24 ++-
 .../impl/LevelDBFileRegionAliasMap.java | 22 --
 .../impl/TextFileRegionAliasMap.java| 76 
 .../fsdataset/impl/ProvidedVolumeImpl.java  | 41 ++-
 .../hadoop/hdfs/server/namenode/NameNode.java   |  6 +-
 .../hdfs/server/protocol/NamespaceInfo.java |  4 ++
 .../src/main/proto/AliasMapProtocol.proto   |  8 +++
 .../src/main/resources/hdfs-default.xml | 23 +-
 .../blockmanagement/TestProvidedStorageMap.java |  4 +-
 .../impl/TestInMemoryLevelDBAliasMapClient.java | 41 +--
 .../impl/TestLevelDBFileRegionAliasMap.java | 10 +--
 .../impl/TestLevelDbMockAliasMapClient.java | 19 +++--
 .../impl/TestTextBlockAliasMap.java | 55 +++---
 .../fsdataset/impl/TestProvidedImpl.java|  9 ++-
 .../hdfs/server/namenode/FileSystemImage.java   |  4 ++
 .../hdfs/server/namenode/ImageWriter.java   | 14 +++-
 .../hdfs/server/namenode/NullBlockAliasMap.java |  6 +-
 .../hadoop/hdfs/server/namenode/TreePath.java   |  3 +-
 .../TestNameNodeProvidedImplementation.java | 24 +++
 29 files changed, 346 insertions(+), 165 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c35be86/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 7db0a8d..2ef2bf0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -342,17 +342,19 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   public static final String DFS_PROVIDER_STORAGEUUID = 
"dfs.provided.storage.id";
   public static final String DFS_PROVIDER_STORAGEUUID_DEFAULT =  "DS-PROVIDED";
   public static final String DFS_PROVIDED_ALIASMAP_CLASS = 
"dfs.provided.aliasmap.class";
+  public static final String DFS_PROVIDED_ALIASMAP_LOAD_RETRIES = 
"dfs.provided.aliasmap.load.retries";
 
   public static final String DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER = 
"dfs.provided.aliasmap.text.delimiter";
   public static final String DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER_DEFAULT = 
",";
 
-  public static final String DFS_PROVIDED_ALIASMAP_TEXT_READ_PATH = 
"dfs.provided.aliasmap.text.read.path";
-  public static final String DFS_PROVIDED_ALIASMAP_TEXT_PATH_DEFAULT = 
"file:///tmp/blocks.csv";
+  public static final String DFS_PROVIDED_ALIASMAP_TEXT_READ_FILE = 
"dfs.provided.aliasmap.text.read.file";
+  public static final String DFS_PROVIDED_ALIASMAP_TEXT_READ_FILE_DEFAULT = 
"file:///tmp/blocks.csv";
 
   public static final String DFS_PROVIDED_ALIASMAP_TEXT_CODEC = 
"dfs.provided.aliasmap.text.codec";
-  public static final String DFS_PROVIDED_ALIASMAP_TEXT_WRITE_PATH = 
"dfs.provided.aliasmap.text.write.path";
+  public static final String DFS_PROVIDED_ALIASMAP_TEXT_WRITE_DIR = 
"dfs.provided.aliasmap.text.write.dir";
+  public static final String DFS_PROVIDED_ALIASMAP_TEXT_WRITE_DIR_DEFAULT = 
"file:///tmp/";
 
-  public static final String DFS_PROVIDED_ALIASMAP_LEVELDB_PATH = 
"dfs.provided.aliasmap.leveldb.read.path";
+  public static final String DFS_PROVIDED_ALIASMAP_LEVELDB_PATH = 
"dfs.provided.aliasmap.leveldb.path";
 
  public static final String  DFS_LIST_LIMIT = "dfs.ls.limit";

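For reference, the renamed keys are set like any other configuration
entry. A sketch using the key strings from the diff above; the first two
values are the defaults shown there, the rest are illustrative:

import org.apache.hadoop.conf.Configuration;

public class AliasMapConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.provided.aliasmap.text.read.file", "file:///tmp/blocks.csv");
    conf.set("dfs.provided.aliasmap.text.write.dir", "file:///tmp/");
    // Illustrative values; the diff defines these keys but no defaults.
    conf.set("dfs.provided.aliasmap.leveldb.path", "/tmp/aliasmap");
    conf.setInt("dfs.provided.aliasmap.load.retries", 3);
  }
}
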
[24/50] [abbrv] hadoop git commit: HDFS-12591. [READ] Implement LevelDBFileRegionFormat. Contributed by Ewan Higgs.

2017-12-18 Thread kkaranasos
HDFS-12591. [READ] Implement LevelDBFileRegionFormat. Contributed by Ewan Higgs.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b634053c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b634053c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b634053c

Branch: refs/heads/YARN-6592
Commit: b634053c4daec181511abb314aeef0a8fe851086
Parents: 352f994
Author: Virajith Jalaparti 
Authored: Sat Dec 2 12:22:00 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:40 2017 -0800

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   2 +
 .../impl/LevelDBFileRegionAliasMap.java | 257 +++
 .../impl/TestLevelDBFileRegionAliasMap.java | 115 +
 3 files changed, 374 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b634053c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 00976f9..7db0a8d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -352,6 +352,8 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String DFS_PROVIDED_ALIASMAP_TEXT_CODEC = 
"dfs.provided.aliasmap.text.codec";
   public static final String DFS_PROVIDED_ALIASMAP_TEXT_WRITE_PATH = 
"dfs.provided.aliasmap.text.write.path";
 
+  public static final String DFS_PROVIDED_ALIASMAP_LEVELDB_PATH = 
"dfs.provided.aliasmap.leveldb.read.path";
+
   public static final String  DFS_LIST_LIMIT = "dfs.ls.limit";
   public static final int DFS_LIST_LIMIT_DEFAULT = 1000;
   public static final String  DFS_CONTENT_SUMMARY_LIMIT_KEY = 
"dfs.content-summary.limit";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b634053c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/LevelDBFileRegionAliasMap.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/LevelDBFileRegionAliasMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/LevelDBFileRegionAliasMap.java
new file mode 100644
index 000..66971a3
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/LevelDBFileRegionAliasMap.java
@@ -0,0 +1,257 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.common.blockaliasmap.impl;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Optional;
+
+import org.iq80.leveldb.DB;
+import org.iq80.leveldb.DBIterator;
+import static org.fusesource.leveldbjni.JniDBFactory.factory;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PROVIDED_ALIASMAP_LEVELDB_PATH;
+import static 
org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap.fromBlockBytes;
+import static 
org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap.fromProvidedStorageLocationBytes;
+import static 
org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap.toProtoBufBytes;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A LevelDB based implementation of {@link BlockAliasMap}.

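The imports show the building blocks: an iq80/leveldbjni DB holding
serialized blocks as keys and provided-storage locations as values. A
minimal hedged sketch of that storage pattern, using placeholder string
encodings instead of the protobuf helpers imported above:

import java.io.File;
import java.io.IOException;

import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;
import static org.fusesource.leveldbjni.JniDBFactory.factory;

public class LevelDbSketch {
  public static void main(String[] args) throws IOException {
    Options options = new Options();
    options.createIfMissing(true);
    try (DB db = factory.open(new File("/tmp/aliasmap-demo"), options)) {
      // Key: block identity; value: its location in the remote store.
      db.put("blk_1001".getBytes(), "file:///data/blk_1001,0,1024".getBytes());
      byte[] location = db.get("blk_1001".getBytes());
      System.out.println(new String(location));
    }
  }
}
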
[02/50] [abbrv] hadoop git commit: HDFS-11902. [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-12-18 Thread kkaranasos
HDFS-11902. [READ] Merge BlockFormatProvider and FileRegionProvider.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/98f5ed5a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/98f5ed5a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/98f5ed5a

Branch: refs/heads/YARN-6592
Commit: 98f5ed5aa377ddd3f35b763b20c499d2ccac2ed5
Parents: d6a9a89
Author: Virajith Jalaparti 
Authored: Fri Nov 3 13:45:56 2017 -0700
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:38 2017 -0800

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  17 +-
 .../blockmanagement/BlockFormatProvider.java|  91 
 .../server/blockmanagement/BlockProvider.java   |  75 
 .../blockmanagement/ProvidedStorageMap.java |  63 ++-
 .../hadoop/hdfs/server/common/BlockFormat.java  |  82 
 .../hdfs/server/common/FileRegionProvider.java  |  37 --
 .../server/common/TextFileRegionFormat.java | 442 --
 .../server/common/TextFileRegionProvider.java   |  88 
 .../common/blockaliasmap/BlockAliasMap.java |  88 
 .../impl/TextFileRegionAliasMap.java| 445 +++
 .../common/blockaliasmap/package-info.java  |  27 ++
 .../fsdataset/impl/ProvidedVolumeImpl.java  |  76 ++--
 .../src/main/resources/hdfs-default.xml |  34 +-
 .../blockmanagement/TestProvidedStorageMap.java |  41 +-
 .../hdfs/server/common/TestTextBlockFormat.java | 160 ---
 .../impl/TestTextBlockAliasMap.java | 161 +++
 .../fsdataset/impl/TestProvidedImpl.java|  75 ++--
 .../hdfs/server/namenode/FileSystemImage.java   |   4 +-
 .../hdfs/server/namenode/ImageWriter.java   |  25 +-
 .../hdfs/server/namenode/NullBlockAliasMap.java |  86 
 .../hdfs/server/namenode/NullBlockFormat.java   |  87 
 .../hadoop/hdfs/server/namenode/TreePath.java   |   8 +-
 .../TestNameNodeProvidedImplementation.java |  25 +-
 23 files changed, 994 insertions(+), 1243 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/98f5ed5a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 7449987..cb57675 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -331,22 +331,19 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   public static final String DFS_NAMENODE_PROVIDED_ENABLED = 
"dfs.namenode.provided.enabled";
   public static final boolean DFS_NAMENODE_PROVIDED_ENABLED_DEFAULT = false;
 
-  public static final String DFS_NAMENODE_BLOCK_PROVIDER_CLASS = 
"dfs.namenode.block.provider.class";
-
-  public static final String DFS_PROVIDER_CLASS = "dfs.provider.class";
   public static final String DFS_PROVIDER_DF_CLASS = "dfs.provided.df.class";
   public static final String DFS_PROVIDER_STORAGEUUID = 
"dfs.provided.storage.id";
   public static final String DFS_PROVIDER_STORAGEUUID_DEFAULT =  "DS-PROVIDED";
-  public static final String DFS_PROVIDER_BLK_FORMAT_CLASS = 
"dfs.provided.blockformat.class";
+  public static final String DFS_PROVIDED_ALIASMAP_CLASS = 
"dfs.provided.aliasmap.class";
 
-  public static final String DFS_PROVIDED_BLOCK_MAP_DELIMITER = 
"dfs.provided.textprovider.delimiter";
-  public static final String DFS_PROVIDED_BLOCK_MAP_DELIMITER_DEFAULT = ",";
+  public static final String DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER = 
"dfs.provided.aliasmap.text.delimiter";
+  public static final String DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER_DEFAULT = 
",";
 
-  public static final String DFS_PROVIDED_BLOCK_MAP_READ_PATH = 
"dfs.provided.textprovider.read.path";
-  public static final String DFS_PROVIDED_BLOCK_MAP_PATH_DEFAULT = 
"file:///tmp/blocks.csv";
+  public static final String DFS_PROVIDED_ALIASMAP_TEXT_READ_PATH = 
"dfs.provided.aliasmap.text.read.path";
+  public static final String DFS_PROVIDED_ALIASMAP_TEXT_PATH_DEFAULT = 
"file:///tmp/blocks.csv";
 
-  public static final String DFS_PROVIDED_BLOCK_MAP_CODEC = 
"dfs.provided.textprovider.read.codec";
-  public static final String DFS_PROVIDED_BLOCK_MAP_WRITE_PATH  = 
"dfs.provided.textprovider.write.path";
+  public static final String DFS_PROVIDED_ALIASMAP_TEXT_CODEC = 
"dfs.provided.aliasmap.text.codec";
+  public static final String DFS_PROVIDED_ALIASMAP_TEXT_WRITE_PATH = 
"dfs.provided.aliasmap.text.write.path";
 
   public static final String  DFS_LIST_LIMIT = "dfs.ls.limit";
   

[06/50] [abbrv] hadoop git commit: HDFS-11792. [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl

2017-12-18 Thread kkaranasos
HDFS-11792. [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/55ade54b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/55ade54b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/55ade54b

Branch: refs/heads/YARN-6592
Commit: 55ade54b8ed36e18f028f478381a96e7b8c6be50
Parents: 4851f06
Author: Virajith Jalaparti 
Authored: Wed May 31 15:17:12 2017 -0700
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:38 2017 -0800

--
 .../fsdataset/impl/ProvidedVolumeImpl.java  |  6 +-
 .../fsdataset/impl/TestProvidedImpl.java| 94 ++--
 2 files changed, 92 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/55ade54b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
index a48e117..421b9cc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
@@ -191,7 +191,11 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
 
   @Override
   long getBlockPoolUsed(String bpid) throws IOException {
-return df.getBlockPoolUsed(bpid);
+if (bpSlices.containsKey(bpid)) {
+  return df.getBlockPoolUsed(bpid);
+} else {
+  throw new IOException("block pool " + bpid + " is not found");
+}
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/55ade54b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
index 2c119fe..4753235 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
@@ -83,6 +83,7 @@ public class TestProvidedImpl {
   private static final String BASE_DIR =
   new FileSystemTestHelper().getTestRootDir();
   private static final int NUM_LOCAL_INIT_VOLUMES = 1;
+  // Only support one provided volume for now.
   private static final int NUM_PROVIDED_INIT_VOLUMES = 1;
   private static final String[] BLOCK_POOL_IDS = {"bpid-0", "bpid-1"};
   private static final int NUM_PROVIDED_BLKS = 10;
@@ -208,6 +209,39 @@ public class TestProvidedImpl {
 }
   }
 
+  public static class TestProvidedVolumeDF
+  implements ProvidedVolumeDF, Configurable {
+
+@Override
+public void setConf(Configuration conf) {
+}
+
+@Override
+public Configuration getConf() {
+  return null;
+}
+
+@Override
+public long getCapacity() {
+  return Long.MAX_VALUE;
+}
+
+@Override
+public long getSpaceUsed() {
+  return -1;
+}
+
+@Override
+public long getBlockPoolUsed(String bpid) {
+  return -1;
+}
+
+@Override
+public long getAvailable() {
+  return Long.MAX_VALUE;
+}
+  }
+
   private static Storage.StorageDirectory createLocalStorageDirectory(
   File root, Configuration conf)
   throws SecurityException, IOException {
@@ -299,8 +333,8 @@ public class TestProvidedImpl {
   public void setUp() throws IOException {
 datanode = mock(DataNode.class);
 storage = mock(DataStorage.class);
-this.conf = new Configuration();
-this.conf.setLong(DFS_DATANODE_SCAN_PERIOD_HOURS_KEY, 0);
+conf = new Configuration();
+conf.setLong(DFS_DATANODE_SCAN_PERIOD_HOURS_KEY, 0);
 
 when(datanode.getConf()).thenReturn(conf);
 final DNConf dnConf = new DNConf(datanode);
@@ -312,8 +346,10 @@ public class TestProvidedImpl {
 new ShortCircuitRegistry(conf);
 when(datanode.getShortCircuitRegistry()).thenReturn(shortCircuitRegistry);
 
-this.conf.setClass(DFSConfigKeys.DFS_PROVIDER_CLASS,
+conf.setClass(DFSConfigKeys.DFS_PROVIDER_CLASS,
TestFileRegionProvider.class, FileRegionProvider.class);

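The setUp() code wires its test doubles in through Configuration.setClass,
the usual Hadoop pattern for pluggable implementations. A self-contained
sketch with hypothetical interface and class names:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

public class SetClassSketch {
  interface SpaceReporter { long getCapacity(); }

  public static class FakeSpaceReporter implements SpaceReporter {
    public long getCapacity() { return Long.MAX_VALUE; }
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setClass("demo.reporter.class", FakeSpaceReporter.class,
        SpaceReporter.class);
    // The code under test instantiates whatever class was registered.
    SpaceReporter reporter = ReflectionUtils.newInstance(
        conf.getClass("demo.reporter.class", FakeSpaceReporter.class,
            SpaceReporter.class), conf);
    System.out.println(reporter.getCapacity());
  }
}
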
[37/50] [abbrv] hadoop git commit: HDFS-12903. [READ] Fix closing streams in ImageWriter. Contributed by Virajith Jalaparti

2017-12-18 Thread kkaranasos
HDFS-12903. [READ] Fix closing streams in ImageWriter. Contributed by Virajith 
Jalaparti


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4b3a7859
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4b3a7859
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4b3a7859

Branch: refs/heads/YARN-6592
Commit: 4b3a785914d890c47745e57d12a5a9abd084ffc1
Parents: e515103
Author: Chris Douglas 
Authored: Fri Dec 15 17:41:46 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:42 2017 -0800

--
 .../dev-support/findbugs-exclude.xml| 28 
 1 file changed, 28 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b3a7859/hadoop-tools/hadoop-fs2img/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-tools/hadoop-fs2img/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-fs2img/dev-support/findbugs-exclude.xml
new file mode 100644
index 000..b60767f
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/dev-support/findbugs-exclude.xml
@@ -0,0 +1,28 @@
+<!-- findbugs-exclude.xml body stripped by the mail archiver -->





[38/50] [abbrv] hadoop git commit: Merge branch 'HDFS-9806' into trunk

2017-12-18 Thread kkaranasos
Merge branch 'HDFS-9806' into trunk


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fc7ec80d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fc7ec80d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fc7ec80d

Branch: refs/heads/YARN-6592
Commit: fc7ec80d85a751b2b2b261a2b97ec38c7b58f1df
Parents: 44825f0 4b3a785
Author: Chris Douglas 
Authored: Fri Dec 15 18:06:24 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 18:06:24 2017 -0800

--
 .../java/org/apache/hadoop/fs/StorageType.java  |   3 +-
 .../org/apache/hadoop/fs/shell/TestCount.java   |   3 +-
 .../hadoop/hdfs/protocol/HdfsConstants.java |   3 +
 .../hadoop/hdfs/protocol/LocatedBlock.java  | 103 ++-
 .../hdfs/protocol/ProvidedStorageLocation.java  |  89 ++
 .../hadoop/hdfs/protocolPB/PBHelperClient.java  |  36 +
 .../src/main/proto/hdfs.proto   |  15 +
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   7 +-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  29 +
 .../hdfs/protocolPB/AliasMapProtocolPB.java |  37 +
 .../AliasMapProtocolServerSideTranslatorPB.java | 134 +++
 ...yAliasMapProtocolClientSideTranslatorPB.java | 174 
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  28 +
 .../hdfs/server/aliasmap/InMemoryAliasMap.java  | 222 +
 .../aliasmap/InMemoryAliasMapProtocol.java  | 103 +++
 .../aliasmap/InMemoryLevelDBAliasMapServer.java | 153 +++
 .../hdfs/server/blockmanagement/BlockInfo.java  |  17 +-
 .../server/blockmanagement/BlockManager.java| 149 ++-
 .../BlockStoragePolicySuite.java|   6 +
 .../blockmanagement/DatanodeDescriptor.java |  44 +-
 .../server/blockmanagement/DatanodeManager.java |   2 +
 .../blockmanagement/DatanodeStatistics.java |   3 +
 .../server/blockmanagement/DatanodeStats.java   |   4 +-
 .../blockmanagement/DatanodeStorageInfo.java|  15 +-
 .../blockmanagement/HeartbeatManager.java   |   9 +-
 .../blockmanagement/LocatedBlockBuilder.java| 109 +++
 .../blockmanagement/ProvidedStorageMap.java | 540 +++
 .../blockmanagement/StorageTypeStats.java   |  33 +-
 .../hadoop/hdfs/server/common/BlockAlias.java   |  33 +
 .../hadoop/hdfs/server/common/FileRegion.java   |  85 ++
 .../hadoop/hdfs/server/common/Storage.java  |  71 +-
 .../hadoop/hdfs/server/common/StorageInfo.java  |   6 +
 .../common/blockaliasmap/BlockAliasMap.java | 113 +++
 .../impl/InMemoryLevelDBAliasMapClient.java | 178 
 .../impl/LevelDBFileRegionAliasMap.java | 274 ++
 .../impl/TextFileRegionAliasMap.java| 490 ++
 .../common/blockaliasmap/package-info.java  |  27 +
 .../server/datanode/BlockPoolSliceStorage.java  |  20 +-
 .../hdfs/server/datanode/DataStorage.java   |  44 +-
 .../hdfs/server/datanode/DirectoryScanner.java  |  26 +-
 .../datanode/FinalizedProvidedReplica.java  | 122 +++
 .../hdfs/server/datanode/ProvidedReplica.java   | 350 +++
 .../hdfs/server/datanode/ReplicaBuilder.java| 141 ++-
 .../hdfs/server/datanode/ReplicaInfo.java   |  20 +-
 .../hdfs/server/datanode/StorageLocation.java   |  54 +-
 .../server/datanode/fsdataset/FsDatasetSpi.java |   4 +-
 .../server/datanode/fsdataset/FsVolumeSpi.java  |  38 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |  65 +-
 .../datanode/fsdataset/impl/FsDatasetUtil.java  |  25 +-
 .../datanode/fsdataset/impl/FsVolumeImpl.java   |  19 +-
 .../fsdataset/impl/FsVolumeImplBuilder.java |   6 +
 .../fsdataset/impl/ProvidedVolumeImpl.java  | 718 ++
 .../federation/metrics/FederationMBean.java |   6 +
 .../federation/metrics/FederationMetrics.java   |   5 +
 .../federation/metrics/NamenodeBeanMetrics.java |  10 +
 .../resolver/MembershipNamenodeResolver.java|   1 +
 .../resolver/NamenodeStatusReport.java  |  12 +-
 .../router/NamenodeHeartbeatService.java|   3 +-
 .../store/records/MembershipStats.java  |   4 +
 .../records/impl/pb/MembershipStatsPBImpl.java  |  10 +
 .../apache/hadoop/hdfs/server/mover/Mover.java  |   2 +-
 .../server/namenode/FSImageCompression.java |   2 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  12 +
 .../hadoop/hdfs/server/namenode/NNStorage.java  |  10 +-
 .../hadoop/hdfs/server/namenode/NameNode.java   |  21 +
 .../hdfs/server/namenode/NameNodeMXBean.java|  10 +-
 .../namenode/metrics/FSNamesystemMBean.java |   7 +-
 .../hdfs/server/protocol/NamespaceInfo.java |   8 +
 .../src/main/proto/AliasMapProtocol.proto   |  68 ++
 .../src/main/proto/FederationProtocol.proto |   1 +
 .../src/main/resources/hdfs-default.xml | 119 +++
 .../src/main/webapps/hdfs/dfshealth.html|   1 +
 .../src/site/markdown/HdfsProvidedStorage.md| 247 +
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  |  30 +-
 

[44/50] [abbrv] hadoop git commit: YARN-7661. NodeManager metrics return wrong value after update node resource. Contributed by Yang Wang

2017-12-18 Thread kkaranasos
YARN-7661. NodeManager metrics return wrong value after update node resource. 
Contributed by Yang Wang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/811fabde
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/811fabde
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/811fabde

Branch: refs/heads/YARN-6592
Commit: 811fabdebe881248756c0165bf7667bfc22be9bb
Parents: 35ad9b1
Author: Jason Lowe 
Authored: Mon Dec 18 14:28:27 2017 -0600
Committer: Jason Lowe 
Committed: Mon Dec 18 15:20:06 2017 -0600

--
 .../yarn/server/nodemanager/metrics/NodeManagerMetrics.java| 2 +-
 .../server/nodemanager/metrics/TestNodeManagerMetrics.java | 6 ++
 2 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/811fabde/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
index f0abfd4..1e7149b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
@@ -211,7 +211,7 @@ public class NodeManagerMetrics {
 
   public void addResource(Resource res) {
 availableMB = availableMB + res.getMemorySize();
-availableGB.incr((int)Math.floor(availableMB/1024d));
+availableGB.set((int)Math.floor(availableMB/1024d));
 availableVCores.incr(res.getVirtualCores());
   }
 
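The fix is a single call change, but it matters because availableMB is
already a running total: incrementing the gauge by a value derived from
that total double-counts on every update after the first. A minimal
sketch with plain fields standing in for the metrics library:

public class AvailableGaugeSketch {
  private long availableMB;
  private int availableGB;

  void addResourceBuggy(long memMB) {
    availableMB += memMB;
    availableGB += (int) Math.floor(availableMB / 1024d); // accumulates stale totals
  }

  void addResourceFixed(long memMB) {
    availableMB += memMB;
    availableGB = (int) Math.floor(availableMB / 1024d);  // always the true total
  }

  public static void main(String[] args) {
    AvailableGaugeSketch g = new AvailableGaugeSketch();
    g.addResourceBuggy(4096);
    g.addResourceBuggy(4096);
    System.out.println(g.availableGB); // prints 12, although only 8 GB exist
  }
}
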

http://git-wip-us.apache.org/repos/asf/hadoop/blob/811fabde/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/metrics/TestNodeManagerMetrics.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/metrics/TestNodeManagerMetrics.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/metrics/TestNodeManagerMetrics.java
index a08ee82..5dead91 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/metrics/TestNodeManagerMetrics.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/metrics/TestNodeManagerMetrics.java
@@ -84,6 +84,12 @@ public class TestNodeManagerMetrics {
 // allocatedGB: 3.75GB allocated memory is shown as 4GB
 // availableGB: 4.25GB available memory is shown as 4GB
 checkMetrics(10, 1, 1, 1, 1, 1, 4, 7, 4, 13, 3);
+
+// Update resource and check available resource again
+metrics.addResource(total);
+MetricsRecordBuilder rb = getMetrics("NodeManagerMetrics");
+assertGauge("AvailableGB", 12, rb);
+assertGauge("AvailableVCores", 19, rb);
   }
 
   private void checkMetrics(int launched, int completed, int failed, int 
killed,





[47/50] [abbrv] hadoop git commit: YARN-7522. Introduce AllocationTagsManager to associate allocation tags to nodes. (Wangda Tan via asuresh)

2017-12-18 Thread kkaranasos
YARN-7522. Introduce AllocationTagsManager to associate allocation tags to 
nodes. (Wangda Tan via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bf2a8ccc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bf2a8ccc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bf2a8ccc

Branch: refs/heads/YARN-6592
Commit: bf2a8ccc0908d94b974a78bb4aba0035149dfc77
Parents: ca28a79
Author: Arun Suresh 
Authored: Fri Dec 8 00:24:00 2017 -0800
Committer: Konstantinos Karanasos 
Committed: Mon Dec 18 16:07:00 2017 -0800

--
 .../resourcemanager/RMActiveServiceContext.java |  15 +
 .../yarn/server/resourcemanager/RMContext.java  |   5 +
 .../server/resourcemanager/RMContextImpl.java   |  12 +
 .../server/resourcemanager/ResourceManager.java |   9 +
 .../constraint/AllocationTagsManager.java   | 431 +++
 .../constraint/AllocationTagsNamespaces.java|  31 ++
 .../InvalidAllocationTagsQueryException.java|  35 ++
 .../rmcontainer/RMContainer.java|   8 +
 .../rmcontainer/RMContainerImpl.java|  21 +
 .../constraint/TestAllocationTagsManager.java   | 328 ++
 .../rmcontainer/TestRMContainerImpl.java| 124 ++
 .../scheduler/capacity/TestUtils.java   |   9 +
 .../scheduler/fifo/TestFifoScheduler.java   |   5 +
 13 files changed, 1033 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf2a8ccc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
index 9dc5945..6ee3a4c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.yarn.event.Dispatcher;
 import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMDelegatedNodeLabelsUpdater;
 import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import 
org.apache.hadoop.yarn.server.resourcemanager.placement.PlacementManager;
+import 
org.apache.hadoop.yarn.server.resourcemanager.constraint.AllocationTagsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.recovery.NullRMStateStore;
 import org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore;
 import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSystem;
@@ -107,6 +108,7 @@ public class RMActiveServiceContext {
 
   private RMAppLifetimeMonitor rmAppLifetimeMonitor;
   private QueueLimitCalculator queueLimitCalculator;
+  private AllocationTagsManager allocationTagsManager;
 
   public RMActiveServiceContext() {
 queuePlacementManager = new PlacementManager();
@@ -398,6 +400,19 @@ public class RMActiveServiceContext {
 
   @Private
   @Unstable
+  public AllocationTagsManager getAllocationTagsManager() {
+return allocationTagsManager;
+  }
+
+  @Private
+  @Unstable
+  public void setAllocationTagsManager(
+  AllocationTagsManager allocationTagsManager) {
+this.allocationTagsManager = allocationTagsManager;
+  }
+
+  @Private
+  @Unstable
   public RMDelegatedNodeLabelsUpdater getRMDelegatedNodeLabelsUpdater() {
 return rmDelegatedNodeLabelsUpdater;
   }

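Conceptually, the new manager keeps per-node cardinalities of allocation
tags so the scheduler can evaluate placement constraints. A hedged sketch
of that bookkeeping, an illustration of the idea rather than the RM
implementation:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class TagCountSketch {
  // node -> (tag -> number of containers carrying that tag)
  private final Map<String, Map<String, Long>> tagsPerNode = new HashMap<>();

  public void addContainerTag(String node, String tag) {
    tagsPerNode.computeIfAbsent(node, n -> new HashMap<>())
        .merge(tag, 1L, Long::sum);
  }

  public long cardinality(String node, String tag) {
    return tagsPerNode.getOrDefault(node, Collections.emptyMap())
        .getOrDefault(tag, 0L);
  }

  public static void main(String[] args) {
    TagCountSketch m = new TagCountSketch();
    m.addContainerTag("node1", "hbase-rs");
    m.addContainerTag("node1", "hbase-rs");
    System.out.println(m.cardinality("node1", "hbase-rs")); // 2
  }
}
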
http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf2a8ccc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
index ec94030..62899d9 100644
--- 

[12/50] [abbrv] hadoop git commit: HDFS-12775. [READ] Fix reporting of Provided volumes

2017-12-18 Thread kkaranasos
HDFS-12775. [READ] Fix reporting of Provided volumes


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3b1d3030
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3b1d3030
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3b1d3030

Branch: refs/heads/YARN-6592
Commit: 3b1d30301bcd35bbe525a7e122d3e5acfab92c88
Parents: e1a28f9
Author: Virajith Jalaparti 
Authored: Thu Nov 16 03:52:12 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:39 2017 -0800

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   1 -
 .../server/blockmanagement/BlockManager.java|  19 ++-
 .../blockmanagement/DatanodeDescriptor.java |  24 ++--
 .../blockmanagement/DatanodeStatistics.java |   3 +
 .../server/blockmanagement/DatanodeStats.java   |   4 +-
 .../blockmanagement/HeartbeatManager.java   |   9 +-
 .../blockmanagement/ProvidedStorageMap.java |  60 +++--
 .../blockmanagement/StorageTypeStats.java   |  33 -
 .../fsdataset/impl/DefaultProvidedVolumeDF.java |  58 -
 .../fsdataset/impl/ProvidedVolumeDF.java|  34 -
 .../fsdataset/impl/ProvidedVolumeImpl.java  | 101 ---
 .../federation/metrics/FederationMBean.java |   6 +
 .../federation/metrics/FederationMetrics.java   |   5 +
 .../federation/metrics/NamenodeBeanMetrics.java |  10 ++
 .../resolver/MembershipNamenodeResolver.java|   1 +
 .../resolver/NamenodeStatusReport.java  |  12 +-
 .../router/NamenodeHeartbeatService.java|   3 +-
 .../store/records/MembershipStats.java  |   4 +
 .../records/impl/pb/MembershipStatsPBImpl.java  |  10 ++
 .../hdfs/server/namenode/FSNamesystem.java  |  12 ++
 .../hdfs/server/namenode/NameNodeMXBean.java|  10 +-
 .../namenode/metrics/FSNamesystemMBean.java |   7 +-
 .../src/main/proto/FederationProtocol.proto |   1 +
 .../src/main/resources/hdfs-default.xml |   8 --
 .../src/main/webapps/hdfs/dfshealth.html|   1 +
 .../blockmanagement/TestProvidedStorageMap.java |  39 +++---
 .../fsdataset/impl/TestProvidedImpl.java|  55 ++--
 .../metrics/TestFederationMetrics.java  |   2 +
 .../TestNameNodeProvidedImplementation.java | 125 ---
 29 files changed, 425 insertions(+), 232 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b1d3030/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index cb57675..fbdc859 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -331,7 +331,6 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String DFS_NAMENODE_PROVIDED_ENABLED = 
"dfs.namenode.provided.enabled";
   public static final boolean DFS_NAMENODE_PROVIDED_ENABLED_DEFAULT = false;
 
-  public static final String DFS_PROVIDER_DF_CLASS = "dfs.provided.df.class";
   public static final String DFS_PROVIDER_STORAGEUUID = 
"dfs.provided.storage.id";
   public static final String DFS_PROVIDER_STORAGEUUID_DEFAULT =  "DS-PROVIDED";
   public static final String DFS_PROVIDED_ALIASMAP_CLASS = 
"dfs.provided.aliasmap.class";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b1d3030/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 07502c1..f92c4e8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -103,6 +103,8 @@ import 
org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State;
 import org.apache.hadoop.hdfs.server.protocol.KeyUpdateCommand;
 import org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo;
 import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks;
+import org.apache.hadoop.hdfs.server.protocol.StorageReport;
+import org.apache.hadoop.hdfs.server.protocol.VolumeFailureSummary;
 import org.apache.hadoop.hdfs.util.FoldedTreeSet;
 import 

[18/50] [abbrv] hadoop git commit: HDFS-12776. [READ] Increasing replication for PROVIDED files should create local replicas

2017-12-18 Thread kkaranasos
HDFS-12776. [READ] Increasing replication for PROVIDED files should create 
local replicas


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/90d1b47a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/90d1b47a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/90d1b47a

Branch: refs/heads/YARN-6592
Commit: 90d1b47a2a400e07e2b6b812c4bbd9c4f2877786
Parents: 87dc026
Author: Virajith Jalaparti 
Authored: Thu Nov 9 13:03:41 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:39 2017 -0800

--
 .../hdfs/server/blockmanagement/BlockInfo.java  |  7 ++--
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 25 +++---
 .../TestNameNodeProvidedImplementation.java | 36 +++-
 3 files changed, 45 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/90d1b47a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index eb09b7b..8f59df6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -187,20 +187,23 @@ public abstract class BlockInfo extends Block
*/
   DatanodeStorageInfo findStorageInfo(DatanodeDescriptor dn) {
 int len = getCapacity();
+DatanodeStorageInfo providedStorageInfo = null;
 for(int idx = 0; idx < len; idx++) {
   DatanodeStorageInfo cur = getStorageInfo(idx);
   if(cur != null) {
 if (cur.getStorageType() == StorageType.PROVIDED) {
   //if block resides on provided storage, only match the storage ids
   if (dn.getStorageInfo(cur.getStorageID()) != null) {
-return cur;
+// do not return here as we have to check the other
+// DatanodeStorageInfos for this block which could be local
+providedStorageInfo = cur;
   }
 } else if (cur.getDatanodeDescriptor() == dn) {
   return cur;
 }
   }
 }
-return null;
+return providedStorageInfo;
   }
 
   /**

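The change above makes findStorageInfo remember a PROVIDED match and keep
scanning, so a local copy on the same datanode wins when both exist. The
selection logic in isolation (strings stand in for DatanodeStorageInfo
objects):

import java.util.Arrays;
import java.util.List;

public class PreferLocalSketch {
  static String pickStorage(List<String> storages) {
    String provided = null;
    for (String s : storages) {
      if (s.startsWith("provided")) {
        provided = s;       // candidate, but keep looking for a local copy
      } else if (s.startsWith("local")) {
        return s;           // a local replica is always preferred
      }
    }
    return provided;        // fall back to PROVIDED, or null if no match
  }

  public static void main(String[] args) {
    System.out.println(pickStorage(Arrays.asList("provided-0", "local-1")));
    // prints local-1; without the fix, provided-0 would have been returned
  }
}
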
http://git-wip-us.apache.org/repos/asf/hadoop/blob/90d1b47a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index db8d60c..fd06a56 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -1512,6 +1512,13 @@ class FsDatasetImpl implements 
FsDatasetSpi {
 }
   }
 
+  private boolean isReplicaProvided(ReplicaInfo replicaInfo) {
+if (replicaInfo == null) {
+  return false;
+}
+return replicaInfo.getVolume().getStorageType() == StorageType.PROVIDED;
+  }
+
   @Override // FsDatasetSpi
   public ReplicaHandler createTemporary(StorageType storageType,
   String storageId, ExtendedBlock b, boolean isTransfer)
@@ -1530,12 +1537,14 @@ class FsDatasetImpl implements 
FsDatasetSpi {
   isInPipeline = currentReplicaInfo.getState() == 
ReplicaState.TEMPORARY
   || currentReplicaInfo.getState() == ReplicaState.RBW;
   /*
-   * If the current block is old, reject.
+   * If the current block is not PROVIDED and old, reject.
* else If transfer request, then accept it.
* else if state is not RBW/Temporary, then reject
+   * If current block is PROVIDED, ignore the replica.
*/
-  if ((currentReplicaInfo.getGenerationStamp() >= 
b.getGenerationStamp())
-  || (!isTransfer && !isInPipeline)) {
+  if (((currentReplicaInfo.getGenerationStamp() >= b
+  .getGenerationStamp()) || (!isTransfer && !isInPipeline))
+  && !isReplicaProvided(currentReplicaInfo)) {
 throw new ReplicaAlreadyExistsException("Block " + b
 + " already exists in state " + currentReplicaInfo.getState()
+ " and thus cannot be created.");

[16/50] [abbrv] hadoop git commit: HDFS-12671. [READ] Test NameNode restarts when PROVIDED is configured

2017-12-18 Thread kkaranasos
HDFS-12671. [READ] Test NameNode restarts when PROVIDED is configured


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c293cc8e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c293cc8e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c293cc8e

Branch: refs/heads/YARN-6592
Commit: c293cc8e9b032d2c573340725ef8ecc15d49430d
Parents: 71d0a82
Author: Virajith Jalaparti 
Authored: Tue Nov 7 12:54:27 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:39 2017 -0800

--
 .../TestNameNodeProvidedImplementation.java | 52 +++-
 1 file changed, 39 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c293cc8e/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index aae04be..f0303b5 100644
--- 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -507,16 +507,10 @@ public class TestNameNodeProvidedImplementation {
 DataNode providedDatanode = cluster.getDataNodes().get(0);
 
 DFSClient client = new DFSClient(new InetSocketAddress("localhost",
-cluster.getNameNodePort()), cluster.getConfiguration(0));
+cluster.getNameNodePort()), cluster.getConfiguration(0));
 
 for (int i= 0; i < numFiles; i++) {
-  String filename = "/" + filePrefix + i + fileSuffix;
-
-  DatanodeInfo[] dnInfos = getAndCheckBlockLocations(client, filename, 1);
-  // location should be the provided DN.
-  assertTrue(dnInfos[0].getDatanodeUuid()
-  .equals(providedDatanode.getDatanodeUuid()));
-
+  verifyFileLocation(i);
   // NameNode thinks the datanode is down
   BlockManagerTestUtil.noticeDeadDatanode(
   cluster.getNameNode(),
@@ -524,12 +518,44 @@ public class TestNameNodeProvidedImplementation {
   cluster.waitActive();
   cluster.triggerHeartbeats();
   Thread.sleep(1000);
+  verifyFileLocation(i);
+}
+  }
 
-  // should find the block on the 2nd provided datanode.
-  dnInfos = getAndCheckBlockLocations(client, filename, 1);
-  assertTrue(
-  dnInfos[0].getDatanodeUuid()
-  .equals(providedDatanode.getDatanodeUuid()));
+  @Test(timeout=30000)
+  public void testNamenodeRestart() throws Exception {
+createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
+FixedBlockResolver.class);
+// 2 Datanodes, 1 PROVIDED and other DISK
+startCluster(NNDIRPATH, 2, null,
+new StorageType[][] {
+{StorageType.PROVIDED},
+{StorageType.DISK}},
+false);
+
+verifyFileLocation(numFiles - 1);
+cluster.restartNameNodes();
+cluster.waitActive();
+verifyFileLocation(numFiles - 1);
+  }
+
+  /**
+   * verify that the specified file has a valid provided location.
+   * @param fileIndex the index of the file to verify.
+   * @throws Exception
+   */
+  private void verifyFileLocation(int fileIndex)
+  throws Exception {
+DataNode providedDatanode = cluster.getDataNodes().get(0);
+DFSClient client = new DFSClient(
+new InetSocketAddress("localhost", cluster.getNameNodePort()),
+cluster.getConfiguration(0));
+if (fileIndex <= numFiles && fileIndex >= 0) {
+  String filename = "/" + filePrefix + fileIndex + fileSuffix;
+  DatanodeInfo[] dnInfos = getAndCheckBlockLocations(client, filename, 1);
+  // location should be the provided DN
+  assertEquals(providedDatanode.getDatanodeUuid(),
+  dnInfos[0].getDatanodeUuid());
 }
   }
 }





[45/50] [abbrv] hadoop git commit: HADOOP-15109. TestDFSIO -read -random doesn't work on file sized 4GB. Contributed by Ajay Kumar.

2017-12-18 Thread kkaranasos
HADOOP-15109. TestDFSIO -read -random doesn't work on file sized 4GB. 
Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c7a4dda3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c7a4dda3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c7a4dda3

Branch: refs/heads/YARN-6592
Commit: c7a4dda3c5571e64c216810f8eb1a824c9b8f6f8
Parents: 811fabd
Author: Chen Liang 
Authored: Mon Dec 18 13:25:47 2017 -0800
Committer: Chen Liang 
Committed: Mon Dec 18 13:25:47 2017 -0800

--
 .../src/test/java/org/apache/hadoop/fs/TestDFSIO.java | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c7a4dda3/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
index 68befea..10709be 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
@@ -31,8 +31,8 @@ import java.io.PrintStream;
 import java.text.DecimalFormat;
 import java.util.Collection;
 import java.util.Date;
-import java.util.Random;
 import java.util.StringTokenizer;
+import java.util.concurrent.ThreadLocalRandom;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
@@ -582,7 +582,7 @@ public class TestDFSIO implements Tool {
* 3) Skip-read skips skipSize bytes after every read  : skipSize > 0
*/
   public static class RandomReadMapper extends IOStatMapper {
-private Random rnd;
+private ThreadLocalRandom rnd;
 private long fileSize;
 private long skipSize;
 
@@ -593,7 +593,7 @@ public class TestDFSIO implements Tool {
 }
 
 public RandomReadMapper() { 
-  rnd = new Random();
+  rnd = ThreadLocalRandom.current();
 }
 
 @Override // IOMapperBase
@@ -635,8 +635,8 @@ public class TestDFSIO implements Tool {
  * @return
  */
 private long nextOffset(long current) {
-  if(skipSize == 0)
-return rnd.nextInt((int)(fileSize));
+  if (skipSize == 0)
+return rnd.nextLong(fileSize);
   if(skipSize > 0)
 return (current < 0) ? 0 : (current + bufferSize + skipSize);
   // skipSize < 0
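
The switch to ThreadLocalRandom.nextLong matters because the old code cast the
file size to int before drawing an offset; for a 4 GiB file that cast yields 0
and Random.nextInt then throws. A small standalone demonstration (not part of
the patch):

import java.util.concurrent.ThreadLocalRandom;

public class OffsetBoundDemo {
  public static void main(String[] args) {
    long fileSize = 4L * 1024 * 1024 * 1024;   // 4 GiB == 2^32
    // The old code did: rnd.nextInt((int) fileSize).
    // (int) fileSize truncates to 0, and Random.nextInt(0) throws
    // IllegalArgumentException ("bound must be positive").
    System.out.println((int) fileSize);        // prints 0
    // The fix draws a long offset directly, valid for any positive size.
    long offset = ThreadLocalRandom.current().nextLong(fileSize);
    System.out.println(offset);                // uniform in [0, fileSize)
  }
}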





[25/50] [abbrv] hadoop git commit: HDFS-12894. [READ] Skip setting block count of ProvidedDatanodeStorageInfo on DN registration update

2017-12-18 Thread kkaranasos
HDFS-12894. [READ] Skip setting block count of ProvidedDatanodeStorageInfo on 
DN registration update


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fb996a32
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fb996a32
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fb996a32

Branch: refs/heads/YARN-6592
Commit: fb996a32a98a25c0fe34a8ebb28563b53cd6e20e
Parents: 9c35be8
Author: Virajith Jalaparti 
Authored: Tue Dec 5 17:55:32 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:40 2017 -0800

--
 .../server/blockmanagement/BlockManager.java|  5 +
 .../blockmanagement/DatanodeDescriptor.java |  4 +++-
 .../TestNameNodeProvidedImplementation.java | 20 +++-
 3 files changed, 27 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb996a32/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index f92c4e8..916cbaa 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -4943,4 +4943,9 @@ public class BlockManager implements BlockStatsMXBean {
   public void setBlockRecoveryTimeout(long blockRecoveryTimeout) {
 pendingRecoveryBlocks.setRecoveryTimeoutInterval(blockRecoveryTimeout);
   }
+
+  @VisibleForTesting
+  public ProvidedStorageMap getProvidedStorageMap() {
+return providedStorageMap;
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb996a32/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index 83c608f..fc58708 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -919,7 +919,9 @@ public class DatanodeDescriptor extends DatanodeInfo {
 
 // must re-process IBR after re-registration
 for(DatanodeStorageInfo storage : getStorageInfos()) {
-  storage.setBlockReportCount(0);
+  if (storage.getStorageType() != StorageType.PROVIDED) {
+storage.setBlockReportCount(0);
+  }
 }
 heartbeatedSinceRegistration = false;
 forceRegistration = false;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb996a32/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index deaf9d5..d057247 100644
--- 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -559,7 +559,9 @@ public class TestNameNodeProvidedImplementation {
 DataNode providedDatanode2 = cluster.getDataNodes().get(1);
 
 DFSClient client = new DFSClient(new InetSocketAddress("localhost",
-cluster.getNameNodePort()), cluster.getConfiguration(0));
+cluster.getNameNodePort()), cluster.getConfiguration(0));
+
+DatanodeStorageInfo providedDNInfo = getProvidedDatanodeStorageInfo();
 
 if (numFiles >= 1) {
   String filename = "/" + filePrefix + (numFiles - 1) + fileSuffix;
@@ -596,10 +598,15 @@ public class TestNameNodeProvidedImplementation {
   providedDatanode2.getDatanodeId().getXferAddr());
   getAndCheckBlockLocations(client, filename, baseFileLen, 1, 0);
 
+  // BR count for the provided ProvidedDatanodeStorageInfo should reset to
+  // 0, when all DNs with PROVIDED storage fail.
+  

[14/50] [abbrv] hadoop git commit: HDFS-12789. [READ] Image generation tool does not close an opened stream

2017-12-18 Thread kkaranasos
HDFS-12789. [READ] Image generation tool does not close an opened stream


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/87dc026b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/87dc026b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/87dc026b

Branch: refs/heads/YARN-6592
Commit: 87dc026beec5d69a84771631ebca5fadb2f7195b
Parents: c293cc8
Author: Virajith Jalaparti 
Authored: Wed Nov 8 10:28:50 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:39 2017 -0800

--
 .../hadoop/hdfs/server/namenode/ImageWriter.java   | 17 -
 1 file changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/87dc026b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
 
b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
index ea1888a..390bb39 100644
--- 
a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
+++ 
b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
@@ -165,16 +165,23 @@ public class ImageWriter implements Closeable {
 
 // create directory and inode sections as side-files.
 // The details are written to files to avoid keeping them in memory.
-dirsTmp = File.createTempFile("fsimg_dir", null);
-dirsTmp.deleteOnExit();
-dirs = beginSection(new FileOutputStream(dirsTmp));
+FileOutputStream dirsTmpStream = null;
+try {
+  dirsTmp = File.createTempFile("fsimg_dir", null);
+  dirsTmp.deleteOnExit();
+  dirsTmpStream = new FileOutputStream(dirsTmp);
+  dirs = beginSection(dirsTmpStream);
+} catch (IOException e) {
+  IOUtils.cleanupWithLogger(null, raw, dirsTmpStream);
+  throw e;
+}
+
 try {
   inodesTmp = File.createTempFile("fsimg_inode", null);
   inodesTmp.deleteOnExit();
   inodes = new FileOutputStream(inodesTmp);
 } catch (IOException e) {
-  // appropriate to close raw?
-  IOUtils.cleanup(null, raw, dirs);
+  IOUtils.cleanupWithLogger(null, raw, dirsTmpStream, dirs);
   throw e;
 }
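
The shape of the fix is the usual acquire-or-release-on-failure idiom: every
stream opened so far must be closed before rethrowing, because the surrounding
object is never constructed and its close() will never run. A minimal sketch
of the idiom with plain JDK types (hypothetical file arguments):

import java.io.Closeable;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

final class OpenBothOrNeither {
  // Opens two dependent streams; if the second open fails, the first is
  // closed before the exception propagates, so no descriptor leaks.
  static Closeable[] open(File a, File b) throws IOException {
    FileOutputStream first = new FileOutputStream(a);
    try {
      FileOutputStream second = new FileOutputStream(b);
      return new Closeable[] {first, second};
    } catch (IOException e) {
      first.close();
      throw e;
    }
  }
}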
 





[48/50] [abbrv] hadoop git commit: YARN-7448. [API] Add SchedulingRequest to the AllocateRequest. (Panagiotis Garefalakis via asuresh)

2017-12-18 Thread kkaranasos
YARN-7448. [API] Add SchedulingRequest to the AllocateRequest. (Panagiotis 
Garefalakis via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ca28a795
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ca28a795
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ca28a795

Branch: refs/heads/YARN-6592
Commit: ca28a795ce9738471a20c81bf9a245bc70b7cffc
Parents: 3b9faf5
Author: Arun Suresh 
Authored: Fri Nov 17 10:42:43 2017 -0800
Committer: Konstantinos Karanasos 
Committed: Mon Dec 18 16:07:00 2017 -0800

--
 .../api/protocolrecords/AllocateRequest.java| 42 ++
 .../hadoop/yarn/api/records/ResourceSizing.java | 27 +++
 .../yarn/api/records/SchedulingRequest.java |  1 +
 .../src/main/proto/yarn_service_protos.proto|  1 +
 .../impl/pb/AllocateRequestPBImpl.java  | 83 
 .../records/impl/pb/ResourceSizingPBImpl.java   |  2 +-
 .../impl/pb/SchedulingRequestPBImpl.java| 16 
 .../hadoop/yarn/api/TestPBImplRecords.java  | 19 +
 8 files changed, 190 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca28a795/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.java
index ae0891e..d8d2347 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.yarn.api.protocolrecords;
 
+import java.util.Collections;
 import java.util.List;
 
 import org.apache.hadoop.classification.InterfaceAudience.Public;
@@ -28,6 +29,7 @@ import org.apache.hadoop.yarn.api.records.Container;
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.ResourceBlacklistRequest;
 import org.apache.hadoop.yarn.api.records.ResourceRequest;
+import org.apache.hadoop.yarn.api.records.SchedulingRequest;
 import org.apache.hadoop.yarn.api.records.UpdateContainerRequest;
 import org.apache.hadoop.yarn.util.Records;
 
@@ -212,6 +214,32 @@ public abstract class AllocateRequest {
   public abstract void setUpdateRequests(
  List<UpdateContainerRequest> updateRequests);
 
+  /**
+   * Get the list of Scheduling requests being sent by the
+   * ApplicationMaster.
+   * @return list of {@link SchedulingRequest} being sent by the
+   * ApplicationMaster.
+   */
+  @Public
+  @Unstable
+  public List<SchedulingRequest> getSchedulingRequests() {
+return Collections.EMPTY_LIST;
+  }
+
+  /**
+   * Set the list of Scheduling requests to inform the
+   * ResourceManager about the application's resource requirements
+   * (potentially including allocation tags & placement constraints).
+   * @param schedulingRequests list of SchedulingRequest to update
+   *  the ResourceManager about the application's resource
+   *  requirements.
+   */
+  @Public
+  @Unstable
+  public void setSchedulingRequests(
+  List<SchedulingRequest> schedulingRequests) {
+  }
+
   @Public
   @Unstable
   public static AllocateRequestBuilder newBuilder() {
@@ -314,6 +342,20 @@ public abstract class AllocateRequest {
 }
 
 /**
+ * Set the schedulingRequests of the request.
+ * @see AllocateRequest#setSchedulingRequests(List)
+ * @param schedulingRequests SchedulingRequest of the request
+ * @return {@link AllocateRequestBuilder}
+ */
+@Public
+@Unstable
+public AllocateRequestBuilder schedulingRequests(
+List<SchedulingRequest> schedulingRequests) {
+  allocateRequest.setSchedulingRequests(schedulingRequests);
+  return this;
+}
+
+/**
  * Return generated {@link AllocateRequest} object.
  * @return {@link AllocateRequest}
  */
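
Putting the new builder method together with the existing ones, an
ApplicationMaster heartbeat that carries SchedulingRequests might look like
the sketch below; the request list itself is assumed to be built elsewhere:

import java.util.List;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;

final class AllocateWithSchedulingRequests {
  static AllocateRequest heartbeat(int lastResponseId,
      List<SchedulingRequest> schedulingRequests) {
    // SchedulingRequests ride on the regular allocate heartbeat, next to
    // the classic ResourceRequest ask list.
    return AllocateRequest.newBuilder()
        .responseId(lastResponseId)
        .progress(0.5f)
        .schedulingRequests(schedulingRequests)
        .build();
  }
}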

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca28a795/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
index d82be11..8cdc63f 100644
--- 

[49/50] [abbrv] hadoop git commit: YARN-6594. [API] Introduce SchedulingRequest object. (Konstantinos Karanasos via wangda)

2017-12-18 Thread kkaranasos
YARN-6594. [API] Introduce SchedulingRequest object. (Konstantinos Karanasos 
via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/16cbed89
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/16cbed89
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/16cbed89

Branch: refs/heads/YARN-6592
Commit: 16cbed8998648439e16fa2c40decadffc0679d98
Parents: 45b1ca6
Author: Wangda Tan 
Authored: Mon Oct 30 16:54:02 2017 -0700
Committer: Konstantinos Karanasos 
Committed: Mon Dec 18 16:07:00 2017 -0800

--
 .../hadoop/yarn/api/records/ResourceSizing.java |  64 +
 .../yarn/api/records/SchedulingRequest.java | 205 ++
 .../src/main/proto/yarn_protos.proto|  14 +
 .../records/impl/pb/ResourceSizingPBImpl.java   | 117 
 .../impl/pb/SchedulingRequestPBImpl.java| 266 +++
 5 files changed, 666 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/16cbed89/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
new file mode 100644
index 000..d82be11
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
@@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.api.records;
+
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.util.Records;
+
+/**
+ * {@code ResourceSizing} contains information for the size of a
+ * {@link SchedulingRequest}, such as the number of requested allocations and
+ * the resources for each allocation.
+ */
+@Public
+@Unstable
+public abstract class ResourceSizing {
+
+  @Public
+  @Unstable
+  public static ResourceSizing newInstance(Resource resources) {
+return ResourceSizing.newInstance(1, resources);
+  }
+
+  @Public
+  @Unstable
+  public static ResourceSizing newInstance(int numAllocations, Resource resources) {
+ResourceSizing resourceSizing = Records.newRecord(ResourceSizing.class);
+resourceSizing.setNumAllocations(numAllocations);
+resourceSizing.setResources(resources);
+return resourceSizing;
+  }
+
+  @Public
+  @Unstable
+  public abstract int getNumAllocations();
+
+  @Public
+  @Unstable
+  public abstract void setNumAllocations(int numAllocations);
+
+  @Public
+  @Unstable
+  public abstract Resource getResources();
+
+  @Public
+  @Unstable
+  public abstract void setResources(Resource resources);
+}
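
As a usage sketch, a ResourceSizing for five allocations of 2 GB and 1 vcore
each would be built as below; Resource.newInstance is the long-standing
factory in the same records package:

import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceSizing;

final class SizingExample {
  static ResourceSizing fiveSmallContainers() {
    // 5 allocations of <2048 MB, 1 vcore> each.
    return ResourceSizing.newInstance(5, Resource.newInstance(2048, 1));
  }
}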

http://git-wip-us.apache.org/repos/asf/hadoop/blob/16cbed89/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
new file mode 100644
index 000..47a0697
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
@@ -0,0 +1,205 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this 

[42/50] [abbrv] hadoop git commit: HDFS-12818. Support multiple storages in DataNodeCluster / SimulatedFSDataset. Contributed by Erik Krogen.

2017-12-18 Thread kkaranasos
HDFS-12818. Support multiple storages in DataNodeCluster / SimulatedFSDataset. 
Contributed by Erik Krogen.

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/94576b17
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/94576b17
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/94576b17

Branch: refs/heads/YARN-6592
Commit: 94576b17fbc19c440efafb6c3322f53ec78a5b55
Parents: 0010089
Author: Erik Krogen 
Authored: Mon Dec 18 11:36:22 2017 -0800
Committer: Konstantin V Shvachko 
Committed: Mon Dec 18 11:36:22 2017 -0800

--
 .../server/datanode/SimulatedFSDataset.java | 308 +--
 .../server/datanode/TestSimulatedFSDataset.java | 147 +
 ...tSimulatedFSDatasetWithMultipleStorages.java |  50 +++
 3 files changed, 352 insertions(+), 153 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/94576b17/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
index c31df4c..987ba97 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
@@ -23,8 +23,8 @@ import java.io.InputStream;
 import java.io.OutputStream;
 import java.net.URI;
 import java.nio.channels.ClosedChannelException;
+import java.util.ArrayList;
 import java.util.Collection;
-import java.util.Collections;
 import java.util.HashMap;
 import java.util.LinkedList;
 import java.util.List;
@@ -37,11 +37,13 @@ import javax.management.NotCompliantMBeanException;
 import javax.management.ObjectName;
 import javax.management.StandardMBean;
 
+import com.google.common.math.LongMath;
 import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.DF;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.server.datanode.checker.VolumeCheckResult;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImplTestUtils;
 import org.apache.hadoop.util.AutoCloseableLock;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.protocol.Block;
@@ -88,6 +90,7 @@ import org.apache.hadoop.util.DataChecksum;
  */
 public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
   public final static int BYTE_MASK = 0xff;
+  private final static int DEFAULT_NUM_SIMULATED_DATA_DIRS = 1;
   static class Factory extends FsDatasetSpi.Factory<SimulatedFSDataset> {
 @Override
 public SimulatedFSDataset newInstance(DataNode datanode,
@@ -100,10 +103,42 @@ public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
   return true;
 }
   }
-  
+
+  /**
+   * Used to change the default number of data storages and to mark the
+   * FSDataset as simulated.
+   */
+  static class TestUtilsFactory
+  extends FsDatasetTestUtils.Factory<FsDatasetTestUtils> {
+@Override
+public FsDatasetTestUtils newInstance(DataNode datanode) {
+  return new FsDatasetImplTestUtils(datanode) {
+@Override
+public int getDefaultNumOfDataDirs() {
+  return DEFAULT_NUM_SIMULATED_DATA_DIRS;
+}
+  };
+}
+
+@Override
+public boolean isSimulated() {
+  return true;
+}
+
+@Override
+public int getDefaultNumOfDataDirs() {
+  return DEFAULT_NUM_SIMULATED_DATA_DIRS;
+}
+
+  }
+
   public static void setFactory(Configuration conf) {
 conf.set(DFSConfigKeys.DFS_DATANODE_FSDATASET_FACTORY_KEY,
 Factory.class.getName());
+conf.setClass("org.apache.hadoop.hdfs.server.datanode." +
+"SimulatedFSDatasetTestUtilsFactory",
+TestUtilsFactory.class, FsDatasetTestUtils.Factory.class
+);
   }
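
For tests that want the simulated dataset, the wiring is a single call on the
Configuration before DataNodes start; with this patch the same call also
registers the matching FsDatasetTestUtils factory. A short usage sketch:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset;

final class SimulatedDatasetSetup {
  static Configuration newSimulatedConf() {
    Configuration conf = new HdfsConfiguration();
    // DataNodes started with this conf keep blocks in memory instead of
    // on disk, which is what DataNodeCluster relies on.
    SimulatedFSDataset.setFactory(conf);
    return conf;
  }
}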
 
   public static byte simulatedByte(Block b, long offsetInBlk) {
@@ -151,7 +186,7 @@ public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
   if (theBlock.getNumBytes() < 0) {
 theBlock.setNumBytes(0);
   }
-  if (!storage.alloc(bpid, theBlock.getNumBytes())) { 
+  if (!getStorage(theBlock).alloc(bpid, theBlock.getNumBytes())) {
 // expected length - actual length may
 // be more - we find out at finalize
 DataNode.LOG.warn("Lack of free storage on a block alloc");
@@ -169,7 +204,7 @@ public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
 
 @Override
 public String getStorageUuid() {
-  return storage.getStorageUuid();
+  return 

[33/50] [abbrv] hadoop git commit: HDFS-12905. [READ] Handle decommissioning and under-maintenance Datanodes with Provided storage.

2017-12-18 Thread kkaranasos
HDFS-12905. [READ] Handle decommissioning and under-maintenance Datanodes with 
Provided storage.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0f6aa956
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0f6aa956
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0f6aa956

Branch: refs/heads/YARN-6592
Commit: 0f6aa9564cbe0812a8cab36d999e353269dd6bc9
Parents: 2298f2d
Author: Virajith Jalaparti 
Authored: Fri Dec 8 10:07:40 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:41 2017 -0800

--
 .../blockmanagement/ProvidedStorageMap.java | 13 ++-
 .../TestNameNodeProvidedImplementation.java | 95 
 2 files changed, 107 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0f6aa956/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
index 7fbc71a..208ed3e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -342,14 +342,25 @@ public class ProvidedStorageMap {
   return dn;
 }
   }
+  // prefer live nodes first.
+  DatanodeDescriptor dn = chooseRandomNode(excludedUUids, true);
+  if (dn == null) {
+dn = chooseRandomNode(excludedUUids, false);
+  }
+  return dn;
+}
 
+private DatanodeDescriptor chooseRandomNode(Set<String> excludedUUids,
+boolean preferLiveNodes) {
   Random r = new Random();
   for (int i = dnR.size() - 1; i >= 0; --i) {
 int pos = r.nextInt(i + 1);
 DatanodeDescriptor node = dnR.get(pos);
 String uuid = node.getDatanodeUuid();
 if (!excludedUUids.contains(uuid)) {
-  return node;
+  if (!preferLiveNodes || node.getAdminState() == AdminStates.NORMAL) {
+return node;
+  }
 }
 Collections.swap(dnR, i, pos);
   }
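
The selection logic above is a random scan without replacement, run twice:
once restricted to live (NORMAL) nodes, then again unrestricted if the first
pass finds nothing. A generic sketch of the same scheme (illustrative types,
not the HDFS classes):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import java.util.function.Predicate;

final class TwoPassRandomPick {
  static <T> T choose(List<T> candidates, Predicate<T> eligible,
      Predicate<T> preferred) {
    T match = pickRandom(candidates, eligible.and(preferred)); // live first
    return match != null ? match : pickRandom(candidates, eligible);
  }

  // Random scan without replacement: each rejected candidate is swapped out
  // of the active window so it is not drawn again.
  private static <T> T pickRandom(List<T> candidates, Predicate<T> ok) {
    List<T> pool = new ArrayList<>(candidates);
    Random r = new Random();
    for (int i = pool.size() - 1; i >= 0; --i) {
      int pos = r.nextInt(i + 1);
      T c = pool.get(pos);
      if (ok.test(c)) {
        return c;
      }
      Collections.swap(pool, i, pos);
    }
    return null;
  }
}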

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0f6aa956/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index d057247..394e8d8 100644
--- 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -56,6 +56,7 @@ import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStatistics;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
 import org.apache.hadoop.hdfs.server.blockmanagement.ProvidedStorageMap;
@@ -795,4 +796,98 @@ public class TestNameNodeProvidedImplementation {
 FileUtils.deleteDirectory(tempDirectory);
   }
 
+  private DatanodeDescriptor getDatanodeDescriptor(DatanodeManager dnm,
+  int dnIndex) throws Exception {
+return dnm.getDatanode(cluster.getDataNodes().get(dnIndex).getDatanodeId());
+  }
+
+  private void startDecommission(FSNamesystem namesystem, DatanodeManager dnm,
+  int dnIndex) throws Exception {
+namesystem.writeLock();
+DatanodeDescriptor dnDesc = getDatanodeDescriptor(dnm, dnIndex);
+dnm.getDatanodeAdminManager().startDecommission(dnDesc);
+namesystem.writeUnlock();
+  }
+
+  private void startMaintenance(FSNamesystem namesystem, DatanodeManager dnm,
+  int dnIndex) throws Exception {
+namesystem.writeLock();
+DatanodeDescriptor dnDesc = getDatanodeDescriptor(dnm, dnIndex);
+dnm.getDatanodeAdminManager().startMaintenance(dnDesc, Long.MAX_VALUE);
+namesystem.writeUnlock();
+  }
+
+  

[46/50] [abbrv] hadoop git commit: YARN-6595. [API] Add Placement Constraints at the application level. (Arun Suresh via kkaranasos)

2017-12-18 Thread kkaranasos
YARN-6595. [API] Add Placement Constraints at the application level. (Arun 
Suresh via kkaranasos)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3b9faf58
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3b9faf58
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3b9faf58

Branch: refs/heads/YARN-6592
Commit: 3b9faf585af3760fabd0bb31d22d87ed95b2ef23
Parents: 16cbed8
Author: Konstantinos Karanasos <kkarana...@apache.org>
Authored: Mon Nov 13 15:25:24 2017 -0800
Committer: Konstantinos Karanasos <kkarana...@apache.org>
Committed: Mon Dec 18 16:07:00 2017 -0800

--
 .../RegisterApplicationMasterRequest.java   |  42 -
 .../yarn/api/resource/PlacementConstraint.java  | 156 +++
 .../src/main/proto/yarn_protos.proto|   6 +
 .../src/main/proto/yarn_service_protos.proto|   1 +
 .../RegisterApplicationMasterRequestPBImpl.java | 106 -
 .../hadoop/yarn/api/BasePBImplRecordsTest.java  |  11 ++
 6 files changed, 313 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b9faf58/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
index 395e190..f2d537a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
@@ -18,11 +18,16 @@
 
 package org.apache.hadoop.yarn.api.protocolrecords;
 
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Stable;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
+import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
 import org.apache.hadoop.yarn.util.Records;
-
 /**
  * The request sent by the {@code ApplicationMaster} to {@code ResourceManager}
  * on registration.
@@ -132,4 +137,39 @@ public abstract class RegisterApplicationMasterRequest {
   @Public
   @Stable
   public abstract void setTrackingUrl(String trackingUrl);
+
+  /**
+   * Return all Placement Constraints specified at the Application level. The
+   * mapping is from a set of allocation tags to a
+   * PlacementConstraint associated with the tags, i.e., each
+   * {@link org.apache.hadoop.yarn.api.records.SchedulingRequest} that has those
+   * tags will be placed taking into account the corresponding constraint.
+   *
+   * @return A map of Placement Constraints.
+   */
+  @Public
+  @Unstable
+  public Map<Set<String>, PlacementConstraint> getPlacementConstraints() {
+return new HashMap<>();
+  }
+
+  /**
+   * Set Placement Constraints applicable to the
+   * {@link org.apache.hadoop.yarn.api.records.SchedulingRequest}s
+   * of this application.
+   * The mapping is from a set of allocation tags to a
+   * PlacementConstraint associated with the tags.
+   * For example:
+   *  Map 
+   *   hb_regionserver -> node_anti_affinity,
+   *   hb_regionserver, hb_master -> rack_affinity,
+   *   ...
+   *  
+   * @param placementConstraints Placement Constraint Mapping.
+   */
+  @Public
+  @Unstable
+  public void setPlacementConstraints(
+  Map<Set<String>, PlacementConstraint> placementConstraints) {
+  }
 }
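
An AM registration that attaches an anti-affinity constraint to the allocation
tag hb_regionserver could then look like the sketch below. It assumes the
PlacementConstraints helper DSL from the same api.resource package, which is
not part of this diff:

import java.util.Collections;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.build;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

final class RegisterWithConstraints {
  static RegisterApplicationMasterRequest register(String host, int port,
      String trackingUrl) {
    RegisterApplicationMasterRequest req =
        RegisterApplicationMasterRequest.newInstance(host, port, trackingUrl);
    // No two containers tagged hb_regionserver on the same node.
    PlacementConstraint antiAffinity =
        build(targetNotIn(NODE, allocationTag("hb_regionserver")));
    Map<Set<String>, PlacementConstraint> constraints =
        Collections.singletonMap(
            Collections.singleton("hb_regionserver"), antiAffinity);
    req.setPlacementConstraints(constraints);
    return req;
  }
}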

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b9faf58/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
index f0e3982..b6e851a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
@@ -54,6 +54,26 @@ public class PlacementConstrain

[30/50] [abbrv] hadoop git commit: HDFS-12887. [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally

2017-12-18 Thread kkaranasos
HDFS-12887. [READ] Allow Datanodes with Provided volumes to start when blocks 
with the same id exist locally


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/71ec1701
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/71ec1701
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/71ec1701

Branch: refs/heads/YARN-6592
Commit: 71ec170107e67e42cdbc5052c3f7b23c64751835
Parents: 4531588
Author: Virajith Jalaparti 
Authored: Wed Dec 6 09:42:31 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:41 2017 -0800

--
 .../hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/71ec1701/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
index f65fbbc..59ec100 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
@@ -208,8 +208,8 @@ class ProvidedVolumeImpl extends FsVolumeImpl {
 incrNumBlocks();
 incDfsUsed(region.getBlock().getNumBytes());
   } else {
-throw new IOException("A block with id " + newReplica.getBlockId()
-+ " already exists in the volumeMap");
+LOG.warn("A block with id " + newReplica.getBlockId()
++ " exists locally. Skipping PROVIDED replica");
   }
 }
   }





[41/50] [abbrv] hadoop git commit: YARN-7664. Several javadoc errors. Contributed by Sean Mackrory.

2017-12-18 Thread kkaranasos
YARN-7664. Several javadoc errors. Contributed by Sean Mackrory.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/00100895
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/00100895
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/00100895

Branch: refs/heads/YARN-6592
Commit: 001008958d8da008ed2e3be370ea4431fd023c97
Parents: 9289641
Author: Akira Ajisaka 
Authored: Mon Dec 18 22:24:51 2017 +0900
Committer: Akira Ajisaka 
Committed: Mon Dec 18 22:24:51 2017 +0900

--
 .../yarn/api/protocolrecords/AllocateResponse.java  |  4 ++--
 .../hadoop/yarn/util/resource/ResourceUtils.java|  2 +-
 .../fpga/AbstractFpgaVendorPlugin.java  |  2 +-
 .../resourceplugin/fpga/IntelFpgaOpenclPlugin.java  | 16 
 .../yarn/server/resourcemanager/rmapp/RMApp.java|  2 +-
 .../scheduler/capacity/CSQueueUtils.java| 10 ++
 6 files changed, 19 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/00100895/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
index 98346ce..655c6dc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
@@ -376,7 +376,7 @@ public abstract class AllocateResponse {
* Get the list of running containers as viewed by
* ResourceManager from previous application attempts which
* have not been reported to the Application Master yet.
-   * 
+   * 
* These containers were recovered by the RM after the application master
* had already registered. This may happen after RM restart when some NMs get
* delayed in connecting to the RM and reporting the active containers.
@@ -394,7 +394,7 @@ public abstract class AllocateResponse {
* Set the list of running containers as viewed by
* ResourceManager from previous application attempts which have
* not been reported to the Application Master yet.
-   * 
+   * 
* These containers were recovered by the RM after the application master
* had already registered. This may happen after RM restart when some NMs get
* delayed in connecting to the RM and reporting the active containers.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/00100895/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
index 39fd0c5..17567e8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
@@ -422,7 +422,7 @@ public class ResourceUtils {
* Extract unit and actual value from resource value.
* @param resourceValue Value of the resource
* @return Array containing unit and value. [0]=unit, [1]=value
-   * @throws IllegalArgumentExcpetion if units contain non alpha characters
+   * @throws IllegalArgumentException if units contain non alpha characters
*/
   public static String[] parseResourceValue(String resourceValue) {
 String[] resource = new String[2];

http://git-wip-us.apache.org/repos/asf/hadoop/blob/00100895/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java
 

[21/50] [abbrv] hadoop git commit: HDFS-12665. [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb). Contributed by Ewan Higgs.

2017-12-18 Thread kkaranasos
HDFS-12665. [AliasMap] Create a version of the AliasMap that runs in memory in 
the Namenode (leveldb). Contributed by Ewan Higgs.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/352f994b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/352f994b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/352f994b

Branch: refs/heads/YARN-6592
Commit: 352f994b6484524cdcfcda021046c59905b62f31
Parents: cc933cb
Author: Virajith Jalaparti 
Authored: Thu Nov 30 10:37:28 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:40 2017 -0800

--
 .../hdfs/protocol/ProvidedStorageLocation.java  |  85 +
 .../hadoop/hdfs/protocolPB/PBHelperClient.java  |  32 ++
 .../src/main/proto/hdfs.proto   |  14 +
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   7 +-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   9 +
 .../hdfs/protocolPB/AliasMapProtocolPB.java |  35 ++
 .../AliasMapProtocolServerSideTranslatorPB.java | 120 +++
 ...yAliasMapProtocolClientSideTranslatorPB.java | 159 +
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  28 ++
 .../hdfs/server/aliasmap/InMemoryAliasMap.java  | 213 
 .../aliasmap/InMemoryAliasMapProtocol.java  |  92 +
 .../aliasmap/InMemoryLevelDBAliasMapServer.java | 141 
 .../hadoop/hdfs/server/common/FileRegion.java   |  89 ++---
 .../common/blockaliasmap/BlockAliasMap.java |  19 +-
 .../impl/InMemoryLevelDBAliasMapClient.java | 156 +
 .../impl/TextFileRegionAliasMap.java|  40 ++-
 .../datanode/FinalizedProvidedReplica.java  |  11 +
 .../hdfs/server/datanode/ReplicaBuilder.java|   7 +-
 .../fsdataset/impl/ProvidedVolumeImpl.java  |  38 +--
 .../hadoop/hdfs/server/namenode/NameNode.java   |  21 ++
 .../src/main/proto/AliasMapProtocol.proto   |  60 
 .../src/main/resources/hdfs-default.xml |  34 ++
 .../server/aliasmap/ITestInMemoryAliasMap.java  | 126 +++
 .../server/aliasmap/TestInMemoryAliasMap.java   |  45 +++
 .../blockmanagement/TestProvidedStorageMap.java |   1 -
 .../impl/TestInMemoryLevelDBAliasMapClient.java | 341 +++
 .../impl/TestLevelDbMockAliasMapClient.java | 116 +++
 .../fsdataset/impl/TestProvidedImpl.java|   9 +-
 hadoop-project/pom.xml  |   8 +-
 hadoop-tools/hadoop-fs2img/pom.xml  |   6 +
 .../hdfs/server/namenode/NullBlockAliasMap.java |   9 +-
 .../TestNameNodeProvidedImplementation.java |  65 +++-
 32 files changed, 2016 insertions(+), 120 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/352f994b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
new file mode 100644
index 000..eee58ba
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocol;
+
+import org.apache.hadoop.fs.Path;
+
+import javax.annotation.Nonnull;
+import java.util.Arrays;
+
+/**
+ * ProvidedStorageLocation is a location in an external storage system
+ * containing the data for a block (~Replica).
+ */
+public class ProvidedStorageLocation {
+  private final Path path;
+  private final long offset;
+  private final long length;
+  private final byte[] nonce;
+
+  public ProvidedStorageLocation(Path path, long offset, long length,
+  byte[] nonce) {
+this.path = path;
+this.offset = offset;
+this.length = length;
+this.nonce = Arrays.copyOf(nonce, nonce.length);
+  }
+
+  public @Nonnull Path 

[01/50] [abbrv] hadoop git commit: HDFS-11902. [READ] Merge BlockFormatProvider and FileRegionProvider. [Forced Update!]

2017-12-18 Thread kkaranasos
Repository: hadoop
Updated Branches:
  refs/heads/YARN-6592 0e66d31e2 -> bf2a8ccc0 (forced update)


http://git-wip-us.apache.org/repos/asf/hadoop/blob/98f5ed5a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
index 8782e71..40d77f7a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
@@ -52,11 +52,12 @@ import org.apache.hadoop.fs.FileSystemTestHelper;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.server.common.FileRegion;
-import org.apache.hadoop.hdfs.server.common.FileRegionProvider;
 import org.apache.hadoop.hdfs.server.common.Storage;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
 import org.apache.hadoop.hdfs.server.datanode.BlockScanner;
 import org.apache.hadoop.hdfs.server.datanode.DNConf;
 import org.apache.hadoop.hdfs.server.datanode.DataNode;
@@ -168,49 +169,66 @@ public class TestProvidedImpl {
   }
 
   /**
-   * A simple FileRegion provider for tests.
+   * A simple FileRegion BlockAliasMap for tests.
*/
-  public static class TestFileRegionProvider
-  extends FileRegionProvider implements Configurable {
+  public static class TestFileRegionBlockAliasMap
+  extends BlockAliasMap<FileRegion> {
 
 private Configuration conf;
 private int minId;
 private int numBlocks;
private Iterator<FileRegion> suppliedIterator;
 
-TestFileRegionProvider() {
+TestFileRegionBlockAliasMap() {
   this(null, MIN_BLK_ID, NUM_PROVIDED_BLKS);
 }
 
-TestFileRegionProvider(Iterator<FileRegion> iterator, int minId,
-int numBlocks) {
+TestFileRegionBlockAliasMap(Iterator<FileRegion> iterator, int minId,
+int numBlocks) {
   this.suppliedIterator = iterator;
   this.minId = minId;
   this.numBlocks = numBlocks;
 }
 
 @Override
-public Iterator<FileRegion> iterator() {
-  if (suppliedIterator == null) {
-return new TestFileRegionIterator(providedBasePath, minId, numBlocks);
-  } else {
-return suppliedIterator;
-  }
-}
+public Reader<FileRegion> getReader(Reader.Options opts)
+throws IOException {
+
+  BlockAliasMap.Reader<FileRegion> reader =
+  new BlockAliasMap.Reader<FileRegion>() {
+@Override
+public Iterator<FileRegion> iterator() {
+  if (suppliedIterator == null) {
+return new TestFileRegionIterator(providedBasePath, minId,
+numBlocks);
+  } else {
+return suppliedIterator;
+  }
+}
 
-@Override
-public void setConf(Configuration conf) {
-  this.conf = conf;
+@Override
+public void close() throws IOException {
+
+}
+
+@Override
+public FileRegion resolve(Block ident) throws IOException {
+  return null;
+}
+  };
+  return reader;
 }
 
 @Override
-public Configuration getConf() {
-  return conf;
+public Writer<FileRegion> getWriter(Writer.Options opts)
+throws IOException {
+  // not implemented
+  return null;
 }
 
 @Override
-public void refresh() {
-  //do nothing!
+public void refresh() throws IOException {
+  // do nothing!
 }
 
 public void setMinBlkId(int minId) {
@@ -359,8 +377,8 @@ public class TestProvidedImpl {
 new ShortCircuitRegistry(conf);
 when(datanode.getShortCircuitRegistry()).thenReturn(shortCircuitRegistry);
 
-conf.setClass(DFSConfigKeys.DFS_PROVIDER_CLASS,
-TestFileRegionProvider.class, FileRegionProvider.class);
+this.conf.setClass(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_CLASS,
+TestFileRegionBlockAliasMap.class, BlockAliasMap.class);
 conf.setClass(DFSConfigKeys.DFS_PROVIDER_DF_CLASS,
 TestProvidedVolumeDF.class, ProvidedVolumeDF.class);
 
@@ -496,12 +514,13 @@ public class TestProvidedImpl {
 conf.setInt(DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_THREADS_KEY, 1);
 for (int i = 0; i < providedVolumes.size(); i++) {
   ProvidedVolumeImpl vol = (ProvidedVolumeImpl) providedVolumes.get(i);
-  TestFileRegionProvider provider = (TestFileRegionProvider)
-  

[04/50] [abbrv] hadoop git commit: HDFS-12584. [READ] Fix errors in image generation tool from latest rebase

2017-12-18 Thread kkaranasos
HDFS-12584. [READ] Fix errors in image generation tool from latest rebase


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/17052c4a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/17052c4a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/17052c4a

Branch: refs/heads/YARN-6592
Commit: 17052c4aff104cb02701bc1e8dc9cd73d1a325fb
Parents: aca023b
Author: Virajith Jalaparti 
Authored: Tue Oct 3 14:44:17 2017 -0700
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:38 2017 -0800

--
 hadoop-tools/hadoop-fs2img/pom.xml  |  4 +--
 .../hdfs/server/namenode/RandomTreeWalk.java| 28 +---
 2 files changed, 14 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/17052c4a/hadoop-tools/hadoop-fs2img/pom.xml
--
diff --git a/hadoop-tools/hadoop-fs2img/pom.xml 
b/hadoop-tools/hadoop-fs2img/pom.xml
index 36096b7..e1411f8 100644
--- a/hadoop-tools/hadoop-fs2img/pom.xml
+++ b/hadoop-tools/hadoop-fs2img/pom.xml
@@ -17,12 +17,12 @@
   
 org.apache.hadoop
 hadoop-project
-3.0.0-alpha3-SNAPSHOT
+3.1.0-SNAPSHOT
 ../../hadoop-project
   
   org.apache.hadoop
   hadoop-fs2img
-  3.0.0-alpha3-SNAPSHOT
+  3.1.0-SNAPSHOT
   fs2img
   fs2img
   jar

http://git-wip-us.apache.org/repos/asf/hadoop/blob/17052c4a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
--
diff --git 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
index c82c489..d002e4a 100644
--- 
a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
+++ 
b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
@@ -113,22 +113,18 @@ public class RandomTreeWalk extends TreeWalk {
 final long len = isDir ? 0 : r.nextInt(Integer.MAX_VALUE);
 final int nblocks = 0 == len ? 0 : (((int)((len - 1) / blocksize)) + 1);
 BlockLocation[] blocks = genBlocks(r, nblocks, blocksize, len);
-try {
-  return new LocatedFileStatus(new FileStatus(
-  len,  /* long length, */
-  isDir,/* boolean isdir,   */
-  1,/* int block_replication,   */
-  blocksize,/* long blocksize,  */
-  0L,   /* long modification_time,  */
-  0L,   /* long access_time,*/
-  null, /* FsPermission permission, */
-  "hadoop", /* String owner,*/
-  "hadoop", /* String group,*/
-  name),/* Path path*/
-  blocks);
-} catch (IOException e) {
-  throw new RuntimeException(e);
-}
+return new LocatedFileStatus(new FileStatus(
+len,  /* long length, */
+isDir,/* boolean isdir,   */
+1,/* int block_replication,   */
+blocksize,/* long blocksize,  */
+0L,   /* long modification_time,  */
+0L,   /* long access_time,*/
+null, /* FsPermission permission, */
+"hadoop", /* String owner,*/
+"hadoop", /* String group,*/
+name),/* Path path*/
+blocks);
   }
 
   BlockLocation[] genBlocks(Random r, int nblocks, int blocksize, long len) {





[31/50] [abbrv] hadoop git commit: HDFS-12874. Documentation for provided storage. Contributed by Virajith Jalaparti

2017-12-18 Thread kkaranasos
HDFS-12874. Documentation for provided storage. Contributed by Virajith 
Jalaparti


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2298f2d7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2298f2d7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2298f2d7

Branch: refs/heads/YARN-6592
Commit: 2298f2d76b2cafd84c8f7421ae792336d6f2f37a
Parents: 962b5e7
Author: Chris Douglas 
Authored: Thu Dec 7 17:41:00 2017 -0800
Committer: Chris Douglas 
Committed: Fri Dec 15 17:51:41 2017 -0800

--
 .../src/main/resources/hdfs-default.xml |   2 +-
 .../src/site/markdown/HdfsProvidedStorage.md| 247 +++
 2 files changed, 248 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2298f2d7/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 3dc583c..7b5ccbc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -4643,7 +4643,7 @@
 
   The class that is used to specify the input format of the blocks on
   provided storages. The default is
-  org.apache.hadoop.hdfs.server.common.TextFileRegionAliasMap which uses
+  
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap 
which uses
   file regions to describe blocks. The file regions are specified as a
   delimited text file. Each file region is a 6-tuple containing the
   block id, remote file path, offset into file, length of block, the

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2298f2d7/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
new file mode 100644
index 000..7455044
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
@@ -0,0 +1,247 @@
+
+
+HDFS Provided Storage
+=
+
+Provided storage allows data *stored outside HDFS* to be mapped to and 
addressed
+from HDFS. It builds on [heterogeneous storage](./ArchivalStorage.html) by
+introducing a new storage type, `PROVIDED`, to the set of media in a datanode.
+Clients accessing data in
+`PROVIDED` storages can cache replicas in local media, enforce HDFS invariants
+(e.g., security, quotas), and address more data than the cluster could persist
+in the storage attached to DataNodes. This architecture is particularly useful
+in scenarios where HDFS clusters are ephemeral (e.g., cloud scenarios), and/or
+need to read data that lives in other storage systems (e.g., blob stores).
+
+Provided storage is an experimental feature in HDFS.
+
+
+
+Introduction
+
+
+As of this writing, support for mounting external storage as `PROVIDED` blocks
+is limited to creating a *read-only image* of a remote namespace that 
implements the
+`org.apache.hadoop.fs.FileSystem` interface, and starting a NameNode
+to serve the image. Specifically, reads from a snapshot of a remote namespace 
are
+supported. Adding a remote namespace to an existing/running namenode, 
refreshing the
+remote snapshot, unmounting, and writes are not available in this release. One
+can use [ViewFs](./ViewFs.html) and [RBF](HDFSRouterFederation.html) to
+integrate namespaces with `PROVIDED` storage into an existing deployment.
+
+Creating HDFS Clusters with `PROVIDED` Storage
+--
+
+One can create snapshots of the remote namespace using the `fs2img` tool. Given
+a path to a remote `FileSystem`, the tool creates an _image_ mirroring the
+namespace and an _alias map_ that maps blockIDs in the generated image to a
+`FileRegion` in the remote filesystem. A `FileRegion` contains sufficient 
information to
+address a fixed sequence of bytes in the remote `FileSystem` (e.g., file, 
offset, length)
+and a nonce to verify that the region is unchanged since the image was 
generated.
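
To make the alias-map idea concrete, here is a minimal sketch of the
information a FileRegion must carry, per the description above. The field
names are illustrative assumptions, not the actual
org.apache.hadoop.hdfs.server.common.FileRegion API:

    // Illustrative sketch; field names are assumptions from the prose above.
    public class FileRegionSketch {
      private final long blockId;      // block ID in the generated image
      private final String remotePath; // file in the remote FileSystem
      private final long offset;       // offset of the block's bytes
      private final long length;       // number of bytes in the block
      private final byte[] nonce;      // detects changes since image creation

      public FileRegionSketch(long blockId, String remotePath, long offset,
          long length, byte[] nonce) {
        this.blockId = blockId;
        this.remotePath = remotePath;
        this.offset = offset;
        this.length = length;
        this.nonce = nonce;
      }
    }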
+
+After the NameNode image and alias map are created, the NameNode and DataNodes
+must be configured to consistently reference this address space. When a 
DataNode
+registers with an attached, `PROVIDED` storage, the NameNode considers all the
+external blocks to be addressable through that DataNode, and may begin to 
direct
+clients to it. Symmetrically, the DataNode must be able to map every block in

[42/50] [abbrv] hadoop git commit: HADOOP-15042. Azure PageBlobInputStream.skip() can return negative value when numberOfPagesRemaining is 0. Contributed by Rajesh Balamohan

2017-11-28 Thread kkaranasos
HADOOP-15042. Azure PageBlobInputStream.skip() can return negative value when 
numberOfPagesRemaining is 0.
Contributed by Rajesh Balamohan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0ea182d0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0ea182d0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0ea182d0

Branch: refs/heads/YARN-6592
Commit: 0ea182d0faa35c726dcb37249d48786bfc8ca04c
Parents: 94bed50
Author: Steve Loughran 
Authored: Tue Nov 28 11:52:59 2017 +
Committer: Steve Loughran 
Committed: Tue Nov 28 11:52:59 2017 +

--
 .../java/org/apache/hadoop/fs/azure/PageBlobInputStream.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0ea182d0/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PageBlobInputStream.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PageBlobInputStream.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PageBlobInputStream.java
index 097201b..aaac490 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PageBlobInputStream.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PageBlobInputStream.java
@@ -343,9 +343,9 @@ final class PageBlobInputStream extends InputStream {
 
 // Skip over whole pages as necessary without retrieving them from the
 // server.
-long pagesToSkipOver = Math.min(
+long pagesToSkipOver = Math.max(0, Math.min(
 n / PAGE_DATA_SIZE,
-numberOfPagesRemaining - 1);
+numberOfPagesRemaining - 1));
 numberOfPagesRemaining -= pagesToSkipOver;
 currentOffsetInBlob += pagesToSkipOver * PAGE_SIZE;
 skipped += pagesToSkipOver * PAGE_DATA_SIZE;
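
A quick worked check of the fix: with numberOfPagesRemaining == 0, the old
expression Math.min(n / PAGE_DATA_SIZE, numberOfPagesRemaining - 1)
evaluates to -1, which drives the skip count negative; the added
Math.max(0, ...) clamps it. A standalone sketch, with PAGE_DATA_SIZE as an
illustrative constant rather than the real Azure page layout:

    // Standalone sketch of the clamp above; PAGE_DATA_SIZE is illustrative.
    public class SkipClampDemo {
      static final long PAGE_DATA_SIZE = 512;

      static long pagesToSkipOver(long n, long numberOfPagesRemaining) {
        return Math.max(0, Math.min(n / PAGE_DATA_SIZE,
            numberOfPagesRemaining - 1));
      }

      public static void main(String[] args) {
        System.out.println(pagesToSkipOver(4096, 0)); // 0 with the fix, was -1
        System.out.println(pagesToSkipOver(4096, 5)); // 4 whole pages skipped
      }
    }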





[27/50] [abbrv] hadoop git commit: HADOOP-13786 Add S3A committer for zero-rename commits to S3 endpoints. Contributed by Steve Loughran and Ryan Blue.

2017-11-28 Thread kkaranasos
HADOOP-13786 Add S3A committer for zero-rename commits to S3 endpoints.
Contributed by Steve Loughran and Ryan Blue.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/de8b6ca5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/de8b6ca5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/de8b6ca5

Branch: refs/heads/YARN-6592
Commit: de8b6ca5ef8614de6d6277b7617e27c788b0555c
Parents: 782ba3b
Author: Steve Loughran 
Authored: Wed Nov 22 15:28:12 2017 +
Committer: Steve Loughran 
Committed: Wed Nov 22 15:28:12 2017 +

--
 .../dev-support/findbugsExcludeFile.xml |7 +
 .../apache/hadoop/fs/FSDataOutputStream.java|9 +
 .../apache/hadoop/fs/PathExistsException.java   |4 +-
 .../org/apache/hadoop/fs/StorageStatistics.java |5 +
 .../apache/hadoop/util/JsonSerialization.java   |  299 +++
 .../src/main/resources/core-default.xml |  117 +-
 .../hadoop/fs/contract/ContractTestUtils.java   |   51 +-
 .../apache/hadoop/test/GenericTestUtils.java|   29 +-
 .../org/apache/hadoop/test/HadoopTestBase.java  |   51 +-
 .../org/apache/hadoop/test/LambdaTestUtils.java |  144 +-
 .../hadoop/util/TestJsonSerialization.java  |  185 ++
 .../mapreduce/TestMapreduceConfigFields.java|   27 +-
 .../lib/output/BindingPathOutputCommitter.java  |  184 ++
 .../lib/output/FileOutputCommitter.java |   12 +-
 .../lib/output/FileOutputCommitterFactory.java  |   38 +
 .../mapreduce/lib/output/FileOutputFormat.java  |   10 +-
 .../lib/output/NamedCommitterFactory.java   |   79 +
 .../lib/output/PathOutputCommitter.java |   17 +
 .../lib/output/PathOutputCommitterFactory.java  |  204 ++
 .../src/main/resources/mapred-default.xml   |   22 +
 .../lib/output/TestPathOutputCommitter.java |   24 +-
 .../output/TestPathOutputCommitterFactory.java  |  495 +
 hadoop-tools/hadoop-aws/pom.xml |   46 +-
 .../hadoop/fs/s3a/AWSBadRequestException.java   |   42 +
 .../hadoop/fs/s3a/AWSClientIOException.java |3 +-
 .../hadoop/fs/s3a/AWSNoResponseException.java   |   31 +
 .../hadoop/fs/s3a/AWSRedirectException.java |   38 +
 .../fs/s3a/AWSServiceThrottledException.java|   42 +
 .../hadoop/fs/s3a/AWSStatus500Exception.java|   37 +
 .../s3a/BlockingThreadPoolExecutorService.java  |2 +-
 .../org/apache/hadoop/fs/s3a/Constants.java |   72 +-
 .../fs/s3a/InconsistentAmazonS3Client.java  |  232 ++-
 .../java/org/apache/hadoop/fs/s3a/Invoker.java  |  485 +
 .../java/org/apache/hadoop/fs/s3a/Listing.java  |   26 +-
 .../java/org/apache/hadoop/fs/s3a/Retries.java  |   92 +
 .../hadoop/fs/s3a/S3ABlockOutputStream.java |  307 +--
 .../org/apache/hadoop/fs/s3a/S3ADataBlocks.java |2 +-
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java |  940 +
 .../apache/hadoop/fs/s3a/S3AInputStream.java|   56 +-
 .../hadoop/fs/s3a/S3AInstrumentation.java   |  231 ++-
 .../apache/hadoop/fs/s3a/S3ARetryPolicy.java|  246 +++
 .../hadoop/fs/s3a/S3AStorageStatistics.java |   12 +-
 .../java/org/apache/hadoop/fs/s3a/S3AUtils.java |  324 ++-
 .../org/apache/hadoop/fs/s3a/S3ListRequest.java |   11 +
 .../hadoop/fs/s3a/S3ObjectAttributes.java   |   10 +-
 .../org/apache/hadoop/fs/s3a/Statistic.java |   56 +-
 .../hadoop/fs/s3a/WriteOperationHelper.java |  474 +
 .../fs/s3a/commit/AbstractS3ACommitter.java |  756 +++
 .../s3a/commit/AbstractS3ACommitterFactory.java |   90 +
 .../hadoop/fs/s3a/commit/CommitConstants.java   |  240 +++
 .../hadoop/fs/s3a/commit/CommitOperations.java  |  596 ++
 .../hadoop/fs/s3a/commit/CommitUtils.java   |  129 ++
 .../hadoop/fs/s3a/commit/CommitUtilsWithMR.java |  192 ++
 .../apache/hadoop/fs/s3a/commit/Duration.java   |   60 +
 .../hadoop/fs/s3a/commit/DurationInfo.java  |   59 +
 .../s3a/commit/InternalCommitterConstants.java  |  100 +
 .../hadoop/fs/s3a/commit/LocalTempDir.java  |   80 +
 .../fs/s3a/commit/MagicCommitIntegration.java   |  182 ++
 .../hadoop/fs/s3a/commit/MagicCommitPaths.java  |  229 ++
 .../fs/s3a/commit/PathCommitException.java  |   43 +
 .../apache/hadoop/fs/s3a/commit/PutTracker.java |  100 +
 .../fs/s3a/commit/S3ACommitterFactory.java  |  129 ++
 .../org/apache/hadoop/fs/s3a/commit/Tasks.java  |  410 
 .../hadoop/fs/s3a/commit/ValidationFailure.java |   53 +
 .../hadoop/fs/s3a/commit/files/PendingSet.java  |  192 ++
 .../s3a/commit/files/PersistentCommitData.java  |   69 +
 .../s3a/commit/files/SinglePendingCommit.java   |  432 
 .../hadoop/fs/s3a/commit/files/SuccessData.java |  322 +++
 .../fs/s3a/commit/files/package-info.java   |   45 +
 .../fs/s3a/commit/magic/MagicCommitTracker.java |  161 ++
 .../s3a/commit/magic/MagicS3GuardCommitter.java |  288 +++
 .../magic/MagicS3GuardCommitterFactory.java |   47 +
 

[43/50] [abbrv] hadoop git commit: YARN-7499. Layout changes to Application details page in new YARN UI. Contributed by Vasudevan Skm.

2017-11-28 Thread kkaranasos
YARN-7499. Layout changes to Application details page in new YARN UI. 
Contributed by Vasudevan Skm.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/641ba5c7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/641ba5c7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/641ba5c7

Branch: refs/heads/YARN-6592
Commit: 641ba5c7a1471f8d799b1f919cd41daffb9da84e
Parents: 0ea182d
Author: Sunil G 
Authored: Tue Nov 28 18:37:11 2017 +0530
Committer: Sunil G 
Committed: Tue Nov 28 18:37:11 2017 +0530

--
 .../webapp/app/controllers/app-table-columns.js |   4 +-
 .../src/main/webapp/app/controllers/yarn-app.js |  69 -
 .../webapp/app/controllers/yarn-flowrun/info.js |   2 +-
 .../src/main/webapp/app/models/yarn-app.js  |   6 +-
 .../src/main/webapp/app/router.js   |  12 +-
 .../src/main/webapp/app/routes/yarn-app.js  |  23 +-
 .../main/webapp/app/routes/yarn-app/attempts.js |  15 +-
 .../main/webapp/app/routes/yarn-app/charts.js   |  18 +-
 .../webapp/app/routes/yarn-app/components.js|  16 +-
 .../main/webapp/app/routes/yarn-app/configs.js  |  16 +-
 .../src/main/webapp/app/routes/yarn-app/info.js |  17 +-
 .../src/main/webapp/app/serializers/yarn-app.js |   2 +-
 .../src/main/webapp/app/styles/app.scss |  24 ++
 .../src/main/webapp/app/styles/colors.scss  |   2 +
 .../src/main/webapp/app/styles/layout.scss  |  42 +++
 .../src/main/webapp/app/styles/variables.scss   |   4 +
 .../src/main/webapp/app/styles/yarn-app.scss|  35 +++
 .../app/templates/components/timeline-view.hbs  |   2 +-
 .../src/main/webapp/app/templates/yarn-app.hbs  | 149 +++---
 .../webapp/app/templates/yarn-app/attempts.hbs  |   2 +-
 .../webapp/app/templates/yarn-app/charts.hbs|  46 ++-
 .../app/templates/yarn-app/components.hbs   |   6 +-
 .../webapp/app/templates/yarn-app/configs.hbs   |  58 ++--
 .../main/webapp/app/templates/yarn-app/info.hbs | 281 +--
 .../webapp/app/templates/yarn-app/loading.hbs   |   2 +-
 .../main/webapp/app/templates/yarn-services.hbs |   2 +-
 26 files changed, 518 insertions(+), 337 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/641ba5c7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
index 05bfad45..a87acc1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
@@ -39,7 +39,7 @@ export default Ember.Controller.extend({
   getCellContent: function(row) {
 return {
   displayText: row.id,
-  href: `#/yarn-app/${row.id}/info`
+  href: `#/yarn-app/${row.id}/attempts`
 };
   }
   }, {
@@ -120,7 +120,7 @@ export default Ember.Controller.extend({
   getCellContent: function(row) {
 return {
   displayText: row.get('appName'),
-  href: `#/yarn-app/${row.id}/info?service=${row.get('appName')}`
+  href: `#/yarn-app/${row.id}/attempts?service=${row.get('appName')}`
 };
   }
 }, {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/641ba5c7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
index c40697f..b84f328 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
@@ -32,6 +32,65 @@ export default Ember.Controller.extend({
 text: 'App'
   }],
 
+  actions: {
+showStopServiceConfirm() {
+  this.set('actionResponse', null);
+  Ember.$("#stopServiceConfirmDialog").modal('show');
+},
+
+stopService() {
+  var self = this;
+  Ember.$("#stopServiceConfirmDialog").modal('hide');
+  var adapter = this.store.adapterFor('yarn-servicedef');
+  self.set('isLoading', true);
+  adapter.stopService(this.model.serviceName).then(function () {
+self.set('actionResponse', { msg: 'Service stopped successfully. Auto 
refreshing in 5 seconds.', type: 'success' });
+ 

[09/50] [abbrv] hadoop git commit: HDFS-12813. RequestHedgingProxyProvider can hide Exception thrown from the Namenode for proxy size of 1. Contributed by Mukul Kumar Singh

2017-11-28 Thread kkaranasos
HDFS-12813.  RequestHedgingProxyProvider can hide Exception thrown from the 
Namenode for proxy size of 1.  Contributed by Mukul Kumar Singh


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/659e85e3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/659e85e3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/659e85e3

Branch: refs/heads/YARN-6592
Commit: 659e85e304d070f9908a96cf6a0e1cbafde6a434
Parents: 60fc2a1
Author: Tsz-Wo Nicholas Sze 
Authored: Mon Nov 20 17:09:19 2017 -0800
Committer: Tsz-Wo Nicholas Sze 
Committed: Mon Nov 20 17:09:19 2017 -0800

--
 .../ha/RequestHedgingProxyProvider.java | 81 ++--
 .../ha/TestRequestHedgingProxyProvider.java | 58 ++
 2 files changed, 114 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/659e85e3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
index b94e94d..08edfe2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdfs.server.namenode.ha;
 
 import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.InvocationTargetException;
 import java.lang.reflect.Method;
 import java.lang.reflect.Proxy;
 import java.net.URI;
@@ -29,6 +30,7 @@ import java.util.concurrent.ExecutorCompletionService;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
+import java.util.concurrent.ExecutionException;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.ipc.RemoteException;
@@ -87,9 +89,19 @@ public class RequestHedgingProxyProvider extends
 targetProxies.remove(toIgnore);
 if (targetProxies.size() == 1) {
  ProxyInfo<T> proxyInfo = targetProxies.values().iterator().next();
-  Object retVal = method.invoke(proxyInfo.proxy, args);
-  successfulProxy = proxyInfo;
-  return retVal;
+  try {
+currentUsedProxy = proxyInfo;
+Object retVal = method.invoke(proxyInfo.proxy, args);
+LOG.debug("Invocation successful on [{}]",
+currentUsedProxy.proxyInfo);
+return retVal;
+  } catch (InvocationTargetException ex) {
+Exception unwrappedException = unwrapInvocationTargetException(ex);
+logProxyException(unwrappedException, currentUsedProxy.proxyInfo);
+LOG.trace("Unsuccessful invocation on [{}]",
+currentUsedProxy.proxyInfo);
+throw unwrappedException;
+  }
 }
 executor = Executors.newFixedThreadPool(proxies.size());
 completionService = new ExecutorCompletionService<>(executor);
@@ -112,15 +124,16 @@ public class RequestHedgingProxyProvider extends
  Future<Object> callResultFuture = completionService.take();
   Object retVal;
   try {
+currentUsedProxy = proxyMap.get(callResultFuture);
 retVal = callResultFuture.get();
-successfulProxy = proxyMap.get(callResultFuture);
 LOG.debug("Invocation successful on [{}]",
-successfulProxy.proxyInfo);
+currentUsedProxy.proxyInfo);
 return retVal;
-  } catch (Exception ex) {
+  } catch (ExecutionException ex) {
+Exception unwrappedException = unwrapExecutionException(ex);
ProxyInfo<T> tProxyInfo = proxyMap.get(callResultFuture);
-logProxyException(ex, tProxyInfo.proxyInfo);
-badResults.put(tProxyInfo.proxyInfo, unwrapException(ex));
+logProxyException(unwrappedException, tProxyInfo.proxyInfo);
+badResults.put(tProxyInfo.proxyInfo, unwrappedException);
 LOG.trace("Unsuccessful invocation on [{}]", tProxyInfo.proxyInfo);
 numAttempts--;
   }
@@ -143,7 +156,7 @@ public class RequestHedgingProxyProvider extends
   }
 
 
-  private volatile ProxyInfo<T> successfulProxy = null;
+  private volatile ProxyInfo<T> currentUsedProxy = null;
   private volatile String toIgnore = null;
 
   public 
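
The hunk is cut off in this digest, but the pattern it applies is the
standard reflection idiom: a failure in the invoked proxy surfaces as an
InvocationTargetException (direct call) or ExecutionException (through an
executor) and must be unwrapped before logging or rethrowing, otherwise the
caller sees the wrapper instead of the NameNode's actual exception. A
minimal sketch of the idiom, independent of the Hadoop classes:

    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;

    // Sketch of the unwrap idiom used by the patch, not the patch itself.
    final class UnwrapSketch {
      static Object invokeUnwrapped(Method method, Object target,
          Object... args) throws Exception {
        try {
          return method.invoke(target, args);
        } catch (InvocationTargetException e) {
          Throwable cause = e.getCause(); // the proxy's real failure
          throw (cause instanceof Exception) ? (Exception) cause : e;
        }
      }
    }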

[34/50] [abbrv] hadoop git commit: YARN-6483. Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM. (Juan Rodriguez Hortala via asuresh)

2017-11-28 Thread kkaranasos
YARN-6483. Add nodes transitioning to DECOMMISSIONING state to the list of 
updated nodes returned to the AM. (Juan Rodriguez Hortala via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b46ca7e7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b46ca7e7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b46ca7e7

Branch: refs/heads/YARN-6592
Commit: b46ca7e73b8bac3fdbff0b13afe009308078acf2
Parents: aab4395
Author: Arun Suresh 
Authored: Wed Nov 22 19:16:44 2017 -0800
Committer: Arun Suresh 
Committed: Wed Nov 22 19:18:30 2017 -0800

--
 .../hadoop/yarn/api/records/NodeReport.java |  47 ++--
 .../hadoop/yarn/api/records/NodeUpdateType.java |  29 +
 .../src/main/proto/yarn_protos.proto|   8 ++
 .../hadoop/yarn/client/ProtocolHATestBase.java  |  14 +--
 .../hadoop/yarn/client/cli/TestYarnCLI.java |   2 +-
 .../api/records/impl/pb/NodeReportPBImpl.java   |  50 +++-
 .../yarn/api/records/impl/pb/ProtoUtils.java|  12 ++
 .../hadoop/yarn/server/utils/BuilderUtils.java  |  14 ++-
 .../server/resourcemanager/ClientRMService.java |   5 +-
 .../DecommissioningNodesWatcher.java|  38 +-
 .../resourcemanager/DefaultAMSProcessor.java|  12 +-
 .../resourcemanager/NodesListManager.java   |  78 +
 .../NodesListManagerEventType.java  |   3 +-
 .../server/resourcemanager/rmapp/RMApp.java |  10 +-
 .../server/resourcemanager/rmapp/RMAppImpl.java |  11 +-
 .../rmapp/RMAppNodeUpdateEvent.java |   9 +-
 .../server/resourcemanager/rmnode/RMNode.java   |   2 +-
 .../resourcemanager/rmnode/RMNodeImpl.java  |   5 +
 .../yarn/server/resourcemanager/MockRM.java |  15 +++
 .../resourcemanager/TestClientRMService.java|  50 
 .../TestDecommissioningNodesWatcher.java|   4 +-
 .../resourcemanager/TestRMNodeTransitions.java  |  13 ++-
 .../TestResourceTrackerService.java | 116 ++-
 .../applicationsmanager/MockAsm.java|   4 +-
 .../TestAMRMRPCNodeUpdates.java |  51 
 .../server/resourcemanager/rmapp/MockRMApp.java |   4 +-
 26 files changed, 495 insertions(+), 111 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b46ca7e7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeReport.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeReport.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeReport.java
index 885a3b4..3a80641 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeReport.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeReport.java
@@ -53,7 +53,8 @@ public abstract class NodeReport {
   String httpAddress, String rackName, Resource used, Resource capability,
   int numContainers, String healthReport, long lastHealthReportTime) {
 return newInstance(nodeId, nodeState, httpAddress, rackName, used,
-capability, numContainers, healthReport, lastHealthReportTime, null);
+capability, numContainers, healthReport, lastHealthReportTime,
+null, null, null);
   }
 
   @Private
@@ -61,7 +62,8 @@ public abstract class NodeReport {
   public static NodeReport newInstance(NodeId nodeId, NodeState nodeState,
   String httpAddress, String rackName, Resource used, Resource capability,
   int numContainers, String healthReport, long lastHealthReportTime,
-  Set<String> nodeLabels) {
+  Set<String> nodeLabels, Integer decommissioningTimeout,
+  NodeUpdateType nodeUpdateType) {
 NodeReport nodeReport = Records.newRecord(NodeReport.class);
 nodeReport.setNodeId(nodeId);
 nodeReport.setNodeState(nodeState);
@@ -73,6 +75,8 @@ public abstract class NodeReport {
 nodeReport.setHealthReport(healthReport);
 nodeReport.setLastHealthReportTime(lastHealthReportTime);
 nodeReport.setNodeLabels(nodeLabels);
+nodeReport.setDecommissioningTimeout(decommissioningTimeout);
+nodeReport.setNodeUpdateType(nodeUpdateType);
 return nodeReport;
   }
 
@@ -186,8 +190,8 @@ public abstract class NodeReport {
   public abstract void setLastHealthReportTime(long lastHealthReport);
   
   /**
-   * Get labels of this node
-   * @return labels of this node
+   * Get labels of this node.
+   * @return labels of this node.
*/
   @Public
   @Stable
@@ -198,8 +202,8 @@ public abstract class NodeReport {
  public abstract void setNodeLabels(Set<String> nodeLabels);
 
   /**
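
As a hedged illustration of how an ApplicationMaster might consume the new
fields (the getters are assumed to mirror the setters shown in this hunk):

    import java.util.List;
    import org.apache.hadoop.yarn.api.records.NodeReport;
    import org.apache.hadoop.yarn.api.records.NodeState;

    // Sketch: react to nodes that are draining ahead of decommission.
    final class UpdatedNodesSketch {
      static void handleUpdatedNodes(List<NodeReport> updatedNodes) {
        for (NodeReport report : updatedNodes) {
          if (report.getNodeState() == NodeState.DECOMMISSIONING) {
            Integer timeout = report.getDecommissioningTimeout(); // may be null
            System.out.println("Node " + report.getNodeId() + " draining"
                + (timeout == null ? "" : " in " + timeout + "s")
                + ", update type: " + report.getNodeUpdateType());
          }
        }
      }
    }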

[24/50] [abbrv] hadoop git commit: HADOOP-13786 Add S3A committer for zero-rename commits to S3 endpoints. Contributed by Steve Loughran and Ryan Blue.

2017-11-28 Thread kkaranasos
http://git-wip-us.apache.org/repos/asf/hadoop/blob/de8b6ca5/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java
index da1fc5a..ef5a434 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java
@@ -24,7 +24,12 @@ import org.slf4j.LoggerFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.FileSystem.Statistics;
+import org.apache.hadoop.metrics2.AbstractMetric;
 import org.apache.hadoop.metrics2.MetricStringBuilder;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsRecordBuilder;
+import org.apache.hadoop.metrics2.MetricsTag;
 import org.apache.hadoop.metrics2.annotation.Metrics;
 import org.apache.hadoop.metrics2.lib.Interns;
 import org.apache.hadoop.metrics2.lib.MetricsRegistry;
@@ -122,8 +127,23 @@ public class S3AInstrumentation {
   STREAM_WRITE_BLOCK_UPLOADS_ABORTED,
   STREAM_WRITE_TOTAL_TIME,
   STREAM_WRITE_TOTAL_DATA,
+  COMMITTER_COMMITS_CREATED,
+  COMMITTER_COMMITS_COMPLETED,
+  COMMITTER_JOBS_SUCCEEDED,
+  COMMITTER_JOBS_FAILED,
+  COMMITTER_TASKS_SUCCEEDED,
+  COMMITTER_TASKS_FAILED,
+  COMMITTER_BYTES_COMMITTED,
+  COMMITTER_BYTES_UPLOADED,
+  COMMITTER_COMMITS_FAILED,
+  COMMITTER_COMMITS_ABORTED,
+  COMMITTER_COMMITS_REVERTED,
+  COMMITTER_MAGIC_FILES_CREATED,
   S3GUARD_METADATASTORE_PUT_PATH_REQUEST,
-  S3GUARD_METADATASTORE_INITIALIZATION
+  S3GUARD_METADATASTORE_INITIALIZATION,
+  S3GUARD_METADATASTORE_RETRY,
+  S3GUARD_METADATASTORE_THROTTLED,
+  STORE_IO_THROTTLED
   };
 
 
@@ -179,8 +199,11 @@ public class S3AInstrumentation {
   gauge(statistic.getSymbol(), statistic.getDescription());
 }
 //todo need a config for the quantiles interval?
+int interval = 1;
 quantiles(S3GUARD_METADATASTORE_PUT_PATH_LATENCY,
-"ops", "latency", 1);
+"ops", "latency", interval);
+quantiles(S3GUARD_METADATASTORE_THROTTLE_RATE,
+"events", "frequency (Hz)", interval);
   }
 
   /**
@@ -372,7 +395,7 @@ public class S3AInstrumentation {
   }
 
   /**
-   * Indicate that S3A deleted one or more file.s
+   * Indicate that S3A deleted one or more files.
* @param count number of files.
*/
   public void fileDeleted(int count) {
@@ -506,6 +529,14 @@ public class S3AInstrumentation {
   }
 
   /**
+   * Create a new instance of the committer statistics.
+   * @return a new committer statistics instance
+   */
+  CommitterStatistics newCommitterStatistics() {
+return new CommitterStatistics();
+  }
+
+  /**
* Merge in the statistics of a single input stream into
* the filesystem-wide statistics.
* @param statistics stream statistics
@@ -584,9 +615,12 @@ public class S3AInstrumentation {
 
 /**
  * The inner stream was opened.
+ * @return the previous count
  */
-public void streamOpened() {
+public long streamOpened() {
+  long count = openOperations;
   openOperations++;
+  return count;
 }
 
 /**
@@ -810,10 +844,13 @@ public class S3AInstrumentation {
 }
 
 /**
- * Note an exception in a multipart complete.
+ * Note exception in a multipart complete.
+ * @param count count of exceptions
  */
-void exceptionInMultipartComplete() {
-  exceptionsInMultipartFinalize.incrementAndGet();
+void exceptionInMultipartComplete(int count) {
+  if (count > 0) {
+exceptionsInMultipartFinalize.addAndGet(count);
+  }
 }
 
 /**
@@ -832,6 +869,15 @@ public class S3AInstrumentation {
 }
 
 /**
+ * Data has been uploaded to be committed in a subsequent operation;
+ * to be called at the end of the write.
+ * @param size size in bytes
+ */
+public void commitUploaded(long size) {
+  incrementCounter(COMMITTER_BYTES_UPLOADED, size);
+}
+
+/**
  * Output stream has closed.
  * Trigger merge in of all statistics not updated during operation.
  */
@@ -918,5 +964,176 @@ public class S3AInstrumentation {
 public void storeClosed() {
 
 }
+
+/**
+ * Throttled request.
+ */
+public void throttled() {
+  incrementCounter(S3GUARD_METADATASTORE_THROTTLED, 1);
+  addValueToQuantiles(S3GUARD_METADATASTORE_THROTTLE_RATE, 1);
+}
+
+/**
+ * S3Guard is retrying after a (retryable) failure.
+ */
+public void retrying() {
+  

[04/50] [abbrv] hadoop git commit: YARN-7529. TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails intermittently. Contributed by Chandni Singh

2017-11-28 Thread kkaranasos
YARN-7529. TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails 
intermittently. Contributed by Chandni Singh


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6f9d7a14
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6f9d7a14
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6f9d7a14

Branch: refs/heads/YARN-6592
Commit: 6f9d7a146d5940a9e8a7913c19b43b265d6bfa32
Parents: 6903cf0
Author: Billie Rinaldi 
Authored: Mon Nov 20 07:37:04 2017 -0800
Committer: Billie Rinaldi 
Committed: Mon Nov 20 07:37:04 2017 -0800

--
 .../yarn/service/TestYarnNativeServices.java| 42 +---
 1 file changed, 18 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6f9d7a14/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
index f98d90a..1c517d9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
@@ -176,7 +176,8 @@ public class TestYarnNativeServices extends 
ServiceTestUtils {
 ServiceClient client = createClient();
 Service exampleApp = createExampleApplication();
 client.actionCreate(exampleApp);
-waitForAllCompToBeReady(client, exampleApp);
+Multimap<String, ContainerId> containersBeforeFailure =
+waitForAllCompToBeReady(client, exampleApp);
 
 LOG.info("Restart the resource manager");
 getYarnCluster().restartResourceManager(
@@ -191,9 +192,6 @@ public class TestYarnNativeServices extends 
ServiceTestUtils {
 ApplicationAttemptId applicationAttemptId = client.getYarnClient()
 .getApplicationReport(exampleAppId).getCurrentApplicationAttemptId();
 
-Multimap<String, ContainerId> containersBeforeFailure = getContainersForAllComp(
-client, exampleApp);
-
 LOG.info("Fail the application attempt {}", applicationAttemptId);
 client.getYarnClient().failApplicationAttempt(applicationAttemptId);
 //wait until attempt 2 is running
@@ -208,7 +206,7 @@ public class TestYarnNativeServices extends 
ServiceTestUtils {
   }
 }, 2000, 20);
 
-Multimap<String, ContainerId> containersAfterFailure = getContainersForAllComp(
+Multimap<String, ContainerId> containersAfterFailure = waitForAllCompToBeReady(
 client, exampleApp);
 Assert.assertEquals("component container affected by restart",
 containersBeforeFailure, containersAfterFailure);
@@ -318,14 +316,26 @@ public class TestYarnNativeServices extends 
ServiceTestUtils {
 }, 2000, 20);
   }
 
-  // wait until all the containers for all components become ready state
-  private void waitForAllCompToBeReady(ServiceClient client,
+  /**
+   * Wait until all the containers for all components become ready state.
+   *
+   * @param client
+   * @param exampleApp
+   * @return all ready containers of a service.
+   * @throws TimeoutException
+   * @throws InterruptedException
+   */
+  private Multimap<String, ContainerId> waitForAllCompToBeReady(ServiceClient
client,
   Service exampleApp) throws TimeoutException, InterruptedException {
 int expectedTotalContainers = countTotalContainers(exampleApp);
+
+Multimap<String, ContainerId> allContainers = HashMultimap.create();
+
 GenericTestUtils.waitFor(() -> {
   try {
 Service retrievedApp = client.getStatus(exampleApp.getName());
 int totalReadyContainers = 0;
+allContainers.clear();
 LOG.info("Num Components " + retrievedApp.getComponents().size());
 for (Component component : retrievedApp.getComponents()) {
   LOG.info("looking for  " + component.getName());
@@ -339,6 +349,7 @@ public class TestYarnNativeServices extends 
ServiceTestUtils {
 + component.getName());
 if (container.getState() == ContainerState.READY) {
   totalReadyContainers++;
+  allContainers.put(component.getName(), container.getId());
   

[26/50] [abbrv] hadoop git commit: HADOOP-13786 Add S3A committer for zero-rename commits to S3 endpoints. Contributed by Steve Loughran and Ryan Blue.

2017-11-28 Thread kkaranasos
http://git-wip-us.apache.org/repos/asf/hadoop/blob/de8b6ca5/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestPathOutputCommitterFactory.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestPathOutputCommitterFactory.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestPathOutputCommitterFactory.java
new file mode 100644
index 000..13e1c61
--- /dev/null
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestPathOutputCommitterFactory.java
@@ -0,0 +1,495 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.mapreduce.lib.output;
+
+import java.io.IOException;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.TaskAttemptID;
+import org.apache.hadoop.mapreduce.TaskType;
+import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl;
+
+import static 
org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory.*;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Test the committer factory logic, looking at the override
+ * and fallback behavior.
+ */
+@SuppressWarnings("unchecked")
+public class TestPathOutputCommitterFactory extends Assert {
+
+  private static final String HTTP_COMMITTER_FACTORY = String.format(
+  COMMITTER_FACTORY_SCHEME_PATTERN, "http");
+
+  private static final Path HTTP_PATH = new Path("http://hadoop.apache.org/");
+  private static final Path HDFS_PATH = new Path("hdfs://localhost:8081/");
+
+  private TaskAttemptID taskAttemptID =
+  new TaskAttemptID("local", 0, TaskType.MAP, 1, 2);
+
+  /**
+   * Set a factory for a schema, verify it works.
+   * @throws Throwable failure
+   */
+  @Test
+  public void testCommitterFactoryForSchema() throws Throwable {
+createCommitterFactory(SimpleCommitterFactory.class,
+HTTP_PATH,
+newBondedConfiguration());
+  }
+
+  /**
+   * A schema factory only affects that filesystem.
+   * @throws Throwable failure
+   */
+  @Test
+  public void testCommitterFactoryFallbackDefault() throws Throwable {
+createCommitterFactory(FileOutputCommitterFactory.class,
+HDFS_PATH,
+newBondedConfiguration());
+  }
+
+  /**
+   * A schema factory only affects that filesystem; test through
+   * {@link PathOutputCommitterFactory#createCommitter(Path, 
TaskAttemptContext)}.
+   * @throws Throwable failure
+   */
+  @Test
+  public void testCommitterFallbackDefault() throws Throwable {
+createCommitter(FileOutputCommitter.class,
+HDFS_PATH,
+taskAttempt(newBondedConfiguration()));
+  }
+
+  /**
+   * Verify that you can override any schema with an explicit name.
+   */
+  @Test
+  public void testCommitterFactoryOverride() throws Throwable {
+Configuration conf = newBondedConfiguration();
+// set up for the schema factory
+// and then set a global one which overrides the others.
+conf.set(COMMITTER_FACTORY_CLASS, OtherFactory.class.getName());
+createCommitterFactory(OtherFactory.class, HDFS_PATH, conf);
+createCommitterFactory(OtherFactory.class, HTTP_PATH, conf);
+  }
+
+  /**
+   * Verify that if the factory class option is "", schema factory
+   * resolution still works.
+   */
+  @Test
+  public void testCommitterFactoryEmptyOption() throws Throwable {
+Configuration conf = newBondedConfiguration();
+// set up for the schema factory
+// and then set a global one which overrides the others.
+conf.set(COMMITTER_FACTORY_CLASS, "");
+createCommitterFactory(SimpleCommitterFactory.class, HTTP_PATH, conf);
+
+// and HDFS, with no schema, falls back to the default
+
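
For orientation, the resolution order exercised by these tests is: an
explicit global factory (COMMITTER_FACTORY_CLASS) wins; otherwise a
per-scheme factory bound through COMMITTER_FACTORY_SCHEME_PATTERN applies;
otherwise the default file committer factory is used. A hedged sketch of
binding a factory to one filesystem scheme, where the factory class name is
a placeholder, not a class added by this patch:

    import org.apache.hadoop.conf.Configuration;

    import static org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory.*;

    // Sketch: bind a committer factory for the "s3a" scheme.
    final class FactoryBindingSketch {
      static Configuration bindS3aFactory(Configuration conf) {
        conf.set(String.format(COMMITTER_FACTORY_SCHEME_PATTERN, "s3a"),
            "org.example.MyCommitterFactory"); // placeholder class name
        return conf;
      }
    }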

[41/50] [abbrv] hadoop git commit: HDFS-12858. Add router admin commands usage in HDFS commands reference doc. Contributed by Yiqun Lin.

2017-11-28 Thread kkaranasos
HDFS-12858. Add router admin commands usage in HDFS commands reference doc. 
Contributed by Yiqun Lin.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/94bed504
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/94bed504
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/94bed504

Branch: refs/heads/YARN-6592
Commit: 94bed5047113fb148194380853ff01e92897a91f
Parents: d8923cd
Author: Yiqun Lin 
Authored: Tue Nov 28 11:48:55 2017 +0800
Committer: Yiqun Lin 
Committed: Tue Nov 28 11:48:55 2017 +0800

--
 .../src/site/markdown/HDFSCommands.md   | 23 
 1 file changed, 23 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/94bed504/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index c5f80d0..d8462c1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -414,6 +414,29 @@ Usage:
 
 Runs a HDFS dfsadmin client.
 
+### `dfsrouter`
+
+Usage: `hdfs dfsrouter`
+
+Runs the DFS router. See [Router](./HDFSRouterFederation.html#Router) for more 
info.
+
+### `dfsrouteradmin`
+
+Usage:
+
+  hdfs dfsrouteradmin
+  [-add <source> <nameservice> <destination>]
+  [-rm <source>]
+  [-ls <path>]
+
+| COMMAND\_OPTION | Description |
+|:---- |:---- |
+| `-add` *source* *nameservice* *destination* | Add a mount table entry or 
update if it exists. |
+| `-rm` *source* | Remove mount point of specified path. |
+| `-ls` *path* | List mount points under specified path. |
+
+The commands for managing Router-based federation. See [Mount table 
management](./HDFSRouterFederation.html#Mount_table_management) for more info.
+
 ### `diskbalancer`
 
 Usage:





[28/50] [abbrv] hadoop git commit: YARN-5534. Allow user provided Docker volume mount list. (Contributed by Shane Kumpf)

2017-11-28 Thread kkaranasos
YARN-5534.  Allow user provided Docker volume mount list.  (Contributed by 
Shane Kumpf)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d42a336c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d42a336c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d42a336c

Branch: refs/heads/YARN-6592
Commit: d42a336cfab106d052aa30d80d9d30904123cb55
Parents: de8b6ca
Author: Eric Yang 
Authored: Wed Nov 22 13:05:34 2017 -0500
Committer: Eric Yang 
Committed: Wed Nov 22 13:05:34 2017 -0500

--
 .../runtime/DockerLinuxContainerRuntime.java|  42 +++
 .../linux/runtime/docker/DockerRunCommand.java  |  12 ++
 .../runtime/TestDockerContainerRuntime.java | 109 +++
 .../src/site/markdown/DockerContainers.md   |  48 
 4 files changed, 211 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d42a336c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
index 75a28e6..e61dc23 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
@@ -65,6 +65,7 @@ import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
 import java.util.Set;
+import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
 import static 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.LinuxContainerRuntimeConstants.*;
@@ -134,6 +135,16 @@ import static 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.r
  * source is an absolute path that is not a symlink and that points to a
  * localized resource.
  *   
+ *   
+ * {@code YARN_CONTAINER_RUNTIME_DOCKER_MOUNTS} allows users to specify
+ + additional volume mounts for the Docker container. The value of the
+ * environment variable should be a comma-separated list of mounts.
+ * All such mounts must be given as {@code source:dest:mode}, and the mode
+ * must be "ro" (read-only) or "rw" (read-write) to specify the type of
+ * access being requested. The requested mounts will be validated by
+ * container-executor based on the values set in container-executor.cfg for
+ * {@code docker.allowed.ro-mounts} and {@code docker.allowed.rw-mounts}.
+ *   
  * 
  */
 @InterfaceAudience.Private
@@ -151,6 +162,8 @@ public class DockerLinuxContainerRuntime implements 
LinuxContainerRuntime {
   "^[a-zA-Z0-9][a-zA-Z0-9_.-]+$";
   private static final Pattern hostnamePattern = Pattern.compile(
   HOSTNAME_PATTERN);
+  private static final Pattern USER_MOUNT_PATTERN = Pattern.compile(
+  "(?<=^|,)([^:\\x00]+):([^:\\x00]+):([a-z]+)");
 
   @InterfaceAudience.Private
   public static final String ENV_DOCKER_CONTAINER_IMAGE =
@@ -176,6 +189,9 @@ public class DockerLinuxContainerRuntime implements 
LinuxContainerRuntime {
   @InterfaceAudience.Private
   public static final String ENV_DOCKER_CONTAINER_LOCAL_RESOURCE_MOUNTS =
   "YARN_CONTAINER_RUNTIME_DOCKER_LOCAL_RESOURCE_MOUNTS";
+  @InterfaceAudience.Private
+  public static final String ENV_DOCKER_CONTAINER_MOUNTS =
+  "YARN_CONTAINER_RUNTIME_DOCKER_MOUNTS";
 
   private Configuration conf;
   private Context nmContext;
@@ -675,6 +691,32 @@ public class DockerLinuxContainerRuntime implements 
LinuxContainerRuntime {
   }
 }
 
+if (environment.containsKey(ENV_DOCKER_CONTAINER_MOUNTS)) {
+  Matcher parsedMounts = USER_MOUNT_PATTERN.matcher(
+  environment.get(ENV_DOCKER_CONTAINER_MOUNTS));
+  if (!parsedMounts.find()) {
+throw new ContainerExecutionException(
+"Unable to parse user supplied mount list: "
++ environment.get(ENV_DOCKER_CONTAINER_MOUNTS));
+  }
+  parsedMounts.reset();
+  while (parsedMounts.find()) {
+String src = parsedMounts.group(1);
+String dst = 
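
The hunk is truncated here, but the parsing step is self-contained: the
pattern shown above splits a comma-separated mount list into
source/destination/mode triples. A standalone check of that behavior:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Standalone check of the USER_MOUNT_PATTERN shown in this hunk.
    final class MountParseSketch {
      private static final Pattern USER_MOUNT_PATTERN = Pattern.compile(
          "(?<=^|,)([^:\\x00]+):([^:\\x00]+):([a-z]+)");

      public static void main(String[] args) {
        Matcher m = USER_MOUNT_PATTERN.matcher(
            "/etc/passwd:/etc/passwd:ro,/tmp/scratch:/scratch:rw");
        while (m.find()) {
          System.out.println("src=" + m.group(1) + " dst=" + m.group(2)
              + " mode=" + m.group(3)); // mode must be "ro" or "rw"
        }
      }
    }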

[36/50] [abbrv] hadoop git commit: YARN-7509. AsyncScheduleThread and ResourceCommitterService are still running after RM is transitioned to standby. (Tao Yang via wangda)

2017-11-28 Thread kkaranasos
YARN-7509. AsyncScheduleThread and ResourceCommitterService are still running 
after RM is transitioned to standby. (Tao Yang via wangda)

Change-Id: I7477fe355419fd4a0a6e2bdda7319abad4c4c748


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/834e91ee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/834e91ee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/834e91ee

Branch: refs/heads/YARN-6592
Commit: 834e91ee91d22d74866afbf6252107e969bf8370
Parents: d162252
Author: Wangda Tan 
Authored: Thu Nov 23 19:59:03 2017 -0800
Committer: Wangda Tan 
Committed: Thu Nov 23 19:59:03 2017 -0800

--
 .../scheduler/capacity/CapacityScheduler.java   |  16 +-
 .../TestRMHAForAsyncScheduler.java  | 155 +++
 2 files changed, 164 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/834e91ee/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index ed30ad1..218adf3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -459,7 +459,7 @@ public class CapacityScheduler extends
* Schedule on all nodes by starting at a random point.
* @param cs
*/
-  static void schedule(CapacityScheduler cs) {
+  static void schedule(CapacityScheduler cs) throws InterruptedException{
 // First randomize the start point
 int current = 0;
Collection<FiCaSchedulerNode> nodes = cs.nodeTracker.getAllNodes();
@@ -475,9 +475,7 @@ public class CapacityScheduler extends
   cs.allocateContainersToNode(node.getNodeID(), false);
 }
 
-try {
-  Thread.sleep(cs.getAsyncScheduleInterval());
-} catch (InterruptedException e) {}
+Thread.sleep(cs.getAsyncScheduleInterval());
   }
 
   static class AsyncScheduleThread extends Thread {
@@ -492,9 +490,9 @@ public class CapacityScheduler extends
 
 @Override
 public void run() {
-  while (true) {
+  while (!Thread.currentThread().isInterrupted()) {
 try {
-  if (!runSchedules.get() || Thread.currentThread().isInterrupted()) {
+  if (!runSchedules.get()) {
 Thread.sleep(100);
   } else {
 // Don't run schedule if we have some pending backlogs already
@@ -505,9 +503,11 @@ public class CapacityScheduler extends
 }
   }
 } catch (InterruptedException ie) {
-  // Do nothing
+  // keep interrupt signal
+  Thread.currentThread().interrupt();
 }
   }
+  LOG.info("AsyncScheduleThread[" + getName() + "] exited!");
 }
 
 public void beginSchedule() {
@@ -546,8 +546,10 @@ public class CapacityScheduler extends
 
 } catch (InterruptedException e) {
   LOG.error(e);
+  Thread.currentThread().interrupt();
 }
   }
+  LOG.info("ResourceCommitterService exited!");
 }
 
 public void addNewCommitRequest(
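
The fix applies the standard idiom for long-running library threads:
catching InterruptedException clears the thread's interrupt flag, so a loop
guarded by isInterrupted() must restore the flag or it will keep spinning
after the RM transitions to standby. A minimal sketch of the idiom:

    // Minimal sketch of the interrupt-preserving loop used by the patch.
    final class InterruptLoopSketch implements Runnable {
      @Override
      public void run() {
        while (!Thread.currentThread().isInterrupted()) {
          try {
            Thread.sleep(100); // stand-in for one scheduling pass
          } catch (InterruptedException ie) {
            // sleep() cleared the flag; restore it so the guard sees it
            Thread.currentThread().interrupt();
          }
        }
      }
    }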

http://git-wip-us.apache.org/repos/asf/hadoop/blob/834e91ee/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHAForAsyncScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHAForAsyncScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHAForAsyncScheduler.java
new file mode 100644
index 000..46d5cda
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHAForAsyncScheduler.java
@@ -0,0 +1,155 @@
+/*
+ * Licensed to the Apache Software 

[49/50] [abbrv] hadoop git commit: YARN-6595. [API] Add Placement Constraints at the application level. (Arun Suresh via kkaranasos)

2017-11-28 Thread kkaranasos
YARN-6595. [API] Add Placement Constraints at the application level. (Arun 
Suresh via kkaranasos)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6a0abf39
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6a0abf39
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6a0abf39

Branch: refs/heads/YARN-6592
Commit: 6a0abf39ff3b1db9ea686324f84e43f9ae598ae2
Parents: 3571634
Author: Konstantinos Karanasos <kkarana...@apache.org>
Authored: Mon Nov 13 15:25:24 2017 -0800
Committer: Konstantinos Karanasos <kkarana...@apache.org>
Committed: Tue Nov 28 13:46:29 2017 -0800

--
 .../RegisterApplicationMasterRequest.java   |  42 -
 .../yarn/api/resource/PlacementConstraint.java  | 156 +++
 .../src/main/proto/yarn_protos.proto|   6 +
 .../src/main/proto/yarn_service_protos.proto|   1 +
 .../RegisterApplicationMasterRequestPBImpl.java | 106 -
 .../hadoop/yarn/api/BasePBImplRecordsTest.java  |  11 ++
 6 files changed, 313 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6a0abf39/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
index 395e190..f2d537a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
@@ -18,11 +18,16 @@
 
 package org.apache.hadoop.yarn.api.protocolrecords;
 
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Stable;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
+import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
 import org.apache.hadoop.yarn.util.Records;
-
 /**
  * The request sent by the {@code ApplicationMaster} to {@code ResourceManager}
  * on registration.
@@ -132,4 +137,39 @@ public abstract class RegisterApplicationMasterRequest {
   @Public
   @Stable
   public abstract void setTrackingUrl(String trackingUrl);
+
+  /**
+   * Return all Placement Constraints specified at the Application level. The
+   * mapping is from a set of allocation tags to a
+   * PlacementConstraint associated with the tags, i.e., each
+   * {@link org.apache.hadoop.yarn.api.records.SchedulingRequest} that has 
those
+   * tags will be placed taking into account the corresponding constraint.
+   *
+   * @return A map of Placement Constraints.
+   */
+  @Public
+  @Unstable
+  public Map<Set<String>, PlacementConstraint> getPlacementConstraints() {
+return new HashMap<>();
+  }
+
+  /**
+   * Set Placement Constraints applicable to the
+   * {@link org.apache.hadoop.yarn.api.records.SchedulingRequest}s
+   * of this application.
+   * The mapping is from a set of allocation tags to a
+   * PlacementConstraint associated with the tags.
+   * For example:
+   *  Map 
+   *   hb_regionserver - node_anti_affinity,
+   *   hb_regionserver, hb_master - rack_affinity,
+   *   ...
+   *  
+   * @param placementConstraints Placement Constraint Mapping.
+   */
+  @Public
+  @Unstable
+  public void setPlacementConstraints(
+  Map<Set<String>, PlacementConstraint> placementConstraints) {
+  }
 }
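
A hedged sketch of how an ApplicationMaster might populate this mapping at
registration, mirroring the javadoc example above. The two constraint
arguments stand in for PlacementConstraint objects built elsewhere; this
hunk does not show their construction:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
    import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

    // Sketch only: nodeAntiAffinity and rackAffinity are assumed inputs.
    final class RegisterWithConstraintsSketch {
      static void attach(RegisterApplicationMasterRequest request,
          PlacementConstraint nodeAntiAffinity,
          PlacementConstraint rackAffinity) {
        Map<Set<String>, PlacementConstraint> constraints = new HashMap<>();
        constraints.put(new HashSet<>(Arrays.asList("hb_regionserver")),
            nodeAntiAffinity);
        constraints.put(
            new HashSet<>(Arrays.asList("hb_regionserver", "hb_master")),
            rackAffinity);
        request.setPlacementConstraints(constraints);
      }
    }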

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6a0abf39/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
index f0e3982..b6e851a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
@@ -54,6 +54,26 @@ public class PlacementConstrain

[37/50] [abbrv] hadoop git commit: YARN-7290. Method canContainerBePreempted can return true when it shouldn't. (Contributed by Steven Rand)

2017-11-28 Thread kkaranasos
YARN-7290. Method canContainerBePreempted can return true when it shouldn't. 
(Contributed by Steven Rand)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2bde3aed
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2bde3aed
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2bde3aed

Branch: refs/heads/YARN-6592
Commit: 2bde3aedf139368fc71f053d8dd6580b498ff46d
Parents: 834e91e
Author: Yufei Gu 
Authored: Fri Nov 24 23:32:46 2017 -0800
Committer: Yufei Gu 
Committed: Fri Nov 24 23:32:46 2017 -0800

--
 .../scheduler/fair/FSAppAttempt.java| 23 +--
 .../scheduler/fair/FSPreemptionThread.java  | 68 ++--
 .../fair/TestFairSchedulerPreemption.java   | 37 ---
 3 files changed, 93 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2bde3aed/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
index e711229..43daace 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
@@ -588,7 +588,8 @@ public class FSAppAttempt extends SchedulerApplicationAttempt
 }
   }
 
-  boolean canContainerBePreempted(RMContainer container) {
+  boolean canContainerBePreempted(RMContainer container,
+      Resource alreadyConsideringForPreemption) {
 if (!isPreemptable()) {
   return false;
 }
@@ -610,6 +611,15 @@ public class FSAppAttempt extends SchedulerApplicationAttempt
 
 // Check if the app's allocation will be over its fairshare even
 // after preempting this container
+    Resource usageAfterPreemption = getUsageAfterPreemptingContainer(
+        container.getAllocatedResource(),
+        alreadyConsideringForPreemption);
+
+    return !isUsageBelowShare(usageAfterPreemption, getFairShare());
+  }
+
+  private Resource getUsageAfterPreemptingContainer(Resource containerResources,
+      Resource alreadyConsideringForPreemption) {
 Resource usageAfterPreemption = Resources.clone(getResourceUsage());
 
 // Subtract resources of containers already queued for preemption
@@ -617,10 +627,13 @@ public class FSAppAttempt extends SchedulerApplicationAttempt
   Resources.subtractFrom(usageAfterPreemption, resourcesToBePreempted);
 }
 
-    // Subtract this container's allocation to compute usage after preemption
-    Resources.subtractFrom(
-        usageAfterPreemption, container.getAllocatedResource());
-    return !isUsageBelowShare(usageAfterPreemption, getFairShare());
+    // Subtract resources of this container and other containers of this app
+    // that the FSPreemptionThread is already considering for preemption.
+    Resources.subtractFrom(usageAfterPreemption, containerResources);
+    Resources.subtractFrom(usageAfterPreemption,
+        alreadyConsideringForPreemption);
+
+    return usageAfterPreemption;
   }
 
   /**
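To make the new arithmetic concrete, here is a standalone, simplified model of
the check (memory-only, hypothetical names; the real code works on Resource
objects and subtracts containers already queued for preemption as well):

public class PreemptionCheckSketch {
  /**
   * Simplified model of the new check: a container may be preempted only if
   * the app would still be at or above its fair share after subtracting
   * everything already queued or already being considered for preemption.
   */
  static boolean canPreempt(long currentUsageMb, long queuedForPreemptionMb,
      long containerMb, long alreadyConsideringMb, long fairShareMb) {
    long usageAfter = currentUsageMb - queuedForPreemptionMb
        - containerMb - alreadyConsideringMb;
    return usageAfter >= fairShareMb;
  }

  public static void main(String[] args) {
    // App uses 10 GB, fair share is 6 GB. One 2 GB container is already being
    // considered; preempting another 2 GB container leaves exactly 6 GB, which
    // is still at the fair share, so preemption is allowed.
    System.out.println(canPreempt(10240, 0, 2048, 2048, 6144)); // true
    // With 2 GB already queued as well, a further 2 GB container would drop
    // the app below its share, so it must not be preempted.
    System.out.println(canPreempt(10240, 2048, 2048, 2048, 6144)); // false
  }
}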

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2bde3aed/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
index b3e59c5..47e580d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
+++ 

[38/50] [abbrv] hadoop git commit: MAPREDUCE-7014. Fix java doc errors in jdk1.8. Contributed by Steve Loughran.

2017-11-28 Thread kkaranasos
MAPREDUCE-7014. Fix java doc errors in jdk1.8. Contributed by Steve Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3cd75845
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3cd75845
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3cd75845

Branch: refs/heads/YARN-6592
Commit: 3cd75845da1aced3d88e0ce68c68e8d95f48fb79
Parents: 2bde3ae
Author: Rohith Sharma K S 
Authored: Mon Nov 27 22:01:00 2017 +0530
Committer: Rohith Sharma K S 
Committed: Mon Nov 27 22:01:00 2017 +0530

--
 .../lib/output/PathOutputCommitterFactory.java  | 12 ++--
 .../src/main/java/org/apache/hadoop/fs/s3a/Invoker.java |  4 +++-
 .../java/org/apache/hadoop/fs/s3a/S3AFileSystem.java|  2 +-
 .../main/java/org/apache/hadoop/fs/s3a/S3AUtils.java|  4 
 .../org/apache/hadoop/fs/s3a/WriteOperationHelper.java  |  1 +
 .../hadoop/fs/s3a/commit/AbstractS3ACommitter.java  |  1 +
 .../apache/hadoop/fs/s3a/commit/CommitOperations.java   |  2 +-
 .../hadoop/fs/s3a/commit/staging/StagingCommitter.java  |  1 +
 .../hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java|  2 +-
 9 files changed, 19 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3cd75845/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/PathOutputCommitterFactory.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/PathOutputCommitterFactory.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/PathOutputCommitterFactory.java
index 0df14d1..7d214f2 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/PathOutputCommitterFactory.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/PathOutputCommitterFactory.java
@@ -39,12 +39,12 @@ import org.apache.hadoop.util.ReflectionUtils;
  *
  * Algorithm:
  * 
- *   If an explicit committer factory is named, it is used.
- *   The output path is examined.
+ *   If an explicit committer factory is named, it is used.
+ *   The output path is examined.
 *   If it is non-null and there is an explicit schema for that filesystem,
- *   its factory is instantiated.
- *   Otherwise, an instance of {@link FileOutputCommitter} is
- *   created.
+ *   its factory is instantiated.
+ *   Otherwise, an instance of {@link FileOutputCommitter} is
+ *   created.
  * 
  *
  * In {@link FileOutputFormat}, the created factory has its method
@@ -186,7 +186,7 @@ public class PathOutputCommitterFactory extends Configured {
   }
 
   /**
-   * Create the committer factory for a task attempt & destination, then
+   * Create the committer factory for a task attempt and destination, then
* create the committer from it.
* @param outputPath the task's output path, or null if no output path
* has been defined.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3cd75845/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java
index 9900f4c..107a247 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java
@@ -55,7 +55,7 @@ import org.apache.hadoop.io.retry.RetryPolicy;
  *
  * The static {@link #quietly(String, String, VoidOperation)} and
  * {@link #quietlyEval(String, String, Operation)} calls exist to take any
- * operation and quietly catch & log at debug. The return value of
+ * operation and quietly catch and log at debug. The return value of
  * {@link #quietlyEval(String, String, Operation)} is a java 8 optional,
  * which can then be used in java8-expressions.
  */
@@ -390,9 +390,11 @@ public class Invoker {
* Execute an operation; any exception raised is caught and
* logged at debug.
* The result is only non-empty if the operation succeeded
+   * @param <T> type to return
* @param action action to execute
* @param path path (for exception construction)
* @param operation operation
+   * @return the result of a successful operation
*/
  public static <T> Optional<T> quietlyEval(String action,
   String 
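A hedged sketch of a call site, assuming Operation is the no-argument
functional interface the signature above suggests; the method and file names
here are illustrative only:

import java.util.Optional;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.Invoker;

public class QuietlyEvalSketch {
  // Probe a file's length without letting failures propagate: any exception
  // is caught and logged at debug, and Optional.empty() is returned instead.
  static Optional<Long> probeLength(FileSystem fs, Path path) {
    return Invoker.quietlyEval("getFileLength", path.toString(),
        () -> fs.getFileStatus(path).getLen());
  }
}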

[47/50] [abbrv] hadoop git commit: YARN-6594. [API] Introduce SchedulingRequest object. (Konstantinos Karanasos via wangda)

2017-11-28 Thread kkaranasos
YARN-6594. [API] Introduce SchedulingRequest object. (Konstantinos Karanasos 
via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3571634b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3571634b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3571634b

Branch: refs/heads/YARN-6592
Commit: 3571634b06003d40b9ff5e6235515740d69ded53
Parents: ccf07e9
Author: Wangda Tan 
Authored: Mon Oct 30 16:54:02 2017 -0700
Committer: Konstantinos Karanasos 
Committed: Tue Nov 28 13:46:29 2017 -0800

--
 .../hadoop/yarn/api/records/ResourceSizing.java |  64 +
 .../yarn/api/records/SchedulingRequest.java | 205 ++
 .../src/main/proto/yarn_protos.proto|  14 +
 .../records/impl/pb/ResourceSizingPBImpl.java   | 117 
 .../impl/pb/SchedulingRequestPBImpl.java| 266 +++
 5 files changed, 666 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3571634b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
new file mode 100644
index 000..d82be11
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceSizing.java
@@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.api.records;
+
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.util.Records;
+
+/**
+ * {@code ResourceSizing} contains information for the size of a
+ * {@link SchedulingRequest}, such as the number of requested allocations and
+ * the resources for each allocation.
+ */
+@Public
+@Unstable
+public abstract class ResourceSizing {
+
+  @Public
+  @Unstable
+  public static ResourceSizing newInstance(Resource resources) {
+    return ResourceSizing.newInstance(1, resources);
+  }
+
+  @Public
+  @Unstable
+  public static ResourceSizing newInstance(int numAllocations,
+      Resource resources) {
+    ResourceSizing resourceSizing = Records.newRecord(ResourceSizing.class);
+    resourceSizing.setNumAllocations(numAllocations);
+    resourceSizing.setResources(resources);
+    return resourceSizing;
+  }
+
+  @Public
+  @Unstable
+  public abstract int getNumAllocations();
+
+  @Public
+  @Unstable
+  public abstract void setNumAllocations(int numAllocations);
+
+  @Public
+  @Unstable
+  public abstract Resource getResources();
+
+  @Public
+  @Unstable
+  public abstract void setResources(Resource resources);
+}
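For reference, a small sketch of how these factory methods compose
(illustrative only; Resource.newInstance is the long-standing records API):

import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceSizing;

public class ResourceSizingSketch {
  public static void main(String[] args) {
    // Five allocations of 1 GB / 1 vcore each.
    ResourceSizing five =
        ResourceSizing.newInstance(5, Resource.newInstance(1024, 1));
    // The single-argument factory defaults numAllocations to 1.
    ResourceSizing one =
        ResourceSizing.newInstance(Resource.newInstance(2048, 2));
    System.out.println(five.getNumAllocations() + ", " + one.getNumAllocations());
  }
}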

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3571634b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
new file mode 100644
index 000..47a0697
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SchedulingRequest.java
@@ -0,0 +1,205 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this 

[31/50] [abbrv] hadoop git commit: YARN-7524. Remove unused FairSchedulerEventLog. (Contributed by Wilfred Spiegelenburg)

2017-11-28 Thread kkaranasos
YARN-7524. Remove unused FairSchedulerEventLog. (Contributed by Wilfred 
Spiegelenburg)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4cc9479d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4cc9479d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4cc9479d

Branch: refs/heads/YARN-6592
Commit: 4cc9479dae2bfb7d14d29b55d103eea9fa35a586
Parents: 738d1a2
Author: Yufei Gu 
Authored: Wed Nov 22 14:18:36 2017 -0800
Committer: Yufei Gu 
Committed: Wed Nov 22 14:18:36 2017 -0800

--
 .../scheduler/fair/FairScheduler.java   |   8 -
 .../fair/FairSchedulerConfiguration.java|  16 --
 .../scheduler/fair/FairSchedulerEventLog.java   | 152 ---
 .../fair/TestFairSchedulerEventLog.java |  83 --
 4 files changed, 259 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4cc9479d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
index b2978d4..661d0a0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
@@ -177,7 +177,6 @@ public class FairScheduler extends
   protected double rackLocalityThreshold; // Cluster threshold for rack 
locality
   protected long nodeLocalityDelayMs; // Delay for node locality
   protected long rackLocalityDelayMs; // Delay for rack locality
-  private FairSchedulerEventLog eventLog; // Machine-readable event log
   protected boolean assignMultiple; // Allocate multiple containers per
 // heartbeat
   @VisibleForTesting
@@ -404,10 +403,6 @@ public class FairScheduler extends
 return continuousSchedulingSleepMs;
   }
 
-  public FairSchedulerEventLog getEventLog() {
-    return eventLog;
-  }
-
   /**
* Add a new application to the scheduler, with a given id, queue name, and
* user. This will accept a new app even if the user or queue is above
@@ -875,7 +870,6 @@ public class FairScheduler extends
 try {
   writeLock.lock();
   long start = getClock().getTime();
-  eventLog.log("HEARTBEAT", nm.getHostName());
   super.nodeUpdate(nm);
 
   FSSchedulerNode fsNode = getFSSchedulerNode(nm.getNodeID());
@@ -1284,8 +1278,6 @@ public class FairScheduler extends
 
   // This stores per-application scheduling information
   this.applications = new ConcurrentHashMap<>();
-  this.eventLog = new FairSchedulerEventLog();
-  eventLog.init(this.conf);
 
   allocConf = new AllocationConfiguration(conf);
   try {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4cc9479d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
index 9c9eee6..38e71a7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
@@ -17,7 +17,6 @@
 */
 package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
 
-import java.io.File;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
@@ -64,12 +63,6 @@ public class FairSchedulerConfiguration extends 

[23/50] [abbrv] hadoop git commit: HADOOP-13786 Add S3A committer for zero-rename commits to S3 endpoints. Contributed by Steve Loughran and Ryan Blue.

2017-11-28 Thread kkaranasos
http://git-wip-us.apache.org/repos/asf/hadoop/blob/de8b6ca5/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitterFactory.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitterFactory.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitterFactory.java
new file mode 100644
index 000..b3bcca1
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitterFactory.java
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.commit;
+
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.output.PathOutputCommitter;
+import org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory;
+
+/**
+ * Dynamically create the output committer based on subclass type and settings.
+ */
+public abstract class AbstractS3ACommitterFactory
+extends PathOutputCommitterFactory {
+  public static final Logger LOG = LoggerFactory.getLogger(
+  AbstractS3ACommitterFactory.class);
+
+  @Override
+  public PathOutputCommitter createOutputCommitter(Path outputPath,
+  TaskAttemptContext context) throws IOException {
+    FileSystem fs = getDestinationFileSystem(outputPath, context);
+    PathOutputCommitter outputCommitter;
+    if (fs instanceof S3AFileSystem) {
+      outputCommitter = createTaskCommitter((S3AFileSystem) fs,
+          outputPath, context);
+    } else {
+      throw new PathCommitException(outputPath,
+          "Filesystem not supported by this committer");
+    }
+    LOG.info("Using Committer {} for {}",
+        outputCommitter,
+        outputPath);
+    return outputCommitter;
+  }
+
+  /**
+   * Get the destination filesystem, returning null if there is none.
+   * Code using this must explicitly or implicitly look for a null value
+   * in the response.
+   * @param outputPath output path
+   * @param context job/task context
+   * @return the destination filesystem, if it can be determined
+   * @throws IOException if the FS cannot be instantiated
+   */
+  protected FileSystem getDestinationFileSystem(Path outputPath,
+  JobContext context)
+  throws IOException {
+    return outputPath != null ?
+        FileSystem.get(outputPath.toUri(), context.getConfiguration())
+        : null;
+  }
+
+  /**
+   * Implementation point: create a task committer for a specific filesystem.
+   * @param fileSystem destination FS.
+   * @param outputPath final output path for work
+   * @param context task context
+   * @return a committer
+   * @throws IOException any problem, including the FS not supporting
+   * the desired committer
+   */
+  public abstract PathOutputCommitter createTaskCommitter(
+  S3AFileSystem fileSystem,
+  Path outputPath,
+  TaskAttemptContext context) throws IOException;
+}
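A concrete factory then only has to implement createTaskCommitter; a
hypothetical sketch (MyS3ACommitter is a made-up committer class, not part of
this patch):

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.S3AFileSystem;
import org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitterFactory;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.PathOutputCommitter;

/**
 * Hypothetical subclass: all a concrete factory supplies is the committer
 * construction; the base class already verifies the destination is S3A.
 */
public class MyCommitterFactory extends AbstractS3ACommitterFactory {
  @Override
  public PathOutputCommitter createTaskCommitter(S3AFileSystem fileSystem,
      Path outputPath, TaskAttemptContext context) throws IOException {
    // MyS3ACommitter is hypothetical, standing in for a real committer.
    return new MyS3ACommitter(fileSystem, outputPath, context);
  }
}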

http://git-wip-us.apache.org/repos/asf/hadoop/blob/de8b6ca5/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/CommitConstants.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/CommitConstants.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/CommitConstants.java
new file mode 100644
index 000..03cfcba
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/CommitConstants.java
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use 

[19/50] [abbrv] hadoop git commit: HADOOP-13786 Add S3A committer for zero-rename commits to S3 endpoints. Contributed by Steve Loughran and Ryan Blue.

2017-11-28 Thread kkaranasos
http://git-wip-us.apache.org/repos/asf/hadoop/blob/de8b6ca5/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md
--
diff --git 
a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md 
b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md
new file mode 100644
index 000..c6dbf55
--- /dev/null
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md
@@ -0,0 +1,819 @@
+
+
+# Committing work to S3 with the "S3A Committers"
+
+
+
+This page covers the S3A Committers, which can commit work directly
+to an S3 object store.
+
+These committers are designed to solve a fundamental problem which
+the standard committers cannot solve for S3: consistent, high-performance,
+and reliable commitment of output to S3.
+
+For details on their internal design, see
+[S3A Committers: Architecture and
+Implementation](./committer_architecture.html).
+
+
+## Introduction: The Commit Problem
+
+
+Apache Hadoop MapReduce (and behind the scenes, Apache Spark) often write
+the output of their work to filesystems.
+
+Normally, Hadoop uses the `FileOutputFormatCommitter` to manage the
+promotion of files created in a single task attempt to the final output of
+a query. This is done in a way to handle failures of tasks and jobs, and to
+support speculative execution. It does that by listing directories and renaming
+their content into the final destination when tasks and then jobs are
+committed.
+
+This has some key requirements of the underlying filesystem:
+
+1. When you list a directory, you see all the files which have been created
+in it, and no files which are not in it (i.e. have been deleted).
+1. When you rename a directory, it is an `O(1)` atomic transaction. No other
+process across the cluster may rename a file or directory to the same path.
+If the rename fails for any reason, either the data is at the original
+location, or it is at the destination, in which case the rename actually
+succeeded.
+
+**The S3 object store and the `s3a://` filesystem client cannot meet these
+requirements.**
+
+1. Amazon S3 has inconsistent directory listings unless S3Guard is enabled.
+1. The S3A client mimics `rename()` by copying files and then deleting the
+originals. This can fail partway through, and there is nothing to prevent
+any other process in the cluster attempting a rename at the same time.
+
+As a result,
+
+* Files may not be listed, hence not renamed into place.
+* Deleted files may still be discovered, confusing the rename process to
+the point of failure.
+* If a rename fails, the data is left in an unknown state.
+* If more than one process attempts to commit work simultaneously, the output
+directory may contain the results of both processes: it is no longer an
+exclusive operation.
+* While S3Guard may deliver the listing consistency, commit time is still
+proportional to the amount of data created. It still can't handle task failure.
+
+**Using the "classic" `FileOutputCommitter` to commit work to Amazon S3 risks
+loss or corruption of generated data.**
+
+
+To address these problems there is now explicit support in the `hadoop-aws`
+module for committing work to Amazon S3 via the S3A filesystem client:
+*the S3A Committers*.
+
+
+For safe, as well as high-performance output of work to S3,
+we need to use "a committer" explicitly written to work with S3, treating it as
+an object store with special features.
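
A sketch of the wiring, assuming the configuration keys this module ships with
(`mapreduce.outputcommitter.factory.scheme.s3a` and `fs.s3a.committer.name`);
treat the exact key names as assumptions to confirm against the released docs:

import org.apache.hadoop.conf.Configuration;

public class CommitterConfigSketch {
  public static Configuration s3aCommitterConf() {
    Configuration conf = new Configuration();
    // Route committer creation for s3a:// destinations to the S3A factory.
    conf.set("mapreduce.outputcommitter.factory.scheme.s3a",
        "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory");
    // Pick one of the shipped committers, e.g. the directory staging committer.
    conf.set("fs.s3a.committer.name", "directory");
    return conf;
  }
}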
+
+
+### Background : Hadoop's "Commit Protocol"
+
+How exactly is work written to its final destination? That is accomplished by
+a "commit protocol" between the workers and the job manager.
+
+This protocol is implemented in Hadoop MapReduce, with a similar but extended
+version in Apache Spark:
+
+1. A "Job" is the entire query, with inputs to outputs
+1. The "Job Manager" is the process in charge of choreographing the execution
+of the job. It may perform some of the actual computation too.
+1. The job has "workers", which are processes which work the actual data
+and write the results.
+1. Workers execute "Tasks", which are fractions of the job, a job whose
+input has been *partitioned* into units of work which can be executed
+independently.
+1. The Job Manager directs workers to execute "tasks", usually trying to
+schedule the work close to the data (if the filesystem provides locality
+information).
+1. Workers can fail: the Job Manager needs to detect this and reschedule
+their active tasks.
+1. Workers can also become separated from the Job Manager, a "network
+partition". It is (provably) impossible for the Job Manager to distinguish
+a running-but-unreachable worker from a failed one.
+1. The output of a failed task must not be visible; this is to avoid its
+data getting into the final output.
+1. Multiple workers can be instructed to evaluate the same partition of the
+work; this "speculation" delivers speedup as it can address the "straggler
+problem".

[50/50] [abbrv] hadoop git commit: YARN-6593. [API] Introduce Placement Constraint object. (Konstantinos Karanasos via wangda)

2017-11-28 Thread kkaranasos
YARN-6593. [API] Introduce Placement Constraint object. (Konstantinos Karanasos 
via wangda)

Change-Id: Id00edb7185fdf01cce6e40f920cac3585f8cbe9c


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ccf07e95
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ccf07e95
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ccf07e95

Branch: refs/heads/YARN-6592
Commit: ccf07e959c1856f00fe40c3dfcadf262a910
Parents: 30941d9
Author: Wangda Tan 
Authored: Thu Aug 3 14:03:55 2017 -0700
Committer: Konstantinos Karanasos 
Committed: Tue Nov 28 13:46:29 2017 -0800

--
 .../yarn/api/resource/PlacementConstraint.java  | 567 +++
 .../yarn/api/resource/PlacementConstraints.java | 286 ++
 .../hadoop/yarn/api/resource/package-info.java  |  23 +
 .../src/main/proto/yarn_protos.proto|  55 ++
 .../api/resource/TestPlacementConstraints.java  | 106 
 .../PlacementConstraintFromProtoConverter.java  | 116 
 .../pb/PlacementConstraintToProtoConverter.java | 174 ++
 .../apache/hadoop/yarn/api/pb/package-info.java |  23 +
 .../yarn/api/records/impl/pb/ProtoUtils.java|  27 +
 .../PlacementConstraintTransformations.java | 209 +++
 .../hadoop/yarn/api/resource/package-info.java  |  23 +
 .../TestPlacementConstraintPBConversion.java| 195 +++
 .../TestPlacementConstraintTransformations.java | 183 ++
 13 files changed, 1987 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ccf07e95/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
new file mode 100644
index 000..f0e3982
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
@@ -0,0 +1,567 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.api.resource;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+
+/**
+ * {@code PlacementConstraint} represents a placement constraint for a resource
+ * allocation.
+ */
+@Public
+@Unstable
+public class PlacementConstraint {
+
+  /**
+   * The constraint expression tree.
+   */
+  private AbstractConstraint constraintExpr;
+
+  public PlacementConstraint(AbstractConstraint constraintExpr) {
+    this.constraintExpr = constraintExpr;
+  }
+
+  /**
+   * Get the constraint expression of the placement constraint.
+   *
+   * @return the constraint expression
+   */
+  public AbstractConstraint getConstraintExpr() {
+    return constraintExpr;
+  }
+
+  /**
+   * Interface used to enable the elements of the constraint tree to be
+   * visited.
+   */
+  @Private
+  public interface Visitable {
+    /**
+     * Visitor pattern.
+     *
+     * @param visitor visitor to be used
+     * @param <T> defines the type that the visitor will use and the return
+     *          type of the accept.
+     * @return the result of visiting a given object.
+     */
+    <T> T accept(Visitor<T> visitor);
+
+  }
+
+  /**
+   * Visitor API for a constraint tree.
+   *
+   * @param <T> determines the return type of the visit methods.
+   */
+  @Private
+  public interface Visitor<T> {
+    T visit(SingleConstraint constraint);
+
+    T visit(TargetExpression target);
+
+    T visit(TargetConstraint constraint);
+
+    T visit(CardinalityConstraint constraint);
+
+T 
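For a feel of the API, a minimal sketch using the companion
PlacementConstraints helper class added in this same commit (the allocation
tag names are hypothetical):

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.*;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.*;

public class ConstraintSketch {
  public static void main(String[] args) {
    // Anti-affinity: keep "hbase-m" containers on distinct nodes.
    PlacementConstraint antiAffinity =
        build(targetNotIn(NODE, allocationTag("hbase-m")));
    // Affinity: place containers on racks that already host "spark" tags.
    PlacementConstraint affinity =
        build(targetIn(RACK, allocationTag("spark")));
    System.out.println(antiAffinity.getConstraintExpr());
    System.out.println(affinity.getConstraintExpr());
  }
}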

[44/50] [abbrv] hadoop git commit: YARN-7480. Render tooltips on columns where text is clipped in new YARN UI. Contributed by Vasudevan Skm. This closes #293

2017-11-28 Thread kkaranasos
YARN-7480. Render tooltips on columns where text is clipped in new YARN UI. 
Contributed by Vasudevan Skm. This closes #293


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6b76695f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6b76695f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6b76695f

Branch: refs/heads/YARN-6592
Commit: 6b76695f886d4db7287a0425d56d5e13daf5d08d
Parents: 641ba5c
Author: Sunil G 
Authored: Tue Nov 28 22:41:52 2017 +0530
Committer: Sunil G 
Committed: Tue Nov 28 22:41:52 2017 +0530

--
 .../app/components/em-table-tooltip-text.js | 33 +++
 .../webapp/app/controllers/app-table-columns.js |  4 ++
 .../components/em-table-tooltip-text.hbs| 26 
 .../components/em-table-tooltip-text-test.js| 43 
 4 files changed, 106 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6b76695f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-tooltip-text.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-tooltip-text.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-tooltip-text.js
new file mode 100644
index 000..f363460
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-tooltip-text.js
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+ import Ember from 'ember';
+
+export default Ember.Component.extend({
+  content: null,
+
+  classNames: ["em-table-text-with-tooltip"],
+
+  didRender: function() {
+    this.$().parent().css("position", "static");
+  },
+
+  tooltipText: Ember.computed("content", function () {
+    return this.get("content");
+  }),
+});

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6b76695f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
index a87acc1..fb002f9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
@@ -50,6 +50,7 @@ export default Ember.Controller.extend({
   }, {
   id: 'appName',
   headerTitle: 'Application Name',
+  cellComponentName: 'em-table-tooltip-text',
   contentPath: 'appName',
   facetType: null,
   }, {
@@ -66,6 +67,7 @@ export default Ember.Controller.extend({
   }, {
   id: 'queue',
   headerTitle: 'Queue',
+  cellComponentName: 'em-table-tooltip-text',
   contentPath: 'queue',
   }, {
   id: 'progress',
@@ -128,6 +130,7 @@ export default Ember.Controller.extend({
   headerTitle: 'Application ID',
   contentPath: 'id',
   facetType: null,
+  cellComponentName: 'em-table-tooltip-text',
   minWidth: "250px"
 }, {
   id: 'state',
@@ -160,6 +163,7 @@ export default Ember.Controller.extend({
 id: 'queue',
 headerTitle: 'Queue',
 contentPath: 'queue',
+cellComponentName: 'em-table-tooltip-text',
 }, {
   id: 'stTime',
   headerTitle: 'Started Time',

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6b76695f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/em-table-tooltip-text.hbs
--
diff --git 

[25/50] [abbrv] hadoop git commit: HADOOP-13786 Add S3A committer for zero-rename commits to S3 endpoints. Contributed by Steve Loughran and Ryan Blue.

2017-11-28 Thread kkaranasos
http://git-wip-us.apache.org/repos/asf/hadoop/blob/de8b6ca5/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
index f846689..96de8e4 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
@@ -22,17 +22,16 @@ import java.io.IOException;
 import java.io.OutputStream;
 import java.util.ArrayList;
 import java.util.List;
-import java.util.concurrent.Callable;
+import java.util.Locale;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
-import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import com.amazonaws.AmazonClientException;
 import com.amazonaws.event.ProgressEvent;
 import com.amazonaws.event.ProgressEventType;
 import com.amazonaws.event.ProgressListener;
-import com.amazonaws.services.s3.model.CompleteMultipartUploadResult;
 import com.amazonaws.services.s3.model.PartETag;
 import com.amazonaws.services.s3.model.PutObjectRequest;
 import com.amazonaws.services.s3.model.PutObjectResult;
@@ -47,8 +46,9 @@ import org.slf4j.LoggerFactory;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.io.retry.RetryPolicies;
-import org.apache.hadoop.io.retry.RetryPolicy;
+import org.apache.hadoop.fs.StreamCapabilities;
+import org.apache.hadoop.fs.s3a.commit.CommitConstants;
+import org.apache.hadoop.fs.s3a.commit.PutTracker;
 import org.apache.hadoop.util.Progressable;
 
 import static org.apache.hadoop.fs.s3a.S3AUtils.*;
@@ -65,7 +65,8 @@ import static org.apache.hadoop.fs.s3a.Statistic.*;
  */
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
-class S3ABlockOutputStream extends OutputStream {
+class S3ABlockOutputStream extends OutputStream implements
+StreamCapabilities {
 
   private static final Logger LOG =
   LoggerFactory.getLogger(S3ABlockOutputStream.class);
@@ -87,14 +88,6 @@ class S3ABlockOutputStream extends OutputStream {
   private final ListeningExecutorService executorService;
 
   /**
-   * Retry policy for multipart commits; not all AWS SDK versions retry that.
-   */
-  private final RetryPolicy retryPolicy =
-  RetryPolicies.retryUpToMaximumCountWithProportionalSleep(
-  5,
-  2000,
-  TimeUnit.MILLISECONDS);
-  /**
* Factory for blocks.
*/
   private final S3ADataBlocks.BlockFactory blockFactory;
@@ -120,7 +113,12 @@ class S3ABlockOutputStream extends OutputStream {
   /**
* Write operation helper; encapsulation of the filesystem operations.
*/
-  private final S3AFileSystem.WriteOperationHelper writeOperationHelper;
+  private final WriteOperationHelper writeOperationHelper;
+
+  /**
+   * Track multipart put operation.
+   */
+  private final PutTracker putTracker;
 
   /**
* An S3A output stream which uploads partitions in a separate pool of
@@ -138,6 +136,7 @@ class S3ABlockOutputStream extends OutputStream {
* @param blockFactory factory for creating stream destinations
* @param statistics stats for this stream
* @param writeOperationHelper state of the write operation.
+   * @param putTracker put tracking for commit support
* @throws IOException on any problem
*/
   S3ABlockOutputStream(S3AFileSystem fs,
@@ -147,7 +146,8 @@ class S3ABlockOutputStream extends OutputStream {
   long blockSize,
   S3ADataBlocks.BlockFactory blockFactory,
   S3AInstrumentation.OutputStreamStatistics statistics,
-  S3AFileSystem.WriteOperationHelper writeOperationHelper)
+  WriteOperationHelper writeOperationHelper,
+  PutTracker putTracker)
   throws IOException {
 this.fs = fs;
 this.key = key;
@@ -155,6 +155,7 @@ class S3ABlockOutputStream extends OutputStream {
 this.blockSize = (int) blockSize;
 this.statistics = statistics;
 this.writeOperationHelper = writeOperationHelper;
+this.putTracker = putTracker;
 Preconditions.checkArgument(blockSize >= Constants.MULTIPART_MIN_SIZE,
 "Block size is too small: %d", blockSize);
 this.executorService = MoreExecutors.listeningDecorator(executorService);
@@ -166,7 +167,11 @@ class S3ABlockOutputStream extends OutputStream {
 // writes a 0-byte entry.
 createBlockIfNeeded();
 LOG.debug("Initialized S3ABlockOutputStream for {}" +
-" output to {}", writeOperationHelper, activeBlock);
+" output to {}", key, activeBlock);
+if (putTracker.initialize()) {
+  LOG.debug("Put tracker requests multipart upload");
+  

[40/50] [abbrv] hadoop git commit: YARN-7363. ContainerLocalizer don't have a valid log4j config in case of Linux container executor. (Contributed by Yufei Gu)

2017-11-28 Thread kkaranasos
YARN-7363. ContainerLocalizer don't have a valid log4j config in case of Linux 
container executor. (Contributed by Yufei Gu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d8923cdb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d8923cdb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d8923cdb

Branch: refs/heads/YARN-6592
Commit: d8923cdbf1567aee10a54f144fef734d1465ebed
Parents: fedabca
Author: Yufei Gu 
Authored: Mon Nov 27 11:47:11 2017 -0800
Committer: Yufei Gu 
Committed: Mon Nov 27 14:31:52 2017 -0800

--
 .../hadoop/yarn/conf/YarnConfiguration.java |  6 +++
 .../src/main/resources/yarn-default.xml |  8 
 .../nodemanager/LinuxContainerExecutor.java | 28 +++-
 .../WindowsSecureContainerExecutor.java |  2 +-
 .../localizer/ContainerLocalizer.java   | 46 +++-
 .../TestLinuxContainerExecutorWithMocks.java| 19 +---
 6 files changed, 98 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8923cdb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index ead9977..c1024ea 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1675,6 +1675,12 @@ public class YarnConfiguration extends Configuration {
   public static final String NM_CONTAINER_LOCALIZER_JAVA_OPTS_DEFAULT =
   "-Xmx256m";
 
+  /** The log level of container localizer process. */
+  public static final String NM_CONTAINER_LOCALIZER_LOG_LEVEL =
+      NM_PREFIX + "container-localizer.log.level";
+  public static final String NM_CONTAINER_LOCALIZER_LOG_LEVEL_DEFAULT =
+  "INFO";
+
   /** Prefix for runtime configuration constants. */
   public static final String LINUX_CONTAINER_RUNTIME_PREFIX = NM_PREFIX +
   "runtime.linux.";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8923cdb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 12cb902..dd9c6bd 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -1165,6 +1165,14 @@
 
   
 
+  <property>
+    <description>The log level for container localizer while it is an
+    independent process.</description>
+    <name>yarn.nodemanager.container-localizer.log.level</name>
+    <value>INFO</value>
+  </property>
+
   Where to store container logs. An application's localized log directory
   will be found in ${yarn.nodemanager.log-dirs}/application_${appid}.
   Individual containers' log directories will be below this, in 
directories 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8923cdb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
index e8c46a2..eaf664f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.ConfigurationException;
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container;
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerDiagnosticsUpdateEvent;
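Reading the new setting back follows the usual YarnConfiguration pattern; a
small sketch grounded in the constants added above:

import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class LocalizerLogLevelSketch {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // Defaults to "INFO" unless yarn-site.xml overrides it.
    String level = conf.get(
        YarnConfiguration.NM_CONTAINER_LOCALIZER_LOG_LEVEL,
        YarnConfiguration.NM_CONTAINER_LOCALIZER_LOG_LEVEL_DEFAULT);
    System.out.println("container localizer log level: " + level);
  }
}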

[39/50] [abbrv] hadoop git commit: YARN-6168. Restarted RM may not inform AM about all existing containers. Contributed by Chandni Singh

2017-11-28 Thread kkaranasos
YARN-6168. Restarted RM may not inform AM about all existing containers. 
Contributed by Chandni Singh


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fedabcad
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fedabcad
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fedabcad

Branch: refs/heads/YARN-6592
Commit: fedabcad42067ac7dd24de40fab6be2d3485a540
Parents: 3cd7584
Author: Jian He 
Authored: Mon Nov 27 09:55:08 2017 -0800
Committer: Jian He 
Committed: Mon Nov 27 10:19:58 2017 -0800

--
 .../api/protocolrecords/AllocateResponse.java   |  54 +++
 .../src/main/proto/yarn_service_protos.proto|   1 +
 .../impl/pb/AllocateResponsePBImpl.java |  37 +
 .../resourcemanager/DefaultAMSProcessor.java|   3 +
 .../scheduler/AbstractYarnScheduler.java|   4 +-
 .../resourcemanager/scheduler/Allocation.java   |  13 +-
 .../scheduler/SchedulerApplicationAttempt.java  |  48 ++
 .../scheduler/common/fica/FiCaSchedulerApp.java |   5 +-
 .../scheduler/fair/FairScheduler.java   |   3 +-
 .../applicationsmanager/TestAMRestart.java  | 149 +++
 10 files changed, 310 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fedabcad/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
index 9b254ae..98346ce 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
@@ -372,6 +372,44 @@ public abstract class AllocateResponse {
  public void setUpdateErrors(List<UpdateContainerError> updateErrors) {
   }
 
+  /**
+   * Get the list of running containers as viewed by
+   * ResourceManager from previous application attempts which
+   * have not been reported to the Application Master yet.
+   * 
+   * These containers were recovered by the RM after the application master
+   * had already registered. This may happen after RM restart when some NMs get
+   * delayed in connecting to the RM and reporting the active containers.
+   * Since they were not reported in the registration
+   * response, they are reported in the response to the AM heartbeat.
+   *
+   * @return the list of running containers as viewed by
+   * ResourceManager from previous application attempts.
+   */
+  @Public
+  @Unstable
+  public abstract List<Container> getContainersFromPreviousAttempts();
+
+  /**
+   * Set the list of running containers as viewed by
+   * ResourceManager from previous application attempts which have
+   * not been reported to the Application Master yet.
+   * 
+   * These containers were recovered by the RM after the application master
+   * had already registered. This may happen after RM restart when some NMs get
+   * delayed in connecting to the RM and reporting the active containers.
+   * Since they were not reported in the registration
+   * response, they are reported in the response to the AM heartbeat.
+   *
+   * @param containersFromPreviousAttempt
+   *  the list of running containers as viewed by
+   *  ResourceManager from previous application attempts.
+   */
+  @Private
+  @Unstable
+  public abstract void setContainersFromPreviousAttempts(
+      List<Container> containersFromPreviousAttempt);
+
   @Private
   @Unstable
   public static AllocateResponseBuilder newBuilder() {
@@ -590,6 +628,22 @@ public abstract class AllocateResponse {
 }
 
 /**
+ * Set the containersFromPreviousAttempt of the response.
+ * @see AllocateResponse#setContainersFromPreviousAttempts(List)
+ * @param containersFromPreviousAttempt
+ * containersFromPreviousAttempt of the response
+ * @return {@link AllocateResponseBuilder}
+ */
+    @Private
+    @Unstable
+    public AllocateResponseBuilder containersFromPreviousAttempt(
+        List<Container> containersFromPreviousAttempt) {
+      allocateResponse.setContainersFromPreviousAttempts(
+          containersFromPreviousAttempt);
+      return this;
+    }
+
+/**
  * Return generated {@link AllocateResponse} object.
  * @return {@link AllocateResponse}
  */
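On the AM side, picking these up is a heartbeat-time concern; a hedged sketch
using AMRMClient (the progress value and the handling are illustrative):

import java.util.List;

import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.client.api.AMRMClient;

public class PreviousAttemptContainersSketch {
  /**
   * On each heartbeat, pick up containers the RM recovered after AM
   * registration (e.g., from NMs that reconnected late after an RM restart).
   */
  static void heartbeat(AMRMClient<AMRMClient.ContainerRequest> amClient)
      throws Exception {
    AllocateResponse response = amClient.allocate(0.1f);
    List<Container> recovered = response.getContainersFromPreviousAttempts();
    for (Container container : recovered) {
      System.out.println("still running from a previous attempt: "
          + container.getId());
    }
  }
}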


[11/50] [abbrv] hadoop git commit: YARN-7531. ResourceRequest.equal does not check ExecutionTypeRequest.enforceExecutionType().

2017-11-28 Thread kkaranasos
YARN-7531. ResourceRequest.equal does not check 
ExecutionTypeRequest.enforceExecutionType().


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/67bbbe1c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/67bbbe1c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/67bbbe1c

Branch: refs/heads/YARN-6592
Commit: 67bbbe1c0c05fa01b08a8dabe93c146935420450
Parents: 0ed44f2
Author: Haibo Chen 
Authored: Fri Nov 17 14:30:43 2017 -0800
Committer: Haibo Chen 
Committed: Tue Nov 21 09:09:16 2017 -0800

--
 .../yarn/api/records/ResourceRequest.java   |  3 +-
 .../hadoop/yarn/api/TestResourceRequest.java| 47 
 2 files changed, 48 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/67bbbe1c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java
index beb3380..e46647a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java
@@ -630,8 +630,7 @@ public abstract class ResourceRequest implements Comparable<ResourceRequest> {
   if (other.getExecutionTypeRequest() != null) {
 return false;
   }
-} else if (!execTypeRequest.getExecutionType()
-.equals(other.getExecutionTypeRequest().getExecutionType())) {
+} else if (!execTypeRequest.equals(other.getExecutionTypeRequest())) {
   return false;
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/67bbbe1c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestResourceRequest.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestResourceRequest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestResourceRequest.java
new file mode 100644
index 000..aef838c
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestResourceRequest.java
@@ -0,0 +1,47 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.api;
+
+import org.apache.hadoop.yarn.api.records.ExecutionType;
+import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
+import org.apache.hadoop.yarn.api.records.Priority;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.api.records.ResourceRequest;
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * The class to test {@link ResourceRequest}.
+ */
+public class TestResourceRequest {
+
+  @Test
+  public void testEqualsOnExecutionTypeRequest() {
+    ResourceRequest resourceRequestA =
+        ResourceRequest.newInstance(Priority.newInstance(0), "localhost",
+            Resource.newInstance(1024, 1), 1, false, "",
+            ExecutionTypeRequest.newInstance(ExecutionType.GUARANTEED, true));
+
+    ResourceRequest resourceRequestB =
+        ResourceRequest.newInstance(Priority.newInstance(0), "localhost",
+            Resource.newInstance(1024, 1), 1, false, "",
+            ExecutionTypeRequest.newInstance(ExecutionType.GUARANTEED, false));
+
+    Assert.assertFalse(resourceRequestA.equals(resourceRequestB));
+  }
+}





[45/50] [abbrv] hadoop git commit: YARN-6647. RM can crash during transitionToStandby due to InterruptedException. Contributed by Bibin A Chundatt

2017-11-28 Thread kkaranasos
YARN-6647. RM can crash during transitionToStandby due to InterruptedException. 
Contributed by Bibin A Chundatt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a2c7a73e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a2c7a73e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a2c7a73e

Branch: refs/heads/YARN-6592
Commit: a2c7a73e33045ce42cce19aacbe45c0421a61994
Parents: 6b76695
Author: Jason Lowe 
Authored: Tue Nov 28 11:10:18 2017 -0600
Committer: Jason Lowe 
Committed: Tue Nov 28 11:15:44 2017 -0600

--
 .../RMDelegationTokenSecretManager.java | 42 ++--
 1 file changed, 29 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a2c7a73e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/RMDelegationTokenSecretManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/RMDelegationTokenSecretManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/RMDelegationTokenSecretManager.java
index 53cc471..37cd741 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/RMDelegationTokenSecretManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/RMDelegationTokenSecretManager.java
@@ -82,14 +82,21 @@ public class RMDelegationTokenSecretManager extends
     return new RMDelegationTokenIdentifier();
   }
 
+  private boolean shouldIgnoreException(Exception e) {
+    return !running && e.getCause() instanceof InterruptedException;
+  }
+
   @Override
   protected void storeNewMasterKey(DelegationKey newKey) {
     try {
       LOG.info("storing master key with keyID " + newKey.getKeyId());
       rm.getRMContext().getStateStore().storeRMDTMasterKey(newKey);
     } catch (Exception e) {
-      LOG.error("Error in storing master key with KeyID: " + newKey.getKeyId());
-      ExitUtil.terminate(1, e);
+      if (!shouldIgnoreException(e)) {
+        LOG.error(
+            "Error in storing master key with KeyID: " + newKey.getKeyId());
+        ExitUtil.terminate(1, e);
+      }
     }
   }
 
@@ -99,8 +106,10 @@ public class RMDelegationTokenSecretManager extends
       LOG.info("removing master key with keyID " + key.getKeyId());
       rm.getRMContext().getStateStore().removeRMDTMasterKey(key);
     } catch (Exception e) {
-      LOG.error("Error in removing master key with KeyID: " + key.getKeyId());
-      ExitUtil.terminate(1, e);
+      if (!shouldIgnoreException(e)) {
+        LOG.error("Error in removing master key with KeyID: " +
+            key.getKeyId());
+        ExitUtil.terminate(1, e);
+      }
     }
   }
 
@@ -113,9 +122,11 @@ public class RMDelegationTokenSecretManager extends
       rm.getRMContext().getStateStore().storeRMDelegationToken(identifier,
           renewDate);
     } catch (Exception e) {
-      LOG.error("Error in storing RMDelegationToken with sequence number: "
-          + identifier.getSequenceNumber());
-      ExitUtil.terminate(1, e);
+      if (!shouldIgnoreException(e)) {
+        LOG.error("Error in storing RMDelegationToken with sequence number: "
+            + identifier.getSequenceNumber());
+        ExitUtil.terminate(1, e);
+      }
     }
   }
 
@@ -127,9 +138,11 @@ public class RMDelegationTokenSecretManager extends
           + id.getSequenceNumber());
       rm.getRMContext().getStateStore().updateRMDelegationToken(id, renewDate);
     } catch (Exception e) {
-      LOG.error("Error in updating persisted RMDelegationToken" +
-          " with sequence number: " + id.getSequenceNumber());
-      ExitUtil.terminate(1, e);
+      if (!shouldIgnoreException(e)) {
+        LOG.error("Error in updating persisted RMDelegationToken"
+            + " with sequence number: " + id.getSequenceNumber());
+        ExitUtil.terminate(1, e);
+      }
     }
   }
 
@@ -141,9 +154,12 @@ public class RMDelegationTokenSecretManager extends
           + ident.getSequenceNumber());
       rm.getRMContext().getStateStore().removeRMDelegationToken(ident);
     } catch (Exception e) {
-      LOG.error("Error in removing RMDelegationToken with sequence number: "
-          + ident.getSequenceNumber());
-      ExitUtil.terminate(1, e);
+      if (!shouldIgnoreException(e)) {
+        LOG.error("Error in removing RMDelegationToken with sequence number: "
+            + ident.getSequenceNumber());
+        ExitUtil.terminate(1, e);
+      }
     }
   }

[12/50] [abbrv] hadoop git commit: YARN-7513. Remove the scheduler lock in FSAppAttempt.getWeight() (Contributed by Wilfred Spiegelenburg)

2017-11-28 Thread kkaranasos
YARN-7513. Remove the scheduler lock in FSAppAttempt.getWeight() (Contributed 
by Wilfred Spiegelenburg)
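
The lock is removable because the new weight is a pure function of the current memory demand; a standalone sketch of the arithmetic (names are illustrative assumptions, not the FairScheduler API):

// log2(1 + memoryDemand), scaled by application priority; the same
// expression the patch below computes without taking the scheduler lock.
public final class SizeBasedWeight {

  static float weight(long demandMemoryMb, int appPriority) {
    float weight = (float) (Math.log1p(demandMemoryMb) / Math.log(2));
    return weight * appPriority;
  }

  public static void main(String[] args) {
    // A 4 GB demand at priority 1 yields a weight of roughly 12.
    System.out.println(weight(4096, 1));
  }
}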


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/03c311ea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/03c311ea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/03c311ea

Branch: refs/heads/YARN-6592
Commit: 03c311eae3ad591630a452921172a4406dbda181
Parents: 67bbbe1
Author: yufei 
Authored: Tue Nov 21 10:33:34 2017 -0800
Committer: yufei 
Committed: Tue Nov 21 10:33:34 2017 -0800

--
 .../resourcemanager/scheduler/fair/FSAppAttempt.java  | 14 --
 1 file changed, 4 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/03c311ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
index 94991eb..e711229 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
@@ -1304,20 +1304,14 @@ public class FSAppAttempt extends SchedulerApplicationAttempt
 
   @Override
   public float getWeight() {
-    double weight = 1.0;
+    float weight = 1.0F;
 
     if (scheduler.isSizeBasedWeight()) {
-      scheduler.getSchedulerReadLock().lock();
-
-      try {
-        // Set weight based on current memory demand
-        weight = Math.log1p(getDemand().getMemorySize()) / Math.log(2);
-      } finally {
-        scheduler.getSchedulerReadLock().unlock();
-      }
+      // Set weight based on current memory demand
+      weight = (float)(Math.log1p(demand.getMemorySize()) / Math.log(2));
     }
 
-    return (float)weight * this.getPriority().getPriority();
+    return weight * appPriority.getPriority();
   }
 
   @Override





[16/50] [abbrv] hadoop git commit: HADOOP-13786 Add S3A committer for zero-rename commits to S3 endpoints. Contributed by Steve Loughran and Ryan Blue.

2017-11-28 Thread kkaranasos
http://git-wip-us.apache.org/repos/asf/hadoop/blob/de8b6ca5/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/TestMagicCommitPaths.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/TestMagicCommitPaths.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/TestMagicCommitPaths.java
new file mode 100644
index 000..47d112d
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/TestMagicCommitPaths.java
@@ -0,0 +1,246 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.commit;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import com.google.common.collect.Lists;
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.test.LambdaTestUtils.*;
+import static org.apache.hadoop.fs.s3a.commit.MagicCommitPaths.*;
+import static org.apache.hadoop.fs.s3a.commit.CommitConstants.*;
+
+/**
+ * Tests for {@link MagicCommitPaths} path operations.
+ */
+public class TestMagicCommitPaths extends Assert {
+
+  private static final List<String> MAGIC_AT_ROOT =
+      list(MAGIC);
+  private static final List<String> MAGIC_AT_ROOT_WITH_CHILD =
+      list(MAGIC, "child");
+  private static final List<String> MAGIC_WITH_CHILD =
+      list("parent", MAGIC, "child");
+  private static final List<String> MAGIC_AT_WITHOUT_CHILD =
+      list("parent", MAGIC);
+
+  private static final List<String> DEEP_MAGIC =
+      list("parent1", "parent2", MAGIC, "child1", "child2");
+
+  public static final String[] EMPTY = {};
+
+  @Test
+  public void testSplitPathEmpty() throws Throwable {
+intercept(IllegalArgumentException.class,
+() -> splitPathToElements(new Path("")));
+  }
+
+  @Test
+  public void testSplitPathDoubleBackslash() {
+assertPathSplits("//", EMPTY);
+  }
+
+  @Test
+  public void testSplitRootPath() {
+assertPathSplits("/", EMPTY);
+  }
+
+  @Test
+  public void testSplitBasic() {
+assertPathSplits("/a/b/c",
+new String[]{"a", "b", "c"});
+  }
+
+  @Test
+  public void testSplitTrailingSlash() {
+assertPathSplits("/a/b/c/",
+new String[]{"a", "b", "c"});
+  }
+
+  @Test
+  public void testSplitShortPath() {
+assertPathSplits("/a",
+new String[]{"a"});
+  }
+
+  @Test
+  public void testSplitShortPathTrailingSlash() {
+assertPathSplits("/a/",
+new String[]{"a"});
+  }
+
+  @Test
+  public void testParentsMagicRoot() {
+assertParents(EMPTY, MAGIC_AT_ROOT);
+  }
+
+  @Test
+  public void testChildrenMagicRoot() {
+assertChildren(EMPTY, MAGIC_AT_ROOT);
+  }
+
+  @Test
+  public void testParentsMagicRootWithChild() {
+assertParents(EMPTY, MAGIC_AT_ROOT_WITH_CHILD);
+  }
+
+  @Test
+  public void testChildMagicRootWithChild() {
+assertChildren(a("child"), MAGIC_AT_ROOT_WITH_CHILD);
+  }
+
+  @Test
+  public void testChildrenMagicWithoutChild() {
+assertChildren(EMPTY, MAGIC_AT_WITHOUT_CHILD);
+  }
+
+  @Test
+  public void testChildMagicWithChild() {
+assertChildren(a("child"), MAGIC_WITH_CHILD);
+  }
+
+  @Test
+  public void testParentMagicWithChild() {
+assertParents(a("parent"), MAGIC_WITH_CHILD);
+  }
+
+  @Test
+  public void testParentDeepMagic() {
+assertParents(a("parent1", "parent2"), DEEP_MAGIC);
+  }
+
+  @Test
+  public void testChildrenDeepMagic() {
+assertChildren(a("child1", "child2"), DEEP_MAGIC);
+  }
+
+  @Test
+  public void testLastElementEmpty() throws Throwable {
+intercept(IllegalArgumentException.class,
+() -> lastElement(new ArrayList<>(0)));
+  }
+
+  @Test
+  public void testLastElementSingle() {
+assertEquals("first", lastElement(l("first")));
+  }
+
+  @Test
+  public void testLastElementDouble() {
+assertEquals("2", lastElement(l("first", "2")));
+  }
+
+  @Test
+  public void testFinalDestinationNoMagic() {
+assertEquals(l("first", "2"),
+finalDestination(l("first", "2")));
+  }
+
+  @Test
+  public void testFinalDestinationMagic1() {
+assertEquals(l("first", "2"),
+finalDestination(l("first", MAGIC, 

[35/50] [abbrv] hadoop git commit: HADOOP-15067. GC time percentage reported in JvmMetrics should be a gauge, not counter. Contributed by Misha Dmitriev.

2017-11-28 Thread kkaranasos
HADOOP-15067. GC time percentage reported in JvmMetrics should be a gauge, not 
counter. Contributed by Misha Dmitriev.
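
A counter is assumed to be monotonically non-decreasing, while the GC time percentage is recomputed over a moving observation window and can drop; a standalone illustration of why gauge semantics fit (plain Java, not the metrics2 API):

// GC time percentage per window: 12 -> 45 -> 8. A gauge may move in both
// directions; publishing this as a counter would break consumers that
// assume counters never decrease (e.g. when computing rates).
public class GcTimePercentageDemo {
  public static void main(String[] args) {
    long[] gcMillisPerWindow = {120, 450, 80}; // observed GC time per window
    long windowMillis = 1000;                  // window length

    for (long gcMillis : gcMillisPerWindow) {
      int percentage = (int) (100 * gcMillis / windowMillis);
      System.out.println("GcTimePercentage gauge value: " + percentage);
    }
  }
}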


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d162252d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d162252d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d162252d

Branch: refs/heads/YARN-6592
Commit: d162252d7a7223631ff66ba0210953296407e55f
Parents: b46ca7e
Author: Xiao Chen 
Authored: Thu Nov 23 09:00:59 2017 -0800
Committer: Xiao Chen 
Committed: Thu Nov 23 09:01:28 2017 -0800

--
 .../main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java| 2 +-
 .../java/org/apache/hadoop/metrics2/source/TestJvmMetrics.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d162252d/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
index 8c3375f..5f9afdd 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
@@ -188,7 +188,7 @@ public class JvmMetrics implements MetricsSource {
     }
 
     if (gcTimeMonitor != null) {
-      rb.addCounter(GcTimePercentage,
+      rb.addGauge(GcTimePercentage,
           gcTimeMonitor.getLatestGcData().getGcTimePercentage());
     }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d162252d/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/source/TestJvmMetrics.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/source/TestJvmMetrics.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/source/TestJvmMetrics.java
index 5320b6e..aa1b009 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/source/TestJvmMetrics.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/source/TestJvmMetrics.java
@@ -101,7 +101,7 @@ public class TestJvmMetrics {
     verify(rb).tag(SessionId, "test");
     for (JvmMetricsInfo info : JvmMetricsInfo.values()) {
       if (info.name().equals("GcTimePercentage")) {
-        verify(rb).addCounter(eq(info), anyInt());
+        verify(rb).addGauge(eq(info), anyInt());
       }
     }
   }





[29/50] [abbrv] hadoop git commit: HDFS-12847. Regenerate editsStored and editsStored.xml in HDFS tests. Contributed by Lei (Eddy) Xu.

2017-11-28 Thread kkaranasos
HDFS-12847. Regenerate editsStored and editsStored.xml in HDFS tests. 
Contributed by Lei (Eddy) Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/785732c1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/785732c1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/785732c1

Branch: refs/heads/YARN-6592
Commit: 785732c13e2ebe9f27350b6be82eb2fb782d7dc4
Parents: d42a336
Author: Lei Xu 
Authored: Wed Nov 22 10:19:58 2017 -0800
Committer: Lei Xu 
Committed: Wed Nov 22 10:22:32 2017 -0800

--
 .../hadoop-hdfs/src/test/resources/editsStored  | Bin 6293 -> 6753 bytes
 .../src/test/resources/editsStored.xml  | 750 +++
 2 files changed, 423 insertions(+), 327 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/785732c1/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
index 8029575..3f2817a 100644
Binary files a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored 
and b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored differ

http://git-wip-us.apache.org/repos/asf/hadoop/blob/785732c1/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
index 0a1c25e..2a57c73 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
@@ -13,8 +13,8 @@
   2
   
 1
-1423097579620
-ef3f2032e2797e8e
+1512000829976
+e7457bcc6ab95a84
   
 
   
@@ -24,8 +24,8 @@
   3
   
 2
-1423097579622
-b978ed731a0b4a65
+1512000829980
+07cc38caf6c47bb4
   
 
   
@@ -37,19 +37,19 @@
   16386
   /file_create
   1
-  1422406380345
-  1422406380345
+  1511309632199
+  1511309632199
   512
-  DFSClient_NONMAPREDUCE_-156773767_1
+  DFSClient_NONMAPREDUCE_2134933941_1
   127.0.0.1
   true
   
-xyao
+lei
 supergroup
 420
   
-  7334ec24-dd6b-4efd-807d-ed0d18625534
-  6
+  a4dc081c-6d6f-42d6-af5b-d260228f1aad
+  5
 
   
   
@@ -60,14 +60,14 @@
   0
   /file_create
   1
-  1422406380369
-  1422406380345
+  1511309632248
+  1511309632199
   512
   
   
   false
   
-xyao
+lei
 supergroup
 420
   
@@ -78,11 +78,11 @@
 
   6
   /file_create
-  DFSClient_NONMAPREDUCE_-156773767_1
+  DFSClient_NONMAPREDUCE_2134933941_1
   127.0.0.1
   false
-  7334ec24-dd6b-4efd-807d-ed0d18625534
-  8
+  a4dc081c-6d6f-42d6-af5b-d260228f1aad
+  7
 
   
   
@@ -93,23 +93,118 @@
   0
   /file_create
   1
-  1422406380376
-  1422406380345
+  1511309632263
+  1511309632199
   512
   
   
   false
   
-xyao
+lei
 supergroup
 420
   
 
   
   
-OP_SET_STORAGE_POLICY
+OP_ADD
 
   8
+  0
+  16387
+  /update_blocks
+  1
+  1511309632266
+  1511309632266
+  4096
+  DFSClient_NONMAPREDUCE_2134933941_1
+  127.0.0.1
+  true
+  
+lei
+supergroup
+420
+  
+  a4dc081c-6d6f-42d6-af5b-d260228f1aad
+  9
+
+  
+  
+OP_ALLOCATE_BLOCK_ID
+
+  9
+  1073741825
+
+  
+  
+OP_SET_GENSTAMP_V2
+
+  10
+  1001
+
+  
+  
+OP_ADD_BLOCK
+
+  11
+  /update_blocks
+  
+1073741825
+0
+1001
+  
+  
+  -2
+
+  
+  
+OP_UPDATE_BLOCKS
+
+  12
+  /update_blocks
+  
+1073741825
+1
+1001
+  
+  
+  -2
+
+  
+  
+OP_UPDATE_BLOCKS
+
+  13
+  /update_blocks
+  
+  -2
+
+  
+  
+OP_CLOSE
+
+  14
+  0
+  0
+  /update_blocks
+  1
+  1511309632454
+  1511309632266
+  4096
+  
+  
+  false
+  
+lei
+supergroup
+420
+  
+
+  
+  
+OP_SET_STORAGE_POLICY
+
+  15
   /file_create
   7
 
@@ -117,36 +212,36 @@
   
 OP_RENAME_OLD
 
-  9
+  16
   0
   /file_create
   /file_moved
-  1422406380383
-  

[17/50] [abbrv] hadoop git commit: HADOOP-13786 Add S3A committer for zero-rename commits to S3 endpoints. Contributed by Steve Loughran and Ryan Blue.

2017-11-28 Thread kkaranasos
http://git-wip-us.apache.org/repos/asf/hadoop/blob/de8b6ca5/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractITCommitProtocol.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractITCommitProtocol.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractITCommitProtocol.java
new file mode 100644
index 000..4d7f524
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractITCommitProtocol.java
@@ -0,0 +1,1371 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.commit;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileSystemTestHelper;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.MapFile;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.JobStatus;
+import org.apache.hadoop.mapreduce.MRJobConfig;
+import org.apache.hadoop.mapreduce.OutputCommitter;
+import org.apache.hadoop.mapreduce.OutputFormat;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.TaskAttemptID;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat;
+import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
+import org.apache.hadoop.mapreduce.task.JobContextImpl;
+import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl;
+import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.concurrent.HadoopExecutors;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.*;
+import static org.apache.hadoop.fs.s3a.S3AUtils.*;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.*;
+import static org.apache.hadoop.fs.s3a.commit.CommitConstants.*;
+import static org.apache.hadoop.test.LambdaTestUtils.*;
+
+/**
+ * Test the job/task commit actions of an S3A Committer, including trying to
+ * simulate some failure and retry conditions.
+ * Derived from
+ * {@code org.apache.hadoop.mapreduce.lib.output.TestFileOutputCommitter}.
+ *
+ * This is a complex test suite as it tries to explore the full lifecycle
+ * of committers, and is designed for subclassing.
+ */
+@SuppressWarnings({"unchecked", "ThrowableNotThrown", "unused"})
+public abstract class AbstractITCommitProtocol extends AbstractCommitITest {
+  private Path outDir;
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(AbstractITCommitProtocol.class);
+
+  private static final String SUB_DIR = "SUB_DIR";
+
+  protected static final String PART_0 = "part-m-0";
+
+  /**
+   * Counter to guarantee that even in parallel test runs, no job has the same
+   * ID.
+   */
+
+  private String jobId;
+
+  // A random task attempt id for testing.
+  private String attempt0;
+  private TaskAttemptID taskAttempt0;
+
+  private String attempt1;
+  private TaskAttemptID taskAttempt1;
+
+  private static final Text KEY_1 = new Text("key1");
+  private static final Text KEY_2 = new Text("key2");
+  private static final Text VAL_1 = new Text("val1");
+  private static final Text VAL_2 = new Text("val2");
+
+  /** A job to abort in test case teardown. */
+  

[10/50] [abbrv] hadoop git commit: HADOOP-15046. Document Apache Hadoop does not support Java 9 in BUILDING.txt. Contributed by Hanisha Koneru.

2017-11-28 Thread kkaranasos
HADOOP-15046. Document Apache Hadoop does not support Java 9 in BUILDING.txt. 
Contributed by Hanisha Koneru.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0ed44f25
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0ed44f25
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0ed44f25

Branch: refs/heads/YARN-6592
Commit: 0ed44f25653ad2d97e2726140a7f77a555c40471
Parents: 659e85e
Author: Akira Ajisaka 
Authored: Wed Nov 22 01:07:42 2017 +0900
Committer: Akira Ajisaka 
Committed: Wed Nov 22 01:07:42 2017 +0900

--
 BUILDING.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0ed44f25/BUILDING.txt
--
diff --git a/BUILDING.txt b/BUILDING.txt
index 9955563..dec3011 100644
--- a/BUILDING.txt
+++ b/BUILDING.txt
@@ -4,7 +4,7 @@ Build instructions for Hadoop
 Requirements:
 
 * Unix System
-* JDK 1.8+
+* JDK 1.8
 * Maven 3.3 or later
 * ProtocolBuffer 2.5.0
 * CMake 3.1 or newer (if compiling native code)
@@ -344,7 +344,7 @@ Building on Windows
 Requirements:
 
 * Windows System
-* JDK 1.8+
+* JDK 1.8
 * Maven 3.0 or later
 * ProtocolBuffer 2.5.0
 * CMake 3.1 or newer





[08/50] [abbrv] hadoop git commit: HDFS-12804. Use slf4j instead of log4j in FSEditLog. Contributed by Mukul Kumar Singh.

2017-11-28 Thread kkaranasos
HDFS-12804. Use slf4j instead of log4j in FSEditLog. Contributed by Mukul Kumar 
Singh.
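
The mechanical shape of the migration, shown on a throwaway class (slf4j defines no FATAL level, so former LOG.fatal() call sites become LOG.error(), and Log-typed helpers such as IOUtils.cleanup(Log, ...) move to their Logger-accepting variants, as the diff below shows; the class itself is a made-up example):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Before: public static final Log LOG = LogFactory.getLog(Demo.class);
// After, with slf4j:
public class Demo {
  public static final Logger LOG = LoggerFactory.getLogger(Demo.class);

  public static void main(String[] args) {
    // slf4j supports {} placeholders instead of string concatenation.
    LOG.info("edit log synced up to txid {}", 42L);
    // Was LOG.fatal(msg, e) before the migration; slf4j has no fatal().
    LOG.error("could not sync enough journals", new Exception("example"));
  }
}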


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/60fc2a13
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/60fc2a13
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/60fc2a13

Branch: refs/heads/YARN-6592
Commit: 60fc2a138827c2c29fa7e9d6844e3b8d43809726
Parents: 0d781dd
Author: Chen Liang 
Authored: Mon Nov 20 12:49:53 2017 -0800
Committer: Chen Liang 
Committed: Mon Nov 20 12:49:53 2017 -0800

--
 .../hadoop/hdfs/server/namenode/FSEditLog.java  | 23 ++--
 .../hdfs/server/namenode/TestEditLog.java   |  4 ++--
 .../server/namenode/TestEditLogAutoroll.java| 10 -
 .../hdfs/server/namenode/TestEditLogRace.java   |  4 ++--
 .../server/namenode/ha/TestEditLogTailer.java   |  8 +++
 5 files changed, 24 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/60fc2a13/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
index 7ca63f8..72e00ee 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
@@ -29,8 +29,6 @@ import java.util.Iterator;
 import java.util.List;
 import java.util.concurrent.atomic.AtomicLong;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -114,6 +112,8 @@ import 
org.apache.hadoop.security.token.delegation.DelegationKey;
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * FSEditLog maintains a log of the namespace modifications.
@@ -122,9 +122,7 @@ import com.google.common.collect.Lists;
 @InterfaceAudience.Private
 @InterfaceStability.Evolving
 public class FSEditLog implements LogsPurgeable {
-
-  public static final Log LOG = LogFactory.getLog(FSEditLog.class);
-
+  public static final Logger LOG = LoggerFactory.getLogger(FSEditLog.class);
   /**
* State machine for edit log.
* 
@@ -329,7 +327,8 @@ public class FSEditLog implements LogsPurgeable {
       String error = String.format("Cannot start writing at txid %s " +
           "when there is a stream available for read: %s",
           segmentTxId, streams.get(0));
-      IOUtils.cleanup(LOG, streams.toArray(new EditLogInputStream[0]));
+      IOUtils.cleanupWithLogger(LOG,
+          streams.toArray(new EditLogInputStream[0]));
       throw new IllegalStateException(error);
     }
 
@@ -689,9 +688,9 @@ public class FSEditLog implements LogsPurgeable {
 "Could not sync enough journals to persistent storage " +
 "due to " + e.getMessage() + ". " +
 "Unsynced transactions: " + (txid - synctxid);
-LOG.fatal(msg, new Exception());
+LOG.error(msg, new Exception());
 synchronized(journalSetLock) {
-  IOUtils.cleanup(LOG, journalSet);
+  IOUtils.cleanupWithLogger(LOG, journalSet);
 }
 terminate(1, msg);
   }
@@ -715,9 +714,9 @@ public class FSEditLog implements LogsPurgeable {
   final String msg =
   "Could not sync enough journals to persistent storage. "
   + "Unsynced transactions: " + (txid - synctxid);
-  LOG.fatal(msg, new Exception());
+  LOG.error(msg, new Exception());
   synchronized(journalSetLock) {
-IOUtils.cleanup(LOG, journalSet);
+IOUtils.cleanupWithLogger(LOG, journalSet);
   }
   terminate(1, msg);
 }
@@ -772,7 +771,7 @@ public class FSEditLog implements LogsPurgeable {
 buf.append(editLogStream.getNumSync());
 buf.append(" SyncTimes(ms): ");
 buf.append(journalSet.getSyncTimes());
-LOG.info(buf);
+LOG.info(buf.toString());
   }
 
   /** Record the RPC IDs if necessary */
@@ -1711,7 +1710,7 @@ public class FSEditLog implements LogsPurgeable {
   if (recovery != null) {
 // If recovery mode is enabled, continue loading even if we know we
 // can't load up to toAtLeastTxId.

[06/50] [abbrv] hadoop git commit: HADOOP-15024 Support user agent configuration and include that & Hadoop version information to oss server. Contributed by Sammi Chen.

2017-11-28 Thread kkaranasos
HADOOP-15024 Support user agent configuration and include it, along with Hadoop
version information, in requests to the OSS server.
Contributed by Sammi Chen.
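
Assuming the new key behaves as the diff below indicates, a deployment would set the prefix like this (the value is an example; the client appends ", Hadoop/<version>" to it):

import org.apache.hadoop.conf.Configuration;

// Hedged sketch: fs.oss.user.agent.prefix is the property the patch adds;
// the OSS client then sends "<prefix>, Hadoop/<version>" as its User-Agent.
public class OssUserAgentExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("fs.oss.user.agent.prefix", "MyApp/1.0"); // example value
    System.out.println(conf.get("fs.oss.user.agent.prefix"));
  }
}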


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c326fc89
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c326fc89
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c326fc89

Branch: refs/heads/YARN-6592
Commit: c326fc89b06a8fe0978306378ba217748c7f2054
Parents: 9fb4eff
Author: Steve Loughran 
Authored: Mon Nov 20 18:56:42 2017 +
Committer: Steve Loughran 
Committed: Mon Nov 20 18:56:42 2017 +

--
 .../apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java | 4 
 .../main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java  | 7 +++
 .../src/site/markdown/tools/hadoop-aliyun/index.md| 2 +-
 3 files changed, 12 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c326fc89/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index 2e8edc7..a7f13c0 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -53,6 +53,7 @@ import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.util.VersionInfo;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -101,6 +102,9 @@ public class AliyunOSSFileSystemStore {
         ESTABLISH_TIMEOUT_DEFAULT));
     clientConf.setSocketTimeout(conf.getInt(SOCKET_TIMEOUT_KEY,
         SOCKET_TIMEOUT_DEFAULT));
+    clientConf.setUserAgent(
+        conf.get(USER_AGENT_PREFIX, USER_AGENT_PREFIX_DEFAULT) + ", Hadoop/"
+        + VersionInfo.getVersion());
 
     String proxyHost = conf.getTrimmed(PROXY_HOST_KEY, "");
     int proxyPort = conf.getInt(PROXY_PORT_KEY, -1);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c326fc89/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
index 04a2ccd..baa171f 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.fs.aliyun.oss;
 
+import com.aliyun.oss.common.utils.VersionInfoUtils;
+
 /**
  * ALL configuration constants for OSS filesystem.
  */
@@ -26,6 +28,11 @@ public final class Constants {
   private Constants() {
   }
 
+  // User agent
+  public static final String USER_AGENT_PREFIX = "fs.oss.user.agent.prefix";
+  public static final String USER_AGENT_PREFIX_DEFAULT =
+  VersionInfoUtils.getDefaultUserAgent();
+
   // Class of credential provider
   public static final String ALIYUN_OSS_CREDENTIALS_PROVIDER_KEY =
   "fs.oss.credentials.provider";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c326fc89/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
index 2913279..9f24ce6 100644
--- a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -274,7 +274,7 @@ XInclude inclusion. Here is an example of 
`contract-test-options.xml`:
 
   <property>
     <name>fs.oss.impl</name>
-    <value>org.apache.hadoop.fs.aliyun.AliyunOSSFileSystem</value>
+    <value>org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem</value>
   </property>
 
   





[07/50] [abbrv] hadoop git commit: YARN-7527. Over-allocate node resource in async-scheduling mode of CapacityScheduler. (Tao Yang via wangda)

2017-11-28 Thread kkaranasos
YARN-7527. Over-allocate node resource in async-scheduling mode of 
CapacityScheduler. (Tao Yang via wangda)

Change-Id: I51ae6c2ab7a3d1febdd7d8d0519b63a13295ac7d
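
The over-allocation came from a validation result that was computed but never acted on; the fix gives the common check a boolean result and makes the caller reject the proposal, in the usual guard-clause shape (names below are illustrative assumptions, not the CapacityScheduler API):

// Standalone sketch of the guard added by this patch: fail the commit as
// soon as the common allocation check fails, instead of falling through.
public class AllocationGuardSketch {

  boolean tryCommit(long availableMb, long requestedMb) {
    if (!commonCheckContainerAllocation(availableMb, requestedMb)) {
      return false; // reject: committing anyway would over-allocate the node
    }
    return true; // safe to commit
  }

  private boolean commonCheckContainerAllocation(long availableMb,
      long requestedMb) {
    return requestedMb <= availableMb;
  }
}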


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0d781dd0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0d781dd0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0d781dd0

Branch: refs/heads/YARN-6592
Commit: 0d781dd03b979d65de94978071b2faa55005b34a
Parents: c326fc8
Author: Wangda Tan 
Authored: Mon Nov 20 11:48:15 2017 -0800
Committer: Wangda Tan 
Committed: Mon Nov 20 11:48:15 2017 -0800

--
 .../scheduler/common/fica/FiCaSchedulerApp.java |  4 +-
 .../TestCapacitySchedulerAsyncScheduling.java   | 71 
 2 files changed, 74 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0d781dd0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
index 40405fc..e9bee14 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
@@ -417,7 +417,9 @@ public class FiCaSchedulerApp extends SchedulerApplicationAttempt {
 
       // Common part of check container allocation regardless if it is a
       // increase container or regular container
-      commonCheckContainerAllocation(allocation, schedulerContainer);
+      if (!commonCheckContainerAllocation(allocation, schedulerContainer)) {
+        return false;
+      }
     } else {
       // Container reserved first time will be NEW, after the container
       // accepted & confirmed, it will become RESERVED state

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0d781dd0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAsyncScheduling.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAsyncScheduling.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAsyncScheduling.java
index 0c3130d..77596e2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAsyncScheduling.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAsyncScheduling.java
@@ -405,6 +405,77 @@ public class TestCapacitySchedulerAsyncScheduling {
 rm.stop();
   }
 
+  @Test (timeout = 30000)
+  public void testNodeResourceOverAllocated()
+  throws Exception {
+// disable async-scheduling for simulating complex scene
+Configuration disableAsyncConf = new Configuration(conf);
+disableAsyncConf.setBoolean(
+CapacitySchedulerConfiguration.SCHEDULE_ASYNCHRONOUSLY_ENABLE, false);
+
+// init RM & NMs & Nodes
+final MockRM rm = new MockRM(disableAsyncConf);
+rm.start();
+final MockNM nm1 = rm.registerNode("h1:1234", 9 * GB);
+final MockNM nm2 = rm.registerNode("h2:1234", 9 * GB);
+    List<MockNM> nmLst = new ArrayList<>();
+nmLst.add(nm1);
+nmLst.add(nm2);
+
+// init scheduler & nodes
+while (
+((CapacityScheduler) rm.getRMContext().getScheduler()).getNodeTracker()
+.nodeCount() < 2) {
+  Thread.sleep(10);
+}
+Assert.assertEquals(2,
+((AbstractYarnScheduler) rm.getRMContext().getScheduler())
+  
