[hadoop] branch trunk updated (a346381 -> 3f89084)

2019-09-24 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from a346381  HDDS-2168. TestOzoneManagerDoubleBufferWithOMResponse 
sometimes fails with out of memory error (#1509)
 add 3f89084  HDFS-14845. Ignore AuthenticationFilterInitializer for 
HttpFSServerWebServer and honor hadoop.http.authentication configs.

No new revisions were added by this update.

Summary of changes:
 .../src/site/markdown/DeprecatedProperties.md  |  4 
 .../fs/http/server/HttpFSAuthenticationFilter.java | 16 ++--
 .../fs/http/server/HttpFSServerWebServer.java  | 22 ++
 .../src/main/resources/httpfs-default.xml  | 20 
 4 files changed, 56 insertions(+), 6 deletions(-)
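The hadoop.http.authentication.* keys referenced above are ordinary hadoop-common configuration, so they can be inspected the same way any component reads them. A minimal sketch, assuming only hadoop-common on the classpath (the class name is illustrative; both keys ship with hadoop-common):

import org.apache.hadoop.conf.Configuration;

public class HttpAuthConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // "simple" (pseudo) auth is the stock default for the HTTP endpoints.
    String type = conf.get("hadoop.http.authentication.type", "simple");
    boolean anonymous = conf.getBoolean(
        "hadoop.http.authentication.simple.anonymous.allowed", true);
    System.out.println("auth type=" + type + ", anonymous allowed=" + anonymous);
  }
}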





[hadoop] branch trunk updated (6917754 -> a346381)

2019-09-24 Thread bharat
This is an automated email from the ASF dual-hosted git repository.

bharat pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 6917754  HDDS-2172. Ozone shell should remove description about REST 
protocol support. Contributed by Siddharth Wagle.
 add a346381  HDDS-2168. TestOzoneManagerDoubleBufferWithOMResponse 
sometimes fails with out of memory error (#1509)

No new revisions were added by this update.

Summary of changes:
 .../TestOzoneManagerDoubleBufferWithOMResponse.java | 21 +++--
 1 file changed, 7 insertions(+), 14 deletions(-)





[hadoop] branch trunk updated: HDDS-2172. Ozone shell should remove description about REST protocol support. Contributed by Siddharth Wagle.

2019-09-24 Thread bharat
This is an automated email from the ASF dual-hosted git repository.

bharat pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 6917754  HDDS-2172. Ozone shell should remove description about REST 
protocol support. Contributed by Siddharth Wagle.
6917754 is described below

commit 6917754ba78a4754f826117fa7909bb116543114
Author: Bharat Viswanadham 
AuthorDate: Tue Sep 24 16:06:10 2019 -0700

HDDS-2172. Ozone shell should remove description about REST protocol 
support. Contributed by Siddharth Wagle.
---
 .../src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java   | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
index 118a8a4..999eede 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
+++ 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
@@ -35,8 +35,7 @@ public abstract class Shell extends GenericCli {
   private static final Logger LOG = LoggerFactory.getLogger(Shell.class);
 
   public static final String OZONE_URI_DESCRIPTION = "Ozone URI could start "
-  + "with o3:// or http(s):// or without prefix. REST protocol will "
-  + "be used for http(s), RPC otherwise. URI may contain the host and port 
"
+  + "with o3:// or without prefix. URI may contain the host and port "
   + "of the OM server. Both are optional. "
   + "If they are not specified it will be identified from "
   + "the config files.";





[hadoop] branch branch-3.2 updated: YARN-9730. Support forcing configured partitions to be exclusive based on app node label

2019-09-24 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 806c7b7  YARN-9730. Support forcing configured partitions to be 
exclusive based on app node label
806c7b7 is described below

commit 806c7b7dfb0517b9904df8be132fd2c011b74b9f
Author: Jonathan Hung 
AuthorDate: Tue Sep 24 12:13:29 2019 -0700

YARN-9730. Support forcing configured partitions to be exclusive based on 
app node label

(cherry picked from commit 73a044a63822303f792183244e25432528ecfb1e)
---
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |   6 +
 .../src/main/resources/yarn-default.xml|   9 +
 .../resourcemanager/DefaultAMSProcessor.java   |   6 +
 .../yarn/server/resourcemanager/RMAppManager.java  |   6 +
 .../yarn/server/resourcemanager/RMContext.java |   4 +
 .../yarn/server/resourcemanager/RMContextImpl.java |  15 ++
 .../scheduler/SchedulerApplicationAttempt.java |   9 +
 .../resourcemanager/scheduler/SchedulerUtils.java  |  23 ++
 .../capacity/CapacitySchedulerConfiguration.java   |   7 +
 .../scheduler/capacity/LeafQueue.java  |   8 +-
 .../policy/AbstractComparatorOrderingPolicy.java   |   8 +-
 .../FifoOrderingPolicyWithExclusivePartitions.java | 144 
 ...chedulableEntity.java => IteratorSelector.java} |  48 ++--
 .../scheduler/policy/OrderingPolicy.java   |   3 +-
 .../scheduler/policy/SchedulableEntity.java|   5 +
 .../scheduler/TestSchedulerUtils.java  | 142 
 .../scheduler/capacity/TestCapacityScheduler.java  |  11 +-
 .../scheduler/capacity/TestLeafQueue.java  | 145 +++-
 .../scheduler/policy/MockSchedulableEntity.java|  15 +-
 .../scheduler/policy/TestFairOrderingPolicy.java   |  12 +-
 .../scheduler/policy/TestFifoOrderingPolicy.java   |   2 +-
 .../TestFifoOrderingPolicyForPendingApps.java  |   5 +-
 ...tFifoOrderingPolicyWithExclusivePartitions.java | 244 +
 23 files changed, 821 insertions(+), 56 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 87d2f0c..43ee826 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -3624,6 +3624,12 @@ public class YarnConfiguration extends Configuration {
   public static final String DEFAULT_NODELABEL_CONFIGURATION_TYPE =
   CENTRALIZED_NODELABEL_CONFIGURATION_TYPE;
 
+  public static final String EXCLUSIVE_ENFORCED_PARTITIONS_SUFFIX
+  = "exclusive-enforced-partitions";
+
+  public static final String EXCLUSIVE_ENFORCED_PARTITIONS = NODE_LABELS_PREFIX
+  + EXCLUSIVE_ENFORCED_PARTITIONS_SUFFIX;
+
   public static final String MAX_CLUSTER_LEVEL_APPLICATION_PRIORITY =
   YARN_PREFIX + "cluster.max-application-priority";
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 0835d02..f9efeb9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -4103,4 +4103,13 @@
 6
   
 
+  <property>
+    <description>
+      Comma-separated list of partitions. If a label P is in this list,
+      then the RM will enforce that an app has resource requests with label
+      P iff that app's node label expression is P.
+    </description>
+    <name>yarn.node-labels.exclusive-enforced-partitions</name>
+    <value></value>
+  </property>
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
index a3bdb2f..dc1c952 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
@@ -76,6 +76,7 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.Allocation;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ContainerUpdates;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler
 

[hadoop] branch branch-2 updated: YARN-9730. Support forcing configured partitions to be exclusive based on app node label

2019-09-24 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new eedbf9d  YARN-9730. Support forcing configured partitions to be 
exclusive based on app node label
eedbf9d is described below

commit eedbf9d195f9270d3b6128a638c8eaf1a005aa78
Author: Jonathan Hung 
AuthorDate: Tue Sep 24 12:13:29 2019 -0700

YARN-9730. Support forcing configured partitions to be exclusive based on 
app node label

(cherry picked from commit 73a044a63822303f792183244e25432528ecfb1e)
(cherry picked from commit dd094d79023f6598e47146166aa8c213e03d41b7)
(cherry picked from commit 10bdcb6f1da3b86146efa479c0bbc8d1da505789)
---
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |   6 +
 .../src/main/resources/yarn-default.xml|   9 +
 .../resourcemanager/DefaultAMSProcessor.java   |   6 +
 .../yarn/server/resourcemanager/RMAppManager.java  |   6 +
 .../yarn/server/resourcemanager/RMContext.java |   4 +
 .../yarn/server/resourcemanager/RMContextImpl.java |  15 ++
 .../scheduler/SchedulerApplicationAttempt.java |   9 +
 .../resourcemanager/scheduler/SchedulerUtils.java  |  23 ++
 .../capacity/CapacitySchedulerConfiguration.java   |   7 +
 .../scheduler/capacity/LeafQueue.java  |   8 +-
 .../policy/AbstractComparatorOrderingPolicy.java   |   8 +-
 .../FifoOrderingPolicyWithExclusivePartitions.java | 144 
 ...chedulableEntity.java => IteratorSelector.java} |  48 ++--
 .../scheduler/policy/OrderingPolicy.java   |   3 +-
 .../scheduler/policy/SchedulableEntity.java|   5 +
 .../scheduler/TestSchedulerUtils.java  | 142 
 .../scheduler/capacity/TestCapacityScheduler.java  |  11 +-
 .../scheduler/capacity/TestLeafQueue.java  | 143 +++-
 .../scheduler/policy/MockSchedulableEntity.java|  15 +-
 .../scheduler/policy/TestFairOrderingPolicy.java   |  12 +-
 .../scheduler/policy/TestFifoOrderingPolicy.java   |   2 +-
 .../TestFifoOrderingPolicyForPendingApps.java  |   5 +-
 ...tFifoOrderingPolicyWithExclusivePartitions.java | 244 +
 23 files changed, 820 insertions(+), 55 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 76d4500..7139818 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -3207,6 +3207,12 @@ public class YarnConfiguration extends Configuration {
   public static final String DEFAULT_NODELABEL_CONFIGURATION_TYPE =
   CENTRALIZED_NODELABEL_CONFIGURATION_TYPE;
 
+  public static final String EXCLUSIVE_ENFORCED_PARTITIONS_SUFFIX
+  = "exclusive-enforced-partitions";
+
+  public static final String EXCLUSIVE_ENFORCED_PARTITIONS = NODE_LABELS_PREFIX
+  + EXCLUSIVE_ENFORCED_PARTITIONS_SUFFIX;
+
   public static final String MAX_CLUSTER_LEVEL_APPLICATION_PRIORITY =
   YARN_PREFIX + "cluster.max-application-priority";
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 6574016..0c26b4b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -3557,4 +3557,13 @@
 6
   
 
+  <property>
+    <description>
+      Comma-separated list of partitions. If a label P is in this list,
+      then the RM will enforce that an app has resource requests with label
+      P iff that app's node label expression is P.
+    </description>
+    <name>yarn.node-labels.exclusive-enforced-partitions</name>
+    <value></value>
+  </property>
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
index 0baf17a..65bbaca 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
@@ -71,6 +71,7 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.Allocation;
 import 

[hadoop] branch branch-3.1 updated: YARN-9730. Support forcing configured partitions to be exclusive based on app node label

2019-09-24 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 783cbce  YARN-9730. Support forcing configured partitions to be 
exclusive based on app node label
783cbce is described below

commit 783cbced1db3adbd52a39efa825e20070c726edf
Author: Jonathan Hung 
AuthorDate: Tue Sep 24 12:13:29 2019 -0700

YARN-9730. Support forcing configured partitions to be exclusive based on 
app node label

(cherry picked from commit 73a044a63822303f792183244e25432528ecfb1e)
(cherry picked from commit dd094d79023f6598e47146166aa8c213e03d41b7)
---
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |   6 +
 .../src/main/resources/yarn-default.xml|   9 +
 .../resourcemanager/DefaultAMSProcessor.java   |   6 +
 .../yarn/server/resourcemanager/RMAppManager.java  |   6 +
 .../yarn/server/resourcemanager/RMContext.java |   4 +
 .../yarn/server/resourcemanager/RMContextImpl.java |  15 ++
 .../scheduler/SchedulerApplicationAttempt.java |   9 +
 .../resourcemanager/scheduler/SchedulerUtils.java  |  23 ++
 .../capacity/CapacitySchedulerConfiguration.java   |   7 +
 .../scheduler/capacity/LeafQueue.java  |   8 +-
 .../policy/AbstractComparatorOrderingPolicy.java   |   8 +-
 .../FifoOrderingPolicyWithExclusivePartitions.java | 144 
 ...chedulableEntity.java => IteratorSelector.java} |  48 ++--
 .../scheduler/policy/OrderingPolicy.java   |   3 +-
 .../scheduler/policy/SchedulableEntity.java|   5 +
 .../scheduler/TestSchedulerUtils.java  | 142 
 .../scheduler/capacity/TestCapacityScheduler.java  |  11 +-
 .../scheduler/capacity/TestLeafQueue.java  | 145 +++-
 .../scheduler/policy/MockSchedulableEntity.java|  15 +-
 .../scheduler/policy/TestFairOrderingPolicy.java   |  12 +-
 .../scheduler/policy/TestFifoOrderingPolicy.java   |   2 +-
 .../TestFifoOrderingPolicyForPendingApps.java  |   5 +-
 ...tFifoOrderingPolicyWithExclusivePartitions.java | 244 +
 23 files changed, 821 insertions(+), 56 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 687bddb..48d3933 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -3481,6 +3481,12 @@ public class YarnConfiguration extends Configuration {
   public static final String DEFAULT_NODELABEL_CONFIGURATION_TYPE =
   CENTRALIZED_NODELABEL_CONFIGURATION_TYPE;
 
+  public static final String EXCLUSIVE_ENFORCED_PARTITIONS_SUFFIX
+  = "exclusive-enforced-partitions";
+
+  public static final String EXCLUSIVE_ENFORCED_PARTITIONS = NODE_LABELS_PREFIX
+  + EXCLUSIVE_ENFORCED_PARTITIONS_SUFFIX;
+
   public static final String MAX_CLUSTER_LEVEL_APPLICATION_PRIORITY =
   YARN_PREFIX + "cluster.max-application-priority";
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 44b54ce..ad8519f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -3857,4 +3857,13 @@
 6
   
 
+  <property>
+    <description>
+      Comma-separated list of partitions. If a label P is in this list,
+      then the RM will enforce that an app has resource requests with label
+      P iff that app's node label expression is P.
+    </description>
+    <name>yarn.node-labels.exclusive-enforced-partitions</name>
+    <value></value>
+  </property>
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
index d3ce241..a8b394e6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
@@ -74,6 +74,7 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.Allocation;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ContainerUpdates;
 import 

[hadoop] branch trunk updated: YARN-9730. Support forcing configured partitions to be exclusive based on app node label

2019-09-24 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c2731d4  YARN-9730. Support forcing configured partitions to be 
exclusive based on app node label
c2731d4 is described below

commit c2731d4b6399f88f76341ed697e80652ed1b61ea
Author: Jonathan Hung 
AuthorDate: Tue Sep 24 12:13:29 2019 -0700

YARN-9730. Support forcing configured partitions to be exclusive based on 
app node label
---
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |   6 +
 .../src/main/resources/yarn-default.xml|   9 +
 .../resourcemanager/DefaultAMSProcessor.java   |   6 +
 .../yarn/server/resourcemanager/RMAppManager.java  |   6 +
 .../yarn/server/resourcemanager/RMContext.java |   4 +
 .../yarn/server/resourcemanager/RMContextImpl.java |  15 ++
 .../scheduler/SchedulerApplicationAttempt.java |   9 +
 .../resourcemanager/scheduler/SchedulerUtils.java  |  23 ++
 .../capacity/CapacitySchedulerConfiguration.java   |   7 +
 .../scheduler/capacity/LeafQueue.java  |   8 +-
 .../policy/AbstractComparatorOrderingPolicy.java   |   8 +-
 .../FifoOrderingPolicyWithExclusivePartitions.java | 144 
 ...chedulableEntity.java => IteratorSelector.java} |  48 ++--
 .../scheduler/policy/OrderingPolicy.java   |   3 +-
 .../scheduler/policy/SchedulableEntity.java|   5 +
 .../scheduler/TestSchedulerUtils.java  | 142 
 .../scheduler/capacity/TestCapacityScheduler.java  |  11 +-
 .../scheduler/capacity/TestLeafQueue.java  | 145 +++-
 .../scheduler/policy/MockSchedulableEntity.java|  15 +-
 .../scheduler/policy/TestFairOrderingPolicy.java   |  12 +-
 .../scheduler/policy/TestFifoOrderingPolicy.java   |   2 +-
 .../TestFifoOrderingPolicyForPendingApps.java  |   5 +-
 ...tFifoOrderingPolicyWithExclusivePartitions.java | 244 +
 23 files changed, 821 insertions(+), 56 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 7b05905..1314bf9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -3789,6 +3789,12 @@ public class YarnConfiguration extends Configuration {
   public static final String DEFAULT_NODELABEL_CONFIGURATION_TYPE =
   CENTRALIZED_NODELABEL_CONFIGURATION_TYPE;
 
+  public static final String EXCLUSIVE_ENFORCED_PARTITIONS_SUFFIX
+  = "exclusive-enforced-partitions";
+
+  public static final String EXCLUSIVE_ENFORCED_PARTITIONS = NODE_LABELS_PREFIX
+  + EXCLUSIVE_ENFORCED_PARTITIONS_SUFFIX;
+
   public static final String MAX_CLUSTER_LEVEL_APPLICATION_PRIORITY =
   YARN_PREFIX + "cluster.max-application-priority";
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 55e908d..4393792 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -4301,4 +4301,13 @@
 6
   
 
+  <property>
+    <description>
+      Comma-separated list of partitions. If a label P is in this list,
+      then the RM will enforce that an app has resource requests with label
+      P iff that app's node label expression is P.
+    </description>
+    <name>yarn.node-labels.exclusive-enforced-partitions</name>
+    <value></value>
+  </property>
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
index 6763d66..4d5cb13 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
@@ -76,6 +76,7 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.Allocation;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ContainerUpdates;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler
 .SchedulerNodeReport;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils;
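In code form, the rule from the new yarn-default.xml entry is a small per-request normalization step. A hypothetical sketch (method and parameter names are illustrative, not the committed SchedulerUtils API):

import java.util.Set;

public final class ExclusivePartitionSketch {
  static String enforcedLabel(String requestedLabel, String appLabelExpression,
      Set<String> exclusivePartitions) {
    if (exclusivePartitions.contains(appLabelExpression)) {
      return appLabelExpression; // app lives on P: pin every request to P
    }
    if (requestedLabel != null && exclusivePartitions.contains(requestedLabel)) {
      return ""; // app is not on P: strip the exclusive label
    }
    return requestedLabel; // otherwise leave the request untouched
  }

  public static void main(String[] args) {
    Set<String> exclusive = Set.of("gpu");
    System.out.println(enforcedLabel("", "gpu", exclusive));  // gpu
    System.out.println(enforcedLabel("gpu", "", exclusive));  // (empty)
    System.out.println(enforcedLabel("ssd", "", exclusive));  // ssd
  }
}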
 

[hadoop] branch trunk updated: HDFS-14808. EC: Improper size values for corrupt ec block in LOG. Contributed by Ayush Saxena.

2019-09-24 Thread ayushsaxena
This is an automated email from the ASF dual-hosted git repository.

ayushsaxena pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 66400c1  HDFS-14808. EC: Improper size values for corrupt ec block in 
LOG. Contributed by Ayush Saxena.
66400c1 is described below

commit 66400c1cbb2b4b2f08f7db965c8b7237072bdcc4
Author: Ayush Saxena 
AuthorDate: Wed Sep 25 01:31:15 2019 +0530

HDFS-14808. EC: Improper size values for corrupt ec block in LOG. 
Contributed by Ayush Saxena.
---
 .../hadoop/hdfs/server/blockmanagement/BlockManager.java| 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 6c349ffd..0d61cad 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -3159,23 +3159,26 @@ public class BlockManager implements BlockStatsMXBean {
   + storedBlock.getGenerationStamp(), Reason.GENSTAMP_MISMATCH);
 }
 boolean wrongSize;
+long blockMapSize;
 if (storedBlock.isStriped()) {
   assert BlockIdManager.isStripedBlockID(reported.getBlockId());
   assert storedBlock.getBlockId() ==
   BlockIdManager.convertToStripedID(reported.getBlockId());
   BlockInfoStriped stripedBlock = (BlockInfoStriped) storedBlock;
   int reportedBlkIdx = BlockIdManager.getBlockIndex(reported);
-  wrongSize = reported.getNumBytes() != getInternalBlockLength(
-  stripedBlock.getNumBytes(), stripedBlock.getCellSize(),
-  stripedBlock.getDataBlockNum(), reportedBlkIdx);
+  blockMapSize = getInternalBlockLength(stripedBlock.getNumBytes(),
+  stripedBlock.getCellSize(), stripedBlock.getDataBlockNum(),
+  reportedBlkIdx);
+  wrongSize = reported.getNumBytes() != blockMapSize;
 } else {
-  wrongSize = storedBlock.getNumBytes() != reported.getNumBytes();
+  blockMapSize = storedBlock.getNumBytes();
+  wrongSize = blockMapSize != reported.getNumBytes();
 }
 if (wrongSize) {
   return new BlockToMarkCorrupt(new Block(reported), storedBlock,
   "block is " + ucState + " and reported length " +
   reported.getNumBytes() + " does not match " +
-  "length in block map " + storedBlock.getNumBytes(),
+  "length in block map " + blockMapSize,
   Reason.SIZE_MISMATCH);
 } else {
   return null; // not corrupt
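The fix logs the expected length from the block map rather than the striped group's raw getNumBytes(). For striped blocks that expectation is derived from the group size; a simplified stand-alone derivation (not the StripedBlockUtil implementation) looks like this:

public final class StripedLengthSketch {
  // Expected length of internal block idx in a group of groupSize bytes,
  // striped across dataBlocks cells of cellSize bytes. Parity blocks
  // (idx >= dataBlocks) are as long as the longest data block.
  static long internalBlockLength(long groupSize, int cellSize,
      int dataBlocks, int idx) {
    long stripeSize = (long) cellSize * dataBlocks;
    long base = (groupSize / stripeSize) * cellSize;
    long lastStripe = groupSize % stripeSize;
    if (idx >= dataBlocks) {
      return base + Math.min(lastStripe, cellSize);
    }
    long lastCell = lastStripe - (long) idx * cellSize;
    return base + Math.min(Math.max(lastCell, 0L), cellSize);
  }

  public static void main(String[] args) {
    long mb = 1024 * 1024;
    // Six data blocks, 1 MB cells, 7.5 MB group: internal blocks 0..5 hold
    // 2 MB, 1.5 MB, 1 MB, 1 MB, 1 MB, 1 MB; each parity block holds 2 MB.
    System.out.println(internalBlockLength(7 * mb + mb / 2, (int) mb, 6, 0));
  }
}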





[hadoop] branch branch-3.1 updated: HDFS-14655. [SBN Read] Namenode crashes if one of The JN is down. Contributed by Ayush Saxena.

2019-09-24 Thread ayushsaxena
This is an automated email from the ASF dual-hosted git repository.

ayushsaxena pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 5c05854  HDFS-14655. [SBN Read] Namenode crashes if one of The JN is 
down. Contributed by Ayush Saxena.
5c05854 is described below

commit 5c058549b71966190a8353098111be66b5c0346d
Author: Ayush Saxena 
AuthorDate: Wed Sep 25 01:16:30 2019 +0530

HDFS-14655. [SBN Read] Namenode crashes if one of The JN is down. 
Contributed by Ayush Saxena.
---
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |  5 +++-
 .../hdfs/qjournal/client/IPCLoggerChannel.java | 14 +
 .../hadoop/hdfs/qjournal/client/QuorumCall.java| 18 
 .../hdfs/qjournal/client/QuorumJournalManager.java |  2 ++
 .../src/main/resources/hdfs-default.xml|  8 +
 .../hadoop/hdfs/qjournal/MiniJournalCluster.java   |  6 +++-
 .../qjournal/client/TestQuorumJournalManager.java  | 34 --
 7 files changed, 78 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 65d8f3a..b57013b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -1065,6 +1065,8 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   public static final String  DFS_QJOURNAL_WRITE_TXNS_TIMEOUT_KEY = 
"dfs.qjournal.write-txns.timeout.ms";
   public static final String  DFS_QJOURNAL_HTTP_OPEN_TIMEOUT_KEY = 
"dfs.qjournal.http.open.timeout.ms";
   public static final String  DFS_QJOURNAL_HTTP_READ_TIMEOUT_KEY = 
"dfs.qjournal.http.read.timeout.ms";
+  public static final String DFS_QJOURNAL_PARALLEL_READ_NUM_THREADS_KEY =
+  "dfs.qjournal.parallel-read.num-threads";
   public static final int DFS_QJOURNAL_START_SEGMENT_TIMEOUT_DEFAULT = 20000;
   public static final int DFS_QJOURNAL_PREPARE_RECOVERY_TIMEOUT_DEFAULT = 120000;
   public static final int DFS_QJOURNAL_ACCEPT_RECOVERY_TIMEOUT_DEFAULT = 120000;
@@ -1075,7 +1077,8 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   public static final int DFS_QJOURNAL_WRITE_TXNS_TIMEOUT_DEFAULT = 20000;
   public static final int DFS_QJOURNAL_HTTP_OPEN_TIMEOUT_DEFAULT = 
URLConnectionFactory.DEFAULT_SOCKET_TIMEOUT;
   public static final int DFS_QJOURNAL_HTTP_READ_TIMEOUT_DEFAULT = 
URLConnectionFactory.DEFAULT_SOCKET_TIMEOUT;
-  
+  public static final int DFS_QJOURNAL_PARALLEL_READ_NUM_THREADS_DEFAULT = 5;
+
   public static final String DFS_MAX_NUM_BLOCKS_TO_LOG_KEY = 
"dfs.namenode.max-num-blocks-to-log";
   public static final long   DFS_MAX_NUM_BLOCKS_TO_LOG_DEFAULT = 1000l;
   
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
index a0c7e2c..6dfb480 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
@@ -27,6 +27,7 @@ import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.classification.InterfaceAudience;
@@ -54,6 +55,7 @@ import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.util.StopWatch;
+import org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
@@ -270,12 +272,14 @@ public class IPCLoggerChannel implements AsyncLogger {
*/
   @VisibleForTesting
   protected ExecutorService createParallelExecutor() {
-return Executors.newCachedThreadPool(
-new ThreadFactoryBuilder()
-.setDaemon(true)
+int numThreads =
+conf.getInt(DFSConfigKeys.DFS_QJOURNAL_PARALLEL_READ_NUM_THREADS_KEY,
+DFSConfigKeys.DFS_QJOURNAL_PARALLEL_READ_NUM_THREADS_DEFAULT);
+return new HadoopThreadPoolExecutor(1, numThreads, 60L,
+TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
+new ThreadFactoryBuilder().setDaemon(true)
 .setNameFormat("Logger channel (from parallel executor) to " + 
addr)
-.setUncaughtExceptionHandler(
-UncaughtExceptionHandlers.systemExit())
+  

[hadoop] branch branch-3.2 updated: HDFS-14655. [SBN Read] Namenode crashes if one of The JN is down. Contributed by Ayush Saxena.

2019-09-24 Thread ayushsaxena
This is an automated email from the ASF dual-hosted git repository.

ayushsaxena pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new a0db762  HDFS-14655. [SBN Read] Namenode crashes if one of The JN is 
down. Contributed by Ayush Saxena.
a0db762 is described below

commit a0db762206ae7ffd1560c0bdd9b3f11f0d062f3c
Author: Ayush Saxena 
AuthorDate: Wed Sep 25 01:16:30 2019 +0530

HDFS-14655. [SBN Read] Namenode crashes if one of The JN is down. 
Contributed by Ayush Saxena.
---
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |  5 +++-
 .../hdfs/qjournal/client/IPCLoggerChannel.java | 14 +
 .../hadoop/hdfs/qjournal/client/QuorumCall.java| 18 
 .../hdfs/qjournal/client/QuorumJournalManager.java |  2 ++
 .../src/main/resources/hdfs-default.xml|  8 +
 .../hadoop/hdfs/qjournal/MiniJournalCluster.java   |  6 +++-
 .../qjournal/client/TestQuorumJournalManager.java  | 34 --
 7 files changed, 78 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index aed535b..428c537 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -1123,6 +1123,8 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   public static final String  DFS_QJOURNAL_WRITE_TXNS_TIMEOUT_KEY = 
"dfs.qjournal.write-txns.timeout.ms";
   public static final String  DFS_QJOURNAL_HTTP_OPEN_TIMEOUT_KEY = 
"dfs.qjournal.http.open.timeout.ms";
   public static final String  DFS_QJOURNAL_HTTP_READ_TIMEOUT_KEY = 
"dfs.qjournal.http.read.timeout.ms";
+  public static final String DFS_QJOURNAL_PARALLEL_READ_NUM_THREADS_KEY =
+  "dfs.qjournal.parallel-read.num-threads";
   public static final int DFS_QJOURNAL_START_SEGMENT_TIMEOUT_DEFAULT = 20000;
   public static final int DFS_QJOURNAL_PREPARE_RECOVERY_TIMEOUT_DEFAULT = 120000;
   public static final int DFS_QJOURNAL_ACCEPT_RECOVERY_TIMEOUT_DEFAULT = 120000;
@@ -1133,7 +1135,8 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   public static final int DFS_QJOURNAL_WRITE_TXNS_TIMEOUT_DEFAULT = 20000;
   public static final int DFS_QJOURNAL_HTTP_OPEN_TIMEOUT_DEFAULT = 
URLConnectionFactory.DEFAULT_SOCKET_TIMEOUT;
   public static final int DFS_QJOURNAL_HTTP_READ_TIMEOUT_DEFAULT = 
URLConnectionFactory.DEFAULT_SOCKET_TIMEOUT;
-  
+  public static final int DFS_QJOURNAL_PARALLEL_READ_NUM_THREADS_DEFAULT = 5;
+
   public static final String DFS_MAX_NUM_BLOCKS_TO_LOG_KEY = 
"dfs.namenode.max-num-blocks-to-log";
   public static final long   DFS_MAX_NUM_BLOCKS_TO_LOG_DEFAULT = 1000l;
   
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
index 3a882e5..d5ec5ac 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
@@ -27,6 +27,7 @@ import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.classification.InterfaceAudience;
@@ -54,6 +55,7 @@ import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.util.StopWatch;
+import org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
@@ -270,12 +272,14 @@ public class IPCLoggerChannel implements AsyncLogger {
*/
   @VisibleForTesting
   protected ExecutorService createParallelExecutor() {
-return Executors.newCachedThreadPool(
-new ThreadFactoryBuilder()
-.setDaemon(true)
+int numThreads =
+conf.getInt(DFSConfigKeys.DFS_QJOURNAL_PARALLEL_READ_NUM_THREADS_KEY,
+DFSConfigKeys.DFS_QJOURNAL_PARALLEL_READ_NUM_THREADS_DEFAULT);
+return new HadoopThreadPoolExecutor(1, numThreads, 60L,
+TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
+new ThreadFactoryBuilder().setDaemon(true)
 .setNameFormat("Logger channel (from parallel executor) to " + 
addr)
-.setUncaughtExceptionHandler(
-UncaughtExceptionHandlers.systemExit())
+  

[hadoop] branch trunk updated: HDFS-14655. [SBN Read] Namenode crashes if one of The JN is down. Contributed by Ayush Saxena.

2019-09-24 Thread ayushsaxena
This is an automated email from the ASF dual-hosted git repository.

ayushsaxena pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new eb96a30  HDFS-14655. [SBN Read] Namenode crashes if one of The JN is 
down. Contributed by Ayush Saxena.
eb96a30 is described below

commit eb96a3093ea34a7749410a63c72b6d0a9636d80f
Author: Ayush Saxena 
AuthorDate: Wed Sep 25 01:16:30 2019 +0530

HDFS-14655. [SBN Read] Namenode crashes if one of The JN is down. 
Contributed by Ayush Saxena.
---
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |  5 +++-
 .../hdfs/qjournal/client/IPCLoggerChannel.java | 14 +
 .../hadoop/hdfs/qjournal/client/QuorumCall.java| 18 
 .../hdfs/qjournal/client/QuorumJournalManager.java |  2 ++
 .../src/main/resources/hdfs-default.xml|  8 +
 .../hadoop/hdfs/qjournal/MiniJournalCluster.java   |  6 +++-
 .../qjournal/client/TestQuorumJournalManager.java  | 34 --
 7 files changed, 78 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 0826cef..462c81d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -1171,6 +1171,8 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   public static final String  DFS_QJOURNAL_WRITE_TXNS_TIMEOUT_KEY = 
"dfs.qjournal.write-txns.timeout.ms";
   public static final String  DFS_QJOURNAL_HTTP_OPEN_TIMEOUT_KEY = 
"dfs.qjournal.http.open.timeout.ms";
   public static final String  DFS_QJOURNAL_HTTP_READ_TIMEOUT_KEY = 
"dfs.qjournal.http.read.timeout.ms";
+  public static final String DFS_QJOURNAL_PARALLEL_READ_NUM_THREADS_KEY =
+  "dfs.qjournal.parallel-read.num-threads";
   public static final int DFS_QJOURNAL_START_SEGMENT_TIMEOUT_DEFAULT = 20000;
   public static final int DFS_QJOURNAL_PREPARE_RECOVERY_TIMEOUT_DEFAULT = 120000;
   public static final int DFS_QJOURNAL_ACCEPT_RECOVERY_TIMEOUT_DEFAULT = 120000;
@@ -1181,7 +1183,8 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   public static final int DFS_QJOURNAL_WRITE_TXNS_TIMEOUT_DEFAULT = 20000;
   public static final int DFS_QJOURNAL_HTTP_OPEN_TIMEOUT_DEFAULT = 
URLConnectionFactory.DEFAULT_SOCKET_TIMEOUT;
   public static final int DFS_QJOURNAL_HTTP_READ_TIMEOUT_DEFAULT = 
URLConnectionFactory.DEFAULT_SOCKET_TIMEOUT;
-  
+  public static final int DFS_QJOURNAL_PARALLEL_READ_NUM_THREADS_DEFAULT = 5;
+
   public static final String DFS_MAX_NUM_BLOCKS_TO_LOG_KEY = 
"dfs.namenode.max-num-blocks-to-log";
   public static final long   DFS_MAX_NUM_BLOCKS_TO_LOG_DEFAULT = 1000l;
   
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
index 3a882e5..d5ec5ac 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
@@ -27,6 +27,7 @@ import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.classification.InterfaceAudience;
@@ -54,6 +55,7 @@ import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.util.StopWatch;
+import org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
@@ -270,12 +272,14 @@ public class IPCLoggerChannel implements AsyncLogger {
*/
   @VisibleForTesting
   protected ExecutorService createParallelExecutor() {
-return Executors.newCachedThreadPool(
-new ThreadFactoryBuilder()
-.setDaemon(true)
+int numThreads =
+conf.getInt(DFSConfigKeys.DFS_QJOURNAL_PARALLEL_READ_NUM_THREADS_KEY,
+DFSConfigKeys.DFS_QJOURNAL_PARALLEL_READ_NUM_THREADS_DEFAULT);
+return new HadoopThreadPoolExecutor(1, numThreads, 60L,
+TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
+new ThreadFactoryBuilder().setDaemon(true)
 .setNameFormat("Logger channel (from parallel executor) to " + 
addr)
-.setUncaughtExceptionHandler(
-UncaughtExceptionHandlers.systemExit())
+
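The failure mode here was unbounded thread growth in the cached pool once a JournalNode stopped responding; the fix caps the pool with a configurable thread count. A minimal sketch of the same idea with a plain ThreadPoolExecutor (the key name matches the new dfs.qjournal.parallel-read.num-threads; the class and pool details are illustrative, and the real patch also installs a named daemon ThreadFactory):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public final class BoundedReaderPoolSketch {
  static ExecutorService create(Configuration conf) {
    int maxThreads = conf.getInt("dfs.qjournal.parallel-read.num-threads", 5);
    // Core == max so the pool actually reaches the bound: a ThreadPoolExecutor
    // only grows past its core size once its queue fills, and this queue is
    // unbounded.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        maxThreads, maxThreads, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>());
    pool.allowCoreThreadTimeOut(true); // let idle threads exit after 60s
    return pool;
  }
}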

[hadoop] branch trunk updated (afa1006 -> f16cf87)

2019-09-24 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from afa1006  HDFS-14843. Double Synchronization in 
BlockReportLeaseManager. Contributed by David Mollitor.
 add f16cf87  HDDS-2170. Add Object IDs and Update ID to Volume Object 
(#1510)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/ozone/OzoneConsts.java  |   2 +
 .../hadoop/ozone/om/helpers/OmVolumeArgs.java  | 100 ++---
 .../src/main/proto/OzoneManagerProtocol.proto  |  10 +--
 .../om/request/volume/OMVolumeCreateRequest.java   |   5 ++
 .../request/volume/TestOMVolumeCreateRequest.java  |  10 ++-
 5 files changed, 105 insertions(+), 22 deletions(-)





[hadoop] branch trunk updated: HDFS-14843. Double Synchronization in BlockReportLeaseManager. Contributed by David Mollitor.

2019-09-24 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new afa1006  HDFS-14843. Double Synchronization in 
BlockReportLeaseManager. Contributed by David Mollitor.
afa1006 is described below

commit afa1006a537e1fc1adb5005cbdf4e4d8d9e98b22
Author: Inigo Goiri 
AuthorDate: Tue Sep 24 09:58:42 2019 -0700

HDFS-14843. Double Synchronization in BlockReportLeaseManager. Contributed 
by David Mollitor.
---
 .../hdfs/server/blockmanagement/BlockReportLeaseManager.java  | 8 ++--
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockReportLeaseManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockReportLeaseManager.java
index 7db05c7..2a4b6e8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockReportLeaseManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockReportLeaseManager.java
@@ -180,12 +180,8 @@ class BlockReportLeaseManager {
   /**
* Get the next block report lease ID.  Any number is valid except 0.
*/
-  private synchronized long getNextId() {
-long id;
-do {
-  id = nextId++;
-} while (id == 0);
-return id;
+  private long getNextId() {
+return ++nextId == 0L ? ++nextId : nextId;
   }
 
   public synchronized void register(DatanodeDescriptor dn) {
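The simplification rests on two observations: the callers (such as register, shown above) are already synchronized methods, and the old do/while only mattered at wraparound, since 0 is reserved to mean "no lease". A minimal illustration of the replacement generator, assuming external synchronization:

class LeaseIdSketch {
  private long nextId;

  // 0 is reserved, so when the counter wraps to 0 it is bumped once more.
  // Not thread-safe on its own; callers are expected to hold a lock.
  long next() {
    return ++nextId == 0L ? ++nextId : nextId;
  }
}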





[hadoop] branch trunk updated: HDFS-14837. Review of Block.java. Contributed by David Mollitor.

2019-09-24 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 816d3cb  HDFS-14837. Review of Block.java. Contributed by David 
Mollitor.
816d3cb is described below

commit 816d3cb9087c9406cc0b16bd80009562ffc7d4b3
Author: Inigo Goiri 
AuthorDate: Tue Sep 24 09:54:09 2019 -0700

HDFS-14837. Review of Block.java. Contributed by David Mollitor.
---
 .../org/apache/hadoop/hdfs/protocol/Block.java | 141 +++--
 1 file changed, 101 insertions(+), 40 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/Block.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/Block.java
index 4128ece..0a41254 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/Block.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/Block.java
@@ -17,21 +17,32 @@
  */
 package org.apache.hadoop.hdfs.protocol;
 
-import java.io.*;
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.File;
+import java.io.IOException;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
+import javax.annotation.Nonnull;
+
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.io.*;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableFactories;
+import org.apache.hadoop.io.WritableFactory;
 
-import javax.annotation.Nonnull;
-
-/**
- * A Block is a Hadoop FS primitive, identified by a
- * long.
+/**
+ * A Block is a Hadoop FS primitive, identified by its block ID (a long). A
+ * block also has an accompanying generation stamp. A generation stamp is a
+ * monotonically increasing 8-byte number for each block that is maintained
+ * persistently by the NameNode. However, for the purposes of this class, two
+ * Blocks are considered equal iff they have the same block ID.
  *
- **/
+ * @see Block#equals(Object)
+ * @see Block#hashCode()
+ * @see Block#compareTo(Block)
+ */
 @InterfaceAudience.Private
 @InterfaceStability.Evolving
 public class Block implements Writable, Comparable<Block> {
@@ -119,8 +130,7 @@ public class Block implements Writable, Comparable<Block> {
 this.numBytes = len;
 this.generationStamp = genStamp;
   }
-  /**
-   */
+
   public long getBlockId() {
 return blockId;
   }
@@ -130,17 +140,21 @@ public class Block implements Writable, Comparable<Block> {
   }
 
   /**
+   * Get the block name. The format of the name is in the format:
+   * <pre>
+   * blk_1, blk_2, blk_3, etc.
+   * </pre>
+   *
+   * @return the block name
*/
   public String getBlockName() {
-return new StringBuilder().append(BLOCK_FILE_PREFIX)
-.append(blockId).toString();
+return BLOCK_FILE_PREFIX + blockId;
   }
 
-  /**
-   */
   public long getNumBytes() {
 return numBytes;
   }
+
   public void setNumBytes(long len) {
 this.numBytes = len;
   }
@@ -161,28 +175,33 @@ public class Block implements Writable, Comparable<Block> {
* @return the string representation of the block
*/
   public static String toString(final Block b) {
-StringBuilder sb = new StringBuilder();
-sb.append(BLOCK_FILE_PREFIX).
-   append(b.blockId).append("_").
-   append(b.generationStamp);
-return sb.toString();
+return new StringBuilder(BLOCK_FILE_PREFIX)
+.append(b.blockId)
+.append('_')
+.append(b.generationStamp)
+.toString();
   }
 
   /**
+   * Get the block name. The format of the name is in the format:
+   * <pre>
+   * blk_block-id_generation, blk_1_1, blk_1_2, blk_2_1, etc.
+   * </pre>
+   *
+   * @return the full block name
*/
   @Override
   public String toString() {
-return toString(this);
+return Block.toString(this);
   }
 
   public void appendStringTo(StringBuilder sb) {
 sb.append(BLOCK_FILE_PREFIX)
   .append(blockId)
-  .append("_")
+  .append('_')
   .append(getGenerationStamp());
   }
 
-
   /////////////////////////////////////
   // Writable
   /////////////////////////////////////
@@ -223,32 +242,74 @@ public class Block implements Writable, Comparable<Block> {
 this.generationStamp = in.readLong();
   }
 
-  @Override // Comparable
+  /**
+   * Compares this Block with the specified Block for order. Returns a negative
+   * integer, zero, or a positive integer as this Block is less than, equal to,
+   * or greater than the specified Block. Blocks are ordered based on their
+   * block ID.
+   *
+   * @param b the Block to be compared
+   * @return a negative integer, zero, or a positive integer as this Block is
+   * less than, equal to, or greater than the specified Block.
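The practical consequence of the reworked contract: identity is the block ID alone, so blocks differing only in length or generation stamp compare as equal. A small illustration (constructor argument order is Block(blockId, numBytes, generationStamp)):

import org.apache.hadoop.hdfs.protocol.Block;

public class BlockIdentitySketch {
  public static void main(String[] args) {
    Block a = new Block(42L, 1024L, 1001L);
    Block b = new Block(42L, 4096L, 1002L);
    System.out.println(a.equals(b));    // true: same block ID
    System.out.println(a.compareTo(b)); // 0: ordering is by ID only
  }
}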

[hadoop] branch trunk updated: HDFS-14868. RBF: Fix typo in TestRouterQuota. Contributed by Jinglun.

2019-09-24 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 43203b4  HDFS-14868. RBF: Fix typo in TestRouterQuota. Contributed by 
Jinglun.
43203b4 is described below

commit 43203b466ddf6c9478b07be7e749257476ed9ca8
Author: Wei-Chiu Chuang 
AuthorDate: Tue Sep 24 09:38:36 2019 -0700

HDFS-14868. RBF: Fix typo in TestRouterQuota. Contributed by Jinglun.

Reviewed-by: Ayush Saxena 
Signed-off-by: Wei-Chiu Chuang 
---
 .../apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
index f0e4dc1..5e36262 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
@@ -177,7 +177,7 @@ public class TestRouterQuota {
   }
 
   @Test
-  public void testStorageSpaceQuotaaExceed() throws Exception {
+  public void testStorageSpaceQuotaExceed() throws Exception {
 long ssQuota = 3071;
 final FileSystem nnFs1 = nnContext1.getFileSystem();
 final FileSystem nnFs2 = nnContext2.getFileSystem();





[hadoop] branch trunk updated: HDDS-2167. Hadoop31-mr acceptance test is failing due to the shading

2019-09-24 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 91f50b9  HDDS-2167. Hadoop31-mr acceptance test is failing due to the 
shading
91f50b9 is described below

commit 91f50b98cad8f8fb6d459be6ee2e313230bfb83f
Author: Márton Elek 
AuthorDate: Tue Sep 24 17:52:36 2019 +0200

HDDS-2167. Hadoop31-mr acceptance test is failing due to the shading

Closes #1507
---
 hadoop-ozone/ozonefs/pom.xml | 4 
 1 file changed, 4 insertions(+)

diff --git a/hadoop-ozone/ozonefs/pom.xml b/hadoop-ozone/ozonefs/pom.xml
index 32e4a63..a945f40 100644
--- a/hadoop-ozone/ozonefs/pom.xml
+++ b/hadoop-ozone/ozonefs/pom.xml
@@ -132,6 +132,10 @@
       <artifactId>hadoop-ozone-common</artifactId>
     </dependency>
     <dependency>
+      <groupId>org.apache.httpcomponents</groupId>
+      <artifactId>httpclient</artifactId>
+    </dependency>
+    <dependency>
       <groupId>com.google.code.findbugs</groupId>
       <artifactId>findbugs</artifactId>
       <version>3.0.1</version>





[hadoop] branch trunk updated: HDFS-13660. DistCp job fails when new data is appended in the file while the DistCp copy job is running

2019-09-24 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 51c64b3  HDFS-13660. DistCp job fails when new data is appended in the 
file while the DistCp copy job is running
51c64b3 is described below

commit 51c64b357d4bd1a0038e61df3d4b8ea0a3ad7449
Author: Mukund Thakur 
AuthorDate: Tue Sep 24 11:22:42 2019 +0100

HDFS-13660. DistCp job fails when new data is appended in the file while 
the DistCp copy job is running

This uses the length of the file known at the start of the copy to 
determine the amount of data to copy.

* If a file is appended to during the copy, the original bytes are copied.
* If a file is truncated during a copy, or the attempt to read the data 
fails with a truncated stream,
  distcp will now fail. Until now these failures were not detected.

Contributed by Mukund Thakur.

Change-Id: I576a49d951fa48d37a45a7e4c82c47488aa8e884
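A minimal sketch of length-bounded copying as described above (illustrative, not the RetriableFileCopyCommand code): copy exactly the bytes recorded in the listing, ignore anything appended afterwards, and fail loudly if the source ends early.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class BoundedCopySketch {
  static void copyExactly(InputStream in, OutputStream out, long expected)
      throws IOException {
    byte[] buf = new byte[8192];
    long remaining = expected;
    while (remaining > 0) {
      int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
      if (n < 0) {
        // The source shrank mid-copy (truncate): surface the failure
        // instead of committing a short target file.
        throw new IOException(
            "Mismatch in length of source: " + remaining + " bytes missing");
      }
      out.write(buf, 0, n);
      remaining -= n;
    }
    // Bytes appended after 'expected' was recorded are deliberately ignored.
  }
}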
---
 .../org/apache/hadoop/tools/DistCpConstants.java   |  6 +++
 .../apache/hadoop/tools/mapred/CopyCommitter.java  | 10 ++--
 .../org/apache/hadoop/tools/mapred/CopyMapper.java |  3 +-
 .../tools/mapred/RetriableFileCopyCommand.java | 34 +
 .../org/apache/hadoop/tools/util/DistCpUtils.java  | 26 +-
 .../hadoop/tools/mapred/TestCopyCommitter.java |  5 +-
 .../apache/hadoop/tools/mapred/TestCopyMapper.java | 56 ++
 .../tools/mapred/TestRetriableFileCopyCommand.java | 25 +-
 .../apache/hadoop/tools/util/TestDistCpUtils.java  | 45 -
 .../tools/util/TestDistCpUtilsWithCombineMode.java |  4 +-
 10 files changed, 156 insertions(+), 58 deletions(-)

diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
index e20f206..f0adc78 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
@@ -171,4 +171,10 @@ public final class DistCpConstants {
 
   /** Filename of sorted target listing. */
   public static final String TARGET_SORTED_FILE = "target_sorted.seq";
+
+  public static final String LENGTH_MISMATCH_ERROR_MSG =
+  "Mismatch in length of source:";
+
+  public static final String CHECKSUM_MISMATCH_ERROR_MSG =
+  "Checksum mismatch between ";
 }
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
index 546062f..139bd08 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
@@ -252,7 +252,7 @@ public class CopyCommitter extends FileOutputCommitter {
   // This is the last chunk of the splits, consolidate allChunkPaths
   try {
 concatFileChunks(conf, srcFileStatus.getPath(), targetFile,
-allChunkPaths);
+allChunkPaths, srcFileStatus);
   } catch (IOException e) {
 // If the concat failed because a chunk file doesn't exist,
 // then we assume that the CopyMapper has skipped copying this
@@ -609,7 +609,8 @@ public class CopyCommitter extends FileOutputCommitter {
* Concat the passed chunk files into one and rename it the targetFile.
*/
   private void concatFileChunks(Configuration conf, Path sourceFile,
-Path targetFile, LinkedList<Path> allChunkPaths)
+  private void concatFileChunks(Configuration conf, Path sourceFile,
+Path targetFile, LinkedList<Path> allChunkPaths,
   throws IOException {
 if (allChunkPaths.size() == 1) {
   return;
@@ -637,8 +638,9 @@ public class CopyCommitter extends FileOutputCommitter {
   LOG.debug("concat: result: " + dstfs.getFileStatus(firstChunkFile));
 }
 rename(dstfs, firstChunkFile, targetFile);
-DistCpUtils.compareFileLengthsAndChecksums(
-srcfs, sourceFile, null, dstfs, targetFile, skipCrc);
+DistCpUtils.compareFileLengthsAndChecksums(srcFileStatus.getLen(),
+srcfs, sourceFile, null, dstfs,
+targetFile, skipCrc, srcFileStatus.getLen());
   }
 
   /**
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
index 336779e..f3c5b4b 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
+++ 

[hadoop] branch HDDS-2067 updated (7256c69 -> e4d4fca)

2019-09-24 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-2067
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 7256c69  HDDS-2067. Create generic service facade with 
tracing/metrics/logging support
 add 7a44cdc  Update 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/OzoneProtocolMessageDispatcher.java
 add adee693  Update 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/OzoneProtocolMessageDispatcher.java
 add 5b359af  remove RpcController
 add e4d4fca  package-info fix + removing FWIO

No new revisions were added by this update.

Summary of changes:
 .../hdds/function/FunctionWithIOException.java | 37 --
 .../apache/hadoop/hdds/function/package-info.java  |  5 +--
 .../server/OzoneProtocolMessageDispatcher.java |  5 ++-
 ...lockLocationProtocolServerSideTranslatorPB.java |  1 -
 ...OzoneManagerProtocolServerSideTranslatorPB.java |  2 +-
 5 files changed, 4 insertions(+), 46 deletions(-)
 delete mode 100644 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/function/FunctionWithIOException.java
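For context, the removed FunctionWithIOException was a throwing variant of java.util.function.Function, whose apply cannot declare checked exceptions. Its shape was roughly this (reconstructed, not the deleted source):

import java.io.IOException;

@FunctionalInterface
interface FunctionWithIOException<T, R> {
  R apply(T t) throws IOException;
}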





[hadoop] branch trunk updated (e8e7d7b -> 8f1a135)

2019-09-24 Thread msingh
This is an automated email from the ASF dual-hosted git repository.

msingh pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from e8e7d7b  HADOOP-16561. [MAPREDUCE] use protobuf-maven-plugin to 
generate protobuf classes (#1500)
 add 8f1a135  HDDS-2081. Fix 
TestRatisPipelineProvider#testCreatePipelinesDnExclude. Contributed by 
Aravindan Vijayan. (#1506)

No new revisions were added by this update.

Summary of changes:
 .../scm/pipeline/TestRatisPipelineProvider.java| 26 +-
 1 file changed, 16 insertions(+), 10 deletions(-)

