[hadoop] 02/02: HDDS-1832 : Improve logging for PipelineActions handling in SCM and datanode. (Change to Error logging)

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit fc229b6490a152036b6424c7c0ac5c3df9525e57
Author: Aravindan Vijayan 
AuthorDate: Fri Aug 2 11:04:52 2019 -0700

HDDS-1832 : Improve logging for PipelineActions handling in SCM and 
datanode. (Change to Error logging)

Signed-off-by: Anu Engineer 
---
 .../container/common/transport/server/ratis/XceiverServerRatis.java | 2 +-
 .../java/org/apache/hadoop/hdds/scm/pipeline/PipelineActionHandler.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
index 3a8b79b..54e8f3e 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
@@ -558,7 +558,7 @@ public final class XceiverServerRatis extends XceiverServer 
{
 if (triggerHB) {
   context.getParent().triggerHeartbeat();
 }
-LOG.info(
+LOG.error(
 "pipeline Action " + action.getAction() + "  on pipeline " + pipelineID
 + ".Reason : " + action.getClosePipeline().getDetailedReason());
   }
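
For context, a minimal sketch of what the concatenated datanode-side message above would look like with fully parameterized SLF4J logging (illustrative only; the patch itself keeps the concatenated message and only raises the level from INFO to ERROR, and the names reused here are the ones visible in the hunk):

    // Hedged sketch, not part of the patch. Parameterized logging defers
    // message construction until the level is actually enabled.
    LOG.error("Pipeline action {} on pipeline {}. Reason: {}",
        action.getAction(), pipelineID,
        action.getClosePipeline().getDetailedReason());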
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineActionHandler.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineActionHandler.java
index 8d040f1..8d497fa 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineActionHandler.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineActionHandler.java
@@ -57,7 +57,7 @@ public class PipelineActionHandler
   pipelineID = PipelineID.
   getFromProtobuf(action.getClosePipeline().getPipelineID());
   Pipeline pipeline = pipelineManager.getPipeline(pipelineID);
-  LOG.info("Received pipeline action {} for {} from datanode {}. " +
+  LOG.error("Received pipeline action {} for {} from datanode {}. " +
   "Reason : {}", action.getAction(), pipeline,
   report.getDatanodeDetails(),
   action.getClosePipeline().getDetailedReason());





[hadoop] 01/02: HDDS-1915. Remove hadoop script from ozone distribution

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 15545c8bf1318e936fe2251bc2ef7522a36af7cd
Author: Márton Elek 
AuthorDate: Tue Aug 6 10:10:52 2019 +0200

HDDS-1915. Remove hadoop script from ozone distribution

Signed-off-by: Anu Engineer 
---
 hadoop-ozone/dist/dev-support/bin/dist-layout-stitching | 2 --
 1 file changed, 2 deletions(-)

diff --git a/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching 
b/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
index d95242e..5def094 100755
--- a/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
+++ b/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
@@ -94,8 +94,6 @@ run cp 
"${ROOT}/hadoop-ozone/dist/src/main/conf/ozone-site.xml" "etc/hadoop"
 run cp -f "${ROOT}/hadoop-ozone/dist/src/main/conf/log4j.properties" 
"etc/hadoop"
 run cp 
"${ROOT}/hadoop-hdds/common/src/main/resources/network-topology-default.xml" 
"etc/hadoop"
 run cp 
"${ROOT}/hadoop-hdds/common/src/main/resources/network-topology-nodegroup.xml" 
"etc/hadoop"
-run cp "${ROOT}/hadoop-common-project/hadoop-common/src/main/bin/hadoop" "bin/"
-run cp "${ROOT}/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd" 
"bin/"
 run cp "${ROOT}/hadoop-ozone/common/src/main/bin/ozone" "bin/"
 run cp -r "${ROOT}/hadoop-ozone/dist/src/main/dockerbin" "bin/docker"
 





[hadoop] branch trunk updated (78b714a -> fc229b6)

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 78b714a  HDDS-1956. Aged IO Thread exits on first read
 new 15545c8  HDDS-1915. Remove hadoop script from ozone distribution
 new fc229b6  HDDS-1832 : Improve logging for PipelineActions handling in 
SCM and datanode. (Change to Error logging)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../container/common/transport/server/ratis/XceiverServerRatis.java | 2 +-
 .../java/org/apache/hadoop/hdds/scm/pipeline/PipelineActionHandler.java | 2 +-
 hadoop-ozone/dist/dev-support/bin/dist-layout-stitching | 2 --
 3 files changed, 2 insertions(+), 4 deletions(-)





[hadoop] branch trunk updated: HDDS-1956. Aged IO Thread exits on first read

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 78b714a  HDDS-1956. Aged IO Thread exits on first read
78b714a is described below

commit 78b714af9c0ef4cd1b6219eee884a43eb66d1574
Author: Doroszlai, Attila 
AuthorDate: Tue Aug 13 09:52:51 2019 +0200

HDDS-1956. Aged IO Thread exits on first read

Signed-off-by: Anu Engineer 
---
 .../apache/hadoop/ozone/MiniOzoneChaosCluster.java |  8 ++---
 .../hadoop/ozone/MiniOzoneLoadGenerator.java   | 38 ++
 .../src/test/resources/log4j.properties|  2 +-
 3 files changed, 30 insertions(+), 18 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
index 75911df..2eef206 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
@@ -68,7 +68,7 @@ public class MiniOzoneChaosCluster extends 
MiniOzoneClusterImpl {
 
 this.executorService =  Executors.newSingleThreadScheduledExecutor();
 this.numDatanodes = getHddsDatanodes().size();
-LOG.info("Starting MiniOzoneChaosCluster with:{} datanodes" + 
numDatanodes);
+LOG.info("Starting MiniOzoneChaosCluster with {} datanodes", numDatanodes);
 LogUtils.setLogLevel(GrpcClientProtocolClient.LOG, Level.WARN);
   }
 
@@ -108,7 +108,7 @@ public class MiniOzoneChaosCluster extends 
MiniOzoneClusterImpl {
 LOG.info("{} Completed restarting Datanode: {}", failString,
 dn.getUuid());
   } catch (Exception e) {
-LOG.error("Failed to restartNodes Datanode", dn.getUuid());
+LOG.error("Failed to restartNodes Datanode {}", dn.getUuid(), e);
   }
 }
   }
@@ -119,7 +119,7 @@ public class MiniOzoneChaosCluster extends 
MiniOzoneClusterImpl {
 for (int i = 0; i < numNodesToFail; i++) {
   boolean shouldStop = shouldStop();
   int failedNodeIndex = getNodeToFail();
-  String stopString = shouldStop ? "Stopping" : "Starting";
+  String stopString = shouldStop ? "Stopping" : "Restarting";
   DatanodeDetails dn =
   getHddsDatanodes().get(failedNodeIndex).getDatanodeDetails();
   try {
@@ -133,7 +133,7 @@ public class MiniOzoneChaosCluster extends 
MiniOzoneClusterImpl {
 LOG.info("Completed {} DataNode {}", stopString, dn.getUuid());
 
   } catch (Exception e) {
-LOG.error("Failed to shutdown Datanode", dn.getUuid());
+LOG.error("Failed {} Datanode {}", stopString, dn.getUuid(), e);
   }
 }
   }
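
For reference, the SLF4J behaviour the two fixes above rely on (a hedged illustration, not code from the patch): a Throwable passed as the final argument, beyond the "{}" placeholders, is printed together with its stack trace.

    // Old form: no placeholder, so dn.getUuid() was silently dropped and no stack trace appeared.
    LOG.error("Failed to restartNodes Datanode", dn.getUuid());
    // New form: the UUID fills the placeholder and the exception 'e' is logged with its stack trace.
    LOG.error("Failed to restartNodes Datanode {}", dn.getUuid(), e);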
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneLoadGenerator.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneLoadGenerator.java
index b942447..6ced6d6 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneLoadGenerator.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneLoadGenerator.java
@@ -35,6 +35,7 @@ import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
+import java.util.Optional;
 import java.util.concurrent.ThreadPoolExecutor;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.TimeUnit;
@@ -49,7 +50,7 @@ import java.util.concurrent.atomic.AtomicInteger;
  */
 public class MiniOzoneLoadGenerator {
 
-  static final Logger LOG =
+  private static final Logger LOG =
   LoggerFactory.getLogger(MiniOzoneLoadGenerator.class);
 
   private static String keyNameDelimiter = "_";
@@ -113,7 +114,7 @@ public class MiniOzoneLoadGenerator {
 int index = RandomUtils.nextInt();
 String keyName = writeData(index, bucket, threadName);
 
-readData(bucket, keyName);
+readData(bucket, keyName, index);
 
 deleteKey(bucket, keyName);
   } catch (Exception e) {
@@ -133,11 +134,13 @@ public class MiniOzoneLoadGenerator {
 ByteBuffer buffer = buffers.get(keyIndex % numBuffers);
 int bufferCapacity = buffer.capacity();
 
-String keyName = threadName + keyNameDelimiter + keyIndex;
+String keyName = getKeyName(keyIndex, threadName);
+LOG.trace("LOADGEN: Writing key {}", keyName);
 try (OzoneOutputStream stream = bucket.createKey(keyName,
 bufferCapacity, ReplicationType.RATIS, ReplicationFactor.THREE,
 new HashMap<>())) {
   stream.write(buffer.array());
+  LOG.trace("LOADGEN: Written key {}", keyName);
 } catch (Throwable t) {
   LOG.error("LOADGEN: Create key:{} failed with exception, skipping",
   keyName, t);
@@ -147,9 +150,9 @@ public 

[hadoop] branch trunk updated: HDDS-1920. Place ozone.om.address config key default value in ozone-site.xml

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new bf45779  HDDS-1920. Place ozone.om.address config key default value in 
ozone-site.xml
bf45779 is described below

commit bf457797f607f3aeeb2292e63f440cb13e15a2d9
Author: Siyao Meng 
AuthorDate: Tue Aug 6 14:14:26 2019 -0700

HDDS-1920. Place ozone.om.address config key default value in ozone-site.xml

Change-Id: Ic5970b383357147b74a01680aedf40bed4d3e176
Signed-off-by: Anu Engineer 
---
 hadoop-hdds/common/src/main/resources/ozone-default.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index 409cc72..d9440d7 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -550,7 +550,7 @@
   
   
 ozone.om.address
-
+0.0.0.0:9862
 OM, REQUIRED
 
   The address of the Ozone OM service. This allows clients to discover
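
A minimal sketch of reading this key from client code with the default that ozone-default.xml now ships (assumes OzoneConfiguration from org.apache.hadoop.hdds.conf, which loads ozone-default.xml and ozone-site.xml as resources; not part of the patch):

    // Hedged sketch only.
    OzoneConfiguration conf = new OzoneConfiguration();
    // Returns the configured value, or the shipped default when ozone-site.xml leaves it unset.
    String omAddress = conf.get("ozone.om.address", "0.0.0.0:9862");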





[hadoop] branch trunk updated: HADOOP-16495. Fix invalid metric types in PrometheusMetricsSink (#1244)

2019-08-13 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0f8add8  HADOOP-16495. Fix invalid metric types in 
PrometheusMetricsSink (#1244)
0f8add8 is described below

commit 0f8add8a60d159c2933a8fccffd83a64eb73eadc
Author: Akira Ajisaka 
AuthorDate: Wed Aug 14 12:24:03 2019 +0900

HADOOP-16495. Fix invalid metric types in PrometheusMetricsSink (#1244)
---
 .../metrics2/sink/PrometheusMetricsSink.java   |  5 +++--
 .../metrics2/sink/TestPrometheusMetricsSink.java   | 22 ++
 2 files changed, 25 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/PrometheusMetricsSink.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/PrometheusMetricsSink.java
index 291cfe3..b1e8da8 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/PrometheusMetricsSink.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/PrometheusMetricsSink.java
@@ -46,6 +46,7 @@ public class PrometheusMetricsSink implements MetricsSink {
 
   private static final Pattern SPLIT_PATTERN =
   Pattern.compile("(?
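
As orientation, a hedged, self-contained sketch of the kind of camelCase-to-snake_case name normalization a SPLIT_PATTERN in a Prometheus sink is typically used for (the regex, class, and method below are illustrative assumptions, not the ones from HADOOP-16495):

    import java.util.regex.Pattern;

    final class PrometheusNameSketch {
      // Zero-width split points between a lowercase letter/digit and the following uppercase letter.
      private static final Pattern CAMEL_SPLIT =
          Pattern.compile("(?<=[a-z0-9])(?=[A-Z])");

      static String toPrometheusName(String recordName, String metricName) {
        String base = recordName + "_" + metricName;
        return CAMEL_SPLIT.matcher(base).replaceAll("_").toLowerCase();
      }

      public static void main(String[] args) {
        // Prints: nntop_user_op_counts_total_count
        System.out.println(toPrometheusName("NNTopUserOpCounts", "TotalCount"));
      }
    }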

[hadoop] branch ozone-0.4.1 updated: HDDS-1928. Cannot run ozone-recon compose due to syntax error

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new f4d8e1b  HDDS-1928. Cannot run ozone-recon compose due to syntax error
f4d8e1b is described below

commit f4d8e1bdd71d0003cdc3a588bb81fcc3a6f71a7e
Author: Doroszlai, Attila 
AuthorDate: Wed Aug 7 20:46:17 2019 +0200

HDDS-1928. Cannot run ozone-recon compose due to syntax error

Signed-off-by: Anu Engineer 
(cherry picked from commit e6d240dc91004c468533b523358849a2611ed757)
---
 .../main/compose/ozone-recon/docker-compose.yaml   |  2 +-
 .../dist/src/main/compose/ozone-recon/test.sh  | 30 ++
 2 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/hadoop-ozone/dist/src/main/compose/ozone-recon/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/ozone-recon/docker-compose.yaml
index e6d25ea..4cec246 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-recon/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozone-recon/docker-compose.yaml
@@ -17,7 +17,7 @@
 version: "3"
 services:
datanode:
-  image: apache/ozone-runner:
+  image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
   privileged: true #required by the profiler
   volumes:
 - ../..:/opt/hadoop
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-recon/test.sh 
b/hadoop-ozone/dist/src/main/compose/ozone-recon/test.sh
new file mode 100755
index 000..f4bfcc3
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/ozone-recon/test.sh
@@ -0,0 +1,30 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+export COMPOSE_DIR
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+start_docker_env
+
+execute_robot_test scm basic/basic.robot
+
+stop_docker_env
+
+generate_report





[hadoop] branch trunk updated: HDDS-1928. Cannot run ozone-recon compose due to syntax error

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e6d240d  HDDS-1928. Cannot run ozone-recon compose due to syntax error
e6d240d is described below

commit e6d240dc91004c468533b523358849a2611ed757
Author: Doroszlai, Attila 
AuthorDate: Wed Aug 7 20:46:17 2019 +0200

HDDS-1928. Cannot run ozone-recon compose due to syntax error

Signed-off-by: Anu Engineer 
---
 .../main/compose/ozone-recon/docker-compose.yaml   |  2 +-
 .../dist/src/main/compose/ozone-recon/test.sh  | 30 ++
 2 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/hadoop-ozone/dist/src/main/compose/ozone-recon/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/ozone-recon/docker-compose.yaml
index e6d25ea..4cec246 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-recon/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozone-recon/docker-compose.yaml
@@ -17,7 +17,7 @@
 version: "3"
 services:
datanode:
-  image: apache/ozone-runner:
+  image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
   privileged: true #required by the profiler
   volumes:
 - ../..:/opt/hadoop
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-recon/test.sh 
b/hadoop-ozone/dist/src/main/compose/ozone-recon/test.sh
new file mode 100755
index 000..f4bfcc3
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/ozone-recon/test.sh
@@ -0,0 +1,30 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+export COMPOSE_DIR
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+start_docker_env
+
+execute_robot_test scm basic/basic.robot
+
+stop_docker_env
+
+generate_report





[hadoop] branch trunk updated: HDFS-14491. More Clarity on Namenode UI Around Blocks and Replicas. Contributed by Siyao Meng.

2019-08-13 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c13ec7a  HDFS-14491. More Clarity on Namenode UI Around Blocks and 
Replicas. Contributed by Siyao Meng.
c13ec7a is described below

commit c13ec7ab666fc4878174a7cd952ca93941ae7c05
Author: Wei-Chiu Chuang 
AuthorDate: Tue Aug 13 17:14:08 2019 -0700

HDFS-14491. More Clarity on Namenode UI Around Blocks and Replicas. 
Contributed by Siyao Meng.
---
 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
index b88150b..769315e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
@@ -183,7 +183,7 @@
   Total Datanode Volume 
Failures{VolumeFailuresTotal} 
({EstimatedCapacityLostTotal|fmt_bytes})
   {@eq key=nnstat.State value="active"}
 Number of Under-Replicated 
Blocks{UnderReplicatedBlocks}
-Number of Blocks Pending 
Deletion{PendingDeletionBlocks}
+Number of Blocks Pending Deletion (including 
replicas){PendingDeletionBlocks}
   {/eq}
   Block Deletion Start 
Time{BlockDeletionStartTime|date_tostring}
 {/fs}





[hadoop] branch branch-3.1 updated: HDFS-14491. More Clarity on Namenode UI Around Blocks and Replicas. Contributed by Siyao Meng.

2019-08-13 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 6966b76  HDFS-14491. More Clarity on Namenode UI Around Blocks and 
Replicas. Contributed by Siyao Meng.
6966b76 is described below

commit 6966b76230d7c3add8f4b624e5065492ffc01fab
Author: Wei-Chiu Chuang 
AuthorDate: Tue Aug 13 17:14:08 2019 -0700

HDFS-14491. More Clarity on Namenode UI Around Blocks and Replicas. 
Contributed by Siyao Meng.

(cherry picked from commit 6a43d0fbd49b3ff1ce75a2334b51a98ae476e473)
(cherry picked from commit 4784165bb24228d13f4e738e0093ab0dade0bff1)
---
 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
index 618804c..33410da 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
@@ -183,7 +183,7 @@
   Total Datanode Volume 
Failures{VolumeFailuresTotal} 
({EstimatedCapacityLostTotal|fmt_bytes})
   {@eq key=nnstat.State value="active"}
 Number of Under-Replicated 
Blocks{UnderReplicatedBlocks}
-Number of Blocks Pending 
Deletion{PendingDeletionBlocks}
+Number of Blocks Pending Deletion (including 
replicas){PendingDeletionBlocks}
   {/eq}
   Block Deletion Start 
Time{BlockDeletionStartTime|date_tostring}
 {/fs}





[hadoop] branch branch-3.2 updated: HDFS-14491. More Clarity on Namenode UI Around Blocks and Replicas. Contributed by Siyao Meng.

2019-08-13 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new f5661b6  HDFS-14491. More Clarity on Namenode UI Around Blocks and 
Replicas. Contributed by Siyao Meng.
f5661b6 is described below

commit f5661b630af11833772d39e697c3025c62b99def
Author: Wei-Chiu Chuang 
AuthorDate: Tue Aug 13 17:14:08 2019 -0700

HDFS-14491. More Clarity on Namenode UI Around Blocks and Replicas. 
Contributed by Siyao Meng.

(cherry picked from commit 6a43d0fbd49b3ff1ce75a2334b51a98ae476e473)
---
 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
index 0c7d3b8..eeceb05 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
@@ -183,7 +183,7 @@
   Total Datanode Volume 
Failures{VolumeFailuresTotal} 
({EstimatedCapacityLostTotal|fmt_bytes})
   {@eq key=nnstat.State value="active"}
 Number of Under-Replicated 
Blocks{UnderReplicatedBlocks}
-Number of Blocks Pending 
Deletion{PendingDeletionBlocks}
+Number of Blocks Pending Deletion (including 
replicas){PendingDeletionBlocks}
   {/eq}
   Block Deletion Start 
Time{BlockDeletionStartTime|date_tostring}
 {/fs}





[hadoop] branch ozone-0.4.1 updated: HDDS-1916. Only contract tests are run in ozonefs module

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new a066698  HDDS-1916. Only contract tests are run in ozonefs module
a066698 is described below

commit a0666982fc5a9dc4bfeff2978b166535098fb75c
Author: Doroszlai, Attila 
AuthorDate: Tue Aug 6 10:52:32 2019 +0200

HDDS-1916. Only contract tests are run in ozonefs module

Signed-off-by: Anu Engineer 
(cherry picked from commit 9691117099d7914c6297b0e4ea3852341775fb15)
---
 hadoop-ozone/ozonefs/pom.xml | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/hadoop-ozone/ozonefs/pom.xml b/hadoop-ozone/ozonefs/pom.xml
index 8ef886f..3eedf48 100644
--- a/hadoop-ozone/ozonefs/pom.xml
+++ b/hadoop-ozone/ozonefs/pom.xml
@@ -68,7 +68,11 @@
 
   
 ITestOzoneContract*.java
-
+**/Test*.java
+**/*Test.java
+**/*Tests.java
+**/*TestCase.java
+  
 
   
   





[hadoop] branch trunk updated: HDDS-1659. Define the process to add proposal/design docs to the Ozone subproject (#950)

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 50a22b6  HDDS-1659. Define the process to add proposal/design docs to 
the Ozone subproject (#950)
50a22b6 is described below

commit 50a22b66c0292d37984460991a680d9d3e8c862c
Author: Elek, Márton 
AuthorDate: Wed Aug 14 02:10:36 2019 +0200

HDDS-1659. Define the process to add proposal/design docs to the Ozone 
subproject (#950)

* HDDS-1659. Define the process to add proposal/design docs to the Ozone 
subproject
 * Remove Site improvements to display proposals
 * adding license header
 * clarify the support of the existing method
---
 .../content/design/ozone-enhancement-proposals.md  | 197 +
 1 file changed, 197 insertions(+)

diff --git a/hadoop-hdds/docs/content/design/ozone-enhancement-proposals.md 
b/hadoop-hdds/docs/content/design/ozone-enhancement-proposals.md
new file mode 100644
index 000..cc7569e
--- /dev/null
+++ b/hadoop-hdds/docs/content/design/ozone-enhancement-proposals.md
@@ -0,0 +1,197 @@
+---
+title: Ozone Enhancement Proposals
+summary: Definition of the process to share new technical proposals with the 
Ozone community.
+date: 2019-06-07
+jira: HDDS-1659
+status: accepted
+author: Anu Engineer, Marton Elek
+---
+
+
+## Problem statement
+
+Some of the bigger features require well-defined plans before implementation. Until now this was managed by uploading PDF design docs to selected JIRAs. There are multiple problems with the current practice.
+
+ 1. There is no easy way to find existing up-to-date and outdated design docs.
+ 2. Design docs usually have a better description of the problem than the user docs.
+ 3. We need better tools to discuss the design docs during the development phase of the doc.
+
+We propose to follow the same process we have now, but instead of uploading a PDF to the JIRA, create a PR to merge the proposal document into the documentation project.
+
+## Non-goals
+
+ * Modify the existing workflow or approval process
+ * Migrate existing documents
+ * Make it harder to create design docs (it should remain easy to create proposals for any kind of task)
+ * Define how the design docs are handled/created *before* publication (this proposal is about the publishing process)
+
+## Proposed solution
+
+ * Open a dedicated Jira (`HDDS-*` but with specific component)
+ * Use a standard name prefix in the jira (easy to filter on the mailing list): `[OEP]`
+ * Create a PR to add the design doc to the current documentation
+   * The content of the design can be added to the documentation (recommended)
+   * Or it can be added as an external reference
+ * The design doc (or the summary with the reference) will be merged into the design doc folder `hadoop-hdds/docs/content/design` (it will be part of the docs)
+ * Discuss it as before (lazy consensus, except if somebody calls for a real vote)
+ * Design docs can be updated according to changes made during the implementation
+ * Only the implemented design docs will be visible as part of the design docs
+
+
+As a result, all the design docs can be listed under the documentation page.
+
+A good design doc has the following properties:
+
+ 1. Publicly available for anybody (please try to avoid services which are available only with registration, e.g. Google Docs)
+ 2. Archived for the future (commit it to the source tree OR use Apache jira or wiki)
+ 3. Editable later (the best format is markdown; RTF is also good. PDF has a limitation: it's very hard to reuse the text or create an updated design doc)
+ 4. Well structured, to make it easy to comment on any part of the document (markdown files which are part of the pull request can be commented on in the PR line by line)
+
+
+### Example 1: Design doc as a markdown file
+
+The easiest way to create a design doc is to create a new markdown file in a PR and merge it into `hadoop-hdds/docs/content/design`.
+
+ 1. Publicly available: YES, it can be linked from Apache git or GitHub
+ 2. Archived: YES, and it's also versioned. All the change history can be tracked.
+ 3. Editable later: YES, as it's just a simple text file
+ 4. Commentable: YES, comments can be added to each line.
+
+### Example 2: Design doc as a PDF
+
+A very common practice today is to create the design doc on Google Docs and upload it to the JIRA.
+
+ 1. Publicly available: YES, anybody can download it from the Jira.
+ 2. Archived: YES, it's available from Apache infra.
+ 3. Editable: NO, it's harder to reuse the text to import into the docs or create a new design doc.
+ 4. Commentable: PARTIAL, not as easy as a text file or the original Google Docs, but a good structure with numbered sections may help.
+
+
+### The format
+
+While the first version (markdown files) is the most powerful, the second version 

[hadoop] branch branch-3.1 updated: HDFS-14423. Percent (%) and plus (+) characters no longer work in WebHDFS.

2019-08-13 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new e66ad19  HDFS-14423. Percent (%) and plus (+) characters no longer 
work in WebHDFS.
e66ad19 is described below

commit e66ad193950df02c148b6c1b93298234c4f3ce7a
Author: Masatake Iwasaki 
AuthorDate: Mon Aug 12 12:07:16 2019 +0900

HDFS-14423. Percent (%) and plus (+) characters no longer work in WebHDFS.

Signed-off-by: Masatake Iwasaki 
(cherry picked from commit da0006fe0473e353ee2d489156248a01aa982dfd)

 Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java

(cherry picked from commit d7ca016d63d89e5c8377a035f93485a7c77c3430)
---
 .../java/org/apache/hadoop/http/HttpServer2.java   | 15 
 .../apache/hadoop/hdfs/web/WebHdfsFileSystem.java  | 42 +-
 .../datanode/web/webhdfs/WebHdfsHandler.java   |  3 +-
 .../hdfs/server/namenode/NameNodeHttpServer.java   |  6 +++-
 .../web/resources/NamenodeWebHdfsMethods.java  |  5 +--
 .../org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java | 38 
 6 files changed, 55 insertions(+), 54 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
index 7452b0b..02bd383 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
@@ -786,12 +786,27 @@ public final class HttpServer2 implements FilterContainer 
{
*/
   public void addJerseyResourcePackage(final String packageName,
   final String pathSpec) {
+addJerseyResourcePackage(packageName, pathSpec,
+Collections.emptyMap());
+  }
+
+  /**
+   * Add a Jersey resource package.
+   * @param packageName The Java package name containing the Jersey resource.
+   * @param pathSpec The path spec for the servlet
+   * @param params properties and features for ResourceConfig
+   */
+  public void addJerseyResourcePackage(final String packageName,
+  final String pathSpec, Map<String, String> params) {
 LOG.info("addJerseyResourcePackage: packageName=" + packageName
 + ", pathSpec=" + pathSpec);
 final ServletHolder sh = new ServletHolder(ServletContainer.class);
 sh.setInitParameter("com.sun.jersey.config.property.resourceConfigClass",
 "com.sun.jersey.api.core.PackagesResourceConfig");
 sh.setInitParameter("com.sun.jersey.config.property.packages", 
packageName);
+for (Map.Entry<String, String> entry : params.entrySet()) {
+  sh.setInitParameter(entry.getKey(), entry.getValue());
+}
 webAppContext.addServlet(sh, pathSpec);
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index c74577a..99cab37 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -37,8 +37,6 @@ import java.net.InetSocketAddress;
 import java.net.MalformedURLException;
 import java.net.URI;
 import java.net.URL;
-import java.net.URLDecoder;
-import java.net.URLEncoder;
 import java.nio.charset.StandardCharsets;
 import java.security.PrivilegedExceptionAction;
 import java.util.ArrayList;
@@ -146,8 +144,6 @@ public class WebHdfsFileSystem extends FileSystem
   public static final String EZ_HEADER = "X-Hadoop-Accept-EZ";
   public static final String FEFINFO_HEADER = "X-Hadoop-feInfo";
 
-  public static final String SPECIAL_FILENAME_CHARACTERS_REGEX = ".*[;+%].*";
-
   /**
* Default connection factory may be overridden in tests to use smaller
* timeout values
@@ -603,44 +599,8 @@ public class WebHdfsFileSystem extends FileSystem
   final Param... parameters) throws IOException {
 //initialize URI path and query
 
-Path encodedFSPath = fspath;
-if (fspath != null) {
-  URI fspathUri = fspath.toUri();
-  String fspathUriDecoded = fspathUri.getPath();
-  boolean pathAlreadyEncoded = false;
-  try {
-fspathUriDecoded = URLDecoder.decode(fspathUri.getPath(), "UTF-8");
-//below condition check added as part of fixing HDFS-14323 to make
-//sure pathAlreadyEncoded is not set in the case the input url does
-//not have any encoded sequence already.This will help pulling data
-//from 2.x hadoop cluster to 3.x using 3.x distcp client operation
-if(!fspathUri.getPath().equals(fspathUriDecoded)) {
-

[hadoop] branch trunk updated: HDDS-1916. Only contract tests are run in ozonefs module

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 9691117  HDDS-1916. Only contract tests are run in ozonefs module
9691117 is described below

commit 9691117099d7914c6297b0e4ea3852341775fb15
Author: Doroszlai, Attila 
AuthorDate: Tue Aug 6 10:52:32 2019 +0200

HDDS-1916. Only contract tests are run in ozonefs module

Signed-off-by: Anu Engineer 
---
 hadoop-ozone/ozonefs/pom.xml | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/hadoop-ozone/ozonefs/pom.xml b/hadoop-ozone/ozonefs/pom.xml
index 02a5640..fdd27b0 100644
--- a/hadoop-ozone/ozonefs/pom.xml
+++ b/hadoop-ozone/ozonefs/pom.xml
@@ -68,7 +68,11 @@
 
   
 ITestOzoneContract*.java
-
+**/Test*.java
+**/*Test.java
+**/*Tests.java
+**/*TestCase.java
+  
 
   
   





[hadoop] branch branch-3.2 updated: HDFS-14423. Percent (%) and plus (+) characters no longer work in WebHDFS.

2019-08-13 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new d7ca016  HDFS-14423. Percent (%) and plus (+) characters no longer 
work in WebHDFS.
d7ca016 is described below

commit d7ca016d63d89e5c8377a035f93485a7c77c3430
Author: Masatake Iwasaki 
AuthorDate: Mon Aug 12 12:07:16 2019 +0900

HDFS-14423. Percent (%) and plus (+) characters no longer work in WebHDFS.

Signed-off-by: Masatake Iwasaki 
(cherry picked from commit da0006fe0473e353ee2d489156248a01aa982dfd)

 Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
---
 .../java/org/apache/hadoop/http/HttpServer2.java   | 15 
 .../apache/hadoop/hdfs/web/WebHdfsFileSystem.java  | 42 +-
 .../datanode/web/webhdfs/WebHdfsHandler.java   |  3 +-
 .../hdfs/server/namenode/NameNodeHttpServer.java   |  6 +++-
 .../web/resources/NamenodeWebHdfsMethods.java  |  5 +--
 .../org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java | 38 
 6 files changed, 55 insertions(+), 54 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
index 4b67b63..728ce2a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
@@ -791,12 +791,27 @@ public final class HttpServer2 implements FilterContainer 
{
*/
   public void addJerseyResourcePackage(final String packageName,
   final String pathSpec) {
+addJerseyResourcePackage(packageName, pathSpec,
+Collections.emptyMap());
+  }
+
+  /**
+   * Add a Jersey resource package.
+   * @param packageName The Java package name containing the Jersey resource.
+   * @param pathSpec The path spec for the servlet
+   * @param params properties and features for ResourceConfig
+   */
+  public void addJerseyResourcePackage(final String packageName,
+  final String pathSpec, Map<String, String> params) {
 LOG.info("addJerseyResourcePackage: packageName=" + packageName
 + ", pathSpec=" + pathSpec);
 final ServletHolder sh = new ServletHolder(ServletContainer.class);
 sh.setInitParameter("com.sun.jersey.config.property.resourceConfigClass",
 "com.sun.jersey.api.core.PackagesResourceConfig");
 sh.setInitParameter("com.sun.jersey.config.property.packages", 
packageName);
+for (Map.Entry<String, String> entry : params.entrySet()) {
+  sh.setInitParameter(entry.getKey(), entry.getValue());
+}
 webAppContext.addServlet(sh, pathSpec);
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index 70508ce..b316bf1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -37,8 +37,6 @@ import java.net.InetSocketAddress;
 import java.net.MalformedURLException;
 import java.net.URI;
 import java.net.URL;
-import java.net.URLDecoder;
-import java.net.URLEncoder;
 import java.nio.charset.StandardCharsets;
 import java.security.PrivilegedExceptionAction;
 import java.util.ArrayList;
@@ -147,8 +145,6 @@ public class WebHdfsFileSystem extends FileSystem
   public static final String EZ_HEADER = "X-Hadoop-Accept-EZ";
   public static final String FEFINFO_HEADER = "X-Hadoop-feInfo";
 
-  public static final String SPECIAL_FILENAME_CHARACTERS_REGEX = ".*[;+%].*";
-
   /**
* Default connection factory may be overridden in tests to use smaller
* timeout values
@@ -603,44 +599,8 @@ public class WebHdfsFileSystem extends FileSystem
   final Param... parameters) throws IOException {
 //initialize URI path and query
 
-Path encodedFSPath = fspath;
-if (fspath != null) {
-  URI fspathUri = fspath.toUri();
-  String fspathUriDecoded = fspathUri.getPath();
-  boolean pathAlreadyEncoded = false;
-  try {
-fspathUriDecoded = URLDecoder.decode(fspathUri.getPath(), "UTF-8");
-//below condition check added as part of fixing HDFS-14323 to make
-//sure pathAlreadyEncoded is not set in the case the input url does
-//not have any encoded sequence already.This will help pulling data
-//from 2.x hadoop cluster to 3.x using 3.x distcp client operation
-if(!fspathUri.getPath().equals(fspathUriDecoded)) {
-  pathAlreadyEncoded = true;
-}
-  } catch 

[hadoop] branch trunk updated: HDFS-14625. Make DefaultAuditLogger class in FSnamesystem to Abstract. Contributed by hemanthboyina.

2019-08-13 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 633b7c1  HDFS-14625. Make DefaultAuditLogger class in FSnamesystem to 
Abstract. Contributed by hemanthboyina.
633b7c1 is described below

commit 633b7c1cfecde6166899449efae6326ee03cd8c4
Author: Wei-Chiu Chuang 
AuthorDate: Tue Aug 13 16:50:49 2019 -0700

HDFS-14625. Make DefaultAuditLogger class in FSnamesystem to Abstract. 
Contributed by hemanthboyina.
---
 .../hdfs/server/namenode/DefaultAuditLogger.java   | 93 ++
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  | 48 ++-
 .../hdfs/server/namenode/TestAuditLogAtDebug.java  |  4 +-
 .../hdfs/server/namenode/TestFSNamesystem.java | 13 +--
 4 files changed, 109 insertions(+), 49 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DefaultAuditLogger.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DefaultAuditLogger.java
new file mode 100644
index 000..9ac0bec
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DefaultAuditLogger.java
@@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.net.InetAddress;
+import java.util.HashSet;
+import java.util.Set;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager;
+import org.apache.hadoop.ipc.CallerContext;
+import org.apache.hadoop.security.UserGroupInformation;
+
+/**
+ * This class provides an interface for Namenode and Router to Audit events
+ * information. This class can be extended and can be used when no access 
logger
+ * is defined in the config file.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public abstract class DefaultAuditLogger extends HdfsAuditLogger {
+  protected static final ThreadLocal<StringBuilder> STRING_BUILDER =
+  new ThreadLocal<StringBuilder>() {
+@Override
+protected StringBuilder initialValue() {
+  return new StringBuilder();
+}
+  };
+
+  protected volatile boolean isCallerContextEnabled;
+
+  /** The maximum bytes a caller context string can have. */
+  protected int callerContextMaxLen;
+  protected int callerSignatureMaxLen;
+
+  /** adds a tracking ID for all audit log events. */
+  protected boolean logTokenTrackingId;
+
+  /** List of commands to provide debug messages. */
+  protected Set<String> debugCmdSet = new HashSet<>();
+
+  /**
+   * Enable or disable CallerContext.
+   *
+   * @param value true, enable CallerContext, otherwise false to disable it.
+   */
+  void setCallerContextEnabled(final boolean value) {
+isCallerContextEnabled = value;
+  }
+
+  /**
+   * Get the value indicating if CallerContext is enabled.
+   *
+   * @return true, if CallerContext is enabled, otherwise false, if it's
+   * disabled.
+   */
+  boolean getCallerContextEnabled() {
+return isCallerContextEnabled;
+  }
+
+  public abstract void initialize(Configuration conf);
+
+  public abstract void logAuditMessage(String message);
+
+  public abstract void logAuditEvent(boolean succeeded, String userName,
+  InetAddress addr, String cmd, String src, String dst, FileStatus status,
+  UserGroupInformation ugi, DelegationTokenSecretManager dtSecretManager);
+
+  public abstract void logAuditEvent(boolean succeeded, String userName,
+  InetAddress addr, String cmd, String src, String dst, FileStatus status,
+  CallerContext callerContext, UserGroupInformation ugi,
+  DelegationTokenSecretManager dtSecretManager);
+
+}
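
A hedged sketch of a concrete logger built on the new abstract class (illustrative only, NOT FSNamesystem's built-in implementation; imports mirror those at the top of the file above, plus SLF4J):

    public class SketchAuditLogger extends DefaultAuditLogger {
      private static final org.slf4j.Logger LOG =
          org.slf4j.LoggerFactory.getLogger("hdfs.audit.sketch");

      @Override
      public void initialize(Configuration conf) {
        // A real implementation would read caller-context limits etc. from conf.
      }

      @Override
      public void logAuditMessage(String message) {
        LOG.info(message);
      }

      @Override
      public void logAuditEvent(boolean succeeded, String userName, InetAddress addr,
          String cmd, String src, String dst, FileStatus status,
          UserGroupInformation ugi, DelegationTokenSecretManager dtSecretManager) {
        // Delegate to the caller-context variant with no context.
        logAuditEvent(succeeded, userName, addr, cmd, src, dst, status,
            null /* callerContext */, ugi, dtSecretManager);
      }

      @Override
      public void logAuditEvent(boolean succeeded, String userName, InetAddress addr,
          String cmd, String src, String dst, FileStatus status,
          CallerContext callerContext, UserGroupInformation ugi,
          DelegationTokenSecretManager dtSecretManager) {
        // Reuse the per-thread builder the abstract class provides.
        StringBuilder sb = STRING_BUILDER.get();
        sb.setLength(0);
        sb.append("allowed=").append(succeeded)
            .append("\tugi=").append(userName)
            .append("\tip=").append(addr)
            .append("\tcmd=").append(cmd)
            .append("\tsrc=").append(src)
            .append("\tdst=").append(dst);
        logAuditMessage(sb.toString());
      }
    }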
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 

[hadoop] branch branch-3.2 updated: HDFS-14665. HttpFS: LISTSTATUS response is missing HDFS-specific fields (#1267) Contributed by Siyao Meng.

2019-08-13 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 88aece4  HDFS-14665. HttpFS: LISTSTATUS response is missing 
HDFS-specific fields (#1267) Contributed by Siyao Meng.
88aece4 is described below

commit 88aece40831a695c0871a056a986c04edac6ea44
Author: Siyao Meng <50227127+smen...@users.noreply.github.com>
AuthorDate: Tue Aug 13 16:27:57 2019 -0700

HDFS-14665. HttpFS: LISTSTATUS response is missing HDFS-specific fields 
(#1267) Contributed by Siyao Meng.

(cherry picked from commit 6ae8bc3a4a07c6b4e7060362b749be8c7afe0560)
---
 .../org/apache/hadoop/fs/http/client/HttpFSFileSystem.java |  3 +++
 .../org/apache/hadoop/fs/http/server/FSOperations.java | 11 +++
 .../apache/hadoop/fs/http/client/BaseTestHttpFSWith.java   | 14 ++
 3 files changed, 28 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
index 1c1b93b..1efafe7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
@@ -177,7 +177,10 @@ public class HttpFSFileSystem extends FileSystem
   public static final String ACCESS_TIME_JSON = "accessTime";
   public static final String MODIFICATION_TIME_JSON = "modificationTime";
   public static final String BLOCK_SIZE_JSON = "blockSize";
+  public static final String CHILDREN_NUM_JSON = "childrenNum";
+  public static final String FILE_ID_JSON = "fileId";
   public static final String REPLICATION_JSON = "replication";
+  public static final String STORAGEPOLICY_JSON = "storagePolicy";
   public static final String XATTRS_JSON = "XAttrs";
   public static final String XATTR_NAME_JSON = "name";
   public static final String XATTR_VALUE_JSON = "value";
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
index 7f0b5d2..3f79256 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
 import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
 import org.apache.hadoop.hdfs.web.JsonUtil;
@@ -118,6 +119,16 @@ public class FSOperations {
 fileStatus.getModificationTime());
 json.put(HttpFSFileSystem.BLOCK_SIZE_JSON, fileStatus.getBlockSize());
 json.put(HttpFSFileSystem.REPLICATION_JSON, fileStatus.getReplication());
+if (fileStatus instanceof HdfsFileStatus) {
+  // Add HDFS-specific fields to response
+  HdfsFileStatus hdfsFileStatus = (HdfsFileStatus) fileStatus;
+  json.put(HttpFSFileSystem.CHILDREN_NUM_JSON,
+  hdfsFileStatus.getChildrenNum());
+  json.put(HttpFSFileSystem.FILE_ID_JSON,
+  hdfsFileStatus.getFileId());
+  json.put(HttpFSFileSystem.STORAGEPOLICY_JSON,
+  hdfsFileStatus.getStoragePolicy());
+}
 if (fileStatus.getPermission().getAclBit()) {
   json.put(HttpFSFileSystem.ACL_BIT_JSON, true);
 }
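
A hedged illustration (not from the patch) of the extra, HDFS-specific fields a FileStatus entry in an HttpFS JSON response carries after this change, read back here with json-simple (assumed available on the HttpFS classpath; all values are invented):

    // Fragment; JSONParser.parse() declares ParseException.
    JSONObject fileStatus = (JSONObject) new JSONParser().parse(
        "{\"pathSuffix\":\"f1\",\"type\":\"FILE\",\"length\":1024,\"replication\":3,"
        + "\"childrenNum\":0,\"fileId\":16389,\"storagePolicy\":0}");
    long fileId = (Long) fileStatus.get("fileId");               // new in this change
    long childrenNum = (Long) fileStatus.get("childrenNum");     // new in this change
    long storagePolicy = (Long) fileStatus.get("storagePolicy"); // new in this change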
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
index 6d1f673..d8e1379 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
@@ -42,6 +42,7 @@ import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
 import org.apache.hadoop.hdfs.protocol.SnapshotException;
 import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
@@ -350,6 +351,7 @@ public abstract class BaseTestHttpFSWith extends 

[hadoop] branch trunk updated: HDFS-14423. Percent (%) and plus (+) characters no longer work in WebHDFS.

2019-08-13 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new da0006f  HDFS-14423. Percent (%) and plus (+) characters no longer 
work in WebHDFS.
da0006f is described below

commit da0006fe0473e353ee2d489156248a01aa982dfd
Author: Masatake Iwasaki 
AuthorDate: Mon Aug 12 12:07:16 2019 +0900

HDFS-14423. Percent (%) and plus (+) characters no longer work in WebHDFS.

Signed-off-by: Masatake Iwasaki 
---
 .../java/org/apache/hadoop/http/HttpServer2.java   | 15 
 .../apache/hadoop/hdfs/web/WebHdfsFileSystem.java  | 42 +-
 .../datanode/web/webhdfs/WebHdfsHandler.java   |  3 +-
 .../hdfs/server/namenode/NameNodeHttpServer.java   |  7 +++-
 .../web/resources/NamenodeWebHdfsMethods.java  |  5 +--
 .../org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java | 38 
 6 files changed, 56 insertions(+), 54 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
index 82d7b59..04d5da1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
@@ -818,12 +818,27 @@ public final class HttpServer2 implements FilterContainer 
{
*/
   public void addJerseyResourcePackage(final String packageName,
   final String pathSpec) {
+addJerseyResourcePackage(packageName, pathSpec,
+Collections.emptyMap());
+  }
+
+  /**
+   * Add a Jersey resource package.
+   * @param packageName The Java package name containing the Jersey resource.
+   * @param pathSpec The path spec for the servlet
+   * @param params properties and features for ResourceConfig
+   */
+  public void addJerseyResourcePackage(final String packageName,
+  final String pathSpec, Map<String, String> params) {
 LOG.info("addJerseyResourcePackage: packageName=" + packageName
 + ", pathSpec=" + pathSpec);
 final ServletHolder sh = new ServletHolder(ServletContainer.class);
 sh.setInitParameter("com.sun.jersey.config.property.resourceConfigClass",
 "com.sun.jersey.api.core.PackagesResourceConfig");
 sh.setInitParameter("com.sun.jersey.config.property.packages", 
packageName);
+for (Map.Entry<String, String> entry : params.entrySet()) {
+  sh.setInitParameter(entry.getKey(), entry.getValue());
+}
 webAppContext.addServlet(sh, pathSpec);
   }
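
A hedged sketch of how a caller can use the new overload (fragment; the init-parameter key below is a placeholder, not the one the WebHDFS change actually passes, and httpServer stands for an HttpServer2 instance):

    // Hypothetical caller, illustrative names only; needs java.util.HashMap/Map.
    Map<String, String> params = new HashMap<>();
    params.put("com.example.jersey.init.param", "value");
    httpServer.addJerseyResourcePackage(
        NamenodeWebHdfsMethods.class.getPackage().getName(),
        "/webhdfs/v1/*", params);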
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index 699eb4d..7d9e6d1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -37,8 +37,6 @@ import java.net.InetSocketAddress;
 import java.net.MalformedURLException;
 import java.net.URI;
 import java.net.URL;
-import java.net.URLDecoder;
-import java.net.URLEncoder;
 import java.nio.charset.StandardCharsets;
 import java.security.PrivilegedExceptionAction;
 import java.util.ArrayList;
@@ -147,8 +145,6 @@ public class WebHdfsFileSystem extends FileSystem
   public static final String EZ_HEADER = "X-Hadoop-Accept-EZ";
   public static final String FEFINFO_HEADER = "X-Hadoop-feInfo";
 
-  public static final String SPECIAL_FILENAME_CHARACTERS_REGEX = ".*[;+%].*";
-
   /**
* Default connection factory may be overridden in tests to use smaller
* timeout values
@@ -611,44 +607,8 @@ public class WebHdfsFileSystem extends FileSystem
   final Param... parameters) throws IOException {
 //initialize URI path and query
 
-Path encodedFSPath = fspath;
-if (fspath != null) {
-  URI fspathUri = fspath.toUri();
-  String fspathUriDecoded = fspathUri.getPath();
-  boolean pathAlreadyEncoded = false;
-  try {
-fspathUriDecoded = URLDecoder.decode(fspathUri.getPath(), "UTF-8");
-//below condition check added as part of fixing HDFS-14323 to make
-//sure pathAlreadyEncoded is not set in the case the input url does
-//not have any encoded sequence already.This will help pulling data
-//from 2.x hadoop cluster to 3.x using 3.x distcp client operation
-if(!fspathUri.getPath().equals(fspathUriDecoded)) {
-  pathAlreadyEncoded = true;
-}
-  } catch (IllegalArgumentException ex) {
-LOG.trace("Cannot decode URL encoded file", ex);
-  }
-  String[] fspathItems = fspathUriDecoded.split("/");
-
-  if (fspathItems.length > 0) {
-StringBuilder fsPathEncodedItems = 

[hadoop] branch ozone-0.4.1 updated (237a208 -> e6b744b)

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a change to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 237a208  HDDS-1891. Ozone fs shell command should work with default 
port when port number is not specified
 new 3eec5e1  HDDS-1961. 
TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky.
 new e6b744b  HDDS-1917. TestOzoneRpcClientAbstract is failing.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hadoop/ozone/TestStorageContainerManager.java  | 25 +--
 .../client/rpc/TestOzoneRpcClientAbstract.java | 52 +++---
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java |  7 ++-
 3 files changed, 43 insertions(+), 41 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 01/02: HDDS-1961. TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky.

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 3eec5e1d61918e83b1f94ebfa0d864826c03465f
Author: Nanda kumar 
AuthorDate: Tue Aug 13 22:04:03 2019 +0530

HDDS-1961. TestStorageContainerManager#testScmProcessDatanodeHeartbeat is 
flaky.

Signed-off-by: Anu Engineer 
(cherry picked from commit cb390dff87a86eae22c432576be90d39f84a6ee8)
---
 .../hadoop/ozone/TestStorageContainerManager.java  | 25 +++---
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
index 3ac5993..55b184a 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
@@ -69,6 +69,7 @@ import org.apache.hadoop.hdds.scm.server.SCMStorageConfig;
 import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
 import org.apache.hadoop.hdds.server.events.EventPublisher;
 import org.apache.hadoop.net.DNSToSwitchMapping;
+import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.net.StaticMapping;
 import org.apache.hadoop.ozone.container.ContainerTestHelper;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
@@ -500,7 +501,9 @@ public class TestStorageContainerManager {
 String scmId = UUID.randomUUID().toString();
 conf.setClass(NET_TOPOLOGY_NODE_SWITCH_MAPPING_IMPL_KEY,
 StaticMapping.class, DNSToSwitchMapping.class);
-StaticMapping.addNodeToRack(HddsUtils.getHostName(conf), "/rack1");
+StaticMapping.addNodeToRack(NetUtils.normalizeHostNames(
+Collections.singleton(HddsUtils.getHostName(conf))).get(0),
+"/rack1");
 
 final int datanodeNum = 3;
 MiniOzoneCluster cluster = MiniOzoneCluster.newBuilder(conf)
@@ -520,21 +523,17 @@ public class TestStorageContainerManager {
   Thread.sleep(heartbeatCheckerIntervalMs * 2);
 
  List<DatanodeDetails> allNodes = scm.getScmNodeManager().getAllNodes();
-  Assert.assertTrue(allNodes.size() == datanodeNum);
-  for (int i = 0; i < allNodes.size(); i++) {
+  Assert.assertEquals(datanodeNum, allNodes.size());
+  for (DatanodeDetails node : allNodes) {
 DatanodeInfo datanodeInfo = (DatanodeInfo) scm.getScmNodeManager()
-.getNodeByUuid(allNodes.get(i).getUuidString());
-Assert.assertTrue((datanodeInfo.getLastHeartbeatTime() - start)
->= heartbeatCheckerIntervalMs);
-Assert.assertTrue(datanodeInfo.getUuidString()
-.equals(datanodeInfo.getNetworkName()));
-Assert.assertTrue(datanodeInfo.getNetworkLocation()
-.equals("/rack1"));
+.getNodeByUuid(node.getUuidString());
+Assert.assertTrue(datanodeInfo.getLastHeartbeatTime() > start);
+Assert.assertEquals(datanodeInfo.getUuidString(),
+datanodeInfo.getNetworkName());
+Assert.assertEquals("/rack1", datanodeInfo.getNetworkLocation());
   }
 } finally {
-  if (cluster != null) {
-cluster.shutdown();
-  }
+  cluster.shutdown();
 }
   }
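
A side note on the assertion changes above: JUnit's assertEquals reports the expected and actual values on failure, which makes flaky-test diagnosis much easier than a bare assertTrue over a boolean comparison. A tiny self-contained illustration, unrelated to any Hadoop class:

    import java.util.Arrays;
    import java.util.List;
    import org.junit.Assert;

    public class AssertionStyleExample {
      public static void main(String[] args) {
        List<String> list = Arrays.asList("a", "b", "c");
        // Preferred: on failure this prints "expected:<3> but was:<N>".
        Assert.assertEquals(3, list.size());
        // Discouraged: on failure this only throws a bare AssertionError.
        Assert.assertTrue(list.size() == 3);
      }
    }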
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 02/02: HDDS-1917. TestOzoneRpcClientAbstract is failing.

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit e6b744b8f8519342a4e4bdc15cb4088e13e855c6
Author: Nanda kumar 
AuthorDate: Tue Aug 6 14:32:13 2019 +0530

HDDS-1917. TestOzoneRpcClientAbstract is failing.

Signed-off-by: Anu Engineer 
(cherry picked from commit 3cff73aff47695f6a48a36878191409f050f)
---
 .../client/rpc/TestOzoneRpcClientAbstract.java | 52 +++---
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java |  7 ++-
 2 files changed, 31 insertions(+), 28 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
index 4e426ba..c203fec 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
@@ -26,6 +26,7 @@ import java.util.HashMap;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
 import java.util.TreeMap;
 import java.util.UUID;
 import java.util.concurrent.CountDownLatch;
@@ -2533,31 +2534,30 @@ public abstract class TestOzoneRpcClientAbstract {
   ACLType.READ_ACL, ACCESS);
   // Verify that operation successful.
   assertTrue(store.addAcl(ozObj, newAcl));
-  List<OzoneAcl> acls = store.getAcl(ozObj);
-
-  assertTrue(acls.size() == expectedAcls.size());
-  boolean aclVerified = false;
-  for(OzoneAcl acl: acls) {
-if(acl.getName().equals(newAcl.getName())) {
-  assertTrue(acl.getAclList().contains(ACLType.READ_ACL));
-  aclVerified = true;
-}
-  }
-  assertTrue("New acl expected but not found.", aclVerified);
-  aclVerified = false;
+
+  assertEquals(expectedAcls.size(), store.getAcl(ozObj).size());
+  final Optional<OzoneAcl> readAcl = store.getAcl(ozObj).stream()
+  .filter(acl -> acl.getName().equals(newAcl.getName())
+  && acl.getType().equals(newAcl.getType()))
+  .findFirst();
+  assertTrue("New acl expected but not found.", readAcl.isPresent());
+  assertTrue("READ_ACL should exist in current acls:"
+  + readAcl.get(),
+  readAcl.get().getAclList().contains(ACLType.READ_ACL));
+
 
   // Case:2 Remove newly added acl permission.
   assertTrue(store.removeAcl(ozObj, newAcl));
-  acls = store.getAcl(ozObj);
-  assertTrue(acls.size() == expectedAcls.size());
-  for(OzoneAcl acl: acls) {
-if(acl.getName().equals(newAcl.getName())) {
-  assertFalse("READ_ACL should not exist in current acls:" +
-  acls, acl.getAclList().contains(ACLType.READ_ACL));
-  aclVerified = true;
-}
-  }
-  assertTrue("New acl expected but not found.", aclVerified);
+
+  assertEquals(expectedAcls.size(), store.getAcl(ozObj).size());
+  final Optional<OzoneAcl> nonReadAcl = store.getAcl(ozObj).stream()
+  .filter(acl -> acl.getName().equals(newAcl.getName())
+  && acl.getType().equals(newAcl.getType()))
+  .findFirst();
+  assertTrue("New acl expected but not found.", nonReadAcl.isPresent());
+  assertFalse("READ_ACL should not exist in current acls:"
+  + nonReadAcl.get(),
+  nonReadAcl.get().getAclList().contains(ACLType.READ_ACL));
 } else {
   fail("Default acl should not be empty.");
 }
@@ -2570,17 +2570,17 @@ public abstract class TestOzoneRpcClientAbstract {
   store.removeAcl(ozObj, a);
 }
 List<OzoneAcl> newAcls = store.getAcl(ozObj);
-assertTrue(newAcls.size() == 0);
+assertEquals(0, newAcls.size());
 
 // Add acl's and then call getAcl.
 int aclCount = 0;
 for (OzoneAcl a : expectedAcls) {
   aclCount++;
   assertTrue(store.addAcl(ozObj, a));
-  assertTrue(store.getAcl(ozObj).size() == aclCount);
+  assertEquals(aclCount, store.getAcl(ozObj).size());
 }
 newAcls = store.getAcl(ozObj);
-assertTrue(newAcls.size() == expectedAcls.size());
+assertEquals(expectedAcls.size(), newAcls.size());
 List<OzoneAcl> finalNewAcls = newAcls;
 expectedAcls.forEach(a -> assertTrue(finalNewAcls.contains(a)));
 
@@ -2591,7 +2591,7 @@ public abstract class TestOzoneRpcClientAbstract {
 ACLType.ALL, ACCESS);
 store.setAcl(ozObj, Arrays.asList(ua, ug));
 newAcls = store.getAcl(ozObj);
-assertTrue(newAcls.size() == 2);
+assertEquals(2, newAcls.size());
 assertTrue(newAcls.contains(ua));
 assertTrue(newAcls.contains(ug));
   }
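
The refactor above replaces a manual flag-and-loop with a Stream filter plus Optional, which finds the matching ACL and asserts its presence in one step. A generic sketch of the same idiom, detached from the Ozone client API (the Acl class below is a stand-in type, not an Ozone class):

    import java.util.Arrays;
    import java.util.List;
    import java.util.Optional;

    public class FindFirstExample {
      // Stand-in for OzoneAcl, just enough to show the filter/findFirst idiom.
      static final class Acl {
        final String name;
        final String type;
        Acl(String name, String type) { this.name = name; this.type = type; }
      }

      public static void main(String[] args) {
        List<Acl> acls = Arrays.asList(
            new Acl("user1", "USER"), new Acl("group1", "GROUP"));
        Optional<Acl> match = acls.stream()
            .filter(a -> a.name.equals("user1") && a.type.equals("USER"))
            .findFirst();
        // isPresent() replaces the old "aclVerified" boolean flag.
        System.out.println("found = " + match.isPresent());
      }
    }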
diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 

[hadoop] branch trunk updated: HDFS-14665. HttpFS: LISTSTATUS response is missing HDFS-specific fields (#1267) Contributed by Siyao Meng.

2019-08-13 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 6ae8bc3  HDFS-14665. HttpFS: LISTSTATUS response is missing 
HDFS-specific fields (#1267) Contributed by Siyao Meng.
6ae8bc3 is described below

commit 6ae8bc3a4a07c6b4e7060362b749be8c7afe0560
Author: Siyao Meng <50227127+smen...@users.noreply.github.com>
AuthorDate: Tue Aug 13 16:27:57 2019 -0700

HDFS-14665. HttpFS: LISTSTATUS response is missing HDFS-specific fields 
(#1267) Contributed by Siyao Meng.
---
 .../org/apache/hadoop/fs/http/client/HttpFSFileSystem.java |  3 +++
 .../org/apache/hadoop/fs/http/server/FSOperations.java | 11 +++
 .../apache/hadoop/fs/http/client/BaseTestHttpFSWith.java   | 14 ++
 3 files changed, 28 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
index 17c75d5..ac909dd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
@@ -177,7 +177,10 @@ public class HttpFSFileSystem extends FileSystem
   public static final String ACCESS_TIME_JSON = "accessTime";
   public static final String MODIFICATION_TIME_JSON = "modificationTime";
   public static final String BLOCK_SIZE_JSON = "blockSize";
+  public static final String CHILDREN_NUM_JSON = "childrenNum";
+  public static final String FILE_ID_JSON = "fileId";
   public static final String REPLICATION_JSON = "replication";
+  public static final String STORAGEPOLICY_JSON = "storagePolicy";
   public static final String XATTRS_JSON = "XAttrs";
   public static final String XATTR_NAME_JSON = "name";
   public static final String XATTR_VALUE_JSON = "value";
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
index 7ce8a23..043f3e1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
 import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
 import org.apache.hadoop.hdfs.web.JsonUtil;
@@ -118,6 +119,16 @@ public class FSOperations {
 fileStatus.getModificationTime());
 json.put(HttpFSFileSystem.BLOCK_SIZE_JSON, fileStatus.getBlockSize());
 json.put(HttpFSFileSystem.REPLICATION_JSON, fileStatus.getReplication());
+if (fileStatus instanceof HdfsFileStatus) {
+  // Add HDFS-specific fields to response
+  HdfsFileStatus hdfsFileStatus = (HdfsFileStatus) fileStatus;
+  json.put(HttpFSFileSystem.CHILDREN_NUM_JSON,
+  hdfsFileStatus.getChildrenNum());
+  json.put(HttpFSFileSystem.FILE_ID_JSON,
+  hdfsFileStatus.getFileId());
+  json.put(HttpFSFileSystem.STORAGEPOLICY_JSON,
+  hdfsFileStatus.getStoragePolicy());
+}
 if (fileStatus.getPermission().getAclBit()) {
   json.put(HttpFSFileSystem.ACL_BIT_JSON, true);
 }
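
For anyone consuming the HttpFS JSON, the block above adds three keys (childrenNum, fileId, storagePolicy) to each status entry whenever the underlying status is an HdfsFileStatus. A rough sketch of reading them on the client side, assuming the response has already been parsed into a Map; the parsing step, sample values, and variable names are illustrative, not from this patch:

    import java.util.HashMap;
    import java.util.Map;

    public class FileStatusJsonExample {
      public static void main(String[] args) {
        // Stand-in for one parsed FileStatus entry from a LISTSTATUS response.
        Map<String, Object> json = new HashMap<>();
        json.put("childrenNum", 2L);
        json.put("fileId", 16389L);
        json.put("storagePolicy", 0L);

        // The keys are only present when the server-side status was an
        // HdfsFileStatus, so read them defensively.
        Long fileId = (Long) json.get("fileId");
        Long childrenNum = (Long) json.get("childrenNum");
        System.out.println("fileId=" + fileId + ", childrenNum=" + childrenNum);
      }
    }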
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
index 3039426..6380c41 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
@@ -44,6 +44,7 @@ import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
 import org.apache.hadoop.hdfs.protocol.SnapshotException;
 import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
@@ -360,6 +361,7 @@ public abstract class BaseTestHttpFSWith extends 
HFSTestCase {
 
   private void testListStatus() throws Exception {
 

[hadoop] branch trunk updated (68c8184 -> 3cff73a)

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 68c8184  HDDS-1891. Ozone fs shell command should work with default 
port when port number is not specified
 new cb390df  HDDS-1961. 
TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky.
 new 3cff73a  HDDS-1917. TestOzoneRpcClientAbstract is failing.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hadoop/ozone/TestStorageContainerManager.java  |  3 +-
 .../client/rpc/TestOzoneRpcClientAbstract.java | 52 +++---
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java |  7 ++-
 3 files changed, 32 insertions(+), 30 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 01/02: HDDS-1961. TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky.

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit cb390dff87a86eae22c432576be90d39f84a6ee8
Author: Nanda kumar 
AuthorDate: Tue Aug 13 22:04:03 2019 +0530

HDDS-1961. TestStorageContainerManager#testScmProcessDatanodeHeartbeat is 
flaky.

Signed-off-by: Anu Engineer 
---
 .../test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
index 8b0af2a..55b184a 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
@@ -527,8 +527,7 @@ public class TestStorageContainerManager {
   for (DatanodeDetails node : allNodes) {
 DatanodeInfo datanodeInfo = (DatanodeInfo) scm.getScmNodeManager()
 .getNodeByUuid(node.getUuidString());
-Assert.assertTrue((datanodeInfo.getLastHeartbeatTime() - start)
->= heartbeatCheckerIntervalMs);
+Assert.assertTrue(datanodeInfo.getLastHeartbeatTime() > start);
 Assert.assertEquals(datanodeInfo.getUuidString(),
 datanodeInfo.getNetworkName());
 Assert.assertEquals("/rack1", datanodeInfo.getNetworkLocation());


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 02/02: HDDS-1917. TestOzoneRpcClientAbstract is failing.

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 3cff73aff47695f6a48a36878191409f050f
Author: Nanda kumar 
AuthorDate: Tue Aug 6 14:32:13 2019 +0530

HDDS-1917. TestOzoneRpcClientAbstract is failing.

Signed-off-by: Anu Engineer 
---
 .../client/rpc/TestOzoneRpcClientAbstract.java | 52 +++---
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java |  7 ++-
 2 files changed, 31 insertions(+), 28 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
index eb2d048..6ed4eae 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
@@ -26,6 +26,7 @@ import java.util.HashMap;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
 import java.util.TreeMap;
 import java.util.UUID;
 import java.util.concurrent.CountDownLatch;
@@ -2456,31 +2457,30 @@ public abstract class TestOzoneRpcClientAbstract {
   ACLType.READ_ACL, ACCESS);
   // Verify that operation successful.
   assertTrue(store.addAcl(ozObj, newAcl));
-  List<OzoneAcl> acls = store.getAcl(ozObj);
-
-  assertTrue(acls.size() == expectedAcls.size());
-  boolean aclVerified = false;
-  for(OzoneAcl acl: acls) {
-if(acl.getName().equals(newAcl.getName())) {
-  assertTrue(acl.getAclList().contains(ACLType.READ_ACL));
-  aclVerified = true;
-}
-  }
-  assertTrue("New acl expected but not found.", aclVerified);
-  aclVerified = false;
+
+  assertEquals(expectedAcls.size(), store.getAcl(ozObj).size());
+  final Optional<OzoneAcl> readAcl = store.getAcl(ozObj).stream()
+  .filter(acl -> acl.getName().equals(newAcl.getName())
+  && acl.getType().equals(newAcl.getType()))
+  .findFirst();
+  assertTrue("New acl expected but not found.", readAcl.isPresent());
+  assertTrue("READ_ACL should exist in current acls:"
+  + readAcl.get(),
+  readAcl.get().getAclList().contains(ACLType.READ_ACL));
+
 
   // Case:2 Remove newly added acl permission.
   assertTrue(store.removeAcl(ozObj, newAcl));
-  acls = store.getAcl(ozObj);
-  assertTrue(acls.size() == expectedAcls.size());
-  for(OzoneAcl acl: acls) {
-if(acl.getName().equals(newAcl.getName())) {
-  assertFalse("READ_ACL should not exist in current acls:" +
-  acls, acl.getAclList().contains(ACLType.READ_ACL));
-  aclVerified = true;
-}
-  }
-  assertTrue("New acl expected but not found.", aclVerified);
+
+  assertEquals(expectedAcls.size(), store.getAcl(ozObj).size());
+  final Optional<OzoneAcl> nonReadAcl = store.getAcl(ozObj).stream()
+  .filter(acl -> acl.getName().equals(newAcl.getName())
+  && acl.getType().equals(newAcl.getType()))
+  .findFirst();
+  assertTrue("New acl expected but not found.", nonReadAcl.isPresent());
+  assertFalse("READ_ACL should not exist in current acls:"
+  + nonReadAcl.get(),
+  nonReadAcl.get().getAclList().contains(ACLType.READ_ACL));
 } else {
   fail("Default acl should not be empty.");
 }
@@ -2493,17 +2493,17 @@ public abstract class TestOzoneRpcClientAbstract {
   store.removeAcl(ozObj, a);
 }
 List<OzoneAcl> newAcls = store.getAcl(ozObj);
-assertTrue(newAcls.size() == 0);
+assertEquals(0, newAcls.size());
 
 // Add acl's and then call getAcl.
 int aclCount = 0;
 for (OzoneAcl a : expectedAcls) {
   aclCount++;
   assertTrue(store.addAcl(ozObj, a));
-  assertTrue(store.getAcl(ozObj).size() == aclCount);
+  assertEquals(aclCount, store.getAcl(ozObj).size());
 }
 newAcls = store.getAcl(ozObj);
-assertTrue(newAcls.size() == expectedAcls.size());
+assertEquals(expectedAcls.size(), newAcls.size());
 List<OzoneAcl> finalNewAcls = newAcls;
 expectedAcls.forEach(a -> assertTrue(finalNewAcls.contains(a)));
 
@@ -2514,7 +2514,7 @@ public abstract class TestOzoneRpcClientAbstract {
 ACLType.ALL, ACCESS);
 store.setAcl(ozObj, Arrays.asList(ua, ug));
 newAcls = store.getAcl(ozObj);
-assertTrue(newAcls.size() == 2);
+assertEquals(2, newAcls.size());
 assertTrue(newAcls.contains(ua));
 assertTrue(newAcls.contains(ug));
   }
diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
index 3685bc7..d3e957c 100644
--- 

[hadoop] branch ozone-0.4.1 updated: HDDS-1891. Ozone fs shell command should work with default port when port number is not specified

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 237a208  HDDS-1891. Ozone fs shell command should work with default 
port when port number is not specified
237a208 is described below

commit 237a20860734232af828c949c821fee7498e2d9f
Author: Siyao Meng 
AuthorDate: Fri Aug 2 12:54:04 2019 -0700

HDDS-1891. Ozone fs shell command should work with default port when port 
number is not specified

Signed-off-by: Anu Engineer 
(cherry picked from commit 68c818415aedf672e35b8ecd9dfd0cb33c43a91e)
---
 .../hadoop/fs/ozone/BasicOzoneFileSystem.java  | 17 +++---
 .../fs/ozone/TestOzoneFileSystemWithMocks.java | 37 ++
 2 files changed, 50 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 
b/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
index 6a52746..27bc925 100644
--- 
a/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
+++ 
b/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.ozone.OmUtils;
 import org.apache.hadoop.ozone.om.exceptions.OMException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.Token;
@@ -54,6 +55,7 @@ import static 
org.apache.hadoop.fs.ozone.Constants.OZONE_DEFAULT_USER;
 import static org.apache.hadoop.fs.ozone.Constants.OZONE_USER_DIR;
 import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
 import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_SCHEME;
+
 import org.apache.http.client.utils.URIBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -85,9 +87,10 @@ public class BasicOzoneFileSystem extends FileSystem {
   private static final Pattern URL_SCHEMA_PATTERN =
   Pattern.compile("([^\\.]+)\\.([^\\.]+)\\.{0,1}(.*)");
 
-  private static final String URI_EXCEPTION_TEXT = "Ozone file system url " +
-  "should be either one of the two forms: " +
+  private static final String URI_EXCEPTION_TEXT = "Ozone file system URL " +
+  "should be one of the following formats: " +
   "o3fs://bucket.volume/key  OR " +
+  "o3fs://bucket.volume.om-host.example.com/key  OR " +
   "o3fs://bucket.volume.om-host.example.com:5678/key";
 
   @Override
@@ -113,11 +116,17 @@ public class BasicOzoneFileSystem extends FileSystem {
 String omPort = String.valueOf(-1);
 if (!isEmpty(remaining)) {
   String[] parts = remaining.split(":");
-  if (parts.length != 2) {
+  // Array length should be either 1(host) or 2(host:port)
+  if (parts.length > 2) {
 throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
   }
   omHost = parts[0];
-  omPort = parts[1];
+  if (parts.length == 2) {
+omPort = parts[1];
+  } else {
+// If port number is not specified, read it from config
+omPort = String.valueOf(OmUtils.getOmRpcPort(conf));
+  }
   if (!isNumber(omPort)) {
 throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
   }
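
In practice the change above means all three o3fs authority forms are accepted, with the OM RPC port read from configuration when it is omitted. A simplified mirror of the host[:port] handling, as a sketch only; the hostnames and the default port value are made-up examples, and in the real code the default comes from OmUtils.getOmRpcPort(conf):

    public class O3fsAuthorityExample {
      // Simplified mirror of the host[:port] parsing added above; defaultPort
      // stands in for whatever OmUtils.getOmRpcPort(conf) would return.
      static String[] resolve(String remaining, int defaultPort) {
        String[] parts = remaining.split(":");
        if (parts.length > 2) {
          throw new IllegalArgumentException("expected host or host:port");
        }
        String host = parts[0];
        String port = parts.length == 2 ? parts[1] : String.valueOf(defaultPort);
        return new String[] {host, port};
      }

      public static void main(String[] args) {
        // o3fs://bucket.volume.om-host.example.com/key       -> port from config
        System.out.println(String.join(":", resolve("om-host.example.com", 9862)));
        // o3fs://bucket.volume.om-host.example.com:5678/key  -> explicit port
        System.out.println(String.join(":", resolve("om-host.example.com:5678", 9862)));
      }
    }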
diff --git 
a/hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
 
b/hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
index 7109327..51fd3c8 100644
--- 
a/hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
+++ 
b/hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
@@ -27,6 +27,7 @@ import java.net.URI;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.OmUtils;
 import org.apache.hadoop.ozone.client.ObjectStore;
 import org.apache.hadoop.ozone.client.OzoneBucket;
 import org.apache.hadoop.ozone.client.OzoneClient;
@@ -79,6 +80,42 @@ public class TestOzoneFileSystemWithMocks {
   }
 
   @Test
+  public void testFSUriWithHostPortUnspecified() throws Exception {
+Configuration conf = new OzoneConfiguration();
+final int omPort = OmUtils.getOmRpcPort(conf);
+
+OzoneClient ozoneClient = mock(OzoneClient.class);
+ObjectStore objectStore = mock(ObjectStore.class);
+OzoneVolume volume = mock(OzoneVolume.class);
+OzoneBucket bucket = mock(OzoneBucket.class);
+
+when(ozoneClient.getObjectStore()).thenReturn(objectStore);
+when(objectStore.getVolume(eq("volume1"))).thenReturn(volume);
+

[hadoop] branch trunk updated: HDDS-1891. Ozone fs shell command should work with default port when port number is not specified

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 68c8184  HDDS-1891. Ozone fs shell command should work with default 
port when port number is not specified
68c8184 is described below

commit 68c818415aedf672e35b8ecd9dfd0cb33c43a91e
Author: Siyao Meng 
AuthorDate: Fri Aug 2 12:54:04 2019 -0700

HDDS-1891. Ozone fs shell command should work with default port when port 
number is not specified

Signed-off-by: Anu Engineer 
---
 .../hadoop/fs/ozone/BasicOzoneFileSystem.java  | 17 +++---
 .../fs/ozone/TestOzoneFileSystemWithMocks.java | 37 ++
 2 files changed, 50 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 
b/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
index 6a52746..27bc925 100644
--- 
a/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
+++ 
b/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.ozone.OmUtils;
 import org.apache.hadoop.ozone.om.exceptions.OMException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.Token;
@@ -54,6 +55,7 @@ import static 
org.apache.hadoop.fs.ozone.Constants.OZONE_DEFAULT_USER;
 import static org.apache.hadoop.fs.ozone.Constants.OZONE_USER_DIR;
 import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
 import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_SCHEME;
+
 import org.apache.http.client.utils.URIBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -85,9 +87,10 @@ public class BasicOzoneFileSystem extends FileSystem {
   private static final Pattern URL_SCHEMA_PATTERN =
   Pattern.compile("([^\\.]+)\\.([^\\.]+)\\.{0,1}(.*)");
 
-  private static final String URI_EXCEPTION_TEXT = "Ozone file system url " +
-  "should be either one of the two forms: " +
+  private static final String URI_EXCEPTION_TEXT = "Ozone file system URL " +
+  "should be one of the following formats: " +
   "o3fs://bucket.volume/key  OR " +
+  "o3fs://bucket.volume.om-host.example.com/key  OR " +
   "o3fs://bucket.volume.om-host.example.com:5678/key";
 
   @Override
@@ -113,11 +116,17 @@ public class BasicOzoneFileSystem extends FileSystem {
 String omPort = String.valueOf(-1);
 if (!isEmpty(remaining)) {
   String[] parts = remaining.split(":");
-  if (parts.length != 2) {
+  // Array length should be either 1(host) or 2(host:port)
+  if (parts.length > 2) {
 throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
   }
   omHost = parts[0];
-  omPort = parts[1];
+  if (parts.length == 2) {
+omPort = parts[1];
+  } else {
+// If port number is not specified, read it from config
+omPort = String.valueOf(OmUtils.getOmRpcPort(conf));
+  }
   if (!isNumber(omPort)) {
 throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
   }
diff --git 
a/hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
 
b/hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
index 7109327..51fd3c8 100644
--- 
a/hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
+++ 
b/hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
@@ -27,6 +27,7 @@ import java.net.URI;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.OmUtils;
 import org.apache.hadoop.ozone.client.ObjectStore;
 import org.apache.hadoop.ozone.client.OzoneBucket;
 import org.apache.hadoop.ozone.client.OzoneClient;
@@ -79,6 +80,42 @@ public class TestOzoneFileSystemWithMocks {
   }
 
   @Test
+  public void testFSUriWithHostPortUnspecified() throws Exception {
+Configuration conf = new OzoneConfiguration();
+final int omPort = OmUtils.getOmRpcPort(conf);
+
+OzoneClient ozoneClient = mock(OzoneClient.class);
+ObjectStore objectStore = mock(ObjectStore.class);
+OzoneVolume volume = mock(OzoneVolume.class);
+OzoneBucket bucket = mock(OzoneBucket.class);
+
+when(ozoneClient.getObjectStore()).thenReturn(objectStore);
+when(objectStore.getVolume(eq("volume1"))).thenReturn(volume);
+when(volume.getBucket("bucket1")).thenReturn(bucket);
+
+

[hadoop] branch ozone-0.4.1 updated: HDDS-1488. Scm cli command to start/stop replication manager.

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 0299f8c  HDDS-1488. Scm cli command to start/stop replication manager.
0299f8c is described below

commit 0299f8c7c5d06d8ea36fd960e96d0606092dc4a6
Author: Nanda kumar 
AuthorDate: Sat Aug 3 19:01:29 2019 +0530

HDDS-1488. Scm cli command to start/stop replication manager.

Signed-off-by: Anu Engineer 
(cherry picked from commit 69b74e90167041f561bfcccf5a4e46ea208c467e)
---
 .../hdds/scm/client/ContainerOperationClient.java  | 17 ++
 .../apache/hadoop/hdds/scm/client/ScmClient.java   | 19 +++
 .../protocol/StorageContainerLocationProtocol.java | 18 +++
 ...inerLocationProtocolClientSideTranslatorPB.java | 39 ++
 .../org/apache/hadoop/ozone/audit/SCMAction.java   |  5 +-
 ...inerLocationProtocolServerSideTranslatorPB.java | 46 +
 .../proto/StorageContainerLocationProtocol.proto   | 31 +++
 .../hdds/scm/container/ReplicationManager.java | 36 -
 .../hdds/scm/server/SCMClientProtocolServer.java   | 21 
 .../hdds/scm/container/TestReplicationManager.java | 16 ++
 .../hdds/scm/cli/ReplicationManagerCommands.java   | 54 +++
 .../scm/cli/ReplicationManagerStartSubcommand.java | 53 +++
 .../cli/ReplicationManagerStatusSubcommand.java| 60 ++
 .../scm/cli/ReplicationManagerStopSubcommand.java  | 55 
 .../org/apache/hadoop/hdds/scm/cli/SCMCLI.java |  3 +-
 15 files changed, 457 insertions(+), 16 deletions(-)

diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
index 3077f9f..e2856d7 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
@@ -459,4 +459,21 @@ public class ContainerOperationClient implements ScmClient 
{
   public boolean forceExitSafeMode() throws IOException {
 return storageContainerLocationClient.forceExitSafeMode();
   }
+
+  @Override
+  public void startReplicationManager() throws IOException {
+storageContainerLocationClient.startReplicationManager();
+  }
+
+  @Override
+  public void stopReplicationManager() throws IOException {
+storageContainerLocationClient.stopReplicationManager();
+  }
+
+  @Override
+  public boolean getReplicationManagerStatus() throws IOException {
+return storageContainerLocationClient.getReplicationManagerStatus();
+  }
+
+
 }
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
index 85821ac..c2dd5f9 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
@@ -203,4 +203,23 @@ public interface ScmClient extends Closeable {
* @throws IOException
*/
   boolean forceExitSafeMode() throws IOException;
+
+  /**
+   * Start ReplicationManager.
+   */
+  void startReplicationManager() throws IOException;
+
+  /**
+   * Stop ReplicationManager.
+   */
+  void stopReplicationManager() throws IOException;
+
+  /**
+   * Returns ReplicationManager status.
+   *
+   * @return True if ReplicationManager is running, false otherwise.
+   */
+  boolean getReplicationManagerStatus() throws IOException;
+
+
 }
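
Taken together, the three new ScmClient methods give administrators (and the new SCM CLI subcommands) a programmatic toggle for the replication manager. A minimal sketch of driving them from code, assuming an already-constructed ScmClient implementation such as ContainerOperationClient; client construction is elided because it depends on SCM connection setup:

    import java.io.IOException;
    import org.apache.hadoop.hdds.scm.client.ScmClient;

    public final class ReplicationManagerToggleExample {
      // Stops the replication manager, reports its status, then restarts it,
      // using only the methods introduced by this patch.
      static void bounce(ScmClient scmClient) throws IOException {
        scmClient.stopReplicationManager();
        System.out.println("running = " + scmClient.getReplicationManagerStatus());
        scmClient.startReplicationManager();
      }
    }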
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
index cc220a5..565ce47 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
@@ -177,4 +177,22 @@ public interface StorageContainerLocationProtocol extends 
Closeable {
* @throws IOException
*/
   boolean forceExitSafeMode() throws IOException;
+
+  /**
+   * Start ReplicationManager.
+   */
+  void startReplicationManager() throws IOException;
+
+  /**
+   * Stop ReplicationManager.
+   */
+  void stopReplicationManager() throws IOException;
+
+  /**
+   * Returns ReplicationManager status.
+   *
+   * @return True if ReplicationManager is running, false otherwise.
+   */
+  boolean getReplicationManagerStatus() throws IOException;
+
 }
diff --git 

[hadoop] branch trunk updated: HDDS-1488. Scm cli command to start/stop replication manager.

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 69b74e9  HDDS-1488. Scm cli command to start/stop replication manager.
69b74e9 is described below

commit 69b74e90167041f561bfcccf5a4e46ea208c467e
Author: Nanda kumar 
AuthorDate: Sat Aug 3 19:01:29 2019 +0530

HDDS-1488. Scm cli command to start/stop replication manager.

Signed-off-by: Anu Engineer 
---
 .../hdds/scm/client/ContainerOperationClient.java  | 17 ++
 .../apache/hadoop/hdds/scm/client/ScmClient.java   | 19 +++
 .../protocol/StorageContainerLocationProtocol.java | 18 +++
 ...inerLocationProtocolClientSideTranslatorPB.java | 39 ++
 .../org/apache/hadoop/ozone/audit/SCMAction.java   |  5 +-
 ...inerLocationProtocolServerSideTranslatorPB.java | 46 +
 .../proto/StorageContainerLocationProtocol.proto   | 31 +++
 .../hdds/scm/container/ReplicationManager.java | 36 -
 .../hdds/scm/server/SCMClientProtocolServer.java   | 21 
 .../hdds/scm/container/TestReplicationManager.java | 16 ++
 .../hdds/scm/cli/ReplicationManagerCommands.java   | 54 +++
 .../scm/cli/ReplicationManagerStartSubcommand.java | 53 +++
 .../cli/ReplicationManagerStatusSubcommand.java| 60 ++
 .../scm/cli/ReplicationManagerStopSubcommand.java  | 55 
 .../org/apache/hadoop/hdds/scm/cli/SCMCLI.java |  3 +-
 15 files changed, 457 insertions(+), 16 deletions(-)

diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
index 3077f9f..e2856d7 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
@@ -459,4 +459,21 @@ public class ContainerOperationClient implements ScmClient 
{
   public boolean forceExitSafeMode() throws IOException {
 return storageContainerLocationClient.forceExitSafeMode();
   }
+
+  @Override
+  public void startReplicationManager() throws IOException {
+storageContainerLocationClient.startReplicationManager();
+  }
+
+  @Override
+  public void stopReplicationManager() throws IOException {
+storageContainerLocationClient.stopReplicationManager();
+  }
+
+  @Override
+  public boolean getReplicationManagerStatus() throws IOException {
+return storageContainerLocationClient.getReplicationManagerStatus();
+  }
+
+
 }
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
index 85821ac..c2dd5f9 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
@@ -203,4 +203,23 @@ public interface ScmClient extends Closeable {
* @throws IOException
*/
   boolean forceExitSafeMode() throws IOException;
+
+  /**
+   * Start ReplicationManager.
+   */
+  void startReplicationManager() throws IOException;
+
+  /**
+   * Stop ReplicationManager.
+   */
+  void stopReplicationManager() throws IOException;
+
+  /**
+   * Returns ReplicationManager status.
+   *
+   * @return True if ReplicationManager is running, false otherwise.
+   */
+  boolean getReplicationManagerStatus() throws IOException;
+
+
 }
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
index cc220a5..565ce47 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
@@ -177,4 +177,22 @@ public interface StorageContainerLocationProtocol extends 
Closeable {
* @throws IOException
*/
   boolean forceExitSafeMode() throws IOException;
+
+  /**
+   * Start ReplicationManager.
+   */
+  void startReplicationManager() throws IOException;
+
+  /**
+   * Stop ReplicationManager.
+   */
+  void stopReplicationManager() throws IOException;
+
+  /**
+   * Returns ReplicationManager status.
+   *
+   * @return True if ReplicationManager is running, false otherwise.
+   */
+  boolean getReplicationManagerStatus() throws IOException;
+
 }
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
 

[hadoop] branch trunk updated: HDDS-1886. Use ArrayList#clear to address audit failure scenario

2019-08-13 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 689a80d  HDDS-1886. Use ArrayList#clear to address audit failure 
scenario
689a80d is described below

commit 689a80d3ce310c3b617537550a529b9a1dc80f4b
Author: dchitlangia 
AuthorDate: Thu Aug 1 02:06:03 2019 -0400

HDDS-1886. Use ArrayList#clear to address audit failure scenario

Signed-off-by: Anu Engineer 
---
 .../test/java/org/apache/hadoop/ozone/audit/TestOzoneAuditLogger.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/TestOzoneAuditLogger.java
 
b/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/TestOzoneAuditLogger.java
index 77a6c0b..518ddae 100644
--- 
a/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/TestOzoneAuditLogger.java
+++ 
b/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/TestOzoneAuditLogger.java
@@ -153,7 +153,7 @@ public class TestOzoneAuditLogger {
 assertTrue(lines.size() != 0);
 assertTrue(expected.equalsIgnoreCase(lines.get(0)));
 //empty the file
-lines.remove(0);
+lines.clear();
 FileUtils.writeLines(file, lines, false);
   }
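
The one-liner above matters because remove(0) only drops the first element, so the subsequent writeLines call would re-write any remaining lines instead of truncating the file; clear() empties the list regardless of its size. A tiny self-contained illustration:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class ClearVsRemoveExample {
      public static void main(String[] args) {
        List<String> lines = new ArrayList<>(Arrays.asList("audit-1", "audit-2"));
        lines.remove(0);
        System.out.println(lines);  // [audit-2]  -- file would not be emptied
        lines.clear();
        System.out.println(lines);  // []         -- file is truly truncated
      }
    }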
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-2 updated: HADOOP-16459. Backport of HADOOP-16266. Add more fine-grained processing time metrics to the RPC layer. Contributed by Christopher Gregorian.

2019-08-13 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 99cd181  HADOOP-16459. Backport of HADOOP-16266. Add more fine-grained 
processing time metrics to the RPC layer. Contributed by Christopher Gregorian.
99cd181 is described below

commit 99cd181a789faf31422ca5931476286f91afd338
Author: Christopher Gregorian 
AuthorDate: Mon Apr 29 15:37:25 2019 -0700

HADOOP-16459. Backport of HADOOP-16266. Add more fine-grained processing 
time metrics to the RPC layer. Contributed by Christopher Gregorian.

This commit also includes the follow-on commit 
827a84778a4e3b8f165806dfd2966f0951a5e575.

(cherry-picked from f96a2df38d889f29314c57f4d94227b2e419a11f)
(cherry-picked from d4492bdd9edec60c236aff85de50b963097e5a7f)
(cherry-picked from 7b8f08f59e5c8906930ccc67b967b7cfcbd41768)
(cherry picked from ec00431eaaa25eff5bb5e0cafb52de685187a159)
---
 .../org/apache/hadoop/ipc/CallQueueManager.java|   5 +-
 .../org/apache/hadoop/ipc/DecayRpcScheduler.java   |  12 +-
 .../org/apache/hadoop/ipc/DefaultRpcScheduler.java |   4 +-
 .../java/org/apache/hadoop/ipc/ExternalCall.java   |   5 +
 .../org/apache/hadoop/ipc/ProcessingDetails.java   |  96 +
 .../org/apache/hadoop/ipc/ProtobufRpcEngine.java   |  31 +
 .../java/org/apache/hadoop/ipc/RpcScheduler.java   |  11 +-
 .../main/java/org/apache/hadoop/ipc/Server.java| 152 -
 .../org/apache/hadoop/ipc/WritableRpcEngine.java   |  20 +--
 .../hadoop/ipc/metrics/RpcDetailedMetrics.java |   6 +-
 .../org/apache/hadoop/ipc/metrics/RpcMetrics.java  |  63 ++---
 .../hadoop-common/src/site/markdown/Metrics.md |   9 ++
 .../apache/hadoop/ipc/TestProcessingDetails.java   |  61 +
 .../org/apache/hadoop/ipc/TestProtoBufRpc.java |  16 ++-
 .../test/java/org/apache/hadoop/ipc/TestRPC.java   |  18 ++-
 .../java/org/apache/hadoop/ipc/TestRpcBase.java|  28 
 .../src/test/proto/test_rpc_service.proto  |   1 +
 .../hdfs/server/namenode/FSNamesystemLock.java |  66 ++---
 .../namenode/ha/TestConsistentReadsObserver.java   |  11 +-
 19 files changed, 480 insertions(+), 135 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
index e73ef53..765ce18 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
@@ -192,9 +192,8 @@ public class CallQueueManager
 return scheduler.shouldBackOff(e);
   }
 
-  void addResponseTime(String name, int priorityLevel, int queueTime,
-  int processingTime) {
-scheduler.addResponseTime(name, priorityLevel, queueTime, processingTime);
+  void addResponseTime(String name, Schedulable e, ProcessingDetails details) {
+scheduler.addResponseTime(name, e, details);
   }
 
   // This should be only called once per call and cached in the call object
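
The signature change above is the heart of this backport: schedulers now receive a ProcessingDetails object and pull out whichever timings they need, instead of being handed pre-computed queue and processing integers. A rough sketch of a scheduler-side consumer, modelled on the DecayRpcScheduler hunk further down; the class and variable names here are illustrative, not part of the patch:

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.ipc.ProcessingDetails;
    import org.apache.hadoop.ipc.ProcessingDetails.Timing;
    import org.apache.hadoop.ipc.Schedulable;

    public final class ResponseTimeConsumerExample {
      // Mirrors the new addResponseTime(String, Schedulable, ProcessingDetails)
      // contract: the scheduler decides which timings to extract and in what unit.
      static void addResponseTime(String callName, Schedulable schedulable,
          ProcessingDetails details) {
        int priorityLevel = schedulable.getPriorityLevel();
        long queueTime = details.get(Timing.QUEUE, TimeUnit.MILLISECONDS);
        long processingTime = details.get(Timing.PROCESSING, TimeUnit.MILLISECONDS);
        System.out.println(callName + " priority=" + priorityLevel
            + " queueMs=" + queueTime + " processingMs=" + processingTime);
      }
    }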
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
index f8c8dd3..8c1365e 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
@@ -55,6 +55,8 @@ import com.google.common.annotations.VisibleForTesting;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static org.apache.hadoop.ipc.ProcessingDetails.Timing;
+
 /**
  * The decay RPC scheduler counts incoming requests in a map, then
  * decays the counts at a fixed time interval. The scheduler is optimized
@@ -592,14 +594,18 @@ public class DecayRpcScheduler implements RpcScheduler,
   }
 
   @Override
-  public void addResponseTime(String name, int priorityLevel, int queueTime,
-  int processingTime) {
+  public void addResponseTime(String callName, Schedulable schedulable,
+  ProcessingDetails details) {
+int priorityLevel = schedulable.getPriorityLevel();
+long queueTime = details.get(Timing.QUEUE, TimeUnit.MILLISECONDS);
+long processingTime = details.get(Timing.PROCESSING, 
TimeUnit.MILLISECONDS);
+
 responseTimeCountInCurrWindow.getAndIncrement(priorityLevel);
 responseTimeTotalInCurrWindow.getAndAdd(priorityLevel,
 queueTime+processingTime);
 if (LOG.isDebugEnabled()) {
   LOG.debug("addResponseTime for call: {}  priority: {} queueTime: {} " +
-  "processingTime: {} ", name, priorityLevel, queueTime,
+  "processingTime: {} ", callName, priorityLevel, 

[hadoop] branch branch-2.8 updated: YARN-9442. container working directory has group read permissions. Contributed by Jim Brennan.

2019-08-13 Thread ebadger
This is an automated email from the ASF dual-hosted git repository.

ebadger pushed a commit to branch branch-2.8
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.8 by this push:
 new 9b55832  YARN-9442. container working directory has group read 
permissions. Contributed by Jim Brennan.
9b55832 is described below

commit 9b5583272f92ed20fc77a04bfcd7de8344ee0a33
Author: Eric Badger 
AuthorDate: Tue Aug 13 21:13:40 2019 +

YARN-9442. container working directory has group read permissions. 
Contributed by Jim Brennan.
---
 .../container-executor/impl/container-executor.c   | 70 +++---
 .../test/test-container-executor.c | 11 
 2 files changed, 59 insertions(+), 22 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 37cc659..33b2591 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -627,8 +627,8 @@ int check_dir(const char* npath, mode_t st_mode, mode_t 
desired, int finalCompon
  */
 static int create_container_directories(const char* user, const char *app_id,
 const char *container_id, char* const* local_dir, char* const* log_dir, 
const char *work_dir) {
-  // create dirs as 0750
-  const mode_t perms = S_IRWXU | S_IRGRP | S_IXGRP;
+  // create dirs as 0710
+  const mode_t perms = S_IRWXU | S_IXGRP;
   if (app_id == NULL || container_id == NULL || user == NULL || user_detail == 
NULL || user_detail->pw_name == NULL) {
 fprintf(LOGFILE,
 "Either app_id, container_id or the user passed is null.\n");
@@ -669,6 +669,9 @@ static int create_container_directories(const char* user, 
const char *app_id,
 sprintf(combined_name, "%s/%s", app_id, container_id);
 
 char* const* log_dir_ptr;
+// Log dirs need 750 access
+const mode_t logdir_perms = S_IRWXU | S_IRGRP | S_IXGRP;
+
 for(log_dir_ptr = log_dir; *log_dir_ptr != NULL; ++log_dir_ptr) {
   char *container_log_dir = get_app_log_directory(*log_dir_ptr, 
combined_name);
   int check = check_nm_local_dir(nm_uid, *log_dir_ptr);
@@ -682,7 +685,7 @@ static int create_container_directories(const char* user, 
const char *app_id,
   if (container_log_dir == NULL) {
 free(combined_name);
 return -1;
-  } else if (mkdirs(container_log_dir, perms) != 0) {
+  } else if (mkdirs(container_log_dir, logdir_perms) != 0) {
free(container_log_dir);
   } else {
result = 0;
@@ -1082,6 +1085,37 @@ int is_mount_cgroups_support_enabled() {
 }
 
 /**
+ * Function to create the application directories.
+ * Returns pointer to primary_app_dir or NULL if it fails.
+ */
+static char *create_app_dirs(const char *user,
+ const char *app_id,
+ char* const* local_dirs)
+{
+  // 750
+  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
+  char* const* nm_root;
+  char *primary_app_dir = NULL;
+  for(nm_root=local_dirs; *nm_root != NULL; ++nm_root) {
+char *app_dir = get_app_directory(*nm_root, user, app_id);
+if (app_dir == NULL) {
+  // try the next one
+} else if (mkdirs(app_dir, permissions) != 0) {
+  free(app_dir);
+} else if (primary_app_dir == NULL) {
+  primary_app_dir = app_dir;
+} else {
+  free(app_dir);
+}
+  }
+
+  if (primary_app_dir == NULL) {
+fprintf(LOGFILE, "Did not create any app directories\n");
+  }
+  return primary_app_dir;
+}
+
+/**
  * Function to prepare the application directories for the container.
  */
 int initialize_app(const char *user, const char *app_id,
@@ -1116,25 +1150,9 @@ int initialize_app(const char *user, const char *app_id,
 return -1;
   }
 
-  // 750
-  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
-  char* const* nm_root;
-  char *primary_app_dir = NULL;
-  for(nm_root=local_dirs; *nm_root != NULL; ++nm_root) {
-char *app_dir = get_app_directory(*nm_root, user, app_id);
-if (app_dir == NULL) {
-  // try the next one
-} else if (mkdirs(app_dir, permissions) != 0) {
-  free(app_dir);
-} else if (primary_app_dir == NULL) {
-  primary_app_dir = app_dir;
-} else {
-  free(app_dir);
-}
-  }
-
+  // Create application directories
+  char *primary_app_dir = create_app_dirs(user, app_id, local_dirs);
   if (primary_app_dir == NULL) {
-fprintf(LOGFILE, "Did not create any app directories\n");
 return -1;
   }
 
@@ -1471,8 +1489,16 @@ int create_local_dirs(const char * user, 

[hadoop] branch branch-2.9 updated: YARN-9442. container working directory has group read permissions. Contributed by Jim Brennan.

2019-08-13 Thread ebadger
This is an automated email from the ASF dual-hosted git repository.

ebadger pushed a commit to branch branch-2.9
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.9 by this push:
 new fcaa2c4  YARN-9442. container working directory has group read 
permissions. Contributed by Jim Brennan.
fcaa2c4 is described below

commit fcaa2c4607119d74a890071be78b8b8c2b2f1604
Author: Eric Badger 
AuthorDate: Tue Aug 13 17:41:10 2019 +

YARN-9442. container working directory has group read permissions. 
Contributed by Jim Brennan.

(cherry picked from commit 2ac029b949f041da2ee04da441c5f9f85e1f2c64)

Conflicts:

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c

(cherry picked from commit cec71691be76577718b22f936aea9e2b2cd100ea)

Conflicts:

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c

(cherry picked from commit db88224e8f9d164ac811fcca9efe4a350cebecd1)
---
 .../container-executor/impl/container-executor.c   | 73 +++---
 .../test/test-container-executor.c | 12 
 2 files changed, 62 insertions(+), 23 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index a4803be..e6dcac7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -695,8 +695,8 @@ int check_dir(const char* npath, mode_t st_mode, mode_t 
desired, int finalCompon
  */
 static int create_container_directories(const char* user, const char *app_id,
 const char *container_id, char* const* local_dir, char* const* log_dir, 
const char *work_dir) {
-  // create dirs as 0750
-  const mode_t perms = S_IRWXU | S_IRGRP | S_IXGRP;
+  // create dirs as 0710
+  const mode_t perms = S_IRWXU | S_IXGRP;
   if (user == NULL || app_id == NULL || container_id == NULL ||
   local_dir == NULL || log_dir == NULL || work_dir == NULL ||
   user_detail == NULL || user_detail->pw_name == NULL) {
@@ -739,6 +739,9 @@ static int create_container_directories(const char* user, 
const char *app_id,
 sprintf(combined_name, "%s/%s", app_id, container_id);
 
 char* const* log_dir_ptr;
+// Log dirs need 750 access
+const mode_t logdir_perms = S_IRWXU | S_IRGRP | S_IXGRP;
+
 for(log_dir_ptr = log_dir; *log_dir_ptr != NULL; ++log_dir_ptr) {
   char *container_log_dir = get_app_log_directory(*log_dir_ptr, 
combined_name);
   int check = check_nm_local_dir(nm_uid, *log_dir_ptr);
@@ -752,8 +755,8 @@ static int create_container_directories(const char* user, 
const char *app_id,
   if (container_log_dir == NULL) {
 free(combined_name);
 return OUT_OF_MEMORY;
-  } else if (mkdirs(container_log_dir, perms) != 0) {
-   free(container_log_dir);
+  } else if (mkdirs(container_log_dir, logdir_perms) != 0) {
+free(container_log_dir);
   } else {
result = 0;
free(container_log_dir);
@@ -1115,6 +1118,37 @@ int create_log_dirs(const char *app_id, char * const * 
log_dirs) {
 
 
 /**
+ * Function to create the application directories.
+ * Returns pointer to primary_app_dir or NULL if it fails.
+ */
+static char *create_app_dirs(const char *user,
+ const char *app_id,
+ char* const* local_dirs)
+{
+  // 750
+  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
+  char* const* nm_root;
+  char *primary_app_dir = NULL;
+  for(nm_root=local_dirs; *nm_root != NULL; ++nm_root) {
+char *app_dir = get_app_directory(*nm_root, user, app_id);
+if (app_dir == NULL) {
+  // try the next one
+} else if (mkdirs(app_dir, permissions) != 0) {
+  free(app_dir);
+} else if (primary_app_dir == NULL) {
+  primary_app_dir = app_dir;
+} else {
+  free(app_dir);
+}
+  }
+
+  if (primary_app_dir == NULL) {
+fprintf(LOGFILE, "Did not create any app directories\n");
+  }
+  return primary_app_dir;
+}
+
+/**
  * Function to prepare the application directories for the container.
  */
 int initialize_app(const char *user, const char *app_id,
@@ -1149,25 +1183,9 @@ int initialize_app(const char *user, const char *app_id,
 return -1;
   }
 
-  // 750
-  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
-  char* const* nm_root;
-  char *primary_app_dir = NULL;
-  

[hadoop] branch branch-2 updated: YARN-9442. container working directory has group read permissions. Contributed by Jim Brennan.

2019-08-13 Thread ebadger
This is an automated email from the ASF dual-hosted git repository.

ebadger pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new afa9a40  YARN-9442. container working directory has group read 
permissions. Contributed by Jim Brennan.
afa9a40 is described below

commit afa9a4084da7c81fdf85843aaad04667ae5e73c4
Author: Eric Badger 
AuthorDate: Tue Aug 13 17:27:43 2019 +

YARN-9442. container working directory has group read permissions. 
Contributed by Jim Brennan.

(cherry picked from commit 2ac029b949f041da2ee04da441c5f9f85e1f2c64)

Conflicts:

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c

(cherry picked from commit cec71691be76577718b22f936aea9e2b2cd100ea)

Conflicts:

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c

(cherry picked from commit db88224e8f9d164ac811fcca9efe4a350cebecd1)
---
 .../container-executor/impl/container-executor.c   | 73 +++---
 .../test/test-container-executor.c | 12 
 2 files changed, 62 insertions(+), 23 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 04b31ae..dab027b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -695,8 +695,8 @@ int check_dir(const char* npath, mode_t st_mode, mode_t 
desired, int finalCompon
  */
 static int create_container_directories(const char* user, const char *app_id,
 const char *container_id, char* const* local_dir, char* const* log_dir, 
const char *work_dir) {
-  // create dirs as 0750
-  const mode_t perms = S_IRWXU | S_IRGRP | S_IXGRP;
+  // create dirs as 0710
+  const mode_t perms = S_IRWXU | S_IXGRP;
   if (user == NULL || app_id == NULL || container_id == NULL ||
   local_dir == NULL || log_dir == NULL || work_dir == NULL ||
   user_detail == NULL || user_detail->pw_name == NULL) {
@@ -739,6 +739,9 @@ static int create_container_directories(const char* user, 
const char *app_id,
 sprintf(combined_name, "%s/%s", app_id, container_id);
 
 char* const* log_dir_ptr;
+// Log dirs need 750 access
+const mode_t logdir_perms = S_IRWXU | S_IRGRP | S_IXGRP;
+
 for(log_dir_ptr = log_dir; *log_dir_ptr != NULL; ++log_dir_ptr) {
   char *container_log_dir = get_app_log_directory(*log_dir_ptr, 
combined_name);
   int check = check_nm_local_dir(nm_uid, *log_dir_ptr);
@@ -752,8 +755,8 @@ static int create_container_directories(const char* user, 
const char *app_id,
   if (container_log_dir == NULL) {
 free(combined_name);
 return OUT_OF_MEMORY;
-  } else if (mkdirs(container_log_dir, perms) != 0) {
-   free(container_log_dir);
+  } else if (mkdirs(container_log_dir, logdir_perms) != 0) {
+free(container_log_dir);
   } else {
result = 0;
free(container_log_dir);
@@ -1166,6 +1169,37 @@ int create_container_log_dirs(const char *container_id, 
const char *app_id,
 }
 
 /**
+ * Function to create the application directories.
+ * Returns pointer to primary_app_dir or NULL if it fails.
+ */
+static char *create_app_dirs(const char *user,
+ const char *app_id,
+ char* const* local_dirs)
+{
+  // 750
+  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
+  char* const* nm_root;
+  char *primary_app_dir = NULL;
+  for(nm_root=local_dirs; *nm_root != NULL; ++nm_root) {
+char *app_dir = get_app_directory(*nm_root, user, app_id);
+if (app_dir == NULL) {
+  // try the next one
+} else if (mkdirs(app_dir, permissions) != 0) {
+  free(app_dir);
+} else if (primary_app_dir == NULL) {
+  primary_app_dir = app_dir;
+} else {
+  free(app_dir);
+}
+  }
+
+  if (primary_app_dir == NULL) {
+fprintf(LOGFILE, "Did not create any app directories\n");
+  }
+  return primary_app_dir;
+}
+
+/**
  * Function to prepare the application directories for the container.
  */
 int initialize_app(const char *user, const char *app_id,
@@ -1208,25 +1242,9 @@ int initialize_app(const char *user, const char *app_id,
 return -1;
   }
 
-  // 750
-  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
-  char* const* nm_root;
-  char *primary_app_dir = NULL;
-  

[hadoop] branch branch-3.0 updated: YARN-9442. container working directory has group read permissions. Contributed by Jim Brennan.

2019-08-13 Thread ebadger
This is an automated email from the ASF dual-hosted git repository.

ebadger pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new db88224  YARN-9442. container working directory has group read 
permissions. Contributed by Jim Brennan.
db88224 is described below

commit db88224e8f9d164ac811fcca9efe4a350cebecd1
Author: Eric Badger 
AuthorDate: Tue Aug 13 17:19:57 2019 +

YARN-9442. container working directory has group read permissions. 
Contributed by Jim Brennan.

(cherry picked from commit 2ac029b949f041da2ee04da441c5f9f85e1f2c64)

Conflicts:

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c

(cherry picked from commit cec71691be76577718b22f936aea9e2b2cd100ea)

Conflicts:

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
---
 .../container-executor/impl/container-executor.c   | 73 +++---
 .../test/test-container-executor.c | 12 
 2 files changed, 62 insertions(+), 23 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index e85ea63..5e39096 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -695,8 +695,8 @@ int check_dir(const char* npath, mode_t st_mode, mode_t 
desired, int finalCompon
  */
 static int create_container_directories(const char* user, const char *app_id,
 const char *container_id, char* const* local_dir, char* const* log_dir, 
const char *work_dir) {
-  // create dirs as 0750
-  const mode_t perms = S_IRWXU | S_IRGRP | S_IXGRP;
+  // create dirs as 0710
+  const mode_t perms = S_IRWXU | S_IXGRP;
   if (user == NULL || app_id == NULL || container_id == NULL ||
   local_dir == NULL || log_dir == NULL || work_dir == NULL ||
   user_detail == NULL || user_detail->pw_name == NULL) {
@@ -739,6 +739,9 @@ static int create_container_directories(const char* user, 
const char *app_id,
 sprintf(combined_name, "%s/%s", app_id, container_id);
 
 char* const* log_dir_ptr;
+// Log dirs need 750 access
+const mode_t logdir_perms = S_IRWXU | S_IRGRP | S_IXGRP;
+
 for(log_dir_ptr = log_dir; *log_dir_ptr != NULL; ++log_dir_ptr) {
   char *container_log_dir = get_app_log_directory(*log_dir_ptr, 
combined_name);
   int check = check_nm_local_dir(nm_uid, *log_dir_ptr);
@@ -752,8 +755,8 @@ static int create_container_directories(const char* user, 
const char *app_id,
   if (container_log_dir == NULL) {
 free(combined_name);
 return OUT_OF_MEMORY;
-  } else if (mkdirs(container_log_dir, perms) != 0) {
-   free(container_log_dir);
+  } else if (mkdirs(container_log_dir, logdir_perms) != 0) {
+free(container_log_dir);
   } else {
result = 0;
free(container_log_dir);
@@ -1176,6 +1179,37 @@ int create_container_log_dirs(const char *container_id, 
const char *app_id,
 }
 
 /**
+ * Function to create the application directories.
+ * Returns pointer to primary_app_dir or NULL if it fails.
+ */
+static char *create_app_dirs(const char *user,
+ const char *app_id,
+ char* const* local_dirs)
+{
+  // 750
+  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
+  char* const* nm_root;
+  char *primary_app_dir = NULL;
+  for(nm_root=local_dirs; *nm_root != NULL; ++nm_root) {
+char *app_dir = get_app_directory(*nm_root, user, app_id);
+if (app_dir == NULL) {
+  // try the next one
+} else if (mkdirs(app_dir, permissions) != 0) {
+  free(app_dir);
+} else if (primary_app_dir == NULL) {
+  primary_app_dir = app_dir;
+} else {
+  free(app_dir);
+}
+  }
+
+  if (primary_app_dir == NULL) {
+fprintf(LOGFILE, "Did not create any app directories\n");
+  }
+  return primary_app_dir;
+}
+
+/**
  * Function to prepare the application directories for the container.
  */
 int initialize_app(const char *user, const char *app_id,
@@ -1218,25 +1252,9 @@ int initialize_app(const char *user, const char *app_id,
 return -1;
   }
 
-  // 750
-  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
-  char* const* nm_root;
-  char *primary_app_dir = NULL;
-  for(nm_root=local_dirs; *nm_root != NULL; ++nm_root) {
-char *app_dir 

[hadoop] branch branch-3.1 updated: YARN-9442. container working directory has group read permissions. Contributed by Jim Brennan.

2019-08-13 Thread ebadger
This is an automated email from the ASF dual-hosted git repository.

ebadger pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new a995e63  YARN-9442. container working directory has group read 
permissions. Contributed by Jim Brennan.
a995e63 is described below

commit a995e6352f0b8450eb8b66cda995d2477a8e2b61
Author: Eric Badger 
AuthorDate: Tue Aug 13 17:16:57 2019 +

YARN-9442. container working directory has group read permissions. 
Contributed by Jim Brennan.

(cherry picked from commit 2ac029b949f041da2ee04da441c5f9f85e1f2c64)

Conflicts:

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c

(cherry picked from commit cec71691be76577718b22f936aea9e2b2cd100ea)
---
 .../container-executor/impl/container-executor.c   | 71 +++---
 .../test/test-container-executor.c | 12 
 2 files changed, 61 insertions(+), 22 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 7e86e88..00d2e86 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -721,8 +721,8 @@ int check_dir(const char* npath, mode_t st_mode, mode_t 
desired, int finalCompon
  */
 static int create_container_directories(const char* user, const char *app_id,
 const char *container_id, char* const* local_dir, char* const* log_dir, 
const char *work_dir) {
-  // create dirs as 0750
-  const mode_t perms = S_IRWXU | S_IRGRP | S_IXGRP;
+  // create dirs as 0710
+  const mode_t perms = S_IRWXU | S_IXGRP;
   if (user == NULL || app_id == NULL || container_id == NULL ||
   local_dir == NULL || log_dir == NULL || work_dir == NULL ||
   user_detail == NULL || user_detail->pw_name == NULL) {
@@ -764,6 +764,9 @@ static int create_container_directories(const char* user, 
const char *app_id,
   } else {
 sprintf(combined_name, "%s/%s", app_id, container_id);
 char* const* log_dir_ptr;
+// Log dirs need 750 access
+const mode_t logdir_perms = S_IRWXU | S_IRGRP | S_IXGRP;
+
 for(log_dir_ptr = log_dir; *log_dir_ptr != NULL; ++log_dir_ptr) {
   char *container_log_dir = get_app_log_directory(*log_dir_ptr, 
combined_name);
   int check = check_nm_local_dir(nm_uid, *log_dir_ptr);
@@ -777,7 +780,7 @@ static int create_container_directories(const char* user, 
const char *app_id,
   if (container_log_dir == NULL) {
 free(combined_name);
 return OUT_OF_MEMORY;
-  } else if (mkdirs(container_log_dir, perms) != 0) {
+  } else if (mkdirs(container_log_dir, logdir_perms) != 0) {
 free(container_log_dir);
   } else {
 result = 0;
@@ -1229,6 +1232,37 @@ int create_container_log_dirs(const char *container_id, 
const char *app_id,
 }
 
 /**
+ * Function to create the application directories.
+ * Returns pointer to primary_app_dir or NULL if it fails.
+ */
+static char *create_app_dirs(const char *user,
+ const char *app_id,
+ char* const* local_dirs)
+{
+  // 750
+  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
+  char* const* nm_root;
+  char *primary_app_dir = NULL;
+  for(nm_root=local_dirs; *nm_root != NULL; ++nm_root) {
+char *app_dir = get_app_directory(*nm_root, user, app_id);
+if (app_dir == NULL) {
+  // try the next one
+} else if (mkdirs(app_dir, permissions) != 0) {
+  free(app_dir);
+} else if (primary_app_dir == NULL) {
+  primary_app_dir = app_dir;
+} else {
+  free(app_dir);
+}
+  }
+
+  if (primary_app_dir == NULL) {
+fprintf(LOGFILE, "Did not create any app directories\n");
+  }
+  return primary_app_dir;
+}
+
+/**
  * Function to prepare the application directories for the container.
  */
 int initialize_app(const char *user, const char *app_id,
@@ -1271,25 +1305,9 @@ int initialize_app(const char *user, const char *app_id,
 return -1;
   }
 
-  // 750
-  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
-  char* const* nm_root;
-  char *primary_app_dir = NULL;
-  for(nm_root=local_dirs; *nm_root != NULL; ++nm_root) {
-char *app_dir = get_app_directory(*nm_root, user, app_id);
-if (app_dir == NULL) {
-  // try the next one
-} else if (mkdirs(app_dir, permissions) != 0) {
-  free(app_dir);
-} else if (primary_app_dir == NULL) {
-  

[hadoop] branch branch-3.2 updated: YARN-9442. container working directory has group read permissions. Contributed by Jim Brennan.

2019-08-13 Thread ebadger
This is an automated email from the ASF dual-hosted git repository.

ebadger pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new cec7169  YARN-9442. container working directory has group read 
permissions. Contributed by Jim Brennan.
cec7169 is described below

commit cec71691be76577718b22f936aea9e2b2cd100ea
Author: Eric Badger 
AuthorDate: Tue Aug 13 16:34:29 2019 +

YARN-9442. container working directory has group read permissions. 
Contributed by Jim Brennan.

(cherry picked from commit 2ac029b949f041da2ee04da441c5f9f85e1f2c64)

Conflicts:

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
---
 .../container-executor/impl/container-executor.c   | 71 +++---
 .../test/test-container-executor.c | 12 
 2 files changed, 61 insertions(+), 22 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 7e86e88..00d2e86 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -721,8 +721,8 @@ int check_dir(const char* npath, mode_t st_mode, mode_t 
desired, int finalCompon
  */
 static int create_container_directories(const char* user, const char *app_id,
 const char *container_id, char* const* local_dir, char* const* log_dir, 
const char *work_dir) {
-  // create dirs as 0750
-  const mode_t perms = S_IRWXU | S_IRGRP | S_IXGRP;
+  // create dirs as 0710
+  const mode_t perms = S_IRWXU | S_IXGRP;
   if (user == NULL || app_id == NULL || container_id == NULL ||
   local_dir == NULL || log_dir == NULL || work_dir == NULL ||
   user_detail == NULL || user_detail->pw_name == NULL) {
@@ -764,6 +764,9 @@ static int create_container_directories(const char* user, 
const char *app_id,
   } else {
 sprintf(combined_name, "%s/%s", app_id, container_id);
 char* const* log_dir_ptr;
+// Log dirs need 750 access
+const mode_t logdir_perms = S_IRWXU | S_IRGRP | S_IXGRP;
+
 for(log_dir_ptr = log_dir; *log_dir_ptr != NULL; ++log_dir_ptr) {
   char *container_log_dir = get_app_log_directory(*log_dir_ptr, 
combined_name);
   int check = check_nm_local_dir(nm_uid, *log_dir_ptr);
@@ -777,7 +780,7 @@ static int create_container_directories(const char* user, 
const char *app_id,
   if (container_log_dir == NULL) {
 free(combined_name);
 return OUT_OF_MEMORY;
-  } else if (mkdirs(container_log_dir, perms) != 0) {
+  } else if (mkdirs(container_log_dir, logdir_perms) != 0) {
 free(container_log_dir);
   } else {
 result = 0;
@@ -1229,6 +1232,37 @@ int create_container_log_dirs(const char *container_id, 
const char *app_id,
 }
 
 /**
+ * Function to create the application directories.
+ * Returns pointer to primary_app_dir or NULL if it fails.
+ */
+static char *create_app_dirs(const char *user,
+ const char *app_id,
+ char* const* local_dirs)
+{
+  // 750
+  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
+  char* const* nm_root;
+  char *primary_app_dir = NULL;
+  for(nm_root=local_dirs; *nm_root != NULL; ++nm_root) {
+char *app_dir = get_app_directory(*nm_root, user, app_id);
+if (app_dir == NULL) {
+  // try the next one
+} else if (mkdirs(app_dir, permissions) != 0) {
+  free(app_dir);
+} else if (primary_app_dir == NULL) {
+  primary_app_dir = app_dir;
+} else {
+  free(app_dir);
+}
+  }
+
+  if (primary_app_dir == NULL) {
+fprintf(LOGFILE, "Did not create any app directories\n");
+  }
+  return primary_app_dir;
+}
+
+/**
  * Function to prepare the application directories for the container.
  */
 int initialize_app(const char *user, const char *app_id,
@@ -1271,25 +1305,9 @@ int initialize_app(const char *user, const char *app_id,
 return -1;
   }
 
-  // 750
-  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
-  char* const* nm_root;
-  char *primary_app_dir = NULL;
-  for(nm_root=local_dirs; *nm_root != NULL; ++nm_root) {
-char *app_dir = get_app_directory(*nm_root, user, app_id);
-if (app_dir == NULL) {
-  // try the next one
-} else if (mkdirs(app_dir, permissions) != 0) {
-  free(app_dir);
-} else if (primary_app_dir == NULL) {
-  primary_app_dir = app_dir;
-} else {
-  free(app_dir);
-}
-  }
-
+  

[hadoop] branch trunk updated: YARN-9442. container working directory has group read permissions. Contributed by Jim Brennan.

2019-08-13 Thread ebadger
This is an automated email from the ASF dual-hosted git repository.

ebadger pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2ac029b  YARN-9442. container working directory has group read 
permissions. Contributed by Jim Brennan.
2ac029b is described below

commit 2ac029b949f041da2ee04da441c5f9f85e1f2c64
Author: Eric Badger 
AuthorDate: Tue Aug 13 16:16:49 2019 +

YARN-9442. container working directory has group read permissions. 
Contributed by Jim Brennan.
---
 .../container-executor/impl/container-executor.c   | 71 +++---
 .../test/test-container-executor.c | 11 
 2 files changed, 60 insertions(+), 22 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 69dee35..318356d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -736,8 +736,8 @@ int check_dir(const char* npath, mode_t st_mode, mode_t 
desired, int finalCompon
  */
 static int create_container_directories(const char* user, const char *app_id,
 const char *container_id, char* const* local_dir, char* const* log_dir, 
const char *work_dir) {
-  // create dirs as 0750
-  const mode_t perms = S_IRWXU | S_IRGRP | S_IXGRP;
+  // create dirs as 0710
+  const mode_t perms = S_IRWXU | S_IXGRP;
   if (user == NULL || app_id == NULL || container_id == NULL ||
   local_dir == NULL || log_dir == NULL || work_dir == NULL ||
   user_detail == NULL || user_detail->pw_name == NULL) {
@@ -779,6 +779,9 @@ static int create_container_directories(const char* user, 
const char *app_id,
   } else {
 sprintf(combined_name, "%s/%s", app_id, container_id);
 char* const* log_dir_ptr;
+// Log dirs need 750 access
+const mode_t logdir_perms = S_IRWXU | S_IRGRP | S_IXGRP;
+
 for(log_dir_ptr = log_dir; *log_dir_ptr != NULL; ++log_dir_ptr) {
   char *container_log_dir = get_app_log_directory(*log_dir_ptr, 
combined_name);
   int check = check_nm_local_dir(nm_uid, *log_dir_ptr);
@@ -792,7 +795,7 @@ static int create_container_directories(const char* user, 
const char *app_id,
   if (container_log_dir == NULL) {
 free(combined_name);
 return OUT_OF_MEMORY;
-  } else if (mkdirs(container_log_dir, perms) != 0) {
+  } else if (mkdirs(container_log_dir, logdir_perms) != 0) {
 free(container_log_dir);
   } else {
 result = 0;
@@ -1238,6 +1241,37 @@ int create_container_log_dirs(const char *container_id, 
const char *app_id,
 }
 
 /**
+ * Function to create the application directories.
+ * Returns pointer to primary_app_dir or NULL if it fails.
+ */
+static char *create_app_dirs(const char *user,
+ const char *app_id,
+ char* const* local_dirs)
+{
+  // 750
+  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
+  char* const* nm_root;
+  char *primary_app_dir = NULL;
+  for(nm_root=local_dirs; *nm_root != NULL; ++nm_root) {
+char *app_dir = get_app_directory(*nm_root, user, app_id);
+if (app_dir == NULL) {
+  // try the next one
+} else if (mkdirs(app_dir, permissions) != 0) {
+  free(app_dir);
+} else if (primary_app_dir == NULL) {
+  primary_app_dir = app_dir;
+} else {
+  free(app_dir);
+}
+  }
+
+  if (primary_app_dir == NULL) {
+fprintf(LOGFILE, "Did not create any app directories\n");
+  }
+  return primary_app_dir;
+}
+
+/**
  * Function to prepare the application directories for the container.
  */
 int initialize_app(const char *user, const char *app_id,
@@ -1280,25 +1314,9 @@ int initialize_app(const char *user, const char *app_id,
 return -1;
   }
 
-  // 750
-  mode_t permissions = S_IRWXU | S_IRGRP | S_IXGRP;
-  char* const* nm_root;
-  char *primary_app_dir = NULL;
-  for(nm_root=local_dirs; *nm_root != NULL; ++nm_root) {
-char *app_dir = get_app_directory(*nm_root, user, app_id);
-if (app_dir == NULL) {
-  // try the next one
-} else if (mkdirs(app_dir, permissions) != 0) {
-  free(app_dir);
-} else if (primary_app_dir == NULL) {
-  primary_app_dir = app_dir;
-} else {
-  free(app_dir);
-}
-  }
-
+  // Create application directories
+  char *primary_app_dir = create_app_dirs(user, app_id, local_dirs);
   if (primary_app_dir == NULL) {
-fprintf(LOGFILE, "Did not create any app directories\n");
 return -1;
   }
 
@@ -1738,8 +1756,17 @@ int 
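For reference, the mode change above tightens the container working directory from 0750 to 0710: the NodeManager group keeps execute permission (so it can still traverse into the directory) but loses read permission (so it can no longer list its contents), while container log directories stay at 0750. A minimal, standalone Java sketch (illustrative only, not part of the native container-executor) decoding the two modes:

import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public final class ContainerDirPerms {
  public static void main(String[] args) {
    // 0710 = rwx--x---  (new container work dir mode: group may traverse only)
    Set<PosixFilePermission> workDirPerms =
        PosixFilePermissions.fromString("rwx--x---");
    // 0750 = rwxr-x---  (log dir mode: group may also read/list)
    Set<PosixFilePermission> logDirPerms =
        PosixFilePermissions.fromString("rwxr-x---");

    System.out.println("work dir (0710): " + PosixFilePermissions.toString(workDirPerms));
    System.out.println("log dir  (0750): " + PosixFilePermissions.toString(logDirPerms));
  }
}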

[hadoop] branch trunk updated: HDFS-14717. [Dynamometer] Remove explicit search for JUnit dependency JAR from Dynamometer Client as it is packaged in the primary JAR. Contributed by Kevin Su.

2019-08-13 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 274966e  HDFS-14717. [Dynamometer] Remove explicit search for JUnit 
dependency JAR from Dynamometer Client as it is packaged in the primary JAR. 
Contributed by Kevin Su.
274966e is described below

commit 274966e675d03875d4522440d1e4d0ab1ba04f23
Author: Erik Krogen 
AuthorDate: Tue Aug 13 08:52:59 2019 -0700

HDFS-14717. [Dynamometer] Remove explicit search for JUnit dependency JAR 
from Dynamometer Client as it is packaged in the primary JAR. Contributed by 
Kevin Su.
---
 .../src/main/java/org/apache/hadoop/tools/dynamometer/Client.java   | 6 +-
 .../org/apache/hadoop/tools/dynamometer/TestDynamometerInfra.java   | 2 +-
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/Client.java
 
b/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/Client.java
index eba9c70..3863993 100644
--- 
a/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/Client.java
+++ 
b/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/Client.java
@@ -97,7 +97,6 @@ import 
org.apache.hadoop.yarn.client.api.YarnClientApplication;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.util.Records;
-import org.junit.Assert;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -253,10 +252,7 @@ public class Client extends Configured implements Tool {
*/
   public static void main(String[] args) throws Exception {
 Client client = new Client(
-ClassUtil.findContainingJar(ApplicationMaster.class),
-// JUnit is required by MiniDFSCluster at runtime, but is not included
-// in standard Hadoop dependencies, so it must explicitly included here
-ClassUtil.findContainingJar(Assert.class));
+ClassUtil.findContainingJar(ApplicationMaster.class));
 System.exit(ToolRunner.run(new YarnConfiguration(), client, args));
   }
 
diff --git 
a/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/test/java/org/apache/hadoop/tools/dynamometer/TestDynamometerInfra.java
 
b/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/test/java/org/apache/hadoop/tools/dynamometer/TestDynamometerInfra.java
index 5d5ccf1..b008095 100644
--- 
a/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/test/java/org/apache/hadoop/tools/dynamometer/TestDynamometerInfra.java
+++ 
b/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/test/java/org/apache/hadoop/tools/dynamometer/TestDynamometerInfra.java
@@ -122,7 +122,7 @@ public class TestDynamometerInfra {
   private static final String HADOOP_BIN_PATH_KEY = "dyno.hadoop.bin.path";
   private static final String HADOOP_BIN_VERSION_KEY =
   "dyno.hadoop.bin.version";
-  private static final String HADOOP_BIN_VERSION_DEFAULT = "3.1.1";
+  private static final String HADOOP_BIN_VERSION_DEFAULT = "3.1.2";
   private static final String FSIMAGE_FILENAME = "fsimage_0061740";
   private static final String VERSION_FILENAME = "VERSION";
 






[hadoop] branch branch-2 updated: HDFS-14370. Add exponential backoff to the edit log tailer to avoid spinning on empty edit tail requests. Contributed by Erik Krogen.

2019-08-13 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 719214f  HDFS-14370. Add exponential backoff to the edit log tailer to 
avoid spinning on empty edit tail requests. Contributed by Erik Krogen.
719214f is described below

commit 719214ff30283195715208d7d1f99c883fc53e50
Author: Erik Krogen 
AuthorDate: Wed Jul 24 15:46:32 2019 -0700

HDFS-14370. Add exponential backoff to the edit log tailer to avoid 
spinning on empty edit tail requests. Contributed by Erik Krogen.

(cherry picked from 827dbb11e24be294b40088a8aa46086ba8ca4ba8)
(cherry picked from 016aa139406d1a151fbec3fb8912eb08e0f638d7)
(cherry picked from f6ce2f4a50898b16556089fe18a6717a15570d85)
(cherry picked from 5657e45fb2b904bb31a4e3b70a360477ba288c15)
---
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |  2 +
 .../hdfs/server/namenode/ha/EditLogTailer.java | 51 +---
 .../src/main/resources/hdfs-default.xml| 22 -
 .../src/site/markdown/ObserverNameNode.md  | 22 -
 .../hdfs/server/namenode/ha/TestEditLogTailer.java | 54 +-
 5 files changed, 141 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 597353f..41cc8e6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -818,6 +818,8 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final int DFS_HA_LOGROLL_PERIOD_DEFAULT = 2 * 60; // 2m
   public static final String DFS_HA_TAILEDITS_PERIOD_KEY = 
"dfs.ha.tail-edits.period";
   public static final int DFS_HA_TAILEDITS_PERIOD_DEFAULT = 60; // 1m
+  public static final String DFS_HA_TAILEDITS_PERIOD_BACKOFF_MAX_KEY = 
"dfs.ha.tail-edits.period.backoff-max";
+  public static final int DFS_HA_TAILEDITS_PERIOD_BACKOFF_MAX_DEFAULT = 0; // 
disabled
   public static final String DFS_HA_TAILEDITS_ROLLEDITS_TIMEOUT_KEY =
   "dfs.ha.tail-edits.rolledits.timeout";
   public static final int DFS_HA_TAILEDITS_ROLLEDITS_TIMEOUT_DEFAULT = 60; // 
1m
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java
index f9c4fed..5b90ce3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java
@@ -125,10 +125,17 @@ public class EditLogTailer {
   private final ExecutorService rollEditsRpcExecutor;
 
   /**
-   * How often the Standby should check if there are new finalized segment(s)
-   * available to be read from.
+   * How often the tailer should check if there are new edit log entries
+   * ready to be consumed. This is the initial delay before any backoff.
*/
   private final long sleepTimeMs;
+  /**
+   * The maximum time the tailer should wait between checking for new edit log
+   * entries. Exponential backoff will be applied when an edit log tail is
+   * performed but no edits are available to be read. If this is less than or
+   * equal to 0, backoff is disabled.
+   */
+  private final long maxSleepTimeMs;
 
   private final int nnCount;
   private NamenodeProtocol cachedActiveProxy = null;
@@ -183,6 +190,20 @@ public class EditLogTailer {
 
 sleepTimeMs = conf.getInt(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY,
 DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_DEFAULT) * 1000;
+long maxSleepTimeMsTemp = conf.getTimeDuration(
+DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_BACKOFF_MAX_KEY,
+DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_BACKOFF_MAX_DEFAULT,
+TimeUnit.MILLISECONDS);
+if (maxSleepTimeMsTemp > 0 && maxSleepTimeMsTemp < sleepTimeMs) {
+  LOG.warn(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_BACKOFF_MAX_KEY
+  + " was configured to be " + maxSleepTimeMsTemp
+  + " ms, but this is less than "
+  + DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY
+  + ". Disabling backoff when tailing edit logs.");
+  maxSleepTimeMs = 0;
+} else {
+  maxSleepTimeMs = maxSleepTimeMsTemp;
+}
 
 maxRetries = 
conf.getInt(DFSConfigKeys.DFS_HA_TAILEDITS_ALL_NAMESNODES_RETRY_KEY,
 DFSConfigKeys.DFS_HA_TAILEDITS_ALL_NAMESNODES_RETRY_DEFAULT);
@@ -263,7 +284,7 @@ public class EditLogTailer {
   }
 
   @VisibleForTesting
-  public void doTailEdits() throws IOException, InterruptedException {
+  
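(The remainder of the diff is truncated in the archive.) Per the updated javadoc above, the tailer now waits dfs.ha.tail-edits.period between tails, roughly doubles that wait after every tail that returns no edits, caps it at dfs.ha.tail-edits.period.backoff-max, and drops back to the base period once edits are read again; backoff is disabled when the max is unset (<= 0) or configured smaller than the base period. A hedged, self-contained sketch of that policy (not the actual EditLogTailer code):

final class TailBackoffSketch {
  private final long sleepTimeMs;     // dfs.ha.tail-edits.period, in ms
  private final long maxSleepTimeMs;  // dfs.ha.tail-edits.period.backoff-max, in ms
  private long currentSleepMs;

  TailBackoffSketch(long sleepTimeMs, long maxSleepTimeMs) {
    this.sleepTimeMs = sleepTimeMs;
    this.maxSleepTimeMs = maxSleepTimeMs;
    this.currentSleepMs = sleepTimeMs;
  }

  /** Call after each tail attempt with the number of edits that were loaded. */
  long nextSleepMs(long editsLoaded) {
    if (maxSleepTimeMs <= 0) {
      return sleepTimeMs;                 // backoff disabled
    }
    if (editsLoaded > 0) {
      currentSleepMs = sleepTimeMs;       // progress: reset to the base period
    } else {
      currentSleepMs = Math.min(currentSleepMs * 2, maxSleepTimeMs);  // empty tail: back off
    }
    return currentSleepMs;
  }
}

With the defaults shown in the diff (period of 60s and a backoff-max of 0), behaviour is unchanged from before the patch.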

[hadoop] branch trunk updated: HDFS-13505. Turn on HDFS ACLs by default. Contributed by Siyao Meng.

2019-08-13 Thread ayushsaxena
This is an automated email from the ASF dual-hosted git repository.

ayushsaxena pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e9b6b81  HDFS-13505. Turn on HDFS ACLs by default. Contributed by 
Siyao Meng.
e9b6b81 is described below

commit e9b6b81de44ff5fb9f833cfc32c69b644eb46bad
Author: Ayush Saxena 
AuthorDate: Tue Aug 13 19:17:10 2019 +0530

HDFS-13505. Turn on HDFS ACLs by default. Contributed by Siyao Meng.
---
 .../src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java  | 2 +-
 hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml  | 4 ++--
 .../hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md| 2 +-
 .../org/apache/hadoop/hdfs/server/namenode/TestAclConfigFlag.java| 5 +
 4 files changed, 5 insertions(+), 8 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 32db6a5..15f5a41 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -299,7 +299,7 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   HdfsClientConfigKeys.DeprecatedKeys.DFS_PERMISSIONS_SUPERUSERGROUP_KEY;
   public static final String  DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT = 
"supergroup";
   public static final String  DFS_NAMENODE_ACLS_ENABLED_KEY = 
"dfs.namenode.acls.enabled";
-  public static final boolean DFS_NAMENODE_ACLS_ENABLED_DEFAULT = false;
+  public static final boolean DFS_NAMENODE_ACLS_ENABLED_DEFAULT = true;
   public static final String DFS_NAMENODE_POSIX_ACL_INHERITANCE_ENABLED_KEY =
   "dfs.namenode.posix.acl.inheritance.enabled";
   public static final boolean
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 2f7a4ad..8b57fde 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -510,10 +510,10 @@
 
 
   dfs.namenode.acls.enabled
-  false
+  true
   
 Set to true to enable support for HDFS ACLs (Access Control Lists).  By
-default, ACLs are disabled.  When ACLs are disabled, the NameNode rejects
+default, ACLs are enabled.  When ACLs are disabled, the NameNode rejects
 all RPCs related to setting or getting ACLs.
   
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
index a4a3b7d..3c284c9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
@@ -319,7 +319,7 @@ Configuration Parameters
 *   `dfs.namenode.acls.enabled = true`
 
 Set to true to enable support for HDFS ACLs (Access Control Lists). By
-default, ACLs are disabled. When ACLs are disabled, the NameNode rejects
+default, ACLs are enabled. When ACLs are disabled, the NameNode rejects
 all attempts to set an ACL.
 
 *   `dfs.namenode.posix.acl.inheritance.enabled`
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAclConfigFlag.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAclConfigFlag.java
index 36539e5..33f9081 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAclConfigFlag.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAclConfigFlag.java
@@ -160,10 +160,7 @@ public class TestAclConfigFlag {
   private void initCluster(boolean format, boolean aclsEnabled)
   throws Exception {
 Configuration conf = new Configuration();
-// not explicitly setting to false, should be false by default
-if (aclsEnabled) {
-  conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY, true);
-}
+conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY, aclsEnabled);
 cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).format(format)
   .build();
 cluster.waitActive();
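Since dfs.namenode.acls.enabled now defaults to true, clusters that depended on the old default must disable ACLs explicitly, while enabling them no longer needs any configuration. A minimal sketch (illustrative only) of overriding the new default:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public final class DisableAcls {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Restore the pre-HDFS-13505 behaviour: NameNode rejects all ACL RPCs.
    conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY, false);
    System.out.println("ACLs enabled: "
        + conf.getBoolean(DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY,
            DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_DEFAULT));
  }
}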





[hadoop] 02/02: YARN-9135. NM State store ResourceMappings serialization are tested with Strings instead of real Device objects. Contributed by Peter Bacsko

2019-08-13 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit cb91ab73b088ad68c5757cff3734d2667f5cb71c
Author: Szilard Nemeth 
AuthorDate: Fri Jul 12 17:20:42 2019 +0200

YARN-9135. NM State store ResourceMappings serialization are tested with 
Strings instead of real Device objects. Contributed by Peter Bacsko

(cherry picked from commit 8b3c6791b13fc57891cf81e83d4b626b4f2932e6)
---
 .../resources/numa/NumaResourceAllocation.java | 59 ++
 .../resources/numa/NumaResourceAllocator.java  | 34 -
 .../recovery/NMLeveldbStateStoreService.java   |  5 +-
 .../recovery/TestNMLeveldbStateStoreService.java   | 52 +++
 4 files changed, 91 insertions(+), 59 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocation.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocation.java
index f8d4739..e91ac3e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocation.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocation.java
@@ -17,9 +17,11 @@
  */
 package 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.numa;
 
+import com.google.common.collect.ImmutableMap;
+
 import java.io.Serializable;
-import java.util.HashMap;
 import java.util.Map;
+import java.util.Objects;
 import java.util.Set;
 
 /**
@@ -28,27 +30,18 @@ import java.util.Set;
  */
 public class NumaResourceAllocation implements Serializable {
   private static final long serialVersionUID = 6339719798446595123L;
-  private Map nodeVsMemory;
-  private Map nodeVsCpus;
+  private final ImmutableMap nodeVsMemory;
+  private final ImmutableMap nodeVsCpus;
 
-  public NumaResourceAllocation() {
-nodeVsMemory = new HashMap<>();
-nodeVsCpus = new HashMap<>();
+  public NumaResourceAllocation(Map memoryAllocations,
+  Map cpuAllocations) {
+nodeVsMemory = ImmutableMap.copyOf(memoryAllocations);
+nodeVsCpus = ImmutableMap.copyOf(cpuAllocations);
   }
 
   public NumaResourceAllocation(String memNodeId, long memory, String 
cpuNodeId,
   int cpus) {
-this();
-nodeVsMemory.put(memNodeId, memory);
-nodeVsCpus.put(cpuNodeId, cpus);
-  }
-
-  public void addMemoryNode(String memNodeId, long memory) {
-nodeVsMemory.put(memNodeId, memory);
-  }
-
-  public void addCpuNode(String cpuNodeId, int cpus) {
-nodeVsCpus.put(cpuNodeId, cpus);
+this(ImmutableMap.of(memNodeId, memory), ImmutableMap.of(cpuNodeId, cpus));
   }
 
   public Set getMemNodes() {
@@ -59,11 +52,37 @@ public class NumaResourceAllocation implements Serializable 
{
 return nodeVsCpus.keySet();
   }
 
-  public Map getNodeVsMemory() {
+  public ImmutableMap getNodeVsMemory() {
 return nodeVsMemory;
   }
 
-  public Map getNodeVsCpus() {
+  public ImmutableMap getNodeVsCpus() {
 return nodeVsCpus;
   }
-}
+
+  @Override
+  public String toString() {
+return "NumaResourceAllocation{" +
+"nodeVsMemory=" + nodeVsMemory +
+", nodeVsCpus=" + nodeVsCpus +
+'}';
+  }
+
+  @Override
+  public boolean equals(Object o) {
+if (this == o) {
+  return true;
+}
+if (o == null || getClass() != o.getClass()) {
+  return false;
+}
+NumaResourceAllocation that = (NumaResourceAllocation) o;
+return Objects.equals(nodeVsMemory, that.nodeVsMemory) &&
+Objects.equals(nodeVsCpus, that.nodeVsCpus);
+  }
+
+  @Override
+  public int hashCode() {
+return Objects.hash(nodeVsMemory, nodeVsCpus);
+  }
+}
\ No newline at end of file
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocator.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocator.java
index e152bda..7b49b1a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocator.java
+++ 
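(Diff truncated in the archive.) The point of re-applying this change is that NumaResourceAllocation becomes an immutable value object: the NM state-store recovery test can compare a recovered allocation against the original with equals() instead of comparing strings. A hedged sketch of that value equality, with hypothetical node ids, assuming the new two-map constructor shown above and the nodemanager-internal class on the classpath:

import com.google.common.collect.ImmutableMap;
// NM-internal class from the diff above:
import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.numa.NumaResourceAllocation;

public final class NumaAllocationEquality {
  public static void main(String[] args) {
    NumaResourceAllocation original = new NumaResourceAllocation(
        ImmutableMap.of("mem-node-0", 2048L),   // NUMA node id -> memory
        ImmutableMap.of("cpu-node-0", 4));      // NUMA node id -> cpus
    NumaResourceAllocation recovered = new NumaResourceAllocation(
        ImmutableMap.of("mem-node-0", 2048L),
        ImmutableMap.of("cpu-node-0", 4));

    // Holds only because equals() compares nodeVsMemory and nodeVsCpus by value.
    System.out.println(original.equals(recovered));  // true
  }
}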

[hadoop] branch branch-3.1 updated (b040eb9 -> cb91ab7)

2019-08-13 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a change to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from b040eb9  HDFS-14148. HDFS OIV ReverseXML SnapshotSection parser throws 
exception when there are more than one snapshottable directory (#1274) 
Contributed by Siyao Meng.
 new a762a6b  Revert "YARN-9135. NM State store ResourceMappings 
serialization are tested with Strings instead of real Device objects. 
Contributed by Peter Bacsko"
 new cb91ab7  YARN-9135. NM State store ResourceMappings serialization are 
tested with Strings instead of real Device objects. Contributed by Peter Bacsko

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../containermanager/linux/resources/numa/NumaResourceAllocator.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)





[hadoop] 01/02: Revert "YARN-9135. NM State store ResourceMappings serialization are tested with Strings instead of real Device objects. Contributed by Peter Bacsko"

2019-08-13 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit a762a6be2943ec54f72b294678d93fee6dbd8921
Author: Szilard Nemeth 
AuthorDate: Tue Aug 13 15:44:50 2019 +0200

Revert "YARN-9135. NM State store ResourceMappings serialization are tested 
with Strings instead of real Device objects. Contributed by Peter Bacsko"

This reverts commit b20fd9e21295add7e80f07b471bba5c76e433aed.
Commit is reverted since unnecessary files were added, accidentally.
---
 .../resources/numa/NumaResourceAllocation.java | 59 --
 .../resources/numa/NumaResourceAllocator.java  | 34 +
 .../recovery/NMLeveldbStateStoreService.java   |  5 +-
 .../recovery/TestNMLeveldbStateStoreService.java   | 52 ---
 4 files changed, 59 insertions(+), 91 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocation.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocation.java
index e91ac3e..f8d4739 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocation.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocation.java
@@ -17,11 +17,9 @@
  */
 package 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.numa;
 
-import com.google.common.collect.ImmutableMap;
-
 import java.io.Serializable;
+import java.util.HashMap;
 import java.util.Map;
-import java.util.Objects;
 import java.util.Set;
 
 /**
@@ -30,18 +28,27 @@ import java.util.Set;
  */
 public class NumaResourceAllocation implements Serializable {
   private static final long serialVersionUID = 6339719798446595123L;
-  private final ImmutableMap nodeVsMemory;
-  private final ImmutableMap nodeVsCpus;
+  private Map nodeVsMemory;
+  private Map nodeVsCpus;
 
-  public NumaResourceAllocation(Map memoryAllocations,
-  Map cpuAllocations) {
-nodeVsMemory = ImmutableMap.copyOf(memoryAllocations);
-nodeVsCpus = ImmutableMap.copyOf(cpuAllocations);
+  public NumaResourceAllocation() {
+nodeVsMemory = new HashMap<>();
+nodeVsCpus = new HashMap<>();
   }
 
   public NumaResourceAllocation(String memNodeId, long memory, String 
cpuNodeId,
   int cpus) {
-this(ImmutableMap.of(memNodeId, memory), ImmutableMap.of(cpuNodeId, cpus));
+this();
+nodeVsMemory.put(memNodeId, memory);
+nodeVsCpus.put(cpuNodeId, cpus);
+  }
+
+  public void addMemoryNode(String memNodeId, long memory) {
+nodeVsMemory.put(memNodeId, memory);
+  }
+
+  public void addCpuNode(String cpuNodeId, int cpus) {
+nodeVsCpus.put(cpuNodeId, cpus);
   }
 
   public Set getMemNodes() {
@@ -52,37 +59,11 @@ public class NumaResourceAllocation implements Serializable 
{
 return nodeVsCpus.keySet();
   }
 
-  public ImmutableMap getNodeVsMemory() {
+  public Map getNodeVsMemory() {
 return nodeVsMemory;
   }
 
-  public ImmutableMap getNodeVsCpus() {
+  public Map getNodeVsCpus() {
 return nodeVsCpus;
   }
-
-  @Override
-  public String toString() {
-return "NumaResourceAllocation{" +
-"nodeVsMemory=" + nodeVsMemory +
-", nodeVsCpus=" + nodeVsCpus +
-'}';
-  }
-
-  @Override
-  public boolean equals(Object o) {
-if (this == o) {
-  return true;
-}
-if (o == null || getClass() != o.getClass()) {
-  return false;
-}
-NumaResourceAllocation that = (NumaResourceAllocation) o;
-return Objects.equals(nodeVsMemory, that.nodeVsMemory) &&
-Objects.equals(nodeVsCpus, that.nodeVsCpus);
-  }
-
-  @Override
-  public int hashCode() {
-return Objects.hash(nodeVsMemory, nodeVsCpus);
-  }
-}
\ No newline at end of file
+}
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocator.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocator.java
index f95e55e..e152bda 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocator.java
+++ 

[hadoop] branch ozone-0.4.1 updated: HDDS-1952. Disable TestMiniChaosOzoneCluster in integration.sh. (#1284)

2019-08-13 Thread nanda
This is an automated email from the ASF dual-hosted git repository.

nanda pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 569d75b  HDDS-1952. Disable TestMiniChaosOzoneCluster in 
integration.sh. (#1284)
569d75b is described below

commit 569d75bd363bdee626b3ee63fcb9a5576873703a
Author: Doroszlai, Attila 
AuthorDate: Tue Aug 13 19:07:19 2019 +0530

HDDS-1952. Disable TestMiniChaosOzoneCluster in integration.sh. (#1284)

Signed-off-by: Nanda kumar 
(cherry picked from commit 3dc22d6ef12157d804a43c28e029b86d88cc4b5b)
---
 hadoop-ozone/dev-support/checks/integration.sh | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hadoop-ozone/dev-support/checks/integration.sh 
b/hadoop-ozone/dev-support/checks/integration.sh
index 02d9b8b..8170c2e 100755
--- a/hadoop-ozone/dev-support/checks/integration.sh
+++ b/hadoop-ozone/dev-support/checks/integration.sh
@@ -18,7 +18,8 @@ cd "$DIR/../../.." || exit 1
 
 export MAVEN_OPTS="-Xmx4096m"
 mvn -B install -f pom.ozone.xml -DskipTests
-mvn -B -fn test -f pom.ozone.xml -pl 
:hadoop-ozone-integration-test,:hadoop-ozone-filesystem,:hadoop-ozone-tools
+mvn -B -fn test -f pom.ozone.xml -pl 
:hadoop-ozone-integration-test,:hadoop-ozone-filesystem,:hadoop-ozone-tools \
+  -Dtest=\!TestMiniChaosOzoneCluster
 module_failed_tests=$(find "." -name 'TEST*.xml' -print0 \
 | xargs -0 -n1 "grep" -l -E "

[hadoop] branch trunk updated: HDDS-1952. Disable TestMiniChaosOzoneCluster in integration.sh. (#1284)

2019-08-13 Thread nanda
This is an automated email from the ASF dual-hosted git repository.

nanda pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3dc22d6  HDDS-1952. Disable TestMiniChaosOzoneCluster in 
integration.sh. (#1284)
3dc22d6 is described below

commit 3dc22d6ef12157d804a43c28e029b86d88cc4b5b
Author: Doroszlai, Attila 
AuthorDate: Tue Aug 13 19:07:19 2019 +0530

HDDS-1952. Disable TestMiniChaosOzoneCluster in integration.sh. (#1284)

Signed-off-by: Nanda kumar 
---
 hadoop-ozone/dev-support/checks/integration.sh | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hadoop-ozone/dev-support/checks/integration.sh 
b/hadoop-ozone/dev-support/checks/integration.sh
index 02d9b8b..8170c2e 100755
--- a/hadoop-ozone/dev-support/checks/integration.sh
+++ b/hadoop-ozone/dev-support/checks/integration.sh
@@ -18,7 +18,8 @@ cd "$DIR/../../.." || exit 1
 
 export MAVEN_OPTS="-Xmx4096m"
 mvn -B install -f pom.ozone.xml -DskipTests
-mvn -B -fn test -f pom.ozone.xml -pl 
:hadoop-ozone-integration-test,:hadoop-ozone-filesystem,:hadoop-ozone-tools
+mvn -B -fn test -f pom.ozone.xml -pl 
:hadoop-ozone-integration-test,:hadoop-ozone-filesystem,:hadoop-ozone-tools \
+  -Dtest=\!TestMiniChaosOzoneCluster
 module_failed_tests=$(find "." -name 'TEST*.xml' -print0 \
 | xargs -0 -n1 "grep" -l -E "

[hadoop] branch trunk updated: YARN-9744. RollingLevelDBTimelineStore.getEntityByTime fails with NPE. Contributed by Prabhu Joseph.

2019-08-13 Thread abmodi
This is an automated email from the ASF dual-hosted git repository.

abmodi pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b4097b9  YARN-9744. RollingLevelDBTimelineStore.getEntityByTime fails 
with NPE. Contributed by Prabhu Joseph.
b4097b9 is described below

commit b4097b96a39bad6214b01989e7f2fb37dad70793
Author: Abhishek Modi 
AuthorDate: Tue Aug 13 19:04:00 2019 +0530

YARN-9744. RollingLevelDBTimelineStore.getEntityByTime fails with NPE. 
Contributed by Prabhu Joseph.
---
 .../timeline/RollingLevelDBTimelineStore.java  | 55 --
 1 file changed, 29 insertions(+), 26 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDBTimelineStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDBTimelineStore.java
index 9ebcc23..e85505f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDBTimelineStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDBTimelineStore.java
@@ -793,39 +793,42 @@ public class RollingLevelDBTimelineStore extends 
AbstractService implements
 entity = getEntity(entityId, entityType, startTime, queryFields,
 iterator, key, kp.getOffset());
   }
-  // determine if the retrieved entity matches the provided secondary
-  // filters, and if so add it to the list of entities to return
-  boolean filterPassed = true;
-  if (secondaryFilters != null) {
-for (NameValuePair filter : secondaryFilters) {
-  Object v = entity.getOtherInfo().get(filter.getName());
-  if (v == null) {
-Set vs = entity.getPrimaryFilters()
-.get(filter.getName());
-if (vs == null || !vs.contains(filter.getValue())) {
+
+  if (entity != null) {
+// determine if the retrieved entity matches the provided secondary
+// filters, and if so add it to the list of entities to return
+boolean filterPassed = true;
+if (secondaryFilters != null) {
+  for (NameValuePair filter : secondaryFilters) {
+Object v = entity.getOtherInfo().get(filter.getName());
+if (v == null) {
+  Set vs = entity.getPrimaryFilters()
+  .get(filter.getName());
+  if (vs == null || !vs.contains(filter.getValue())) {
+filterPassed = false;
+break;
+  }
+} else if (!v.equals(filter.getValue())) {
   filterPassed = false;
   break;
 }
-  } else if (!v.equals(filter.getValue())) {
-filterPassed = false;
-break;
   }
 }
-  }
-  if (filterPassed) {
-if (entity.getDomainId() == null) {
-  entity.setDomainId(DEFAULT_DOMAIN_ID);
-}
-if (checkAcl == null || checkAcl.check(entity)) {
-  // Remove primary filter and other info if they are added for
-  // matching secondary filters
-  if (addPrimaryFilters) {
-entity.setPrimaryFilters(null);
+if (filterPassed) {
+  if (entity.getDomainId() == null) {
+entity.setDomainId(DEFAULT_DOMAIN_ID);
   }
-  if (addOtherInfo) {
-entity.setOtherInfo(null);
+  if (checkAcl == null || checkAcl.check(entity)) {
+// Remove primary filter and other info if they are added for
+// matching secondary filters
+if (addPrimaryFilters) {
+  entity.setPrimaryFilters(null);
+}
+if (addOtherInfo) {
+  entity.setOtherInfo(null);
+}
+entities.addEntity(entity);
   }
-  entities.addEntity(entity);
 }
   }
 }





[hadoop] branch ozone-0.4.1 updated: HDDS-1908. TestMultiBlockWritesWithDnFailures is failing (#1282)

2019-08-13 Thread nanda
This is an automated email from the ASF dual-hosted git repository.

nanda pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 2b7b0aa  HDDS-1908. TestMultiBlockWritesWithDnFailures is failing 
(#1282)
2b7b0aa is described below

commit 2b7b0aaa89db05e685bd9aeaa6853ff070cf3c0a
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Tue Aug 13 12:08:55 2019 +0200

HDDS-1908. TestMultiBlockWritesWithDnFailures is failing (#1282)

(cherry picked from commit 0b507d2ddf132985b43b4e2d3ad11d7fd2d90cd3)
---
 .../client/rpc/TestFailureHandlingByClient.java| 65 +-
 .../rpc/TestMultiBlockWritesWithDnFailures.java| 76 ++
 2 files changed, 67 insertions(+), 74 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java
index 7c014cc..9f95be5 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.ozone.container.ContainerTestHelper;
 import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.junit.After;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -65,7 +66,6 @@ public class TestFailureHandlingByClient {
   private String volumeName;
   private String bucketName;
   private String keyString;
-  private int maxRetries;
 
   /**
* Create a MiniDFSCluster for testing.
@@ -76,7 +76,6 @@ public class TestFailureHandlingByClient {
*/
   private void init() throws Exception {
 conf = new OzoneConfiguration();
-maxRetries = 100;
 chunkSize = (int) OzoneConsts.MB;
 blockSize = 4 * chunkSize;
 conf.setTimeDuration(OzoneConfigKeys.OZONE_CLIENT_WATCH_REQUEST_TIMEOUT, 5,
@@ -114,7 +113,8 @@ public class TestFailureHandlingByClient {
   /**
* Shutdown MiniDFSCluster.
*/
-  private void shutdown() {
+  @After
+  public void shutdown() {
 if (cluster != null) {
   cluster.shutdown();
 }
@@ -159,61 +159,6 @@ public class TestFailureHandlingByClient {
 OmKeyInfo keyInfo = cluster.getOzoneManager().lookupKey(keyArgs);
 Assert.assertEquals(data.length, keyInfo.getDataSize());
 validateData(keyName, data);
-shutdown();
-  }
-
-
-  @Test
-  public void testMultiBlockWritesWithIntermittentDnFailures()
-  throws Exception {
-startCluster();
-String keyName = UUID.randomUUID().toString();
-OzoneOutputStream key =
-createKey(keyName, ReplicationType.RATIS, 6 * blockSize);
-String data = ContainerTestHelper
-.getFixedLengthString(keyString, blockSize + chunkSize);
-key.write(data.getBytes());
-
-// get the name of a valid container
-Assert.assertTrue(key.getOutputStream() instanceof KeyOutputStream);
-KeyOutputStream keyOutputStream =
-(KeyOutputStream) key.getOutputStream();
-List<BlockOutputStreamEntry> streamEntryList =
-keyOutputStream.getStreamEntries();
-
-// Assert that 6 blocks will be preallocated
-Assert.assertEquals(6, streamEntryList.size());
-key.write(data.getBytes());
-key.flush();
-long containerId = streamEntryList.get(0).getBlockID().getContainerID();
-BlockID blockId = streamEntryList.get(0).getBlockID();
-ContainerInfo container =
-cluster.getStorageContainerManager().getContainerManager()
-.getContainer(ContainerID.valueof(containerId));
-Pipeline pipeline =
-cluster.getStorageContainerManager().getPipelineManager()
-.getPipeline(container.getPipelineID());
-List<DatanodeDetails> datanodes = pipeline.getNodes();
-cluster.shutdownHddsDatanode(datanodes.get(0));
-
-// The write will fail but exception will be handled and length will be
-// updated correctly in OzoneManager once the stream is closed
-key.write(data.getBytes());
-
-// shutdown the second datanode
-cluster.shutdownHddsDatanode(datanodes.get(1));
-key.write(data.getBytes());
-key.close();
-OmKeyArgs keyArgs = new OmKeyArgs.Builder().setVolumeName(volumeName)
-.setBucketName(bucketName).setType(HddsProtos.ReplicationType.RATIS)
-.setFactor(HddsProtos.ReplicationFactor.THREE).setKeyName(keyName)
-.setRefreshPipeline(true)
-.build();
-OmKeyInfo keyInfo = cluster.getOzoneManager().lookupKey(keyArgs);
-Assert.assertEquals(4 * data.getBytes().length, keyInfo.getDataSize());
-validateData(keyName,
-data.concat(data).concat(data).concat(data).getBytes());
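
The substantive change in TestFailureHandlingByClient is that cluster teardown moves from a private shutdown() helper, called manually at the end of each test, to a JUnit 4 @After method, so the MiniOzoneCluster is shut down even when a test fails part-way; the unused maxRetries field is dropped and the intermittent-DN-failure test is presumably consolidated into TestMultiBlockWritesWithDnFailures.java per the diffstat. A minimal sketch of the teardown pattern, assuming JUnit 4 and using a FakeCluster stand-in rather than the real MiniOzoneCluster:

import org.junit.After;
import org.junit.Assert;
import org.junit.Test;

public class TeardownSketchTest {

  // Stand-in for MiniOzoneCluster: just tracks whether it is running.
  static final class FakeCluster {
    boolean running = true;
    void shutdown() {
      running = false;
    }
  }

  private FakeCluster cluster;

  private void startCluster() {
    cluster = new FakeCluster();
  }

  // Runs after every test, pass or fail, so the cluster is never leaked.
  @After
  public void shutdown() {
    if (cluster != null) {
      cluster.shutdown();
    }
  }

  @Test
  public void testSomethingAgainstTheCluster() {
    startCluster();
    Assert.assertTrue(cluster.running);
    // No manual shutdown() here; the @After hook takes care of it.
  }
}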

[hadoop] branch trunk updated: HDDS-1908. TestMultiBlockWritesWithDnFailures is failing (#1282)

2019-08-13 Thread shashikant
This is an automated email from the ASF dual-hosted git repository.

shashikant pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0b507d2  HDDS-1908. TestMultiBlockWritesWithDnFailures is failing 
(#1282)
0b507d2 is described below

commit 0b507d2ddf132985b43b4e2d3ad11d7fd2d90cd3
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Tue Aug 13 12:08:55 2019 +0200

HDDS-1908. TestMultiBlockWritesWithDnFailures is failing (#1282)
---
 .../client/rpc/TestFailureHandlingByClient.java| 65 +-
 .../rpc/TestMultiBlockWritesWithDnFailures.java| 76 ++
 2 files changed, 67 insertions(+), 74 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java
index 3c7a25e..edb796b 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java
@@ -42,6 +42,7 @@ import org.apache.hadoop.ozone.container.ContainerTestHelper;
 import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.junit.After;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -71,7 +72,6 @@ public class TestFailureHandlingByClient {
   private String volumeName;
   private String bucketName;
   private String keyString;
-  private int maxRetries;
 
   /**
* Create a MiniDFSCluster for testing.
@@ -82,7 +82,6 @@ public class TestFailureHandlingByClient {
*/
   private void init() throws Exception {
 conf = new OzoneConfiguration();
-maxRetries = 100;
 chunkSize = (int) OzoneConsts.MB;
 blockSize = 4 * chunkSize;
 conf.setTimeDuration(OzoneConfigKeys.OZONE_CLIENT_WATCH_REQUEST_TIMEOUT, 5,
@@ -125,7 +124,8 @@ public class TestFailureHandlingByClient {
   /**
* Shutdown MiniDFSCluster.
*/
-  private void shutdown() {
+  @After
+  public void shutdown() {
 if (cluster != null) {
   cluster.shutdown();
 }
@@ -170,61 +170,6 @@ public class TestFailureHandlingByClient {
 OmKeyInfo keyInfo = cluster.getOzoneManager().lookupKey(keyArgs);
 Assert.assertEquals(data.length, keyInfo.getDataSize());
 validateData(keyName, data);
-shutdown();
-  }
-
-
-  @Test
-  public void testMultiBlockWritesWithIntermittentDnFailures()
-  throws Exception {
-startCluster();
-String keyName = UUID.randomUUID().toString();
-OzoneOutputStream key =
-createKey(keyName, ReplicationType.RATIS, 6 * blockSize);
-String data = ContainerTestHelper
-.getFixedLengthString(keyString, blockSize + chunkSize);
-key.write(data.getBytes());
-
-// get the name of a valid container
-Assert.assertTrue(key.getOutputStream() instanceof KeyOutputStream);
-KeyOutputStream keyOutputStream =
-(KeyOutputStream) key.getOutputStream();
-List<BlockOutputStreamEntry> streamEntryList =
-keyOutputStream.getStreamEntries();
-
-// Assert that 6 blocks will be preallocated
-Assert.assertEquals(6, streamEntryList.size());
-key.write(data.getBytes());
-key.flush();
-long containerId = streamEntryList.get(0).getBlockID().getContainerID();
-BlockID blockId = streamEntryList.get(0).getBlockID();
-ContainerInfo container =
-cluster.getStorageContainerManager().getContainerManager()
-.getContainer(ContainerID.valueof(containerId));
-Pipeline pipeline =
-cluster.getStorageContainerManager().getPipelineManager()
-.getPipeline(container.getPipelineID());
-List<DatanodeDetails> datanodes = pipeline.getNodes();
-cluster.shutdownHddsDatanode(datanodes.get(0));
-
-// The write will fail but exception will be handled and length will be
-// updated correctly in OzoneManager once the stream is closed
-key.write(data.getBytes());
-
-// shutdown the second datanode
-cluster.shutdownHddsDatanode(datanodes.get(1));
-key.write(data.getBytes());
-key.close();
-OmKeyArgs keyArgs = new OmKeyArgs.Builder().setVolumeName(volumeName)
-.setBucketName(bucketName).setType(HddsProtos.ReplicationType.RATIS)
-.setFactor(HddsProtos.ReplicationFactor.THREE).setKeyName(keyName)
-.setRefreshPipeline(true)
-.build();
-OmKeyInfo keyInfo = cluster.getOzoneManager().lookupKey(keyArgs);
-Assert.assertEquals(4 * data.getBytes().length, keyInfo.getDataSize());
-validateData(keyName,
-data.concat(data).concat(data).concat(data).getBytes());
-shutdown();
   }
 
   @Test
@@ -270,7 +215,6 @@ public class 
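
For context, the removed testMultiBlockWritesWithIntermittentDnFailures follows a fixed shape: preallocate several block output streams for a key, shut down datanodes from the pipeline between writes, keep writing, then close the key and check that the length recorded by OzoneManager matches everything written. A compact stand-in sketch of that shape (FakeDatanode and FakeKey below are illustrative, not Ozone classes):

import java.util.ArrayList;
import java.util.List;

public class DnFailureFlowSketch {

  // Stand-in for a pipeline member that can be taken down mid-test.
  static final class FakeDatanode {
    boolean up = true;
  }

  // Stand-in for an Ozone key output stream that records committed bytes.
  static final class FakeKey {
    private final StringBuilder committed = new StringBuilder();
    void write(String data) {
      committed.append(data);
    }
    long close() {
      return committed.length();
    }
  }

  public static void main(String[] args) {
    List<FakeDatanode> pipeline = new ArrayList<>();
    for (int i = 0; i < 3; i++) {
      pipeline.add(new FakeDatanode());
    }

    FakeKey key = new FakeKey();
    String data = "0123456789";

    key.write(data);
    // Lose the first datanode of the pipeline; the real client handles the
    // failure and continues on a fresh pipeline.
    pipeline.get(0).up = false;
    key.write(data);
    // Lose a second datanode and write once more.
    pipeline.get(1).up = false;
    key.write(data);

    long committedLength = key.close();
    // The real test asserts the same invariant against OzoneManager: the
    // committed length reflects every successful write.
    if (committedLength != 3L * data.length()) {
      throw new AssertionError("unexpected committed length " + committedLength);
    }
    System.out.println("committed " + committedLength + " bytes");
  }
}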

[hadoop] branch ozone-0.4.1 updated: HDDS-1951. Wrong symbolic release name on 0.4.1 branch. (#1273)

2019-08-13 Thread nanda
This is an automated email from the ASF dual-hosted git repository.

nanda pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 7538798  HDDS-1951. Wrong symbolic release name on 0.4.1 branch. 
(#1273)
7538798 is described below

commit 7538798fa047047adc07b4b10d22e67d54401d89
Author: Márton Elek 
AuthorDate: Tue Aug 13 15:25:59 2019 +0530

HDDS-1951. Wrong symbolic release name on 0.4.1 branch. (#1273)

Signed-off-by: Nanda kumar 
---
 hadoop-ozone/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-ozone/pom.xml b/hadoop-ozone/pom.xml
index be626bd..6a05b3a 100644
--- a/hadoop-ozone/pom.xml
+++ b/hadoop-ozone/pom.xml
@@ -31,7 +31,7 @@
 0.4.1-SNAPSHOT
 0.4.0-2337318-SNAPSHOT
 1.60
-Crater Lake
+Biscayne
 ${ozone.version}
 3.0.0-M1
 4.0


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org