[hadoop] branch docker-hadoop-3 updated: HADOOP-17320. Update apache/hadoop:3 to 3.3.0 release (#2415)

2020-11-16 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch docker-hadoop-3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/docker-hadoop-3 by this push:
 new 8b60d22  HADOOP-17320. Update apache/hadoop:3 to 3.3.0 release (#2415)
8b60d22 is described below

commit 8b60d22a605be2f3a7381f3e681659c86c21209d
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Mon Nov 16 14:01:50 2020 +0100

HADOOP-17320. Update apache/hadoop:3 to 3.3.0 release (#2415)
---
 Dockerfile => .dockerignore | 12 +---
 .gitignore  |  1 +
 Dockerfile  |  2 +-
 build.sh|  6 +++---
 config  |  3 +++
 5 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/Dockerfile b/.dockerignore
similarity index 62%
copy from Dockerfile
copy to .dockerignore
index 422c6e4..fc66d88 100644
--- a/Dockerfile
+++ b/.dockerignore
@@ -13,10 +13,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FROM apache/hadoop-runner
-ARG HADOOP_URL=https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=hadoop/common/hadoop-3.2.0/hadoop-3.2.0.tar.gz
-WORKDIR /opt
-RUN sudo rm -rf /opt/hadoop && wget $HADOOP_URL -O hadoop.tar.gz && tar zxf hadoop.tar.gz && rm hadoop.tar.gz && mv hadoop* hadoop && rm -rf /opt/hadoop/share/doc
-WORKDIR /opt/hadoop
-ADD log4j.properties /opt/hadoop/etc/hadoop/log4j.properties
-RUN sudo chown -R hadoop:users /opt/hadoop/etc/hadoop/*
+.git
+.gitignore
+build
+build.sh
+README.md
diff --git a/.gitignore b/.gitignore
index bee8a64..ce8c7f3 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1 +1,2 @@
 __pycache__
+build
diff --git a/Dockerfile b/Dockerfile
index 422c6e4..6d93eee 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -14,7 +14,7 @@
 # limitations under the License.
 
 FROM apache/hadoop-runner
-ARG HADOOP_URL=https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=hadoop/common/hadoop-3.2.0/hadoop-3.2.0.tar.gz
+ARG HADOOP_URL=https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz
 WORKDIR /opt
 RUN sudo rm -rf /opt/hadoop && wget $HADOOP_URL -O hadoop.tar.gz && tar zxf hadoop.tar.gz && rm hadoop.tar.gz && mv hadoop* hadoop && rm -rf /opt/hadoop/share/doc
 WORKDIR /opt/hadoop
diff --git a/build.sh b/build.sh
index 85aacd5..ec07985 100755
--- a/build.sh
+++ b/build.sh
@@ -17,11 +17,11 @@
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
 set -e
 mkdir -p build
-if [ ! -d "$DIR/build/apache-rat-0.12" ]; then
-   wget "https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=creadur/apache-rat-0.12/apache-rat-0.12-bin.tar.gz" -O "$DIR/build/apache-rat.tar.gz"
+if [ ! -d "$DIR/build/apache-rat-0.13" ]; then
+   wget "https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=creadur/apache-rat-0.13/apache-rat-0.13-bin.tar.gz" -O "$DIR/build/apache-rat.tar.gz"
    cd $DIR/build
    tar zvxf apache-rat.tar.gz
    cd -
 fi
-java -jar $DIR/build/apache-rat-0.12/apache-rat-0.12.jar $DIR -e public -e apache-rat-0.12 -e .git -e .gitignore
+java -jar $DIR/build/apache-rat-0.13/apache-rat-0.13.jar $DIR -e public -e apache-rat-0.13 -e .git -e .gitignore
 docker build -t apache/hadoop:3 .
diff --git a/config b/config
index 9071494..7b06298 100644
--- a/config
+++ b/config
@@ -18,6 +18,9 @@ CORE-SITE.XML_fs.defaultFS=hdfs://namenode
 HDFS-SITE.XML_dfs.namenode.rpc-address=namenode:8020
 HDFS-SITE.XML_dfs.replication=1
 MAPRED-SITE.XML_mapreduce.framework.name=yarn
+MAPRED-SITE.XML_yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=$HADOOP_HOME
+MAPRED-SITE.XML_mapreduce.map.env=HADOOP_MAPRED_HOME=$HADOOP_HOME
+MAPRED-SITE.XML_mapreduce.reduce.env=HADOOP_MAPRED_HOME=$HADOOP_HOME
 YARN-SITE.XML_yarn.resourcemanager.hostname=resourcemanager
 YARN-SITE.XML_yarn.nodemanager.pmem-check-enabled=false
 YARN-SITE.XML_yarn.nodemanager.delete.debug-delay-sec=600
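For readers unfamiliar with the FILE.XML_key=value convention in this config file: the apache/hadoop-runner base image rewrites such environment entries into Hadoop XML configuration files at container startup. A minimal, illustrative Python sketch of that transformation (not the runner's actual script; env_to_hadoop_xml is a hypothetical name):

```python
def env_to_hadoop_xml(entries):
    """Group FILE.XML_key=value entries by target file and render Hadoop XML."""
    files = {}
    for entry in entries:
        left, _, value = entry.partition("=")          # split at the FIRST '='
        filename, _, key = left.partition("_")         # e.g. MAPRED-SITE.XML_mapreduce.map.env
        files.setdefault(filename.lower(), []).append((key, value))
    rendered = {}
    for filename, props in files.items():
        body = "\n".join(
            f"  <property><name>{k}</name><value>{v}</value></property>"
            for k, v in props
        )
        rendered[filename] = f"<configuration>\n{body}\n</configuration>"
    return rendered

conf = env_to_hadoop_xml([
    "MAPRED-SITE.XML_mapreduce.framework.name=yarn",
    "MAPRED-SITE.XML_mapreduce.map.env=HADOOP_MAPRED_HOME=$HADOOP_HOME",
])
```

Note that splitting at the first '=' is what lets values such as HADOOP_MAPRED_HOME=$HADOOP_HOME (which themselves contain '=') pass through intact.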


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: HDDS-2265. integration.sh may report false negative

2019-10-09 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2d81abc  HDDS-2265. integration.sh may report false negative
2d81abc is described below

commit 2d81abce5ecfec555eda4819a6e2f5b22e1cd9b8
Author: Doroszlai, Attila 
AuthorDate: Wed Oct 9 16:17:40 2019 +0200

HDDS-2265. integration.sh may report false negative

Closes #1608
---
 hadoop-ozone/dev-support/checks/_mvn_unit_report.sh | 5 +
 1 file changed, 5 insertions(+)

diff --git a/hadoop-ozone/dev-support/checks/_mvn_unit_report.sh b/hadoop-ozone/dev-support/checks/_mvn_unit_report.sh
index df19330..81551d1 100755
--- a/hadoop-ozone/dev-support/checks/_mvn_unit_report.sh
+++ b/hadoop-ozone/dev-support/checks/_mvn_unit_report.sh
@@ -45,6 +45,11 @@ grep -A1 'Crashed tests' "${REPORT_DIR}/output.log" \
   | cut -f2- -d' ' \
   | sort -u >> "${REPORT_DIR}/summary.txt"
 
+## Check if Maven was killed
+if grep -q 'Killed.* mvn .* test ' "${REPORT_DIR}/output.log"; then
+  echo 'Maven test run was killed' >> "${REPORT_DIR}/summary.txt"
+fi
+
 #Collect of all of the report failes of FAILED tests
 while IFS= read -r -d '' dir; do
while IFS=$'\n' read -r file; do
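The added grep flags a Maven process that was killed mid run (for example by the kernel OOM killer), so the summary no longer reports a false negative. A rough Python equivalent of the same check, assuming output.log is plain text (maven_was_killed is an illustrative name, not part of the script):

```python
import re

def maven_was_killed(log_text: str) -> bool:
    # Mirrors: grep -q 'Killed.* mvn .* test ' "${REPORT_DIR}/output.log"
    return re.search(r"Killed.* mvn .* test ", log_text) is not None
```

The pattern deliberately requires both " mvn " and " test " after "Killed", so ordinary log lines mentioning Maven or tests alone do not trigger it.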





[hadoop] branch trunk updated (4b0a5bc -> b034350)

2019-10-09 Thread elek

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 4b0a5bc  HDDS-2217. Remove log4j and audit configuration from the docker-config files
 add b034350  Squashed commit of the following:

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/dev-support/checks/_mvn_unit_report.sh | 5 -
 1 file changed, 5 deletions(-)





[hadoop] branch trunk updated (6f1ab95 -> 1f954e6)

2019-10-09 Thread elek

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 6f1ab95  YARN-9128. Use SerializationUtils from apache commons to serialize / deserialize ResourceMappings. Contributed by Zoltan Siegl
 add 1f954e6  HDDS-2217. Remove log4j and audit configuration from the docker-config files

No new revisions were added by this update.

Summary of changes:
 .../dist/src/main/compose/ozone-hdfs/docker-config | 46 ---
 .../dist/src/main/compose/ozone-mr/common-config   |  9 
 .../src/main/compose/ozone-om-ha/docker-config | 45 --
 .../src/main/compose/ozone-recon/docker-config | 47 +--
 .../src/main/compose/ozone-topology/docker-config  | 49 
 .../dist/src/main/compose/ozone/docker-config  | 45 --
 .../src/main/compose/ozoneblockade/docker-config   | 45 --
 .../dist/src/main/compose/ozoneperf/docker-config  | 13 --
 .../src/main/compose/ozones3-haproxy/docker-config | 48 
 .../dist/src/main/compose/ozones3/docker-config| 48 
 .../src/main/compose/ozonescripts/docker-config|  7 +--
 .../src/main/compose/ozonesecure-mr/docker-config  | 46 ---
 .../src/main/compose/ozonesecure/docker-config | 53 --
 13 files changed, 2 insertions(+), 499 deletions(-)





[hadoop] branch trunk updated: HDDS-2217. Remove log4j and audit configuration from the docker-config files

2019-10-09 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4b0a5bc  HDDS-2217. Remove log4j and audit configuration from the docker-config files
4b0a5bc is described below

commit 4b0a5bca465c84265b8305e001809fd1f986e8da
Author: Doroszlai, Attila 
AuthorDate: Wed Oct 9 15:51:00 2019 +0200

HDDS-2217. Remove log4j and audit configuration from the docker-config files

Closes #1582




[hadoop] branch HDDS-2071 created (now 65cb751)

2019-10-07 Thread elek

elek pushed a change to branch HDDS-2071
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at 65cb751  address review comments

No new revisions were added by this update.





[hadoop] branch trunk updated: HDDS-2252. Enable gdpr robot test in daily build

2019-10-07 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7f332eb  HDDS-2252. Enable gdpr robot test in daily build
7f332eb is described below

commit 7f332ebf8b67d1ebf03f4fac9596ee18a99054cc
Author: dchitlangia 
AuthorDate: Mon Oct 7 11:35:39 2019 +0200

HDDS-2252. Enable gdpr robot test in daily build

Closes #1602
---
 hadoop-ozone/dist/src/main/compose/ozone/test.sh | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hadoop-ozone/dist/src/main/compose/ozone/test.sh b/hadoop-ozone/dist/src/main/compose/ozone/test.sh
index fbae76d..e06f817 100755
--- a/hadoop-ozone/dist/src/main/compose/ozone/test.sh
+++ b/hadoop-ozone/dist/src/main/compose/ozone/test.sh
@@ -31,6 +31,8 @@ start_docker_env
 
 execute_robot_test scm basic/basic.robot
 
+execute_robot_test scm gdpr/gdpr.robot
+
 stop_docker_env
 
 generate_report





[hadoop] branch trunk updated (fb1ecff -> 579dc87)

2019-10-05 Thread elek

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from fb1ecff  Revert "YARN-9873. Mutation API Config Change updates Version Number. Contributed by Prabhu Joseph"
 add 579dc87  HDDS-2251. Add an option to customize unit.sh and integration.sh parameters

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/dev-support/checks/integration.sh | 2 +-
 hadoop-ozone/dev-support/checks/unit.sh| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)





[hadoop] branch trunk updated (8de4374 -> a3cf54c)

2019-10-04 Thread elek

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 8de4374  HDDS-2158. Fixing Json Injection Issue in JsonUtils. (#1486)
 add a3cf54c  HDDS-2250. Generated configs missing from ozone-filesystem-lib jars

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/ozonefs-lib-current/pom.xml | 3 +++
 1 file changed, 3 insertions(+)





[hadoop] branch trunk updated (531cc93 -> f826420)

2019-10-04 Thread elek

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 531cc93  HDDS-. Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C. (#1595)
 add f826420  HDDS-2230. Invalid entries in ozonesecure-mr config. (Addendum)

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml | 3 +++
 1 file changed, 3 insertions(+)





[hadoop] branch trunk updated: HDDS-2216. Rename HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in compose .env files.

2019-10-04 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new bca014b  HDDS-2216. Rename HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in compose .env files.
bca014b is described below

commit bca014b0e03fb37711022ee6ed4272c346cdf5c9
Author: cxorm 
AuthorDate: Thu Oct 3 20:47:36 2019 +0800

HDDS-2216. Rename HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in compose .env files.

Closes #1570.
---
 hadoop-ozone/dev-support/checks/blockade.sh  |  2 +-
 hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env   |  2 +-
 .../dist/src/main/compose/ozone-hdfs/docker-compose.yaml |  6 +++---
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/.env|  2 +-
 .../src/main/compose/ozone-mr/hadoop27/docker-compose.yaml   |  8 
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/.env|  2 +-
 .../src/main/compose/ozone-mr/hadoop31/docker-compose.yaml   |  8 
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/.env|  2 +-
 .../src/main/compose/ozone-mr/hadoop32/docker-compose.yaml   |  8 
 hadoop-ozone/dist/src/main/compose/ozone-om-ha/.env  |  2 +-
 .../dist/src/main/compose/ozone-om-ha/docker-compose.yaml| 10 +-
 hadoop-ozone/dist/src/main/compose/ozone-recon/.env  |  2 +-
 .../dist/src/main/compose/ozone-recon/docker-compose.yaml|  8 
 hadoop-ozone/dist/src/main/compose/ozone-topology/.env   |  2 +-
 .../dist/src/main/compose/ozone-topology/docker-compose.yaml | 12 ++--
 hadoop-ozone/dist/src/main/compose/ozone/.env|  2 +-
 hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml |  6 +++---
 hadoop-ozone/dist/src/main/compose/ozoneblockade/.env|  2 +-
 .../dist/src/main/compose/ozoneblockade/docker-compose.yaml  |  8 
 hadoop-ozone/dist/src/main/compose/ozoneperf/.env|  2 +-
 .../dist/src/main/compose/ozoneperf/docker-compose.yaml  | 10 +-
 hadoop-ozone/dist/src/main/compose/ozones3-haproxy/.env  |  2 +-
 .../src/main/compose/ozones3-haproxy/docker-compose.yaml | 12 ++--
 hadoop-ozone/dist/src/main/compose/ozones3/.env  |  2 +-
 .../dist/src/main/compose/ozones3/docker-compose.yaml|  8 
 hadoop-ozone/dist/src/main/compose/ozonescripts/.env |  2 +-
 hadoop-ozone/dist/src/main/compose/ozonesecure-mr/.env   |  2 +-
 .../dist/src/main/compose/ozonesecure-mr/docker-compose.yaml |  8 
 hadoop-ozone/dist/src/main/compose/ozonesecure/.env  |  2 +-
 .../dist/src/main/compose/ozonesecure/docker-compose.yaml| 10 +-
 .../network-tests/src/test/blockade/ozone/cluster.py |  4 ++--
 31 files changed, 79 insertions(+), 79 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/blockade.sh b/hadoop-ozone/dev-support/checks/blockade.sh
index f8b25c1..a48d2b5 100755
--- a/hadoop-ozone/dev-support/checks/blockade.sh
+++ b/hadoop-ozone/dev-support/checks/blockade.sh
@@ -21,7 +21,7 @@ OZONE_VERSION=$(grep "<ozone.version>" "$DIR/../../pom.xml" | sed 's/<[^>]*>//g'
 cd "$DIR/../../dist/target/ozone-$OZONE_VERSION/tests" || exit 1
 
 source ${DIR}/../../dist/target/ozone-${OZONE_VERSION}/compose/ozoneblockade/.env
-export HADOOP_RUNNER_VERSION
+export OZONE_RUNNER_VERSION
 export HDDS_VERSION
 
 python -m pytest -s blockade
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env b/hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env
index 8916fc3..df9065c 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env
+++ b/hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env
@@ -15,4 +15,4 @@
 # limitations under the License.
 
 HADOOP_VERSION=3
-HADOOP_RUNNER_VERSION=${docker.ozone-runner.version}
\ No newline at end of file
+OZONE_RUNNER_VERSION=${docker.ozone-runner.version}
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-compose.yaml b/hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-compose.yaml
index cd06635..7d8295d 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-compose.yaml
@@ -37,7 +37,7 @@ services:
   env_file:
 - ./docker-config
om:
-  image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
+  image: apache/ozone-runner:${OZONE_RUNNER_VERSION}
   volumes:
  - ../..:/opt/hadoop
   ports:
@@ -48,7 +48,7 @@ services:
   - ./docker-config
   command: ["ozone","om"]
scm:
-  image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
+  image: apache/ozone-runner:${OZONE_RUNNER_VERSION}
   volumes:
  - ../..:/opt/hadoop
   ports:
@@ -59,7 +59,7 @@ services:
   ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
 

[hadoop] 01/01: Merge remote-tracking branch 'origin/trunk' into HDDS-1880-Decom

2019-10-04 Thread elek

elek pushed a commit to branch HDDS-1880-Decom
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit ec70207838d5b29fa0b534b13c103865e50a35e8
Merge: fd5e877 6171a41
Author: Márton Elek 
AuthorDate: Fri Oct 4 14:17:38 2019 +0200

Merge remote-tracking branch 'origin/trunk' into HDDS-1880-Decom

 BUILDING.txt   |  31 -
 dev-support/docker/Dockerfile  |  19 +-
 .../hadoop-cos/dev-support/findbugs-exclude.xml|  18 +
 hadoop-cloud-storage-project/hadoop-cos/pom.xml| 140 
 .../site/markdown/cloud-storage/index.md   | 367 ++
 .../hadoop-cos/site/resources/css/site.css |  29 +
 .../java/org/apache/hadoop/fs/cosn/BufferPool.java | 245 +++
 .../hadoop/fs/cosn/ByteBufferInputStream.java  |  89 +++
 .../hadoop/fs/cosn/ByteBufferOutputStream.java |  74 ++
 .../apache/hadoop/fs/cosn/ByteBufferWrapper.java   | 103 +++
 .../java/org/apache/hadoop/fs/cosn/Constants.java  |  35 +-
 .../main/java/org/apache/hadoop/fs/cosn/CosN.java  |  31 +-
 .../org/apache/hadoop/fs/cosn/CosNConfigKeys.java  |  86 +++
 .../apache/hadoop/fs/cosn/CosNCopyFileContext.java |  66 ++
 .../apache/hadoop/fs/cosn/CosNCopyFileTask.java|  68 ++
 .../apache/hadoop/fs/cosn/CosNFileReadTask.java| 125 
 .../org/apache/hadoop/fs/cosn/CosNFileSystem.java  | 814 +
 .../org/apache/hadoop/fs/cosn/CosNInputStream.java | 365 +
 .../apache/hadoop/fs/cosn/CosNOutputStream.java| 284 +++
 .../java/org/apache/hadoop/fs/cosn/CosNUtils.java  | 167 +
 .../hadoop/fs/cosn/CosNativeFileSystemStore.java   | 768 +++
 .../org/apache/hadoop/fs/cosn/FileMetadata.java|  68 ++
 .../hadoop/fs/cosn/NativeFileSystemStore.java  |  99 +++
 .../org/apache/hadoop/fs/cosn/PartialListing.java  |  64 ++
 .../main/java/org/apache/hadoop/fs/cosn/Unit.java  |  27 +-
 .../fs/cosn/auth/COSCredentialProviderList.java| 139 
 .../EnvironmentVariableCredentialProvider.java |  55 ++
 .../fs/cosn/auth/NoAuthWithCOSException.java   |  32 +-
 .../fs/cosn/auth/SimpleCredentialProvider.java |  54 ++
 .../apache/hadoop/fs/cosn/auth/package-info.java   |  19 +-
 .../org/apache/hadoop/fs/cosn/package-info.java|  19 +-
 .../apache/hadoop/fs/cosn/CosNTestConfigKey.java   |  30 +-
 .../org/apache/hadoop/fs/cosn/CosNTestUtils.java   |  78 ++
 .../apache/hadoop/fs/cosn/TestCosNInputStream.java | 167 +
 .../hadoop/fs/cosn/TestCosNOutputStream.java   |  87 +++
 .../hadoop/fs/cosn/contract/CosNContract.java  |  36 +-
 .../fs/cosn/contract/TestCosNContractCreate.java   |  26 +-
 .../fs/cosn/contract/TestCosNContractDelete.java   |  26 +-
 .../fs/cosn/contract/TestCosNContractDistCp.java   |  54 ++
 .../contract/TestCosNContractGetFileStatus.java|  27 +-
 .../fs/cosn/contract/TestCosNContractMkdir.java|  26 +-
 .../fs/cosn/contract/TestCosNContractOpen.java |  26 +-
 .../fs/cosn/contract/TestCosNContractRename.java   |  26 +-
 .../fs/cosn/contract/TestCosNContractRootDir.java  |  27 +-
 .../fs/cosn/contract/TestCosNContractSeek.java |  26 +-
 .../hadoop/fs/cosn/contract/package-info.java  |  19 +-
 .../src/test/resources/contract/cosn.xml   | 120 +++
 .../hadoop-cos/src/test/resources/core-site.xml| 107 +++
 .../hadoop-cos/src/test/resources/log4j.properties |  18 +
 hadoop-cloud-storage-project/pom.xml   |   1 +
 .../apache/hadoop/crypto/CryptoInputStream.java|  67 +-
 .../org/apache/hadoop/fs/AbstractFileSystem.java   |  16 +-
 .../hadoop/fs/ByteBufferPositionedReadable.java|  24 +
 .../org/apache/hadoop/fs/ChecksumFileSystem.java   |  22 +
 .../apache/hadoop/fs/CommonPathCapabilities.java   | 126 
 .../org/apache/hadoop/fs/DelegateToFileSystem.java |   7 +
 .../org/apache/hadoop/fs/FSDataInputStream.java|  23 +-
 .../java/org/apache/hadoop/fs/FileContext.java |  23 +-
 .../main/java/org/apache/hadoop/fs/FileSystem.java |  30 +-
 .../org/apache/hadoop/fs/FilterFileSystem.java |   7 +
 .../main/java/org/apache/hadoop/fs/FilterFs.java   |   5 +
 .../main/java/org/apache/hadoop/fs/Globber.java| 208 +-
 .../java/org/apache/hadoop/fs/HarFileSystem.java   |  19 +-
 .../org/apache/hadoop/fs/PathCapabilities.java |  61 ++
 .../org/apache/hadoop/fs/RawLocalFileSystem.java   |  19 +
 .../hadoop/fs/http/AbstractHttpFileSystem.java |  18 +
 .../apache/hadoop/fs/impl/FsLinkResolution.java|  98 +++
 .../hadoop/fs/impl/PathCapabilitiesSupport.java|  40 +-
 .../java/org/apache/hadoop/fs/shell/Mkdir.java |   4 +-
 .../hadoop/fs/viewfs/ChRootedFileSystem.java   |   6 +
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  32 +
 .../apache/hadoop/util/NodeHealthScriptRunner.java |   1 +
 .../src/site/markdown/DeprecatedProperties.md  |   4 +
 .../src/site/markdown/filesystem/filesystem.md |   5 +-
 .../src/sit

[hadoop] branch HDDS-1880-Decom updated (fd5e877 -> ec70207)

2019-10-04 Thread elek

elek pushed a change to branch HDDS-1880-Decom
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from fd5e877  Merge branch 'trunk' into HDDS-1880-Decom
 add 3d78b12  YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar
 add 3fd3d74  HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
 add 6cbe5d3  HDDS-2160. Add acceptance test for ozonesecure-mr compose. Contributed by Xiaoyu Yao. (#1490)
 add 0a716bd  HDDS-2159. Fix Race condition in ProfileServlet#pid.
 add bfe1dac  HADOOP-16560. [YARN] use protobuf-maven-plugin to generate protobuf classes (#1496)
 add e8e7d7b  HADOOP-16561. [MAPREDUCE] use protobuf-maven-plugin to generate protobuf classes (#1500)
 add 8f1a135  HDDS-2081. Fix TestRatisPipelineProvider#testCreatePipelinesDnExclude. Contributed by Aravindan Vijayan. (#1506)
 add 51c64b3  HDFS-13660. DistCp job fails when new data is appended in the file while the DistCp copy job is running
 add 91f50b9  HDDS-2167. Hadoop31-mr acceptance test is failing due to the shading
 add 43203b4  HDFS-14868. RBF: Fix typo in TestRouterQuota. Contributed by Jinglun.
 add 816d3cb  HDFS-14837. Review of Block.java. Contributed by David Mollitor.
 add afa1006  HDFS-14843. Double Synchronization in BlockReportLeaseManager. Contributed by David Mollitor.
 add f16cf87  HDDS-2170. Add Object IDs and Update ID to Volume Object (#1510)
 add eb96a30  HDFS-14655. [SBN Read] Namenode crashes if one of The JN is down. Contributed by Ayush Saxena.
 add 66400c1  HDFS-14808. EC: Improper size values for corrupt ec block in LOG. Contributed by Ayush Saxena.
 add c2731d4  YARN-9730. Support forcing configured partitions to be exclusive based on app node label
 add 6917754  HDDS-2172.Ozone shell should remove description about REST protocol support. Contributed by Siddharth Wagle.
 add a346381  HDDS-2168. TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error (#1509)
 add 3f89084  HDFS-14845. Ignore AuthenticationFilterInitializer for HttpFSServerWebServer and honor hadoop.http.authentication configs.
 add bec0864  YARN-9808. Zero length files in container log output haven't got a header. Contributed by Adam Antal
 add c724577  YARN-6715. Fix documentation about NodeHealthScriptRunner. Contributed by Peter Bacsko
 add 8baebb5  HDDS-2171. Dangling links in test report due to incompatible realpath
 add e6fb6ee  HDDS-1738. Add nullable annotation for OMResponse classes
 add e346e36  HADOOP-15691 Add PathCapabilities to FileSystem and FileContext.
 add 16f626f  HDDS-2165. Freon fails if bucket does not exists
 add c89d22d  HADOOP-16602. mvn package fails in hadoop-aws.
 add bdaaa3b  HDFS-14832. RBF: Add Icon for ReadOnly False. Contributed by hemanthboyina
 add f647185  HDDS-2067. Create generic service facade with tracing/metrics/logging support
 add 606e341  Addendum to YARN-9730. Support forcing configured partitions to be exclusive based on app node label
 add 587a8ee  HDFS-14874. Fix TestHDFSCLI and TestDFSShell test break because of logging change in mkdir (#1522). Contributed by Gabor Bota.
 add 7b6219a  HDDS-2182. Fix checkstyle violations introduced by HDDS-1738
 add a3f6893  HDFS-14873. Fix dfsadmin doc for triggerBlockReport. Contributed by Fei Hui.
 add 1a2a352  HDFS-11934. Add assertion to TestDefaultNameNodePort#testGetAddressFromConf. Contributed by Nikhil Navadiya.
 add 18a8c24  YARN-9857. TestDelegationTokenRenewer throws NPE but tests pass. Contributed by Ahmed Hussein
 add 06998a1  HDDS-2180. Add Object ID and update ID on VolumeList Object. (#1526)
 add b1e55cf  HDFS-14461. RBF: Fix intermittently failing kerberos related unit test. Contributed by Xiaoqiao He.
 add 2adcc3c  HDFS-14785. [SBN read] Change client logging to be less aggressive. Contributed by Chen Liang.
 add c55ac6a  HDDS-2174. Delete GDPR Encryption Key from metadata when a Key is deleted
 add b6ef8cc  HDD-2193. Adding container related metrics in SCM.
 add 0371e95  HDDS-2179. ConfigFileGenerator fails with Java 10 or newer
 add 9bf7a6e  HDDS-2149. Replace findbugs with spotbugs
 add 2870668  Make upstream aware of 3.1.3 release.
 add 8a9ede5  HADOOP-15616. Incorporate Tencent Cloud COS File System Implementation. Contributed by Yang Yu.
 add a93a139  HDDS-2185. createmrenv failure not reflected in acceptance test result
 add ce58c05  HDFS-14849. Erasure Coding: the internal block is replicated many times when datanode is decommissioning. Contributed by HuangTao.
 add 13b427f  HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable (#963) Contributed by Sahil Takiar.
 add 14b4fbc  HDDS-1146. Adding container related

[hadoop] branch trunk updated: HDDS-2199. In SCMNodeManager dnsToUuidMap cannot track multiple DNs on the same host

2019-10-04 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 6171a41  HDDS-2199. In SCMNodeManager dnsToUuidMap cannot track multiple DNs on the same host
6171a41 is described below

commit 6171a41b4c29a4039b53209df546c4c42a278464
Author: S O'Donnell 
AuthorDate: Fri Oct 4 14:00:06 2019 +0200

HDDS-2199. In SCMNodeManager dnsToUuidMap cannot track multiple DNs on the same host

Closes #1551
---
 .../apache/hadoop/hdds/scm/node/NodeManager.java   |  8 +--
 .../hadoop/hdds/scm/node/SCMNodeManager.java   | 51 
 .../hdds/scm/server/SCMBlockProtocolServer.java|  7 ++-
 .../hadoop/hdds/scm/container/MockNodeManager.java | 36 ++--
 .../hadoop/hdds/scm/node/TestSCMNodeManager.java   | 67 +-
 .../testutils/ReplicationNodeManagerMock.java  |  5 +-
 6 files changed, 149 insertions(+), 25 deletions(-)

diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
index d8890fb..fd8bb87 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
@@ -192,11 +192,11 @@ public interface NodeManager extends StorageContainerNodeProtocol,
   DatanodeDetails getNodeByUuid(String uuid);
 
   /**
-   * Given datanode address(Ipaddress or hostname), returns the DatanodeDetails
-   * for the node.
+   * Given datanode address(Ipaddress or hostname), returns a list of
+   * DatanodeDetails for the datanodes running at that address.
    *
    * @param address datanode address
-   * @return the given datanode, or null if not found
+   * @return the given datanode, or empty list if none found
    */
-  DatanodeDetails getNodeByAddress(String address);
+  List<DatanodeDetails> getNodesByAddress(String address);
 }
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
index d3df858..ed65ed3 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
@@ -25,11 +25,13 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
+import java.util.LinkedList;
 import java.util.UUID;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ScheduledFuture;
 import java.util.stream.Collectors;
 
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
@@ -98,7 +100,7 @@ public class SCMNodeManager implements NodeManager {
   private final NetworkTopology clusterMap;
   private final DNSToSwitchMapping dnsToSwitchMapping;
   private final boolean useHostname;
-  private final ConcurrentHashMap<String, String> dnsToUuidMap =
+  private final ConcurrentHashMap<String, Set<String>> dnsToUuidMap =
       new ConcurrentHashMap<>();
 
   /**
@@ -260,7 +262,7 @@ public class SCMNodeManager implements NodeManager {
   }
   nodeStateManager.addNode(datanodeDetails);
   clusterMap.add(datanodeDetails);
-  dnsToUuidMap.put(dnsName, datanodeDetails.getUuidString());
+  addEntryTodnsToUuidMap(dnsName, datanodeDetails.getUuidString());
   // Updating Node Report, as registration is successful
   processNodeReport(datanodeDetails, nodeReport);
   LOG.info("Registered Data node : {}", datanodeDetails);
@@ -276,6 +278,26 @@ public class SCMNodeManager implements NodeManager {
   }
 
   /**
+   * Add an entry to the dnsToUuidMap, which maps hostname / IP to the DNs
+   * running on that host. As each address can have many DNs running on it,
+   * this is a one to many mapping.
+   * @param dnsName String representing the hostname or IP of the node
+   * @param uuid String representing the UUID of the registered node.
+   */
+  @SuppressFBWarnings(value="AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION",
+      justification="The method is synchronized and this is the only place "+
+          "dnsToUuidMap is modified")
+  private synchronized void addEntryTodnsToUuidMap(
+      String dnsName, String uuid) {
+    Set<String> dnList = dnsToUuidMap.get(dnsName);
+    if (dnList == null) {
+      dnList = ConcurrentHashMap.newKeySet();
+      dnsToUuidMap.put(dnsName, dnList);
+    }
+    dnList.add(uuid);
+  }
+
+  /**
* Send heartbeat to indicate the datanode is alive and doing well.
*
* @param data

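[Editor's note] The patch above serializes its check-then-act update with `synchronized` (hence the FindBugs suppression). On Java 8+, `ConcurrentHashMap.computeIfAbsent` performs the same lookup-or-create atomically per key without an external lock. A minimal sketch of that alternative — class and method names here are illustrative, not part of the patch:

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in for SCMNodeManager's dnsToUuidMap bookkeeping:
// one hostname/IP can map to many datanode UUIDs.
public class DnsToUuidMapSketch {
  private final ConcurrentHashMap<String, Set<String>> dnsToUuidMap =
      new ConcurrentHashMap<>();

  // computeIfAbsent makes the "create the set if missing" step atomic,
  // so no method-level synchronization is needed for registration.
  public void addEntry(String dnsName, String uuid) {
    dnsToUuidMap
        .computeIfAbsent(dnsName, k -> ConcurrentHashMap.newKeySet())
        .add(uuid);
  }

  public Set<String> getUuids(String dnsName) {
    return dnsToUuidMap.getOrDefault(dnsName, Collections.emptySet());
  }

  public static void main(String[] args) {
    DnsToUuidMapSketch map = new DnsToUuidMapSketch();
    map.addEntry("host1", "uuid-a");
    map.addEntry("host1", "uuid-b"); // two datanodes on the same host
    System.out.println(map.getUuids("host1").size()); // prints 2
  }
}
```

Either shape is correct; the committed version keeps an explicit, documented suppression, while `computeIfAbsent` trades that for a lock-free idiom.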
[hadoop] branch trunk updated (bffcd33 -> d061c84)

2019-10-04 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from bffcd33  HDDS-2230. Invalid entries in ozonesecure-mr config
 add d061c84  HDDS-2140. Add robot test for GDPR feature

No new revisions were added by this update.

Summary of changes:
 .../dist/src/main/smoketest/gdpr/gdpr.robot| 89 ++
 1 file changed, 89 insertions(+)
 create mode 100644 hadoop-ozone/dist/src/main/smoketest/gdpr/gdpr.robot


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated (a9849f6 -> bffcd33)

2019-10-04 Thread elek

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from a9849f6  Revert "HDDS- (#1578)" (#1594)
 add bffcd33  HDDS-2230. Invalid entries in ozonesecure-mr config

No new revisions were added by this update.

Summary of changes:
 .../compose/ozonesecure-mr/docker-compose.yaml | 29 ++
 .../src/main/compose/ozonesecure-mr/docker-config  | 28 +++--
 2 files changed, 39 insertions(+), 18 deletions(-)





[hadoop] branch trunk updated (c99a121 -> ec8f691)

2019-10-03 Thread elek

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from c99a121  HDFS-14637. Namenode may not replicate blocks to meet the 
policy after enabling upgradeDomain. Contributed by Stephen O'Donnell.
 add ec8f691  HDDS-2225. SCM fails to start in most unsecure environments 
due to leftover secure config

No new revisions were added by this update.

Summary of changes:
 .../dist/src/main/compose/ozonesecure-mr/docker-compose.yaml   | 10 --
 1 file changed, 10 deletions(-)





[hadoop] branch trunk updated: HDDS-2187. ozone-mr test fails with No FileSystem for scheme "o3fs"

2019-10-02 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f1ba9bf  HDDS-2187. ozone-mr test fails with No FileSystem for scheme 
"o3fs"
f1ba9bf is described below

commit f1ba9bfad75acf40faabd5b2f30cbd920fa800ec
Author: Doroszlai, Attila 
AuthorDate: Wed Oct 2 12:57:23 2019 +0200

HDDS-2187. ozone-mr test fails with No FileSystem for scheme "o3fs"

Closes #1537
---
 .../dist/src/main/compose/ozonesecure-mr/docker-config   |  3 +--
 .../META-INF/services/org.apache.hadoop.fs.FileSystem| 16 
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config 
b/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
index f5c5fbd..5bd3c92 100644
--- a/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
+++ b/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
@@ -62,7 +62,6 @@ 
HADOOP-POLICY.XML_org.apache.hadoop.yarn.server.api.ResourceTracker.acl=*
 HDFS-SITE.XML_rpc.metrics.quantile.enable=true
 HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300
 
-CORE-SITE.xml_fs.o3fs.impl=org.apache.hadoop.fs.ozone.OzoneFileSystem
 CORE-SITE.xml_fs.AbstractFileSystem.o3fs.impl=org.apache.hadoop.fs.ozone.OzFs
 CORE-SITE.xml_fs.defaultFS=o3fs://bucket1.vol1/
 
@@ -175,4 +174,4 @@ KERBEROS_SERVER=kdc
 JAVA_HOME=/usr/lib/jvm/jre
 JSVC_HOME=/usr/bin
 SLEEP_SECONDS=5
-KERBEROS_ENABLED=true
\ No newline at end of file
+KERBEROS_ENABLED=true
diff --git 
a/hadoop-ozone/tools/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
 
b/hadoop-ozone/tools/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
new file mode 100644
index 000..0368002
--- /dev/null
+++ 
b/hadoop-ozone/tools/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
@@ -0,0 +1,16 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+org.apache.hadoop.fs.ozone.OzoneFileSystem





[hadoop] branch trunk updated: HDDS-2166. Some RPC metrics are missing from SCM prometheus endpoint

2019-10-01 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 918b470  HDDS-2166. Some RPC metrics are missing from SCM prometheus 
endpoint
918b470 is described below

commit 918b470deb35c892efcfa8ceba211a38cbe7bf4c
Author: Márton Elek 
AuthorDate: Tue Oct 1 17:41:45 2019 +0200

HDDS-2166. Some RPC metrics are missing from SCM prometheus endpoint

Closes #1505
---
 .../hadoop/hdds/server/PrometheusMetricsSink.java  | 16 +++--
 .../hdds/server/TestPrometheusMetricsSink.java | 77 --
 2 files changed, 84 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
index 39c8c8b..f37d323 100644
--- 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
+++ 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
@@ -69,8 +69,10 @@ public class PrometheusMetricsSink implements MetricsSink {
             .append(key)
             .append(" ")
             .append(metrics.type().toString().toLowerCase())
-            .append("\n")
-            .append(key)
+            .append("\n");
+
+        StringBuilder prometheusMetricKey = new StringBuilder();
+        prometheusMetricKey.append(key)
             .append("{");
         String sep = "";
 
@@ -80,7 +82,7 @@ public class PrometheusMetricsSink implements MetricsSink {
 
           //ignore specific tag which includes sub-hierarchy
           if (!tagName.equals("numopenconnectionsperuser")) {
-            builder.append(sep)
+            prometheusMetricKey.append(sep)
                 .append(tagName)
                 .append("=\"")
                 .append(tag.value())
@@ -88,10 +90,14 @@ public class PrometheusMetricsSink implements MetricsSink {
             sep = ",";
           }
         }
-        builder.append("} ");
+        prometheusMetricKey.append("}");
+
+        String prometheusMetricKeyAsString = prometheusMetricKey.toString();
+        builder.append(prometheusMetricKeyAsString);
+        builder.append(" ");
         builder.append(metrics.value());
         builder.append("\n");
-        metricLines.put(key, builder.toString());
+        metricLines.put(prometheusMetricKeyAsString, builder.toString());
 
       }
     }
diff --git 
a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
 
b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
index e233f65..f2683b5 100644
--- 
a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
+++ 
b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
@@ -21,17 +21,19 @@ import java.io.ByteArrayOutputStream;
 import java.io.IOException;
 import java.io.OutputStreamWriter;
 
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsSource;
 import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.MetricsTag;
 import org.apache.hadoop.metrics2.annotation.Metric;
 import org.apache.hadoop.metrics2.annotation.Metrics;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.metrics2.lib.MutableCounterLong;
 
+import static java.nio.charset.StandardCharsets.UTF_8;
 import org.junit.Assert;
 import org.junit.Test;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
-
 /**
  * Test prometheus Sink.
  */
@@ -60,7 +62,6 @@ public class TestPrometheusMetricsSink {
 
 //THEN
 String writtenMetrics = stream.toString(UTF_8.name());
-System.out.println(writtenMetrics);
 Assert.assertTrue(
 "The expected metric line is missing from prometheus metrics output",
 writtenMetrics.contains(
@@ -72,6 +73,49 @@ public class TestPrometheusMetricsSink {
   }
 
   @Test
+  public void testPublishWithSameName() throws IOException {
+    //GIVEN
+    MetricsSystem metrics = DefaultMetricsSystem.instance();
+
+    metrics.init("test");
+    PrometheusMetricsSink sink = new PrometheusMetricsSink();
+    metrics.register("Prometheus", "Prometheus", sink);
+    metrics.register("FooBar", "fooBar", (MetricsSource) (collector, all) -> {
+      collector.addRecord("RpcMetrics").add(new MetricsTag(PORT_INFO, "1234"))
+          .addGauge(COUNTER_INFO, 123).endRecord();
+
+      collector.addRecord("RpcMetrics").add(new MetricsTag(
+          PORT_INFO, &q

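[Editor's note] The fix above keys `metricLines` by the full Prometheus sample key — metric name plus rendered label set — instead of the bare name, so two records that share a name but differ only in a tag (e.g. two RPC ports) no longer overwrite each other. A hedged sketch of that keying idea; the class and method names are illustrative, not the Hadoop sink's API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PromKeySketch {
  // Render "name{tag="value",...}" in Prometheus text exposition style and
  // use it as the map key, so records differing only in tags both survive.
  static String promKey(String name, Map<String, String> tags) {
    StringBuilder key = new StringBuilder(name).append('{');
    String sep = "";
    for (Map.Entry<String, String> t : tags.entrySet()) {
      key.append(sep).append(t.getKey()).append("=\"")
          .append(t.getValue()).append('"');
      sep = ",";
    }
    return key.append('}').toString();
  }

  public static void main(String[] args) {
    Map<String, String> lines = new LinkedHashMap<>();
    lines.put(promKey("rpc_queue_time", Map.of("port", "9860")),
        "rpc_queue_time{port=\"9860\"} 1\n");
    lines.put(promKey("rpc_queue_time", Map.of("port", "9861")),
        "rpc_queue_time{port=\"9861\"} 2\n");
    System.out.println(lines.size()); // prints 2: both ports kept
  }
}
```

Keyed by bare name, the second `put` would have replaced the first; keyed by name plus labels, both samples reach the scrape output.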
[hadoop] branch trunk updated (a530ac3 -> b46d823)

2019-09-30 Thread elek

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from a530ac3  HDDS-2153. Add a config to tune max pending requests in Ratis 
leader
 add b46d823  HDDS-2202. Remove unused import in OmUtils

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java | 1 -
 1 file changed, 1 deletion(-)





[hadoop] branch trunk updated (d6b0a8d -> a530ac3)

2019-09-30 Thread elek

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from d6b0a8d  HDDS-2183. Container and pipline subcommands of scmcli should 
be grouped
 add a530ac3  HDDS-2153. Add a config to tune max pending requests in Ratis 
leader

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java   | 5 +
 .../src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java| 5 +
 hadoop-hdds/common/src/main/resources/ozone-default.xml   | 8 
 .../common/transport/server/ratis/XceiverServerRatis.java | 6 ++
 4 files changed, 24 insertions(+)





[hadoop] branch trunk updated (760b523 -> d6b0a8d)

2019-09-30 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 760b523  Revert "HDFS-14305. Fix serial number calculation in 
BlockTokenSecretManager to avoid token key ID overlap between NameNodes. 
Contributed by He Xiaoqiao."
 add d6b0a8d  HDDS-2183. Container and pipline subcommands of scmcli should 
be grouped

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/hdds/scm/cli/SCMCLI.java | 22 --
 .../hdds/scm/cli/container/CloseSubcommand.java|  7 +++
 .../ContainerCommands.java}| 21 -
 .../hdds/scm/cli/container/CreateSubcommand.java   |  5 ++---
 .../hdds/scm/cli/container/DeleteSubcommand.java   |  7 +++
 .../hdds/scm/cli/container/InfoSubcommand.java |  5 ++---
 .../hdds/scm/cli/container/ListSubcommand.java |  5 ++---
 .../cli/pipeline/ActivatePipelineSubcommand.java   | 11 +--
 .../scm/cli/pipeline/ClosePipelineSubcommand.java  | 11 +--
 .../cli/pipeline/DeactivatePipelineSubcommand.java | 11 +--
 .../scm/cli/pipeline/ListPipelinesSubcommand.java  | 11 +--
 .../PipelineCommands.java} | 20 +++-
 12 files changed, 59 insertions(+), 77 deletions(-)
 copy 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/{ReplicationManagerCommands.java
 => container/ContainerCommands.java} (73%)
 copy 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/{ReplicationManagerCommands.java
 => pipeline/PipelineCommands.java} (73%)





[hadoop] branch trunk updated (8a9ede5 -> a93a139)

2019-09-27 Thread elek

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 8a9ede5  HADOOP-15616. Incorporate Tencent Cloud COS File System 
Implementation. Contributed by Yang Yu.
 add a93a139  HDDS-2185. createmrenv failure not reflected in acceptance 
test result

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/dist/src/main/compose/testlib.sh | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)





[hadoop] branch trunk updated (587a8ee -> 7b6219a)

2019-09-26 Thread elek

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 587a8ee  HDFS-14874. Fix TestHDFSCLI and TestDFSShell test break 
because of logging change in mkdir (#1522). Contributed by Gabor Bota.
 add 7b6219a  HDDS-2182. Fix checkstyle violations introduced by HDDS-1738

No new revisions were added by this update.

Summary of changes:
 .../ozone/om/response/bucket/OMBucketDeleteResponse.java   |  1 -
 .../hadoop/ozone/om/response/key/OMKeyCommitResponse.java  |  3 ++-
 .../hadoop/ozone/om/response/key/OMKeyDeleteResponse.java  | 10 +++---
 .../hadoop/ozone/om/response/key/OMKeyPurgeResponse.java   |  3 ++-
 .../hadoop/ozone/om/response/key/OMKeyRenameResponse.java  |  4 ++--
 .../response/s3/multipart/S3MultipartUploadAbortResponse.java  |  3 ++-
 6 files changed, 15 insertions(+), 9 deletions(-)





[hadoop] branch trunk updated: HDDS-2165. Freon fails if bucket does not exists

2019-09-25 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 16f626f  HDDS-2165. Freon fails if bucket does not exists
16f626f is described below

commit 16f626f7f05982ee74a55d7248a2ad510683bfc6
Author: Doroszlai, Attila 
AuthorDate: Wed Sep 25 12:19:10 2019 +0200

HDDS-2165. Freon fails if bucket does not exists

Closes #1503
---
 .../main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
index 0303479..f9b5e03 100644
--- 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
+++ 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
@@ -267,8 +267,9 @@ public class BaseFreonGenerator {
       } catch (OMException ex) {
         if (ex.getResult() == ResultCodes.BUCKET_NOT_FOUND) {
           volume.createBucket(bucketName);
+        } else {
+          throw ex;
         }
-        throw ex;
       }
     }
   }

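[Editor's note] Before this fix the `throw ex` sat after the `if`, so Freon rethrew even on the recoverable `BUCKET_NOT_FOUND` path it had just handled by creating the bucket; now only unexpected results propagate. A sketch of the restored create-on-demand control flow, with hypothetical stand-ins (`OmLikeException`, `ResultCode`) for the Ozone types:

```java
// Hedged sketch of the fixed control flow; OmLikeException and ResultCode
// are hypothetical stand-ins, not the real Ozone Manager classes.
public class EnsureBucketSketch {

  public enum ResultCode { BUCKET_NOT_FOUND, PERMISSION_DENIED }

  public static class OmLikeException extends Exception {
    public final ResultCode result;
    public OmLikeException(ResultCode result) { this.result = result; }
  }

  // Check the bucket; create it if missing; rethrow any other failure.
  public static void ensureBucket(ResultCode failure) throws OmLikeException {
    try {
      throw new OmLikeException(failure); // simulate the bucket lookup failing
    } catch (OmLikeException ex) {
      if (ex.result == ResultCode.BUCKET_NOT_FOUND) {
        // recoverable: create the bucket and continue
      } else {
        throw ex; // the fix: rethrow only unexpected errors
      }
    }
  }

  public static void main(String[] args) throws Exception {
    ensureBucket(ResultCode.BUCKET_NOT_FOUND); // recovers silently
    try {
      ensureBucket(ResultCode.PERMISSION_DENIED);
    } catch (OmLikeException ex) {
      System.out.println("rethrown: " + ex.result); // prints "rethrown: PERMISSION_DENIED"
    }
  }
}
```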




[hadoop] branch trunk updated: HDDS-1738. Add nullable annotation for OMResponse classes

2019-09-25 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e6fb6ee  HDDS-1738. Add nullable annotation for OMResponse classes
e6fb6ee is described below

commit e6fb6ee94d8a295384b2367bc1e2967cd1ba888c
Author: Yi Sheng 
AuthorDate: Wed Sep 25 12:14:07 2019 +0200

HDDS-1738. Add nullable annotation for OMResponse classes

Closes #1499
---
 .../ozone/om/response/bucket/OMBucketCreateResponse.java  |  8 ++--
 .../ozone/om/response/bucket/OMBucketDeleteResponse.java  |  5 -
 .../ozone/om/response/bucket/OMBucketSetPropertyResponse.java |  7 +--
 .../ozone/om/response/file/OMDirectoryCreateResponse.java |  3 ++-
 .../hadoop/ozone/om/response/file/OMFileCreateResponse.java   |  3 ++-
 .../hadoop/ozone/om/response/key/OMAllocateBlockResponse.java |  6 --
 .../hadoop/ozone/om/response/key/OMKeyCommitResponse.java |  6 --
 .../hadoop/ozone/om/response/key/OMKeyCreateResponse.java |  3 ++-
 .../hadoop/ozone/om/response/key/OMKeyDeleteResponse.java |  7 +--
 .../hadoop/ozone/om/response/key/OMKeyPurgeResponse.java  |  3 ++-
 .../hadoop/ozone/om/response/key/OMKeyRenameResponse.java |  6 --
 .../response/s3/multipart/S3MultipartUploadAbortResponse.java |  4 +++-
 .../s3/multipart/S3MultipartUploadCommitPartResponse.java | 11 +++
 .../s3/multipart/S3MultipartUploadCompleteResponse.java   |  3 ++-
 .../ozone/om/response/volume/OMVolumeAclOpResponse.java   |  3 ++-
 .../ozone/om/response/volume/OMVolumeCreateResponse.java  |  4 +++-
 .../ozone/om/response/volume/OMVolumeDeleteResponse.java  |  4 +++-
 .../ozone/om/response/volume/OMVolumeSetOwnerResponse.java|  4 +++-
 .../ozone/om/response/volume/OMVolumeSetQuotaResponse.java|  4 +++-
 19 files changed, 66 insertions(+), 28 deletions(-)

diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketCreateResponse.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketCreateResponse.java
index 9b24910..3f800d3 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketCreateResponse.java
+++ 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketCreateResponse.java
@@ -28,6 +28,9 @@ import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
 .OMResponse;
 import org.apache.hadoop.hdds.utils.db.BatchOperation;
 
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
 /**
  * Response for CreateBucket request.
  */
@@ -35,8 +38,8 @@ public final class OMBucketCreateResponse extends 
OMClientResponse {
 
   private final OmBucketInfo omBucketInfo;
 
-  public OMBucketCreateResponse(OmBucketInfo omBucketInfo,
-  OMResponse omResponse) {
+  public OMBucketCreateResponse(@Nullable OmBucketInfo omBucketInfo,
+  @Nonnull OMResponse omResponse) {
 super(omResponse);
 this.omBucketInfo = omBucketInfo;
   }
@@ -56,6 +59,7 @@ public final class OMBucketCreateResponse extends 
OMClientResponse {
 }
   }
 
+  @Nullable
   public OmBucketInfo getOmBucketInfo() {
 return omBucketInfo;
   }
diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketDeleteResponse.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketDeleteResponse.java
index 5dd6cdf..0079851 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketDeleteResponse.java
+++ 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketDeleteResponse.java
@@ -25,6 +25,9 @@ import org.apache.hadoop.ozone.om.response.OMClientResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
 import org.apache.hadoop.hdds.utils.db.BatchOperation;
 
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
 /**
  * Response for DeleteBucket request.
  */
@@ -35,7 +38,7 @@ public final class OMBucketDeleteResponse extends 
OMClientResponse {
 
   public OMBucketDeleteResponse(
   String volumeName, String bucketName,
-  OzoneManagerProtocolProtos.OMResponse omResponse) {
+  @Nonnull OzoneManagerProtocolProtos.OMResponse omResponse) {
 super(omResponse);
 this.volumeName = volumeName;
 this.bucketName = bucketName;
diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketSetPropertyResponse.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketSetPropertyResponse.java
index d5e88c6..f9ce204 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket

[hadoop] branch trunk updated: HDDS-2171. Dangling links in test report due to incompatible realpath

2019-09-25 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 8baebb5  HDDS-2171. Dangling links in test report due to incompatible 
realpath
8baebb5 is described below

commit 8baebb54e13b518ead45e5afabdbf18c48e6efa8
Author: Doroszlai, Attila 
AuthorDate: Wed Sep 25 12:02:42 2019 +0200

HDDS-2171. Dangling links in test report due to incompatible realpath

Closes #1515
---
 hadoop-ozone/dev-support/checks/_mvn_unit_report.sh | 16 +---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/_mvn_unit_report.sh 
b/hadoop-ozone/dev-support/checks/_mvn_unit_report.sh
index 9525a9f..df19330 100755
--- a/hadoop-ozone/dev-support/checks/_mvn_unit_report.sh
+++ b/hadoop-ozone/dev-support/checks/_mvn_unit_report.sh
@@ -16,6 +16,16 @@
 
 REPORT_DIR=${REPORT_DIR:-$PWD}
 
+_realpath() {
+  if realpath "$@" > /dev/null; then
+    realpath "$@"
+  else
+    local relative_to
+    relative_to=$(realpath "${1/--relative-to=/}") || return 1
+    realpath "$2" | sed -e "s@${relative_to}/@@"
+  fi
+}
+
 ## generate summary txt file
 find "." -name 'TEST*.xml' -print0 \
 | xargs -n1 -0 "grep" -l -E "> "$SUMMARY_FILE"
 done
 done





[hadoop] branch HDDS-2067 updated (e4d4fca -> 63c730a)

2019-09-25 Thread elek

elek pushed a change to branch HDDS-2067
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


omit e4d4fca  package-info fix + removing FWIO
omit 5b359af  remove RpcController
omit adee693  Update 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/OzoneProtocolMessageDispatcher.java
omit 7a44cdc  Update 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/OzoneProtocolMessageDispatcher.java
omit 7256c69  HDDS-2067. Create generic service facade with 
tracing/metrics/logging support
 add aa664d7  HADOOP-16138. hadoop fs mkdir / of nonexistent abfs container 
raises NPE (#1302). Contributed by Gabor Bota.
 add 2b5fc95  HADOOP-16591 Fix S3A ITest*MRjob failures.
 add c30e495  HDFS-14853. NPE in 
DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not 
present. Contributed by Ranith Sardar.
 add 3d78b12  YARN-9762. Add submission context label to audit logs. 
Contributed by Manoj Kumar
 add 3fd3d74  HDDS-2161. Create RepeatedKeyInfo structure to be saved in 
deletedTable
 add 6cbe5d3  HDDS-2160. Add acceptance test for ozonesecure-mr compose. 
Contributed by Xiaoyu Yao. (#1490)
 add 0a716bd  HDDS-2159. Fix Race condition in ProfileServlet#pid.
 add bfe1dac  HADOOP-16560. [YARN] use protobuf-maven-plugin to generate 
protobuf classes (#1496)
 add e8e7d7b  HADOOP-16561. [MAPREDUCE] use protobuf-maven-plugin to 
generate protobuf classes (#1500)
 add 8f1a135  HDDS-2081. Fix 
TestRatisPipelineProvider#testCreatePipelinesDnExclude. Contributed by 
Aravindan Vijayan. (#1506)
 add 51c64b3  HDFS-13660. DistCp job fails when new data is appended in the 
file while the DistCp copy job is running
 add 91f50b9  HDDS-2167. Hadoop31-mr acceptance test is failing due to the 
shading
 add 43203b4  HDFS-14868. RBF: Fix typo in TestRouterQuota. Contributed by 
Jinglun.
 add 816d3cb  HDFS-14837. Review of Block.java. Contributed by David 
Mollitor.
 add afa1006  HDFS-14843. Double Synchronization in 
BlockReportLeaseManager. Contributed by David Mollitor.
 add f16cf87  HDDS-2170. Add Object IDs and Update ID to Volume Object 
(#1510)
 add eb96a30  HDFS-14655. [SBN Read] Namenode crashes if one of The JN is 
down. Contributed by Ayush Saxena.
 add 66400c1  HDFS-14808. EC: Improper size values for corrupt ec block in 
LOG. Contributed by Ayush Saxena.
 add c2731d4  YARN-9730. Support forcing configured partitions to be 
exclusive based on app node label
 add 6917754  HDDS-2172.Ozone shell should remove description about REST 
protocol support. Contributed by Siddharth Wagle.
 add a346381  HDDS-2168. TestOzoneManagerDoubleBufferWithOMResponse 
sometimes fails with out of memory error (#1509)
 add 3f89084  HDFS-14845. Ignore AuthenticationFilterInitializer for 
HttpFSServerWebServer and honor hadoop.http.authentication configs.
 add bec0864  YARN-9808. Zero length files in container log output haven't 
got a header. Contributed by Adam Antal
 add c724577  YARN-6715. Fix documentation about NodeHealthScriptRunner. 
Contributed by Peter Bacsko
 add 5d472d1  HDDS-2067. Create generic service facade with 
tracing/metrics/logging support
 add e20836e  Update 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/OzoneProtocolMessageDispatcher.java
 add 89d5e22  Update 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/OzoneProtocolMessageDispatcher.java
 add c17eb3e  remove RpcController
 add ed6ac73  package-info fix + removing FWIO
 add 63c730a  fix checkstyle issue

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (e4d4fca)
\
 N -- N -- N   refs/heads/HDDS-2067 (63c730a)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/fs/shell/Mkdir.java |  11 +-
 .../apache/hadoop/util/NodeHealthScriptRunner.java |   1 +
 .../src/site/markdown/DeprecatedProperties.md  |   4 +
 .../hadoop/util/TestNodeHealthScriptRunner.java|   9 +
 .../java/org/apache/hadoop/ozone/OzoneConsts.java  |   2 +
 .../server/OzoneProtocolMessageDispatcher.java |   1 -
 .../apache/hadoop/hdds/server/ProfileServlet.java  |  10 +-
 .../org/apache/hadoop/hdfs/protocol/Block.java | 141 +++

[hadoop] branch trunk updated: HDDS-2167. Hadoop31-mr acceptance test is failing due to the shading

2019-09-24 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 91f50b9  HDDS-2167. Hadoop31-mr acceptance test is failing due to the 
shading
91f50b9 is described below

commit 91f50b98cad8f8fb6d459be6ee2e313230bfb83f
Author: Márton Elek 
AuthorDate: Tue Sep 24 17:52:36 2019 +0200

HDDS-2167. Hadoop31-mr acceptance test is failing due to the shading

Closes #1507
---
 hadoop-ozone/ozonefs/pom.xml | 4 
 1 file changed, 4 insertions(+)

diff --git a/hadoop-ozone/ozonefs/pom.xml b/hadoop-ozone/ozonefs/pom.xml
index 32e4a63..a945f40 100644
--- a/hadoop-ozone/ozonefs/pom.xml
+++ b/hadoop-ozone/ozonefs/pom.xml
@@ -132,6 +132,10 @@
       <artifactId>hadoop-ozone-common</artifactId>
     </dependency>
     <dependency>
+      <groupId>org.apache.httpcomponents</groupId>
+      <artifactId>httpclient</artifactId>
+    </dependency>
+    <dependency>
       <groupId>com.google.code.findbugs</groupId>
       <artifactId>findbugs</artifactId>
       <version>3.0.1</version>





[hadoop] branch HDDS-2067 updated (7256c69 -> e4d4fca)

2019-09-24 Thread elek

elek pushed a change to branch HDDS-2067
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 7256c69  HDDS-2067. Create generic service facade with 
tracing/metrics/logging support
 add 7a44cdc  Update 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/OzoneProtocolMessageDispatcher.java
 add adee693  Update 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/OzoneProtocolMessageDispatcher.java
 add 5b359af  remove RpcController
 add e4d4fca  package-info fix + removing FWIO

No new revisions were added by this update.

Summary of changes:
 .../hdds/function/FunctionWithIOException.java | 37 --
 .../apache/hadoop/hdds/function/package-info.java  |  5 +--
 .../server/OzoneProtocolMessageDispatcher.java |  5 ++-
 ...lockLocationProtocolServerSideTranslatorPB.java |  1 -
 ...OzoneManagerProtocolServerSideTranslatorPB.java |  2 +-
 5 files changed, 4 insertions(+), 46 deletions(-)
 delete mode 100644 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/function/FunctionWithIOException.java





[hadoop] branch HDDS-2067 created (now 7256c69)

2019-09-23 Thread elek

elek pushed a change to branch HDDS-2067
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at 7256c69  HDDS-2067. Create generic service facade with 
tracing/metrics/logging support

No new revisions were added by this update.





[hadoop] branch trunk updated (28913f7 -> c9900a0)

2019-09-19 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 28913f7  HDDS-2148. Remove redundant code in CreateBucketHandler.java
 add c9900a0  HDDS-2141. Missing total number of operations

No new revisions were added by this update.

Summary of changes:
 .../src/main/resources/webapps/ozoneManager/om-metrics.html   | 2 +-
 .../src/main/resources/webapps/ozoneManager/ozoneManager.js   | 4 +++-
 2 files changed, 4 insertions(+), 2 deletions(-)





[hadoop] branch trunk updated: HDDS-2148. Remove redundant code in CreateBucketHandler.java

2019-09-19 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 28913f7  HDDS-2148. Remove redundant code in CreateBucketHandler.java
28913f7 is described below

commit 28913f733e53c75e97397953a71f06191308c9b8
Author: dchitlangia 
AuthorDate: Thu Sep 19 12:26:53 2019 +0200

HDDS-2148. Remove redundant code in CreateBucketHandler.java

Closes #1471
---
 .../org/apache/hadoop/ozone/web/ozShell/bucket/CreateBucketHandler.java  | 1 -
 1 file changed, 1 deletion(-)

diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/CreateBucketHandler.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/CreateBucketHandler.java
index 237a7b2..b4951e8 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/CreateBucketHandler.java
+++ 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/CreateBucketHandler.java
@@ -88,7 +88,6 @@ public class CreateBucketHandler extends Handler {
   System.out.printf("Volume Name : %s%n", volumeName);
   System.out.printf("Bucket Name : %s%n", bucketName);
   if (bekName != null) {
-bb.setBucketEncryptionKey(bekName);
 System.out.printf("Bucket Encryption enabled with Key Name: %s%n",
 bekName);
   }





[hadoop] branch trunk updated: HDDS-2119. Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-19 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e78848f  HDDS-2119. Use checkstyle.xml and suppressions.xml in 
hdds/ozone projects for checkstyle validation
e78848f is described below

commit e78848fc3cb113733ea640f0aa3abbb271b16005
Author: Nanda kumar 
AuthorDate: Thu Sep 19 11:14:59 2019 +0200

HDDS-2119. Use checkstyle.xml and suppressions.xml in hdds/ozone projects 
for checkstyle validation

Closes #1435
---
 .../checkstyle/checkstyle-noframes-sorted.xsl  | 189 
 hadoop-hdds/dev-support/checkstyle/checkstyle.xml  | 196 +
 .../dev-support/checkstyle/suppressions.xml|  21 +++
 hadoop-hdds/pom.xml|   1 -
 pom.ozone.xml  |  26 ++-
 5 files changed, 429 insertions(+), 4 deletions(-)

diff --git a/hadoop-hdds/dev-support/checkstyle/checkstyle-noframes-sorted.xsl 
b/hadoop-hdds/dev-support/checkstyle/checkstyle-noframes-sorted.xsl
new file mode 100644
index 000..7f2aedf
--- /dev/null
+++ b/hadoop-hdds/dev-support/checkstyle/checkstyle-noframes-sorted.xsl
@@ -0,0 +1,189 @@
+[189-line XSL stylesheet; its markup was stripped by the archive. It rendered a
+sorted, no-frames "CheckStyle Audit" HTML report: embedded CSS (banner cell,
+alternating .a/.b row styles, table and heading styles), a Files table
+(Name/Errors), per-file error tables (Error Description/Line) with "Back to
+top" links, and a Summary table (Files/Errors).]
diff --git a/hadoop-hdds/dev-support/checkstyle/checkstyle.xml 
b/hadoop-hdds/dev-support/checkstyle/checkstyle.xml
new file mode 100644
index 000..1c43741
--- /dev/null
+++ b/hadoop-hdds/dev-support/checkstyle/checkstyle.xml
@@ -0,0 +1,196 @@
+[196-line Checkstyle module configuration; the XML elements were stripped by
+the archive. Only the DOCTYPE reference to
+https://checkstyle.org/dtds/configuration_1_2.dtd survives.]
diff --git a/hadoop-hdds/dev-support/checkstyle/suppressions.xml 
b/hadoop-hdds/dev-support/checkstyle/suppressions.xml
new file mode 100644
index 000..7bc9479
--- /dev/null
+++ b/hadoop-hdds/dev-support/checkstyle/supp

[hadoop] branch trunk updated: HDDS-2016. Add option to enforce GDPR in Bucket Create command

2019-09-19 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 5c963a7  HDDS-2016. Add option to enforce GDPR in Bucket Create command
5c963a7 is described below

commit 5c963a75d648cb36e7e36884f61616831229b25a
Author: dchitlangia 
AuthorDate: Thu Sep 19 10:58:01 2019 +0200

HDDS-2016. Add option to enforce GDPR in Bucket Create command

Closes #1458
---
 hadoop-hdds/docs/content/gdpr/GDPR in Ozone.md | 42 ++
 hadoop-hdds/docs/content/gdpr/_index.md| 38 
 hadoop-hdds/docs/content/shell/BucketCommands.md   |  2 ++
 .../hadoop/ozone/om/helpers/OmBucketArgs.java  |  2 ++
 .../hadoop/ozone/om/helpers/OmBucketInfo.java  |  2 ++
 .../web/ozShell/bucket/CreateBucketHandler.java| 14 
 .../ozone/web/ozShell/keys/InfoKeyHandler.java |  6 
 7 files changed, 106 insertions(+)

diff --git a/hadoop-hdds/docs/content/gdpr/GDPR in Ozone.md 
b/hadoop-hdds/docs/content/gdpr/GDPR in Ozone.md
new file mode 100644
index 000..dd23e04
--- /dev/null
+++ b/hadoop-hdds/docs/content/gdpr/GDPR in Ozone.md
@@ -0,0 +1,42 @@
+---
+title: "GDPR in Ozone"
+date: "2019-September-17"
+weight: 5
+summary: GDPR in Ozone
+icon: user
+---
+
+
+
+Enabling GDPR compliance in Ozone is straightforward. During bucket
+creation, specify `--enforcegdpr=true` or `-g=true` to make the bucket
+GDPR compliant. Any key created under this bucket will then automatically
+be GDPR compliant.
+
+GDPR can only be enabled on a new bucket. For existing buckets, you would
+have to create a new GDPR compliant bucket and copy the data from the old
+bucket into the new one to take advantage of GDPR.
+
+Example to create a GDPR compliant bucket:
+
+`ozone sh bucket create --enforcegdpr=true /hive/jan`
+
+`ozone sh bucket create -g=true /hive/jan`
+
+If you want to create an ordinary bucket then you can skip `--enforcegdpr`
+and `-g` flags.
\ No newline at end of file
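Under the hood the flag is just bucket metadata: this commit also adds an `OzoneConsts.GDPR_FLAG` entry to the `OmBucketArgs` audit map. A minimal standalone sketch of that bookkeeping; the key name `gdprEnabled` is an assumption for illustration, not taken from this diff:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: mirrors how GDPR enforcement could be recorded as bucket
// metadata. The key name "gdprEnabled" is assumed; the real constant lives
// in org.apache.hadoop.ozone.OzoneConsts.GDPR_FLAG.
public class GdprFlagSketch {

  static final String GDPR_FLAG = "gdprEnabled";

  // Build the metadata map a GDPR-enforcing bucket create would carry.
  static Map<String, String> bucketMetadata(boolean enforceGdpr) {
    Map<String, String> metadata = new LinkedHashMap<>();
    if (enforceGdpr) {
      metadata.put(GDPR_FLAG, "true");
    }
    return metadata;
  }

  public static void main(String[] args) {
    System.out.println(bucketMetadata(true));  // {gdprEnabled=true}
    System.out.println(bucketMetadata(false)); // {}
  }
}
```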
diff --git a/hadoop-hdds/docs/content/gdpr/_index.md 
b/hadoop-hdds/docs/content/gdpr/_index.md
new file mode 100644
index 000..9888369
--- /dev/null
+++ b/hadoop-hdds/docs/content/gdpr/_index.md
@@ -0,0 +1,38 @@
+---
+title: GDPR
+name: GDPR
+identifier: gdpr
+menu: main
+weight: 5
+---
+
+
+{{}}
  The General Data Protection Regulation (GDPR) is a law that governs
how personal data should be handled. It is a European Union law, but due to
the global nature of software it often applies in other geographies as well.
+  Ozone supports GDPR's Right to Erasure(Right to be Forgotten).
+{{}}
+
+
+If you would like to understand Ozone's GDPR framework at a greater
+depth, please take a look at <a href="https://issues.apache.org/jira/secure/attachment/12978992/Ozone%20GDPR%20Framework.pdf">Ozone
+GDPR Framework</a>.
+
+
+Once you create a GDPR compliant bucket, any key created in that bucket will
+automatically be GDPR compliant.
+
+
diff --git a/hadoop-hdds/docs/content/shell/BucketCommands.md 
b/hadoop-hdds/docs/content/shell/BucketCommands.md
index f59f1ad..e817349 100644
--- a/hadoop-hdds/docs/content/shell/BucketCommands.md
+++ b/hadoop-hdds/docs/content/shell/BucketCommands.md
@@ -35,8 +35,10 @@ The `bucket create` command allows users to create a bucket.
 
 | Arguments  |  Comment|
 ||-|
+| -g, \-\-enforcegdpr| Optional, if set to true it creates a GDPR compliant bucket, if not specified or set to false, it creates an ordinary bucket.
 |  Uri   | The name of the bucket in **/volume/bucket** format.
 
+
 {{< highlight bash >}}
 ozone sh bucket create /hive/jan
 {{< /highlight >}}
diff --git 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
index 8a938a9..aa6e8f5 100644
--- 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
+++ 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
@@ -112,6 +112,8 @@ public final class OmBucketArgs extends WithMetadata 
implements Auditable {
 Map auditMap = new LinkedHashMap<>();
 auditMap.put(OzoneConsts.VOLUME, this.volumeName);
 auditMap.put(OzoneConsts.BUCKET, this.bucketName);
+auditMap.put(OzoneConsts.GDPR_FLAG,
+this.metadata.get(OzoneConsts.GDPR_FLAG));
 auditMap.put(OzoneConsts.IS_VERSION_ENABLED,
 String.valueOf(this.isVersionEnabled));
 if(this.storageType != null){
diff --git 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java
 
b/hadoop-ozone

[hadoop] branch trunk updated (ef478fe -> 1029060)

2019-09-19 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from ef478fe  HDDS-730. ozone fs cli prints hadoop fs in usage
 add 1029060  HDDS-2147. Include dumpstream in test report

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/dev-support/checks/_mvn_unit_report.sh | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)





[hadoop] branch trunk updated: HDDS-730. ozone fs cli prints hadoop fs in usage

2019-09-19 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ef478fe  HDDS-730. ozone fs cli prints hadoop fs in usage
ef478fe is described below

commit ef478fe73e72692b660de818d8c8faa9a155a10b
Author: cxorm 
AuthorDate: Thu Sep 19 09:16:12 2019 +0200

HDDS-730. ozone fs cli prints hadoop fs in usage

Closes #1464
---
 hadoop-ozone/common/src/main/bin/ozone |   2 +-
 .../org/apache/hadoop/fs/ozone/OzoneFsShell.java   | 100 +
 2 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/hadoop-ozone/common/src/main/bin/ozone 
b/hadoop-ozone/common/src/main/bin/ozone
index e257519..cd8f202 100755
--- a/hadoop-ozone/common/src/main/bin/ozone
+++ b/hadoop-ozone/common/src/main/bin/ozone
@@ -178,7 +178,7 @@ function ozonecmd_case
   OZONE_RUN_ARTIFACT_NAME="hadoop-ozone-recon"
 ;;
 fs)
-  HADOOP_CLASSNAME=org.apache.hadoop.fs.FsShell
+  HADOOP_CLASSNAME=org.apache.hadoop.fs.ozone.OzoneFsShell
   OZONE_RUN_ARTIFACT_NAME="hadoop-ozone-tools"
 ;;
 scmcli)
diff --git 
a/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFsShell.java
 
b/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFsShell.java
new file mode 100644
index 000..873c843
--- /dev/null
+++ 
b/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFsShell.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import java.io.IOException;
+import java.io.PrintStream;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.LinkedList;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FsShell;
+import org.apache.hadoop.fs.shell.Command;
+import org.apache.hadoop.fs.shell.CommandFactory;
+import org.apache.hadoop.fs.shell.FsCommand;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.tools.TableListing;
+import org.apache.hadoop.tracing.TraceUtils;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.htrace.core.TraceScope;
+import org.apache.htrace.core.Tracer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/** Provide command line access to an Ozone FileSystem. */
+@InterfaceAudience.Private
+public class OzoneFsShell extends FsShell {
+
+  private final String ozoneUsagePrefix = "Usage: ozone fs [generic options]";
+
+  /**
+   * Default ctor with no configuration.  Be sure to invoke
+   * {@link #setConf(Configuration)} with a valid configuration prior
+   * to running commands.
+   */
+  public OzoneFsShell() { this(null); }
+
+  /**
+   * Construct an OzoneFsShell with the given configuration.  Commands can be
+   * executed via {@link #run(String[])}
+   * @param conf the hadoop configuration
+   */
+  public OzoneFsShell(Configuration conf) { super(conf); }
+
+  protected void registerCommands(CommandFactory factory) {
+// TODO: DFSAdmin subclasses FsShell so need to protect the command
+// registration.  This class should morph into a base class for
+// commands, and then this method can be abstract
+if (this.getClass().equals(OzoneFsShell.class)) {
+  factory.registerCommands(FsCommand.class);
+}
+  }
+
+  @Override
+  protected String getUsagePrefix() {
+return ozoneUsagePrefix;
+  }
+
+  /**
+   * main() has some simple utility methods
+   * @param argv the command and its arguments
+   * @throws Exception upon error
+   */
+  public static void main(String argv[]) throws Exception {
+OzoneFsShell shell = newShellInstance();
+Configuration conf = new Configuration();
+conf.setQuietMode(false);
+shell.setConf(conf);
+int res;
+try {
+  res = ToolRunner.run(shell, argv);
+} finally {
+  shell.close();
+}
+System.exit(res);
+ 
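The `bin/ozone` change above swaps the launcher's entry point so the usage text says `ozone fs` instead of `hadoop fs`; the only behavioral override is `getUsagePrefix()`. A self-contained sketch of that override pattern (`BaseShell` is a stand-in for Hadoop's `FsShell`, not the real class):

```java
// Minimal sketch of the usage-prefix override used by OzoneFsShell.
// BaseShell stands in for org.apache.hadoop.fs.FsShell, which is not
// reproduced here; only the override mechanics are shown.
class BaseShell {
  protected String getUsagePrefix() {
    return "Usage: hadoop fs [generic options]";
  }

  String usage() {
    // The base class builds its help text from getUsagePrefix(), so a
    // subclass override changes every printed usage line.
    return getUsagePrefix();
  }
}

class OzoneShellSketch extends BaseShell {
  @Override
  protected String getUsagePrefix() {
    return "Usage: ozone fs [generic options]";
  }
}

public class UsagePrefixDemo {
  public static void main(String[] args) {
    System.out.println(new BaseShell().usage());        // hadoop fs prefix
    System.out.println(new OzoneShellSketch().usage()); // ozone fs prefix
  }
}
```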

[hadoop] 01/02: Wrong commit message: Revert second "HDDS-2143. Rename classes under package org.apache.hadoop.utils"

2019-09-18 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit c28e731156b14a834cf70783527b83784ecfaa2d
Author: Márton Elek 
AuthorDate: Wed Sep 18 17:09:27 2019 +0200

Wrong commit message: Revert second "HDDS-2143. Rename classes under 
package org.apache.hadoop.utils"

This reverts commit 111b08a330cd463f377375f12413cee3673d4d51.
---
 .../main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java | 13 -
 1 file changed, 13 deletions(-)

diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
index fc8c818..d399ca9 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
+++ 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
@@ -54,19 +54,6 @@ public final class OMNodeDetails {
 this.httpsAddress = httpsAddress;
   }
 
-  @Override
-  public String toString() {
-return "OMNodeDetails["
-+ "omServiceId=" + omServiceId +
-", omNodeId=" + omNodeId +
-", rpcAddress=" + rpcAddress +
-", rpcPort=" + rpcPort +
-", ratisPort=" + ratisPort +
-", httpAddress=" + httpAddress +
-", httpsAddress=" + httpsAddress +
-"]";
-  }
-
   /**
* Builder class for OMNodeDetails.
*/





[hadoop] 02/02: HDDS-2065. Implement OMNodeDetails#toString

2019-09-18 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 8d9e9ec3e578786372a86c19cbf0fa28b8a99a6f
Author: Siyao Meng 
AuthorDate: Wed Sep 18 17:10:26 2019 +0200

HDDS-2065. Implement OMNodeDetails#toString

Closes #1387
---
 .../main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java | 13 +
 1 file changed, 13 insertions(+)

diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
index d399ca9..fc8c818 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
+++ 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
@@ -54,6 +54,19 @@ public final class OMNodeDetails {
 this.httpsAddress = httpsAddress;
   }
 
+  @Override
+  public String toString() {
+return "OMNodeDetails["
++ "omServiceId=" + omServiceId +
+", omNodeId=" + omNodeId +
+", rpcAddress=" + rpcAddress +
+", rpcPort=" + rpcPort +
+", ratisPort=" + ratisPort +
+", httpAddress=" + httpAddress +
+", httpsAddress=" + httpsAddress +
+"]";
+  }
+
   /**
* Builder class for OMNodeDetails.
*/
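The restored `toString` produces a single bracketed `key=value` line; a standalone sketch that reproduces only the string building, with invented sample values (the real class populates these fields through its Builder):

```java
// Sketch of the OMNodeDetails#toString format. Field names come from the
// diff; all sample values below are invented for illustration.
public class OMNodeDetailsToStringDemo {

  static String format(String omServiceId, String omNodeId, String rpcAddress,
      int rpcPort, int ratisPort, String httpAddress, String httpsAddress) {
    return "OMNodeDetails["
        + "omServiceId=" + omServiceId
        + ", omNodeId=" + omNodeId
        + ", rpcAddress=" + rpcAddress
        + ", rpcPort=" + rpcPort
        + ", ratisPort=" + ratisPort
        + ", httpAddress=" + httpAddress
        + ", httpsAddress=" + httpsAddress
        + "]";
  }

  public static void main(String[] args) {
    System.out.println(format("om-service-1", "om1", "om1.example.com:9862",
        9862, 9872, "om1.example.com:9874", "om1.example.com:9875"));
  }
}
```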





[hadoop] branch trunk updated (111b08a -> 8d9e9ec)

2019-09-18 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 111b08a  HDDS-2143. Rename classes under package 
org.apache.hadoop.utils
 new c28e731  Wrong commit message: Revert second "HDDS-2143. Rename 
classes under package org.apache.hadoop.utils"
 new 8d9e9ec  HDDS-2065. Implement OMNodeDetails#toString

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:





[hadoop] branch trunk updated (6d4b20c -> 111b08a)

2019-09-18 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 6d4b20c  HDDS-2143. Rename classes under package 
org.apache.hadoop.utils
 add 111b08a  HDDS-2143. Rename classes under package 
org.apache.hadoop.utils

No new revisions were added by this update.

Summary of changes:
 .../main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java | 13 +
 1 file changed, 13 insertions(+)





[hadoop] branch trunk updated (285ed0a -> 6d4b20c)

2019-09-18 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 285ed0a  HDDS-2137. HddsClientUtils and OzoneUtils have duplicate 
verifyResourceName()
 add 6d4b20c  HDDS-2143. Rename classes under package 
org.apache.hadoop.utils

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/hdds/HddsConfigKeys.java  |  2 +-
 .../apache/hadoop/hdds/cli/HddsVersionProvider.java  |  2 +-
 .../hadoop/{ => hdds}/utils/BackgroundService.java   |  2 +-
 .../hadoop/{ => hdds}/utils/BackgroundTask.java  |  2 +-
 .../hadoop/{ => hdds}/utils/BackgroundTaskQueue.java |  2 +-
 .../{ => hdds}/utils/BackgroundTaskResult.java   |  2 +-
 .../hadoop/{ => hdds}/utils/BatchOperation.java  |  2 +-
 .../hadoop/{ => hdds}/utils/EntryConsumer.java   |  2 +-
 .../hadoop/{ => hdds}/utils/HddsVersionInfo.java |  2 +-
 .../apache/hadoop/{ => hdds}/utils/LevelDBStore.java |  4 ++--
 .../{ => hdds}/utils/LevelDBStoreIterator.java   | 11 ---
 .../hadoop/{ => hdds}/utils/MetaStoreIterator.java   |  2 +-
 .../hadoop/{ => hdds}/utils/MetadataKeyFilters.java  |  2 +-
 .../hadoop/{ => hdds}/utils/MetadataStore.java   |  4 ++--
 .../{ => hdds}/utils/MetadataStoreBuilder.java   |  2 +-
 .../hadoop/{ => hdds}/utils/RetriableTask.java   |  2 +-
 .../apache/hadoop/{ => hdds}/utils/RocksDBStore.java |  2 +-
 .../{ => hdds}/utils/RocksDBStoreIterator.java   | 10 --
 .../hadoop/{ => hdds}/utils/RocksDBStoreMBean.java   |  2 +-
 .../apache/hadoop/{ => hdds}/utils/Scheduler.java|  2 +-
 .../org/apache/hadoop/{ => hdds}/utils/UniqueId.java |  2 +-
 .../apache/hadoop/{ => hdds}/utils/VersionInfo.java  |  2 +-
 .../hadoop/{ => hdds}/utils/db/BatchOperation.java   |  2 +-
 .../{ => hdds}/utils/db/ByteArrayKeyValue.java   |  4 ++--
 .../org/apache/hadoop/{ => hdds}/utils/db/Codec.java |  2 +-
 .../hadoop/{ => hdds}/utils/db/CodecRegistry.java|  2 +-
 .../hadoop/{ => hdds}/utils/db/DBCheckpoint.java |  2 +-
 .../hadoop/{ => hdds}/utils/db/DBConfigFromFile.java |  2 +-
 .../apache/hadoop/{ => hdds}/utils/db/DBProfile.java |  2 +-
 .../apache/hadoop/{ => hdds}/utils/db/DBStore.java   |  4 ++--
 .../hadoop/{ => hdds}/utils/db/DBStoreBuilder.java   |  2 +-
 .../hadoop/{ => hdds}/utils/db/DBUpdatesWrapper.java |  2 +-
 .../hadoop/{ => hdds}/utils/db/IntegerCodec.java |  2 +-
 .../apache/hadoop/{ => hdds}/utils/db/LongCodec.java |  2 +-
 .../{ => hdds}/utils/db/RDBBatchOperation.java   |  2 +-
 .../{ => hdds}/utils/db/RDBCheckpointManager.java|  2 +-
 .../apache/hadoop/{ => hdds}/utils/db/RDBStore.java  |  6 +++---
 .../hadoop/{ => hdds}/utils/db/RDBStoreIterator.java |  2 +-
 .../apache/hadoop/{ => hdds}/utils/db/RDBTable.java  |  2 +-
 .../{ => hdds}/utils/db/RocksDBCheckpoint.java   |  2 +-
 .../utils/db/SequenceNumberNotFoundException.java|  2 +-
 .../hadoop/{ => hdds}/utils/db/StringCodec.java  |  2 +-
 .../org/apache/hadoop/{ => hdds}/utils/db/Table.java |  6 +++---
 .../hadoop/{ => hdds}/utils/db/TableConfig.java  |  2 +-
 .../hadoop/{ => hdds}/utils/db/TableIterator.java|  2 +-
 .../hadoop/{ => hdds}/utils/db/TypedTable.java   | 20 ++--
 .../hadoop/{ => hdds}/utils/db/cache/CacheKey.java   |  2 +-
 .../{ => hdds}/utils/db/cache/CacheResult.java   |  2 +-
 .../hadoop/{ => hdds}/utils/db/cache/CacheValue.java |  2 +-
 .../hadoop/{ => hdds}/utils/db/cache/EpochEntry.java |  2 +-
 .../hadoop/{ => hdds}/utils/db/cache/TableCache.java | 11 +--
 .../{ => hdds}/utils/db/cache/TableCacheImpl.java|  2 +-
 .../{ => hdds}/utils/db/cache/package-info.java  |  2 +-
 .../hadoop/{ => hdds}/utils/db/package-info.java |  2 +-
 .../apache/hadoop/{ => hdds}/utils/package-info.java |  2 +-
 .../hadoop/{ => hdds}/utils/TestHddsIdFactory.java   |  2 +-
 .../hadoop/{ => hdds}/utils/TestMetadataStore.java   | 11 +--
 .../hadoop/{ => hdds}/utils/TestRetriableTask.java   |  2 +-
 .../{ => hdds}/utils/TestRocksDBStoreMBean.java  |  2 +-
 .../{ => hdds}/utils/db/TestDBConfigFromFile.java|  4 ++--
 .../{ => hdds}/utils/db/TestDBStoreBuilder.java  |  2 +-
 .../hadoop/{ => hdds}/utils/db/TestRDBStore.java |  2 +-
 .../{ => hdds}/utils/db/TestRDBTableStore.java   |  2 +-
 .../{ => hdds}/utils/db/TestTypedRDBTableStore.java  |  8 
 .../utils/db/cache/TestTableCacheImpl.java   |  2 +-
 .../{ => hdds}/utils/db/cache/package-info.java  |  2 +-
 .../hadoop/{ => hdds}/utils/db/package-info.java |  2 +-
 .../apache/hadoop/{ => hdds}/utils/package-info.java |  2 +-
 .../commandhandler/DeleteBlocksCommandHandler.java   |  2 +-
 .../ozone/container/co

[hadoop] branch trunk updated (087ed86 -> 285ed0a)

2019-09-18 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 087ed86  HDDS-2138. OM bucket operations do not add up
 add 285ed0a  HDDS-2137. HddsClientUtils and OzoneUtils have duplicate 
verifyResourceName()

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hdds/scm/client/HddsClientUtils.java| 40 +---
 .../apache/hadoop/ozone/web/utils/OzoneUtils.java  | 73 +-
 2 files changed, 17 insertions(+), 96 deletions(-)





[hadoop] branch trunk updated (419dd0f -> 087ed86)

2019-09-18 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 419dd0f  HDDS-2134. OM Metrics graphs include empty request type
 add 087ed86  HDDS-2138. OM bucket operations do not add up

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/ozone/om/OMMetrics.java | 28 +++---
 1 file changed, 14 insertions(+), 14 deletions(-)





[hadoop] branch trunk updated: HDDS-2134. OM Metrics graphs include empty request type

2019-09-18 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 419dd0f  HDDS-2134. OM Metrics graphs include empty request type
419dd0f is described below

commit 419dd0faf6b49a9a876d88caf26cf692caeaa620
Author: Doroszlai, Attila 
AuthorDate: Wed Sep 18 14:09:15 2019 +0200

HDDS-2134. OM Metrics graphs include empty request type

Closes #1451
---
 .../src/main/resources/webapps/ozoneManager/ozoneManager.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-ozone/ozone-manager/src/main/resources/webapps/ozoneManager/ozoneManager.js
 
b/hadoop-ozone/ozone-manager/src/main/resources/webapps/ozoneManager/ozoneManager.js
index ca03554..00d5614 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/resources/webapps/ozoneManager/ozoneManager.js
+++ 
b/hadoop-ozone/ozone-manager/src/main/resources/webapps/ozoneManager/ozoneManager.js
@@ -69,7 +69,7 @@
 var groupedMetrics = {others: [], nums: {}};
 var metrics = result.data.beans[0]
 for (var key in metrics) {
-var numericalStatistic = 
key.match(/Num([A-Z][a-z]+)(.+?)(Fails)?$/);
+var numericalStatistic = 
key.match(/Num([A-Z][a-z]+)([A-Z].+?)(Fails)?$/);
 if (numericalStatistic) {
 var type = numericalStatistic[1];
 var name = numericalStatistic[2];
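The one-character-class tightening (`(.+?)` to `([A-Z].+?)`) is what keeps aggregate counters out of the per-request graphs. A self-contained Java sketch of the difference; the two patterns are copied from the diff, while the sample metric keys are assumed typical OM counter names:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Shows why the stricter pattern fixes the empty-request-type graphs: the
// old lazy group could start mid-word, so an aggregate counter such as
// "NumBuckets" was split into type "Bucket" and request name "s".
public class MetricKeyRegexDemo {

  static final Pattern OLD = Pattern.compile("Num([A-Z][a-z]+)(.+?)(Fails)?$");
  static final Pattern NEW =
      Pattern.compile("Num([A-Z][a-z]+)([A-Z].+?)(Fails)?$");

  // Return "type/name" for a matching key, or "no match".
  static String describe(Pattern p, String key) {
    Matcher m = p.matcher(key);
    return m.matches() ? m.group(1) + "/" + m.group(2) : "no match";
  }

  public static void main(String[] args) {
    // Per-request counters match either way.
    System.out.println(describe(OLD, "NumBucketCreates")); // Bucket/Creates
    System.out.println(describe(NEW, "NumBucketCreates")); // Bucket/Creates
    // Aggregate counter: the old pattern mis-splits it, the new one skips it.
    System.out.println(describe(OLD, "NumBuckets"));       // Bucket/s
    System.out.println(describe(NEW, "NumBuckets"));       // no match
  }
}
```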





[hadoop] branch trunk updated (e97f0f1 -> 15fded2)

2019-09-18 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from e97f0f1  HADOOP-16565. Region must be provided when requesting session 
credentials or SdkClientException will be thrown (#1454). Contributed by Gabor 
Bota.
 add 15fded2  HDDS-2022. Add additional freon tests

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/tools/pom.xml |   9 +
 .../hadoop/ozone/freon/BaseFreonGenerator.java | 333 +
 .../hadoop/ozone/freon/ContentGenerator.java   |  62 
 .../java/org/apache/hadoop/ozone/freon/Freon.java  |  11 +-
 .../hadoop/ozone/freon/HadoopFsGenerator.java  |  99 ++
 .../hadoop/ozone/freon/HadoopFsValidator.java  | 100 +++
 .../hadoop/ozone/freon/OmBucketGenerator.java  |  85 ++
 .../apache/hadoop/ozone/freon/OmKeyGenerator.java  | 100 +++
 .../ozone/freon/OzoneClientKeyGenerator.java   | 114 +++
 .../ozone/freon/OzoneClientKeyValidator.java   | 100 +++
 .../org/apache/hadoop/ozone/freon/PathSchema.java  |  38 +++
 .../apache/hadoop/ozone/freon/S3KeyGenerator.java  | 110 +++
 .../apache/hadoop/ozone/freon/SameKeyReader.java   | 105 +++
 13 files changed, 1265 insertions(+), 1 deletion(-)
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/ContentGenerator.java
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/HadoopFsGenerator.java
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/HadoopFsValidator.java
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/OmBucketGenerator.java
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/OmKeyGenerator.java
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/OzoneClientKeyGenerator.java
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/OzoneClientKeyValidator.java
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/PathSchema.java
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/S3KeyGenerator.java
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/SameKeyReader.java





[hadoop] branch trunk updated (55ce454 -> f3de141)

2019-09-17 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 55ce454  HADOOP-16371: Option to disable GCM for SSL connections when 
running on Java 8.
 add f3de141  HDDS-2135. OM Metric mismatch (MultipartUpload failures)

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java  | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)





[hadoop] branch trunk updated (e54977f -> 3a549ce)

2019-09-17 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from e54977f  HDDS-2132. TestKeyValueContainer is failing (#1457).
 add 3a549ce  HDDS-2120. Remove hadoop classes from ozonefs-current jar

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/ozonefs-lib-current/pom.xml | 10 ++
 1 file changed, 10 insertions(+)





[hadoop] branch trunk updated: HDDS-2078. Get/Renew DelegationToken NPE after HDDS-1909

2019-09-16 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 56f042c  HDDS-2078. Get/Renew DelegationToken NPE after HDDS-1909
56f042c is described below

commit 56f042c48f15586eba6ca371e90253e57614ec8b
Author: Xiaoyu Yao 
AuthorDate: Mon Sep 16 16:58:10 2019 +0200

HDDS-2078. Get/Renew DelegationToken NPE after HDDS-1909

Closes #1444
---
 .../hadoop/ozone/om/request/security/OMGetDelegationTokenRequest.java | 4 ++--
 .../ozone/om/request/security/OMRenewDelegationTokenRequest.java  | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMGetDelegationTokenRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMGetDelegationTokenRequest.java
index 77d16d5..be88b43 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMGetDelegationTokenRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMGetDelegationTokenRequest.java
@@ -147,8 +147,8 @@ public class OMGetDelegationTokenRequest extends OMClientRequest {
 OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
 
 try {
-  OzoneTokenIdentifier ozoneTokenIdentifier =
-  ozoneTokenIdentifierToken.decodeIdentifier();
+  OzoneTokenIdentifier ozoneTokenIdentifier = OzoneTokenIdentifier.
+  readProtoBuf(ozoneTokenIdentifierToken.getIdentifier());
 
   // Update in memory map of token.
   long renewTime = ozoneManager.getDelegationTokenMgr()
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMRenewDelegationTokenRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMRenewDelegationTokenRequest.java
index 49cc724..11c0c82 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMRenewDelegationTokenRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMRenewDelegationTokenRequest.java
@@ -123,8 +123,8 @@ public class OMRenewDelegationTokenRequest extends OMClientRequest {
 .setSuccess(true);
 try {
 
-  OzoneTokenIdentifier ozoneTokenIdentifier =
-  ozoneTokenIdentifierToken.decodeIdentifier();
+  OzoneTokenIdentifier ozoneTokenIdentifier = OzoneTokenIdentifier.
+  readProtoBuf(ozoneTokenIdentifierToken.getIdentifier());
 
   // Update in memory map of token.
   ozoneManager.getDelegationTokenMgr()





[hadoop] branch ozone-0.4.1 updated: HDDS-2124. Random next links

2019-09-16 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new dafb1c0  HDDS-2124. Random next links
dafb1c0 is described below

commit dafb1c05d18a959ea5e5e9f381ee3594ee0b43f9
Author: Doroszlai, Attila 
AuthorDate: Mon Sep 16 15:41:17 2019 +0200

HDDS-2124. Random next links

Closes #1443

(cherry picked from commit 363373e0ef96b556ed9c265376f61fa57f7ab64b)
---
 hadoop-hdds/docs/content/shell/BucketCommands.md | 2 +-
 hadoop-hdds/docs/content/start/FromSource.md | 1 +
 hadoop-hdds/docs/content/start/Kubernetes.md | 1 +
 hadoop-hdds/docs/content/start/Minikube.md   | 1 +
 hadoop-hdds/docs/content/start/OnPrem.md | 1 +
 hadoop-hdds/docs/content/start/RunningViaDocker.md   | 1 +
 hadoop-hdds/docs/content/start/StartFromDockerHub.md | 1 +
 7 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdds/docs/content/shell/BucketCommands.md b/hadoop-hdds/docs/content/shell/BucketCommands.md
index ee14dc3..f59f1ad 100644
--- a/hadoop-hdds/docs/content/shell/BucketCommands.md
+++ b/hadoop-hdds/docs/content/shell/BucketCommands.md
@@ -1,7 +1,7 @@
 ---
 title: Bucket Commands
 summary: Bucket commands help you to manage the life cycle of a volume.
-weight: 2
+weight: 3
 ---
 

[hadoop] branch trunk updated: HDDS-2124. Random next links

2019-09-16 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 363373e  HDDS-2124. Random next links
363373e is described below

commit 363373e0ef96b556ed9c265376f61fa57f7ab64b
Author: Doroszlai, Attila 
AuthorDate: Mon Sep 16 15:41:17 2019 +0200

HDDS-2124. Random next links

Closes #1443
---
 hadoop-hdds/docs/content/shell/BucketCommands.md | 2 +-
 hadoop-hdds/docs/content/start/FromSource.md | 1 +
 hadoop-hdds/docs/content/start/Kubernetes.md | 1 +
 hadoop-hdds/docs/content/start/Minikube.md   | 1 +
 hadoop-hdds/docs/content/start/OnPrem.md | 1 +
 hadoop-hdds/docs/content/start/RunningViaDocker.md   | 1 +
 hadoop-hdds/docs/content/start/StartFromDockerHub.md | 1 +
 7 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdds/docs/content/shell/BucketCommands.md b/hadoop-hdds/docs/content/shell/BucketCommands.md
index ee14dc3..f59f1ad 100644
--- a/hadoop-hdds/docs/content/shell/BucketCommands.md
+++ b/hadoop-hdds/docs/content/shell/BucketCommands.md
@@ -1,7 +1,7 @@
 ---
 title: Bucket Commands
 summary: Bucket commands help you to manage the life cycle of a volume.
-weight: 2
+weight: 3
 ---
 

[hadoop] branch trunk updated: HDDS-2109. Refactor scm.container.client config

2019-09-16 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e952ecf  HDDS-2109. Refactor scm.container.client config
e952ecf is described below

commit e952ecf807527bf2010e3bcb1cc7a8f3139f322e
Author: Doroszlai, Attila 
AuthorDate: Mon Sep 16 15:11:20 2019 +0200

HDDS-2109. Refactor scm.container.client config

Closes #1426
---
 .../hadoop/hdds/scm/XceiverClientManager.java  | 91 ++
 .../hadoop/hdds/scm/client/HddsClientUtils.java|  9 +--
 .../hadoop/hdds/conf/OzoneConfiguration.java   |  9 +++
 .../org/apache/hadoop/hdds/scm/ScmConfigKeys.java  | 15 
 .../common/src/main/resources/ozone-default.xml| 33 
 .../hadoop/ozone/scm/TestXceiverClientManager.java | 21 +++--
 .../fs/ozone/BasicOzoneClientAdapterImpl.java  |  7 +-
 .../hadoop/fs/ozone/BasicOzoneFileSystem.java  |  1 -
 8 files changed, 103 insertions(+), 83 deletions(-)

diff --git a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
index 57c567e..f906ab6 100644
--- a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
+++ b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
@@ -25,6 +25,10 @@ import com.google.common.cache.CacheBuilder;
 import com.google.common.cache.RemovalListener;
 import com.google.common.cache.RemovalNotification;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.conf.Config;
+import org.apache.hadoop.hdds.conf.ConfigGroup;
+import org.apache.hadoop.hdds.conf.ConfigType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.ozone.OzoneConfigKeys;
@@ -38,14 +42,9 @@ import java.io.IOException;
 import java.util.concurrent.Callable;
 import java.util.concurrent.TimeUnit;
 
-import static org.apache.hadoop.hdds.scm.ScmConfigKeys
-.SCM_CONTAINER_CLIENT_MAX_SIZE_DEFAULT;
-import static org.apache.hadoop.hdds.scm.ScmConfigKeys
-.SCM_CONTAINER_CLIENT_MAX_SIZE_KEY;
-import static org.apache.hadoop.hdds.scm.ScmConfigKeys
-.SCM_CONTAINER_CLIENT_STALE_THRESHOLD_DEFAULT;
-import static org.apache.hadoop.hdds.scm.ScmConfigKeys
-.SCM_CONTAINER_CLIENT_STALE_THRESHOLD_KEY;
+import static java.util.concurrent.TimeUnit.MILLISECONDS;
+import static org.apache.hadoop.hdds.conf.ConfigTag.OZONE;
+import static org.apache.hadoop.hdds.conf.ConfigTag.PERFORMANCE;
 
 /**
  * XceiverClientManager is responsible for the lifecycle of XceiverClient
@@ -76,20 +75,21 @@ public class XceiverClientManager implements Closeable {
* @param conf configuration
*/
   public XceiverClientManager(Configuration conf) {
+this(conf, OzoneConfiguration.of(conf).getObject(ScmClientConfig.class));
+  }
+
+  public XceiverClientManager(Configuration conf, ScmClientConfig clientConf) {
+Preconditions.checkNotNull(clientConf);
 Preconditions.checkNotNull(conf);
-int maxSize = conf.getInt(SCM_CONTAINER_CLIENT_MAX_SIZE_KEY,
-SCM_CONTAINER_CLIENT_MAX_SIZE_DEFAULT);
-long staleThresholdMs = conf.getTimeDuration(
-SCM_CONTAINER_CLIENT_STALE_THRESHOLD_KEY,
-SCM_CONTAINER_CLIENT_STALE_THRESHOLD_DEFAULT, TimeUnit.MILLISECONDS);
+long staleThresholdMs = clientConf.getStaleThreshold(MILLISECONDS);
 this.useRatis = conf.getBoolean(
 ScmConfigKeys.DFS_CONTAINER_RATIS_ENABLED_KEY,
 ScmConfigKeys.DFS_CONTAINER_RATIS_ENABLED_DEFAULT);
 this.conf = conf;
 this.isSecurityEnabled = OzoneSecurityUtil.isSecurityEnabled(conf);
 this.clientCache = CacheBuilder.newBuilder()
-.expireAfterAccess(staleThresholdMs, TimeUnit.MILLISECONDS)
-.maximumSize(maxSize)
+.expireAfterAccess(staleThresholdMs, MILLISECONDS)
+.maximumSize(clientConf.getMaxSize())
 .removalListener(
 new RemovalListener() {
 @Override
@@ -299,4 +299,65 @@ public class XceiverClientManager implements Closeable {
 
 return metrics;
   }
+
+  /**
+   * Configuration for HDDS client.
+   */
+  @ConfigGroup(prefix = "scm.container.client")
+  public static class ScmClientConfig {
+
+private int maxSize;
+private long staleThreshold;
+private int maxOutstandingRequests;
+
+public long getStaleThreshold(TimeUnit unit) {
+  return unit.convert(staleThreshold, MILLISECONDS);
+}
+
+@Config(key = "idle.threshold",
+type = ConfigType.TIME, timeUnit = MILLISECONDS,
+defaultValue = "10s",
+tags = { OZONE, PERFORMANCE },
+description =
+"In the standalone pipe

[hadoop] branch trunk updated: HDDS-2096. Ozone ACL document missing AddAcl API

2019-09-16 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b633438  HDDS-2096. Ozone ACL document missing AddAcl API
b633438 is described below

commit b6334381dfcb09970deff1e235cd9d5ff2b55268
Author: Xiaoyu Yao 
AuthorDate: Mon Sep 16 14:54:20 2019 +0200

HDDS-2096. Ozone ACL document missing AddAcl API

Closes #1427
---
 hadoop-hdds/docs/content/security/SecurityAcls.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdds/docs/content/security/SecurityAcls.md b/hadoop-hdds/docs/content/security/SecurityAcls.md
index b010233..b85dcca 100644
--- a/hadoop-hdds/docs/content/security/SecurityAcls.md
+++ b/hadoop-hdds/docs/content/security/SecurityAcls.md
@@ -78,5 +78,7 @@ supported are:
 of the ozone object and a list of ACLs.
 2. **GetAcl** – This API will take the name and type of the ozone object
 and will return a list of ACLs.
-3. **RemoveAcl** - This API will take the name, type of the
+3. **AddAcl** - This API will take the name, type of the ozone object, the
+ACL, and add it to existing ACL entries of the ozone object.
+4. **RemoveAcl** - This API will take the name, type of the
 ozone object and the ACL that has to be removed.





[hadoop] branch trunk updated (85b1c72 -> 1e13fe6)

2019-09-16 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 85b1c72  HDDS-2129. Using dist profile fails with pom.ozone.xml as parent pom (#1449)
 add 1e13fe6  HDDS-2044.Remove 'ozone' from the recon module names.

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/dist/dev-support/bin/dist-layout-stitching |   2 +-
 hadoop-ozone/pom.xml|   8 
 hadoop-ozone/{ozone-recon-codegen => recon-codegen}/pom.xml |   0
 .../org/hadoop/ozone/recon/codegen/JooqCodeGenerator.java   |   0
 .../ozone/recon/codegen/ReconSchemaGenerationModule.java|   0
 .../org/hadoop/ozone/recon/codegen/TableNamingStrategy.java |   0
 .../java/org/hadoop/ozone/recon/codegen/package-info.java   |   0
 .../ozone/recon/schema/ReconInternalSchemaDefinition.java   |   0
 .../hadoop/ozone/recon/schema/ReconSchemaDefinition.java|   0
 .../hadoop/ozone/recon/schema/StatsSchemaDefinition.java|   0
 .../ozone/recon/schema/UtilizationSchemaDefinition.java |   0
 .../java/org/hadoop/ozone/recon/schema/package-info.java|   0
 .../dev-support/findbugsExcludeFile.xml |   0
 hadoop-ozone/{ozone-recon => recon}/pom.xml |   0
 .../apache/hadoop/ozone/recon/ConfigurationProvider.java|   0
 .../java/org/apache/hadoop/ozone/recon/ReconConstants.java  |   0
 .../apache/hadoop/ozone/recon/ReconControllerModule.java|   0
 .../ozone/recon/ReconGuiceServletContextListener.java   |   0
 .../java/org/apache/hadoop/ozone/recon/ReconHttpServer.java |   0
 .../apache/hadoop/ozone/recon/ReconRestServletModule.java   |   0
 .../java/org/apache/hadoop/ozone/recon/ReconServer.java |   0
 .../apache/hadoop/ozone/recon/ReconServerConfigKeys.java|   0
 .../apache/hadoop/ozone/recon/ReconTaskBindingModule.java   |   0
 .../main/java/org/apache/hadoop/ozone/recon/ReconUtils.java |   0
 .../apache/hadoop/ozone/recon/api/ContainerKeyService.java  |   0
 .../apache/hadoop/ozone/recon/api/UtilizationService.java   |   0
 .../org/apache/hadoop/ozone/recon/api/package-info.java |   0
 .../hadoop/ozone/recon/api/types/ContainerKeyPrefix.java|   0
 .../hadoop/ozone/recon/api/types/ContainerMetadata.java |   0
 .../hadoop/ozone/recon/api/types/ContainersResponse.java|   0
 .../apache/hadoop/ozone/recon/api/types/IsoDateAdapter.java |   0
 .../apache/hadoop/ozone/recon/api/types/KeyMetadata.java|   0
 .../apache/hadoop/ozone/recon/api/types/KeysResponse.java   |   0
 .../apache/hadoop/ozone/recon/api/types/package-info.java   |   0
 .../java/org/apache/hadoop/ozone/recon/package-info.java|   0
 .../ozone/recon/persistence/DataSourceConfiguration.java|   0
 .../ozone/recon/persistence/DefaultDataSourceProvider.java  |   0
 .../ozone/recon/persistence/JooqPersistenceModule.java  |   0
 .../recon/persistence/TransactionalMethodInterceptor.java   |   0
 .../apache/hadoop/ozone/recon/persistence/package-info.java |   0
 .../hadoop/ozone/recon/recovery/ReconOMMetadataManager.java |   0
 .../ozone/recon/recovery/ReconOmMetadataManagerImpl.java|   0
 .../apache/hadoop/ozone/recon/recovery/package-info.java|   0
 .../hadoop/ozone/recon/spi/ContainerDBServiceProvider.java  |   0
 .../hadoop/ozone/recon/spi/HddsDatanodeServiceProvider.java |   0
 .../hadoop/ozone/recon/spi/OzoneManagerServiceProvider.java |   0
 .../ozone/recon/spi/StorageContainerServiceProvider.java|   0
 .../recon/spi/impl/ContainerDBServiceProviderImpl.java  |   0
 .../ozone/recon/spi/impl/ContainerKeyPrefixCodec.java   |   0
 .../recon/spi/impl/OzoneManagerServiceProviderImpl.java |   0
 .../ozone/recon/spi/impl/ReconContainerDBProvider.java  |   0
 .../apache/hadoop/ozone/recon/spi/impl/package-info.java|   0
 .../org/apache/hadoop/ozone/recon/spi/package-info.java |   0
 .../hadoop/ozone/recon/tasks/ContainerKeyMapperTask.java|   0
 .../apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java  |   0
 .../apache/hadoop/ozone/recon/tasks/OMDBUpdateEvent.java|   0
 .../apache/hadoop/ozone/recon/tasks/OMDBUpdatesHandler.java |   0
 .../apache/hadoop/ozone/recon/tasks/OMUpdateEventBatch.java |   0
 .../apache/hadoop/ozone/recon/tasks/ReconDBUpdateTask.java  |   0
 .../hadoop/ozone/recon/tasks/ReconTaskController.java   |   0
 .../hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java   |   0
 .../org/apache/hadoop/ozone/recon/tasks/package-info.java   |   0
 .../src/main/resources/webapps/recon/WEB-INF/web.xml|   0
 .../main/resources/webapps/recon/ozone-recon-web/.gitignore |   0
 .../main/resources/webapps/recon/ozone-recon-web/LICENSE|   0
 .../src/main/resources/webapps/recon/ozone-recon-web/NOTICE |   0
 .../main/resources/webapps/recon/ozone-recon-web/README.md  |   0
 .../webapps/recon/ozone-recon-web/config-overrides.js   |   0
 .../r

[hadoop] branch trunk updated (6a9f7ca -> 9a931b8)

2019-09-13 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 6a9f7ca  Revert "HDDS-2057. Incorrect Default OM Port in Ozone FS URI Error Message."
 add 9a931b8  HDDS-2125. maven-javadoc-plugin.version is missing in pom.ozone.xml

No new revisions were added by this update.

Summary of changes:
 pom.ozone.xml | 1 +
 1 file changed, 1 insertion(+)





[hadoop] branch trunk updated: HDDS-2106. Avoid usage of hadoop projects as parent of hdds/ozone

2019-09-11 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f537410  HDDS-2106. Avoid usage of hadoop projects as parent of hdds/ozone
f537410 is described below

commit f537410563e3966581442acc77f7d9f7fd95e3e5
Author: Márton Elek 
AuthorDate: Thu Sep 12 02:38:41 2019 +0200

HDDS-2106. Avoid usage of hadoop projects as parent of hdds/ozone

Closes #1423
---
 hadoop-hdds/pom.xml  |7 +-
 hadoop-ozone/pom.xml |7 +-
 pom.ozone.xml| 1818 +-
 3 files changed, 1820 insertions(+), 12 deletions(-)

diff --git a/hadoop-hdds/pom.xml b/hadoop-hdds/pom.xml
index e6b1f85..35f941e 100644
--- a/hadoop-hdds/pom.xml
+++ b/hadoop-hdds/pom.xml
@@ -19,9 +19,9 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd";>
   4.0.0
   
 org.apache.hadoop
-hadoop-project
-3.2.0
-
+hadoop-main-ozone
+0.5.0-SNAPSHOT
+../pom.ozone.xml
   
 
   hadoop-hdds
@@ -43,7 +43,6 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd";>
   
 
   
-3.2.0
 
 0.5.0-SNAPSHOT
 
diff --git a/hadoop-ozone/pom.xml b/hadoop-ozone/pom.xml
index bc1d321..cae0f55 100644
--- a/hadoop-ozone/pom.xml
+++ b/hadoop-ozone/pom.xml
@@ -15,9 +15,9 @@
   4.0.0
   
 org.apache.hadoop
-hadoop-project
-3.2.0
-
+hadoop-main-ozone
+0.5.0-SNAPSHOT
+../pom.ozone.xml
   
   hadoop-ozone
   0.5.0-SNAPSHOT
@@ -26,7 +26,6 @@
   pom
 
   
-3.2.0
 0.5.0-SNAPSHOT
 0.5.0-SNAPSHOT
 0.4.0-78e95b9-SNAPSHOT
diff --git a/pom.ozone.xml b/pom.ozone.xml
index b866c35..1a18d70 100644
--- a/pom.ozone.xml
+++ b/pom.ozone.xml
@@ -23,6 +23,11 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
   Apache Hadoop Ozone Main
   pom
 
+  
+hadoop-hdds
+hadoop-ozone
+  
+
   
 
   ${distMgmtStagingId}
@@ -81,15 +86,1820 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
 1.5
 bash
 
+
+2019
+
+false
+
true
+9.3.24.v20180605
+_
+_
+
+
+4
+
+
+
+1.0.9
+
+2.11.0
+
+0.8.2.1
+
+3.2.0
+1.0.13
+
+${project.build.directory}/test-dir
+${test.build.dir}
+
+
+
+
${basedir}/../../hadoop-common-project/hadoop-common/target
+file:///dev/urandom
+
+
+1.7.7
+
+
+1.19
+
+
+1.9.13
+2.9.5
+
+
+4.5.2
+4.4.4
+
+
+1.7.25
+
+
+1.1
+
+
+
+2.5.0
+${env.HADOOP_PROTOC_PATH}
+
+3.4.13
+2.12.0
+3.0.0
+3.1.0-RC1
+2.1.7
+
+11.0.2
+4.0
+2.9.9
+
+
+2.0.0-M21
+1.0.0-M33
+
+
+0.3.0-eca3531-SNAPSHOT
+1.0-alpha-1
+3.3.1
+2.4.12
+6.2.1.jre7
+2.7.5
+
+
+0.5.1
+3.5.0
+1.10.0
+1.5.0.Final
+
+
+1.8
+
+
+
+[${javac.version},)
+[3.3.0,)
+
+
+-Xmx2048m 
-XX:+HeapDumpOnOutOfMemoryError
+2.21.0
+
${maven-surefire-plugin.version}
+
${maven-surefire-plugin.version}
+
+2.5
+3.1
+2.5.1
+2.6
+3.2.0
+2.5
+3.1.0
+2.3
+1.2
+
1.5
+3.0.0-M1
+0.12
+2.8.1
+1.9
+3.0.2
+1.3.1
+1.0-beta-1
+1.0-alpha-8
+900
+1.11.375
+2.3.4
+1.5
+
+${hadoop.version}
+
+1.5.4
+1.16
+1.2.6
+2.0.0-beta-1
   
 
-  
-hadoop-hdds
-hadoop-ozone
-  
+
+  
+
+  
+com.squareup.okhttp
+okhttp
+${okhttp.version}
+  
+  
+com.squareup.okhttp3
+mockwebserver
+3.7.0
+test
+  
+  
+jdiff
+jdiff
+${jdiff.version}
+  
+  
+org.apache.hadoop
+hadoop-assemblies
+${hadoop.version}
+  
+  
+org.apache.hadoop
+hadoop-annotations
+${hadoop.version}
+  
+  
+org.apache.hadoop
+hadoop-client-modules
+${hadoop.version}
+pom
+  
+  
+org.apache.hadoop
+hadoop-client-api
+${hadoop.version}
+  
+  
+org.apache.hadoop
+hadoop-client-check-invariants
+${hadoop.version}
+pom
+  
+  
+org.apache.hadoop
+hadoop-client-check-test-invariants
+${hadoop.version}
+pom
+  
+  
+org.apache.hadoop
+hadoop-client-integration-tests
+${hadoop.version}
+  
+  
+org.apache.hadoop
+hadoop-client-runtime
+${hadoop.version}
+  
+  
+org.apache.hadoop
+hadoop-client-minicluster
+${hadoop.version}
+  
+  
+org.apache.hadoop
+hadoop-common
+${hadoop.version}
+  
+  
+org.apache.hadoop
+hadoop-common
+${had

[hadoop] branch trunk updated: HDDS-2045. Partially started compose cluster left running

2019-08-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c749f62  HDDS-2045. Partially started compose cluster left running
c749f62 is described below

commit c749f6247075274954f8302dd45feee984d9bd10
Author: Doroszlai, Attila 
AuthorDate: Thu Aug 29 09:46:50 2019 +0200

HDDS-2045. Partially started compose cluster left running

Closes #1358
---
 hadoop-ozone/dist/src/main/compose/testlib.sh | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/compose/testlib.sh b/hadoop-ozone/dist/src/main/compose/testlib.sh
index ffc6da2..9aa7c48 100755
--- a/hadoop-ozone/dist/src/main/compose/testlib.sh
+++ b/hadoop-ozone/dist/src/main/compose/testlib.sh
@@ -82,9 +82,14 @@ start_docker_env(){
   local -i datanode_count=${1:-3}
 
   docker-compose -f "$COMPOSE_FILE" down
-  docker-compose -f "$COMPOSE_FILE" up -d --scale datanode="${datanode_count}"
-  wait_for_datanodes "$COMPOSE_FILE" "${datanode_count}"
-  sleep 10
+  docker-compose -f "$COMPOSE_FILE" up -d --scale datanode="${datanode_count}" \
+&& wait_for_datanodes "$COMPOSE_FILE" "${datanode_count}" \
+&& sleep 10
+
+  if [[ $? -gt 0 ]]; then
+docker-compose -f "$COMPOSE_FILE" down
+return 1
+  fi
 }
 
 ## @description  Execute robot tests in a specific container.

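The testlib.sh change above makes start_docker_env tear the compose cluster down whenever any startup step fails, instead of leaving a partially started environment behind. A minimal sketch of the same pattern, where step_one, step_two, and cleanup are hypothetical stand-ins for the docker-compose, wait_for_datanodes, and teardown calls:

```shell
# Hypothetical stand-in for `docker-compose ... down`.
cleanup() { echo "tearing environment down"; }

step_one() { return 0; }   # e.g. `docker-compose ... up -d`
step_two() { return 1; }   # e.g. wait_for_datanodes failing partway through

start_env() {
  # Chain the steps with && so $? reflects the first failing step ...
  step_one && step_two
  # ... then tear down on failure rather than leaving things half-started.
  if [[ $? -gt 0 ]]; then
    cleanup
    return 1
  fi
}

if start_env; then status=started; else status=aborted; fi
echo "$status"
```

Because the chained list propagates the first non-zero status, the caller sees a clean failure ("aborted") and no stray containers are left running.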




[hadoop] branch trunk updated: Revert "HDDS-1596. Create service endpoint to download configuration from SCM."

2019-08-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 371c9eb  Revert "HDDS-1596. Create service endpoint to download configuration from SCM."
371c9eb is described below

commit 371c9eb6a69de8f45008ff6f4033a5fa78ccf2f6
Author: Márton Elek 
AuthorDate: Thu Aug 29 09:25:03 2019 +0200

Revert "HDDS-1596. Create service endpoint to download configuration from SCM."

This reverts commit c0499bd70455e67bef9a1e00da73e25c9e2cc0ff.
---
 .../hadoop/hdds/conf/OzoneConfiguration.java   | 30 +---
 .../hadoop/hdds/discovery/DiscoveryUtil.java   | 90 --
 .../apache/hadoop/hdds/discovery/package-info.java | 22 --
 .../apache/hadoop/ozone/HddsDatanodeService.java   |  8 +-
 .../apache/hadoop/hdds/server/BaseHttpServer.java  | 13 
 .../org/apache/hadoop/hdds/server/ServerUtils.java | 12 +--
 .../apache/hadoop/hdds/server/TestServerUtils.java | 17 
 hadoop-hdds/pom.xml| 21 -
 hadoop-hdds/server-scm/pom.xml | 18 -
 .../hdds/discovery/ConfigurationEndpoint.java  | 60 ---
 .../hadoop/hdds/discovery/ConfigurationXml.java| 44 ---
 .../hdds/discovery/ConfigurationXmlEntry.java  | 56 --
 .../hdds/discovery/DiscoveryApplication.java   | 35 -
 .../apache/hadoop/hdds/discovery/package-info.java | 22 --
 .../hdds/scm/server/SCMBlockProtocolServer.java|  9 +--
 .../server/StorageContainerManagerHttpServer.java  | 15 +---
 .../scm/server/StorageContainerManagerStarter.java |  5 --
 .../src/main/compose/ozone/docker-compose.yaml | 19 +
 .../dist/src/main/compose/ozone/docker-config  | 15 +++-
 .../hadoop/ozone/om/OzoneManagerStarter.java   |  5 --
 hadoop-ozone/ozonefs/pom.xml   |  6 --
 .../java/org/apache/hadoop/ozone/s3/Gateway.java   |  5 --
 22 files changed, 23 insertions(+), 504 deletions(-)

diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
index dfcf320..b32ad63 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
@@ -47,43 +47,19 @@ public class OzoneConfiguration extends Configuration {
   }
 
   public OzoneConfiguration() {
-this(false);
-  }
-
-  private OzoneConfiguration(boolean justTheDefaults) {
 OzoneConfiguration.activate();
 loadDefaults();
-if (!justTheDefaults) {
-  loadConfigFiles();
-}
-  }
-
-  private void loadConfigFiles() {
-addResource("ozone-global.xml");
-addResource("ozone-site.xml");
   }
 
   public OzoneConfiguration(Configuration conf) {
-this(conf, false);
-  }
-
-  private OzoneConfiguration(Configuration conf, boolean justTheDefaults) {
 super(conf);
 //load the configuration from the classloader of the original conf.
 setClassLoader(conf.getClassLoader());
 if (!(conf instanceof OzoneConfiguration)) {
   loadDefaults();
-  //here we load the REAL configuration.
-  if (!justTheDefaults) {
-loadConfigFiles();
-  }
 }
   }
 
-  public static OzoneConfiguration createWithDefaultsOnly() {
-return new OzoneConfiguration(true);
-  }
-
   private void loadDefaults() {
 try {
   //there could be multiple ozone-default-generated.xml files on the
@@ -98,6 +74,7 @@ public class OzoneConfiguration extends Configuration {
 } catch (IOException e) {
   e.printStackTrace();
 }
+addResource("ozone-site.xml");
   }
 
   public List readPropertyFromXml(URL url) throws JAXBException {
@@ -339,9 +316,4 @@ public class OzoneConfiguration extends Configuration {
 }
 return props;
   }
-
-  @Override
-  public synchronized Properties getProps() {
-return super.getProps();
-  }
 }
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/discovery/DiscoveryUtil.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/discovery/DiscoveryUtil.java
deleted file mode 100644
index 42adfc7..000
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/discovery/DiscoveryUtil.java
+++ /dev/null
@@ -1,90 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * 
- 

[hadoop] branch trunk updated: HDDS-2024. rat.sh: grep: warning: recursive search of stdin

2019-08-23 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 75bf090  HDDS-2024. rat.sh: grep: warning: recursive search of stdin
75bf090 is described below

commit 75bf090990d5237e2f76f83d00dce5259c39a294
Author: Doroszlai, Attila 
AuthorDate: Fri Aug 23 12:32:40 2019 +0200

HDDS-2024. rat.sh: grep: warning: recursive search of stdin

Closes #1343
---
 hadoop-ozone/dev-support/checks/rat.sh | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/rat.sh b/hadoop-ozone/dev-support/checks/rat.sh
index 68ca56e..480e4d3 100755
--- a/hadoop-ozone/dev-support/checks/rat.sh
+++ b/hadoop-ozone/dev-support/checks/rat.sh
@@ -17,7 +17,7 @@ DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 cd "$DIR/../../.." || exit 1
 
 mkdir -p target
-REPORT_FILE="$DIR/../../../target/rat-aggretaged.txt"
+REPORT_FILE="$DIR/../../../target/rat-aggregated.txt"
 mkdir -p "$(dirname "$REPORT_FILE")"
 
 cd hadoop-hdds || exit 1
@@ -26,8 +26,8 @@ cd ../hadoop-ozone || exit 1
 mvn -B -fn org.apache.rat:apache-rat-plugin:0.13:check
 
 cd "$DIR/../../.." || exit 1
-grep -r --include=rat.txt "!" | tee "$REPORT_FILE"
-if [ "$(cat target/rat-aggregated.txt)" ]; then
+grep -r --include=rat.txt "!" hadoop-hdds hadoop-ozone | tee "$REPORT_FILE"
+if [[ -s "${REPORT_FILE}" ]]; then
exit 1
 fi
 

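The rat.sh fix above addresses two problems: `grep -r` was invoked without path operands (so GNU grep warned about a recursive search of stdin), and the report was written to a misspelled file (rat-aggretaged.txt) while the check read rat-aggregated.txt, so violations could slip through. A hedged sketch of the corrected pattern, using an illustrative temp tree rather than the real build layout:

```shell
# Build a throwaway tree with one simulated rat violation line ("!").
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/module-a"
printf '! unapproved license header\n' > "$tmpdir/module-a/rat.txt"

report="$tmpdir/rat-aggregated.txt"
# Explicit search paths keep grep off stdin; write and test the SAME file.
grep -r --include=rat.txt "!" "$tmpdir/module-a" > "$report"

# `-s` is true only for an existing, non-empty file, so a clean run
# (empty report) passes and any recorded violation fails the check.
if [[ -s "$report" ]]; then
  result="violations"
else
  result="clean"
fi
echo "$result"
rm -rf "$tmpdir"
```

Gating on `[[ -s "$report" ]]` also avoids re-reading the report with `cat`, and works identically whether grep matched one file or many.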




[hadoop] branch trunk updated (e2a5548 -> d3fe993)

2019-08-23 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from e2a5548  HDDS-2023. Fix rat check failures in trunk
 add d3fe993  HDDS-2023. Fix rat check failures in trunk (addendum)

No new revisions were added by this update.

Summary of changes:
 hadoop-hdds/docs/pom.xml | 2 +-
 hadoop-hdds/pom.xml  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)





[hadoop] branch trunk updated: HDDS-2023. Fix rat check failures in trunk

2019-08-23 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e2a5548  HDDS-2023. Fix rat check failures in trunk
e2a5548 is described below

commit e2a55482ee59624a3c1d6cd16d0acb8104201071
Author: Vivek Ratnavel Subramanian 
AuthorDate: Fri Aug 23 12:02:45 2019 +0200

HDDS-2023. Fix rat check failures in trunk

Closes #1342
---
 .../org/apache/hadoop/utils/db/cache/CacheResult.java  | 18 ++
 hadoop-hdds/pom.xml|  2 +-
 .../apache/hadoop/ozone/util/BooleanBiFunction.java| 18 ++
 .../compose/ozones3-haproxy/haproxy-conf/haproxy.cfg   | 16 
 .../client/rpc/TestOzoneRpcClientForAclAuditLog.java   | 18 ++
 .../om/ratis/utils/OzoneManagerDoubleBufferHelper.java | 18 ++
 .../request/bucket/acl/OMBucketRemoveAclRequest.java   | 18 ++
 .../om/request/bucket/acl/OMBucketSetAclRequest.java   | 18 ++
 .../hadoop/ozone/om/request/key/OMKeyPurgeRequest.java | 18 ++
 .../s3/multipart/S3MultipartUploadCompleteRequest.java | 18 ++
 .../om/request/volume/acl/OMVolumeAclRequest.java  | 18 ++
 .../ozone/om/response/key/OMKeyPurgeResponse.java  | 18 ++
 .../multipart/S3MultipartUploadCompleteResponse.java   | 18 ++
 .../multipart/TestS3MultipartUploadAbortRequest.java   | 18 ++
 .../TestS3MultipartUploadCompleteRequest.java  | 18 ++
 .../response/s3/bucket/TestS3BucketDeleteResponse.java | 18 ++
 .../multipart/TestS3MultipartUploadAbortResponse.java  | 18 ++
 17 files changed, 287 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/CacheResult.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/CacheResult.java
index 76b8381..41a856c 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/CacheResult.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/CacheResult.java
@@ -1,3 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.hadoop.utils.db.cache;
 
 import java.util.Objects;
diff --git a/hadoop-hdds/pom.xml b/hadoop-hdds/pom.xml
index 7c01601..9a4ce48 100644
--- a/hadoop-hdds/pom.xml
+++ b/hadoop-hdds/pom.xml
@@ -294,7 +294,7 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd";>
 
src/main/resources/webapps/static/nvd3-1.8.5.min.css
 
src/main/resources/webapps/static/nvd3-1.8.5.min.js.map
 
src/main/resources/webapps/static/nvd3-1.8.5.min.js
-
src/main/resources/webapps/static/jquery-3.4.1.min.js
+
src/main/resources/webapps/static/js/jquery-3.4.1.min.js
 
src/main/resources/webapps/static/bootstrap-3.4.1/**
 src/test/resources/additionalfields.container
 src/test/resources/incorrect.checksum.container
diff --git 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/util/BooleanBiFunction.java
 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/util/BooleanBiFunction.java
index a70f4b0..82398b7 100644
--- 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/util/BooleanBiFunction.java
+++ 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/util/BooleanBiFunction.java
@@ -1,3 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distri

[hadoop] branch trunk updated: HDDS-1912. start-ozone.sh fail due to ozone-config.sh not found. Contributed by kevin su.

2019-08-12 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new dfe772d  HDDS-1912. start-ozone.sh fail due to ozone-config.sh not 
found. Contributed by kevin su.
dfe772d is described below

commit dfe772d234c55dc35833e80e88d9659f15890490
Author: Márton Elek 
AuthorDate: Mon Aug 12 13:02:47 2019 +0200

HDDS-1912. start-ozone.sh fail due to ozone-config.sh not found. 
Contributed by kevin su.
---
 hadoop-ozone/common/src/main/bin/start-ozone.sh | 5 +
 1 file changed, 5 insertions(+)

diff --git a/hadoop-ozone/common/src/main/bin/start-ozone.sh 
b/hadoop-ozone/common/src/main/bin/start-ozone.sh
index a05b9ae..9ddaab6 100755
--- a/hadoop-ozone/common/src/main/bin/start-ozone.sh
+++ b/hadoop-ozone/common/src/main/bin/start-ozone.sh
@@ -42,6 +42,11 @@ HADOOP_NEW_CONFIG=true
 if [[ -f "${HADOOP_LIBEXEC_DIR}/ozone-config.sh" ]]; then
   # shellcheck disable=SC1090
   . "${HADOOP_LIBEXEC_DIR}/ozone-config.sh"
+elif [[ -f "${bin}/../libexec/ozone-config.sh" ]]; then
+  HADOOP_HOME="${bin}/../"
+  HADOOP_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
+  HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
+  . "${HADOOP_LIBEXEC_DIR}/ozone-config.sh"
 else
   echo "ERROR: Cannot execute ${HADOOP_LIBEXEC_DIR}/ozone-config.sh." 2>&1
   exit 1





[hadoop] branch ozone-0.4.1 updated: HDDS-1926. The new caching layer is used for old OM requests but not updated

2019-08-08 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new eb828dc  HDDS-1926. The new caching layer is used for old OM requests 
but not updated
eb828dc is described below

commit eb828dc1e40bf526b9b7d3e0a40b228b68bc76c8
Author: Bharat Viswanadham 
AuthorDate: Thu Aug 8 15:52:04 2019 +0200

HDDS-1926. The new caching layer is used for old OM requests but not updated

Closes #1247
---
 .../hadoop/ozone/om/TestOzoneManagerRestart.java   | 214 +
 .../hadoop/ozone/om/OmMetadataManagerImpl.java |  18 +-
 2 files changed, 230 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRestart.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRestart.java
new file mode 100644
index 0000000..76841dd
--- /dev/null
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRestart.java
@@ -0,0 +1,214 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneKey;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.web.handlers.UserArgs;
+import org.apache.hadoop.ozone.web.utils.OzoneUtils;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.UUID;
+
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests client operations after the cluster starts, then restarts the
+ * cluster and verifies that subsequent client operations behave as expected.
+ */
+public class TestOzoneManagerRestart {
+  private MiniOzoneCluster cluster = null;
+  private UserArgs userArgs;
+  private OzoneConfiguration conf;
+  private String clusterId;
+  private String scmId;
+  private String omId;
+
+  @Rule
+  public Timeout timeout = new Timeout(6);
+
+  /**
+   * Create a MiniDFSCluster for testing.
+   * 
+   * Ozone is made active by setting OZONE_ENABLED = true
+   *
+   * @throws IOException
+   */
+  @Before
+  public void init() throws Exception {
+conf = new OzoneConfiguration();
+clusterId = UUID.randomUUID().toString();
+scmId = UUID.randomUUID().toString();
+omId = UUID.randomUUID().toString();
+conf.setBoolean(OZONE_ACL_ENABLED, true);
+conf.setInt(OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS, 2);
+conf.set(OZONE_ADMINISTRATORS, OZONE_ADMINISTRATORS_WILDCARD);
+cluster =  MiniOzoneCluster.newBuilder(conf)
+.setClusterId(clusterId)
+.setScmId(scmId)
+.setOmId(omId)
+.build();
+cluster.waitForClusterToBeReady();
+userArgs = new UserArgs(null, OzoneUtils.getRequestID(),
+null, null, null, null);
+  }
+
+  /**
+   * Shutdown MiniDFSCluster.
+   */
+  @After
+  public void shutdown() {
+if (cluster != null) {
+  cluster.shutdown();
+}
+  }
+
+  @Test
+  public void testRestartOMWithVolumeOperation() throws Exception {
+String 

[hadoop] branch trunk updated: HDDS-1926. The new caching layer is used for old OM requests but not updated

2019-08-08 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 63161cf  HDDS-1926. The new caching layer is used for old OM requests 
but not updated
63161cf is described below

commit 63161cf590d43fe7f6c905946b029d893b774d77
Author: Bharat Viswanadham 
AuthorDate: Thu Aug 8 15:52:04 2019 +0200

HDDS-1926. The new caching layer is used for old OM requests but not updated

Closes #1247
---
 .../hadoop/ozone/om/TestOzoneManagerRestart.java   | 214 +
 .../hadoop/ozone/om/OmMetadataManagerImpl.java |  18 +-
 2 files changed, 230 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRestart.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRestart.java
new file mode 100644
index 0000000..76841dd
--- /dev/null
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRestart.java
@@ -0,0 +1,214 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneKey;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.web.handlers.UserArgs;
+import org.apache.hadoop.ozone.web.utils.OzoneUtils;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.UUID;
+
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests client operations after the cluster starts, then restarts the
+ * cluster and verifies that subsequent client operations behave as expected.
+ */
+public class TestOzoneManagerRestart {
+  private MiniOzoneCluster cluster = null;
+  private UserArgs userArgs;
+  private OzoneConfiguration conf;
+  private String clusterId;
+  private String scmId;
+  private String omId;
+
+  @Rule
+  public Timeout timeout = new Timeout(6);
+
+  /**
+   * Create a MiniDFSCluster for testing.
+   * 
+   * Ozone is made active by setting OZONE_ENABLED = true
+   *
+   * @throws IOException
+   */
+  @Before
+  public void init() throws Exception {
+conf = new OzoneConfiguration();
+clusterId = UUID.randomUUID().toString();
+scmId = UUID.randomUUID().toString();
+omId = UUID.randomUUID().toString();
+conf.setBoolean(OZONE_ACL_ENABLED, true);
+conf.setInt(OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS, 2);
+conf.set(OZONE_ADMINISTRATORS, OZONE_ADMINISTRATORS_WILDCARD);
+cluster =  MiniOzoneCluster.newBuilder(conf)
+.setClusterId(clusterId)
+.setScmId(scmId)
+.setOmId(omId)
+.build();
+cluster.waitForClusterToBeReady();
+userArgs = new UserArgs(null, OzoneUtils.getRequestID(),
+null, null, null, null);
+  }
+
+  /**
+   * Shutdown MiniDFSCluster.
+   */
+  @After
+  public void shutdown() {
+if (cluster != null) {
+  cluster.shutdown();
+}
+  }
+
+  @Test
+  public void testRestartOMWithVolumeOperation() throws Exception {
+String 

[hadoop] branch ozone-0.4.1 updated: HDDS-1924. ozone sh bucket path command does not exist

2019-08-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 3211958  HDDS-1924. ozone sh bucket path command does not exist
3211958 is described below

commit 32119585f02e545b3591319faf5b42cfbf022902
Author: Doroszlai, Attila 
AuthorDate: Wed Aug 7 18:21:37 2019 +0200

HDDS-1924. ozone sh bucket path command does not exist

Closes #1245

(cherry picked from commit 0520f5cedee0565a342a12a787ff9737f34691b1)
---
 hadoop-hdds/docs/content/interface/S3.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdds/docs/content/interface/S3.md 
b/hadoop-hdds/docs/content/interface/S3.md
index 952ce00..dc9b451 100644
--- a/hadoop-hdds/docs/content/interface/S3.md
+++ b/hadoop-hdds/docs/content/interface/S3.md
@@ -119,7 +119,7 @@ To show the storage location of a S3 bucket, use the `ozone s3 path 
 ```
 aws s3api --endpoint-url http://localhost:9878 create-bucket --bucket=bucket1
 
-ozone sh bucket path bucket1
+ozone s3 path bucket1
 Volume name for S3Bucket is : s3thisisakey
 Ozone FileSystem Uri is : o3fs://bucket1.s3thisisakey
 ```





[hadoop] branch trunk updated: HDDS-1924. ozone sh bucket path command does not exist

2019-08-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0520f5c  HDDS-1924. ozone sh bucket path command does not exist
0520f5c is described below

commit 0520f5cedee0565a342a12a787ff9737f34691b1
Author: Doroszlai, Attila 
AuthorDate: Wed Aug 7 18:21:37 2019 +0200

HDDS-1924. ozone sh bucket path command does not exist

Closes #1245
---
 hadoop-hdds/docs/content/interface/S3.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdds/docs/content/interface/S3.md 
b/hadoop-hdds/docs/content/interface/S3.md
index 952ce00..dc9b451 100644
--- a/hadoop-hdds/docs/content/interface/S3.md
+++ b/hadoop-hdds/docs/content/interface/S3.md
@@ -119,7 +119,7 @@ To show the storage location of a S3 bucket, use the `ozone s3 path 
 ```
 aws s3api --endpoint-url http://localhost:9878 create-bucket --bucket=bucket1
 
-ozone sh bucket path bucket1
+ozone s3 path bucket1
 Volume name for S3Bucket is : s3thisisakey
 Ozone FileSystem Uri is : o3fs://bucket1.s3thisisakey
 ```





[hadoop] branch HDDS-1682 deleted (was b039f75)

2019-08-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-1682
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 was b039f75  HDDS-1682. TestEventWatcher.testMetrics is flaky

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.





[hadoop] branch HDDS-901 deleted (was 992dd9d)

2019-08-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-901
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 was 992dd9d  HDDS-901. MultipartUpload: S3 API for Initiate multipart 
upload. Contributed by Bharat Viswanadham.

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.





[hadoop] branch HDDS-1469 deleted (was 28dc12b)

2019-08-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-1469
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 was 28dc12b  fix unit tests

This change permanently discards the following revisions:

 discard 28dc12b  fix unit tests
 discard 64ff28f  reestore ozone.scm.heartbeat.thread.interval
 discard 9038996  RAT and javadoc fixes
 discard 4c7d7f4  address review comments
 discard d7767a7  HDDS-1469. Generate default configuration fragments based on 
annotations. Contributed by Elek, Marton.





[hadoop] branch HDDS-1383 deleted (was 84e9243)

2019-08-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-1383
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 was 84e9243  upgrade

This change permanently discards the following revisions:

 discard 84e9243  upgrade





[hadoop] branch HDDS-1412 deleted (was bfa41ca)

2019-08-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-1412
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 was bfa41ca  fix RAT headers

This change permanently discards the following revisions:

 discard bfa41ca  fix RAT headers
 discard 390bb13  HDDS-1412. Provide example k8s deployment files as part of 
the release package





[hadoop] branch ozone-0.4.1 updated: HDDS-1877. hadoop31-mapreduce fails due to wrong HADOOP_VERSION

2019-07-31 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 4638a91  HDDS-1877. hadoop31-mapreduce fails due to wrong 
HADOOP_VERSION
4638a91 is described below

commit 4638a91fb6e31dbe74c02d6d15e6ab334f91da7a
Author: Doroszlai, Attila 
AuthorDate: Wed Jul 31 16:07:27 2019 +0200

HDDS-1877. hadoop31-mapreduce fails due to wrong HADOOP_VERSION

Closes #1193

(cherry picked from commit ac8ed7b5db01cd3ca5acc1f6370119742ba1a49f)
---
 hadoop-ozone/dev-support/checks/acceptance.sh | 1 -
 1 file changed, 1 deletion(-)

diff --git a/hadoop-ozone/dev-support/checks/acceptance.sh 
b/hadoop-ozone/dev-support/checks/acceptance.sh
index 4a50e08..1e80ad4 100755
--- a/hadoop-ozone/dev-support/checks/acceptance.sh
+++ b/hadoop-ozone/dev-support/checks/acceptance.sh
@@ -16,7 +16,6 @@
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 cd "$DIR/../../.." || exit 1
 
-export HADOOP_VERSION=3
 OZONE_VERSION=$(grep "<ozone.version>" "$DIR/../../pom.xml" | sed 's/<[^>]*>//g'|  sed 's/^[ \t]*//')
 cd "$DIR/../../dist/target/ozone-$OZONE_VERSION/compose" || exit 1
 ./test-all.sh
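The surrounding acceptance.sh derives `OZONE_VERSION` from the project `pom.xml` with a grep/sed pipeline: grep selects the version line (the tag name `<ozone.version>` is inferred here from the variable name, since the archive stripped the pattern), the first sed removes the XML tags, and the second strips leading whitespace. An illustrative run against a made-up pom fragment:

```shell
#!/usr/bin/env bash
# Illustrative version of the pipeline acceptance.sh uses to pull the
# Ozone version out of pom.xml; the pom content below is made up.
pom=$(mktemp)
cat > "$pom" <<'EOF'
<project>
  <properties>
    <ozone.version>0.4.1-SNAPSHOT</ozone.version>
  </properties>
</project>
EOF

# Select the version line, strip the XML tags, then strip leading
# whitespace, leaving just the bare version string.
OZONE_VERSION=$(grep "<ozone.version>" "$pom" | sed 's/<[^>]*>//g' | sed 's/^[ \t]*//')
echo "$OZONE_VERSION"   # prints "0.4.1-SNAPSHOT"

rm -f "$pom"
```

This avoids invoking Maven just to read a property, at the cost of assuming the version appears on a single line.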





[hadoop] branch trunk updated: HDDS-1877. hadoop31-mapreduce fails due to wrong HADOOP_VERSION

2019-07-31 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ac8ed7b  HDDS-1877. hadoop31-mapreduce fails due to wrong 
HADOOP_VERSION
ac8ed7b is described below

commit ac8ed7b5db01cd3ca5acc1f6370119742ba1a49f
Author: Doroszlai, Attila 
AuthorDate: Wed Jul 31 16:07:27 2019 +0200

HDDS-1877. hadoop31-mapreduce fails due to wrong HADOOP_VERSION

Closes #1193
---
 hadoop-ozone/dev-support/checks/acceptance.sh | 1 -
 1 file changed, 1 deletion(-)

diff --git a/hadoop-ozone/dev-support/checks/acceptance.sh 
b/hadoop-ozone/dev-support/checks/acceptance.sh
index 4a50e08..1e80ad4 100755
--- a/hadoop-ozone/dev-support/checks/acceptance.sh
+++ b/hadoop-ozone/dev-support/checks/acceptance.sh
@@ -16,7 +16,6 @@
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 cd "$DIR/../../.." || exit 1
 
-export HADOOP_VERSION=3
 OZONE_VERSION=$(grep "<ozone.version>" "$DIR/../../pom.xml" | sed 's/<[^>]*>//g'|  sed 's/^[ \t]*//')
 cd "$DIR/../../dist/target/ozone-$OZONE_VERSION/compose" || exit 1
 ./test-all.sh





[hadoop] branch ozone-0.4.1 updated: HDDS-1876. hadoop27 acceptance test cannot be run

2019-07-31 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 3afcbb7  HDDS-1876. hadoop27 acceptance test cannot be run
3afcbb7 is described below

commit 3afcbb7ea5a29277f647e3f8c234edaa6a3331cf
Author: Doroszlai, Attila 
AuthorDate: Wed Jul 31 16:02:30 2019 +0200

HDDS-1876. hadoop27 acceptance test cannot be run

Closes #1191
---
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/{.env~HEAD => .env} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/.env~HEAD 
b/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/.env
similarity index 100%
rename from hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/.env~HEAD
rename to hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/.env





[hadoop] branch ozone-0.4.1 updated: HDDS-1725. pv-test example to test csi is not working

2019-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 83fb9f9  HDDS-1725. pv-test example to test csi is not working
83fb9f9 is described below

commit 83fb9f9bec0b0b3731a2d6635f1a08cb6ee54168
Author: Márton Elek 
AuthorDate: Wed Jun 26 22:37:14 2019 +0200

HDDS-1725. pv-test example to test csi is not working

(cherry picked from commit 90afb7bf8cef7ec6b6e456c87a1c86e406f5f461)
---
 .../main/k8s/definitions/pv-test/flekszible.yaml   |  2 +-
 .../definitions/pv-test/nginx-conf-configmap.yaml  | 37 ---
 .../pv-test/webserver-deployment.yaml} | 45 ++---
 .../pv-test/webserver-service.yaml}|  6 +-
 .../pv-test/webserver-volume.yaml} |  2 +-
 .../src/main/k8s/examples/ozone-csi/Flekszible | 26 
 .../k8s/examples/ozone-csi/config-configmap.yaml   | 37 ---
 .../k8s/examples/ozone-csi/datanode-daemonset.yaml | 60 --
 .../k8s/examples/ozone-csi/datanode-service.yaml   | 28 -
 .../examples/ozone-csi/datanode-statefulset.yaml   | 66 ---
 .../k8s/examples/ozone-csi/om-statefulset.yaml | 65 ---
 .../ozone-csi/pv-test/nginx-conf-configmap.yaml| 38 ---
 .../main/k8s/examples/ozone-csi/s3g-service.yaml   | 28 -
 .../main/k8s/examples/ozone-csi/scm-service.yaml   | 28 -
 .../k8s/examples/ozone-csi/scm-statefulset.yaml| 73 --
 .../src/main/k8s/examples/ozone-dev/Flekszible |  9 +++
 .../k8s/examples/ozone-dev/config-configmap.yaml   |  3 +
 .../csi}/csi-node-daemonset.yaml   |  0
 .../csi}/csi-ozone-clusterrole.yaml|  0
 .../csi}/csi-ozone-clusterrolebinding.yaml |  2 +-
 .../csi}/csi-ozone-serviceaccount.yaml |  2 +-
 .../csi}/csi-provisioner-deployment.yaml   |  0
 .../csi}/org.apache.hadoop.ozone-csidriver.yaml|  0
 .../csi}/ozone-storageclass.yaml   |  0
 .../examples/ozone-dev/datanode-statefulset.yaml   |  6 +-
 .../k8s/examples/ozone-dev/om-statefulset.yaml |  6 +-
 .../prometheus-operator-clusterrolebinding.yaml|  2 +-
 .../ozone-csi-test-webserver-deployment.yaml}  | 28 -
 ...-csi-test-webserver-persistentvolumeclaim.yaml} |  5 +-
 .../pv-test/ozone-csi-test-webserver-service.yaml} |  9 +--
 .../k8s/examples/ozone-dev/s3g-statefulset.yaml|  6 +-
 .../k8s/examples/ozone-dev/scm-statefulset.yaml| 12 ++--
 .../dist/src/main/k8s/examples/ozone/Flekszible| 25 +++-
 .../main/k8s/examples/ozone/config-configmap.yaml  |  3 +
 .../csi}/csi-node-daemonset.yaml   |  0
 .../csi}/csi-ozone-clusterrole.yaml|  0
 .../csi}/csi-ozone-clusterrolebinding.yaml |  2 +-
 .../csi/csi-ozone-serviceaccount.yaml} |  6 ++
 .../csi}/csi-provisioner-deployment.yaml   |  0
 .../csi}/org.apache.hadoop.ozone-csidriver.yaml|  0
 .../csi}/ozone-storageclass.yaml   |  0
 .../k8s/examples/ozone/datanode-statefulset.yaml   |  6 +-
 .../main/k8s/examples/ozone/om-statefulset.yaml|  6 +-
 .../ozone-csi-test-webserver-deployment.yaml}  | 39 ++--
 ...-csi-test-webserver-persistentvolumeclaim.yaml} |  2 +-
 .../pv-test/ozone-csi-test-webserver-service.yaml} | 13 ++--
 .../main/k8s/examples/ozone/s3g-statefulset.yaml   |  6 +-
 .../main/k8s/examples/ozone/scm-statefulset.yaml   | 12 ++--
 48 files changed, 139 insertions(+), 612 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/flekszible.yaml 
b/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/flekszible.yaml
index bfe82ff..54203bd 100644
--- a/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/flekszible.yaml
+++ b/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/flekszible.yaml
@@ -13,4 +13,4 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-description: Nginx example deployment with persistent volume claim.
+description: Simple python based webserver with persistent volume claim.
diff --git 
a/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/nginx-conf-configmap.yaml 
b/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/nginx-conf-configmap.yaml
deleted file mode 100644
index 1fd8941..000
--- 
a/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/nginx-conf-configmap.yaml
+++ /dev/null
@@ -1,37 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"

[hadoop] branch trunk updated: HDDS-1725. pv-test example to test csi is not working

2019-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 90afb7b  HDDS-1725. pv-test example to test csi is not working
90afb7b is described below

commit 90afb7bf8cef7ec6b6e456c87a1c86e406f5f461
Author: Márton Elek 
AuthorDate: Wed Jun 26 22:37:14 2019 +0200

HDDS-1725. pv-test example to test csi is not working
---
 .../main/k8s/definitions/pv-test/flekszible.yaml   |  2 +-
 .../definitions/pv-test/nginx-conf-configmap.yaml  | 37 ---
 .../pv-test/webserver-deployment.yaml} | 45 ++---
 .../pv-test/webserver-service.yaml}|  6 +-
 .../pv-test/webserver-volume.yaml} |  2 +-
 .../src/main/k8s/examples/ozone-csi/Flekszible | 26 
 .../k8s/examples/ozone-csi/config-configmap.yaml   | 37 ---
 .../k8s/examples/ozone-csi/datanode-daemonset.yaml | 60 --
 .../k8s/examples/ozone-csi/datanode-service.yaml   | 28 -
 .../examples/ozone-csi/datanode-statefulset.yaml   | 66 ---
 .../k8s/examples/ozone-csi/om-statefulset.yaml | 65 ---
 .../ozone-csi/pv-test/nginx-conf-configmap.yaml| 38 ---
 .../main/k8s/examples/ozone-csi/s3g-service.yaml   | 28 -
 .../main/k8s/examples/ozone-csi/scm-service.yaml   | 28 -
 .../k8s/examples/ozone-csi/scm-statefulset.yaml| 73 --
 .../src/main/k8s/examples/ozone-dev/Flekszible |  9 +++
 .../k8s/examples/ozone-dev/config-configmap.yaml   |  3 +
 .../csi}/csi-node-daemonset.yaml   |  0
 .../csi}/csi-ozone-clusterrole.yaml|  0
 .../csi}/csi-ozone-clusterrolebinding.yaml |  2 +-
 .../csi}/csi-ozone-serviceaccount.yaml |  2 +-
 .../csi}/csi-provisioner-deployment.yaml   |  0
 .../csi}/org.apache.hadoop.ozone-csidriver.yaml|  0
 .../csi}/ozone-storageclass.yaml   |  0
 .../examples/ozone-dev/datanode-statefulset.yaml   |  6 +-
 .../k8s/examples/ozone-dev/om-statefulset.yaml |  6 +-
 .../prometheus-operator-clusterrolebinding.yaml|  2 +-
 .../ozone-csi-test-webserver-deployment.yaml}  | 28 -
 ...-csi-test-webserver-persistentvolumeclaim.yaml} |  5 +-
 .../pv-test/ozone-csi-test-webserver-service.yaml} |  9 +--
 .../k8s/examples/ozone-dev/s3g-statefulset.yaml|  6 +-
 .../k8s/examples/ozone-dev/scm-statefulset.yaml| 12 ++--
 .../dist/src/main/k8s/examples/ozone/Flekszible| 25 +++-
 .../main/k8s/examples/ozone/config-configmap.yaml  |  3 +
 .../csi}/csi-node-daemonset.yaml   |  0
 .../csi}/csi-ozone-clusterrole.yaml|  0
 .../csi}/csi-ozone-clusterrolebinding.yaml |  2 +-
 .../csi/csi-ozone-serviceaccount.yaml} |  6 ++
 .../csi}/csi-provisioner-deployment.yaml   |  0
 .../csi}/org.apache.hadoop.ozone-csidriver.yaml|  0
 .../csi}/ozone-storageclass.yaml   |  0
 .../k8s/examples/ozone/datanode-statefulset.yaml   |  6 +-
 .../main/k8s/examples/ozone/om-statefulset.yaml|  6 +-
 .../ozone-csi-test-webserver-deployment.yaml}  | 39 ++--
 ...-csi-test-webserver-persistentvolumeclaim.yaml} |  2 +-
 .../pv-test/ozone-csi-test-webserver-service.yaml} | 13 ++--
 .../main/k8s/examples/ozone/s3g-statefulset.yaml   |  6 +-
 .../main/k8s/examples/ozone/scm-statefulset.yaml   | 12 ++--
 48 files changed, 139 insertions(+), 612 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/flekszible.yaml 
b/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/flekszible.yaml
index bfe82ff..54203bd 100644
--- a/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/flekszible.yaml
+++ b/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/flekszible.yaml
@@ -13,4 +13,4 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-description: Nginx example deployment with persistent volume claim.
+description: Simple python based webserver with persistent volume claim.
diff --git 
a/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/nginx-conf-configmap.yaml 
b/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/nginx-conf-configmap.yaml
deleted file mode 100644
index 1fd8941..000
--- 
a/hadoop-ozone/dist/src/main/k8s/definitions/pv-test/nginx-conf-configmap.yaml
+++ /dev/null
@@ -1,37 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a c

[hadoop] branch trunk updated (61ec03c -> b039f75)

2019-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 61ec03c  HDDS-1852. Fix typo in TestOmAcls
 add b039f75  HDDS-1682. TestEventWatcher.testMetrics is flaky

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hdds/server/events/EventWatcher.java| 11 +-
 .../hdds/server/events/TestEventWatcher.java   | 24 +++---
 2 files changed, 23 insertions(+), 12 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch HDDS-1682 created (now b039f75)

2019-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-1682
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at b039f75  HDDS-1682. TestEventWatcher.testMetrics is flaky

This branch includes the following new commits:

 new b039f75  HDDS-1682. TestEventWatcher.testMetrics is flaky

The 1 revision listed above as "new" is entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[hadoop] 01/01: HDDS-1682. TestEventWatcher.testMetrics is flaky

2019-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch HDDS-1682
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b039f7591f7a9988ca283da0e15aec1ac858e24c
Author: Márton Elek 
AuthorDate: Thu Jun 13 16:27:23 2019 +0200

HDDS-1682. TestEventWatcher.testMetrics is flaky

Closes #962.
---
 .../hadoop/hdds/server/events/EventWatcher.java| 11 +-
 .../hdds/server/events/TestEventWatcher.java   | 24 +++---
 2 files changed, 23 insertions(+), 12 deletions(-)

diff --git 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcher.java
 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcher.java
index a118c3e..301c71e 100644
--- 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcher.java
+++ 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcher.java
@@ -143,14 +143,15 @@ public abstract class EventWatcher= 2);
 
 DefaultMetricsSystem.shutdown();
   }





[hadoop] branch trunk updated: HDDS-1852. Fix typo in TestOmAcls

2019-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 61ec03c  HDDS-1852. Fix typo in TestOmAcls
61ec03c is described below

commit 61ec03c966d2d1b082851c737521a1df16faa2d8
Author: Doroszlai, Attila 
AuthorDate: Mon Jul 29 10:46:11 2019 +0200

HDDS-1852. Fix typo in TestOmAcls

Closes #1173
---
 .../src/test/java/org/apache/hadoop/ozone/om/TestOmAcls.java  | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmAcls.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmAcls.java
index 138e23d..c437564 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmAcls.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmAcls.java
@@ -24,7 +24,6 @@ import org.apache.hadoop.hdds.protocol.StorageType;
 import org.apache.hadoop.hdfs.server.datanode.ObjectStoreHandler;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.apache.hadoop.ozone.OzoneTestUtils;
-import org.apache.hadoop.ozone.om.exceptions.OMException;
 import org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes;
 import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
 import org.apache.hadoop.ozone.security.acl.IOzoneObj;
@@ -80,7 +79,7 @@ public class TestOmAcls {
 omId = UUID.randomUUID().toString();
 conf.setBoolean(OZONE_ACL_ENABLED, true);
 conf.setInt(OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS, 2);
-conf.setClass(OZONE_ACL_AUTHORIZER_CLASS, OzoneAccessAuthrizerTest.class,
+conf.setClass(OZONE_ACL_AUTHORIZER_CLASS, OzoneAccessAuthorizerTest.class,
 IAccessAuthorizer.class);
 cluster = MiniOzoneCluster.newBuilder(conf)
 .setClusterId(clusterId)
@@ -165,11 +164,10 @@ public class TestOmAcls {
 /**
  * Test implementation to negative case.
  */
-class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
+class OzoneAccessAuthorizerTest implements IAccessAuthorizer {
 
   @Override
-  public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context)
-  throws OMException {
+  public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context) {
 return false;
   }
 }





[hadoop] branch ozone-0.4.1 updated: HDDS-1867. Invalid Prometheus metric name from JvmMetrics

2019-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new e4c16a3  HDDS-1867. Invalid Prometheus metric name from JvmMetrics
e4c16a3 is described below

commit e4c16a30d998ea9bdcb8b3bee44cbef0f9fc7d6b
Author: Doroszlai, Attila 
AuthorDate: Mon Jul 29 10:27:55 2019 +0200

HDDS-1867. Invalid Prometheus metric name from JvmMetrics

Closes #1172

(cherry picked from commit 902ff4a2f60cba8e8489dde40e3c8b8ba30a75b4)
---
 .../org/apache/hadoop/hdds/server/PrometheusMetricsSink.java  |  7 +--
 .../apache/hadoop/hdds/server/TestPrometheusMetricsSink.java  | 11 +++
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
index df25cfc..f23d528 100644
--- 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
+++ 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
@@ -49,6 +49,9 @@ public class PrometheusMetricsSink implements MetricsSink {
   private static final Pattern SPLIT_PATTERN =
   Pattern.compile("(?

[hadoop] branch trunk updated: HDDS-1867. Invalid Prometheus metric name from JvmMetrics

2019-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 902ff4a  HDDS-1867. Invalid Prometheus metric name from JvmMetrics
902ff4a is described below

commit 902ff4a2f60cba8e8489dde40e3c8b8ba30a75b4
Author: Doroszlai, Attila 
AuthorDate: Mon Jul 29 10:27:55 2019 +0200

HDDS-1867. Invalid Prometheus metric name from JvmMetrics

Closes #1172
---
 .../org/apache/hadoop/hdds/server/PrometheusMetricsSink.java  |  7 +--
 .../apache/hadoop/hdds/server/TestPrometheusMetricsSink.java  | 11 +++
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
index df25cfc..f23d528 100644
--- 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
+++ 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
@@ -49,6 +49,9 @@ public class PrometheusMetricsSink implements MetricsSink {
   private static final Pattern SPLIT_PATTERN =
   Pattern.compile("(?
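
The email above is truncated by the archive right at the `SPLIT_PATTERN` definition. The fix addresses JvmMetrics names that embed the garbage-collector name (for example `PS Scavenge`), which contain spaces and are therefore invalid Prometheus metric names. A minimal, hypothetical Python sketch of this style of normalization (the pattern and helper below are illustrative assumptions, not Hadoop's actual `PrometheusMetricsSink` code, which is cut off above):

```python
import re

# Zero-width split points: lowercase/digit followed by uppercase, and the
# last capital of an acronym before a normal word (e.g. "PSScavenge").
SPLIT_PATTERN = re.compile(
    r"(?<=[a-z0-9])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])")

def prometheus_name(record_name: str, metric_name: str) -> str:
    base = record_name + "." + metric_name
    snake = SPLIT_PATTERN.sub("_", base)  # camelCase -> camel_Case
    # Replace every character that is not legal in a Prometheus metric
    # name (here restricted to [a-zA-Z0-9_]) with an underscore.
    return re.sub(r"[^a-zA-Z0-9_]", "_", snake).lower()

print(prometheus_name("JvmMetrics", "GcTimeMillisPS Scavenge"))
# jvm_metrics_gc_time_millis_ps_scavenge
```

The key point of the fix is the second substitution: splitting camelCase alone is not enough when the raw metric name already contains characters (like a space) that Prometheus rejects.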

[hadoop] branch ozone-0.4.1 updated: HDDS-1785. OOM error in Freon due to the concurrency handling

2019-07-17 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 669c447  HDDS-1785. OOM error in Freon due to the concurrency handling
669c447 is described below

commit 669c4478db0fede0ba51313daa4873b74ed61cf5
Author: Doroszlai, Attila 
AuthorDate: Wed Jul 17 14:47:01 2019 +0200

HDDS-1785. OOM error in Freon due to the concurrency handling

Closes #1085

(cherry picked from commit 256fcc160edce5031bee83ffbd1aed60e148430d)
---
 .../hadoop/ozone/freon/RandomKeyGenerator.java | 308 -
 1 file changed, 174 insertions(+), 134 deletions(-)

diff --git 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
index 6e1e02c..5198ac3 100644
--- 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
+++ 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
@@ -25,9 +25,11 @@ import java.io.PrintStream;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
 import java.util.HashMap;
+import java.util.Map;
 import java.util.UUID;
 import java.util.concurrent.BlockingQueue;
 import java.util.concurrent.Callable;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.TimeUnit;
@@ -95,8 +97,6 @@ public final class RandomKeyGenerator implements 
Callable {
 KEY_WRITE
   }
 
-  private static final String RATIS = "ratis";
-
   private static final String DURATION_FORMAT = "HH:mm:ss,SSS";
 
   private static final int QUANTILES = 10;
@@ -112,8 +112,8 @@ public final class RandomKeyGenerator implements 
Callable {
   private static final Logger LOG =
   LoggerFactory.getLogger(RandomKeyGenerator.class);
 
-  private boolean completed = false;
-  private Exception exception = null;
+  private volatile boolean completed = false;
+  private volatile Exception exception = null;
 
   @Option(names = "--numOfThreads",
   description = "number of threads to be launched for the run",
@@ -193,6 +193,14 @@ public final class RandomKeyGenerator implements 
Callable {
 
   private AtomicLong totalBytesWritten;
 
+  private int totalBucketCount;
+  private long totalKeyCount;
+  private AtomicInteger volumeCounter;
+  private AtomicInteger bucketCounter;
+  private AtomicLong keyCounter;
+  private Map volumes;
+  private Map buckets;
+
   private AtomicInteger numberOfVolumesCreated;
   private AtomicInteger numberOfBucketsCreated;
   private AtomicLong numberOfKeysAdded;
@@ -226,6 +234,11 @@ public final class RandomKeyGenerator implements 
Callable {
 numberOfVolumesCreated = new AtomicInteger();
 numberOfBucketsCreated = new AtomicInteger();
 numberOfKeysAdded = new AtomicLong();
+volumeCounter = new AtomicInteger();
+bucketCounter = new AtomicInteger();
+keyCounter = new AtomicLong();
+volumes = new ConcurrentHashMap<>();
+buckets = new ConcurrentHashMap<>();
 ozoneClient = OzoneClientFactory.getClient(configuration);
 objectStore = ozoneClient.getObjectStore();
 for (FreonOps ops : FreonOps.values()) {
@@ -259,6 +272,9 @@ public final class RandomKeyGenerator implements 
Callable {
   }
 }
 
+totalBucketCount = numOfVolumes * numOfBuckets;
+totalKeyCount = totalBucketCount * numOfKeys;
+
 LOG.info("Number of Threads: " + numOfThreads);
 threadPoolSize = numOfThreads;
 executor = Executors.newFixedThreadPool(threadPoolSize);
@@ -269,9 +285,8 @@ public final class RandomKeyGenerator implements 
Callable {
 LOG.info("Number of Keys per Bucket: {}.", numOfKeys);
 LOG.info("Key size: {} bytes", keySize);
 LOG.info("Buffer size: {} bytes", bufferSize);
-for (int i = 0; i < numOfVolumes; i++) {
-  String volumeName = "vol-" + i + "-" + 
RandomStringUtils.randomNumeric(5);
-  executor.submit(new VolumeProcessor(volumeName));
+for (int i = 0; i < numOfThreads; i++) {
+  executor.submit(new ObjectCreator());
 }
 
 Thread validator = null;
@@ -286,22 +301,15 @@ public final class RandomKeyGenerator implements 
Callable {
   LOG.info("Data validation is enabled.");
 }
 
-Supplier currentValue;
-long maxValue;
-
-currentValue = () -> numberOfKeysAdded.get();
-maxValue = numOfVolumes *
-numOfBuckets *
-numOfKeys;
-
-progressbar = new ProgressBar(System.out, maxValue, currentValue);
+Supplier currentValue = numberOfKeysAdded::get;
+progressbar = new ProgressBar(System.out, totalKeyCount, curr
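
The Freon change above replaces the old one-VolumeProcessor-task-per-volume model with `numOfThreads` generic `ObjectCreator` workers that draw work from shared atomic counters, which keeps the task queue bounded and avoids the OOM. A toy Python sketch of that scheme (the names and index arithmetic are illustrative assumptions, not Freon's actual code):

```python
import itertools
import threading
from concurrent.futures import ThreadPoolExecutor

NUM_VOLUMES, NUM_BUCKETS, NUM_KEYS = 2, 3, 4
TOTAL_KEYS = NUM_VOLUMES * NUM_BUCKETS * NUM_KEYS

key_counter = itertools.count()  # next() is effectively atomic under CPython's GIL
created = []
created_lock = threading.Lock()

def object_creator():
    # Each worker pulls the next global key index and derives its
    # (volume, bucket, key) coordinates from it, so a fixed-size pool
    # covers all volumes instead of one queued task per volume.
    while True:
        i = next(key_counter)
        if i >= TOTAL_KEYS:
            return
        volume, rest = divmod(i, NUM_BUCKETS * NUM_KEYS)
        bucket, key = divmod(rest, NUM_KEYS)
        with created_lock:
            created.append((volume, bucket, key))

with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(4):
        pool.submit(object_creator)

assert len(created) == TOTAL_KEYS and len(set(created)) == TOTAL_KEYS
```

Memory use is now proportional to the thread count, not to the number of volumes and keys requested.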

[hadoop] branch trunk updated (b4466a3 -> 256fcc1)

2019-07-17 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from b4466a3  HADOOP-16341. ShutDownHookManager: Regressed performance on 
Hook removals after HADOOP-15679
 add 256fcc1  HDDS-1785. OOM error in Freon due to the concurrency handling

No new revisions were added by this update.

Summary of changes:
 .../hadoop/ozone/freon/RandomKeyGenerator.java | 308 -
 1 file changed, 174 insertions(+), 134 deletions(-)





[hadoop] branch ozone-0.4.1 updated: HDDS-1793. Acceptance test of ozone-topology cluster is failing

2019-07-16 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 711fd71  HDDS-1793. Acceptance test of ozone-topology cluster is 
failing
711fd71 is described below

commit 711fd71c31023d4199e7802295ef7c5fe4efe305
Author: Doroszlai, Attila 
AuthorDate: Tue Jul 16 16:52:14 2019 +0200

HDDS-1793. Acceptance test of ozone-topology cluster is failing

Closes #1096
---
 .../src/main/compose/ozone-net-topology/test.sh|  2 +-
 hadoop-ozone/dist/src/main/compose/testlib.sh  | 28 +-
 2 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/compose/ozone-net-topology/test.sh 
b/hadoop-ozone/dist/src/main/compose/ozone-net-topology/test.sh
index f36fb48..2843de2 100755
--- a/hadoop-ozone/dist/src/main/compose/ozone-net-topology/test.sh
+++ b/hadoop-ozone/dist/src/main/compose/ozone-net-topology/test.sh
@@ -21,7 +21,7 @@ export COMPOSE_DIR
 # shellcheck source=/dev/null
 source "$COMPOSE_DIR/../testlib.sh"
 
-start_docker_env
+start_docker_env 4
 
 #Due to the limitation of the current auditparser test, it should be the
 #first test in a clean cluster.
diff --git a/hadoop-ozone/dist/src/main/compose/testlib.sh 
b/hadoop-ozone/dist/src/main/compose/testlib.sh
index 410e059..065c53f 100755
--- a/hadoop-ozone/dist/src/main/compose/testlib.sh
+++ b/hadoop-ozone/dist/src/main/compose/testlib.sh
@@ -28,9 +28,12 @@ mkdir -p "$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$RESULT_DIR"
 
-## @description wait until 3 datanodes are up (or 30 seconds)
+## @description wait until datanodes are up (or 30 seconds)
 ## @param the docker-compose file
+## @param number of datanodes to wait for (default: 3)
 wait_for_datanodes(){
+  local compose_file=$1
+  local -i datanode_count=${2:-3}
 
   #Reset the timer
   SECONDS=0
@@ -40,19 +43,19 @@ wait_for_datanodes(){
 
  #This line checks the number of HEALTHY datanodes registered in scm over 
the
  # jmx HTTP servlet
- datanodes=$(docker-compose -f "$1" exec -T scm curl -s 
'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
 | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value')
-  if [[ "$datanodes" == "3" ]]; then
+ datanodes=$(docker-compose -f "${compose_file}" exec -T scm curl -s 
'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
 | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value')
+ if [[ "$datanodes" ]]; then
+   if [[ ${datanodes} -ge ${datanode_count} ]]; then
 
-#It's up and running. Let's return from the function.
+ #It's up and running. Let's return from the function.
  echo "$datanodes datanodes are up and registered to the scm"
  return
-  else
+   else
 
- #Print it only if a number. Could be not a number if scm is not yet 
started
- if [[ "$datanodes" ]]; then
-echo "$datanodes datanode is up and healthy (until now)"
+   #Print it only if a number. Could be not a number if scm is not yet 
started
+   echo "$datanodes datanode is up and healthy (until now)"
  fi
-  fi
+ fi
 
   sleep 2
done
@@ -60,10 +63,13 @@ wait_for_datanodes(){
 }
 
 ## @description  Starts a docker-compose based test environment
+## @param number of datanodes to start and wait for (default: 3)
 start_docker_env(){
+  local -i datanode_count=${1:-3}
+
   docker-compose -f "$COMPOSE_FILE" down
-  docker-compose -f "$COMPOSE_FILE" up -d --scale datanode=3
-  wait_for_datanodes "$COMPOSE_FILE"
+  docker-compose -f "$COMPOSE_FILE" up -d --scale datanode="${datanode_count}"
+  wait_for_datanodes "$COMPOSE_FILE" "${datanode_count}"
   sleep 10
 }
 


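
Both copies of this fix parameterize the datanode count in `wait_for_datanodes` and `start_docker_env` so topologies with more than 3 datanodes can be tested. The underlying pattern is a bounded poll loop; a small Python sketch, with a stub `probe` standing in for the JMX `curl | jq` query (the function name and defaults are illustrative, not the script's API):

```python
import time

def wait_for(probe, expected_count, timeout=30, interval=2):
    """Poll probe() until it reports at least expected_count healthy
    nodes, or until timeout seconds elapse. Returns True on success."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        count = probe()
        # probe() may return None while SCM is not yet up, mirroring the
        # "could be not a number if scm is not yet started" case above.
        if count is not None and count >= expected_count:
            return True
        time.sleep(interval)
    return False

# Simulated probe: SCM not up, then 1 node, then all 4 registered.
responses = iter([None, 1, 4])
assert wait_for(lambda: next(responses), 4, timeout=10, interval=0)
```

As in the shell version, the check is now `>=` the requested count rather than `== 3`, so extra datanodes do not stall the wait.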



[hadoop] branch trunk updated: HDDS-1793. Acceptance test of ozone-topology cluster is failing

2019-07-16 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c5e3ab5  HDDS-1793. Acceptance test of ozone-topology cluster is 
failing
c5e3ab5 is described below

commit c5e3ab5a4d80c2650050d54f01abc7b0c6259619
Author: Doroszlai, Attila 
AuthorDate: Tue Jul 16 16:52:14 2019 +0200

HDDS-1793. Acceptance test of ozone-topology cluster is failing

Closes #1096
---
 .../compose/ozone-topology/docker-compose.yaml |  6 -
 .../dist/src/main/compose/ozone-topology/test.sh   |  2 +-
 hadoop-ozone/dist/src/main/compose/testlib.sh  | 28 +-
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git 
a/hadoop-ozone/dist/src/main/compose/ozone-topology/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/ozone-topology/docker-compose.yaml
index b14f398..7b99a7b 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-topology/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozone-topology/docker-compose.yaml
@@ -19,7 +19,6 @@ services:
datanode_1:
   image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
   privileged: true #required by the profiler
-  container_name: datanode_1
   volumes:
 - ../..:/opt/hadoop
   ports:
@@ -34,7 +33,6 @@ services:
datanode_2:
   image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
   privileged: true #required by the profiler
-  container_name: datanode_2
   volumes:
 - ../..:/opt/hadoop
   ports:
@@ -49,7 +47,6 @@ services:
datanode_3:
   image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
   privileged: true #required by the profiler
-  container_name: datanode_3
   volumes:
 - ../..:/opt/hadoop
   ports:
@@ -64,7 +61,6 @@ services:
datanode_4:
   image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
   privileged: true #required by the profiler
-  container_name: datanode_4
   volumes:
 - ../..:/opt/hadoop
   ports:
@@ -79,7 +75,6 @@ services:
om:
   image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
   privileged: true #required by the profiler
-  container_name: om
   volumes:
  - ../..:/opt/hadoop
   ports:
@@ -95,7 +90,6 @@ services:
scm:
   image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
   privileged: true #required by the profiler
-  container_name: scm
   volumes:
  - ../..:/opt/hadoop
   ports:
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-topology/test.sh 
b/hadoop-ozone/dist/src/main/compose/ozone-topology/test.sh
index f36fb48..2843de2 100755
--- a/hadoop-ozone/dist/src/main/compose/ozone-topology/test.sh
+++ b/hadoop-ozone/dist/src/main/compose/ozone-topology/test.sh
@@ -21,7 +21,7 @@ export COMPOSE_DIR
 # shellcheck source=/dev/null
 source "$COMPOSE_DIR/../testlib.sh"
 
-start_docker_env
+start_docker_env 4
 
 #Due to the limitation of the current auditparser test, it should be the
 #first test in a clean cluster.
diff --git a/hadoop-ozone/dist/src/main/compose/testlib.sh 
b/hadoop-ozone/dist/src/main/compose/testlib.sh
index 410e059..065c53f 100755
--- a/hadoop-ozone/dist/src/main/compose/testlib.sh
+++ b/hadoop-ozone/dist/src/main/compose/testlib.sh
@@ -28,9 +28,12 @@ mkdir -p "$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$RESULT_DIR"
 
-## @description wait until 3 datanodes are up (or 30 seconds)
+## @description wait until datanodes are up (or 30 seconds)
 ## @param the docker-compose file
+## @param number of datanodes to wait for (default: 3)
 wait_for_datanodes(){
+  local compose_file=$1
+  local -i datanode_count=${2:-3}
 
   #Reset the timer
   SECONDS=0
@@ -40,19 +43,19 @@ wait_for_datanodes(){
 
  #This line checks the number of HEALTHY datanodes registered in scm over 
the
  # jmx HTTP servlet
- datanodes=$(docker-compose -f "$1" exec -T scm curl -s 
'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
 | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value')
-  if [[ "$datanodes" == "3" ]]; then
+ datanodes=$(docker-compose -f "${compose_file}" exec -T scm curl -s 
'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
 | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value')
+ if [[ "$datanodes" ]]; then
+   if [[ ${datanodes} -ge ${datanode_count} ]]; then
 
-#It's up and running. Let's return from the function.
+ #It's up and running. Let's return from the function.
  echo "$datanodes datanodes are up and registered to the scm"
  return

[hadoop] branch ozone-0.4.1 updated (eaec8e2 -> 3c41bc7)

2019-07-15 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from eaec8e2  Revert "HDDS-1735. Create separate unit and integration test 
executor dev-support script"
 new 97897b6  HDDS-1735. Create separate unit and integration test executor 
dev-support script
 new 3c41bc7  HDDS-1800. Result of author check is inverted

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 hadoop-ozone/dev-support/checks/acceptance.sh   |  6 +-
 hadoop-ozone/dev-support/checks/author.sh   | 15 ++-
 hadoop-ozone/dev-support/checks/build.sh|  5 -
 hadoop-ozone/dev-support/checks/checkstyle.sh   | 10 --
 hadoop-ozone/dev-support/checks/findbugs.sh | 12 +++-
 .../dev-support/checks/{unit.sh => integration.sh}  | 12 
 hadoop-ozone/dev-support/checks/isolation.sh|  7 +--
 hadoop-ozone/dev-support/checks/rat.sh  | 17 +
 .../dev-support/checks/{acceptance.sh => shellcheck.sh} | 15 ---
 hadoop-ozone/dev-support/checks/unit.sh |  8 
 10 files changed, 76 insertions(+), 31 deletions(-)
 copy hadoop-ozone/dev-support/checks/{unit.sh => integration.sh} (72%)
 copy hadoop-ozone/dev-support/checks/{acceptance.sh => shellcheck.sh} (64%)





[hadoop] 01/02: HDDS-1735. Create separate unit and integration test executor dev-support script

2019-07-15 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 97897b6ab747e973fd05feac70e4c89bce3daa07
Author: Márton Elek 
AuthorDate: Sat Jun 29 01:59:44 2019 +0200

HDDS-1735. Create separate unit and integration test executor dev-support 
script

(cherry picked from commit 62a057b8d647730498ea9a04d57f18b4520d09cf)
---
 hadoop-ozone/dev-support/checks/acceptance.sh   |  6 +-
 hadoop-ozone/dev-support/checks/author.sh   | 14 ++
 hadoop-ozone/dev-support/checks/build.sh|  5 -
 hadoop-ozone/dev-support/checks/checkstyle.sh   | 10 --
 hadoop-ozone/dev-support/checks/findbugs.sh | 12 +++-
 .../dev-support/checks/{unit.sh => integration.sh}  | 12 
 hadoop-ozone/dev-support/checks/isolation.sh|  7 +--
 hadoop-ozone/dev-support/checks/rat.sh  | 17 +
 .../dev-support/checks/{acceptance.sh => shellcheck.sh} | 15 ---
 hadoop-ozone/dev-support/checks/unit.sh |  8 
 10 files changed, 76 insertions(+), 30 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/acceptance.sh 
b/hadoop-ozone/dev-support/checks/acceptance.sh
index 8de920f..4a50e08 100755
--- a/hadoop-ozone/dev-support/checks/acceptance.sh
+++ b/hadoop-ozone/dev-support/checks/acceptance.sh
@@ -14,6 +14,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+cd "$DIR/../../.." || exit 1
+
 export HADOOP_VERSION=3
-"$DIR/../../../hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/test-all.sh"
+OZONE_VERSION=$(grep "" "$DIR/../../pom.xml" | sed 
's/<[^>]*>//g'|  sed 's/^[ \t]*//')
+cd "$DIR/../../dist/target/ozone-$OZONE_VERSION/compose" || exit 1
+./test-all.sh
 exit $?
diff --git a/hadoop-ozone/dev-support/checks/author.sh 
b/hadoop-ozone/dev-support/checks/author.sh
index 43caa70..d5a469c 100755
--- a/hadoop-ozone/dev-support/checks/author.sh
+++ b/hadoop-ozone/dev-support/checks/author.sh
@@ -13,10 +13,16 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-mkdir -p ./target
-grep -r --include="*.java" "@author" .
-if [ $? -gt 0 ]; then
+DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+cd "$DIR/../../.." || exit 1
+
+#hide this string to not confuse yetus
+AUTHOR="uthor"
+AUTHOR="@a${AUTHOR}"
+
+grep -r --include="*.java" "$AUTHOR" .
+if grep -r --include="*.java" "$AUTHOR" .; then
   exit 0
 else
-  exit -1
+  exit 1
 fi
diff --git a/hadoop-ozone/dev-support/checks/build.sh 
b/hadoop-ozone/dev-support/checks/build.sh
index 6a7811e..1197330 100755
--- a/hadoop-ozone/dev-support/checks/build.sh
+++ b/hadoop-ozone/dev-support/checks/build.sh
@@ -13,6 +13,9 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+cd "$DIR/../../.." || exit 1
+
 export MAVEN_OPTS="-Xmx4096m"
-mvn -am -pl :hadoop-ozone-dist -P hdds -Dmaven.javadoc.skip=true -DskipTests 
clean install
+mvn -B -f pom.ozone.xml -Dmaven.javadoc.skip=true -DskipTests clean install
 exit $?
diff --git a/hadoop-ozone/dev-support/checks/checkstyle.sh 
b/hadoop-ozone/dev-support/checks/checkstyle.sh
index 0d80fbc..c4de528 100755
--- a/hadoop-ozone/dev-support/checks/checkstyle.sh
+++ b/hadoop-ozone/dev-support/checks/checkstyle.sh
@@ -13,11 +13,17 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-mvn -fn checkstyle:check -am -pl :hadoop-ozone-dist -Phdds
+DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+cd "$DIR/../../.." || exit 1
+
+mvn -B -fn checkstyle:check -f pom.ozone.xml
+
+#Print out the exact violations with parsing XML results with sed
+find "." -name checkstyle-errors.xml -print0  | xargs -0 sed  '$!N; 
//d'
 
 violations=$(grep -r error --include checkstyle-errors.xml .| wc -l)
 if [[ $violations -gt 0 ]]; then
 echo "There are $violations checkstyle violations"
-exit -1
+exit 1
 fi
 exit 0
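The violation count above relies on grep reporting matching lines across files. A minimal stand-alone sketch of the same idiom — the directory, file name, and contents below are invented for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the counting idiom used in checkstyle.sh above.
# Counts lines containing a pattern in matching files, then fails the
# check when the count is non-zero. All paths here are hypothetical.
count_matches() {
  # lines containing $1 in files named $2 under directory $3
  grep -r "$1" --include "$2" "$3" | wc -l
}

dir=$(mktemp -d)
printf 'error: unused import\nerror: line too long\n' \
  > "$dir/checkstyle-errors.xml"

violations=$(count_matches error checkstyle-errors.xml "$dir")
if [ "$violations" -gt 0 ]; then
  echo "There are $violations checkstyle violations"
fi
rm -rf "$dir"
```

Counting matching lines (rather than matching files) means a single report file with many violations is still counted accurately, which matches the `grep -r error --include checkstyle-errors.xml . | wc -l` line in the diff.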
diff --git

[hadoop] 02/02: HDDS-1800. Result of author check is inverted

2019-07-15 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 3c41bc7f918e8e850966ef65a68cb7caa3dc009b
Author: Doroszlai, Attila 
AuthorDate: Mon Jul 15 18:00:10 2019 +0200

HDDS-1800. Result of author check is inverted

Closes #1092
---
 hadoop-ozone/dev-support/checks/author.sh | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/author.sh 
b/hadoop-ozone/dev-support/checks/author.sh
index d5a469c..f50a396 100755
--- a/hadoop-ozone/dev-support/checks/author.sh
+++ b/hadoop-ozone/dev-support/checks/author.sh
@@ -20,9 +20,8 @@ cd "$DIR/../../.." || exit 1
 AUTHOR="uthor"
 AUTHOR="@a${AUTHOR}"
 
-grep -r --include="*.java" "$AUTHOR" .
 if grep -r --include="*.java" "$AUTHOR" .; then
-  exit 0
-else
   exit 1
+else
+  exit 0
 fi


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
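The HDDS-1800 fix above hinges on grep's exit status: 0 when at least one line matches, 1 when none do. So a check that must *fail* when a forbidden tag is present has to invert that status. A self-contained sketch of the corrected polarity (the directory layout and file names are invented):

```shell
#!/usr/bin/env bash
# Sketch of the corrected author check: the build must FAIL (non-zero)
# when any Java file carries an @author tag. Paths are illustrative.
check_authors() {
  if grep -r --include="*.java" "@author" "$1" >/dev/null; then
    return 1   # match found: @author tags are forbidden
  else
    return 0   # no match: check passes
  fi
}

src=$(mktemp -d)
echo 'class Clean {}' > "$src/Clean.java"
check_authors "$src" && echo "clean tree passes"

echo '/** @author somebody */ class Bad {}' > "$src/Bad.java"
check_authors "$src" || echo "tagged tree fails"
rm -rf "$src"
```

Note that the real script also assembles the literal `@author` string at runtime (`AUTHOR="@a${AUTHOR}"`) so that the check script itself does not trip Yetus.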



[hadoop] branch ozone-0.4.1 updated: Revert "HDDS-1735. Create separate unit and integration test executor dev-support script"

2019-07-15 Thread elek

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new eaec8e2  Revert "HDDS-1735. Create separate unit and integration test 
executor dev-support script"
eaec8e2 is described below

commit eaec8e20d94c5c06fd87804458f0ed65a40ae7e5
Author: Márton Elek 
AuthorDate: Mon Jul 15 18:13:20 2019 +0200

Revert "HDDS-1735. Create separate unit and integration test executor 
dev-support script"

This reverts commit e8ea4dda04abbf02d426f5df882f1ac89b89f05b.
---
 hadoop-ozone/dev-support/checks/acceptance.sh  |  3 +--
 hadoop-ozone/dev-support/checks/author.sh  |  1 +
 hadoop-ozone/dev-support/checks/build.sh   |  2 +-
 hadoop-ozone/dev-support/checks/checkstyle.sh  |  5 +
 hadoop-ozone/dev-support/checks/findbugs.sh|  2 +-
 hadoop-ozone/dev-support/checks/integration.sh | 25 -
 hadoop-ozone/dev-support/checks/rat.sh |  5 +
 hadoop-ozone/dev-support/checks/unit.sh|  2 +-
 8 files changed, 7 insertions(+), 38 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/acceptance.sh 
b/hadoop-ozone/dev-support/checks/acceptance.sh
index 258c4e2..8de920f 100755
--- a/hadoop-ozone/dev-support/checks/acceptance.sh
+++ b/hadoop-ozone/dev-support/checks/acceptance.sh
@@ -15,6 +15,5 @@
 # limitations under the License.
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 export HADOOP_VERSION=3
-OZONE_VERSION=$(cat $DIR/../../pom.xml  | grep "" | sed 
's/<[^>]*>//g'|  sed 's/^[ \t]*//')
-"$DIR/../../dist/target/ozone-$OZONE_VERSION/compose/test-all.sh"
+"$DIR/../../../hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/test-all.sh"
 exit $?
diff --git a/hadoop-ozone/dev-support/checks/author.sh 
b/hadoop-ozone/dev-support/checks/author.sh
index 56d15a5..43caa70 100755
--- a/hadoop-ozone/dev-support/checks/author.sh
+++ b/hadoop-ozone/dev-support/checks/author.sh
@@ -13,6 +13,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+mkdir -p ./target
 grep -r --include="*.java" "@author" .
 if [ $? -gt 0 ]; then
   exit 0
diff --git a/hadoop-ozone/dev-support/checks/build.sh 
b/hadoop-ozone/dev-support/checks/build.sh
index 71bf778..6a7811e 100755
--- a/hadoop-ozone/dev-support/checks/build.sh
+++ b/hadoop-ozone/dev-support/checks/build.sh
@@ -14,5 +14,5 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 export MAVEN_OPTS="-Xmx4096m"
-mvn -B -f pom.ozone.xml -Dmaven.javadoc.skip=true -DskipTests clean install
+mvn -am -pl :hadoop-ozone-dist -P hdds -Dmaven.javadoc.skip=true -DskipTests 
clean install
 exit $?
diff --git a/hadoop-ozone/dev-support/checks/checkstyle.sh 
b/hadoop-ozone/dev-support/checks/checkstyle.sh
index 323cbc8..0d80fbc 100755
--- a/hadoop-ozone/dev-support/checks/checkstyle.sh
+++ b/hadoop-ozone/dev-support/checks/checkstyle.sh
@@ -13,10 +13,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-mvn -B -fn checkstyle:check -f pom.ozone.xml
-
-#Print out the exact violations with parsing XML results with sed
-find -name checkstyle-errors.xml | xargs sed  '$!N; //d'
+mvn -fn checkstyle:check -am -pl :hadoop-ozone-dist -Phdds
 
 violations=$(grep -r error --include checkstyle-errors.xml .| wc -l)
 if [[ $violations -gt 0 ]]; then
diff --git a/hadoop-ozone/dev-support/checks/findbugs.sh 
b/hadoop-ozone/dev-support/checks/findbugs.sh
index c8bd40b..1328492 100755
--- a/hadoop-ozone/dev-support/checks/findbugs.sh
+++ b/hadoop-ozone/dev-support/checks/findbugs.sh
@@ -20,7 +20,7 @@ mkdir -p ./target
 rm "$FINDBUGS_ALL_FILE" || true
 touch "$FINDBUGS_ALL_FILE"
 
-mvn -B compile -fn findbugs:check -Dfindbugs.failOnError=false  -f 
pom.ozone.xml
+mvn -fn findbugs:check -Dfindbugs.failOnError=false  -am -pl 
:hadoop-ozone-dist -Phdds
 
 find hadoop-ozone -name findbugsXml.xml | xargs -n1 convertXmlToText | tee -a 
"${FINDBUGS_ALL_FILE}"
 find hadoop-hdds -name findbugsXml.xml | xargs -n1 convertXmlToText | tee -a 
"${FINDBUGS_ALL_FILE}"
diff --git a/hadoop-ozone/dev-support/checks/integration.sh 
b/hadoop-ozone/dev-support/checks/integration.sh
deleted file mode 100755
index 8270d4f..000
--- a/hadoop-ozone/dev-support/checks/integration.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/usr/bin/env bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE f

[hadoop] branch trunk updated: HDDS-1800. Result of author check is inverted

2019-07-15 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 61bbdee  HDDS-1800. Result of author check is inverted
61bbdee is described below

commit 61bbdeee193d8bdcbadbc2823a3e63aab0c83422
Author: Doroszlai, Attila 
AuthorDate: Mon Jul 15 18:00:10 2019 +0200

HDDS-1800. Result of author check is inverted

Closes #1092
---
 hadoop-ozone/dev-support/checks/author.sh | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/author.sh 
b/hadoop-ozone/dev-support/checks/author.sh
index d5a469c..f50a396 100755
--- a/hadoop-ozone/dev-support/checks/author.sh
+++ b/hadoop-ozone/dev-support/checks/author.sh
@@ -20,9 +20,8 @@ cd "$DIR/../../.." || exit 1
 AUTHOR="uthor"
 AUTHOR="@a${AUTHOR}"
 
-grep -r --include="*.java" "$AUTHOR" .
 if grep -r --include="*.java" "$AUTHOR" .; then
-  exit 0
-else
   exit 1
+else
+  exit 0
 fi





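The check scripts in these commits all open with the same directory-resolution idiom, `DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"`, so that relative paths work no matter where the caller's working directory is. A sketch of the idiom, with a made-up directory tree standing in for the repository:

```shell
#!/usr/bin/env bash
# Sketch of the directory-resolution idiom shared by the check scripts:
# resolve the absolute directory containing the script itself.
resolve_script_dir() {
  # $1 stands in for ${BASH_SOURCE[0]}; the subshell keeps the caller's
  # working directory untouched.
  ( cd "$( dirname "$1" )" >/dev/null 2>&1 && pwd )
}

base=$(mktemp -d)
mkdir -p "$base/dev-support/checks"
resolve_script_dir "$base/dev-support/checks/author.sh"
# prints the checks directory; the real scripts then `cd "$DIR/../../.."`
# to reach the repository root before invoking maven or grep.
rm -rf "$base"
```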
[hadoop] branch ozone-0.4.1 updated: HDDS-1735. Create separate unit and integration test executor dev-support script

2019-07-15 Thread elek

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new e8ea4dd  HDDS-1735. Create separate unit and integration test executor 
dev-support script
e8ea4dd is described below

commit e8ea4dda04abbf02d426f5df882f1ac89b89f05b
Author: Márton Elek 
AuthorDate: Sat Jun 29 01:59:44 2019 +0200

HDDS-1735. Create separate unit and integration test executor dev-support 
script

(cherry picked from commit 0bae9e8ec8b53a3b484eaa01a3fa3f177d56b3e4)
---
 hadoop-ozone/dev-support/checks/acceptance.sh   | 3 ++-
 hadoop-ozone/dev-support/checks/author.sh   | 1 -
 hadoop-ozone/dev-support/checks/build.sh| 2 +-
 hadoop-ozone/dev-support/checks/checkstyle.sh   | 5 -
 hadoop-ozone/dev-support/checks/findbugs.sh | 2 +-
 hadoop-ozone/dev-support/checks/{unit.sh => integration.sh} | 3 ++-
 hadoop-ozone/dev-support/checks/rat.sh  | 5 -
 hadoop-ozone/dev-support/checks/unit.sh | 2 +-
 8 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/acceptance.sh 
b/hadoop-ozone/dev-support/checks/acceptance.sh
index 8de920f..258c4e2 100755
--- a/hadoop-ozone/dev-support/checks/acceptance.sh
+++ b/hadoop-ozone/dev-support/checks/acceptance.sh
@@ -15,5 +15,6 @@
 # limitations under the License.
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 export HADOOP_VERSION=3
-"$DIR/../../../hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/test-all.sh"
+OZONE_VERSION=$(cat $DIR/../../pom.xml  | grep "" | sed 
's/<[^>]*>//g'|  sed 's/^[ \t]*//')
+"$DIR/../../dist/target/ozone-$OZONE_VERSION/compose/test-all.sh"
 exit $?
diff --git a/hadoop-ozone/dev-support/checks/author.sh 
b/hadoop-ozone/dev-support/checks/author.sh
index 43caa70..56d15a5 100755
--- a/hadoop-ozone/dev-support/checks/author.sh
+++ b/hadoop-ozone/dev-support/checks/author.sh
@@ -13,7 +13,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-mkdir -p ./target
 grep -r --include="*.java" "@author" .
 if [ $? -gt 0 ]; then
   exit 0
diff --git a/hadoop-ozone/dev-support/checks/build.sh 
b/hadoop-ozone/dev-support/checks/build.sh
index 6a7811e..71bf778 100755
--- a/hadoop-ozone/dev-support/checks/build.sh
+++ b/hadoop-ozone/dev-support/checks/build.sh
@@ -14,5 +14,5 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 export MAVEN_OPTS="-Xmx4096m"
-mvn -am -pl :hadoop-ozone-dist -P hdds -Dmaven.javadoc.skip=true -DskipTests 
clean install
+mvn -B -f pom.ozone.xml -Dmaven.javadoc.skip=true -DskipTests clean install
 exit $?
diff --git a/hadoop-ozone/dev-support/checks/checkstyle.sh 
b/hadoop-ozone/dev-support/checks/checkstyle.sh
index 0d80fbc..323cbc8 100755
--- a/hadoop-ozone/dev-support/checks/checkstyle.sh
+++ b/hadoop-ozone/dev-support/checks/checkstyle.sh
@@ -13,7 +13,10 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-mvn -fn checkstyle:check -am -pl :hadoop-ozone-dist -Phdds
+mvn -B -fn checkstyle:check -f pom.ozone.xml
+
+#Print out the exact violations with parsing XML results with sed
+find -name checkstyle-errors.xml | xargs sed  '$!N; //d'
 
 violations=$(grep -r error --include checkstyle-errors.xml .| wc -l)
 if [[ $violations -gt 0 ]]; then
diff --git a/hadoop-ozone/dev-support/checks/findbugs.sh 
b/hadoop-ozone/dev-support/checks/findbugs.sh
index 1328492..c8bd40b 100755
--- a/hadoop-ozone/dev-support/checks/findbugs.sh
+++ b/hadoop-ozone/dev-support/checks/findbugs.sh
@@ -20,7 +20,7 @@ mkdir -p ./target
 rm "$FINDBUGS_ALL_FILE" || true
 touch "$FINDBUGS_ALL_FILE"
 
-mvn -fn findbugs:check -Dfindbugs.failOnError=false  -am -pl 
:hadoop-ozone-dist -Phdds
+mvn -B compile -fn findbugs:check -Dfindbugs.failOnError=false  -f 
pom.ozone.xml
 
 find hadoop-ozone -name findbugsXml.xml | xargs -n1 convertXmlToText | tee -a 
"${FINDBUGS_ALL_FILE}"
 find hadoop-hdds -name findbugsXml.xml | xargs -n1 convertXmlToText | tee -a 
"${FINDBUGS_ALL_FILE}"
diff --git a/hadoop-ozone/dev-support/checks/unit.sh 
b/hadoop-ozone/dev-support/checks/integration.sh
similarity index 88%
copy from hadoop-ozone/dev-support/checks/unit.sh
copy to hadoop-ozone/dev-support/checks/integration.sh
index d839f22..8270d4f 100755
--- a/hadoop-ozone/dev-support/checks/unit.sh
+++ b/hadoop-ozone/dev-support/che

[hadoop] 02/02: HDDS-1791. Update network-tests/src/test/blockade/README.md file

2019-07-12 Thread elek

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 5b99872d77371c3472c9ad80e2a1e24facf10d2c
Author: Nanda kumar 
AuthorDate: Fri Jul 12 14:38:22 2019 +0200

HDDS-1791. Update network-tests/src/test/blockade/README.md file

Closes #1083

(cherry picked from commit 7b8177ba0fe1a5a0b322af75e77547baac761865)
---
 .../network-tests/src/test/blockade/README.md  | 40 ++
 1 file changed, 11 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md 
b/hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md
index b9f3c73..7fb62b3 100644
--- 
a/hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md
+++ 
b/hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md
@@ -16,45 +16,27 @@
 Following python packages need to be installed before running the tests :
 
 1. blockade
-2. pytest==2.8.7
+2. pytest==3.2.0
 
 Running test as part of the maven build:
 
-mvn clean verify -Pit
-
-Running test as part of the released binary:
-
-You can execute all blockade tests with following command-lines:
-
 ```
-cd $DIRECTORY_OF_OZONE
-python -m pytest -s  tests/blockade/
+mvn clean verify -Pit
 ```
 
-You can also execute fewer blockade tests with following command-lines:
-
-```
-cd $DIRECTORY_OF_OZONE
-python -m pytest -s  tests/blockade/
-e.g: python -m pytest -s tests/blockade/test_blockade_datanode_isolation.py
-```
+Running test as part of the released binary:
 
-You can change the default 'sleep' interval in the tests with following
-command-lines:
+You can execute all blockade tests with following command:
 
 ```
-cd $DIRECTORY_OF_OZONE
-python -m pytest -s  tests/blockade/ --containerStatusSleep=
-
-e.g: python -m pytest -s  tests/blockade/ --containerStatusSleep=720
+cd $OZONE_HOME
+python -m pytest tests/blockade
 ```
 
-By default, second phase of the tests will not be run.
-In order to run the second phase of the tests, you can run following
-command-lines:
-
-```
-cd $DIRECTORY_OF_OZONE
-python -m pytest -s  tests/blockade/ --runSecondPhase=true
+You can also execute specific blockade tests with following command:
 
 ```
+cd $OZONE_HOME
+python -m pytest tests/blockade/< PATH TO PYTHON FILE >
+e.g: python -m pytest tests/blockade/test_blockade_datanode_isolation.py
+```
\ No newline at end of file





[hadoop] 01/02: HDDS-1778. Fix existing blockade tests. (#1068)

2019-07-12 Thread elek

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 7fd0953a30d9aa06fb83ce8dba19fa68d1c820d1
Author: Nanda kumar 
AuthorDate: Wed Jul 10 22:13:59 2019 +0530

HDDS-1778. Fix existing blockade tests. (#1068)

(cherry picked from commit efb916457fc5af868cb7003ee99e0ce3a050a4d2)
---
 .../src/main/compose/ozoneblockade/docker-config   |   3 +
 .../test/blockade/clusterUtils/cluster_utils.py| 335 -
 .../blockade/{blockadeUtils => ozone}/blockade.py  |  16 +-
 .../src/test/blockade/ozone/client.py  |  75 +++
 .../src/test/blockade/ozone/cluster.py | 526 ++---
 .../__init__.py => ozone/constants.py} |  11 +-
 .../src/test/blockade/ozone/container.py   | 117 +
 .../__init__.py => ozone/exceptions.py}|  10 +-
 .../src/test/blockade/{ => ozone}/util.py  |  56 ++-
 .../test/blockade/test_blockade_client_failure.py  | 158 +++
 .../blockade/test_blockade_datanode_isolation.py   | 228 -
 .../src/test/blockade/test_blockade_flaky.py   |  42 +-
 .../test/blockade/test_blockade_mixed_failure.py   | 240 --
 ...t_blockade_mixed_failure_three_nodes_isolate.py | 357 ++
 .../test_blockade_mixed_failure_two_nodes.py   | 275 +--
 .../test/blockade/test_blockade_scm_isolation.py   | 252 --
 16 files changed, 1185 insertions(+), 1516 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config 
b/hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config
index f5e6a92..8347998 100644
--- a/hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config
+++ b/hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config
@@ -23,12 +23,15 @@ OZONE-SITE.XML_ozone.scm.block.client.address=scm
 OZONE-SITE.XML_ozone.metadata.dirs=/data/metadata
 OZONE-SITE.XML_ozone.handler.type=distributed
 OZONE-SITE.XML_ozone.scm.client.address=scm
+OZONE-SITE.XML_ozone.client.max.retries=10
+OZONE-SITE.XML_ozone.scm.stale.node.interval=2m
 OZONE-SITE.XML_ozone.scm.dead.node.interval=5m
 OZONE-SITE.XML_ozone.replication=1
 OZONE-SITE.XML_hdds.datanode.dir=/data/hdds
 OZONE-SITE.XML_ozone.scm.pipeline.owner.container.count=1
 OZONE-SITE.XML_ozone.scm.pipeline.destroy.timeout=15s
 OZONE-SITE.XML_hdds.heartbeat.interval=2s
+OZONE-SITE.XML_hdds.scm.wait.time.after.safemode.exit=30s
 OZONE-SITE.XML_hdds.scm.replication.thread.interval=5s
 OZONE-SITE.XML_hdds.scm.replication.event.timeout=7s
 OZONE-SITE.XML_dfs.ratis.server.failure.duration=25s
diff --git 
a/hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/clusterUtils/cluster_utils.py
 
b/hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/clusterUtils/cluster_utils.py
deleted file mode 100644
index 53e3fa0..000
--- 
a/hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/clusterUtils/cluster_utils.py
+++ /dev/null
@@ -1,335 +0,0 @@
-#!/usr/bin/python
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-from subprocess import call
-
-import subprocess
-import logging
-import time
-import re
-import os
-import yaml
-
-
-logger = logging.getLogger(__name__)
-
-
-class ClusterUtils(object):
-  """
-  This class contains all cluster related operations.
-  """
-
-  @classmethod
-  def cluster_setup(cls, docker_compose_file, datanode_count,
-destroy_existing_cluster=True):
-"""start a blockade cluster"""
-logger.info("compose file :%s", docker_compose_file)
-logger.info("number of DNs :%d", datanode_count)
-if destroy_existing_cluster:
-  call(["docker-compose", "-f", docker_compose_file, "down"])
-call(["docker-compose", "-f", docker_compose_file, "up", "-d",
-  "--scale", "datanode=" + str(datanode_count)])
-
-logger.info("Waiting 30s for cluster start up...")
-time.sleep(30)
-output = subprocess.check_output(["docker-com

[hadoop] branch ozone-0.4.1 updated (24ed608 -> 5b99872)

2019-07-12 Thread elek

elek pushed a change to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 24ed608   HDDS-1752. ConcurrentModificationException while handling 
DeadNodeHandler event. (#1080)
 new 7fd0953  HDDS-1778. Fix existing blockade tests. (#1068)
 new 5b99872  HDDS-1791. Update network-tests/src/test/blockade/README.md 
file

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../src/main/compose/ozoneblockade/docker-config   |   3 +
 .../network-tests/src/test/blockade/README.md  |  40 +-
 .../src/test/blockade/blockadeUtils/__init__.py|  14 -
 .../src/test/blockade/clusterUtils/__init__.py |  14 -
 .../test/blockade/clusterUtils/cluster_utils.py| 335 -
 .../blockade/{blockadeUtils => ozone}/blockade.py  |  16 +-
 .../src/test/blockade/ozone/client.py  |  75 +++
 .../src/test/blockade/ozone/cluster.py | 526 ++---
 .../src/test/blockade/ozone/constants.py   |  15 +-
 .../src/test/blockade/ozone/container.py   | 117 +
 .../src/test/blockade/ozone/exceptions.py  |  14 +-
 .../src/test/blockade/{ => ozone}/util.py  |  56 ++-
 .../test/blockade/test_blockade_client_failure.py  | 158 +++
 .../blockade/test_blockade_datanode_isolation.py   | 228 -
 .../src/test/blockade/test_blockade_flaky.py   |  42 +-
 .../test/blockade/test_blockade_mixed_failure.py   | 240 --
 ...t_blockade_mixed_failure_three_nodes_isolate.py | 357 ++
 .../test_blockade_mixed_failure_two_nodes.py   | 275 +--
 .../test/blockade/test_blockade_scm_isolation.py   | 252 --
 19 files changed, 1190 insertions(+), 1587 deletions(-)
 delete mode 100644 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/blockadeUtils/__init__.py
 delete mode 100644 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/clusterUtils/__init__.py
 delete mode 100644 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/clusterUtils/cluster_utils.py
 rename 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/{blockadeUtils
 => ozone}/blockade.py (86%)
 create mode 100644 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/ozone/client.py
 copy 
hadoop-common-project/hadoop-common/src/test/scripts/process_with_sigterm_trap.sh
 => 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/ozone/constants.py
 (81%)
 create mode 100644 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/ozone/container.py
 copy 
hadoop-common-project/hadoop-common/src/test/scripts/process_with_sigterm_trap.sh
 => 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/ozone/exceptions.py
 (82%)
 rename hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/{ => 
ozone}/util.py (54%)





[hadoop] branch trunk updated: HDDS-1790. Fix checkstyle issues in TestDataScrubber

2019-07-12 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 190e434  HDDS-1790. Fix checkstyle issues in TestDataScrubber
190e434 is described below

commit 190e4349d77e7ae0601ff81a70c7569c72833ee3
Author: Nanda kumar 
AuthorDate: Fri Jul 12 14:42:29 2019 +0200

HDDS-1790. Fix checkstyle issues in TestDataScrubber

Closes #1082
---
 .../apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java| 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java
index abed3eb..0f35e50 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java
@@ -154,17 +154,20 @@ public class TestDataScrubber {
 Assert.assertTrue(cs.containerCount() > 0);
 
 // delete the chunks directory.
-File chunksDir = new File(c.getContainerData().getContainerPath(), 
"chunks");
+File chunksDir = new File(c.getContainerData().getContainerPath(),
+"chunks");
 deleteDirectory(chunksDir);
 Assert.assertFalse(chunksDir.exists());
 
-ContainerScrubber sb = new ContainerScrubber(ozoneConfig, 
oc.getController());
+ContainerScrubber sb = new ContainerScrubber(ozoneConfig,
+oc.getController());
 sb.scrub(c);
 
 // wait for the incremental container report to propagate to SCM
 Thread.sleep(5000);
 
-ContainerManager cm = 
cluster.getStorageContainerManager().getContainerManager();
+ContainerManager cm = cluster.getStorageContainerManager()
+.getContainerManager();
 Set replicas = cm.getContainerReplicas(
 ContainerID.valueof(c.getContainerData().getContainerID()));
 Assert.assertEquals(1, replicas.size());
@@ -184,7 +187,8 @@ public class TestDataScrubber {
   }
 
   private boolean verifyRatisReplication(String volumeName, String bucketName,
- String keyName, ReplicationType type, 
ReplicationFactor factor)
+ String keyName, ReplicationType type,
+ ReplicationFactor factor)
   throws IOException {
 OmKeyArgs keyArgs = new OmKeyArgs.Builder()
 .setVolumeName(volumeName)





[hadoop] branch trunk updated: HDDS-1791. Update network-tests/src/test/blockade/README.md file

2019-07-12 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7b8177b  HDDS-1791. Update network-tests/src/test/blockade/README.md 
file
7b8177b is described below

commit 7b8177ba0fe1a5a0b322af75e77547baac761865
Author: Nanda kumar 
AuthorDate: Fri Jul 12 14:38:22 2019 +0200

HDDS-1791. Update network-tests/src/test/blockade/README.md file

Closes #1083
---
 .../network-tests/src/test/blockade/README.md  | 40 ++
 1 file changed, 11 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md 
b/hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md
index b9f3c73..7fb62b3 100644
--- 
a/hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md
+++ 
b/hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md
@@ -16,45 +16,27 @@
 Following python packages need to be installed before running the tests :
 
 1. blockade
-2. pytest==2.8.7
+2. pytest==3.2.0
 
 Running test as part of the maven build:
 
-mvn clean verify -Pit
-
-Running test as part of the released binary:
-
-You can execute all blockade tests with following command-lines:
-
 ```
-cd $DIRECTORY_OF_OZONE
-python -m pytest -s  tests/blockade/
+mvn clean verify -Pit
 ```
 
-You can also execute fewer blockade tests with following command-lines:
-
-```
-cd $DIRECTORY_OF_OZONE
-python -m pytest -s  tests/blockade/
-e.g: python -m pytest -s tests/blockade/test_blockade_datanode_isolation.py
-```
+Running test as part of the released binary:
 
-You can change the default 'sleep' interval in the tests with following
-command-lines:
+You can execute all blockade tests with following command:
 
 ```
-cd $DIRECTORY_OF_OZONE
-python -m pytest -s  tests/blockade/ --containerStatusSleep=
-
-e.g: python -m pytest -s  tests/blockade/ --containerStatusSleep=720
+cd $OZONE_HOME
+python -m pytest tests/blockade
 ```
 
-By default, second phase of the tests will not be run.
-In order to run the second phase of the tests, you can run following
-command-lines:
-
-```
-cd $DIRECTORY_OF_OZONE
-python -m pytest -s  tests/blockade/ --runSecondPhase=true
+You can also execute specific blockade tests with following command:
 
 ```
+cd $OZONE_HOME
+python -m pytest tests/blockade/< PATH TO PYTHON FILE >
+e.g: python -m pytest tests/blockade/test_blockade_datanode_isolation.py
+```
\ No newline at end of file





[hadoop] branch trunk updated: HDDS-1384. TestBlockOutputStreamWithFailures is failing

2019-07-12 Thread elek

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 9119ed0  HDDS-1384. TestBlockOutputStreamWithFailures is failing
9119ed0 is described below

commit 9119ed07ff32143b548316bf69c49695196f8422
Author: Márton Elek 
AuthorDate: Thu Jul 11 12:46:39 2019 +0200

HDDS-1384. TestBlockOutputStreamWithFailures is failing

Closes #1029
---
 .../common/transport/server/XceiverServerGrpc.java | 37 +++---
 .../transport/server/ratis/XceiverServerRatis.java | 38 ---
 .../apache/hadoop/ozone/TestMiniOzoneCluster.java  | 56 --
 3 files changed, 83 insertions(+), 48 deletions(-)

diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
index 3f262a1..78c941e 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
@@ -21,6 +21,7 @@ package 
org.apache.hadoop.ozone.container.common.transport.server;
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails.Port.Name;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
 .ContainerCommandRequestProto;
@@ -51,9 +52,6 @@ import org.slf4j.LoggerFactory;
 
 import java.io.File;
 import java.io.IOException;
-import java.net.InetSocketAddress;
-import java.net.ServerSocket;
-import java.net.SocketAddress;
 import java.util.Collections;
 import java.util.List;
 import java.util.UUID;
@@ -71,6 +69,8 @@ public final class XceiverServerGrpc extends XceiverServer {
   private Server server;
   private final ContainerDispatcher storageContainer;
   private boolean isStarted;
+  private DatanodeDetails datanodeDetails;
+
 
   /**
* Constructs a Grpc server class.
@@ -84,25 +84,15 @@ public final class XceiverServerGrpc extends XceiverServer {
 Preconditions.checkNotNull(conf);
 
 this.id = datanodeDetails.getUuid();
+this.datanodeDetails = datanodeDetails;
 this.port = conf.getInt(OzoneConfigKeys.DFS_CONTAINER_IPC_PORT,
 OzoneConfigKeys.DFS_CONTAINER_IPC_PORT_DEFAULT);
-// Get an available port on current node and
-// use that as the container port
+
 if (conf.getBoolean(OzoneConfigKeys.DFS_CONTAINER_IPC_RANDOM_PORT,
 OzoneConfigKeys.DFS_CONTAINER_IPC_RANDOM_PORT_DEFAULT)) {
-  try (ServerSocket socket = new ServerSocket()) {
-socket.setReuseAddress(true);
-SocketAddress address = new InetSocketAddress(0);
-socket.bind(address);
-this.port = socket.getLocalPort();
-LOG.info("Found a free port for the server : {}", this.port);
-  } catch (IOException e) {
-LOG.error("Unable find a random free port for the server, "
-+ "fallback to use default port {}", this.port, e);
-  }
+  this.port = 0;
 }
-datanodeDetails.setPort(
-DatanodeDetails.newPort(DatanodeDetails.Port.Name.STANDALONE, port));
+
 NettyServerBuilder nettyServerBuilder =
 ((NettyServerBuilder) ServerBuilder.forPort(port))
 .maxInboundMessageSize(OzoneConsts.OZONE_SCM_CHUNK_MAX_SIZE);
@@ -165,6 +155,19 @@ public final class XceiverServerGrpc extends XceiverServer {
   public void start() throws IOException {
 if (!isStarted) {
   server.start();
+  int realPort = server.getPort();
+
+  if (port == 0) {
+LOG.info("{} {} is started using port {}", getClass().getSimpleName(),
+this.id, realPort);
+port = realPort;
+  }
+
+  //register the real port to the datanode details.
+  datanodeDetails.setPort(DatanodeDetails
+  .newPort(Name.STANDALONE,
+  realPort));
+
   isStarted = true;
 }
   }
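The hunk above replaces the old up-front free-port probe (which is racy: the port can be taken between closing the probe socket and starting the server) with the standard ephemeral-port pattern: bind to port 0, let the kernel assign a free port, and read the real port back after the server has started. A minimal standalone sketch of that pattern using plain `java.net.ServerSocket` (not Ozone's gRPC server, the class name is hypothetical):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    public static void main(String[] args) throws IOException {
        try (ServerSocket socket = new ServerSocket()) {
            socket.setReuseAddress(true);
            // Port 0 asks the kernel to pick any free port at bind time.
            socket.bind(new InetSocketAddress(0));
            // The real port is only known after the bind succeeds --
            // analogous to calling server.getPort() after server.start().
            int realPort = socket.getLocalPort();
            System.out.println("bound to port " + realPort);
        }
    }
}
```

Because the socket stays bound while the port is in use, there is no window in which another process can steal it, which is why the commit registers the port with `DatanodeDetails` only inside `start()`.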
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
index f6ecb54..72f6ab4 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverS

[hadoop] 04/05: HDDS-1764. Fix hidden errors in acceptance tests

2019-07-10 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 5dcaa14ff52f604dd2db4a4d37de406935f66085
Author: Márton Elek 
AuthorDate: Wed Jul 10 13:22:51 2019 +0200

HDDS-1764. Fix hidden errors in acceptance tests

Closes #1059

(cherry picked from commit 93824886e9c3323a77e60c0ef5b5b706cdfe1ea8)
---
 hadoop-ozone/dist/src/main/compose/ozone-mr/docker-compose.yaml   | 2 +-
 hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml | 4 ++--
 hadoop-ozone/dist/src/main/compose/test-all.sh| 3 ++-
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/compose/ozone-mr/docker-compose.yaml b/hadoop-ozone/dist/src/main/compose/ozone-mr/docker-compose.yaml
index 652d64e..a0d138f 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-mr/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozone-mr/docker-compose.yaml
@@ -49,7 +49,7 @@ services:
   - ./docker-config
 command: ["/opt/hadoop/bin/ozone","s3g"]
   scm:
-image: apache/ozone-runner:latest:${HADOOP_RUNNER_VERSION}
+image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
 hostname: scm
 volumes:
   - ../..:/opt/hadoop
diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml b/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
index 135da03..31f74f0 100644
--- a/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
@@ -64,7 +64,7 @@ services:
   - ./docker-config
 command: ["/opt/hadoop/bin/ozone","s3g"]
   scm:
-image: apache/ozone-runner:latest:${HADOOP_RUNNER_VERSION}
+image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
 hostname: scm
 volumes:
   - ../..:/opt/hadoop
@@ -120,4 +120,4 @@ services:
   - 4040:4040
 env_file:
   - docker-config
-command: ["watch","-n","10","ls"]
\ No newline at end of file
+command: ["watch","-n","10","ls"]
diff --git a/hadoop-ozone/dist/src/main/compose/test-all.sh b/hadoop-ozone/dist/src/main/compose/test-all.sh
index 5039207..7883e87 100755
--- a/hadoop-ozone/dist/src/main/compose/test-all.sh
+++ b/hadoop-ozone/dist/src/main/compose/test-all.sh
@@ -37,7 +37,8 @@ for test in $(find "$SCRIPT_DIR" -name test.sh); do
   ./test.sh
   ret=$?
   if [[ $ret -ne 0 ]]; then
-  RESULT=-1
+  RESULT=1
+  echo "ERROR: Test execution of $(dirname "$test") is FAILED"
   fi
   RESULT_DIR="$(dirname "$test")/result"
   cp "$RESULT_DIR"/robot-*.xml "$ALL_RESULT_DIR"
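The test-all.sh hunk fixes two hidden problems: `RESULT=-1` is not a valid process exit status (shells reduce it modulo 256, and `-1` in particular is an unreliable way to signal failure), and a failing test script was previously silent. The fix records `RESULT=1` and logs which test directory failed while still running the remaining tests. The same accumulate-and-continue pattern, sketched in Java with hypothetical names (`runAll` stands in for the shell loop, the exit codes for each `./test.sh` run):

```java
public class TestAllDemo {
    // Returns 0 if every test passed, 1 if any failed -- a valid exit status,
    // unlike -1. Failures are logged but do not stop the remaining tests.
    static int runAll(int[] exitCodes) {
        int result = 0;
        for (int i = 0; i < exitCodes.length; i++) {
            if (exitCodes[i] != 0) {
                result = 1; // remember the failure, keep going
                System.out.println("ERROR: test #" + i + " FAILED");
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // prints "ERROR: test #1 FAILED" then 1
        System.out.println(runAll(new int[]{0, 2, 0}));
    }
}
```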


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch ozone-0.4.1 updated (b7b00bd -> 5e809ef)

2019-07-10 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from b7b00bd  HDDS-1742. Merge ozone-perf and ozonetrace example clusters
 new 6791a51  HDDS-1698. Switch to use apache/ozone-runner in the compose/Dockerfile (#979)
 new 7ac6ffd  HDDS-1716. Smoketest results are generated with an internal user
 new 661e30a  HDDS-1741. Fix prometheus configuration in ozoneperf example cluster
 new 5dcaa14  HDDS-1764. Fix hidden errors in acceptance tests
 new 5e809ef  HDDS-1525. Mapreduce failure when using Hadoop 2.7.5

The 5 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../apache/hadoop/ozone/client/rpc/RpcClient.java  |   8 +-
 .../dist/dev-support/bin/dist-layout-stitching |   4 +-
 hadoop-ozone/dist/pom.xml  |  18 +++-
 hadoop-ozone/dist/src/main/compose/README.md   |   6 --
 .../main/compose/common/prometheus/prometheus.yml  |   2 +-
 hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env |   2 +-
 .../main/compose/ozone-hdfs/docker-compose.yaml|   6 +-
 .../ozone-mr/{docker-config => common-config}  |  44 -
 .../src/main/compose/ozone-mr/{ => hadoop27}/.env  |   9 +-
 .../ozone-mr/{ => hadoop27}/docker-compose.yaml|  51 +-
 .../ozone-mr/{.env => hadoop27/docker-config}  |   6 +-
 .../compose/{ozonefs => ozone-mr/hadoop27}/test.sh |  17 ++--
 .../src/main/compose/ozone-mr/{ => hadoop31}/.env  |   9 +-
 .../ozone-mr/{ => hadoop31}/docker-compose.yaml|  36 +++
 .../ozone-mr/{.env => hadoop31/docker-config}  |   6 +-
 .../main/compose/ozone-mr/{ => hadoop31}/test.sh   |  15 ++-
 .../src/main/compose/ozone-mr/{ => hadoop32}/.env  |   3 +-
 .../ozone-mr/{ => hadoop32}/docker-compose.yaml|  49 ++
 .../ozone-mr/{.env => hadoop32/docker-config}  |   6 +-
 .../main/compose/ozone-mr/{ => hadoop32}/test.sh   |   8 +-
 .../dist/src/main/compose/ozone-net-topology/.env  |   1 +
 .../compose/ozone-net-topology/docker-compose.yaml |  12 +--
 .../dist/src/main/compose/ozone-om-ha/.env |   2 +-
 .../main/compose/ozone-om-ha/docker-compose.yaml   |  10 +-
 .../dist/src/main/compose/ozone-recon/.env |   2 +-
 .../main/compose/ozone-recon/docker-compose.yaml   |   8 +-
 hadoop-ozone/dist/src/main/compose/ozone/.env  |   2 +-
 .../src/main/compose/ozone/docker-compose.yaml |   6 +-
 .../dist/src/main/compose/ozoneblockade/.env   |   2 +-
 .../main/compose/ozoneblockade/docker-compose.yaml |   8 +-
 hadoop-ozone/dist/src/main/compose/ozonefs/.env|  18 
 .../src/main/compose/ozonefs/docker-compose.yaml   | 100 ---
 .../dist/src/main/compose/ozonefs/docker-config|  38 
 .../dist/src/main/compose/ozonefs/hadoopo3fs.robot |  56 ---
 hadoop-ozone/dist/src/main/compose/ozoneperf/.env  |   2 +-
 .../src/main/compose/ozoneperf/docker-compose.yaml |   8 +-
 hadoop-ozone/dist/src/main/compose/ozones3/.env|   2 +-
 .../src/main/compose/ozones3/docker-compose.yaml   |   8 +-
 .../dist/src/main/compose/ozonescripts/.env|   2 +-
 .../dist/src/main/compose/ozonescripts/Dockerfile  |   2 +-
 .../dist/src/main/compose/ozonesecure-mr/.env  |   2 +-
 .../compose/ozonesecure-mr/docker-compose.yaml |  10 +-
 .../dist/src/main/compose/ozonesecure/.env |   2 +-
 .../main/compose/ozonesecure/docker-compose.yaml   |   8 +-
 hadoop-ozone/dist/src/main/compose/test-all.sh |   7 +-
 hadoop-ozone/dist/src/main/compose/testlib.sh  |  43 ++--
 hadoop-ozone/dist/src/main/{ => docker}/Dockerfile |   2 +-
 .../src/main/smoketest/ozonefs/hadoopo3fs.robot|   2 +-
 hadoop-ozone/ozonefs-lib-legacy/pom.xml|  30 +-
 .../java/org/apache/hadoop/fs/ozone/BasicOzFs.java |  45 +
 .../fs/ozone/BasicOzoneClientAdapterImpl.java  |  56 +--
 .../hadoop/fs/ozone/BasicOzoneFileSystem.java  |  52 +++---
 .../apache/hadoop/fs/ozone/FileStatusAdapter.java  | 108 +
 .../hadoop/fs/ozone/FilteredClassLoader.java   |   6 +-
 .../apache/hadoop/fs/ozone/OzoneClientAdapter.java |  20 ++--
 55 files changed, 529 insertions(+), 458 deletions(-)
 rename hadoop-ozone/dist/src/main/compose/ozone-mr/{docker-config => common-config} (63%)
 copy hadoop-ozone/dist/src/main/compose/ozone-mr/{ => hadoop27}/.env (75%)
 copy hadoop-ozone/dist/src/main/compose/ozone-mr/{ => hadoop27}/docker-compose.yaml (69%)
 copy hadoop-ozone/dist/src/main/compose/ozone-mr/{.env => hadoop27/docker-config} (68%)
 rename hadoop-ozone/dist/src/main/compose/{ozonefs => ozone-mr/hadoop27}/test.sh (65%)
 copy hadoop-ozone/dist/src/mai