[hadoop-ozone] branch master updated: HDDS-4362. Change hadoop32 test to use 3.2 image (#1521)

2020-10-26 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new c1069a6  HDDS-4362. Change hadoop32 test to use 3.2 image (#1521)
c1069a6 is described below

commit c1069a6319efb84e4e22c2babc6c237e292b0746
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Mon Oct 26 14:25:46 2020 +0100

HDDS-4362. Change hadoop32 test to use 3.2 image (#1521)
---
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/.env | 6 +++---
 hadoop-ozone/dist/src/main/compose/ozonesecure-mr/.env| 3 ++-
 .../dist/src/main/compose/ozonesecure-mr/docker-compose.yaml  | 8 
 hadoop-ozone/dist/src/main/compose/ozonesecure-mr/test.sh | 1 +
 4 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/.env 
b/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/.env
index 87e7cce..602a96f 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/.env
+++ b/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/.env
@@ -15,7 +15,7 @@
 # limitations under the License.
 
 HDDS_VERSION=@hdds.version@
-HADOOP_IMAGE=apache/hadoop
-HADOOP_VERSION=3
+HADOOP_IMAGE=flokkr/hadoop
+HADOOP_VERSION=3.2.1
 OZONE_RUNNER_VERSION=@docker.ozone-runner.version@
-HADOOP_OPTS=
\ No newline at end of file
+HADOOP_OPTS=
diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/.env 
b/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/.env
index 5aa2777..b4cd2f2 100644
--- a/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/.env
+++ b/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/.env
@@ -15,6 +15,7 @@
 # limitations under the License.
 
 HDDS_VERSION=${hdds.version}
-HADOOP_VERSION=3
+HADOOP_IMAGE=flokkr/hadoop
+HADOOP_VERSION=3.2.1
 OZONE_RUNNER_VERSION=${docker.ozone-runner.version}
 HADOOP_OPTS=
diff --git 
a/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
index 7854f08..eecc34b 100644
--- a/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
@@ -27,7 +27,7 @@ services:
 volumes:
   - ../..:/opt/hadoop
   kms:
-image: apache/hadoop:${HADOOP_VERSION}
+image: ${HADOOP_IMAGE}:${HADOOP_VERSION}
 networks:
   - ozone
 ports:
@@ -100,7 +100,7 @@ services:
   HADOOP_OPTS: ${HADOOP_OPTS}
 command: ["/opt/hadoop/bin/ozone","scm"]
   rm:
-image: apache/hadoop:${HADOOP_VERSION}
+image: ${HADOOP_IMAGE}:${HADOOP_VERSION}
 hostname: rm
 networks:
   - ozone
@@ -115,7 +115,7 @@ services:
   KERBEROS_KEYTABS: rm HTTP hadoop
 command: ["yarn", "resourcemanager"]
   nm:
-image: apache/hadoop:${HADOOP_VERSION}
+image: ${HADOOP_IMAGE}:${HADOOP_VERSION}
 hostname: nm
 networks:
   - ozone
@@ -129,7 +129,7 @@ services:
   KERBEROS_KEYTABS: nm HTTP
 command: ["yarn","nodemanager"]
   jhs:
-image: apache/hadoop:${HADOOP_VERSION}
+image: ${HADOOP_IMAGE}:${HADOOP_VERSION}
 container_name: jhs
 hostname: jhs
 networks:
diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/test.sh 
b/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/test.sh
index 3763397..11989fd 100755
--- a/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/test.sh
+++ b/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/test.sh
@@ -35,6 +35,7 @@ export OZONE_DIR=/opt/ozone
 # shellcheck source=/dev/null
 source "$COMPOSE_DIR/../testlib.sh"
 
+execute_command_in_container rm sudo yum install -y krb5-workstation
 execute_robot_test rm kinit-hadoop.robot
 
 for scheme in o3fs ofs; do


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-4311. Type-safe config design doc points to OM HA (#1477)

2020-10-09 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new a1d53b0  HDDS-4311. Type-safe config design doc points to OM HA (#1477)
a1d53b0 is described below

commit a1d53b0781f5a9c89b665210e6853ed551892e47
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Fri Oct 9 14:42:45 2020 +0200

HDDS-4311. Type-safe config design doc points to OM HA (#1477)
---
 hadoop-hdds/docs/content/design/typesafeconfig.md | 10 +++---
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/hadoop-hdds/docs/content/design/typesafeconfig.md 
b/hadoop-hdds/docs/content/design/typesafeconfig.md
index 77a3b4d..dfe5ef0 100644
--- a/hadoop-hdds/docs/content/design/typesafeconfig.md
+++ b/hadoop-hdds/docs/content/design/typesafeconfig.md
@@ -2,7 +2,7 @@
 title: Type-safe configuration API
 summary: Inject configuration values based on annotations instead of using 
constants and Hadoop API
 date: 2019-04-25
-jira: HDDS-505
+jira: HDDS-1466
 status: implemented
 author: Anu Engineer, Marton Elek
 ---
@@ -22,12 +22,8 @@ author: Anu Engineer, Marton Elek
 
 # Abstract
 
- HA for Ozone Manager with the help of Ratis. High performance operation with 
caching and double-buffer.
+ Generate configuration from annotated plain Java objects to make 
configuration more structured and type safe.
  
 # Link
 
- * 
https://issues.apache.org/jira/secure/attachment/12940314/OzoneManager%20HA.pdf
-
- * 
https://issues.apache.org/jira/secure/attachment/12990063/OM%20HA%20Cache%20Design.pdf
-
- * 
https://issues.apache.org/jira/secure/attachment/12973260/Handling%20Write%20Requests%20with%20OM%20HA.pdf
\ No newline at end of file
+ * https://issues.apache.org/jira/secure/attachment/12966991/typesafe.pdf
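For readers who only see this archive, a minimal sketch of the annotation-based style the design document describes, assuming the @ConfigGroup/@Config annotations and OzoneConfiguration#getObject from hadoop-hdds; the ExampleClientConfig class, its key, and the exact annotation attributes shown here are illustrative, not part of this commit.

import org.apache.hadoop.hdds.conf.Config;
import org.apache.hadoop.hdds.conf.ConfigGroup;
import org.apache.hadoop.hdds.conf.ConfigTag;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

// Sketch only: configuration keys are declared on a plain Java object instead
// of string constants, and values are injected based on the annotations.
@ConfigGroup(prefix = "ozone.example.client")
public class ExampleClientConfig {

  private int retryCount;

  @Config(key = "retry.count",
      defaultValue = "3",
      tags = {ConfigTag.OZONE},
      description = "Number of retries used by the example client.")
  public void setRetryCount(int retryCount) {
    this.retryCount = retryCount;
  }

  public int getRetryCount() {
    return retryCount;
  }

  public static void main(String[] args) {
    // Resolves ozone.example.client.retry.count, falling back to the default.
    ExampleClientConfig config =
        new OzoneConfiguration().getObject(ExampleClientConfig.class);
    System.out.println(config.getRetryCount());
  }
}

The annotated object replaces ad-hoc string-keyed lookups, which is the structural and type-safety benefit the abstract above refers to.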


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-3814. Drop a column family through debug cli tool (#1083)

2020-10-09 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 7704cb5  HDDS-3814. Drop a column family through debug cli tool (#1083)
7704cb5 is described below

commit 7704cb5f6920d6bbf6b446c28a2b3fe4408e0568
Author: maobaolong 
AuthorDate: Fri Oct 9 20:39:44 2020 +0800

HDDS-3814. Drop a column family through debug cli tool (#1083)
---
 .../org/apache/hadoop/ozone/debug/DBScanner.java   | 18 ++---
 .../org/apache/hadoop/ozone/debug/DropTable.java   | 81 ++
 .../apache/hadoop/ozone/debug/RocksDBUtils.java| 49 +
 3 files changed, 135 insertions(+), 13 deletions(-)

diff --git 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/DBScanner.java 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/DBScanner.java
index b1139df..1ceab42 100644
--- 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/DBScanner.java
+++ 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/DBScanner.java
@@ -37,7 +37,6 @@ import com.google.gson.GsonBuilder;
 import org.kohsuke.MetaInfServices;
 import org.rocksdb.ColumnFamilyDescriptor;
 import org.rocksdb.ColumnFamilyHandle;
-import org.rocksdb.Options;
 import org.rocksdb.RocksDB;
 import org.rocksdb.RocksIterator;
 import picocli.CommandLine;
@@ -150,19 +149,12 @@ public class DBScanner implements Callable<Void>, SubcommandWithParent {
 
   @Override
   public Void call() throws Exception {
-    List<ColumnFamilyDescriptor> cfs = new ArrayList<>();
+    List<ColumnFamilyDescriptor> cfs =
+        RocksDBUtils.getColumnFamilyDescriptors(parent.getDbPath());
+
     final List<ColumnFamilyHandle> columnFamilyHandleList =
-        new ArrayList<>();
-    List<byte[]> cfList = null;
-    cfList = RocksDB.listColumnFamilies(new Options(),
-        parent.getDbPath());
-    if (cfList != null) {
-      for (byte[] b : cfList) {
-        cfs.add(new ColumnFamilyDescriptor(b));
-      }
-    }
-    RocksDB rocksDB = null;
-    rocksDB = RocksDB.openReadOnly(parent.getDbPath(),
+        new ArrayList<>();
+    RocksDB rocksDB = RocksDB.openReadOnly(parent.getDbPath(),
         cfs, columnFamilyHandleList);
     this.printAppropriateTable(columnFamilyHandleList,
         rocksDB, parent.getDbPath());
diff --git 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/DropTable.java 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/DropTable.java
new file mode 100644
index 000..161f1b2
--- /dev/null
+++ 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/DropTable.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.debug;
+
+import org.apache.hadoop.hdds.cli.SubcommandWithParent;
+import org.rocksdb.ColumnFamilyDescriptor;
+import org.rocksdb.ColumnFamilyHandle;
+import org.rocksdb.RocksDB;
+import picocli.CommandLine;
+
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+/**
+ * Drop a column Family/Table in db.
+ */
+@CommandLine.Command(
+name = "drop_column_family",
+description = "drop column family in db."
+)
+public class DropTable implements Callable<Void>, SubcommandWithParent {
+
+  @CommandLine.Option(names = {"--column_family"},
+  description = "Table name")
+  private String tableName;
+
+  @CommandLine.ParentCommand
+  private RDBParser parent;
+
+  @Override
+  public Void call() throws Exception {
+    List<ColumnFamilyDescriptor> cfs =
+        RocksDBUtils.getColumnFamilyDescriptors(parent.getDbPath());
+    final List<ColumnFamilyHandle> columnFamilyHandleList =
+        new ArrayList<>();
+    try (RocksDB rocksDB = RocksDB.open(
+        parent.getDbPath(), cfs, columnFamilyHandleList)) {
+      byte[] nameBytes = tableName.getBytes(StandardCharsets.UTF_8);
+      ColumnFamilyHandle toBeDeletedCf = null;
+      for (ColumnFamilyHandle cf : columnFamilyHandleList) {
+        if (Arrays.equals(cf.getName(), nameBytes)) {
+
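The archived message is cut off at this point. As a standalone illustration of the RocksDB operation the new drop_column_family subcommand performs, here is a hedged sketch against the plain RocksDB Java API; it is not the remainder of the commit, and the DropColumnFamilySketch class and its dropByName method are invented for the example.

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.rocksdb.ColumnFamilyDescriptor;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public final class DropColumnFamilySketch {

  private DropColumnFamilySketch() {
  }

  /** Drops the column family named tableName from the RocksDB at dbPath. */
  public static void dropByName(String dbPath, String tableName)
      throws RocksDBException {
    RocksDB.loadLibrary();

    // RocksDB requires all existing column families to be listed when opening.
    List<ColumnFamilyDescriptor> cfs = new ArrayList<>();
    for (byte[] name : RocksDB.listColumnFamilies(new Options(), dbPath)) {
      cfs.add(new ColumnFamilyDescriptor(name));
    }

    List<ColumnFamilyHandle> handles = new ArrayList<>();
    try (RocksDB db = RocksDB.open(dbPath, cfs, handles)) {
      byte[] nameBytes = tableName.getBytes(StandardCharsets.UTF_8);
      for (ColumnFamilyHandle handle : handles) {
        if (Arrays.equals(handle.getName(), nameBytes)) {
          // Dropping the handle removes the whole table from the DB.
          db.dropColumnFamily(handle);
        }
      }
    }
  }
}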

[hadoop-ozone] branch HDDS-1880-Decom updated: HDDS-4300. Removed unneeded class DatanodeAdminNodeDetails (#1465)

2020-10-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch HDDS-1880-Decom
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/HDDS-1880-Decom by this push:
 new f43a370  HDDS-4300. Removed unneeded class DatanodeAdminNodeDetails 
(#1465)
f43a370 is described below

commit f43a370169f2d8cc2b8635ac9e026278157b16db
Author: Stephen O'Donnell 
AuthorDate: Mon Oct 5 14:42:41 2020 +0100

HDDS-4300. Removed unneeded class DatanodeAdminNodeDetails (#1465)
---
 .../hdds/scm/node/DatanodeAdminMonitorImpl.java| 105 
 .../hdds/scm/node/DatanodeAdminNodeDetails.java| 137 -
 .../hdds/scm/node/TestDatanodeAdminMonitor.java|  43 +++
 .../scm/node/TestDatanodeAdminNodeDetails.java |  81 
 4 files changed, 67 insertions(+), 299 deletions(-)

diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitorImpl.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitorImpl.java
index f9d1a32..0bbd13d 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitorImpl.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitorImpl.java
@@ -64,9 +64,9 @@ public class DatanodeAdminMonitorImpl implements 
DatanodeAdminMonitor {
   private EventPublisher eventQueue;
   private NodeManager nodeManager;
   private ReplicationManager replicationManager;
-  private Queue<DatanodeAdminNodeDetails> pendingNodes = new ArrayDeque();
-  private Queue<DatanodeAdminNodeDetails> cancelledNodes = new ArrayDeque();
-  private Set<DatanodeAdminNodeDetails> trackedNodes = new HashSet<>();
+  private Queue<DatanodeDetails> pendingNodes = new ArrayDeque();
+  private Queue<DatanodeDetails> cancelledNodes = new ArrayDeque();
+  private Set<DatanodeDetails> trackedNodes = new HashSet<>();
 
   private static final Logger LOG =
   LoggerFactory.getLogger(DatanodeAdminMonitorImpl.class);
@@ -93,10 +93,8 @@ public class DatanodeAdminMonitorImpl implements 
DatanodeAdminMonitor {
*/
   @Override
   public synchronized void startMonitoring(DatanodeDetails dn, int endInHours) 
{
-DatanodeAdminNodeDetails nodeDetails =
-new DatanodeAdminNodeDetails(dn, endInHours);
-cancelledNodes.remove(nodeDetails);
-pendingNodes.add(nodeDetails);
+cancelledNodes.remove(dn);
+pendingNodes.add(dn);
   }
 
   /**
@@ -108,9 +106,8 @@ public class DatanodeAdminMonitorImpl implements 
DatanodeAdminMonitor {
*/
   @Override
   public synchronized void stopMonitoring(DatanodeDetails dn) {
-DatanodeAdminNodeDetails nodeDetails = new DatanodeAdminNodeDetails(dn, 0);
-pendingNodes.remove(nodeDetails);
-cancelledNodes.add(nodeDetails);
+pendingNodes.remove(dn);
+cancelledNodes.add(dn);
   }
 
   /**
@@ -155,20 +152,19 @@ public class DatanodeAdminMonitorImpl implements 
DatanodeAdminMonitor {
   }
 
   @VisibleForTesting
-  public Set<DatanodeAdminNodeDetails> getTrackedNodes() {
+  public Set<DatanodeDetails> getTrackedNodes() {
 return trackedNodes;
   }
 
   private void processCancelledNodes() {
 while (!cancelledNodes.isEmpty()) {
-  DatanodeAdminNodeDetails dn = cancelledNodes.poll();
+  DatanodeDetails dn = cancelledNodes.poll();
   try {
 stopTrackingNode(dn);
 putNodeBackInService(dn);
-LOG.info("Recommissioned node {}", dn.getDatanodeDetails());
+LOG.info("Recommissioned node {}", dn);
   } catch (NodeNotFoundException e) {
-LOG.warn("Failed processing the cancel admin request for {}",
-dn.getDatanodeDetails(), e);
+LOG.warn("Failed processing the cancel admin request for {}", dn, e);
   }
 }
   }
@@ -180,11 +176,11 @@ public class DatanodeAdminMonitorImpl implements 
DatanodeAdminMonitor {
   }
 
   private void processTransitioningNodes() {
-    Iterator<DatanodeAdminNodeDetails> iterator = trackedNodes.iterator();
+    Iterator<DatanodeDetails> iterator = trackedNodes.iterator();
 while (iterator.hasNext()) {
-  DatanodeAdminNodeDetails dn = iterator.next();
+  DatanodeDetails dn = iterator.next();
   try {
-NodeStatus status = getNodeStatus(dn.getDatanodeDetails());
+NodeStatus status = getNodeStatus(dn);
 
 if (!shouldContinueWorkflow(dn, status)) {
   abortWorkflow(dn);
@@ -193,7 +189,7 @@ public class DatanodeAdminMonitorImpl implements 
DatanodeAdminMonitor {
 }
 
 if (status.isMaintenance()) {
-  if (dn.shouldMaintenanceEnd()) {
+  if (status.operationalStateExpired()) {
 completeMaintenance(dn);
 iterator.remove();
 continue;
@@ -205,12 +201,12 @@ public class DatanodeAdminMonitorImpl implements 
DatanodeAdminMonitor {
   // Ensure the DN has received and persisted the current maint
   // state.
   && status.getOperationalState()
-

[hadoop-ozone] branch master updated: HDDS-4156. add hierarchical layout to Chinese doc (#1368)

2020-10-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 19cb481  HDDS-4156. add hierarchical layout to Chinese doc (#1368)
19cb481 is described below

commit 19cb481896f091ec5c02fe4010a5ab428302f38f
Author: Huang-Mu Zheng 
AuthorDate: Mon Oct 5 21:30:24 2020 +0800

HDDS-4156. add hierarchical layout to Chinese doc (#1368)
---
 hadoop-hdds/docs/content/concept/Datanodes.zh.md   | 3 +++
 hadoop-hdds/docs/content/concept/Overview.zh.md| 5 +
 hadoop-hdds/docs/content/concept/OzoneManager.zh.md| 3 +++
 hadoop-hdds/docs/content/concept/StorageContainerManager.zh.md | 3 +++
 hadoop-hdds/docs/content/concept/_index.zh.md  | 2 +-
 hadoop-hdds/docs/content/interface/CSI.zh.md   | 3 +++
 hadoop-hdds/docs/content/interface/JavaApi.zh.md   | 3 +++
 hadoop-hdds/docs/content/interface/O3fs.zh.md  | 3 +++
 hadoop-hdds/docs/content/interface/S3.zh.md| 3 +++
 hadoop-hdds/docs/content/security/SecureOzone.zh.md| 3 +++
 hadoop-hdds/docs/content/security/SecuringS3.zh.md | 3 +++
 hadoop-hdds/docs/content/security/SecuringTDE.zh.md| 3 +++
 hadoop-hdds/docs/content/security/SecurityAcls.zh.md   | 3 +++
 hadoop-hdds/docs/content/security/SecurityWithRanger.zh.md | 3 +++
 14 files changed, 42 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdds/docs/content/concept/Datanodes.zh.md 
b/hadoop-hdds/docs/content/concept/Datanodes.zh.md
index fa992dc..8f129df 100644
--- a/hadoop-hdds/docs/content/concept/Datanodes.zh.md
+++ b/hadoop-hdds/docs/content/concept/Datanodes.zh.md
@@ -2,6 +2,9 @@
 title: "数据节点"
 date: "2017-09-14"
 weight: 4
+menu: 
+  main:
+ parent: 概念
 summary: Ozone 支持 Amazon S3 协议,你可以原封不动地在 Ozone 上使用基于 S3 客户端和 S3 SDK 的应用。
 ---
 

[hadoop-ozone] branch master updated: HDDS-4242. Copy PrefixInfo proto to new project hadoop-ozone/interface-storage (#1444)

2020-10-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new d6d27e4  HDDS-4242. Copy PrefixInfo proto to new project 
hadoop-ozone/interface-storage (#1444)
d6d27e4 is described below

commit d6d27e4edb23490cb9d1496078e5bcd0e5e8d60c
Author: Rui Wang 
AuthorDate: Mon Oct 5 04:49:22 2020 -0700

HDDS-4242. Copy PrefixInfo proto to new project 
hadoop-ozone/interface-storage (#1444)
---
 hadoop-ozone/interface-storage/pom.xml | 30 +++
 .../hadoop/ozone/om/codec/OmPrefixInfoCodec.java   |  5 +-
 .../hadoop/ozone/om/helpers/OmPrefixInfo.java  | 13 ++---
 .../hadoop/ozone/om/helpers/OzoneAclStorage.java   | 63 ++
 .../ozone/om/helpers/OzoneAclStorageUtil.java  | 62 +
 .../hadoop/ozone/om/helpers/package-info.java  | 24 +
 .../src/main/proto/OmStorageProtocol.proto | 60 +
 .../hadoop/ozone/om/helpers/TestOmPrefixInfo.java  |  0
 .../hadoop/ozone/om/helpers/package-info.java  | 24 +
 9 files changed, 273 insertions(+), 8 deletions(-)

diff --git a/hadoop-ozone/interface-storage/pom.xml 
b/hadoop-ozone/interface-storage/pom.xml
index 43ba408..9f000bf 100644
--- a/hadoop-ozone/interface-storage/pom.xml
+++ b/hadoop-ozone/interface-storage/pom.xml
@@ -35,6 +35,11 @@
 
 
 
+      <groupId>com.google.protobuf</groupId>
+      <artifactId>protobuf-java</artifactId>
+    </dependency>
+
+    <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-ozone-interface-client</artifactId>
 
@@ -63,4 +68,29 @@
 
 
   
+  
+
+  
+org.xolstice.maven.plugins
+protobuf-maven-plugin
+${protobuf-maven-plugin.version}
+true
+
+  
+compile-protoc
+
+  compile
+  test-compile
+
+
+  ${basedir}/src/main/proto/
+  
+
com.google.protobuf:protoc:${protobuf.version}:exe:${os.detected.classifier}
+  
+
+  
+
+  
+
+  
 
\ No newline at end of file
diff --git 
a/hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/codec/OmPrefixInfoCodec.java
 
b/hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/codec/OmPrefixInfoCodec.java
index 44a0741..919d972 100644
--- 
a/hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/codec/OmPrefixInfoCodec.java
+++ 
b/hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/codec/OmPrefixInfoCodec.java
@@ -20,7 +20,7 @@ package org.apache.hadoop.ozone.om.codec;
 import com.google.common.base.Preconditions;
 import com.google.protobuf.InvalidProtocolBufferException;
 import org.apache.hadoop.ozone.om.helpers.OmPrefixInfo;
-import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.PrefixInfo;
+import 
org.apache.hadoop.ozone.storage.proto.OzoneManagerStorageProtos.PersistedPrefixInfo;
 
 import org.apache.hadoop.hdds.utils.db.Codec;
 
@@ -44,7 +44,8 @@ public class OmPrefixInfoCodec implements Codec<OmPrefixInfo> {
 .checkNotNull(rawData,
 "Null byte array can't converted to real object.");
 try {
-  return OmPrefixInfo.getFromProtobuf(PrefixInfo.parseFrom(rawData));
+  return OmPrefixInfo.getFromProtobuf(
+  PersistedPrefixInfo.parseFrom(rawData));
 } catch (InvalidProtocolBufferException e) {
   throw new IllegalArgumentException(
   "Can't encode the the raw data from the byte array", e);
diff --git 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmPrefixInfo.java
 
b/hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/helpers/OmPrefixInfo.java
similarity index 92%
rename from 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmPrefixInfo.java
rename to 
hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/helpers/OmPrefixInfo.java
index 80ca54d..a1ad55a 100644
--- 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmPrefixInfo.java
+++ 
b/hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/helpers/OmPrefixInfo.java
@@ -20,7 +20,7 @@ package org.apache.hadoop.ozone.om.helpers;
 
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.ozone.OzoneAcl;
-import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.PrefixInfo;
+import 
org.apache.hadoop.ozone.storage.proto.OzoneManagerStorageProtos.PersistedPrefixInfo;
 
 import java.util.BitSet;
 import java.util.HashMap;
@@ -150,11 +150,12 @@ public final class OmPrefixInfo extends WithObjectID {
   /**
* Creates PrefixInfo protobuf from OmPrefixInfo.
*/
-  public PrefixInfo getProtobuf() {
-PrefixInfo.Builder pib =  PrefixInfo.newBuilder().setName(name)
+  public PersistedPr

[hadoop-ozone] branch master updated: HDDS-4264. Uniform naming conventions of Ozone Shell Options. (#1447)

2020-10-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 4ad0318  HDDS-4264. Uniform naming conventions of Ozone Shell Options. 
(#1447)
4ad0318 is described below

commit 4ad03188ed4fbbe8d6dce1e8e0c8d91518904fc1
Author: micah zhao 
AuthorDate: Mon Oct 5 19:45:34 2020 +0800

HDDS-4264. Uniform naming conventions of Ozone Shell Options. (#1447)
---
 hadoop-hdds/docs/content/tools/TestTools.md|  2 +-
 hadoop-hdds/docs/content/tools/TestTools.zh.md |  2 +-
 .../scm/cli/pipeline/CreatePipelineSubcommand.java | 10 +++---
 .../main/k8s/definitions/ozone/freon/freon.yaml|  2 +-
 .../getting-started/freon/freon-deployment.yaml|  2 +-
 .../examples/minikube/freon/freon-deployment.yaml  |  2 +-
 .../examples/ozone-dev/freon/freon-deployment.yaml |  2 +-
 .../k8s/examples/ozone/freon/freon-deployment.yaml |  2 +-
 .../main/smoketest/auditparser/auditparser.robot   |  2 +-
 .../dist/src/main/smoketest/basic/basic.robot  |  2 +-
 .../src/main/smoketest/basic/ozone-shell-lib.robot |  2 +-
 .../dist/src/main/smoketest/freon/freon.robot  |  2 +-
 .../dist/src/main/smoketest/recon/recon-api.robot  |  2 +-
 .../dist/src/main/smoketest/spnego/web.robot   |  2 +-
 .../hadoop/ozone/TestMiniChaosOzoneCluster.java| 40 +-
 .../src/test/blockade/ozone/client.py  | 10 +++---
 .../hadoop/ozone/freon/HadoopDirTreeGenerator.java | 15 
 .../ozone/freon/HadoopNestedDirGenerator.java  |  5 +--
 .../hadoop/ozone/freon/RandomKeyGenerator.java | 40 +-
 19 files changed, 84 insertions(+), 62 deletions(-)

diff --git a/hadoop-hdds/docs/content/tools/TestTools.md 
b/hadoop-hdds/docs/content/tools/TestTools.md
index 47e12eb..ac025f0 100644
--- a/hadoop-hdds/docs/content/tools/TestTools.md
+++ b/hadoop-hdds/docs/content/tools/TestTools.md
@@ -87,7 +87,7 @@ bin/ozone freon --help
 For example:
 
 ```
-ozone freon randomkeys --numOfVolumes=10 --numOfBuckets 10 --numOfKeys 10  
--replicationType=RATIS --factor=THREE
+ozone freon randomkeys --num-of-volumes=10 --num-of-buckets 10 --num-of-keys 
10  --replication-type=RATIS --factor=THREE
 ```
 
 ```
diff --git a/hadoop-hdds/docs/content/tools/TestTools.zh.md 
b/hadoop-hdds/docs/content/tools/TestTools.zh.md
index 1c79f27..c6dfd2c 100644
--- a/hadoop-hdds/docs/content/tools/TestTools.zh.md
+++ b/hadoop-hdds/docs/content/tools/TestTools.zh.md
@@ -88,7 +88,7 @@ bin/ozone freon --help
 例如:
 
 ```
-ozone freon randomkeys --numOfVolumes=10 --numOfBuckets 10 --numOfKeys 10  
--replicationType=RATIS --factor=THREE
+ozone freon randomkeys --num-of-volumes=10 --num-of-buckets 10 --num-of-keys 
10  --replication-type=RATIS --factor=THREE
 ```
 
 ```
diff --git 
a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/CreatePipelineSubcommand.java
 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/CreatePipelineSubcommand.java
index c784be8..90858de 100644
--- 
a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/CreatePipelineSubcommand.java
+++ 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/CreatePipelineSubcommand.java
@@ -38,15 +38,17 @@ import java.io.IOException;
 public class CreatePipelineSubcommand extends ScmSubcommand {
 
   @CommandLine.Option(
-  names = {"-t", "--replicationType"},
-  description = "Replication type (STAND_ALONE, RATIS)",
+  names = {"-t", "--replication-type", "--replicationType"},
+  description = "Replication type (STAND_ALONE, RATIS). Full name" +
+  " --replicationType will be removed in later versions.",
   defaultValue = "STAND_ALONE"
   )
   private HddsProtos.ReplicationType type;
 
   @CommandLine.Option(
-  names = {"-f", "--replicationFactor"},
-  description = "Replication factor (ONE, THREE)",
+  names = {"-f", "--replication-factor", "--replicationFactor"},
+  description = "Replication factor (ONE, THREE). Full name" +
+  " --replicationFactor will be removed in later versions.",
   defaultValue = "ONE"
   )
   private HddsProtos.ReplicationFactor factor;
diff --git a/hadoop-ozone/dist/src/main/k8s/definitions/ozone/freon/freon.yaml 
b/hadoop-ozone/dist/src/main/k8s/definitions/ozone/freon/freon.yaml
index 40ebc98..90135f2 100644
--- a/hadoop-ozone/dist/src/main/k8s/definitions/ozone/freon/freon.yaml
+++ b/hadoop-ozone/dist/src/main/k8s/definitions/ozone/freon/freon.yaml
@@ -34,7 +34,7 @@ spec:
   containers:
 - name: freon
   image: "@docker.image@"
-  args: ["ozone","freon", "rk

[hadoop-ozone] branch master updated (8cd86a6 -> cfff097)

2020-10-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 8cd86a6  HDDS-4299. Display Ratis version with ozone version (#1464)
 add cfff097  HDDS-4271. Avoid logging chunk content in Ozone Insight 
(#1466)

No new revisions were added by this update.

Summary of changes:
 .../container/common/helpers/ContainerUtils.java   | 66 +++---
 .../container/common/impl/HddsDispatcher.java  |  4 +-
 .../server/OzoneProtocolMessageDispatcher.java | 32 ---
 3 files changed, 86 insertions(+), 16 deletions(-)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch HDDS-4298 created (now 08dda9a)

2020-10-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-4298
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


  at 08dda9a  move out unsafeByteBufferConversion from the new interface

This branch includes the following new commits:

 new 08dda9a  move out unsafeByteBufferConversion from the new interface

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] 01/01: move out unsafeByteBufferConversion from the new interface

2020-10-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch HDDS-4298
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git

commit 08dda9ade16b0befb2e643d64906337cd2fa7e34
Author: Elek Márton 
AuthorDate: Mon Oct 5 12:30:11 2020 +0200

move out unsafeByteBufferConversion from the new interface
---
 .../apache/hadoop/hdds/scm/XceiverClientFactory.java   |  6 --
 .../apache/hadoop/hdds/scm/XceiverClientManager.java   |  7 ---
 .../org/apache/hadoop/hdds/scm/storage/BufferPool.java |  2 +-
 .../apache/hadoop/hdds/scm/ByteStringConversion.java   | 18 +++---
 .../ozone/container/keyvalue/KeyValueHandler.java  | 14 +++---
 .../ozone/client/io/BlockOutputStreamEntryPool.java| 11 ---
 .../apache/hadoop/ozone/client/io/KeyOutputStream.java | 14 +++---
 .../org/apache/hadoop/ozone/client/rpc/RpcClient.java  |  6 ++
 8 files changed, 44 insertions(+), 34 deletions(-)

diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientFactory.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientFactory.java
index 184645d..dc35cd5 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientFactory.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientFactory.java
@@ -18,20 +18,14 @@
 package org.apache.hadoop.hdds.scm;
 
 import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.function.Function;
 
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 
-import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
-
 /**
  * Interface to provide XceiverClient when needed.
  */
 public interface XceiverClientFactory {
 
-  Function<ByteBuffer, ByteString> byteBufferToByteStringConversion();
-
   XceiverClientSpi acquireClient(Pipeline pipeline) throws IOException;
 
   void releaseClient(XceiverClientSpi xceiverClient, boolean invalidateClient);
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
index e07a5d2..eaf0503 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
@@ -20,12 +20,10 @@ package org.apache.hadoop.hdds.scm;
 
 import java.io.Closeable;
 import java.io.IOException;
-import java.nio.ByteBuffer;
 import java.security.cert.CertificateException;
 import java.security.cert.X509Certificate;
 import java.util.concurrent.Callable;
 import java.util.concurrent.TimeUnit;
-import java.util.function.Function;
 
 import org.apache.hadoop.hdds.conf.Config;
 import org.apache.hadoop.hdds.conf.ConfigGroup;
@@ -49,7 +47,6 @@ import static java.util.concurrent.TimeUnit.MILLISECONDS;
 import static org.apache.hadoop.hdds.conf.ConfigTag.OZONE;
 import static org.apache.hadoop.hdds.conf.ConfigTag.PERFORMANCE;
 import static 
org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes.NO_REPLICA_FOUND;
-import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -277,10 +274,6 @@ public class XceiverClientManager implements Closeable, 
XceiverClientFactory {
 }
   }
 
-  public Function<ByteBuffer, ByteString> byteBufferToByteStringConversion(){
-return ByteStringConversion.createByteBufferConversion(conf);
-  }
-
   /**
* Get xceiver client metric.
*/
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BufferPool.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BufferPool.java
index dc27d4b..94fa87a 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BufferPool.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BufferPool.java
@@ -42,7 +42,7 @@ public class BufferPool {
 
   public BufferPool(int bufferSize, int capacity) {
 this(bufferSize, capacity,
-ByteStringConversion.createByteBufferConversion(null));
+ByteStringConversion.createByteBufferConversion(false));
   }
 
   public BufferPool(int bufferSize, int capacity,
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ByteStringConversion.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ByteStringConversion.java
index dc44392..b5f6e48 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ByteStringConversion.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ByteStringConversion.java
@@ -17,14 +17,14 @@
  */
 package org.apache.hadoop.hdds.scm;
 
-import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import java.nio.ByteBuffer;
+import java.util.function.Function;
+
 import org.apache.hadoop.ozone.OzoneConfigKeys;
+
 import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
 import
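The archived diff is cut off at this point. For context, a hedged sketch of the calling pattern after this change: createByteBufferConversion now takes a plain boolean instead of a configuration object (reading the flag as "enable the unsafe, zero-copy conversion" follows from the branch name and is an assumption). The ConversionSketch class is invented for the example.

import java.nio.ByteBuffer;
import java.util.function.Function;

import org.apache.hadoop.hdds.scm.ByteStringConversion;

import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;

public final class ConversionSketch {

  private ConversionSketch() {
  }

  public static void main(String[] args) {
    // Sketch only: false is assumed to select the safe, copying conversion.
    Function<ByteBuffer, ByteString> convert =
        ByteStringConversion.createByteBufferConversion(false);

    ByteString bytes = convert.apply(ByteBuffer.wrap(new byte[] {1, 2, 3}));
    System.out.println("converted " + bytes.size() + " bytes");
  }
}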

[hadoop-ozone] branch master updated (68642c2 -> 3ad1034)

2020-09-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 68642c2  HDDS-4023. Delete closed container after all blocks have been 
deleted. (#1338)
 add 3ad1034  HDDS-4215. Update Freon doc in source tree. (#1403)

No new revisions were added by this update.

Summary of changes:
 hadoop-hdds/docs/content/tools/TestTools.md| 12 ++--
 hadoop-hdds/docs/content/tools/TestTools.zh.md | 12 ++--
 2 files changed, 20 insertions(+), 4 deletions(-)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-4102. Normalize Keypath for lookupKey. (#1328)

2020-09-28 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 004dd3f  HDDS-4102. Normalize Keypath for lookupKey. (#1328)
004dd3f is described below

commit 004dd3ff9d35c0d4b6bcb95370e1bd95b0569008
Author: Bharat Viswanadham 
AuthorDate: Mon Sep 28 11:29:04 2020 -0700

HDDS-4102. Normalize Keypath for lookupKey. (#1328)
---
 .../fs/ozone/TestOzoneFSWithObjectStoreCreate.java | 40 ++
 .../hadoop/fs/ozone/TestOzoneFileSystem.java   |  6 ++--
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java |  9 -
 3 files changed, 51 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSWithObjectStoreCreate.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSWithObjectStoreCreate.java
index f288973..e89d1c4 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSWithObjectStoreCreate.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSWithObjectStoreCreate.java
@@ -29,6 +29,7 @@ import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.apache.hadoop.ozone.TestDataUtil;
 import org.apache.hadoop.ozone.client.OzoneBucket;
 import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.io.OzoneInputStream;
 import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
 import org.apache.hadoop.ozone.om.OMConfigKeys;
 import org.apache.hadoop.ozone.om.exceptions.OMException;
@@ -329,6 +330,45 @@ public class TestOzoneFSWithObjectStoreCreate {
 }
   }
 
+
+  @Test
+  public void testReadWithNotNormalizedPath() throws Exception {
+OzoneVolume ozoneVolume =
+cluster.getRpcClient().getObjectStore().getVolume(volumeName);
+
+OzoneBucket ozoneBucket = ozoneVolume.getBucket(bucketName);
+
+String key = "/dir1///dir2/file1/";
+
+int length = 10;
+byte[] input = new byte[length];
+Arrays.fill(input, (byte)96);
+String inputString = new String(input);
+
+OzoneOutputStream ozoneOutputStream =
+ozoneBucket.createKey(key, length);
+
+ozoneOutputStream.write(input);
+ozoneOutputStream.write(input, 0, 10);
+ozoneOutputStream.close();
+
+// Read the key with given key name.
+OzoneInputStream ozoneInputStream = ozoneBucket.readKey(key);
+byte[] read = new byte[length];
+ozoneInputStream.read(read, 0, length);
+ozoneInputStream.close();
+
+Assert.assertEquals(inputString, new String(read));
+
+// Read using filesystem.
+FSDataInputStream fsDataInputStream = o3fs.open(new Path(key));
+read = new byte[length];
+fsDataInputStream.read(read, 0, length);
+ozoneInputStream.close();
+
+Assert.assertEquals(inputString, new String(read));
+  }
+
   private void checkPath(Path path) {
 try {
   o3fs.getFileStatus(path);
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
index 4e728f7..46c0115 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
@@ -326,9 +326,9 @@ public class TestOzoneFileSystem {
 
 // Deleting the only child should create the parent dir key if it does
 // not exist
-String parentKey = o3fs.pathToKey(parent) + "/";
-OzoneKeyDetails parentKeyInfo = getKey(parent, true);
-assertEquals(parentKey, parentKeyInfo.getName());
+FileStatus fileStatus = o3fs.getFileStatus(parent);
+Assert.assertTrue(fileStatus.isDirectory());
+assertEquals(parent.toString(), fileStatus.getPath().toUri().getPath());
   }
 
 
diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
index 14fc07e..ced055c 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
+++ 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
@@ -94,6 +94,7 @@ import org.apache.hadoop.ozone.om.helpers.OzoneAclUtil;
 import org.apache.hadoop.ozone.om.helpers.OzoneFSUtils;
 import org.apache.hadoop.ozone.om.helpers.OzoneFileStatus;
 import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
 import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.PartKeyInfo;
 import org.apache.hadoop.ozone.security.OzoneBlockTokenSecretManager;
 import org.apache.hadoop.ozone.security.a

[hadoop-ozone] branch master updated (8ca694a -> 8899ff7)

2020-09-23 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 8ca694a  HDDS-4236. Move "Om*Codec.java" to new project 
hadoop-ozone/interface-storage (#1424)
 add 8899ff7  HDDS-4324. Add important comment to ListVolumes logic (#1417)

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java  | 3 +++
 1 file changed, 3 insertions(+)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (a78a4b7 -> 8ca694a)

2020-09-23 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from a78a4b7  HDDS-4254. Bucket space: add usedBytes and update it when 
create and delete key. (#1431)
 add 8ca694a  HDDS-4236. Move "Om*Codec.java" to new project 
hadoop-ozone/interface-storage (#1424)

No new revisions were added by this update.

Summary of changes:
 .../interface-storage}/pom.xml | 46 +++---
 .../apache/hadoop/ozone/om/OMMetadataManager.java  |  0
 .../ozone/om/codec/OMTransactionInfoCodec.java |  0
 .../hadoop/ozone/om/codec/OmBucketInfoCodec.java   |  0
 .../hadoop/ozone/om/codec/OmKeyInfoCodec.java  |  0
 .../ozone/om/codec/OmMultipartKeyInfoCodec.java|  0
 .../hadoop/ozone/om/codec/OmPrefixInfoCodec.java   |  0
 .../hadoop/ozone/om/codec/OmVolumeArgsCodec.java   |  0
 .../ozone/om/codec/RepeatedOmKeyInfoCodec.java |  0
 .../hadoop/ozone/om/codec/S3SecretValueCodec.java  |  0
 .../ozone/om/codec/TokenIdentifierCodec.java   |  0
 .../hadoop/ozone/om/codec/UserVolumeInfoCodec.java |  0
 .../apache/hadoop/ozone/om/codec/package-info.java |  2 +-
 .../org/apache/hadoop/ozone/om}/package-info.java  |  4 +-
 .../hadoop/ozone/om/ratis/OMTransactionInfo.java   |  2 +-
 .../hadoop/ozone/om/ratis}/package-info.java   |  4 +-
 .../ozone/om/codec/TestOMTransactionInfoCodec.java |  0
 .../hadoop/ozone/om/codec/TestOmKeyInfoCodec.java  |  0
 .../om/codec/TestOmMultipartKeyInfoCodec.java  |  0
 .../ozone/om/codec/TestOmPrefixInfoCodec.java  |  0
 .../ozone/om/codec/TestRepeatedOmKeyInfoCodec.java |  0
 .../ozone/om/codec/TestS3SecretValueCodec.java |  0
 .../apache/hadoop/ozone/om/codec/package-info.java |  0
 hadoop-ozone/ozone-manager/pom.xml |  5 +++
 .../apache/hadoop/ozone/om/codec/package-info.java |  3 ++
 hadoop-ozone/pom.xml   |  6 +++
 26 files changed, 43 insertions(+), 29 deletions(-)
 copy {hadoop-hdds/client => hadoop-ozone/interface-storage}/pom.xml (66%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/codec/OMTransactionInfoCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/codec/OmBucketInfoCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/codec/OmKeyInfoCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/codec/OmMultipartKeyInfoCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/codec/OmPrefixInfoCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/codec/OmVolumeArgsCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/codec/RepeatedOmKeyInfoCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/codec/S3SecretValueCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/codec/TokenIdentifierCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/codec/UserVolumeInfoCodec.java
 (100%)
 copy hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/codec/package-info.java
 (95%)
 copy 
hadoop-ozone/{ozone-manager/src/test/java/org/apache/hadoop/ozone/om/codec => 
interface-storage/src/main/java/org/apache/hadoop/ozone/om}/package-info.java 
(92%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/main/java/org/apache/hadoop/ozone/om/ratis/OMTransactionInfo.java
 (100%)
 copy 
hadoop-ozone/{ozone-manager/src/test/java/org/apache/hadoop/ozone/om/codec => 
interface-storage/src/main/java/org/apache/hadoop/ozone/om/ratis}/package-info.java
 (92%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/test/java/org/apache/hadoop/ozone/om/codec/TestOMTransactionInfoCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/test/java/org/apache/hadoop/ozone/om/codec/TestOmKeyInfoCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/test/java/org/apache/hadoop/ozone/om/codec/TestOmMultipartKeyInfoCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/test/java/org/apache/hadoop/ozone/om/codec/TestOmPrefixInfoCodec.java
 (100%)
 rename hadoop-ozone/{ozone-manager => 
interface-storage}/src/test/java/org/apache/hadoop/ozone/om/

[hadoop-ozone] 01/01: typo fixes

2020-09-21 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch HDDS-3755-design
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git

commit 61d485f4d925c2b02fd3573c31dbf5308e51121c
Author: Elek Márton 
AuthorDate: Mon Sep 21 16:53:41 2020 +0200

typo fixes
---
 hadoop-hdds/docs/content/design/storage-class.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdds/docs/content/design/storage-class.md 
b/hadoop-hdds/docs/content/design/storage-class.md
index 4fcdf08..cdd648a 100644
--- a/hadoop-hdds/docs/content/design/storage-class.md
+++ b/hadoop-hdds/docs/content/design/storage-class.md
@@ -64,7 +64,7 @@ ozone sh bucket create --storage-class=INFREQUENT_ACCESS
 ```
 
 
-Bucket-level default storage-class can be overridden for ay key, but will be 
used as default.
+Bucket-level default storage-class can be overridden for any key, but will be 
used as default.
 
 
 ## [USER] Fine grained replication control when using S3 API
@@ -330,7 +330,7 @@ With this storage class the containers can be converted to 
a specific EC contain
 
 NFS / Fuse file system might require to support Random read/write which can be 
tricky as the closed containers are immutable. In case of changing any one byte 
in a block, the whole block should be re-created with the new data. It can have 
a lot of overhead especially in case of many small writes.
 
-But write is cheap with Ratis/THREE containers. Similar to any `ChunkWrite` 
and `PutBlock` we can implementa `UpdateChunk` call which modifies the current 
content of the chunk AND replicates the change with the help of Ratis.
+But write is cheap with Ratis/THREE containers. Similar to any `ChunkWrite` 
and `PutBlock` we can implement an `UpdateChunk` call which modifies the 
current content of the chunk AND replicates the change with the help of Ratis.
 
 Let's imagine that we solved the resiliency of Ratis pipelines: In case of any 
Ratis error we can ask other Datanode to join to the Ratis ring *instead of* 
closing it. I know that it can be hard to implement, but if it is solved, we 
have an easy solution for random read/write.
 


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch HDDS-3755-design created (now 61d485f)

2020-09-21 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-3755-design
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


  at 61d485f  typo fixes

This branch includes the following new commits:

 new 61d485f  typo fixes

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (68d1ab0 -> ce0c072)

2020-09-18 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 68d1ab0  HDDS-3981. Add more debug level log to XceiverClientGrpc for 
debug purpose (#1214)
 add ce0c072  HDDS-3102. ozone getconf command should use the GenericCli 
parent class (#1410)

No new revisions were added by this update.

Summary of changes:
 .../apache/hadoop/ozone/freon/OzoneGetConf.java| 278 -
 .../apache/hadoop/ozone/freon/package-info.java|  21 --
 .../readdata.robot => basic/getconf.robot} |  17 +-
 hadoop-ozone/dist/src/shell/ozone/ozone|   2 +-
 hadoop-ozone/dist/src/shell/ozone/stop-ozone.sh|   8 +-
 .../org/apache/hadoop/ozone/conf/OzoneGetConf.java |  86 +++
 .../OzoneManagersCommandHandler.java}  |  37 ++-
 .../PrintConfKeyCommandHandler.java}   |  32 +--
 .../StorageContainerManagersCommandHandler.java}   |  38 ++-
 .../apache/hadoop/ozone/conf}/package-info.java|   4 +-
 10 files changed, 150 insertions(+), 373 deletions(-)
 delete mode 100644 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/freon/OzoneGetConf.java
 delete mode 100644 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/freon/package-info.java
 copy hadoop-ozone/dist/src/main/smoketest/{topology/readdata.robot => 
basic/getconf.robot} (65%)
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/conf/OzoneGetConf.java
 copy 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/{audit/parser/handler/LoadCommandHandler.java
 => conf/OzoneManagersCommandHandler.java} (58%)
 copy 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/{audit/parser/handler/LoadCommandHandler.java
 => conf/PrintConfKeyCommandHandler.java} (59%)
 copy 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/{audit/parser/handler/LoadCommandHandler.java
 => conf/StorageContainerManagersCommandHandler.java} (57%)
 copy 
hadoop-ozone/{ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils
 => tools/src/main/java/org/apache/hadoop/ozone/conf}/package-info.java (90%)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch HDDS-4255 created (now 9cfad94)

2020-09-17 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-4255
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


  at 9cfad94  HDDS-4255. Remove unused Ant and Jdiff dependency versions

This branch includes the following new commits:

 new 9cfad94  HDDS-4255. Remove unused Ant and Jdiff dependency versions

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] 01/01: HDDS-4255. Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch HDDS-4255
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git

commit 9cfad94103199461ea72a6bed51f3985f0cd22a9
Author: Elek Márton 
AuthorDate: Thu Sep 17 11:18:46 2020 +0200

HDDS-4255. Remove unused Ant and Jdiff dependency versions
---
 pom.xml | 16 
 1 file changed, 16 deletions(-)

diff --git a/pom.xml b/pom.xml
index 5964430..36c546e 100644
--- a/pom.xml
+++ b/pom.xml
@@ -110,12 +110,6 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
 
 4
 
-
-
-1.0.9
-
-2.11.0
-
 1.0.13
 
 ${project.build.directory}/test-dir
@@ -263,11 +257,6 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
         <version>4.4.0</version>
       </dependency>
       <dependency>
-        <groupId>jdiff</groupId>
-        <artifactId>jdiff</artifactId>
-        <version>${jdiff.version}</version>
-      </dependency>
-      <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-assemblies</artifactId>
         <version>${hadoop.version}</version>
@@ -1292,11 +1281,6 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
         <version>0.3</version>
       </dependency>
       <dependency>
-        <groupId>org.apache.ant</groupId>
-        <artifactId>ant</artifactId>
-        <version>1.8.1</version>
-      </dependency>
-      <dependency>
         <groupId>com.google.re2j</groupId>
         <artifactId>re2j</artifactId>
         <version>${re2j.version}</version>


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-3927. Rename Ozone OM, DN, SCM runtime options to conform to naming conventions (#1401)

2020-09-13 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 48e8e50  HDDS-3927. Rename Ozone OM,DN,SCM runtime options to conform 
to naming conventions (#1401)
48e8e50 is described below

commit 48e8e5047bc5a0075904e9142666b67a32b0da91
Author: Siyao Meng <50227127+smen...@users.noreply.github.com>
AuthorDate: Sun Sep 13 13:09:50 2020 -0700

HDDS-3927. Rename Ozone OM,DN,SCM runtime options to conform to naming 
conventions (#1401)
---
 hadoop-hdds/common/src/main/conf/hadoop-env.sh  | 13 +++--
 .../dist/src/main/compose/ozone-topology/docker-config  |  4 ++--
 hadoop-ozone/dist/src/shell/ozone/ozone | 13 ++---
 3 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/hadoop-hdds/common/src/main/conf/hadoop-env.sh 
b/hadoop-hdds/common/src/main/conf/hadoop-env.sh
index 51ee585..07f2ed8 100644
--- a/hadoop-hdds/common/src/main/conf/hadoop-env.sh
+++ b/hadoop-hdds/common/src/main/conf/hadoop-env.sh
@@ -400,7 +400,16 @@ export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
 # These options will be appended to the options specified as HADOOP_OPTS
 # and therefore may override any similar flags set in HADOOP_OPTS
 #
-# export HDFS_OM_OPTS=""
+# export OZONE_OM_OPTS=""
+
+###
+# Ozone DataNode specific parameters
+###
+# Specify the JVM options to be used when starting Ozone DataNodes.
+# These options will be appended to the options specified as HADOOP_OPTS
+# and therefore may override any similar flags set in HADOOP_OPTS
+#
+# export OZONE_DATANODE_OPTS=""
 
 ###
 # HDFS StorageContainerManager specific parameters
@@ -409,7 +418,7 @@ export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
 # These options will be appended to the options specified as HADOOP_OPTS
 # and therefore may override any similar flags set in HADOOP_OPTS
 #
-# export HDFS_STORAGECONTAINERMANAGER_OPTS=""
+# export OZONE_SCM_OPTS=""
 
 ###
 # Advanced Users Only!
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-topology/docker-config 
b/hadoop-ozone/dist/src/main/compose/ozone-topology/docker-config
index 648a858..6e93594 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-topology/docker-config
+++ b/hadoop-ozone/dist/src/main/compose/ozone-topology/docker-config
@@ -38,8 +38,8 @@ OZONE-SITE.XML_ozone.network.topology.aware.read=true
 HDFS-SITE.XML_rpc.metrics.quantile.enable=true
 HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300
 ASYNC_PROFILER_HOME=/opt/profiler
-HDDS_DN_OPTS=-Dmodule.name=datanode
-HDFS_OM_OPTS=-Dmodule.name=om
+OZONE_DATANODE_OPTS=-Dmodule.name=datanode
+OZONE_MANAGER_OPTS=-Dmodule.name=om
 HDFS_STORAGECONTAINERMANAGER_OPTS=-Dmodule.name=scm
 HDFS_OM_SH_OPTS=-Dmodule.name=sh
 HDFS_SCM_CLI_OPTS=-Dmodule.name=admin
diff --git a/hadoop-ozone/dist/src/shell/ozone/ozone 
b/hadoop-ozone/dist/src/shell/ozone/ozone
index c536484..13a416a 100755
--- a/hadoop-ozone/dist/src/shell/ozone/ozone
+++ b/hadoop-ozone/dist/src/shell/ozone/ozone
@@ -105,8 +105,8 @@ function ozonecmd_case
   # Corresponding Ratis issue 
https://issues.apache.org/jira/browse/RATIS-534.
   # TODO: Fix the problem related to netty resource leak detector throwing
   # exception as mentioned in HDDS-3812
-  
HDDS_DN_OPTS="-Dorg.apache.ratis.thirdparty.io.netty.allocator.useCacheForAllThreads=false
 -Dorg.apache.ratis.thirdparty.io.netty.leakDetection.level=disabled 
-Dlog4j.configurationFile=${HADOOP_CONF_DIR}/dn-audit-log4j2.properties 
${HDDS_DN_OPTS}"
-  HADOOP_OPTS="${HADOOP_OPTS} ${HDDS_DN_OPTS}"
+  hadoop_deprecate_envvar HDDS_DN_OPTS OZONE_DATANODE_OPTS
+  
OZONE_DATANODE_OPTS="-Dorg.apache.ratis.thirdparty.io.netty.allocator.useCacheForAllThreads=false
 -Dorg.apache.ratis.thirdparty.io.netty.leakDetection.level=disabled 
-Dlog4j.configurationFile=${HADOOP_CONF_DIR}/dn-audit-log4j2.properties 
${OZONE_DATANODE_OPTS}"
   HADOOP_CLASSNAME=org.apache.hadoop.ozone.HddsDatanodeService
   OZONE_RUN_ARTIFACT_NAME="hadoop-ozone-datanode"
 ;;
@@ -154,8 +154,8 @@ function ozonecmd_case
 om)
   HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
   HADOOP_CLASSNAME=org.apache.hadoop.ozone.om.OzoneManagerStarter
-  HDFS_OM_OPTS="${HDFS_OM_OPTS} 
-Dlog4j.configurationFile=${HADOOP_CONF_DIR}/om-audit-log4j2.properties"
-  HADOOP_OPTS="${HADOOP_OPTS} ${HDFS_OM_OPTS}"
+  hadoop_deprecate_envvar HDFS_OM_OPTS OZONE_OM_OPTS
+  OZONE_OM_OPTS="${OZONE_OM_OPTS} 
-Dlog4j.configurationFile=${HADOOP_CONF_DIR}/om-audit-log4j2.properties"
   OZONE_RUN_ARTIFACT_NAME="hadoop-ozone-ozone-manager"
 ;;
 sh | shell)
@@ -172,9 +172,8 @@ function ozonecmd_case
 scm)
   HADOOP_SUBCMD_SUPP
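
As an operator-facing sketch of the rename above (the -Dmodule.name values are taken from the docker-config diff, and the note on hadoop_deprecate_envvar assumes the standard Hadoop shell function of that name, which warns and carries the old value over):

```
# New variable names, as used by the ozone shell script and hadoop-env.sh:
export OZONE_OM_OPTS="-Dmodule.name=om"
export OZONE_DATANODE_OPTS="-Dmodule.name=datanode"
export OZONE_SCM_OPTS="-Dmodule.name=scm"

# Old names are still picked up during the transition, e.g. the shell script calls
#   hadoop_deprecate_envvar HDDS_DN_OPTS OZONE_DATANODE_OPTS
#   hadoop_deprecate_envvar HDFS_OM_OPTS OZONE_OM_OPTS
# before appending the audit-log4j2 system properties.
```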

[hadoop-ozone] branch HDDS-4097 created (now 78fbaff)

2020-09-13 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-4097
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


  at 78fbaff  fix typo

No new revisions were added by this update.





[hadoop-ozone] branch master updated: HDDS-4119. Improve performance of the BufferPool management of Ozone client (#1336)

2020-09-11 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 72e3215  HDDS-4119. Improve performance of the BufferPool management 
of Ozone client (#1336)
72e3215 is described below

commit 72e3215846bfaaa562aa9fb7a15f87f27e97867c
Author: Elek, Márton 
AuthorDate: Fri Sep 11 16:25:04 2020 +0200

HDDS-4119. Improve performance of the BufferPool management of Ozone client 
(#1336)
---
 hadoop-hdds/client/pom.xml |  11 +
 .../hadoop/hdds/scm/storage/BlockOutputStream.java | 123 +++
 .../apache/hadoop/hdds/scm/storage/BufferPool.java |  47 ++---
 .../hadoop/hdds/scm/storage/CommitWatcher.java |  37 ++--
 .../storage/TestBlockOutputStreamCorrectness.java  | 224 +
 .../hadoop/hdds/scm/storage/TestBufferPool.java|  46 +
 .../apache/hadoop/hdds/scm/XceiverClientSpi.java   |   4 +-
 .../hdds/scm/storage/ContainerProtocolCalls.java   |  80 +++-
 .../apache/hadoop/ozone/common/ChunkBuffer.java|  14 +-
 .../common/ChunkBufferImplWithByteBuffer.java  |  10 +-
 .../hadoop/hdds/scm/pipeline/MockPipeline.java |  29 ++-
 .../hadoop/ozone/common/TestChunkBuffer.java   |  16 +-
 .../hadoop/ozone/client/rpc/TestCommitWatcher.java |  31 ++-
 13 files changed, 508 insertions(+), 164 deletions(-)

diff --git a/hadoop-hdds/client/pom.xml b/hadoop-hdds/client/pom.xml
index e7a8ebb..608839e 100644
--- a/hadoop-hdds/client/pom.xml
+++ b/hadoop-hdds/client/pom.xml
@@ -39,6 +39,17 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd;>
   org.apache.hadoop
   hadoop-hdds-common
 
+
+  org.apache.hadoop
+  hadoop-hdds-common
+  test-jar
+  test
+
+
+  org.mockito
+  mockito-core
+  test
+
 
 
   io.netty
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
index 32e5b37..1a16caf 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
@@ -17,46 +17,44 @@
  */
 
 package org.apache.hadoop.hdds.scm.storage;
-import com.google.common.annotations.VisibleForTesting;
-import com.google.common.base.Preconditions;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.hadoop.hdds.client.BlockID;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.BlockData;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumType;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.KeyValue;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
 import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
 import 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.ozone.common.Checksum;
 import org.apache.hadoop.ozone.common.ChecksumData;
 import org.apache.hadoop.ozone.common.ChunkBuffer;
 import org.apache.hadoop.ozone.common.OzoneChecksumException;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import static 
org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.putBlockAsync;
+import static 
org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync;
 import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
-import org.apache.hadoop.hdds.scm.XceiverClientManager;
-import org.apache.hadoop.hdds.scm.XceiverClientSpi;
-import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumType;
-import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
-import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.BlockData;
-import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.KeyValue;
-import org.apache.hadoop.hdds.client.BlockID;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import java.io.IOException;
-imp
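
The diffstat above also adds unit tests (TestBufferPool, TestBlockOutputStreamCorrectness) to the hadoop-hdds/client module; a hedged way to run just those locally, assuming a prior full `mvn install -DskipTests` and the usual surefire -Dtest selection:

```
mvn -pl hadoop-hdds/client test -Dtest='TestBufferPool,TestBlockOutputStreamCorrectness'
```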

[hadoop-docker-ozone] branch ozone-1.0.0 created (now 67d955b)

2020-09-08 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch ozone-1.0.0
in repository https://gitbox.apache.org/repos/asf/hadoop-docker-ozone.git.


  at 67d955b  HDDS-4203. Publish apache/ozone:1.0.0 image (#16)

No new revisions were added by this update.





[hadoop-docker-ozone] branch ozone-latest updated: HDDS-4203. Publish apache/ozone:1.0.0 image (#16)

2020-09-08 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-latest
in repository https://gitbox.apache.org/repos/asf/hadoop-docker-ozone.git


The following commit(s) were added to refs/heads/ozone-latest by this push:
 new 67d955b  HDDS-4203. Publish apache/ozone:1.0.0 image (#16)
67d955b is described below

commit 67d955b64ebe7ee882f94ab7cc92c846fe936e95
Author: Elek, Márton 
AuthorDate: Tue Sep 8 09:05:55 2020 +0200

HDDS-4203. Publish apache/ozone:1.0.0 image (#16)
---
 Dockerfile  |  2 +-
 build.sh|  2 +-
 docker-compose.yaml | 10 +-
 start-ozone-all.sh  |  4 ++--
 4 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/Dockerfile b/Dockerfile
index 7cfaa55..a66acf1 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -14,7 +14,7 @@
 # limitations under the License.
 
 FROM apache/ozone-runner:20191107-1
-ARG 
OZONE_URL=https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download=hadoop/ozone/ozone-0.5.0-beta/hadoop-ozone-0.5.0-beta.tar.gz
+ARG 
OZONE_URL=https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download=hadoop/ozone/ozone-1.0.0/hadoop-ozone-1.0.0.tar.gz
 WORKDIR /opt
 RUN sudo rm -rf /opt/hadoop && wget $OZONE_URL -O ozone.tar.gz && tar zxf 
ozone.tar.gz && rm ozone.tar.gz && mv ozone* hadoop
 WORKDIR /opt/hadoop
diff --git a/build.sh b/build.sh
index dd2176d..d27b85b 100755
--- a/build.sh
+++ b/build.sh
@@ -31,4 +31,4 @@ if [ ! -d "$DIR/build/apache-rat-0.13" ]; then
 fi
 java -jar $DIR/build/apache-rat-0.13/apache-rat-0.13.jar $DIR -e .dockerignore 
-e public -e apache-rat-0.13 -e .git -e .gitignore
 docker build --build-arg OZONE_URL -t apache/ozone .
-docker tag apache/ozone apache/ozone:0.5.0
+docker tag apache/ozone apache/ozone:1.0.0
diff --git a/docker-compose.yaml b/docker-compose.yaml
index 8117275..b9b5a4e 100644
--- a/docker-compose.yaml
+++ b/docker-compose.yaml
@@ -17,14 +17,14 @@
 version: "3"
 services:
datanode:
-  image: apache/ozone:0.5.0
+  image: apache/ozone:1.0.0
   ports:
  - 9864
   command: ["ozone","datanode"]
   env_file:
  - ./docker-config
om:
-  image: apache/ozone:0.5.0
+  image: apache/ozone:1.0.0
   ports:
  - 9874:9874
   environment:
@@ -34,7 +34,7 @@ services:
  - ./docker-config
   command: ["ozone","om"]
scm:
-  image: apache/ozone:0.5.0
+  image: apache/ozone:1.0.0
   ports:
  - 9876:9876
   env_file:
@@ -43,14 +43,14 @@ services:
  ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
   command: ["ozone","scm"]
recon:
-  image: apache/ozone:0.5.0
+  image: apache/ozone:1.0.0
   ports:
  - 9888:9888
   env_file:
  - ./docker-config
   command: ["ozone","recon"]
s3g:
-  image: apache/ozone:0.5.0
+  image: apache/ozone:1.0.0
   ports:
  - 9878:9878
   env_file:
diff --git a/start-ozone-all.sh b/start-ozone-all.sh
index 21fd23e..edb2706 100755
--- a/start-ozone-all.sh
+++ b/start-ozone-all.sh
@@ -17,11 +17,11 @@
 ozone scm --init
 ozone scm &
 
-ozone datanode &
-
 #wait for scm startup
 export WAITFOR=localhost:9876
 
+/opt/hadoop/libexec/entrypoint.sh ozone datanode &
+
 /opt/hadoop/libexec/entrypoint.sh ozone om --init
 /opt/hadoop/libexec/entrypoint.sh ozone om &
 sleep 15
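
As a usage sketch for the retagged compose file (image tag and ports come from the diff above; the commands are the ordinary docker-compose workflow and are listed only for orientation):

```
cd hadoop-docker-ozone             # a checkout of the repository above
docker pull apache/ozone:1.0.0     # or run ./build.sh to build and tag the image locally
docker-compose up -d               # datanode, om (9874), scm (9876), recon (9888), s3g (9878)
docker-compose down                # tear the cluster down again
```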





[hadoop-ozone] branch master updated (53353c0 -> 0a490cb)

2020-09-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 53353c0  HDDS-4204. upgrade docker environment does not work with 
KEEP_RUNNING=true (#1388)
 add 0a490cb  HDDS-3441. Enable TestKeyManagerImpl test cases. (#1326)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java   | 11 +++
 1 file changed, 3 insertions(+), 8 deletions(-)





[hadoop-ozone] branch master updated: HDDS-4204. upgrade docker environment does not work with KEEP_RUNNING=true (#1388)

2020-09-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 53353c0  HDDS-4204. upgrade docker environment does not work with 
KEEP_RUNNING=true (#1388)
53353c0 is described below

commit 53353c064319a8d71bf99f6e95acd1dca318e37e
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Mon Sep 7 14:27:52 2020 +0200

HDDS-4204. upgrade docker environment does not work with KEEP_RUNNING=true 
(#1388)
---
 hadoop-ozone/dist/src/main/compose/upgrade/test.sh | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/compose/upgrade/test.sh 
b/hadoop-ozone/dist/src/main/compose/upgrade/test.sh
index 0c51325..7284bf7 100644
--- a/hadoop-ozone/dist/src/main/compose/upgrade/test.sh
+++ b/hadoop-ozone/dist/src/main/compose/upgrade/test.sh
@@ -26,7 +26,6 @@ export COMPOSE_DIR
 export OZONE_VOLUME
 
 mkdir -p "${OZONE_VOLUME}"/{dn1,dn2,dn3,om,recon,s3g,scm}
-mkdir -p "${OZONE_VOLUME}/debug"
 
 if [[ -n "${OZONE_VOLUME_OWNER}" ]]; then
   current_user=$(whoami)
@@ -47,7 +46,7 @@ source "${COMPOSE_DIR}/../testlib.sh"
 # prepare pre-upgrade cluster
 start_docker_env
 execute_robot_test scm topology/loaddata.robot
-stop_docker_env
+KEEP_RUNNING=false stop_docker_env
 
 # run upgrade scripts
 SCRIPT_DIR=../../libexec/upgrade
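
A hedged illustration of the scenario this fixes (the compose path assumes a dist build under hadoop-ozone/dist/target; only the KEEP_RUNNING variable and the stop_docker_env call are taken from the script above):

```
cd hadoop-ozone/dist/target/ozone-*/compose/upgrade
KEEP_RUNNING=true ./test.sh   # the pre-upgrade cluster is now stopped unconditionally
                              # (KEEP_RUNNING=false stop_docker_env) before the upgrade
                              # scripts run, so setting the flag no longer breaks the test
```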





[hadoop-ozone] branch master updated (4b325a8 -> ce02172)

2020-09-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 4b325a8  HDDS-4193. Range used by S3 MultipartUpload copy-from-source 
should be inclusive (#1384)
 add ce02172  HDDS-4202. Upgrade ratis to 1.1.0-ea949f1-SNAPSHOT (#1382)

No new revisions were added by this update.

Summary of changes:
 hadoop-hdds/common/pom.xml |  5 +
 .../transport/server/ratis/XceiverServerRatis.java |  2 +-
 .../hadoop/hdds/scm/pipeline/RatisPipelineUtils.java   |  2 +-
 .../client/rpc/TestContainerStateMachineFailures.java  | 18 +++---
 pom.xml|  4 ++--
 5 files changed, 24 insertions(+), 7 deletions(-)





[hadoop-ozone] branch master updated (dc49daa -> 4b325a8)

2020-09-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from dc49daa  HDDS-4198. Compile Ozone with multiple Java versions (#1387)
 add 4b325a8  HDDS-4193. Range used by S3 MultipartUpload copy-from-source 
should be inclusive (#1384)

No new revisions were added by this update.

Summary of changes:
 .../dist/src/main/smoketest/s3/MultipartUpload.robot  |  4 ++--
 .../apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java   |  3 ++-
 .../ozone/s3/endpoint/TestMultipartUploadWithCopy.java| 15 ++-
 3 files changed, 14 insertions(+), 8 deletions(-)





[hadoop-ozone] branch master updated: HDDS-4198. Compile Ozone with multiple Java versions (#1387)

2020-09-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new dc49daa  HDDS-4198. Compile Ozone with multiple Java versions (#1387)
dc49daa is described below

commit dc49daa25910c701e1091a0e3ed544ad5c05d6f6
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Mon Sep 7 09:13:46 2020 +0200

HDDS-4198. Compile Ozone with multiple Java versions (#1387)
---
 .github/workflows/post-commit.yml| 17 ++---
 hadoop-ozone/dev-support/checks/build.sh |  2 +-
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/.github/workflows/post-commit.yml 
b/.github/workflows/post-commit.yml
index ff0111f..d963617 100644
--- a/.github/workflows/post-commit.yml
+++ b/.github/workflows/post-commit.yml
@@ -22,6 +22,10 @@ jobs:
   build:
 name: compile
 runs-on: ubuntu-18.04
+strategy:
+  matrix:
+java: [ 8, 11 ]
+  fail-fast: false
 steps:
   - name: Checkout project
 uses: actions/checkout@v2
@@ -34,10 +38,17 @@ jobs:
   key: ${{ runner.os }}-pnpm-${{ hashFiles('**/pnpm-lock.yaml') }}
   restore-keys: |
 ${{ runner.os }}-pnpm-
-  - name: Execute tests
-uses: ./.github/buildenv
+  - name: Cache for maven dependencies
+uses: actions/cache@v2
 with:
-  args: ./hadoop-ozone/dev-support/checks/build.sh
+  path: ~/.m2/repository
+  key: maven-repo-${{ hashFiles('**/pom.xml') }}
+  - name: Setup java
+uses: actions/setup-java@v1
+with:
+  java-version: ${{ matrix.java }}
+  - name: Run a full build
+run: hadoop-ozone/dev-support/checks/build.sh
   bats:
 runs-on: ubuntu-18.04
 steps:
diff --git a/hadoop-ozone/dev-support/checks/build.sh 
b/hadoop-ozone/dev-support/checks/build.sh
index 2cdc4fe..01a4f5c 100755
--- a/hadoop-ozone/dev-support/checks/build.sh
+++ b/hadoop-ozone/dev-support/checks/build.sh
@@ -17,5 +17,5 @@ DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 
&& pwd )"
 cd "$DIR/../../.." || exit 1
 
 export MAVEN_OPTS="-Xmx4096m"
-mvn -B -Dmaven.javadoc.skip=true -DskipTests clean install "$@"
+mvn -V -B -Dmaven.javadoc.skip=true -DskipTests clean install "$@"
 exit $?
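
A rough local equivalent of one entry in the new JDK matrix (the JAVA_HOME path is an assumption for a typical Linux layout; build.sh itself sets MAVEN_OPTS and runs the `mvn -V -B ... clean install` shown above):

```
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64   # or a JDK 8 home for the other matrix entry
hadoop-ozone/dev-support/checks/build.sh
```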





[hadoop-ozone] branch master updated: Removing an archaic reference to Skaffold in the README and other little improvements (#1360)

2020-09-04 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 7bf205c  Removing an archaic reference to Skaffold in the README and 
other little improvements (#1360)
7bf205c is described below

commit 7bf205c6050f4e9a50fcc6b1af1c8b5279997c64
Author: Alexander Scammon 
AuthorDate: Fri Sep 4 04:20:45 2020 -0700

Removing an archaic reference to Skaffold in the README and other little 
improvements (#1360)
---
 hadoop-ozone/dist/README.md | 52 ++---
 1 file changed, 16 insertions(+), 36 deletions(-)

diff --git a/hadoop-ozone/dist/README.md b/hadoop-ozone/dist/README.md
index 88132ec..520ba12 100644
--- a/hadoop-ozone/dist/README.md
+++ b/hadoop-ozone/dist/README.md
@@ -14,63 +14,43 @@
 
 # Ozone Distribution
 
-This folder contains the project to create the binary ozone distribution and 
provide all the helper script and docker files to start it locally or in the 
cluster.
+This folder contains the project to create the binary Ozone distribution and 
provide all the helper scripts and Docker files to start Ozone locally or on a 
remote cluster.
 
 ## Testing with local docker based cluster
 
-After a full dist build you can find multiple docker-compose based cluster 
definition in the `target/ozone-*/compose` folder.
+After a full dist build you can find multiple docker-compose-based cluster 
definitions in the `target/ozone-*/compose` folder.
 
 Please check the README files there.
 
-Usually you can start the cluster with:
+Usually, you can start the cluster with:
 
 ```
 cd compose/ozone
 docker-compose up -d
 ```
 
-## Testing on Kubernetes
-
-You can also test the ozone cluster in kubernetes. If you have no active 
kubernetes cluster you can start a local one with minikube:
-
-```
-minikube start
-```
-
-For testing in kubernetes you need to:
-
-1. Create a docker image with the new build
-2. Upload it to a docker registery
-3. Deploy the cluster with apply kubernetes resources
+More information can be found in the Getting Started Guide:
+* [Getting Started: Run Ozone with Docker 
Compose](https://hadoop.apache.org/ozone/docs/current/start/runningviadocker.html)
 
-The easiest way to do all these steps is using the 
[skaffold](https://github.com/GoogleContainerTools/skaffold) tool. After the 
[installation of 
skaffold](https://github.com/GoogleContainerTools/skaffold#installation), you 
can execute
-
-```
-skaffold run
-```
-
-in this  (`hadoop-ozone/dist`) folder.
+## Testing on Kubernetes
 
-The default kubernetes resources set (`src/main/k8s/`) contains NodePort based 
service definitions for the Ozone Manager, Storage Container Manager and the S3 
gateway.
 
-With minikube you can access the services with:
+### Installation
 
-```
-minikube service s3g-public
-minikube service om-public
-minikube service scm-public
-```
+Please refer to the Getting Started guide for a couple of options for testing 
Ozone on Kubernetes:
+* [Getting Started: Minikube and 
Ozone](https://hadoop.apache.org/ozone/docs/current/start/minikube.html)
+* [Getting Started: Ozone on 
Kubernetes](https://hadoop.apache.org/ozone/docs/current/start/kubernetes.html)
 
 ### Monitoring
 
-Apache Hadoop Ozone supports Prometheus out-of the box. It contains a 
prometheus compatible exporter servlet. To start the monitoring you need a 
prometheus deploy in your kubernetes  cluster:
+Apache Hadoop Ozone supports Prometheus out of the box. It contains a 
prometheus-compatible exporter servlet. To start monitoring you need a 
Prometheus deployment in your Kubernetes cluster:
 
 ```
 cd src/main/k8s/prometheus
 kubectl apply -f .
 ```
 
-The prometheus ui also could be access via a NodePort service:
+The Prometheus UI can be made accessible via a NodePort service:
 
 ```
 minikube service prometheus-public
@@ -78,8 +58,8 @@ minikube service prometheus-public
 
 ### Notes on the Kubernetes setup
 
-Please not that the provided kubernetes resources are not suitable production:
+Please note that the provided Kubernetes resources are not suitable for 
production:
 
-1. There are no security setup
-2. The datanode is started in StatefulSet instead of DaemonSet (To make it 
possible to scale it up on one node minikube cluster)
-3. All the UI pages are published with NodePort services
\ No newline at end of file
+1. There is no security setup.
+2. The datanode is started as a StatefulSet instead of DaemonSet.  This is to 
make it possible to scale it up on one-node minikube cluster.
+3. All of the UI pages are published with NodePort services.
\ No newline at end of file





[hadoop-ozone] branch master updated: HDDS-4197. Failed to load existing service definition files: ...SubcommandWithParent (#1386)

2020-09-04 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 549a1a0  HDDS-4197. Failed to load existing service definition files: 
...SubcommandWithParent (#1386)
549a1a0 is described below

commit 549a1a02c69847b65ea12c85245d0d812c45ccfd
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Fri Sep 4 12:20:36 2020 +0200

HDDS-4197. Failed to load existing service definition files: 
...SubcommandWithParent (#1386)
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index eca71e9..5409e61 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1335,7 +1335,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
   
 org.kohsuke.metainf-services
 metainf-services
-1.1
+1.8
 true
   
   





[hadoop-ozone] branch master updated: HDDS-4150. Disabling flaky unit test until HDDS-4150 is fixed.

2020-09-03 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new fd63aac  HDDS-4150. Disabling flaky unit test until HDDS-4150 is fixed.
fd63aac is described below

commit fd63aac48430e47ea29bc29de7869119735f5e56
Author: Elek Márton 
AuthorDate: Thu Sep 3 10:48:44 2020 +0200

HDDS-4150. Disabling flaky unit test until HDDS-4150 is fixed.
---
 .../src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java  | 2 ++
 1 file changed, 2 insertions(+)

diff --git 
a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java
 
b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java
index f1350a9..c7392a7 100644
--- 
a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java
+++ 
b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java
@@ -69,6 +69,7 @@ import org.jooq.Configuration;
 import org.jooq.DSLContext;
 import org.junit.Assert;
 import org.junit.Before;
+import org.junit.Ignore;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.TemporaryFolder;
@@ -364,6 +365,7 @@ public class TestEndpoints extends AbstractReconSqlDBTest {
   }
 
   @Test
+  @Ignore("HDDS-4150")
   public void testGetDatanodes() throws Exception {
 Response response = nodeEndpoint.getDatanodes();
 DatanodesResponse datanodesResponse =





[hadoop-ozone] branch master updated (199512b -> 77d56e6)

2020-09-02 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 199512b  HDDS-4131. Container report should update container key count 
and bytes used if they differ in SCM (#1339)
 add 77d56e6  HDDS-4165. GitHub Actions cache does not work outside of 
workspace (#1364)

No new revisions were added by this update.

Summary of changes:
 .github/workflows/post-commit.yml | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)





[hadoop-ozone] branch master updated: HDDS-4167. Acceptance test logs missing if fails during cluster startup (#1366)

2020-09-01 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 13fe31b  HDDS-4167. Acceptance test logs missing if fails during 
cluster startup (#1366)
13fe31b is described below

commit 13fe31b4927660bc2534656c6a7048b45d3c8051
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Tue Sep 1 16:39:51 2020 +0200

HDDS-4167. Acceptance test logs missing if fails during cluster startup 
(#1366)
---
 .../dist/src/main/compose/ozone-mr/test.sh | 22 +
 hadoop-ozone/dist/src/main/compose/test-all.sh | 21 +++--
 hadoop-ozone/dist/src/main/compose/testlib.sh  | 36 ++
 3 files changed, 49 insertions(+), 30 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/compose/ozone-mr/test.sh 
b/hadoop-ozone/dist/src/main/compose/ozone-mr/test.sh
index 6146dab..3a18d4d 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-mr/test.sh
+++ b/hadoop-ozone/dist/src/main/compose/ozone-mr/test.sh
@@ -1,3 +1,4 @@
+#!/usr/bin/env bash
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -15,29 +16,22 @@
 # limitations under the License.
 SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )
 ALL_RESULT_DIR="$SCRIPT_DIR/result"
+mkdir -p "$ALL_RESULT_DIR"
+rm "$ALL_RESULT_DIR/*" || true
 source "$SCRIPT_DIR/../testlib.sh"
 
 tests=$(find_tests)
+cd "$SCRIPT_DIR"
 
 RESULT=0
 # shellcheck disable=SC2044
 for t in ${tests}; do
   d="$(dirname "${t}")"
-  echo "Executing test in ${d}"
 
-  #required to read the .env file from the right location
-  cd "${d}" || continue
-  ./test.sh
-  ret=$?
-  if [[ $ret -ne 0 ]]; then
-  RESULT=1
-  echo "ERROR: Test execution of ${d} is FAILED"
+  if ! run_test_script "${d}"; then
+RESULT=1
   fi
-  cd "$SCRIPT_DIR"
-  RESULT_DIR="${d}/result"
-  TEST_DIR_NAME=$(basename ${d})
-  rebot -N $TEST_DIR_NAME -o "$ALL_RESULT_DIR"/$TEST_DIR_NAME.xml 
"$RESULT_DIR"/"*.xml"
-  cp "$RESULT_DIR"/docker-*.log "$ALL_RESULT_DIR"/
-  cp "$RESULT_DIR"/*.out* "$ALL_RESULT_DIR"/ || true
+
+  copy_results "${d}" "${ALL_RESULT_DIR}"
 done
 
diff --git a/hadoop-ozone/dist/src/main/compose/test-all.sh 
b/hadoop-ozone/dist/src/main/compose/test-all.sh
index 1fdc0ff..45a3c52 100755
--- a/hadoop-ozone/dist/src/main/compose/test-all.sh
+++ b/hadoop-ozone/dist/src/main/compose/test-all.sh
@@ -34,29 +34,18 @@ if [ "$OZONE_WITH_COVERAGE" ]; then
 fi
 
 tests=$(find_tests)
+cd "$SCRIPT_DIR"
 
 RESULT=0
 # shellcheck disable=SC2044
 for t in ${tests}; do
   d="$(dirname "${t}")"
-  echo "Executing test in ${d}"
 
-  #required to read the .env file from the right location
-  cd "${d}" || continue
-  set +e
-  ./test.sh
-  ret=$?
-  set -e
-  if [[ $ret -ne 0 ]]; then
-  RESULT=1
-  echo "ERROR: Test execution of ${d} is FAILED"
+  if ! run_test_script "${d}"; then
+RESULT=1
   fi
-  cd "$SCRIPT_DIR"
-  RESULT_DIR="${d}/result"
-  TEST_DIR_NAME=$(basename ${d})
-  rebot --nostatusrc -N $TEST_DIR_NAME -o "$ALL_RESULT_DIR"/$TEST_DIR_NAME.xml 
"$RESULT_DIR"/"*.xml"
-  cp "$RESULT_DIR"/docker-*.log "$ALL_RESULT_DIR"/
-  cp "$RESULT_DIR"/*.out* "$ALL_RESULT_DIR"/ || true
+
+  copy_results "${d}" "${ALL_RESULT_DIR}"
 done
 
 rebot --nostatusrc -N acceptance -d "$ALL_RESULT_DIR" "$ALL_RESULT_DIR"/*.xml
diff --git a/hadoop-ozone/dist/src/main/compose/testlib.sh 
b/hadoop-ozone/dist/src/main/compose/testlib.sh
index 228572f..db449b9 100755
--- a/hadoop-ozone/dist/src/main/compose/testlib.sh
+++ b/hadoop-ozone/dist/src/main/compose/testlib.sh
@@ -247,3 +247,39 @@ generate_report(){
  exit 1
   fi
 }
+
+## @description  Copy results of a single test environment to the "all tests" 
dir.
+copy_results() {
+  local test_dir="$1"
+  local all_result_dir="$2"
+
+  local result_dir="${test_dir}/result"
+  local test_dir_name=$(basename ${test_dir})
+  if [[ -n "$(find "${result_dir}" -name "*.xml")" ]]; then
+rebot --nostatusrc -N "${test_dir_name}" -o 
"${all_result_dir}/${test_dir_name}.xml" "${result_dir}/*.xml"
+  fi
+
+  cp "${result_dir}"/docker-*.log "${all_result_dir}"/
+  if [[ -n "$(find "
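
For context, a minimal sketch of how a single compose environment is exercised and where the artifacts handled by copy_results come from (the dist directory name depends on the build version):

```
cd hadoop-ozone/dist/target/ozone-*/compose/ozone
./test.sh        # runs the robot suites for this environment
ls result/       # robot *.xml output plus docker-*.log, which copy_results
                 # merges into the top-level result directory
```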

[hadoop-ozone] branch HDDS-4119 updated (b8d1e3d -> c4144cc)

2020-09-01 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-4119
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from b8d1e3d  HDDS-4099. No Log4j 2 configuration file found error appears 
in CLI (#1318)
 add 16347b7  teragenfix
 add 61a8e6a  revert genesis changes
 add 2e32191  cleanup patch
 add 962cfd5  Cleanup tests and block output stream
 add d4456c0  fix buffer pool allocation
 add 1c4f272  unit test fix
 add 65a542f  additional debug log
 add 256db2a  fix write(byte) with the help of Lokesh
 add 118d8ce  Additional fixes from Lokesh
 add 514f711  rat and checkstyle fixes
 add dd99deb  checkstyle fixes
 add a2d8fc5  Address review comments
 add ad9c07c  checkstyle fixes
 add 9ab01a7  move conditions to the helper methods
 add 8969b42  restore orginal writeChunk logic in handleFlush
 add 7bf5b29  Use incremental chunk buffer for time being
 add 40721a1  Revert single writeChunk() call with different condition
 add bc5b38b  Merge remote-tracking branch 'elek/HDDS-4119' into HDDS-4119
 add e5e89e0  HDDS-4114. Bump log4j2 version (#1325)
 add 59fc0bb  HDDS-4127. Components with web interface should depend on 
hdds-docs. (#1335)
 add 1abbfed  HDDS-4094. Support byte-level write in Freon 
HadoopFsGenerator (#1310)
 add 1c7003e  HDDS-4139. Update version number in upgrade tests (#1347)
 add c656feb  HDDS-4144. Update version info in hadoop client dependency 
readme (#1348)
 add 122eac5  HDDS-4074. [OFS] Implement AbstractFileSystem for 
RootedOzoneFileSystem (#1330)
 add 854fdc4  HDDS-4112. Improve SCM webui page performance (#1323)
 add c0084a1  HDDS-3654. Let backgroundCreator create pipeline for the 
supported replication factors alternately (#984)
 add a2080cf  HDDS-4111. Keep the CSI.zh.md consistent with CSI.md (#1320)
 add 8102ac7  HDDS-4062. Non rack aware pipelines should not be created if 
multiple racks are alive. (#1291)
 add 9292b39  HDDS-4068. Client should not retry same OM on network 
connection failure (#1324)
 add 7f674fd  HDDS-3972. Add option to limit number of items displaying 
through ldb tool. (#1206)
 add bc7786a  HDDS-4056. Convert OzoneAdmin to pluggable model (#1285)
 add 5fab834  HDDS-4152. Archive container logs for kubernetes check (#1355)
 add 5523636  HDDS-4140. Auto-close /pending pull requests after 21 days of 
inactivity (#1344)
 add dcb1c6e  HDDS-2411. add a datanode chunk validator for datanode chunk
generator (#1312)
 add 2f3edd9  HDDS-4153. Increase default timeout in kubernetes tests 
(#1357)
 add da61c4a  HDDS-4149. Implement OzoneFileStatus#toString (#1356)
 add d064230  HDDS-4109. Tests in TestOzoneFileSystem should use the 
existing MiniOzoneCluster (#1316)
 add f6e4417  HDDS-4145. Bump version to 1.1.0-SNAPSHOT on master (#1349)
 add 02289ce  HDDS-4146. Show the ScmId and ClusterId in the scm web ui. 
(#1350)
 add f64bc6e  HDDS-4137. Turn on the verbose mode of safe mode check on 
testlib (#1343)
 add 44acf78  HDDS-4147. Add OFS to FileSystem META-INF (#1352)
 add 8e98977  HDDS-4151. Skip the inputstream while offset larger than zero 
in s3g (#1354)
 add d34ab29  HDDS-3903. OzoneRpcClient support batch rename keys. (#1150)
 add 78ca8bf  HDDS-4077. Incomplete OzoneFileSystem statistics (#1329)
 add 0ec1a8a  HDDS-3867. Extend the chunkinfo tool to display information 
from all nodes in the pipeline. (#1154)
 add 0bce14d  Merge remote-tracking branch 'origin/master' into HDDS-4119
 add c4144cc  fix merge problem

No new revisions were added by this update.

Summary of changes:
 .github/close-pending.sh   |  41 
 .github/closing-message.txt|   7 +
 .github/comment-commands/close.sh  |  10 +-
 .github/comment-commands/pending.sh|   1 +
 .../{comments.yaml => close-pending.yaml}  |  19 +-
 hadoop-hdds/client/pom.xml |  15 +-
 .../apache/hadoop/hdds/scm/XceiverClientGrpc.java  |  30 +++
 .../apache/hadoop/hdds/scm/XceiverClientRatis.java |   7 +
 .../hadoop/hdds/scm/storage/BlockOutputStream.java | 131 ++
 .../apache/hadoop/hdds/scm/storage/BufferPool.java |  47 ++--
 .../hadoop/hdds/scm/storage/CommitWatcher.java |  37 +--
 .../storage/TestBlockOutputStreamCorrectness.java  | 224 +
 .../hadoop/hdds/scm/storage/TestBufferPool.java}   |  29 ++-
 hadoop-hdds/common/pom.xml |   4 +-
 .../org/apache/hadoop/hdds/cli/package-info.java   |   4 +-
 .../apache/hadoop/hdds/scm/XceiverClientSpi.java   |  15 +-
 .../hdds/scm/storage/ContainerProtocolCalls.java   | 110 +
 .../java/org/apache/hadoop/ozone/OzoneConsts.java  |   2 +
 .../apache/hadoop/ozone/common/ChunkBuffer.java|  14 +-
 .../common/ChunkBufferImplWithByteBuffer.java  | 

[hadoop-ozone] branch master updated (78ca8bf -> 0ec1a8a)

2020-08-31 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 78ca8bf  HDDS-4077. Incomplete OzoneFileSystem statistics (#1329)
 add 0ec1a8a  HDDS-3867. Extend the chunkinfo tool to display information 
from all nodes in the pipeline. (#1154)

No new revisions were added by this update.

Summary of changes:
 .../apache/hadoop/hdds/scm/XceiverClientGrpc.java  |  30 +
 .../apache/hadoop/hdds/scm/XceiverClientRatis.java |   7 +
 .../apache/hadoop/hdds/scm/XceiverClientSpi.java   |  11 ++
 .../hdds/scm/storage/ContainerProtocolCalls.java   |  34 +
 .../src/main/smoketest/debug/ozone-debug.robot |   4 +-
 .../apache/hadoop/ozone/debug/ChunkKeyHandler.java | 149 -
 .../hadoop/ozone/debug/ContainerChunkInfo.java |  21 +--
 7 files changed, 175 insertions(+), 81 deletions(-)





[hadoop-ozone] branch master updated (5523636 -> dcb1c6e)

2020-08-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 5523636  HDDS-4140. Auto-close /pending pull requests after 21 days of 
inactivity (#1344)
 add dcb1c6e  HDDS-2411. add a datanode chunk validator for datanode chunk
generator (#1312)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/ozone/freon/DatanodeChunkValidator.java | 244 +
 .../java/org/apache/hadoop/ozone/freon/Freon.java  |   1 +
 2 files changed, 245 insertions(+)
 create mode 100644 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/DatanodeChunkValidator.java





[hadoop-ozone] branch master updated (5fab834 -> 5523636)

2020-08-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 5fab834  HDDS-4152. Archive container logs for kubernetes check (#1355)
 add 5523636  HDDS-4140. Auto-close /pending pull requests after 21 days of 
inactivity (#1344)

No new revisions were added by this update.

Summary of changes:
 .github/close-pending.sh   | 41 ++
 .github/closing-message.txt|  7 
 .github/comment-commands/close.sh  | 10 ++
 .github/comment-commands/pending.sh|  1 +
 .../{comments.yaml => close-pending.yaml}  | 19 +-
 5 files changed, 61 insertions(+), 17 deletions(-)
 create mode 100755 .github/close-pending.sh
 create mode 100644 .github/closing-message.txt
 copy .github/workflows/{comments.yaml => close-pending.yaml} (75%)





[hadoop-ozone] branch master updated: HDDS-4152. Archive container logs for kubernetes check (#1355)

2020-08-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 5fab834  HDDS-4152. Archive container logs for kubernetes check (#1355)
5fab834 is described below

commit 5fab8343c8e798b71b17d918efbddb41e7cc05fb
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Thu Aug 27 10:56:56 2020 +0200

HDDS-4152. Archive container logs for kubernetes check (#1355)
---
 hadoop-ozone/dev-support/checks/kubernetes.sh |  2 +-
 .../dist/src/main/k8s/examples/getting-started/test.sh|  2 ++
 hadoop-ozone/dist/src/main/k8s/examples/minikube/test.sh  |  2 ++
 hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/test.sh |  2 ++
 hadoop-ozone/dist/src/main/k8s/examples/ozone/test.sh |  2 ++
 hadoop-ozone/dist/src/main/k8s/examples/test-all.sh   | 11 ---
 hadoop-ozone/dist/src/main/k8s/examples/testlib.sh|  7 +++
 7 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/kubernetes.sh 
b/hadoop-ozone/dev-support/checks/kubernetes.sh
index a23aa83..7f68da1 100755
--- a/hadoop-ozone/dev-support/checks/kubernetes.sh
+++ b/hadoop-ozone/dev-support/checks/kubernetes.sh
@@ -31,6 +31,6 @@ mkdir -p "$REPORT_DIR"
 cd "$DIST_DIR/kubernetes/examples" || exit 1
 ./test-all.sh
 RES=$?
-cp result/* "$REPORT_DIR/"
+cp -r result/* "$REPORT_DIR/"
 cp "$REPORT_DIR/log.html" "$REPORT_DIR/summary.html"
 exit $RES
diff --git a/hadoop-ozone/dist/src/main/k8s/examples/getting-started/test.sh 
b/hadoop-ozone/dist/src/main/k8s/examples/getting-started/test.sh
index dabe394..7d6bdfb 100755
--- a/hadoop-ozone/dist/src/main/k8s/examples/getting-started/test.sh
+++ b/hadoop-ozone/dist/src/main/k8s/examples/getting-started/test.sh
@@ -32,6 +32,8 @@ execute_robot_test scm-0 smoketest/basic/basic.robot
 
 combine_reports
 
+get_logs
+
 stop_k8s_env
 
 revert_resources
diff --git a/hadoop-ozone/dist/src/main/k8s/examples/minikube/test.sh 
b/hadoop-ozone/dist/src/main/k8s/examples/minikube/test.sh
index dabe394..7d6bdfb 100755
--- a/hadoop-ozone/dist/src/main/k8s/examples/minikube/test.sh
+++ b/hadoop-ozone/dist/src/main/k8s/examples/minikube/test.sh
@@ -32,6 +32,8 @@ execute_robot_test scm-0 smoketest/basic/basic.robot
 
 combine_reports
 
+get_logs
+
 stop_k8s_env
 
 revert_resources
diff --git a/hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/test.sh 
b/hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/test.sh
index dabe394..7d6bdfb 100755
--- a/hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/test.sh
+++ b/hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/test.sh
@@ -32,6 +32,8 @@ execute_robot_test scm-0 smoketest/basic/basic.robot
 
 combine_reports
 
+get_logs
+
 stop_k8s_env
 
 revert_resources
diff --git a/hadoop-ozone/dist/src/main/k8s/examples/ozone/test.sh 
b/hadoop-ozone/dist/src/main/k8s/examples/ozone/test.sh
index dabe394..7d6bdfb 100755
--- a/hadoop-ozone/dist/src/main/k8s/examples/ozone/test.sh
+++ b/hadoop-ozone/dist/src/main/k8s/examples/ozone/test.sh
@@ -32,6 +32,8 @@ execute_robot_test scm-0 smoketest/basic/basic.robot
 
 combine_reports
 
+get_logs
+
 stop_k8s_env
 
 revert_resources
diff --git a/hadoop-ozone/dist/src/main/k8s/examples/test-all.sh 
b/hadoop-ozone/dist/src/main/k8s/examples/test-all.sh
index 1d763ff..ae810c9 100755
--- a/hadoop-ozone/dist/src/main/k8s/examples/test-all.sh
+++ b/hadoop-ozone/dist/src/main/k8s/examples/test-all.sh
@@ -31,13 +31,18 @@ RESULT=0
 IFS=$'\n'
 # shellcheck disable=SC2044
 for test in $(find "$SCRIPT_DIR" -name test.sh | grep 
"${OZONE_TEST_SELECTOR:-""}" |sort); do
+  TEST_DIR="$(dirname $test)"
+  TEST_NAME="$(basename "$TEST_DIR")"
+
   echo ""
-  echo " Executing tests of $(dirname "$test") #"
+  echo " Executing tests of ${TEST_DIR} #"
   echo ""
-  TEST_DIR="$(dirname $test)"
   cd "$TEST_DIR" || continue
   ./test.sh
-  cp "$TEST_DIR"/result/output.xml "$ALL_RESULT_DIR"/"$(basename 
"$TEST_DIR")".xml
+
+  cp "$TEST_DIR"/result/output.xml "$ALL_RESULT_DIR"/"${TEST_NAME}".xml
+  mkdir -p "$ALL_RESULT_DIR"/"${TEST_NAME}"
+  mv "$TEST_DIR"/logs/*log "$ALL_RESULT_DIR"/"${TEST_NAME}"/
 done
 
 rebot -N "smoketests" -d "$ALL_RESULT_DIR/" "$ALL_RESULT_DIR/*.xml"
diff --git a/hadoop-ozone/dist/src/main/k8s/examples/testlib.sh 
b/hadoop-ozone/dist/src/main/k8s/examples/testlib.sh
index d33194d..5dff226 100644
--- a/hadoop-ozone/dist/src/main/k8s/examples/testlib.sh
+++ b/hadoop-ozone/dist/src/main/k8
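
The new get_logs helper (its body is truncated above) is meant to save the container logs that test-all.sh now archives per test; a hypothetical stand-in illustrating the general idea rather than the actual implementation:

```
mkdir -p logs
for pod in $(kubectl get pods -o name); do
  kubectl logs "$pod" > "logs/${pod#pod/}.log" || true   # one log file per pod
done
```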

[hadoop-ozone] branch master updated: HDDS-4056. Convert OzoneAdmin to pluggable model (#1285)

2020-08-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new bc7786a  HDDS-4056. Convert OzoneAdmin to pluggable model (#1285)
bc7786a is described below

commit bc7786a2fafb2d36923506f8de6c25fcfd26d55b
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Thu Aug 27 10:43:39 2020 +0200

HDDS-4056. Convert OzoneAdmin to pluggable model (#1285)
---
 .../org/apache/hadoop/hdds/cli/package-info.java   |   4 +-
 hadoop-hdds/tools/pom.xml  |   8 ++
 .../org/apache/hadoop/hdds/cli/OzoneAdmin.java |  67 +++
 .../WithScmClient.java => cli/package-info.java}   |  19 +---
 .../hdds/scm/cli/ReplicationManagerCommands.java   |  23 ++--
 .../scm/cli/ReplicationManagerStartSubcommand.java |  21 ++--
 .../cli/ReplicationManagerStatusSubcommand.java|  32 ++
 .../scm/cli/ReplicationManagerStopSubcommand.java  |  25 ++---
 .../hdds/scm/cli/SafeModeCheckSubcommand.java  |  40 +++
 .../hadoop/hdds/scm/cli/SafeModeCommands.java  |  27 ++---
 .../hdds/scm/cli/SafeModeExitSubcommand.java   |  22 ++--
 .../hdds/scm/cli/SafeModeWaitSubcommand.java   |  13 +--
 .../org/apache/hadoop/hdds/scm/cli/ScmOption.java  |  72 
 .../WithScmClient.java => ScmSubcommand.java}  |  24 +++-
 .../hadoop/hdds/scm/cli/TopologySubcommand.java|  65 ++-
 .../hdds/scm/cli/container/CloseSubcommand.java|  20 ++--
 .../hdds/scm/cli/container/ContainerCommands.java  |  21 ++--
 .../hdds/scm/cli/container/CreateSubcommand.java   |  26 ++---
 .../hdds/scm/cli/container/DeleteSubcommand.java   |  20 ++--
 .../hdds/scm/cli/container/InfoSubcommand.java |  40 +++
 .../hdds/scm/cli/container/ListSubcommand.java |  32 ++
 .../hdds/scm/cli/datanode/DatanodeCommands.java|  21 ++--
 .../hdds/scm/cli/datanode/ListInfoSubcommand.java  |  48 
 .../cli/pipeline/ActivatePipelineSubcommand.java   |  19 ++--
 .../scm/cli/pipeline/ClosePipelineSubcommand.java  |  19 ++--
 .../scm/cli/pipeline/CreatePipelineSubcommand.java |  38 +++
 .../cli/pipeline/DeactivatePipelineSubcommand.java |  19 ++--
 .../scm/cli/pipeline/ListPipelinesSubcommand.java  |  40 +++
 .../hdds/scm/cli/pipeline/PipelineCommands.java|  22 ++--
 .../admincli/{datanode.robot => admin.robot}   |  20 ++--
 .../src/main/smoketest/admincli/container.robot|  68 
 .../src/main/smoketest/admincli/datanode.robot |  19 ++--
 .../src/main/smoketest/admincli/pipeline.robot |  49 +++--
 .../smoketest/admincli/replicationmanager.robot|  53 +
 .../src/main/smoketest/admincli/safemode.robot |  45 
 hadoop-ozone/dist/src/shell/ozone/ozone|   2 +-
 .../hadoop/ozone/shell/TestOzoneDatanodeShell.java |   2 +-
 hadoop-ozone/tools/pom.xml |   2 -
 .../org/apache/hadoop/ozone/admin/OzoneAdmin.java  | 122 -
 .../org/apache/hadoop/ozone/admin/om/OMAdmin.java  |   2 +-
 .../TestGenerateOzoneRequiredConfigurations.java   |   5 +-
 pom.xml|   8 +-
 42 files changed, 686 insertions(+), 558 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/cli/package-info.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/cli/package-info.java
index 8dcc1d1..aabad6f 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/cli/package-info.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/cli/package-info.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -19,4 +19,4 @@
 /**
  * Generic helper class to make instantiate picocli based cli tools.
  */
-package org.apache.hadoop.hdds.cli;
\ No newline at end of file
+package org.apache.hadoop.hdds.cli;
diff --git a/hadoop-hdds/tools/pom.xml b/hadoop-hdds/tools/pom.xml
index f362a0b..fcc553f 100644
--- a/hadoop-hdds/tools/pom.xml
+++ b/hadoop-hdds/tools/pom.xml
@@ -67,6 +67,14 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd;>
   commons-cli
 
 
+  log4j
+  log4j
+
+
+  org.kohsuke.metainf-services
+  metainf-services
+
+
   org.xerial
   sqlite-jdbc
 
diff --git 
a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/cli/OzoneAdmin.java 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/cli/OzoneAdmin.java
new file mode 100644
index 000..aca8a4c
--- /dev/null
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/cli/OzoneAdmin.java
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  S
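
The smoketests added above (admin.robot, container.robot, datanode.robot, pipeline.robot, replicationmanager.robot, safemode.robot) exercise the plugged-in admin subcommands; the invocations below are inferred from the subcommand class names in the diffstat and may not match the exact CLI syntax:

```
ozone admin safemode check
ozone admin container list
ozone admin datanode list
ozone admin pipeline list
ozone admin replicationmanager status
```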

[hadoop-ozone] branch master updated: HDDS-3972. Add option to limit number of items displaying through ldb tool. (#1206)

2020-08-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 7f674fd  HDDS-3972. Add option to limit number of items displaying 
through ldb tool. (#1206)
7f674fd is described below

commit 7f674fdc5fccdc8b6899470fc91841ec2cbb2f22
Author: Sadanand Shenoy 
AuthorDate: Thu Aug 27 14:03:40 2020 +0530

HDDS-3972. Add option to limit number of items displaying through ldb tool. 
(#1206)
---
 .../org/apache/hadoop/ozone/om/TestOmLDBCli.java   | 120 +++
 .../org/apache/hadoop/ozone/om/TestOmSQLCli.java   | 235 -
 .../org/apache/hadoop/ozone/debug/DBScanner.java   |  62 --
 .../org/apache/hadoop/ozone/debug/RDBParser.java   |   4 +
 4 files changed, 172 insertions(+), 249 deletions(-)

diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmLDBCli.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmLDBCli.java
new file mode 100644
index 000..450eebb
--- /dev/null
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmLDBCli.java
@@ -0,0 +1,120 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om;
+
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.utils.db.DBStore;
+import org.apache.hadoop.hdds.utils.db.DBStoreBuilder;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.debug.DBScanner;
+import org.apache.hadoop.ozone.debug.RDBParser;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.request.TestOMRequestUtils;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.Assert;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.File;
+import java.util.List;
+import java.util.ArrayList;
+
+
+/**
+ * This class tests the Debug LDB CLI that reads from an om.db file.
+ */
+public class TestOmLDBCli {
+  private OzoneConfiguration conf;
+
+  private RDBParser rdbParser;
+  private DBScanner dbScanner;
+  private DBStore dbStore = null;
+  private static List keyNames;
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  @Before
+  public void setup() throws Exception {
+conf = new OzoneConfiguration();
+rdbParser = new RDBParser();
+dbScanner = new DBScanner();
+keyNames = new ArrayList<>();
+  }
+
+  @After
+  public void shutdown() throws Exception {
+if (dbStore!=null){
+  dbStore.close();
+}
+  }
+
+  @Test
+  public void testOMDB() throws Exception {
+File newFolder = folder.newFolder();
+if(!newFolder.exists()) {
+  Assert.assertTrue(newFolder.mkdirs());
+}
+// Dummy om.db with only keyTable
+dbStore = DBStoreBuilder.newBuilder(conf)
+  .setName("om.db")
+  .setPath(newFolder.toPath())
+  .addTable("keyTable")
+  .build();
+// insert 5 keys
+for (int i = 0; i<5; i++) {
+  OmKeyInfo value = TestOMRequestUtils.createOmKeyInfo("sampleVol",
+  "sampleBuck", "key" + (i+1), HddsProtos.ReplicationType.STAND_ALONE,
+  HddsProtos.ReplicationFactor.ONE);
+  String key = "key"+ (i);
+  Table<byte[], byte[]> keyTable = dbStore.getTable("keyTable");
+  keyTable.put(key.getBytes(), value.getProtobuf().toByteArray());
+}
+rdbParser.setDbPath(dbStore.getDbLocation().getAbsolutePath());
+dbScanner.setParent(rdbParser);
+Assert.assertEquals(5, getKeyNames(dbScanner).size());
+Assert.assertTrue(getKeyNames(dbScanner).contains("key1"));
+Assert.assertTrue(getKeyNames(dbScanner).contains("key5"));
+Assert.assertFalse(getKeyNames(dbScanner).contains("key6"));
+DBScanner.setLimit(1);
+Assert.assertEquals(1, getKeyNames(dbScanner).size());
+DBScanner.setLimit(-1);
+try {
+  getKeyNames(dbScanner);
+  Assert
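
A minimal sketch of how the new limit option is driven, following the wiring visible in TestOmLDBCli above. The collectKeyNames() helper below is hypothetical and stands in for the test's getKeyNames(), whose body is truncated in this archive; the imports are the ones the test already declares (RDBParser, DBScanner, Assert, List).

// Sketch only, not part of the committed patch.
RDBParser parser = new RDBParser();
parser.setDbPath(dbStore.getDbLocation().getAbsolutePath());
DBScanner scanner = new DBScanner();
scanner.setParent(parser);
DBScanner.setLimit(2);                         // show at most two records
List<String> keys = collectKeyNames(scanner);  // hypothetical helper
Assert.assertEquals(2, keys.size());
DBScanner.setLimit(-1);                        // a negative limit is expected to be rejected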

[hadoop-ozone] branch master updated (c0084a1 -> a2080cf)

2020-08-25 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from c0084a1  HDDS-3654. Let backgroundCreator create pipeline for the 
support replication factors alternately (#984)
 add a2080cf  HDDS-4111. Keep the CSI.zh.md consistent with CSI.md (#1320)

No new revisions were added by this update.

Summary of changes:
 hadoop-hdds/docs/content/interface/CSI.zh.md | 13 -
 1 file changed, 12 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (122eac5 -> 854fdc4)

2020-08-25 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 122eac5  HDDS-4074. [OFS] Implement AbstractFileSystem for 
RootedOzoneFileSystem (#1330)
 add 854fdc4  HDDS-4112. Improve SCM webui page performance (#1323)

No new revisions were added by this update.

Summary of changes:
 .../server-scm/src/main/resources/webapps/scm/scm-overview.html   | 4 ++--
 hadoop-hdds/server-scm/src/main/resources/webapps/scm/scm.js  | 4 
 2 files changed, 2 insertions(+), 6 deletions(-)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (c656feb -> 122eac5)

2020-08-25 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from c656feb  HDDS-4144. Update version info in hadoop client dependency 
readme (#1348)
 add 122eac5  HDDS-4074. [OFS] Implement AbstractFileSystem for 
RootedOzoneFileSystem (#1330)

No new revisions were added by this update.

Summary of changes:
 .../dist/src/main/compose/ozone-mr/hadoop27/docker-config|  1 +
 .../dist/src/main/compose/ozone-mr/hadoop31/docker-config|  1 +
 .../dist/src/main/compose/ozone-mr/hadoop32/docker-config|  1 +
 .../dist/src/main/compose/ozonesecure-mr/docker-config   |  1 +
 .../main/java/org/apache/hadoop/fs/ozone/RootedOzFs.java}| 12 ++--
 .../main/java/org/apache/hadoop/fs/ozone/RootedOzFs.java}| 12 ++--
 .../apache/hadoop/fs/ozone/{OzFs.java => RootedOzFs.java}| 12 ++--
 7 files changed, 22 insertions(+), 18 deletions(-)
 copy 
hadoop-ozone/{ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzFs.java
 => ozonefs-hadoop2/src/main/java/org/apache/hadoop/fs/ozone/RootedOzFs.java} 
(81%)
 copy hadoop-ozone/{ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzFs.java 
=> ozonefs-hadoop3/src/main/java/org/apache/hadoop/fs/ozone/RootedOzFs.java} 
(80%)
 copy hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/{OzFs.java 
=> RootedOzFs.java} (80%)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (1c7003e -> c656feb)

2020-08-25 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 1c7003e  HDDS-4139. Update version number in upgrade tests (#1347)
 add c656feb  HDDS-4144. Update version info in hadoop client dependency 
readme (#1348)

No new revisions were added by this update.

Summary of changes:
 hadoop-hdds/hadoop-dependency-client/README.md | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch ozone-1.0.0 updated: HDDS-4144. Update version info in hadoop client dependency readme (#1348)

2020-08-25 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-1.0.0
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/ozone-1.0.0 by this push:
 new 28d372c  HDDS-4144. Update version info in hadoop client dependency 
readme (#1348)
28d372c is described below

commit 28d372ca903b4741131bace09e0339e9161257bb
Author: Sammi Chen 
AuthorDate: Tue Aug 25 20:30:41 2020 +0800

HDDS-4144. Update version info in hadoop client dependency readme (#1348)
---
 hadoop-hdds/hadoop-dependency-client/README.md | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/hadoop-hdds/hadoop-dependency-client/README.md 
b/hadoop-hdds/hadoop-dependency-client/README.md
index 0ca9a1c..a3ec3680d 100644
--- a/hadoop-hdds/hadoop-dependency-client/README.md
+++ b/hadoop-hdds/hadoop-dependency-client/README.md
@@ -30,31 +30,31 @@ mvn dependency:tree
 [INFO] Scanning for projects...
 [INFO] 
 [INFO] ---< org.apache.hadoop:hadoop-hdds-hadoop-dependency-client >---
-[INFO] Building Apache Hadoop HDDS Hadoop Client dependencies 0.6.0-SNAPSHOT
+[INFO] Building Apache Hadoop HDDS Hadoop Client dependencies 1.0.0
 [INFO] [ jar ]-
-[INFO] 
+[INFO]
 [INFO] --- maven-dependency-plugin:3.0.2:tree (default-cli) @ 
hadoop-hdds-hadoop-dependency-client ---
-[INFO] 
org.apache.hadoop:hadoop-hdds-hadoop-dependency-client:jar:0.6.0-SNAPSHOT
-[INFO] +- org.apache.hadoop:hadoop-annotations:jar:3.2.0:compile
+[INFO] org.apache.hadoop:hadoop-hdds-hadoop-dependency-client:jar:1.0.0
+[INFO] +- org.apache.hadoop:hadoop-annotations:jar:3.2.1:compile
 [INFO] |  \- jdk.tools:jdk.tools:jar:1.8:system
-[INFO] +- org.apache.hadoop:hadoop-common:jar:3.2.0:compile
+[INFO] +- org.apache.hadoop:hadoop-common:jar:3.2.1:compile
 [INFO] |  +- org.apache.httpcomponents:httpclient:jar:4.5.2:compile
 [INFO] |  |  \- org.apache.httpcomponents:httpcore:jar:4.4.4:compile
 [INFO] |  +- org.apache.commons:commons-configuration2:jar:2.1.1:compile
 [INFO] |  +- com.google.re2j:re2j:jar:1.1:compile
 [INFO] |  +- com.google.protobuf:protobuf-java:jar:2.5.0:compile
-[INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.2.0:compile
+[INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.2.1:compile
 [INFO] |  +- com.google.code.findbugs:jsr305:jar:3.0.0:compile
 [INFO] |  +- org.apache.htrace:htrace-core4:jar:4.1.0-incubating:compile
 [INFO] |  +- org.codehaus.woodstox:stax2-api:jar:3.1.4:compile
 [INFO] |  \- com.fasterxml.woodstox:woodstox-core:jar:5.0.3:compile
-[INFO] +- org.apache.hadoop:hadoop-hdfs:jar:3.2.0:compile
+[INFO] +- org.apache.hadoop:hadoop-hdfs:jar:3.2.1:compile
 [INFO] \- junit:junit:jar:4.11:test
 [INFO]\- org.hamcrest:hamcrest-core:jar:1.3:test
 [INFO] 
 [INFO] BUILD SUCCESS
 [INFO] 
-[INFO] Total time:  1.144 s
-[INFO] Finished at: 2020-04-01T11:21:46+02:00
+[INFO] Total time:  1.464 s
+[INFO] Finished at: 2020-08-25T19:40:29+08:00
 [INFO] 
 ```


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-4114. Bump log4j2 version (#1325)

2020-08-14 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new e5e89e0  HDDS-4114. Bump log4j2 version (#1325)
e5e89e0 is described below

commit e5e89e0df591afb4b79a00296866a3842ca3d8e8
Author: Elek, Márton 
AuthorDate: Fri Aug 14 15:57:11 2020 +0200

HDDS-4114. Bump log4j2 version (#1325)
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 49ca857..f4b6414 100644
--- a/pom.xml
+++ b/pom.xml
@@ -146,7 +146,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
 
 1.7.25
 1.2.17
-2.11.0
+2.13.3
 3.4.2
 
 0.7.0


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch HDDS-4119 created (now b8d1e3d)

2020-08-14 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-4119
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


  at b8d1e3d  HDDS-4099. No Log4j 2 configuration file found error appears 
in CLI (#1318)

No new revisions were added by this update.


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-4057. Failed acceptance test missing from bundle (#1283)

2020-08-12 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 7ab53b5  HDDS-4057. Failed acceptance test missing from bundle (#1283)
7ab53b5 is described below

commit 7ab53b5209d710c7d6597972a3f66e2278b97544
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Wed Aug 12 12:11:17 2020 +0200

HDDS-4057. Failed acceptance test missing from bundle (#1283)
---
 hadoop-ozone/dist/src/main/compose/failing1/.env   |  1 +
 .../src/main/compose/failing1/docker-compose.yaml  |  1 +
 .../dist/src/main/compose/failing1/docker-config   |  1 +
 .../dist/src/main/compose/failing1/test.sh | 36 ++
 hadoop-ozone/dist/src/main/compose/failing2/.env   |  1 +
 .../src/main/compose/failing2/docker-compose.yaml  |  1 +
 .../dist/src/main/compose/failing2/docker-config   |  1 +
 .../dist/src/main/compose/failing2/test.sh | 36 ++
 hadoop-ozone/dist/src/main/compose/test-all.sh |  6 ++--
 .../dist/src/main/smoketest/failing/test1.robot| 21 +
 .../dist/src/main/smoketest/failing/test2.robot| 21 +
 11 files changed, 124 insertions(+), 2 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/compose/failing1/.env 
b/hadoop-ozone/dist/src/main/compose/failing1/.env
new file mode 12
index 000..c9b103f
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/failing1/.env
@@ -0,0 +1 @@
+../ozone/.env
\ No newline at end of file
diff --git a/hadoop-ozone/dist/src/main/compose/failing1/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/failing1/docker-compose.yaml
new file mode 12
index 000..76acad5
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/failing1/docker-compose.yaml
@@ -0,0 +1 @@
+../ozone/docker-compose.yaml
\ No newline at end of file
diff --git a/hadoop-ozone/dist/src/main/compose/failing1/docker-config 
b/hadoop-ozone/dist/src/main/compose/failing1/docker-config
new file mode 12
index 000..4969452
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/failing1/docker-config
@@ -0,0 +1 @@
+../ozone/docker-config
\ No newline at end of file
diff --git a/hadoop-ozone/dist/src/main/compose/failing1/test.sh 
b/hadoop-ozone/dist/src/main/compose/failing1/test.sh
new file mode 100755
index 000..cb8687f
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/failing1/test.sh
@@ -0,0 +1,36 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#suite:failing
+
+COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+export COMPOSE_DIR
+
+export SECURITY_ENABLED=false
+export OZONE_REPLICATION_FACTOR=3
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+start_docker_env
+
+execute_robot_test scm failing/test1.robot
+execute_robot_test scm failing/test2.robot
+
+stop_docker_env
+
+generate_report
diff --git a/hadoop-ozone/dist/src/main/compose/failing2/.env 
b/hadoop-ozone/dist/src/main/compose/failing2/.env
new file mode 12
index 000..c9b103f
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/failing2/.env
@@ -0,0 +1 @@
+../ozone/.env
\ No newline at end of file
diff --git a/hadoop-ozone/dist/src/main/compose/failing2/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/failing2/docker-compose.yaml
new file mode 12
index 000..76acad5
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/failing2/docker-compose.yaml
@@ -0,0 +1 @@
+../ozone/docker-compose.yaml
\ No newline at end of file
diff --git a/hadoop-ozone/dist/src/main/compose/failing2/docker-config 
b/hadoop-ozone/dist/src/main/compose/failing2/docker-config
new file mode 12
index 000..4969452
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/failing2/docker-config
@@ -0,0 +1 @@
+../ozone/docker-config
\ No newline at end of file
diff --git a/hadoop-ozone/dist/src/main/compose/failing2/test.sh 
b/hadoop-ozone/dist/src/main/compose/failing2/test.sh
new file mode 100755
index 000..cb8687f
--- /dev/nul

[hadoop-ozone] branch ozone-0.6.0 updated: HDDS-3878. Make OMHA serviceID optional if one (but only one) is defined in the config (#1149)

2020-08-11 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.6.0
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/ozone-0.6.0 by this push:
 new 665e84f  HDDS-3878. Make OMHA serviceID optional if one (but only one) 
is defined in the config (#1149)
665e84f is described below

commit 665e84f0971cc4551c5a0005fff0aa3106eca460
Author: Elek, Márton 
AuthorDate: Tue Aug 11 15:49:44 2020 +0200

HDDS-3878. Make OMHA serviceID optional if one (but only one) is defined in 
the config (#1149)
---
 .../hadoop/hdds/conf/InMemoryConfiguration.java|  58 +++
 .../hadoop/ozone/client/OzoneClientFactory.java|   8 +-
 hadoop-ozone/dist/src/main/compose/ozone-ha/.env   |  19 +++
 .../src/main/compose/ozone-ha/docker-compose.yaml  |  93 +++
 .../dist/src/main/compose/ozone-ha/docker-config   |  35 +
 .../dist/src/main/compose/ozone-ha/test.sh |  33 
 .../{ozone-shell.robot => ozone-shell-lib.robot}   |  25 +--
 .../main/smoketest/basic/ozone-shell-single.robot  |  27 
 .../src/main/smoketest/basic/ozone-shell.robot | 119 +-
 .../hadoop/ozone/shell/TestOzoneShellHA.java   |  45 +++---
 .../hadoop/ozone/freon/BaseFreonGenerator.java |  12 ++
 .../apache/hadoop/ozone/shell/OzoneAddress.java| 103 
 .../hadoop/ozone/shell/TestOzoneAddress.java   |   6 +-
 .../shell/TestOzoneAddressClientCreation.java  | 172 +
 14 files changed, 552 insertions(+), 203 deletions(-)

diff --git 
a/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/InMemoryConfiguration.java
 
b/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/InMemoryConfiguration.java
new file mode 100644
index 000..0bea7af
--- /dev/null
+++ 
b/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/InMemoryConfiguration.java
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * In memory, mutable configuration source for testing.
+ */
+public class InMemoryConfiguration implements MutableConfigurationSource {
+
+  private Map<String, String> configs = new HashMap<>();
+
+  public InMemoryConfiguration() {
+  }
+
+  public InMemoryConfiguration(String key, String value) {
+set(key, value);
+  }
+
+  @Override
+  public String get(String key) {
+return configs.get(key);
+  }
+
+  @Override
+  public Collection<String> getConfigKeys() {
+return configs.keySet();
+  }
+
+  @Override
+  public char[] getPassword(String key) throws IOException {
+return configs.get(key).toCharArray();
+  }
+
+  @Override
+  public void set(String key, String value) {
+configs.put(key, value);
+  }
+}
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
index 2f7b107..9bf3973 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
@@ -137,13 +137,17 @@ public final class OzoneClientFactory {
 // configuration, we don't fall back to default ozone.om.address defined
 // in ozone-default.xml.
 
-if (OmUtils.isServiceIdsDefined(config)) {
+String[] serviceIds = config.getTrimmedStrings(OZONE_OM_SERVICE_IDS_KEY);
+if (serviceIds.length > 1) {
   throw new IOException("Following ServiceID's " +
   config.getTrimmedStringCollection(OZONE_OM_SERVICE_IDS_KEY) + " are" 
+
   " defined in the configuration. Use the method getRpcClient which " +
   "takes serviceID and configuration as param");
+} else if (serviceIds.length == 1) {
+  return getRpcClient(getClientProtocol(config, serviceIds[0]), config);
+} else {
+  return getRpcClient(getClientProtocol(config), config);
 }
-return getRpcClient(getClientProtocol
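
The new InMemoryConfiguration gives tests a mutable configuration source without any XML files. A minimal usage sketch, based only on the methods shown in the diff above; the key names are illustrative:

import org.apache.hadoop.hdds.conf.InMemoryConfiguration;

public class InMemoryConfigurationExample {
  public static void main(String[] args) throws Exception {
    // Seed one key/value at construction time, then mutate freely.
    InMemoryConfiguration conf =
        new InMemoryConfiguration("ozone.om.service.ids", "omservice");
    conf.set("ozone.security.enabled", "true");

    System.out.println(conf.get("ozone.om.service.ids"));              // omservice
    System.out.println(conf.getConfigKeys().size());                   // 2
    System.out.println(new String(conf.getPassword("ozone.security.enabled"))); // true
  }
}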

[hadoop-ozone] branch master updated: HDDS-3878. Make OMHA serviceID optional if one (but only one) is defined in the config (#1149)

2020-08-11 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new cee43e9  HDDS-3878. Make OMHA serviceID optional if one (but only one) 
is defined in the config (#1149)
cee43e9 is described below

commit cee43e908f700633a283f8791687523341a8ee27
Author: Elek, Márton 
AuthorDate: Tue Aug 11 15:49:44 2020 +0200

HDDS-3878. Make OMHA serviceID optional if one (but only one) is defined in 
the config (#1149)
---
 .../hadoop/hdds/conf/InMemoryConfiguration.java|  58 +++
 .../hadoop/ozone/client/OzoneClientFactory.java|   8 +-
 hadoop-ozone/dist/src/main/compose/ozone-ha/.env   |  19 +++
 .../src/main/compose/ozone-ha/docker-compose.yaml  |  93 +++
 .../dist/src/main/compose/ozone-ha/docker-config   |  35 +
 .../dist/src/main/compose/ozone-ha/test.sh |  33 
 .../{ozone-shell.robot => ozone-shell-lib.robot}   |  25 +--
 .../main/smoketest/basic/ozone-shell-single.robot  |  27 
 .../src/main/smoketest/basic/ozone-shell.robot | 119 +-
 .../hadoop/ozone/shell/TestOzoneShellHA.java   |  45 +++---
 .../hadoop/ozone/freon/BaseFreonGenerator.java |  12 ++
 .../apache/hadoop/ozone/shell/OzoneAddress.java| 103 
 .../hadoop/ozone/shell/TestOzoneAddress.java   |   6 +-
 .../shell/TestOzoneAddressClientCreation.java  | 172 +
 14 files changed, 552 insertions(+), 203 deletions(-)

diff --git 
a/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/InMemoryConfiguration.java
 
b/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/InMemoryConfiguration.java
new file mode 100644
index 000..0bea7af
--- /dev/null
+++ 
b/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/InMemoryConfiguration.java
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * In memory, mutable configuration source for testing.
+ */
+public class InMemoryConfiguration implements MutableConfigurationSource {
+
+  private Map<String, String> configs = new HashMap<>();
+
+  public InMemoryConfiguration() {
+  }
+
+  public InMemoryConfiguration(String key, String value) {
+set(key, value);
+  }
+
+  @Override
+  public String get(String key) {
+return configs.get(key);
+  }
+
+  @Override
+  public Collection<String> getConfigKeys() {
+return configs.keySet();
+  }
+
+  @Override
+  public char[] getPassword(String key) throws IOException {
+return configs.get(key).toCharArray();
+  }
+
+  @Override
+  public void set(String key, String value) {
+configs.put(key, value);
+  }
+}
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
index 2f7b107..9bf3973 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
@@ -137,13 +137,17 @@ public final class OzoneClientFactory {
 // configuration, we don't fall back to default ozone.om.address defined
 // in ozone-default.xml.
 
-if (OmUtils.isServiceIdsDefined(config)) {
+String[] serviceIds = config.getTrimmedStrings(OZONE_OM_SERVICE_IDS_KEY);
+if (serviceIds.length > 1) {
   throw new IOException("Following ServiceID's " +
   config.getTrimmedStringCollection(OZONE_OM_SERVICE_IDS_KEY) + " are" 
+
   " defined in the configuration. Use the method getRpcClient which " +
   "takes serviceID and configuration as param");
+} else if (serviceIds.length == 1) {
+  return getRpcClient(getClientProtocol(config, serviceIds[0]), config);
+} else {
+  return getRpcClient(getClientProtocol(config), config);
 }
-return getRpcClient(getClientProtocol(config)
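
The client-side effect of this change, sketched under the assumption of an OM HA setup whose per-node addresses are already configured for a service id named "omservice" (the id below is illustrative): with exactly one entry in ozone.om.service.ids, the plain factory call now resolves it instead of throwing.

import java.io.IOException;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;

public final class SingleServiceIdExample {
  public static OzoneClient connect() throws IOException {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Exactly one service id defined; the matching per-node ozone.om.*
    // addresses are assumed to come from ozone-site.xml.
    conf.set("ozone.om.service.ids", "omservice");
    // Before this change the call below threw and required the
    // overload that takes an explicit serviceID.
    return OzoneClientFactory.getRpcClient(conf);
  }
}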

[hadoop-ozone] branch master updated: HDDS-3833. Use Pipeline choose policy to choose pipeline from exist pipeline list (#1096)

2020-08-11 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new a79dfae  HDDS-3833. Use Pipeline choose policy to choose pipeline from 
exist pipeline list (#1096)
a79dfae is described below

commit a79dfae611e06d7fc30e90ffe61ca62909ae9e35
Author: maobaolong <307499...@qq.com>
AuthorDate: Tue Aug 11 21:09:57 2020 +0800

HDDS-3833. Use Pipeline choose policy to choose pipeline from exist 
pipeline list (#1096)
---
 .../hadoop/hdds/scm/PipelineChoosePolicy.java  |  37 +++
 .../hdds/scm/PipelineRequestInformation.java   |  59 
 .../java/org/apache/hadoop/hdds/scm/ScmConfig.java |  23 +
 .../org/apache/hadoop/hdds/scm/ScmConfigKeys.java  |   1 +
 .../hadoop/hdds/scm/exceptions/SCMException.java   |   3 +-
 .../src/main/proto/ScmServerProtocol.proto |   1 +
 .../hadoop/hdds/scm/block/BlockManagerImpl.java|  14 ++-
 .../algorithms/HealthyPipelineChoosePolicy.java|  46 +
 .../algorithms/PipelineChoosePolicyFactory.java| 106 +
 .../algorithms/RandomPipelineChoosePolicy.java |  38 
 .../pipeline/choose/algorithms/package-info.java   |  18 
 .../hdds/scm/server/StorageContainerManager.java   |   8 ++
 .../TestPipelineChoosePolicyFactory.java   |  94 ++
 13 files changed, 443 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/PipelineChoosePolicy.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/PipelineChoosePolicy.java
new file mode 100644
index 000..c829e2e
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/PipelineChoosePolicy.java
@@ -0,0 +1,37 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.hdds.scm;
+
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+
+import java.util.List;
+
+/**
+ * A {@link PipelineChoosePolicy} supports choosing a pipeline from an existing list.
+ */
+public interface PipelineChoosePolicy {
+
+  /**
+   * Given an initial list of pipelines, return one of the pipelines.
+   *
+   * @param pipelineList list of pipelines.
+   * @return one of the pipelines.
+   */
+  Pipeline choosePipeline(List<Pipeline> pipelineList,
+  PipelineRequestInformation pri);
+}
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/PipelineRequestInformation.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/PipelineRequestInformation.java
new file mode 100644
index 000..ac0cfbe
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/PipelineRequestInformation.java
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm;
+
+/**
+ * Information about a pipeline request.
+ */
+public final class PipelineRequestInformation {
+  private long size;
+
+  /**
+   * Builder for PipelineRequestInformation.
+   */
+  public static class Builder {
+private long size;
+
+public static Builder getBuilder() {
+  return new Builder();
+}
+
+/**
+ * sets the size.
+ * @param sz request size
+ * @return Builder for PipelineRequestInformation
+ */
+public Builder setSize(l
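
For illustration, a minimal implementation of the new PipelineChoosePolicy interface above: it simply returns the first candidate and ignores the request information. This is only a sketch, not one of the policies added by the commit (those are HealthyPipelineChoosePolicy and RandomPipelineChoosePolicy).

import java.util.List;
import org.apache.hadoop.hdds.scm.PipelineChoosePolicy;
import org.apache.hadoop.hdds.scm.PipelineRequestInformation;
import org.apache.hadoop.hdds.scm.pipeline.Pipeline;

/**
 * Sketch: always picks the first pipeline in the candidate list,
 * ignoring the size carried by the request information.
 */
public class FirstPipelineChoosePolicy implements PipelineChoosePolicy {
  @Override
  public Pipeline choosePipeline(List<Pipeline> pipelineList,
      PipelineRequestInformation pri) {
    // Callers are expected to pass a non-empty candidate list.
    return pipelineList.get(0);
  }
}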

[hadoop-ozone] branch master updated: HDDS-3979. Make bufferSize configurable for stream copy (#1212)

2020-08-11 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 9a702e5  HDDS-3979. Make bufferSize configurable for stream copy 
(#1212)
9a702e5 is described below

commit 9a702e5c35cfdffe7f0f1a4e2cc7ec97ec8e6d4e
Author: maobaolong <307499...@qq.com>
AuthorDate: Tue Aug 11 20:20:17 2020 +0800

HDDS-3979. Make bufferSize configurable for stream copy (#1212)
---
 .../org/apache/hadoop/ozone/OzoneConfigKeys.java   |   1 +
 .../common/src/main/resources/ozone-default.xml|   8 ++
 .../hadoop/ozone/client/io/KeyInputStream.java |  62 ++-
 .../ozone/client/rpc/TestKeyInputStream.java   | 119 ++---
 .../hadoop/ozone/s3/S3GatewayConfigKeys.java   |   6 ++
 .../hadoop/ozone/s3/endpoint/ObjectEndpoint.java   |  36 ---
 .../hadoop/ozone/s3/io/S3WrapperInputStream.java   |  36 +--
 7 files changed, 107 insertions(+), 161 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
index d89fef9..482ac88 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
@@ -468,6 +468,7 @@ public final class OzoneConfigKeys {
   public static final String  OZONE_CLIENT_HTTPS_NEED_AUTH_KEY =
   "ozone.https.client.need-auth";
   public static final boolean OZONE_CLIENT_HTTPS_NEED_AUTH_DEFAULT = false;
+
   /**
* There is no need to instantiate this class.
*/
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index b9774aa..5770448 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -2432,6 +2432,14 @@
 
   
   
+ozone.s3g.client.buffer.size
+OZONE, S3GATEWAY
+4KB
+
+  The size of the buffer used when reading blocks (4KB by default).
+
+  
+  
 ssl.server.keystore.keypassword
 OZONE, SECURITY, MANAGEMENT
 
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
index 4af6838..769035a 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
@@ -20,7 +20,6 @@ package org.apache.hadoop.ozone.client.io;
 import com.google.common.annotations.VisibleForTesting;
 
 import org.apache.commons.collections.CollectionUtils;
-import org.apache.commons.io.IOUtils;
 import org.apache.hadoop.fs.FSExceptionMessages;
 import org.apache.hadoop.fs.Seekable;
 import org.apache.hadoop.hdds.client.BlockID;
@@ -35,7 +34,6 @@ import org.slf4j.LoggerFactory;
 import java.io.EOFException;
 import java.io.IOException;
 import java.io.InputStream;
-import java.io.OutputStream;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
@@ -325,62 +323,14 @@ public class KeyInputStream extends InputStream 
implements Seekable {
 return blockStreams.get(index).getRemaining();
   }
 
-  /**
-   * Copies some or all bytes from a large (over 2GB) InputStream
-   * to an OutputStream, optionally skipping input bytes.
-   * 
-   * Copy the method from IOUtils of commons-io to reimplement skip by seek
-   * rather than read. The reason why IOUtils of commons-io implement skip
-   * by read can be found at
-   * <a href="https://issues.apache.org/jira/browse/IO-203">IO-203</a>.
-   * 
-   * 
-   * This method uses the provided buffer, so there is no need to use a
-   * BufferedInputStream.
-   * 
-   *
-   * @param output the OutputStream to write to
-   * @param inputOffset : number of bytes to skip from input before copying
-   * -ve values are ignored
-   * @param length : number of bytes to copy. -ve means all
-   * @param buffer the buffer to use for the copy
-   * @return the number of bytes copied
-   * @throws NullPointerException if the input or output is null
-   * @throws IOException  if an I/O error occurs
-   */
-  public long copyLarge(final OutputStream output,
-  final long inputOffset, final long len, final byte[] buffer)
-  throws IOException {
-if (inputOffset > 0) {
-  seek(inputOffset);
-}
-
-if (len == 0) {
+  @Override
+  public long skip(long n) throws IOException {
+if (n <= 0) {
   return 0;
 }
 
-final int bufferLength = buffer.length;
-int bytesToRead = bufferLength;
-if (len > 0 && len < bufferLength) {
-  bytesToRead = (int) len;
-}
-
-int read;
-long tot
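
With copyLarge() replaced by a seek-based skip(), a ranged read over a key stream can lean on the plain InputStream contract. A sketch, under the assumption that keyStream is the stream obtained from reading an Ozone key (so that skip() is ultimately served by KeyInputStream's seek):

import java.io.IOException;
import java.io.InputStream;

public final class RangedReadExample {
  /**
   * Sketch: read up to buf.length bytes starting at the given offset.
   * With the seek-based skip() above, skipping ahead no longer reads
   * and discards the leading bytes of the key.
   */
  public static int readAt(InputStream keyStream, long offset, byte[] buf)
      throws IOException {
    long remaining = offset;
    while (remaining > 0) {
      long skipped = keyStream.skip(remaining);
      if (skipped <= 0) {
        return -1;                 // hit EOF before reaching the offset
      }
      remaining -= skipped;
    }
    return keyStream.read(buf);
  }
}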

[hadoop-ozone] branch master updated (5ce6f0e -> ca8eb40)

2020-08-10 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 5ce6f0e  HDDS-4042. Update documentation for the GA release (#1269)
 add ca8eb40  HDDS-4055. Cleanup GitHub workflow (#1282)

No new revisions were added by this update.

Summary of changes:
 .github/workflows/post-commit.yml | 294 +-
 1 file changed, 161 insertions(+), 133 deletions(-)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (99b693e -> db31571)

2020-08-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 99b693e  HDDS-4073. Remove leftover robot.robot (#1297)
 add db31571  HDDS-4066. Add core-site.xml to intellij configuration (#1292)

No new revisions were added by this update.

Summary of changes:
 .../test/resources => dev-support/intellij}/core-site.xml   | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)
 copy hadoop-ozone/{integration-test/src/test/resources => 
dev-support/intellij}/core-site.xml (80%)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (d7ea496 -> 99b693e)

2020-08-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from d7ea496  HDDS-4044. Deprecate ozone.s3g.volume.name. #1270
 add 99b693e  HDDS-4073. Remove leftover robot.robot (#1297)

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/dist/src/main/smoketest/robot.robot | 81 
 1 file changed, 81 deletions(-)
 delete mode 100644 hadoop-ozone/dist/src/main/smoketest/robot.robot


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] 01/01: add warning where translations are missing

2020-08-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch HDDS-4042-work
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git

commit 3a746909887c59d9ea3ba1362263ab7571bc0487
Author: Elek Márton 
AuthorDate: Fri Aug 7 16:22:14 2020 +0200

add warning where translations are missing
---
 hadoop-hdds/docs/content/concept/OzoneManager.zh.md| 6 ++
 hadoop-hdds/docs/content/concept/StorageContainerManager.zh.md | 6 ++
 hadoop-hdds/docs/content/feature/GDPR.zh.md| 5 +
 hadoop-hdds/docs/content/interface/O3fs.zh.md  | 6 ++
 hadoop-hdds/docs/content/start/FromSource.zh.md| 7 ++-
 5 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdds/docs/content/concept/OzoneManager.zh.md 
b/hadoop-hdds/docs/content/concept/OzoneManager.zh.md
index 5e9ab7f..27b33c5 100644
--- a/hadoop-hdds/docs/content/concept/OzoneManager.zh.md
+++ b/hadoop-hdds/docs/content/concept/OzoneManager.zh.md
@@ -21,6 +21,12 @@ summary: Ozone Manager 是 Ozone 主要的命名空间服务,它管理了卷
   limitations under the License.
 -->
 
+
+
+注意:本页面翻译的信息可能滞后,最新的信息请参看英文版的相关页面。
+
+
+
 Ozone Manager(OM)管理 Ozone 的命名空间。
 
 当向 Ozone 写入数据时,你需要向 OM 请求一个块,OM 会返回一个块并记录下相关信息。当你想要读取那个文件时,你也需要先通过 OM 获取那个块的地址。
diff --git a/hadoop-hdds/docs/content/concept/StorageContainerManager.zh.md 
b/hadoop-hdds/docs/content/concept/StorageContainerManager.zh.md
index d530906..da29869 100644
--- a/hadoop-hdds/docs/content/concept/StorageContainerManager.zh.md
+++ b/hadoop-hdds/docs/content/concept/StorageContainerManager.zh.md
@@ -21,6 +21,12 @@ summary:  Storage Container Manager(SCM)是 Ozone 的核心元数据服务
   limitations under the License.
 -->
 
+
+
+注意:本页面翻译的信息可能滞后,最新的信息请参看英文版的相关页面。
+
+
+
 SCM 为 Ozone 集群提供了多种重要功能,包括:集群管理、证书管理、块管理和副本管理等。
 
 {{}}
diff --git a/hadoop-hdds/docs/content/feature/GDPR.zh.md 
b/hadoop-hdds/docs/content/feature/GDPR.zh.md
index e44957f..af0684d 100644
--- a/hadoop-hdds/docs/content/feature/GDPR.zh.md
+++ b/hadoop-hdds/docs/content/feature/GDPR.zh.md
@@ -22,6 +22,11 @@ icon: user
   limitations under the License.
 -->
 
+
+
+注意:本页面翻译的信息可能滞后,最新的信息请参看英文版的相关页面。
+
+
 
 在 Ozone 中遵守 GDPR 规范非常简单,只需要在创建桶时指定 `--enforcegdpr=true` 或  `-g=true` 
参数,这样创建出的桶都是符合 GDPR 规范的,当然,在桶中创建的键也都自动符合。
 
diff --git a/hadoop-hdds/docs/content/interface/O3fs.zh.md 
b/hadoop-hdds/docs/content/interface/O3fs.zh.md
index 9969919..0b2a06f 100644
--- a/hadoop-hdds/docs/content/interface/O3fs.zh.md
+++ b/hadoop-hdds/docs/content/interface/O3fs.zh.md
@@ -21,6 +21,12 @@ summary: Hadoop 文件系统兼容使得任何使用类 HDFS 接口的应用无
   limitations under the License.
 -->
 
+
+
+注意:本页面翻译的信息可能滞后,最新的信息请参看英文版的相关页面。
+
+
+
 Hadoop 的文件系统接口兼容可以让任意像 Ozone 这样的存储后端轻松地整合进 Hadoop 生态系统,Ozone 文件系统就是一个兼容 Hadoop 
的文件系统。
 目前ozone支持两种协议: 
o3fs和ofs。两者最大的区别是o3fs只支持在单个bucket上操作,而ofs则支持跨所有volume和bucket的操作。关于两者在操作
 上的具体区别可以参考ofs.md中的"Differences from existing o3fs"。
diff --git a/hadoop-hdds/docs/content/start/FromSource.zh.md 
b/hadoop-hdds/docs/content/start/FromSource.zh.md
index a1b9f37..ab740af 100644
--- a/hadoop-hdds/docs/content/start/FromSource.zh.md
+++ b/hadoop-hdds/docs/content/start/FromSource.zh.md
@@ -19,10 +19,15 @@ weight: 30
   limitations under the License.
 -->
 
+
+
+注意:本页面翻译的信息可能滞后,最新的信息请参看英文版的相关页面。
+
+
+
 {{< requirements >}}
  * Java 1.8
  * Maven
- * Protoc (2.5)
 {{< /requirements >}}
 
 本文档是关于从源码构建 Ozone 的指南,如果你

[hadoop-ozone] branch HDDS-4042-work created (now 3a74690)

2020-08-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-4042-work
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


  at 3a74690  add warning where translations are missing

This branch includes the following new commits:

 new 3a74690  add warning where translations are missing

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch HDDS-3878 created (now 9980b56)

2020-08-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-3878
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


  at 9980b56  one line method def

This branch includes the following new commits:

 new 29550ce  typo fix
 new 9980b56  one line method def

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] 01/02: typo fix

2020-08-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch HDDS-3878
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git

commit 29550ce803abb075d7899e7a7e747d97d0e9cbc7
Author: Elek Márton 
AuthorDate: Fri Aug 7 15:55:15 2020 +0200

typo fix
---
 .../src/main/java/org/apache/hadoop/test/InMemoryConfiguration.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdds/test-utils/src/main/java/org/apache/hadoop/test/InMemoryConfiguration.java
 
b/hadoop-hdds/test-utils/src/main/java/org/apache/hadoop/test/InMemoryConfiguration.java
index abf8d37..ee74fef 100644
--- 
a/hadoop-hdds/test-utils/src/main/java/org/apache/hadoop/test/InMemoryConfiguration.java
+++ 
b/hadoop-hdds/test-utils/src/main/java/org/apache/hadoop/test/InMemoryConfiguration.java
@@ -25,7 +25,7 @@ import java.util.Map;
 import org.apache.hadoop.hdds.conf.MutableConfigurationSource;
 
 /**
- * In memory, mutable configuration source for testing..
+ * In memory, mutable configuration source for testing.
  */
 public class InMemoryConfiguration implements MutableConfigurationSource {
 


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (1613726 -> 0993d12)

2020-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 1613726  HDDS-3511. Fix javadoc comment in OmMetadataManager (#1247)
 add 0993d12  HDDS-4019. Show the storageDir while need init om or scm 
(#1248)

No new revisions were added by this update.

Summary of changes:
 .../apache/hadoop/hdds/scm/server/StorageContainerManager.java| 3 ++-
 .../src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java| 8 +---
 2 files changed, 7 insertions(+), 4 deletions(-)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-3511. Fix javadoc comment in OmMetadataManager (#1247)

2020-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 1613726  HDDS-3511. Fix javadoc comment in OmMetadataManager (#1247)
1613726 is described below

commit 16137265d62c1e2cc56d8e86f913070100bbfcb3
Author: Lisa <30621230+aeioul...@users.noreply.github.com>
AuthorDate: Wed Jul 29 20:58:23 2020 +0800

HDDS-3511. Fix javadoc comment in OmMetadataManager (#1247)
---
 .../src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
index e64b023..36d219b 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
+++ 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
@@ -116,7 +116,7 @@ public class OmMetadataManagerImpl implements 
OMMetadataManager {
* |--|
* | s3SecretTable  | s3g_access_key_id -> s3Secret   |
* |--|
-   * | dTokenTable| s3g_access_key_id -> s3Secret   |
+   * | dTokenTable| OzoneTokenID -> renew_time  |
* |--|
* | prefixInfoTable| prefix -> PrefixInfo|
* |--|


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-4041. Ozone /conf endpoint triggers kerberos replay error when SPNEGO is enabled. (#1267)

2020-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 21c08ee  HDDS-4041. Ozone /conf endpoint triggers kerberos replay 
error when SPNEGO is enabled. (#1267)
21c08ee is described below

commit 21c08eef622bc085c0dc16b2804c33280c0c88b0
Author: Xiaoyu Yao 
AuthorDate: Wed Jul 29 04:32:24 2020 -0700

HDDS-4041. Ozone /conf endpoint triggers kerberos replay error when SPNEGO 
is enabled. (#1267)
---
 .../apache/hadoop/hdds/server/http/HttpServer2.java | 21 +
 .../dist/src/main/smoketest/spnego/web.robot| 14 ++
 2 files changed, 35 insertions(+)

diff --git 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HttpServer2.java
 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HttpServer2.java
index 3a2c49b..9282c84 100644
--- 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HttpServer2.java
+++ 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HttpServer2.java
@@ -893,6 +893,27 @@ public final class HttpServer2 implements FilterContainer {
 }
 webAppContext.addServlet(holder, pathSpec);
 
+// Remove any previous filter attached to the removed servlet path to avoid
+// Kerberos replay error.
+FilterMapping[] filterMappings = webAppContext.getServletHandler().
+getFilterMappings();
+for (int i = 0; i < filterMappings.length; i++) {
+  if (filterMappings[i].getPathSpecs() == null) {
+LOG.debug("Skip checking {} filterMappings {} without a path spec.",
+filterMappings[i].getFilterName(), filterMappings[i]);
+continue;
+  }
+  int oldPathSpecsLen = filterMappings[i].getPathSpecs().length;
+  String[] newPathSpecs =
+  ArrayUtil.removeFromArray(filterMappings[i].getPathSpecs(), 
pathSpec);
+  if (newPathSpecs.length == 0) {
+webAppContext.getServletHandler().setFilterMappings(
+ArrayUtil.removeFromArray(filterMappings, filterMappings[i]));
+  } else if (newPathSpecs.length != oldPathSpecsLen) {
+filterMappings[i].setPathSpecs(newPathSpecs);
+  }
+}
+
 if (requireAuth && UserGroupInformation.isSecurityEnabled()) {
   LOG.info("Adding Kerberos (SPNEGO) filter to {}", name);
   ServletHandler handler = webAppContext.getServletHandler();
diff --git a/hadoop-ozone/dist/src/main/smoketest/spnego/web.robot 
b/hadoop-ozone/dist/src/main/smoketest/spnego/web.robot
index 9c4156f..065e390 100644
--- a/hadoop-ozone/dist/src/main/smoketest/spnego/web.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/spnego/web.robot
@@ -30,6 +30,11 @@ ${OM_SERVICE_LIST_URL}   http://om:9874/serviceList
 ${SCM_URL}   http://scm:9876
 ${RECON_URL}   http://recon:9888
 
+${SCM_CONF_URL} http://scm:9876/conf
+${SCM_JMX_URL}  http://scm:9876/jmx
+${SCM_STACKS_URL}   http://scm:9876/stacks
+
+
 *** Keywords ***
 Verify SPNEGO enabled URL
 [arguments]  ${url}
@@ -60,6 +65,15 @@ Test OM Service List
 Test SCM portal
 Verify SPNEGO enabled URL   ${SCM_URL}
 
+Test SCM conf
+Verify SPNEGO enabled URL   ${SCM_CONF_URL}
+
+Test SCM jmx
+Verify SPNEGO enabled URL   ${SCM_JMX_URL}
+
+Test SCM stacks
+Verify SPNEGO enabled URL   ${SCM_STACKS_URL}
+
 Test Recon portal
 Verify SPNEGO enabled URL   ${RECON_URL}
 


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch ozone-0.6.0 updated: HDDS-4041. Ozone /conf endpoint triggers kerberos replay error when SPNEGO is enabled. (#1267)

2020-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.6.0
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/ozone-0.6.0 by this push:
 new 87e0016  HDDS-4041. Ozone /conf endpoint triggers kerberos replay 
error when SPNEGO is enabled. (#1267)
87e0016 is described below

commit 87e0016843fb486028102368e62773df55447d45
Author: Xiaoyu Yao 
AuthorDate: Wed Jul 29 04:32:24 2020 -0700

HDDS-4041. Ozone /conf endpoint triggers kerberos replay error when SPNEGO 
is enabled. (#1267)
---
 .../apache/hadoop/hdds/server/http/HttpServer2.java | 21 +
 .../dist/src/main/smoketest/spnego/web.robot| 14 ++
 2 files changed, 35 insertions(+)

diff --git 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HttpServer2.java
 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HttpServer2.java
index 3a2c49b..9282c84 100644
--- 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HttpServer2.java
+++ 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HttpServer2.java
@@ -893,6 +893,27 @@ public final class HttpServer2 implements FilterContainer {
 }
 webAppContext.addServlet(holder, pathSpec);
 
+// Remove any previous filter attached to the removed servlet path to avoid
+// Kerberos replay error.
+FilterMapping[] filterMappings = webAppContext.getServletHandler().
+getFilterMappings();
+for (int i = 0; i < filterMappings.length; i++) {
+  if (filterMappings[i].getPathSpecs() == null) {
+LOG.debug("Skip checking {} filterMappings {} without a path spec.",
+filterMappings[i].getFilterName(), filterMappings[i]);
+continue;
+  }
+  int oldPathSpecsLen = filterMappings[i].getPathSpecs().length;
+  String[] newPathSpecs =
+  ArrayUtil.removeFromArray(filterMappings[i].getPathSpecs(), 
pathSpec);
+  if (newPathSpecs.length == 0) {
+webAppContext.getServletHandler().setFilterMappings(
+ArrayUtil.removeFromArray(filterMappings, filterMappings[i]));
+  } else if (newPathSpecs.length != oldPathSpecsLen) {
+filterMappings[i].setPathSpecs(newPathSpecs);
+  }
+}
+
 if (requireAuth && UserGroupInformation.isSecurityEnabled()) {
   LOG.info("Adding Kerberos (SPNEGO) filter to {}", name);
   ServletHandler handler = webAppContext.getServletHandler();
diff --git a/hadoop-ozone/dist/src/main/smoketest/spnego/web.robot 
b/hadoop-ozone/dist/src/main/smoketest/spnego/web.robot
index 9c4156f..065e390 100644
--- a/hadoop-ozone/dist/src/main/smoketest/spnego/web.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/spnego/web.robot
@@ -30,6 +30,11 @@ ${OM_SERVICE_LIST_URL}   http://om:9874/serviceList
 ${SCM_URL}   http://scm:9876
 ${RECON_URL}   http://recon:9888
 
+${SCM_CONF_URL} http://scm:9876/conf
+${SCM_JMX_URL}  http://scm:9876/jmx
+${SCM_STACKS_URL}   http://scm:9876/stacks
+
+
 *** Keywords ***
 Verify SPNEGO enabled URL
 [arguments]  ${url}
@@ -60,6 +65,15 @@ Test OM Service List
 Test SCM portal
 Verify SPNEGO enabled URL   ${SCM_URL}
 
+Test SCM conf
+Verify SPNEGO enabled URL   ${SCM_CONF_URL}
+
+Test SCM jmx
+Verify SPNEGO enabled URL   ${SCM_JMX_URL}
+
+Test SCM stacks
+Verify SPNEGO enabled URL   ${SCM_STACKS_URL}
+
 Test Recon portal
 Verify SPNEGO enabled URL   ${RECON_URL}
 


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-4031. Run shell tests in CI (#1261)

2020-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 829b860  HDDS-4031. Run shell tests in CI (#1261)
829b860 is described below

commit 829b8602b1667be2374a9376b416ba8f2919245c
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Wed Jul 29 12:47:58 2020 +0200

HDDS-4031. Run shell tests in CI (#1261)
---
 .github/workflows/post-commit.yml | 20 +++
 hadoop-ozone/dev-support/checks/bats.sh   | 35 +++
 hadoop-ozone/dist/src/test/shell/gc_opts.bats |  6 ++---
 3 files changed, 57 insertions(+), 4 deletions(-)

diff --git a/.github/workflows/post-commit.yml 
b/.github/workflows/post-commit.yml
index d0b5433..992715f 100644
--- a/.github/workflows/post-commit.yml
+++ b/.github/workflows/post-commit.yml
@@ -35,6 +35,26 @@ jobs:
   - uses: ./.github/buildenv
 with:
   args: ./hadoop-ozone/dev-support/checks/build.sh
+  bats:
+runs-on: ubuntu-18.04
+steps:
+  - uses: actions/checkout@v2
+  - name: install bats
+run: |
+  cd /tmp
+  curl -LSs 
https://github.com/bats-core/bats-core/archive/v1.2.1.tar.gz | tar xzf -
+  cd bats-core-1.2.1
+  sudo ./install.sh /usr/local
+  - name: run tests
+run: ./hadoop-ozone/dev-support/checks/${{ github.job }}.sh
+  - name: Summary of failures
+run: cat target/${{ github.job }}/summary.txt
+if: always()
+  - uses: actions/upload-artifact@master
+if: always()
+with:
+  name: ${{ github.job }}
+  path: target/${{ github.job }}
   rat:
 name: rat
 runs-on: ubuntu-18.04
diff --git a/hadoop-ozone/dev-support/checks/bats.sh 
b/hadoop-ozone/dev-support/checks/bats.sh
new file mode 100755
index 000..2e1bbad
--- /dev/null
+++ b/hadoop-ozone/dev-support/checks/bats.sh
@@ -0,0 +1,35 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+cd "${DIR}/../../.." || exit 1
+
+REPORT_DIR=${OUTPUT_DIR:-"${DIR}/../../../target/bats"}
+mkdir -p "${REPORT_DIR}"
+REPORT_FILE="${REPORT_DIR}/summary.txt"
+
+rm -f "${REPORT_DIR}/output.log"
+
+find * -path '*/src/test/shell/*' -name '*.bats' -print0 \
+  | xargs -0 -n1 bats --formatter tap \
+  | tee -a "${REPORT_DIR}/output.log"
+
+grep '^\(not ok\|#\)' "${REPORT_DIR}/output.log" > "${REPORT_FILE}"
+
+grep -c '^not ok' "${REPORT_FILE}" > "${REPORT_DIR}/failures"
+
+if [[ -s "${REPORT_FILE}" ]]; then
+   exit 1
+fi
diff --git a/hadoop-ozone/dist/src/test/shell/gc_opts.bats 
b/hadoop-ozone/dist/src/test/shell/gc_opts.bats
index 1400a40..feb29af 100644
--- a/hadoop-ozone/dist/src/test/shell/gc_opts.bats
+++ b/hadoop-ozone/dist/src/test/shell/gc_opts.bats
@@ -14,14 +14,12 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-
-
 #
 # Can be executed with bats (https://github.com/bats-core/bats-core)
-# bats gc_opts.bats (FROM THE CURRENT DIRECTORY)
+# bats gc_opts.bats
 #
 
-source ../../shell/hdds/hadoop-functions.sh
+load ../../shell/hdds/hadoop-functions.sh
 @test "Setting Hadoop GC parameters: add GC params for server" {
   export HADOOP_SUBCMD_SUPPORTDAEMONIZATION=true
   export HADOOP_OPTS="Test"


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-4038. Eliminate GitHub check warnings (#1268)

2020-07-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new a77d9ea  HDDS-4038. Eliminate GitHub check warnings (#1268)
a77d9ea is described below

commit a77d9ea1d36ad705b474b6f312ff90d48da47295
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Wed Jul 29 12:35:42 2020 +0200

HDDS-4038. Eliminate GitHub check warnings (#1268)
---
 .github/workflows/comments.yaml   |  2 +-
 .github/workflows/post-commit.yml | 30 +++---
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/.github/workflows/comments.yaml b/.github/workflows/comments.yaml
index bfab244..2341662 100644
--- a/.github/workflows/comments.yaml
+++ b/.github/workflows/comments.yaml
@@ -25,7 +25,7 @@ jobs:
 name: check-comment
 runs-on: ubuntu-latest
 steps:
-  - uses: actions/checkout@master
+  - uses: actions/checkout@v2
   - run: ./.github/process-comment.sh
 env:
   GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
diff --git a/.github/workflows/post-commit.yml 
b/.github/workflows/post-commit.yml
index 344602f..d0b5433 100644
--- a/.github/workflows/post-commit.yml
+++ b/.github/workflows/post-commit.yml
@@ -23,7 +23,7 @@ jobs:
 name: compile
 runs-on: ubuntu-18.04
 steps:
-  - uses: actions/checkout@master
+  - uses: actions/checkout@v2
   - uses: actions/cache@v2
 with:
   path: |
@@ -39,14 +39,14 @@ jobs:
 name: rat
 runs-on: ubuntu-18.04
 steps:
-- uses: actions/checkout@master
+- uses: actions/checkout@v2
 - uses: ./.github/buildenv
   with:
  args: ./hadoop-ozone/dev-support/checks/rat.sh
 - name: Summary of failures
   run: cat target/${{ github.job }}/summary.txt
   if: always()
-- uses: actions/upload-artifact@master
+- uses: actions/upload-artifact@v2
   if: always()
   with:
 name: rat
@@ -56,12 +56,12 @@ jobs:
 name: author
 runs-on: ubuntu-18.04
 steps:
-- uses: actions/checkout@master
+- uses: actions/checkout@v2
 - run: hadoop-ozone/dev-support/checks/author.sh
 - name: Summary of failures
   run: cat target/${{ github.job }}/summary.txt
   if: always()
-- uses: actions/upload-artifact@master
+- uses: actions/upload-artifact@v2
   if: always()
   with:
 name: author
@@ -71,14 +71,14 @@ jobs:
 name: unit
 runs-on: ubuntu-18.04
 steps:
-- uses: actions/checkout@master
+- uses: actions/checkout@v2
 - uses: ./.github/buildenv
   with:
  args: ./hadoop-ozone/dev-support/checks/unit.sh
 - name: Summary of failures
   run: cat target/${{ github.job }}/summary.txt
   if: always()
-- uses: actions/upload-artifact@master
+- uses: actions/upload-artifact@v2
   if: always()
   with:
 name: unit
@@ -88,14 +88,14 @@ jobs:
 name: checkstyle
 runs-on: ubuntu-18.04
 steps:
-- uses: actions/checkout@master
+- uses: actions/checkout@v2
 - uses: ./.github/buildenv
   with:
  args: ./hadoop-ozone/dev-support/checks/checkstyle.sh
 - name: Summary of failures
   run: cat target/${{ github.job }}/summary.txt
   if: always()
-- uses: actions/upload-artifact@master
+- uses: actions/upload-artifact@v2
   if: always()
   with:
 name: checkstyle
@@ -105,14 +105,14 @@ jobs:
 name: findbugs
 runs-on: ubuntu-18.04
 steps:
-- uses: actions/checkout@master
+- uses: actions/checkout@v2
 - uses: ./.github/buildenv
   with:
  args: ./hadoop-ozone/dev-support/checks/findbugs.sh
 - name: Summary of failures
   run: cat target/${{ github.job }}/summary.txt
   if: always()
-- uses: actions/upload-artifact@master
+- uses: actions/upload-artifact@v2
   if: always()
   with:
 name: findbugs
@@ -166,7 +166,7 @@ jobs:
   OZONE_ACCEPTANCE_SUITE: ${{ matrix.suite }}
   OZONE_WITH_COVERAGE: true
   OZONE_VOLUME_OWNER: 1000
-  - uses: actions/upload-artifact@master
+  - uses: actions/upload-artifact@v2
 if: always()
 with:
   name: acceptance-${{ matrix.suite }}
@@ -189,7 +189,7 @@ jobs:
   fail-fast: false
 steps:
 - run: sudo mkdir mnt && sudo mount --bind /mnt `pwd`/mnt && sudo 
chmod 777 mnt
-- uses: actions/checkout@master
+- uses: actions/checkout@v2
   with:
 path: mnt/ozone
 - uses: ./mnt/ozone/.github/bui

[hadoop-ozone] branch master updated: HDDS-4017. Acceptance check may run against wrong commit (#1249)

2020-07-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new fd47f91  HDDS-4017. Acceptance check may run against wrong commit 
(#1249)
fd47f91 is described below

commit fd47f915c65512de6618811e6c247871066a8351
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Mon Jul 27 17:28:02 2020 +0200

HDDS-4017. Acceptance check may run against wrong commit (#1249)
---
 .github/workflows/post-commit.yml | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/.github/workflows/post-commit.yml 
b/.github/workflows/post-commit.yml
index e472aea..3e617bc 100644
--- a/.github/workflows/post-commit.yml
+++ b/.github/workflows/post-commit.yml
@@ -146,9 +146,13 @@ jobs:
   - name: checkout to /mnt/ozone
 run: |
   sudo chmod 777 /mnt
-  git clone https://github.com/${GITHUB_REPOSITORY}.git /mnt/ozone
+  git clone 'https://github.com/${{ github.repository }}.git' 
/mnt/ozone
   cd /mnt/ozone
-  git fetch origin "${GITHUB_REF}"
+  if [[ '${{ github.event_name }}' == 'pull_request' ]]; then
+git fetch --verbose origin '${{ github.ref }}'
+  else
+git fetch --verbose origin '${{ github.sha }}'
+  fi
   git checkout FETCH_HEAD
   git reset --hard
   - name: run a full build
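
The net effect of the change above is that pull_request runs fetch the PR merge ref while push runs fetch the exact pushed commit, and both check out FETCH_HEAD. A hypothetical sanity check after that step (the values are examples, not part of the workflow):

    cd /mnt/ozone
    git rev-parse HEAD      # for a push event this should equal the pushed ${{ github.sha }}
    git log -1 --oneline    # for a pull_request event this shows the fetched PR merge commit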


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-4000. Split acceptance tests to reduce CI feedback time (#1236)

2020-07-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new a7fe726  HDDS-4000. Split acceptance tests to reduce CI feedback time 
(#1236)
a7fe726 is described below

commit a7fe7260f5a72809fe5387725c92942c454ce899
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Mon Jul 27 13:46:22 2020 +0200

HDDS-4000. Split acceptance tests to reduce CI feedback time (#1236)
---
 .github/workflows/post-commit.yml  | 17 ---
 hadoop-ozone/dev-support/checks/acceptance.sh  |  2 +
 hadoop-ozone/dist/src/main/compose/ozone/test.sh   |  2 +
 .../dist/src/main/compose/ozonesecure/test.sh  |  2 +
 hadoop-ozone/dist/src/main/compose/test-all.sh | 31 +---
 pom.xml| 55 +-
 6 files changed, 42 insertions(+), 67 deletions(-)

diff --git a/.github/workflows/post-commit.yml 
b/.github/workflows/post-commit.yml
index 9588423..e472aea 100644
--- a/.github/workflows/post-commit.yml
+++ b/.github/workflows/post-commit.yml
@@ -123,6 +123,13 @@ jobs:
   acceptance:
 name: acceptance
 runs-on: ubuntu-18.04
+strategy:
+  matrix:
+suite:
+  - secure
+  - unsecure
+  - misc
+  fail-fast: false
 steps:
   - uses: actions/cache@v2
 with:
@@ -154,12 +161,13 @@ jobs:
   cd /mnt/ozone && hadoop-ozone/dev-support/checks/acceptance.sh
 env:
   KEEP_IMAGE: false
+  OZONE_ACCEPTANCE_SUITE: ${{ matrix.suite }}
   OZONE_WITH_COVERAGE: true
   OZONE_VOLUME_OWNER: 1000
   - uses: actions/upload-artifact@master
 if: always()
 with:
-  name: acceptance
+  name: acceptance-${{ matrix.suite }}
   path: /mnt/ozone/target/acceptance
 continue-on-error: true
   - run: |
@@ -170,16 +178,11 @@ jobs:
   integration:
 name: integration
 runs-on: ubuntu-18.04
-needs:
-- build
 strategy:
   matrix:
 profile:
   - client
-  - filesystem
-  - filesystem-contract
-  - freon
-  - hdds-om
+  - filesystem-hdds
   - ozone
   fail-fast: false
 steps:
diff --git a/hadoop-ozone/dev-support/checks/acceptance.sh 
b/hadoop-ozone/dev-support/checks/acceptance.sh
index d95c034..99d8d52 100755
--- a/hadoop-ozone/dev-support/checks/acceptance.sh
+++ b/hadoop-ozone/dev-support/checks/acceptance.sh
@@ -28,6 +28,8 @@ fi
 
 mkdir -p "$REPORT_DIR"
 
+export OZONE_ACCEPTANCE_SUITE
+
 cd "$DIST_DIR/compose" || exit 1
 ./test-all.sh
 RES=$?
diff --git a/hadoop-ozone/dist/src/main/compose/ozone/test.sh 
b/hadoop-ozone/dist/src/main/compose/ozone/test.sh
index c40339e..b5b778f 100755
--- a/hadoop-ozone/dist/src/main/compose/ozone/test.sh
+++ b/hadoop-ozone/dist/src/main/compose/ozone/test.sh
@@ -15,6 +15,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+#suite:unsecure
+
 COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 export COMPOSE_DIR
 
diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure/test.sh 
b/hadoop-ozone/dist/src/main/compose/ozonesecure/test.sh
index 84de2a9..076b83a 100755
--- a/hadoop-ozone/dist/src/main/compose/ozonesecure/test.sh
+++ b/hadoop-ozone/dist/src/main/compose/ozonesecure/test.sh
@@ -15,6 +15,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+#suite:secure
+
 COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 export COMPOSE_DIR
 
diff --git a/hadoop-ozone/dist/src/main/compose/test-all.sh 
b/hadoop-ozone/dist/src/main/compose/test-all.sh
index e7f6f71..da3b80e 100755
--- a/hadoop-ozone/dist/src/main/compose/test-all.sh
+++ b/hadoop-ozone/dist/src/main/compose/test-all.sh
@@ -31,21 +31,40 @@ if [ "$OZONE_WITH_COVERAGE" ]; then
export 
HADOOP_OPTS="-javaagent:share/coverage/jacoco-agent.jar=output=tcpclient,address=$DOCKER_BRIDGE_IP,includes=org.apache.hadoop.ozone.*:org.apache.hadoop.hdds.*:org.apache.hadoop.fs.ozone.*"
 fi
 
+if [[ -n "${OZONE_ACCEPTANCE_SUITE}" ]]; then
+  tests=$(find "$SCRIPT_DIR" -name test.sh | xargs grep -l 
"^#suite:${OZONE_ACCEPTANCE_SUITE}$" | sort)
+
+  # 'misc' is default suite, add untagged tests, too
+  if [[ "misc" == "${OZONE_ACCEPTANCE_SUITE}" ]]; then
+untagged="$(find "$SCRIPT_DIR" -name test.sh | xargs grep -L "^#suite:")"
+if [[ -n "${untagged}" ]]; then
+  tests=$(echo ${tests} 

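Each compose environment opts into a suite with a '#suite:<name>' comment in its test.sh (see the ozone and ozonesecure changes above), and test-all.sh selects environments by grepping for that tag; the 'misc' suite also picks up untagged environments, as the truncated snippet above shows. A rough sketch of the selection step, assuming the compose directory from this patch:

    # list the test.sh files belonging to one suite, e.g. 'unsecure'
    export OZONE_ACCEPTANCE_SUITE=unsecure
    cd hadoop-ozone/dist/src/main/compose
    find . -name test.sh -print0 \
      | xargs -0 grep -l "^#suite:${OZONE_ACCEPTANCE_SUITE}$" \
      | sort
    # tagging a new environment is just adding a '#suite:<name>' line to its test.sh
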
[hadoop-ozone] branch master updated (a123b4e -> 2ba43d0)

2020-07-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from a123b4e  HDDS-4022. Ozone s3 API return 400 Bad Request for 
head-bucket for non existing bucket. (#1251)
 add 2ba43d0  HDDS-3905. Show status of OM in the OM web ui (#1152)

No new revisions were added by this update.

Summary of changes:
 .../src/main/resources/webapps/ozoneManager/main.html  | 2 ++
 .../src/main/resources/webapps/ozoneManager/om-overview.html   | 5 +
 .../src/main/resources/webapps/ozoneManager/ozoneManager.js| 7 ++-
 3 files changed, 13 insertions(+), 1 deletion(-)
 copy 
hadoop-hdds/container-service/src/main/resources/webapps/hddsDatanode/dn-overview.html
 => 
hadoop-ozone/ozone-manager/src/main/resources/webapps/ozoneManager/om-overview.html
 (89%)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch storage-class created (now f339bc5)

2020-07-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch storage-class
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


  at f339bc5  HDDS-4022. Ozone s3 API return 400 Bad Request for 
head-bucket for non existing bucket. (#1251)

This branch includes the following new commits:

 new f339bc5  HDDS-4022. Ozone s3 API return 400 Bad Request for 
head-bucket for non existing bucket. (#1251)

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] 01/01: HDDS-4022. Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket. (#1251)

2020-07-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch storage-class
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git

commit f339bc502d5d2db9e752046dfb568c26d93b1ad7
Author: Bharat Viswanadham 
AuthorDate: Mon Jul 27 04:23:25 2020 -0700

HDDS-4022. Ozone s3 API return 400 Bad Request for head-bucket for non 
existing bucket. (#1251)
---
 hadoop-ozone/dist/src/main/smoketest/s3/buckethead.robot  |  5 +++--
 .../org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java   |  8 +---
 .../org/apache/hadoop/ozone/s3/endpoint/TestBucketHead.java   | 11 +--
 3 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/smoketest/s3/buckethead.robot 
b/hadoop-ozone/dist/src/main/smoketest/s3/buckethead.robot
index 7666871..f3ecd01 100644
--- a/hadoop-ozone/dist/src/main/smoketest/s3/buckethead.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/s3/buckethead.robot
@@ -31,5 +31,6 @@ ${BUCKET} generated
 Head Bucket not existent
 ${result} = Execute AWSS3APICli head-bucket --bucket ${BUCKET}
 ${result} = Execute AWSS3APICli and checkrc  head-bucket 
--bucket ozonenosuchbucketqqweqwe  255
-Should contain  ${result}Bad Request
-Should contain  ${result}400
+Should contain  ${result}404
+Should contain  ${result}Not Found
+
diff --git 
a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
 
b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
index fbb2dbf..87aa0ff 100644
--- 
a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
+++ 
b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
@@ -31,7 +31,6 @@ import javax.ws.rs.core.Context;
 import javax.ws.rs.core.HttpHeaders;
 import javax.ws.rs.core.MediaType;
 import javax.ws.rs.core.Response;
-import javax.ws.rs.core.Response.Status;
 import java.io.IOException;
 import java.io.InputStream;
 import java.util.Iterator;
@@ -250,12 +249,7 @@ public class BucketEndpoint extends EndpointBase {
   getBucket(bucketName);
 } catch (OS3Exception ex) {
   LOG.error("Exception occurred in headBucket", ex);
-  //TODO: use a subclass fo OS3Exception and catch it here.
-  if (ex.getCode().contains("NoSuchBucket")) {
-return Response.status(Status.BAD_REQUEST).build();
-  } else {
-throw ex;
-  }
+  throw ex;
 }
 return Response.ok().build();
   }
diff --git 
a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestBucketHead.java
 
b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestBucketHead.java
index 6f991e6..18b4b2c 100644
--- 
a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestBucketHead.java
+++ 
b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestBucketHead.java
@@ -26,7 +26,10 @@ import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.client.OzoneClient;
 import org.apache.hadoop.ozone.client.OzoneClientStub;
 
+import org.apache.hadoop.ozone.s3.exception.OS3Exception;
 import org.junit.Assert;
+
+import static java.net.HttpURLConnection.HTTP_NOT_FOUND;
 import static org.junit.Assert.assertEquals;
 import org.junit.Before;
 import org.junit.Test;
@@ -60,7 +63,11 @@ public class TestBucketHead {
 
   @Test
   public void testHeadFail() throws Exception {
-Response response = bucketEndpoint.head("unknownbucket");
-Assert.assertEquals(400, response.getStatus());
+try {
+  bucketEndpoint.head("unknownbucket");
+} catch (OS3Exception ex) {
+  Assert.assertEquals(HTTP_NOT_FOUND, ex.getHttpCode());
+  Assert.assertEquals("NoSuchBucket", ex.getCode());
+}
   }
 }
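
The client-visible effect is that HEAD on a missing bucket now fails with 404/NoSuchBucket instead of 400, matching the updated robot test above. A hedged example against a locally running S3 gateway (the endpoint URL and credentials are placeholders):

    # assumes AWS CLI is configured with dummy credentials for the local s3g endpoint
    aws s3api head-bucket --bucket ozonenosuchbucketqqweqwe \
        --endpoint-url http://localhost:9878
    echo "exit code: $?"    # non-zero; the error should now report 404 / Not Found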


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-4022. Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket. (#1251)

2020-07-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new a123b4e  HDDS-4022. Ozone s3 API return 400 Bad Request for 
head-bucket for non existing bucket. (#1251)
a123b4e is described below

commit a123b4ef6d4e9804d298390ca1a8aeb8e64e62a5
Author: Bharat Viswanadham 
AuthorDate: Mon Jul 27 04:23:25 2020 -0700

HDDS-4022. Ozone s3 API return 400 Bad Request for head-bucket for non 
existing bucket. (#1251)
---
 hadoop-ozone/dist/src/main/smoketest/s3/buckethead.robot  |  5 +++--
 .../org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java   |  8 +---
 .../org/apache/hadoop/ozone/s3/endpoint/TestBucketHead.java   | 11 +--
 3 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/smoketest/s3/buckethead.robot 
b/hadoop-ozone/dist/src/main/smoketest/s3/buckethead.robot
index 7666871..f3ecd01 100644
--- a/hadoop-ozone/dist/src/main/smoketest/s3/buckethead.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/s3/buckethead.robot
@@ -31,5 +31,6 @@ ${BUCKET} generated
 Head Bucket not existent
 ${result} = Execute AWSS3APICli head-bucket --bucket ${BUCKET}
 ${result} = Execute AWSS3APICli and checkrc  head-bucket 
--bucket ozonenosuchbucketqqweqwe  255
-Should contain  ${result}Bad Request
-Should contain  ${result}400
+Should contain  ${result}404
+Should contain  ${result}Not Found
+
diff --git 
a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
 
b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
index ef02510..067d6a4 100644
--- 
a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
+++ 
b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
@@ -31,7 +31,6 @@ import javax.ws.rs.core.Context;
 import javax.ws.rs.core.HttpHeaders;
 import javax.ws.rs.core.MediaType;
 import javax.ws.rs.core.Response;
-import javax.ws.rs.core.Response.Status;
 import java.io.IOException;
 import java.io.InputStream;
 import java.util.Iterator;
@@ -253,12 +252,7 @@ public class BucketEndpoint extends EndpointBase {
   getBucket(bucketName);
 } catch (OS3Exception ex) {
   LOG.error("Exception occurred in headBucket", ex);
-  //TODO: use a subclass fo OS3Exception and catch it here.
-  if (ex.getCode().contains("NoSuchBucket")) {
-return Response.status(Status.BAD_REQUEST).build();
-  } else {
-throw ex;
-  }
+  throw ex;
 }
 return Response.ok().build();
   }
diff --git 
a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestBucketHead.java
 
b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestBucketHead.java
index 6f991e6..18b4b2c 100644
--- 
a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestBucketHead.java
+++ 
b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestBucketHead.java
@@ -26,7 +26,10 @@ import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.client.OzoneClient;
 import org.apache.hadoop.ozone.client.OzoneClientStub;
 
+import org.apache.hadoop.ozone.s3.exception.OS3Exception;
 import org.junit.Assert;
+
+import static java.net.HttpURLConnection.HTTP_NOT_FOUND;
 import static org.junit.Assert.assertEquals;
 import org.junit.Before;
 import org.junit.Test;
@@ -60,7 +63,11 @@ public class TestBucketHead {
 
   @Test
   public void testHeadFail() throws Exception {
-Response response = bucketEndpoint.head("unknownbucket");
-Assert.assertEquals(400, response.getStatus());
+try {
+  bucketEndpoint.head("unknownbucket");
+} catch (OS3Exception ex) {
+  Assert.assertEquals(HTTP_NOT_FOUND, ex.getHttpCode());
+  Assert.assertEquals("NoSuchBucket", ex.getCode());
+}
   }
 }


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-3877. Do not fail CI check for log upload failure (#1209)

2020-07-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 78875bb  HDDS-3877. Do not fail CI check for log upload failure (#1209)
78875bb is described below

commit 78875bb2617849e4cd55489aa4661234282fa628
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Mon Jul 27 11:29:02 2020 +0200

HDDS-3877. Do not fail CI check for log upload failure (#1209)
---
 .github/workflows/post-commit.yml | 8 
 1 file changed, 8 insertions(+)

diff --git a/.github/workflows/post-commit.yml 
b/.github/workflows/post-commit.yml
index bac27e4..9588423 100644
--- a/.github/workflows/post-commit.yml
+++ b/.github/workflows/post-commit.yml
@@ -51,6 +51,7 @@ jobs:
   with:
 name: rat
 path: target/rat
+  continue-on-error: true
   author:
 name: author
 runs-on: ubuntu-18.04
@@ -67,6 +68,7 @@ jobs:
   with:
 name: author
 path: target/author
+  continue-on-error: true
   unit:
 name: unit
 runs-on: ubuntu-18.04
@@ -83,6 +85,7 @@ jobs:
   with:
 name: unit
 path: target/unit
+  continue-on-error: true
   checkstyle:
 name: checkstyle
 runs-on: ubuntu-18.04
@@ -99,6 +102,7 @@ jobs:
   with:
 name: checkstyle
 path: target/checkstyle
+  continue-on-error: true
   findbugs:
 name: findbugs
 runs-on: ubuntu-18.04
@@ -115,6 +119,7 @@ jobs:
   with:
 name: findbugs
 path: target/findbugs
+  continue-on-error: true
   acceptance:
 name: acceptance
 runs-on: ubuntu-18.04
@@ -156,6 +161,7 @@ jobs:
 with:
   name: acceptance
   path: /mnt/ozone/target/acceptance
+continue-on-error: true
   - run: |
   #Never cache local artifacts
   rm -rf ~/.m2/repository/org/apache/hadoop/hdds
@@ -192,6 +198,7 @@ jobs:
   with:
 name: it-${{ matrix.profile }}
 path: mnt/ozone/target/integration
+  continue-on-error: true
   coverage:
 name: coverage
 runs-on: ubuntu-18.04
@@ -225,3 +232,4 @@ jobs:
  with:
name: coverage
path: target/coverage
+ continue-on-error: true


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch ozone-0.6.0 updated: HDDS-3973. Update main feature design status. (#1207)

2020-07-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch ozone-0.6.0
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/ozone-0.6.0 by this push:
 new 2befa7c  HDDS-3973. Update main feature design status. (#1207)
2befa7c is described below

commit 2befa7cd3a9c5031e00c6b4b647a5941e01d2210
Author: Sammi Chen 
AuthorDate: Mon Jul 27 16:46:41 2020 +0800

HDDS-3973. Update main feature design status. (#1207)
---
 hadoop-hdds/docs/content/design/multiraft.md   | 2 +-
 hadoop-hdds/docs/content/design/ozone-enhancement-proposals.md | 2 +-
 hadoop-hdds/docs/content/design/recon2.md  | 2 +-
 hadoop-hdds/docs/content/design/scmha.md   | 4 ++--
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/hadoop-hdds/docs/content/design/multiraft.md 
b/hadoop-hdds/docs/content/design/multiraft.md
index bccaff3..f9f978a 100644
--- a/hadoop-hdds/docs/content/design/multiraft.md
+++ b/hadoop-hdds/docs/content/design/multiraft.md
@@ -4,7 +4,7 @@ summary: Datanodes can be part of multiple independent RAFT 
groups / pipelines
 date: 2019-05-21
 jira: HDDS-1564
 status: implemented
-author:  
+author: Li Cheng, Sammi Chen
 ---
 

[hadoop-ozone] branch master updated: HDDS-3973. Update main feature design status. (#1207)

2020-07-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new e643ab2  HDDS-3973. Update main feature design status. (#1207)
e643ab2 is described below

commit e643ab2761f66bd223a73242c530e29c686244d3
Author: Sammi Chen 
AuthorDate: Mon Jul 27 16:46:41 2020 +0800

HDDS-3973. Update main feature design status. (#1207)
---
 hadoop-hdds/docs/content/design/multiraft.md   | 2 +-
 hadoop-hdds/docs/content/design/ozone-enhancement-proposals.md | 2 +-
 hadoop-hdds/docs/content/design/recon2.md  | 2 +-
 hadoop-hdds/docs/content/design/scmha.md   | 4 ++--
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/hadoop-hdds/docs/content/design/multiraft.md 
b/hadoop-hdds/docs/content/design/multiraft.md
index bccaff3..f9f978a 100644
--- a/hadoop-hdds/docs/content/design/multiraft.md
+++ b/hadoop-hdds/docs/content/design/multiraft.md
@@ -4,7 +4,7 @@ summary: Datanodes can be part of multiple independent RAFT 
groups / pipelines
 date: 2019-05-21
 jira: HDDS-1564
 status: implemented
-author:  
+author: Li Cheng, Sammi Chen
 ---
 

[hadoop-ozone] branch master updated (c3bbe18 -> 404ec6d)

2020-07-22 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from c3bbe18  HDDS-3980. Correct the toString of RangeHeader (#1213)
 add 404ec6d  HDDS-3991. Ignore protobuf lock files (#1224)

No new revisions were added by this update.

Summary of changes:
 hadoop-hdds/interface-admin/src/main/{proto => resources}/proto.lock   | 0
 hadoop-hdds/interface-client/src/main/{proto => resources}/proto.lock  | 0
 hadoop-hdds/interface-server/src/main/{proto => resources}/proto.lock  | 0
 hadoop-hdds/pom.xml| 2 +-
 hadoop-ozone/csi/src/main/{proto => resources}/proto.lock  | 0
 hadoop-ozone/interface-client/src/main/{proto => resources}/proto.lock | 0
 hadoop-ozone/pom.xml   | 2 +-
 pom.xml| 2 +-
 8 files changed, 3 insertions(+), 3 deletions(-)
 rename hadoop-hdds/interface-admin/src/main/{proto => resources}/proto.lock 
(100%)
 rename hadoop-hdds/interface-client/src/main/{proto => resources}/proto.lock 
(100%)
 rename hadoop-hdds/interface-server/src/main/{proto => resources}/proto.lock 
(100%)
 rename hadoop-ozone/csi/src/main/{proto => resources}/proto.lock (100%)
 rename hadoop-ozone/interface-client/src/main/{proto => resources}/proto.lock 
(100%)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (783a18c -> fd7e05c)

2020-07-22 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 783a18c  HDDS-3892. Datanode initialization is too slow when there are 
thousan… (#1147)
 add fd7e05c  HDDS-3989. Addendum: revert proto.lock file (#1226)

No new revisions were added by this update.

Summary of changes:
 hadoop-hdds/interface-client/src/main/proto/proto.lock | 12 +---
 1 file changed, 1 insertion(+), 11 deletions(-)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (fb2649e -> 798e00c)

2020-07-21 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from fb2649e  Update ratis to 1.0.0 (#1222)
 add 798e00c  HDDS-3813. Upgrade Ratis third-party, too (#1229)

No new revisions were added by this update.

Summary of changes:
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] 01/02: asd

2020-07-21 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch HDDS-3991
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git

commit 5b9e7e87b8e84a9ed057db9ba54a2da6735b0440
Author: Elek Márton 
AuthorDate: Tue Jul 21 10:18:57 2020 +0200

asd
---
 proto.lock | 0
 1 file changed, 0 insertions(+), 0 deletions(-)

diff --git a/proto.lock b/proto.lock
new file mode 100644
index 000..e69de29


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] 02/02: switch to use maven based ignore model

2020-07-21 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch HDDS-3991
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git

commit f86ea3049df51974c26406038ab5a1d6e3947169
Author: Elek Márton 
AuthorDate: Tue Jul 21 10:43:04 2020 +0200

switch to use maven based ignore model
---
 .gitignore| 4 
 hadoop-hdds/interface-admin/src/main/{proto => resources}/proto.lock  | 0
 hadoop-hdds/interface-client/src/main/{proto => resources}/proto.lock | 0
 hadoop-hdds/interface-server/src/main/{proto => resources}/proto.lock | 0
 .../interface-client/src/main/{proto => resources}/proto.lock | 0
 pom.xml   | 2 +-
 proto.lock| 0
 7 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/.gitignore b/.gitignore
index a3dbad1..e09c2eb 100644
--- a/.gitignore
+++ b/.gitignore
@@ -65,7 +65,3 @@ hadoop-hdds/docs/public
 hadoop-ozone/recon/node_modules
 
 .mvn
-
-# Protolock files should be updated manually after every release
-# See: https://github.com/nilslice/protolock
-*/proto/*.lock
diff --git a/hadoop-hdds/interface-admin/src/main/proto/proto.lock 
b/hadoop-hdds/interface-admin/src/main/resources/proto.lock
similarity index 100%
rename from hadoop-hdds/interface-admin/src/main/proto/proto.lock
rename to hadoop-hdds/interface-admin/src/main/resources/proto.lock
diff --git a/hadoop-hdds/interface-client/src/main/proto/proto.lock 
b/hadoop-hdds/interface-client/src/main/resources/proto.lock
similarity index 100%
rename from hadoop-hdds/interface-client/src/main/proto/proto.lock
rename to hadoop-hdds/interface-client/src/main/resources/proto.lock
diff --git a/hadoop-hdds/interface-server/src/main/proto/proto.lock 
b/hadoop-hdds/interface-server/src/main/resources/proto.lock
similarity index 100%
rename from hadoop-hdds/interface-server/src/main/proto/proto.lock
rename to hadoop-hdds/interface-server/src/main/resources/proto.lock
diff --git a/hadoop-ozone/interface-client/src/main/proto/proto.lock 
b/hadoop-ozone/interface-client/src/main/resources/proto.lock
similarity index 100%
rename from hadoop-ozone/interface-client/src/main/proto/proto.lock
rename to hadoop-ozone/interface-client/src/main/resources/proto.lock
diff --git a/pom.xml b/pom.xml
index 40f2f58..e803ae0 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1615,7 +1615,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
   proto-backwards-compatibility
   ${proto-backwards-compatibility.version}
   
-${basedir}/src/main/proto/
+${basedir}/target/classes
   
   
 
diff --git a/proto.lock b/proto.lock
deleted file mode 100644
index e69de29..000


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-3855. Add upgrade smoketest (#1142)

2020-07-17 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 9b13ab6  HDDS-3855. Add upgrade smoketest (#1142)
9b13ab6 is described below

commit 9b13ab67ca433b44a548bd1ee8c9fa4b01250c50
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Fri Jul 17 12:45:03 2020 +0200

HDDS-3855. Add upgrade smoketest (#1142)
---
 .github/workflows/post-commit.yml  |   1 +
 .../dist/dev-support/bin/dist-layout-stitching |   1 +
 hadoop-ozone/dist/pom.xml  |   2 +-
 hadoop-ozone/dist/src/main/compose/testlib.sh  |  39 +++
 hadoop-ozone/dist/src/main/compose/upgrade/.env|  21 
 .../dist/src/main/compose/upgrade/README.md|  29 +
 .../src/main/compose/upgrade/docker-compose.yaml   | 127 +
 .../dist/src/main/compose/upgrade/docker-config|  33 ++
 hadoop-ozone/dist/src/main/compose/upgrade/test.sh |  70 
 .../src/main/compose/upgrade/versions/README.md|  15 +++
 .../main/compose/upgrade/versions/ozone-0.5.0.sh   |  18 +++
 .../main/compose/upgrade/versions/ozone-0.6.0.sh   |  18 +++
 hadoop-ozone/dist/src/shell/upgrade/0.6.0.sh   |  23 
 .../src/shell/upgrade/0.6.0/01-migrate-scm-db.sh   |  24 
 14 files changed, 400 insertions(+), 21 deletions(-)

diff --git a/.github/workflows/post-commit.yml 
b/.github/workflows/post-commit.yml
index e00018a..bac27e4 100644
--- a/.github/workflows/post-commit.yml
+++ b/.github/workflows/post-commit.yml
@@ -150,6 +150,7 @@ jobs:
 env:
   KEEP_IMAGE: false
   OZONE_WITH_COVERAGE: true
+  OZONE_VOLUME_OWNER: 1000
   - uses: actions/upload-artifact@master
 if: always()
 with:
diff --git a/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching 
b/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
index e1f5c7e..80455a6 100755
--- a/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
+++ b/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
@@ -108,6 +108,7 @@ run cp 
"${ROOT}/hadoop-ozone/dist/src/shell/hdds/hadoop-config.cmd" "libexec/"
 run cp "${ROOT}/hadoop-ozone/dist/src/shell/hdds/hadoop-functions.sh" 
"libexec/"
 run cp "${ROOT}/hadoop-ozone/dist/src/shell/ozone/ozone-config.sh" "libexec/"
 run cp -r "${ROOT}/hadoop-ozone/dist/src/shell/shellprofile.d" "libexec/"
+run cp -r "${ROOT}/hadoop-ozone/dist/src/shell/upgrade" "libexec/"
 
 
 run cp "${ROOT}/hadoop-ozone/dist/src/shell/hdds/hadoop-daemons.sh" "sbin/"
diff --git a/hadoop-ozone/dist/pom.xml b/hadoop-ozone/dist/pom.xml
index 840f628..a766c0a 100644
--- a/hadoop-ozone/dist/pom.xml
+++ b/hadoop-ozone/dist/pom.xml
@@ -28,7 +28,7 @@
   
 UTF-8
 true
-20200420-1
+20200625-1
   
 
   
diff --git a/hadoop-ozone/dist/src/main/compose/testlib.sh 
b/hadoop-ozone/dist/src/main/compose/testlib.sh
index 15d1664..56c35c1 100755
--- a/hadoop-ozone/dist/src/main/compose/testlib.sh
+++ b/hadoop-ozone/dist/src/main/compose/testlib.sh
@@ -17,7 +17,6 @@
 set -e
 
 COMPOSE_ENV_NAME=$(basename "$COMPOSE_DIR")
-COMPOSE_FILE=$COMPOSE_DIR/docker-compose.yaml
 RESULT_DIR=${RESULT_DIR:-"$COMPOSE_DIR/result"}
 RESULT_DIR_INSIDE="/tmp/smoketest/$(basename "$COMPOSE_ENV_NAME")/result"
 SMOKETEST_DIR_INSIDE="${OZONE_DIR:-/opt/hadoop}/smoketest"
@@ -32,7 +31,7 @@ fi
 ## @description create results directory, purging any prior data
 create_results_dir() {
   #delete previous results
-  rm -rf "$RESULT_DIR"
+  [[ "${OZONE_KEEP_RESULTS:-}" == "true" ]] || rm -rf "$RESULT_DIR"
   mkdir -p "$RESULT_DIR"
   #Should be writeable from the docker containers where user is different.
   chmod ogu+w "$RESULT_DIR"
@@ -40,9 +39,9 @@ create_results_dir() {
 
 
 ## @description wait until safemode exit (or 180 seconds)
-## @param the docker-compose file
 wait_for_safemode_exit(){
-  local compose_file=$1
+  # version-dependent
+  : ${OZONE_ADMIN_COMMAND:=admin}
 
   #Reset the timer
   SECONDS=0
@@ -51,11 +50,11 @@ wait_for_safemode_exit(){
   while [[ $SECONDS -lt 180 ]]; do
 
  #This line checks the safemode status in scm
- local command="ozone admin safemode status"
+ local command="ozone ${OZONE_ADMIN_COMMAND} safemode status"
  if [[ "${SECURITY_ENABLED}" == 'true' ]]; then
- status=$(docker-compose -f "${compose_file}" exec -T scm bash -c 
"kinit -k HTTP/s...@example.com -t /etc/security/keytabs/HTTP.keytab && 
$command" || true)
+ status=$(docker-compose exec -T scm bash -c "kinit -k 
HTTP/s...@e
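
The testlib.sh change above makes the safemode wait version-aware: OZONE_ADMIN_COMMAND defaults to 'admin', and the versions/*.sh files added by this patch can override it for releases whose CLI uses a different subcommand. A minimal sketch of the idea (the override value for older versions is intentionally left out, as it is version-specific):

    # current releases: 'ozone admin safemode status'
    : "${OZONE_ADMIN_COMMAND:=admin}"
    ozone "${OZONE_ADMIN_COMMAND}" safemode status
    # an ozone-0.5.0.sh style file could export OZONE_ADMIN_COMMAND before
    # wait_for_safemode_exit runs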

[hadoop-ozone] branch master updated (86bd3b3 -> 6705761)

2020-07-03 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 86bd3b3  HDDS-3862. Prepare checks for running some tests multiple 
times (#1128)
 add 6705761  HDDS-3917. Add recon to no_proxy of docker-config for 
acceptance test (#1161)

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/dist/src/main/compose/ozone-csi/docker-config | 2 +-
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/docker-config | 2 +-
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/docker-config | 2 +-
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/docker-config | 2 +-
 hadoop-ozone/dist/src/main/compose/ozone-om-ha-s3/docker-config| 2 +-
 hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-config   | 2 +-
 hadoop-ozone/dist/src/main/compose/ozone-topology/docker-config| 2 +-
 hadoop-ozone/dist/src/main/compose/ozone/docker-config | 2 +-
 hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config | 2 +-
 hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-config   | 2 +-
 hadoop-ozone/dist/src/main/compose/ozonescripts/docker-config  | 2 +-
 hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config| 2 +-
 12 files changed, 12 insertions(+), 12 deletions(-)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-3875. Package classpath files to the jar files instead of uploading them as artifacts (#1133)

2020-07-02 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new bfb9a15  HDDS-3875. Package classpath files to the jar files instead 
of uploading them as artifacts (#1133)
bfb9a15 is described below

commit bfb9a1517210ac737781e3dddaed2f3e5ff738c6
Author: Elek, Márton 
AuthorDate: Thu Jul 2 09:54:36 2020 +0200

HDDS-3875. Package classpath files to the jar files instead of uploading 
them as artifacts (#1133)
---
 hadoop-hdds/pom.xml   |  26 +
 hadoop-ozone/dist/pom.xml | 132 +++---
 hadoop-ozone/pom.xml  |  26 +
 3 files changed, 11 insertions(+), 173 deletions(-)

diff --git a/hadoop-hdds/pom.xml b/hadoop-hdds/pom.xml
index ac854d4..5ad290f 100644
--- a/hadoop-hdds/pom.xml
+++ b/hadoop-hdds/pom.xml
@@ -279,12 +279,12 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
 
   
 add-classpath-descriptor
-package
+prepare-package
 
   build-classpath
 
 
-  ${project.build.directory}/classpath
+  
${project.build.outputDirectory}/${project.artifactId}.classpath
   $HDDS_LIB_JARS_DIR
   true
   runtime
@@ -293,28 +293,6 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
 
   
   
-org.codehaus.mojo
-build-helper-maven-plugin
-
-  
-attach-classpath-artifact
-package
-
-  attach-artifact
-
-
-  
-
-  ${project.build.directory}/classpath
-  cp
-  classpath
-
-  
-
-  
-
-  
-  
 org.apache.maven.plugins
 maven-jar-plugin
 
diff --git a/hadoop-ozone/dist/pom.xml b/hadoop-ozone/dist/pom.xml
index a6b5bfd..840f628 100644
--- a/hadoop-ozone/dist/pom.xml
+++ b/hadoop-ozone/dist/pom.xml
@@ -41,103 +41,18 @@
 copy-classpath-files
 prepare-package
 
-  copy
+  unpack-dependencies
 
 
   
 target/ozone-${ozone.version}/share/ozone/classpath
   
-  
-
-  org.apache.hadoop
-  hadoop-hdds-server-scm
-  ${hdds.version}
-  classpath
-  cp
-  hadoop-hdds-server-scm.classpath
-
-
-  org.apache.hadoop
-  hadoop-hdds-tools
-  ${hdds.version}
-  classpath
-  cp
-  hadoop-hdds-tools.classpath
-
-
-  org.apache.hadoop
-  hadoop-ozone-s3gateway
-  ${ozone.version}
-  classpath
-  cp
-  hadoop-ozone-s3gateway.classpath
-
-
-  org.apache.hadoop
-  hadoop-ozone-csi
-  ${ozone.version}
-  classpath
-  cp
-  hadoop-ozone-csi.classpath
-
-
-  org.apache.hadoop
-  hadoop-ozone-ozone-manager
-  ${ozone.version}
-  classpath
-  cp
-  hadoop-ozone-ozone-manager.classpath
-  
-
-
-  org.apache.hadoop
-  hadoop-ozone-tools
-  ${ozone.version}
-  classpath
-  cp
-  hadoop-ozone-tools.classpath
-
-
-  org.apache.hadoop
-  hadoop-ozone-filesystem
-  ${ozone.version}
-  classpath
-  cp
-  
hadoop-ozone-filesystem.classpath
-
-
-  org.apache.hadoop
-  hadoop-ozone-common
-  ${ozone.version}
-  classpath
-  cp
-  hadoop-ozone-common.classpath
-
-
-  org.apache.hadoop
-  hadoop-ozone-datanode
-  ${ozone.version}
-  classpath
-  cp
-  hadoop-ozone-datanode.classpath
-
-
-  org.apache.hadoop
-  hadoop-ozone-upgrade
-  ${ozone.version}
-  classpath
-  cp
-  hadoop

[hadoop-ozone] branch master updated: HDDS-3632. starter scripts can't manage Ozone and HDFS datandodes on the same machine (#1115)

2020-07-02 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new b1ab8bc  HDDS-3632. starter scripts can't manage Ozone and HDFS 
datandodes on the same machine  (#1115)
b1ab8bc is described below

commit b1ab8bcf59ac7525d83d650640f21a1f4b5cd893
Author: Elek, Márton 
AuthorDate: Thu Jul 2 09:53:15 2020 +0200

HDDS-3632. starter scripts can't manage Ozone and HDFS datandodes on the 
same machine  (#1115)
---
 hadoop-ozone/dist/src/shell/hdds/hadoop-functions.sh | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/hadoop-ozone/dist/src/shell/hdds/hadoop-functions.sh 
b/hadoop-ozone/dist/src/shell/hdds/hadoop-functions.sh
index d2b4df9..b46045b 100755
--- a/hadoop-ozone/dist/src/shell/hdds/hadoop-functions.sh
+++ b/hadoop-ozone/dist/src/shell/hdds/hadoop-functions.sh
@@ -2699,14 +2699,14 @@ function hadoop_generic_java_subcmd_handler
 
 hadoop_verify_secure_prereq
 hadoop_setup_secure_service
-
priv_outfile="${HADOOP_LOG_DIR}/privileged-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.out"
-
priv_errfile="${HADOOP_LOG_DIR}/privileged-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.err"
-
priv_pidfile="${HADOOP_PID_DIR}/privileged-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}.pid"
-
daemon_outfile="${HADOOP_LOG_DIR}/hadoop-${HADOOP_SECURE_USER}-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.out"
-
daemon_pidfile="${HADOOP_PID_DIR}/hadoop-${HADOOP_SECURE_USER}-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}.pid"
+
priv_outfile="${HADOOP_LOG_DIR}/ozone-privileged-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.out"
+
priv_errfile="${HADOOP_LOG_DIR}/ozone-privileged-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.err"
+
priv_pidfile="${HADOOP_PID_DIR}/ozone-privileged-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}.pid"
+
daemon_outfile="${HADOOP_LOG_DIR}/ozone-${HADOOP_SECURE_USER}-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.out"
+
daemon_pidfile="${HADOOP_PID_DIR}/ozone-${HADOOP_SECURE_USER}-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}.pid"
   else
-
daemon_outfile="${HADOOP_LOG_DIR}/hadoop-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.out"
-
daemon_pidfile="${HADOOP_PID_DIR}/hadoop-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}.pid"
+
daemon_outfile="${HADOOP_LOG_DIR}/ozone-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.out"
+
daemon_pidfile="${HADOOP_PID_DIR}/ozone-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}.pid"
   fi
 
   # are we actually in daemon mode?
@@ -2714,9 +2714,9 @@ function hadoop_generic_java_subcmd_handler
   if [[ "${HADOOP_DAEMON_MODE}" != "default" ]]; then
 HADOOP_ROOT_LOGGER="${HADOOP_DAEMON_ROOT_LOGGER}"
 if [[ "${HADOOP_SUBCMD_SECURESERVICE}" = true ]]; then
-  
HADOOP_LOGFILE="hadoop-${HADOOP_SECURE_USER}-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.log"
+  
HADOOP_LOGFILE="ozone-${HADOOP_SECURE_USER}-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.log"
 else
-  
HADOOP_LOGFILE="hadoop-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.log"
+  
HADOOP_LOGFILE="ozone-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.log"
 fi
   fi
 
@@ -2798,4 +2798,4 @@ function hadoop_assembly_classpath() {
   fi
 
   IFS=$OIFS
-}
\ No newline at end of file
+}
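
The hadoop-* to ozone-* rename in the generated file names is what lets both stacks run on one machine. Illustrative names, using hypothetical values for the identity string, subcommand and host:

    # hypothetical values, for illustration only
    HADOOP_IDENT_STRING=hadoop; HADOOP_SUBCMD=datanode; HOSTNAME=node1
    echo "ozone-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}-${HOSTNAME}.log"   # was hadoop-hadoop-datanode-node1.log
    echo "ozone-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}.pid"               # was hadoop-hadoop-datanode.pid
    # HDFS daemons on the same host keep their hadoop-* names, so log and pid
    # files no longer collide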


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-2413. Set configuration variables from annotated java objects (#1106)

2020-07-01 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 0300feb  HDDS-2413. Set configuration variables from annotated java 
objects (#1106)
0300feb is described below

commit 0300febe2cf04bbe2769aab51bd0253b4af99a25
Author: Doroszlai, Attila <6454655+adorosz...@users.noreply.github.com>
AuthorDate: Wed Jul 1 18:48:35 2020 +0200

HDDS-2413. Set configuration variables from annotated java objects (#1106)
---
 .../hadoop/hdds/scm/XceiverClientManager.java  |  4 +-
 .../hadoop/hdds/scm/client/HddsClientUtils.java|  3 +-
 .../hadoop/hdds/conf/OzoneConfiguration.java   | 21 +-
 .../java/org/apache/hadoop/hdds/fs/DUFactory.java  | 19 ++---
 .../hdds/fs/DedicatedDiskSpaceUsageFactory.java|  3 +-
 .../hadoop/hdds/fs/SpaceUsageCheckFactory.java |  3 +-
 .../org/apache/hadoop/hdds/ratis/RatisHelper.java  |  3 +-
 .../utils/LegacyHadoopConfigurationSource.java |  4 +-
 .../hadoop/hdds/conf/SimpleConfiguration.java  |  2 +-
 .../hdds/conf/SimpleConfigurationParent.java   |  6 +-
 .../hadoop/hdds/conf/TestOzoneConfiguration.java   | 85 --
 .../org/apache/hadoop/hdds/fs/TestDUFactory.java   | 14 ++--
 .../hdds/conf/ConfigurationReflectionUtil.java | 80 +++-
 .../hadoop/hdds/conf/ConfigurationSource.java  |  7 +-
 .../hadoop/hdds/conf/ConfigurationTarget.java  | 54 ++
 .../hdds/conf/MutableConfigurationSource.java} | 15 +---
 .../hadoop/ozone/HddsDatanodeHttpServer.java   |  7 +-
 .../server/ratis/ContainerStateMachine.java|  3 +-
 .../container/common/TestBlockDeletingService.java |  3 +-
 .../ozoneimpl/TestContainerScrubberMetrics.java|  5 +-
 .../hdds/conf/DatanodeRatisServerConfig.java   | 36 -
 .../hadoop/hdds/server/http/BaseHttpServer.java| 10 +--
 .../apache/hadoop/hdds/server/http/HttpConfig.java |  6 +-
 .../hadoop/hdds/server/http/HttpServer2.java   |  7 +-
 .../hdds/scm/container/ReplicationManager.java | 15 ++--
 .../server/StorageContainerManagerHttpServer.java  |  6 +-
 .../hdds/scm/container/TestReplicationManager.java |  4 +-
 .../hdds/scm/cli/ContainerOperationClient.java |  4 +-
 .../hadoop/ozone/client/OzoneClientFactory.java|  3 +-
 .../apache/hadoop/ozone/client/rpc/RpcClient.java  |  4 +-
 .../main/java/org/apache/hadoop/ozone/OmUtils.java |  7 +-
 .../ozone/om/ha/OMFailoverProxyProvider.java   |  2 +-
 .../hadoop/ozone/om/helpers/TestOzoneAclUtil.java  | 12 +--
 .../apache/hadoop/ozone/MiniOzoneChaosCluster.java |  9 ++-
 .../hadoop/fs/ozone/contract/OzoneContract.java| 15 ++--
 .../ozone/contract/rooted/RootedOzoneContract.java | 15 ++--
 .../hadoop/hdds/scm/pipeline/TestNodeFailure.java  | 20 ++---
 .../apache/hadoop/ozone/MiniOzoneClusterImpl.java  |  9 ++-
 .../ozone/client/rpc/Test2WayCommitInRatis.java| 15 ++--
 .../rpc/TestBlockOutputStreamWithFailures.java | 15 ++--
 ...estBlockOutputStreamWithFailuresFlushDelay.java | 15 ++--
 .../hadoop/ozone/client/rpc/TestCommitWatcher.java | 15 ++--
 .../rpc/TestContainerReplicationEndToEnd.java  | 22 +++---
 .../TestContainerStateMachineFailureOnRead.java| 25 ++-
 .../rpc/TestContainerStateMachineFailures.java | 15 ++--
 .../client/rpc/TestDeleteWithSlowFollower.java | 25 ++-
 .../client/rpc/TestFailureHandlingByClient.java| 15 ++--
 .../rpc/TestFailureHandlingByClientFlushDelay.java | 15 ++--
 .../rpc/TestMultiBlockWritesWithDnFailures.java| 15 ++--
 .../client/rpc/TestValidateBCSIDOnRestart.java | 15 ++--
 .../ozone/client/rpc/TestWatchForCommit.java   | 15 ++--
 .../hadoop/ozone/freon/TestDataValidate.java   | 15 ++--
 .../freon/TestDataValidateWithDummyContainers.java |  4 -
 .../ozone/freon/TestFreonWithDatanodeRestart.java  | 15 ++--
 .../ozone/freon/TestFreonWithPipelineDestroy.java  | 15 ++--
 .../hadoop/ozone/freon/TestRandomKeyGenerator.java | 15 ++--
 .../ozone/om/TestOzoneManagerRocksDBLogging.java   |  6 +-
 .../apache/hadoop/ozone/recon/TestReconTasks.java  |  8 +-
 .../ozone/recon/TestReconWithOzoneManagerHA.java   |  8 +-
 .../hadoop/ozone/om/OzoneManagerHttpServer.java|  8 +-
 .../om/snapshot/OzoneManagerSnapshotProvider.java  |  4 +-
 .../org/apache/hadoop/ozone/om/TestOMStorage.java  |  6 +-
 .../ozone/recon/codegen/ReconSqlDbConfig.java  |  7 --
 .../ozone/recon/fsck/ContainerHealthTask.java  |  4 +-
 .../hadoop/ozone/recon/scm/PipelineSyncTask.java   |  4 +-
 .../hadoop/ozone/recon/tasks/ReconTaskConfig.java  | 26 +++
 .../ozone/recon/fsck/TestContainerHealthTask.java  |  3 +-
 .../hadoop/ozone/s3/S3GatewayHttpServer.java   |  4 +-
 .../org/apache/hadoop/ozone/admin/OzoneAdmin.java  |  4 +-
 .../apache/hadoop/ozone/freon/FreonHttpServer.java |  6 +-
 70 files changed, 524 insertions(+), 385 del

[hadoop-ozone] branch master updated (4e6d6c6 -> 3c1d8b6)

2020-06-30 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 4e6d6c6  HDDS-3876. Display summary of failures as a separate job step 
(#1131)
 add 3c1d8b6  HDDS-3246. Include OM hostname info in getserviceroles 
subcommand of OM CLI (#706)

No new revisions were added by this update.

Summary of changes:
 .../ozone/admin/om/GetServiceRolesSubcommand.java  | 28 +++--
 .../org/apache/hadoop/ozone/admin/om/OMAdmin.java  | 29 ++
 2 files changed, 50 insertions(+), 7 deletions(-)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (dd46a55 -> 4e6d6c6)

2020-06-30 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from dd46a55  HDDS-3161. Block illegal characters when creating keys. (#812)
 add 4e6d6c6  HDDS-3876. Display summary of failures as a separate job step 
(#1131)

No new revisions were added by this update.

Summary of changes:
 .github/workflows/post-commit.yml | 18 ++
 1 file changed, 18 insertions(+)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (f734182 -> dd46a55)

2020-06-30 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from f734182  HDDS-3782. Remove podAntiAffinity from datanode-statefulset 
(#1057)
 add dd46a55  HDDS-3161. Block illegal characters when creating keys. (#812)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hdds/scm/client/HddsClientUtils.java| 17 +
 .../java/org/apache/hadoop/ozone/OzoneConsts.java  | 17 +
 .../common/src/main/resources/ozone-default.xml| 10 ++
 .../apache/hadoop/ozone/client/rpc/RpcClient.java  | 14 ++
 .../hadoop/ozone/client/TestHddsClientUtils.java   | 22 ++
 .../main/java/org/apache/hadoop/ozone/OmUtils.java | 13 +
 .../org/apache/hadoop/ozone/om/OMConfigKeys.java   |  5 +
 .../ozone/om/request/file/OMFileCreateRequest.java | 13 +
 .../ozone/om/request/key/OMKeyCommitRequest.java   | 13 +
 .../ozone/om/request/key/OMKeyCreateRequest.java   |  9 +
 .../ozone/om/request/key/OMKeyRenameRequest.java   | 10 ++
 11 files changed, 143 insertions(+)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated (574763b -> f734182)

2020-06-30 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 574763b  HDDS-3770. Improve getPipelines performance (#1066)
 add f734182  HDDS-3782. Remove podAntiAffinity from datanode-statefulset 
(#1057)

No new revisions were added by this update.

Summary of changes:
 .../config.yaml => definitions/onenode.yaml}  | 15 ---
 .../dist/src/main/k8s/examples/getting-started/Flekszible |  1 +
 .../dist/src/main/k8s/examples/ozone-dev/Flekszible   |  1 +
 3 files changed, 10 insertions(+), 7 deletions(-)
 copy 
hadoop-ozone/dist/src/main/k8s/definitions/ozone/{transformations/config.yaml 
=> definitions/onenode.yaml} (79%)


-
To unsubscribe, e-mail: ozone-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-commits-h...@hadoop.apache.org



[hadoop-ozone] branch master updated: HDDS-3770. Improve getPipelines performance (#1066)

2020-06-30 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 574763b  HDDS-3770. Improve getPipelines performance (#1066)
574763b is described below

commit 574763b1cca814a90182a5b820e53dbd0c7de1b8
Author: runzhiwang <51938049+runzhiw...@users.noreply.github.com>
AuthorDate: Tue Jun 30 17:23:30 2020 +0800

HDDS-3770. Improve getPipelines performance (#1066)
---
 .../scm/container/common/helpers/ExcludeList.java  | 34 +++
 .../hdds/scm/container/ContainerManager.java   |  3 +-
 .../hdds/scm/container/SCMContainerManager.java|  7 ++--
 .../hadoop/hdds/scm/pipeline/PipelineStateMap.java | 49 +-
 .../hadoop/hdds/scm/block/TestBlockManager.java|  6 ++-
 .../TestContainerStateManagerIntegration.java  | 10 +++--
 6 files changed, 50 insertions(+), 59 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ExcludeList.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ExcludeList.java
index dcc3263..f6b111f 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ExcludeList.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ExcludeList.java
@@ -23,9 +23,9 @@ import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
 
-import java.util.ArrayList;
-import java.util.List;
 import java.util.Collection;
+import java.util.HashSet;
+import java.util.Set;
 
 /**
  * This class contains set of dns and containers which ozone client provides
@@ -33,22 +33,22 @@ import java.util.Collection;
  */
 public class ExcludeList {
 
-  private final List&lt;DatanodeDetails&gt; datanodes;
-  private final List&lt;ContainerID&gt; containerIds;
-  private final List&lt;PipelineID&gt; pipelineIds;
+  private final Set&lt;DatanodeDetails&gt; datanodes;
+  private final Set&lt;ContainerID&gt; containerIds;
+  private final Set&lt;PipelineID&gt; pipelineIds;
 
 
   public ExcludeList() {
-    datanodes = new ArrayList&lt;&gt;();
-    containerIds = new ArrayList&lt;&gt;();
-    pipelineIds = new ArrayList&lt;&gt;();
+    datanodes = new HashSet&lt;&gt;();
+    containerIds = new HashSet&lt;&gt;();
+    pipelineIds = new HashSet&lt;&gt;();
   }
 
-  public List&lt;ContainerID&gt; getContainerIds() {
+  public Set&lt;ContainerID&gt; getContainerIds() {
     return containerIds;
   }
 
-  public List&lt;DatanodeDetails&gt; getDatanodes() {
+  public Set&lt;DatanodeDetails&gt; getDatanodes() {
     return datanodes;
   }
 
@@ -57,24 +57,18 @@ public class ExcludeList {
   }
 
   public void addDatanode(DatanodeDetails dn) {
-    if (!datanodes.contains(dn)) {
-      datanodes.add(dn);
-    }
+    datanodes.add(dn);
   }
 
   public void addConatinerId(ContainerID containerId) {
-    if (!containerIds.contains(containerId)) {
-      containerIds.add(containerId);
-    }
+    containerIds.add(containerId);
   }
 
   public void addPipeline(PipelineID pipelineId) {
-    if (!pipelineIds.contains(pipelineId)) {
-      pipelineIds.add(pipelineId);
-    }
+    pipelineIds.add(pipelineId);
   }
 
-  public List&lt;PipelineID&gt; getPipelineIds() {
+  public Set&lt;PipelineID&gt; getPipelineIds() {
     return pipelineIds;
   }
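The gist of the hunk above: with a HashSet, add() is itself the deduplication step, so the explicit contains() guard disappears and membership checks drop from an O(n) list scan to average O(1) hashing. A minimal, self-contained sketch of the same pattern, using plain String ids instead of the actual DatanodeDetails/ContainerID/PipelineID types:

```
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only, not the real ExcludeList class.
public class ExcludeListSketch {

  // Before this change the field was a List and every add() was guarded by
  // an explicit contains() check; a HashSet gives the same deduplication
  // with constant-time membership tests.
  private final Set<String> pipelineIds = new HashSet<>();

  public void addPipeline(String pipelineId) {
    // HashSet.add() is a no-op when the id is already present.
    pipelineIds.add(pipelineId);
  }

  public Set<String> getPipelineIds() {
    return pipelineIds;
  }
}
```

This is also why the ContainerManager change in the next hunk widens the excludedContainerIDS parameter from List to Collection: the new Set can then be passed without first copying it into a List.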
 
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerManager.java
index 43c1ced..0e1c98f 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerManager.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerManager.java
@@ -18,6 +18,7 @@ package org.apache.hadoop.hdds.scm.container;
 
 import java.io.Closeable;
 import java.io.IOException;
+import java.util.Collection;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
@@ -180,7 +181,7 @@ public interface ContainerManager extends Closeable {
* @return ContainerInfo for the matching container.
*/
   ContainerInfo getMatchingContainer(long size, String owner,
-      Pipeline pipeline, List&lt;ContainerID&gt; excludedContainerIDS);
+      Pipeline pipeline, Collection&lt;ContainerID&gt; excludedContainerIDS);
 
   /**
* Once after report processor handler completes, call this to notify
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
index 7ac90fc..34177f0 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
@@ -18,6 +18,7 @@ package org.apache.hadoop.hdds.scm.container;
 
 import java.io.IOException;
 import java.util.ArrayL

[hadoop-ozone] branch master updated (bf23dcb -> 3479e67)

2020-06-30 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from bf23dcb  HDDS-3868. Implement getTrashRoot and getTrashRoots in o3fs 
(#1134)
 add 3479e67  HDDS-3699. Change write chunk failure logging level to ERROR 
in BlockOutputStream. (#1006)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)
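The changed BlockOutputStream lines are not quoted in this message, so the following is only a hypothetical sketch of what raising a write-chunk failure message to ERROR looks like in SLF4J terms; the class and method names are made up for illustration:

```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical illustration only, not the actual BlockOutputStream code.
public class WriteChunkFailureLoggingSketch {

  private static final Logger LOG =
      LoggerFactory.getLogger(WriteChunkFailureLoggingSketch.class);

  void handleWriteChunkFailure(String blockId, Throwable cause) {
    // Logged at a lower level, a failure like this is easy to miss in
    // production logs; ERROR makes write-chunk failures stand out.
    LOG.error("Failed to write chunk for block {}", blockId, cause);
  }
}
```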





[hadoop-ozone] branch master updated: HDDS-3865. Export the SCM client IPC port in docker-compose (#1124)

2020-06-29 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 95073f4  HDDS-3865. Export the SCM client IPC port in docker-compose 
(#1124)
95073f4 is described below

commit 95073f4379a705fd0ad5e036151f611641c7c616
Author: maobaolong <307499...@qq.com>
AuthorDate: Mon Jun 29 21:22:44 2020 +0800

HDDS-3865. Export the SCM client IPC port in docker-compose (#1124)
---
 hadoop-ozone/dist/src/main/compose/ozone-csi/docker-compose.yaml | 1 +
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/docker-compose.yaml | 1 +
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/docker-compose.yaml | 1 +
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/docker-compose.yaml | 1 +
 hadoop-ozone/dist/src/main/compose/ozone-om-ha-s3/docker-compose.yaml| 1 +
 hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-compose.yaml   | 1 +
 hadoop-ozone/dist/src/main/compose/ozone-topology/docker-compose.yaml| 1 +
 hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml | 1 +
 hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-compose.yaml | 1 +
 hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-compose.yaml   | 1 +
 hadoop-ozone/dist/src/main/compose/ozonescripts/docker-compose.yaml  | 1 +
 hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml| 1 +
 hadoop-ozone/dist/src/main/compose/ozonesecure-om-ha/docker-compose.yaml | 1 +
 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-compose.yaml   | 1 +
 .../network-tests/src/test/compose/docker-compose.yaml   | 1 +
 15 files changed, 15 insertions(+)
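Publishing 9860 lets a process outside the compose network reach the SCM client IPC endpoint via localhost. The sketch below is a hedged illustration, assuming the standard ozone.scm.client.address configuration key and 9860 as the default SCM client port; neither value is taken from this commit:

```
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

// Sketch only: point a client-side configuration at the SCM client IPC port
// that docker-compose now publishes on the host.
public class ScmClientAddressSketch {

  public static OzoneConfiguration forExposedScm(String host) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Assumed key name and port; adjust to the actual cluster settings.
    conf.set("ozone.scm.client.address", host + ":9860");
    return conf;
  }

  public static void main(String[] args) {
    System.out.println(forExposedScm("localhost").get("ozone.scm.client.address"));
  }
}
```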

diff --git a/hadoop-ozone/dist/src/main/compose/ozone-csi/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/ozone-csi/docker-compose.yaml
index 1fd7ebd..171ae5d 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-csi/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozone-csi/docker-compose.yaml
@@ -50,6 +50,7 @@ services:
   - docker-config
 ports:
   - 9876:9876
+  - 9860:9860
 environment:
   ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
   HADOOP_OPTS: ${HADOOP_OPTS}
diff --git 
a/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/docker-compose.yaml
index effb4a7..d7293be 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/docker-compose.yaml
@@ -59,6 +59,7 @@ services:
   - ../../..:/opt/hadoop
 ports:
   - 9876:9876
+  - 9860:9860
 env_file:
   - docker-config
   - ../common-config
diff --git 
a/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/docker-compose.yaml
index dc7261f..307882a 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/docker-compose.yaml
@@ -64,6 +64,7 @@ services:
   - ../../..:/opt/hadoop
 ports:
   - 9876:9876
+  - 9860:9860
 env_file:
   - docker-config
   - ../common-config
diff --git 
a/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/docker-compose.yaml
index 89c4f7a..9cea61f 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/docker-compose.yaml
@@ -62,6 +62,7 @@ services:
   - ../../..:/opt/hadoop
 ports:
   - 9876:9876
+  - 9860:9860
 env_file:
   - docker-config
   - ../common-config
diff --git 
a/hadoop-ozone/dist/src/main/compose/ozone-om-ha-s3/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/ozone-om-ha-s3/docker-compose.yaml
index 2359482..4d271be 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-om-ha-s3/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozone-om-ha-s3/docker-compose.yaml
@@ -77,6 +77,7 @@ services:
  - ../..:/opt/hadoop
   ports:
  - 9876:9876
+ - 9860:9860
   env_file:
   - ./docker-config
   environment:
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-compose.yaml
index 8bb6409..87bb161 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-compose.yaml
@@ -92,6 +92,7 @@ services:
  - ../..:/opt/hadoop
   ports:
  - 9876:9876
+ - 9860:9860
   env_file:
   - ./docker-config
   environment:
diff --git 
a/hadoop-ozone/dist/src/main/compose

[hadoop-ozone] branch master updated (90c17ca -> 0f2a118)

2020-06-27 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 90c17ca  HDDS-3615. Call cleanup on tables only when double buffer has 
transactions related to tables. (#943)
 add 0f2a118  HDDS-3757. Add test coverage of the acceptance tests to 
overall test coverage  (#1050)

No new revisions were added by this update.

Summary of changes:
 .github/workflows/post-commit.yml  |  59 ++-
 hadoop-hdds/test-utils/pom.xml |   6 ++
 .../java/org/apache/hadoop/test/JacocoServer.java  | 114 +
 hadoop-ozone/dev-support/checks/acceptance.sh  |   5 +-
 hadoop-ozone/dev-support/checks/build.sh   |   2 +-
 hadoop-ozone/dist/pom.xml  |  53 ++
 hadoop-ozone/dist/src/main/compose/ozone-csi/.env  |   1 +
 .../src/main/compose/ozone-csi/docker-compose.yaml |   6 ++
 .../dist/src/main/compose/ozone-mr/hadoop31/.env   |   1 +
 .../compose/ozone-mr/hadoop31/docker-compose.yaml  |   6 ++
 .../dist/src/main/compose/ozone-mr/hadoop32/.env   |   1 +
 .../compose/ozone-mr/hadoop32/docker-compose.yaml  |   4 +
 .../dist/src/main/compose/ozone-om-ha-s3/.env  |   1 +
 .../compose/ozone-om-ha-s3/docker-compose.yaml |   8 ++
 .../dist/src/main/compose/ozone-topology/.env  |   1 +
 .../compose/ozone-topology/docker-compose.yaml |  14 +++
 hadoop-ozone/dist/src/main/compose/ozone/.env  |   1 +
 .../src/main/compose/ozone/docker-compose.yaml |   5 +
 .../dist/src/main/compose/ozonesecure-mr/.env  |   1 +
 .../compose/ozonesecure-mr/docker-compose.yaml |   4 +
 .../dist/src/main/compose/ozonesecure-om-ha/.env   |   1 +
 .../compose/ozonesecure-om-ha/docker-compose.yaml  |  13 ++-
 .../dist/src/main/compose/ozonesecure/.env |   1 +
 .../main/compose/ozonesecure/docker-compose.yaml   |   5 +
 .../dist/src/main/compose/ozonesecure/test.sh  |   6 +-
 hadoop-ozone/dist/src/main/compose/test-all.sh |  18 +++-
 26 files changed, 301 insertions(+), 36 deletions(-)
 create mode 100644 
hadoop-hdds/test-utils/src/main/java/org/apache/hadoop/test/JacocoServer.java





[hadoop-ozone] branch master updated (025fc54 -> 93b1f63)

2020-06-26 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 025fc54  HDDS-3018. Fix TestContainerStateMachineFailures.java (#556)
 add 93b1f63  HDDS-3479. Use SCMMetadataStore high level abstraction 
instead of DBS… (#997)

No new revisions were added by this update.

Summary of changes:
 ...StoreRDBImpl.java => SCMMetadataStoreImpl.java} |   6 +-
 .../hdds/scm/server/StorageContainerManager.java   |   4 +-
 .../hadoop/hdds/scm/block/TestBlockManager.java|   4 +-
 .../container/TestCloseContainerEventHandler.java  |  28 +++--
 .../scm/container/TestSCMContainerManager.java |  13 +--
 .../hdds/scm/node/TestContainerPlacement.java  |  20 ++--
 .../hdds/scm/pipeline/TestSCMPipelineManager.java  |  36 +++---
 .../safemode/TestHealthyPipelineSafeModeRule.java  | 121 ++---
 .../TestOneReplicaPipelineSafeModeRule.java|  11 +-
 .../hdds/scm/safemode/TestSCMSafeModeManager.java  |  25 ++---
 .../apache/hadoop/ozone/genesis/GenesisUtil.java   |  13 +--
 11 files changed, 137 insertions(+), 144 deletions(-)
 rename 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/{SCMMetadataStoreRDBImpl.java
 => SCMMetadataStoreImpl.java} (96%)





[hadoop-ozone] branch master updated: HDDS-3858. Remove support to start Ozone and HDFS datanodes in the same JVM (#1117)

2020-06-25 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git


The following commit(s) were added to refs/heads/master by this push:
 new 8235366  HDDS-3858. Remove support to start Ozone and HDFS datanodes 
in the same JVM (#1117)
8235366 is described below

commit 823536604509f69d65880f4224974c331f7418f5
Author: Elek, Márton 
AuthorDate: Thu Jun 25 12:33:37 2020 +0200

HDDS-3858. Remove support to start Ozone and HDFS datanodes in the same JVM 
(#1117)
---
 hadoop-hdds/docs/content/beyond/RunningWithHDFS.md | 70 --
 .../docs/content/beyond/RunningWithHDFS.zh.md  | 64 
 hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env | 18 --
 .../main/compose/ozone-hdfs/docker-compose.yaml| 70 --
 .../dist/src/main/compose/ozone-hdfs/docker-config | 35 ---
 5 files changed, 257 deletions(-)

diff --git a/hadoop-hdds/docs/content/beyond/RunningWithHDFS.md 
b/hadoop-hdds/docs/content/beyond/RunningWithHDFS.md
deleted file mode 100644
index 2bfbc99..000
--- a/hadoop-hdds/docs/content/beyond/RunningWithHDFS.md
+++ /dev/null
@@ -1,70 +0,0 @@

-title: Running concurrently with HDFS
-linktitle: Running with HDFS
-weight: 1
-summary: Ozone is designed to run concurrently with HDFS. This page explains 
how to deploy Ozone in an existing HDFS cluster.

-
-
-Ozone is designed to work with HDFS. So it is easy to deploy ozone in an
-existing HDFS cluster.
-
-The container manager part of Ozone can run inside DataNodes as a pluggable 
module
-or as a standalone component. This document describes how it can be started as
-a HDFS datanode plugin.
-
-To activate ozone you should define the service plugin implementation class.
-
-
-Important: It should be added to the hdfs-site.xml as the plugin 
should
-be activated as part of the normal HDFS Datanode bootstrap.
-
-
-{{< highlight xml >}}
-<property>
-   <name>dfs.datanode.plugins</name>
-   <value>org.apache.hadoop.ozone.HddsDatanodeService</value>
-</property>
-{{< /highlight >}}
-
-You also need to add the jar file under path /opt/ozone/share/ozone/lib/ to 
the classpath:
-
-{{< highlight bash >}}
-export HADOOP_CLASSPATH=/opt/ozone/share/ozone/lib/*.jar
-{{< /highlight >}}
-
-
-
-To start ozone with HDFS you should start the the following components:
-
- 1. HDFS Namenode (from Hadoop distribution)
- 2. HDFS Datanode (from the Hadoop distribution with the plugin on the
- classpath from the Ozone distribution)
- 3. Ozone Manager (from the Ozone distribution)
- 4. Storage Container Manager (from the Ozone distribution)
-
-Please check the log of the datanode whether the HDDS/Ozone plugin is started 
or
-not. Log of datanode should contain something like this:
-
-```
-2018-09-17 16:19:24 INFO  HddsDatanodeService:158 - Started plug-in 
org.apache.hadoop.ozone.web.OzoneHddsDatanodeService@6f94fb9d
-```
-
-
-Note: document above is based on Hadoop 3.1.
-
diff --git a/hadoop-hdds/docs/content/beyond/RunningWithHDFS.zh.md 
b/hadoop-hdds/docs/content/beyond/RunningWithHDFS.zh.md
deleted file mode 100644
index 981c1e3..000
--- a/hadoop-hdds/docs/content/beyond/RunningWithHDFS.zh.md
+++ /dev/null
@@ -1,64 +0,0 @@

-title: Running concurrently with HDFS
-linktitle: Running with HDFS
-weight: 1
-summary: Ozone can run concurrently with HDFS. This page explains how to deploy Ozone on an existing HDFS cluster.

-
-
-Ozone is designed to work alongside HDFS, so users can easily deploy Ozone on an existing HDFS cluster.
-
-The container manager part of Ozone can run on HDFS datanodes as a plugin or as a standalone component; the plugin approach is described below.
-
-To enable the Ozone plugin on HDFS datanodes, you need to define the service plugin implementation class.
-
-
-Important: because the plugin is activated during HDFS datanode startup, the definition of the service plugin implementation class must be added to hdfs-site.xml.
-
-
-{{< highlight xml >}}
-<property>
-   <name>dfs.datanode.plugins</name>
-   <value>org.apache.hadoop.ozone.HddsDatanodeService</value>
-</property>
-{{< /highlight >}}
-
-In addition, the jar files under /opt/ozone/share/ozone/lib/ must be added to the Hadoop classpath:
-
-{{< highlight bash >}}
-export HADOOP_CLASSPATH=/opt/ozone/share/ozone/lib/*.jar
-{{< /highlight >}}
-
-
-
-The steps to start Ozone together with HDFS are:
-
- 1. HDFS Namenode (started from Hadoop)
- 2. HDFS Datanode (started from Hadoop, with the plugin and classpath configured as above)
- 3. Ozone Manager (started from Ozone)
- 4. Storage Container Manager (started from Ozone)
-
-Check the datanode log to confirm whether the HDDS/Ozone plugin started; the log should contain something like this:
-
-```
-2018-09-17 16:19:24 INFO  HddsDatanodeService:158 - Started plug-in 
org.apache.hadoop.ozone.HddsDatanodeService@6f94fb9d
-```
-
-
-Note: the test above was performed with Hadoop 3.1.
-
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env 
b/hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env
deleted file mode 100644
index df9065c..000
--- a/hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env
+++ /dev/null
@@ -1,18 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance

[hadoop-ozone] branch master updated (5e7b2b6 -> fe015bd)

2020-06-25 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from 5e7b2b6  HDDS-3773. Add OMDBDefinition to define structure of om.db. 
(#1076)
 add fe015bd  HDDS-3704. Update all the documentation to use 
ozonefs-hadoop2/3 instead of legacy/current (#1099)

No new revisions were added by this update.

Summary of changes:
 hadoop-hdds/docs/content/interface/OzoneFS.md  | 53 +++---
 hadoop-hdds/docs/content/interface/OzoneFS.zh.md   | 37 ++-
 hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md | 30 +---
 .../docs/content/recipe/SparkOzoneFSK8S.zh.md  | 36 ---
 4 files changed, 30 insertions(+), 126 deletions(-)





[hadoop-ozone] branch master updated (ac9387c -> 5e7b2b6)

2020-06-25 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git.


from ac9387c  Revert "HDDS-3263. Fix TestCloseContainerByPipeline.java. 
(#1119)" (#1126)
 add 5e7b2b6  HDDS-3773. Add OMDBDefinition to define structure of om.db. 
(#1076)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/ozone/om/codec/OMDBDefinition.java  | 161 +
 .../{RDBParser.java => DBDefinitionFactory.java}   |  43 +++---
 .../org/apache/hadoop/ozone/debug/DBScanner.java   |  11 +-
 3 files changed, 190 insertions(+), 25 deletions(-)
 create mode 100644 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/codec/OMDBDefinition.java
 copy 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/{RDBParser.java 
=> DBDefinitionFactory.java} (55%)




