[hadoop] branch trunk updated: HDDS-1612. Add 'scmcli printTopology' shell command to print datanode topology. Contributed by Sammi Chen.(#910)

2019-06-05 Thread xyao
This is an automated email from the ASF dual-hosted git repository.

xyao pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 73954c1  HDDS-1612. Add 'scmcli printTopology' shell command to print 
datanode topology. Contributed by Sammi Chen.(#910)
73954c1 is described below

commit 73954c1dd98dd9f0aa535aeefcd1484d09fd75dc
Author: Sammi Chen 
AuthorDate: Thu Jun 6 11:13:39 2019 +0800

HDDS-1612. Add 'scmcli printTopology' shell command to print datanode 
topology. Contributed by Sammi Chen.(#910)
---
 .../hadoop/hdds/protocol/DatanodeDetails.java  |  2 +
 hadoop-hdds/common/src/main/proto/hdds.proto   |  1 +
 .../org/apache/hadoop/hdds/scm/cli/SCMCLI.java |  3 +-
 .../hadoop/hdds/scm/cli/TopologySubcommand.java| 80 ++
 4 files changed, 85 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
index be6f44c..34de028 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
@@ -212,6 +212,8 @@ public class DatanodeDetails extends NodeImpl implements
 if (certSerialId != null) {
   builder.setCertSerialId(certSerialId);
 }
+builder.setNetworkLocation(getNetworkLocation());
+
 for (Port port : ports) {
   builder.addPorts(HddsProtos.Port.newBuilder()
   .setName(port.getName().toString())
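The hunk above covers only the write path (DatanodeDetails into its proto form). A minimal read-side sketch, assuming the usual proto2 presence check inside a hypothetical getFromProtoBuf-style method (not part of this diff):

    // Hedged sketch: older peers may not set the new optional field, so
    // guard with hasNetworkLocation() before copying it into the builder.
    DatanodeDetails.Builder builder = DatanodeDetails.newBuilder();
    if (proto.hasNetworkLocation()) {
      builder.setNetworkLocation(proto.getNetworkLocation());
    }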
diff --git a/hadoop-hdds/common/src/main/proto/hdds.proto 
b/hadoop-hdds/common/src/main/proto/hdds.proto
index ddde7ea..2d5cb03 100644
--- a/hadoop-hdds/common/src/main/proto/hdds.proto
+++ b/hadoop-hdds/common/src/main/proto/hdds.proto
@@ -34,6 +34,7 @@ message DatanodeDetailsProto {
 required string hostName = 3;  // hostname
 repeated Port ports = 4;
 optional string certSerialId = 5;   // Certificate serial id.
+optional string networkLocation = 6; // Network topology location
 }
 
 /**
diff --git 
a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java
index 5013a74..1a19a3c 100644
--- a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java
@@ -84,7 +84,8 @@ import picocli.CommandLine.Option;
 CreateSubcommand.class,
 CloseSubcommand.class,
 ListPipelinesSubcommand.class,
-ClosePipelineSubcommand.class
+ClosePipelineSubcommand.class,
+TopologySubcommand.class
 },
 mixinStandardHelpOptions = true)
 public class SCMCLI extends GenericCli {
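For readers unfamiliar with picocli: listing TopologySubcommand.class in the subcommands array above is the entire registration step. A self-contained sketch of such a subcommand, with hypothetical names (the committed TopologySubcommand begins below):

    import java.util.concurrent.Callable;
    import picocli.CommandLine;

    // Illustrative only; the real TopologySubcommand queries SCM for nodes
    // in each state and prints their addresses and network locations.
    @CommandLine.Command(name = "printTopology",
        description = "Print a tree of the datanode network topology")
    public class TopologySketch implements Callable<Void> {
      @Override
      public Void call() throws Exception {
        System.out.println("/default-rack/10.0.0.1"); // sample output line
        return null;
      }
    }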
diff --git 
a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
new file mode 100644
index 0000000..6deccd1
--- /dev/null
+++ 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
@@ -0,0 +1,80 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.cli;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.client.ScmClient;
+import picocli.CommandLine;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DEAD;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONED;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONING;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+/**
+ * Handler of printTopology command.
+ */
+@CommandLine.Command(
+name = "printTopology",
+description = 

[hadoop] branch branch-3.1 updated: YARN-9545. Create healthcheck REST endpoint for ATSv2. Contributed by Zoltan Siegl.

2019-06-05 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new d65371c  YARN-9545. Create healthcheck REST endpoint for ATSv2. 
Contributed by Zoltan Siegl.
d65371c is described below

commit d65371c4e8bbb4ae655ccacda389cd37a18fab32
Author: Sunil G 
AuthorDate: Thu Jun 6 06:24:01 2019 +0530

YARN-9545. Create healthcheck REST endpoint for ATSv2. Contributed by 
Zoltan Siegl.

(cherry picked from commit f1d3a17d3e67ec2acad52227a3f4eb7cca83e468)
---
 .../yarn/api/records/timeline/TimelineHealth.java  |  82 
 .../node_modules/.bin/apidoc   |   1 +
 .../node_modules/.bin/markdown-it  |   1 +
 .../node_modules/.bin/r.js |   1 +
 .../node_modules/.bin/r_js |   1 +
 .../node_modules/.bin/semver   |   1 +
 .../node_modules/.bin/shjs |   1 +
 .../yarn.lock  | 422 +
 .../storage/HBaseTimelineReaderImpl.java   |  13 +
 .../reader/TimelineReaderManager.java  |  10 +
 .../reader/TimelineReaderWebServices.java  |  33 ++
 .../storage/FileSystemTimelineReaderImpl.java  |  23 ++
 .../timelineservice/storage/TimelineReader.java|   8 +
 .../reader/TestTimelineReaderWebServices.java  |  19 +
 14 files changed, 616 insertions(+)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineHealth.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineHealth.java
new file mode 100644
index 0000000..d592167
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineHealth.java
@@ -0,0 +1,82 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.api.records.timeline;
+
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+
+/**
+ * This class holds health information for ATS.
+ */
+@XmlRootElement(name = "health")
+@XmlAccessorType(XmlAccessType.NONE)
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+public class TimelineHealth {
+
+  /**
+   * Timeline health status.
+   *
+   * RUNNING - Service is up and running
+   * READER_CONNECTION_FAILURE - isConnectionAlive() of reader implementation
+   *    reported an error
+   */
+  public enum TimelineHealthStatus {
+RUNNING,
+READER_CONNECTION_FAILURE
+  }
+
+  private TimelineHealthStatus healthStatus;
+  private String diagnosticsInfo;
+
+  public TimelineHealth(TimelineHealthStatus healthy, String diagnosticsInfo) {
+this.healthStatus = healthy;
+this.diagnosticsInfo = diagnosticsInfo;
+  }
+
+  public TimelineHealth() {
+
+  }
+
+  @XmlElement(name = "healthStatus")
+  public TimelineHealthStatus getHealthStatus() {
+return healthStatus;
+  }
+
+  @XmlElement(name = "diagnosticsInfo")
+  public String getDiagnosticsInfo() {
+return diagnosticsInfo;
+  }
+
+
+  public void setHealthStatus(TimelineHealthStatus healthStatus) {
+this.healthStatus = healthStatus;
+  }
+
+  public void setDiagnosticsInfo(String diagnosticsInfo) {
+this.diagnosticsInfo = diagnosticsInfo;
+  }
+
+
+}
\ No newline at end of file
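As a hedged illustration of how such a record is typically exposed over REST (the concrete wiring lives in TimelineReaderWebServices, which this digest truncates): a JAX-RS resource can return TimelineHealth directly and let JAXB serialize it, since the class is annotated with @XmlRootElement.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Sketch only; path and class name are assumptions, not the committed code.
    @Path("/health")
    public class TimelineHealthResourceSketch {
      @GET
      @Produces(MediaType.APPLICATION_JSON)
      public TimelineHealth getHealth() {
        // A healthy reader reports RUNNING with empty diagnostics.
        return new TimelineHealth(
            TimelineHealth.TimelineHealthStatus.RUNNING, "");
      }
    }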
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/node_modules/.bin/apidoc
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/node_modules/.bin/apidoc
new file mode 120000
index 0000000..a588095
--- /dev/null
+++ 

[hadoop] branch branch-3.2 updated: YARN-9545. Create healthcheck REST endpoint for ATSv2. Contributed by Zoltan Siegl.

2019-06-05 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new f1d3a17  YARN-9545. Create healthcheck REST endpoint for ATSv2. 
Contributed by Zoltan Siegl.
f1d3a17 is described below

commit f1d3a17d3e67ec2acad52227a3f4eb7cca83e468
Author: Sunil G 
AuthorDate: Thu Jun 6 06:24:01 2019 +0530

YARN-9545. Create healthcheck REST endpoint for ATSv2. Contributed by 
Zoltan Siegl.
---
 .../yarn/api/records/timeline/TimelineHealth.java  |  82 
 .../node_modules/.bin/apidoc   |   1 +
 .../node_modules/.bin/markdown-it  |   1 +
 .../node_modules/.bin/r.js |   1 +
 .../node_modules/.bin/r_js |   1 +
 .../node_modules/.bin/semver   |   1 +
 .../node_modules/.bin/shjs |   1 +
 .../yarn.lock  | 422 +
 .../storage/HBaseTimelineReaderImpl.java   |  13 +
 .../reader/TimelineReaderManager.java  |  10 +
 .../reader/TimelineReaderWebServices.java  |  33 ++
 .../storage/FileSystemTimelineReaderImpl.java  |  23 ++
 .../timelineservice/storage/TimelineReader.java|   8 +
 .../reader/TestTimelineReaderWebServices.java  |  19 +
 14 files changed, 616 insertions(+)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineHealth.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineHealth.java
new file mode 100644
index 0000000..d592167
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineHealth.java
@@ -0,0 +1,82 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.api.records.timeline;
+
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+
+/**
+ * This class holds health information for ATS.
+ */
+@XmlRootElement(name = "health")
+@XmlAccessorType(XmlAccessType.NONE)
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+public class TimelineHealth {
+
+  /**
+   * Timeline health status.
+   *
+   * RUNNING - Service is up and running
+   * READER_CONNECTION_FAILURE - isConnectionAlive() of reader implementation
+   *    reported an error
+   */
+  public enum TimelineHealthStatus {
+RUNNING,
+READER_CONNECTION_FAILURE
+  }
+
+  private TimelineHealthStatus healthStatus;
+  private String diagnosticsInfo;
+
+  public TimelineHealth(TimelineHealthStatus healthy, String diagnosticsInfo) {
+this.healthStatus = healthy;
+this.diagnosticsInfo = diagnosticsInfo;
+  }
+
+  public TimelineHealth() {
+
+  }
+
+  @XmlElement(name = "healthStatus")
+  public TimelineHealthStatus getHealthStatus() {
+return healthStatus;
+  }
+
+  @XmlElement(name = "diagnosticsInfo")
+  public String getDiagnosticsInfo() {
+return diagnosticsInfo;
+  }
+
+
+  public void setHealthStatus(TimelineHealthStatus healthStatus) {
+this.healthStatus = healthStatus;
+  }
+
+  public void setDiagnosticsInfo(String diagnosticsInfo) {
+this.diagnosticsInfo = diagnosticsInfo;
+  }
+
+
+}
\ No newline at end of file
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/node_modules/.bin/apidoc
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/node_modules/.bin/apidoc
new file mode 120000
index 0000000..a588095
--- /dev/null
+++ 

[hadoop] branch trunk updated: HADOOP-16314. Make sure all web end points are covered by the same authentication filter. Contributed by Prabhu Joseph

2019-06-05 Thread eyang
This is an automated email from the ASF dual-hosted git repository.

eyang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 294695d  HADOOP-16314.  Make sure all web end points are covered by 
the same authentication filter. Contributed by Prabhu Joseph
294695d is described below

commit 294695dd57cb75f2756a31a54264bdd37b32bb01
Author: Eric Yang 
AuthorDate: Wed Jun 5 18:52:39 2019 -0400

HADOOP-16314.  Make sure all web end points are covered by the same 
authentication filter.
   Contributed by Prabhu Joseph
---
 .../java/org/apache/hadoop/http/HttpServer2.java   |  48 ++---
 .../java/org/apache/hadoop/http/WebServlet.java|  59 +
 .../src/site/markdown/HttpAuthentication.md|   4 +-
 .../org/apache/hadoop/http/TestGlobalFilter.java   |   4 +-
 .../hadoop/http/TestHttpServerWithSpnego.java  | 238 +
 .../org/apache/hadoop/http/TestPathFilter.java |   2 -
 .../org/apache/hadoop/http/TestServletFilter.java  |   1 -
 .../java/org/apache/hadoop/log/TestLogLevel.java   |   9 +
 .../hdfs/server/namenode/NameNodeHttpServer.java   |  12 --
 .../TestDFSInotifyEventInputStreamKerberized.java  |   9 +
 .../hadoop/hdfs/qjournal/TestSecureNNWithQJM.java  |   8 +
 .../apache/hadoop/hdfs/web/TestWebHdfsTokens.java  |   8 +
 .../web/TestWebHdfsWithAuthenticationFilter.java   |  18 +-
 .../org/apache/hadoop/yarn/webapp/Dispatcher.java  |   9 +
 .../server/util/timeline/TimelineServerUtils.java  |  10 +-
 .../resourcemanager/webapp/RMWebAppUtil.java   |   4 +
 .../reader/TimelineReaderServer.java   |  13 +-
 .../webproxy/amfilter/TestSecureAmFilter.java  |  10 +-
 18 files changed, 412 insertions(+), 54 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
index fb2dff5..7825e08 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
@@ -27,6 +27,7 @@ import java.net.InetSocketAddress;
 import java.net.MalformedURLException;
 import java.net.URI;
 import java.net.URL;
+import java.util.Arrays;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.Enumeration;
@@ -66,6 +67,8 @@ import org.apache.hadoop.security.AuthenticationFilterInitializer;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
+import org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilterInitializer;
+import org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler;
 import org.apache.hadoop.security.authentication.util.SignerSecretProvider;
 import org.apache.hadoop.security.authorize.AccessControlList;
 import org.apache.hadoop.security.ssl.SSLFactory;
@@ -90,7 +93,6 @@ import org.eclipse.jetty.server.handler.HandlerCollection;
 import org.eclipse.jetty.server.handler.RequestLogHandler;
 import org.eclipse.jetty.server.session.AbstractSessionManager;
 import org.eclipse.jetty.server.session.SessionHandler;
-import org.eclipse.jetty.servlet.DefaultServlet;
 import org.eclipse.jetty.servlet.FilterHolder;
 import org.eclipse.jetty.servlet.FilterMapping;
 import org.eclipse.jetty.servlet.ServletContextHandler;
@@ -155,7 +157,7 @@ public final class HttpServer2 implements FilterContainer {
   // gets stored.
   public static final String CONF_CONTEXT_ATTRIBUTE = "hadoop.conf";
   public static final String ADMINS_ACL = "admins.acl";
-  public static final String SPNEGO_FILTER = "SpnegoFilter";
+  public static final String SPNEGO_FILTER = "authentication";
   public static final String NO_CACHE_FILTER = "NoCacheFilter";
 
   public static final String BIND_ADDRESS = "bind.address";
@@ -433,7 +435,9 @@ public final class HttpServer2 implements FilterContainer {
 
   HttpServer2 server = new HttpServer2(this);
 
-  if (this.securityEnabled) {
+  if (this.securityEnabled &&
+  !this.conf.get(authFilterConfigurationPrefix + "type").
+  equals(PseudoAuthenticationHandler.TYPE)) {
 server.initSpnego(conf, hostName, usernameConfKey, keytabConfKey);
   }
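The new condition skips SPNEGO initialization when the configured handler is pseudo ("simple") authentication. The same decision in isolation, as a hedged sketch ("prefix" stands for the authFilterConfigurationPrefix used above):

    // Sketch of the guard: PseudoAuthenticationHandler.TYPE is "simple".
    String authType = conf.get(prefix + "type", "simple");
    boolean needSpnego = securityEnabled
        && !PseudoAuthenticationHandler.TYPE.equals(authType);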
 
@@ -608,13 +612,6 @@ public final class HttpServer2 implements FilterContainer {
 }
 
 addDefaultServlets();
-
-if (pathSpecs != null) {
-  for (String path : pathSpecs) {
-LOG.info("adding path spec: " + path);
-addFilterPathMapping(path, webAppContext);
-  }
-}
   }
 
   private void addListener(ServerConnector connector) {
@@ -625,7 +622,7 @@ public final class HttpServer2 implements FilterContainer {
   

[hadoop] branch trunk updated: HDDS-1541. Implement addAcl, removeAcl, setAcl, getAcl for Key. Contributed by Ajay Kumat. (#885)

2019-06-05 Thread xyao
This is an automated email from the ASF dual-hosted git repository.

xyao pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3b1c257  HDDS-1541. Implement addAcl,removeAcl,setAcl,getAcl for Key. 
Contributed by Ajay Kumat. (#885)
3b1c257 is described below

commit 3b1c2577d773ab42578033721c39822965092e56
Author: Ajay Yadav <7813154+ajay...@users.noreply.github.com>
AuthorDate: Wed Jun 5 14:42:10 2019 -0700

HDDS-1541. Implement addAcl,removeAcl,setAcl,getAcl for Key. Contributed by 
Ajay Kumat. (#885)
---
 .../apache/hadoop/ozone/client/rpc/RpcClient.java  |  29 +-
 .../java/org/apache/hadoop/ozone/OzoneAcl.java |   3 +-
 .../apache/hadoop/ozone/om/helpers/OmKeyArgs.java  |  18 +-
 .../apache/hadoop/ozone/om/helpers/OmKeyInfo.java  |  26 +-
 .../hadoop/ozone/om/helpers/OmOzoneAclMap.java |  34 +-
 ...OzoneManagerProtocolClientSideTranslatorPB.java |  11 +
 .../hadoop/ozone/security/acl/OzoneObjInfo.java|  57 ++--
 .../apache/hadoop/ozone/web/utils/OzoneUtils.java  |  29 ++
 .../src/main/proto/OzoneManagerProtocol.proto  |   2 +
 .../client/rpc/TestOzoneRpcClientAbstract.java | 284 +---
 .../ozone/om/TestMultipleContainerReadWrite.java   |   2 +
 .../hadoop/ozone/om/TestOmBlockVersioning.java |   3 +
 .../apache/hadoop/ozone/om/TestOzoneManager.java   |   4 +
 .../apache/hadoop/ozone/om/TestScmSafeMode.java|   2 +
 .../web/storage/DistributedStorageHandler.java |   7 +
 .../apache/hadoop/ozone/om/BucketManagerImpl.java  | 116 +++
 .../org/apache/hadoop/ozone/om/KeyManager.java |  40 +++
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java | 359 +++--
 .../org/apache/hadoop/ozone/om/OzoneManager.java   |  11 +-
 .../apache/hadoop/ozone/om/VolumeManagerImpl.java  |   4 +-
 .../ozone/om/ratis/OzoneManagerRatisServer.java|   2 +
 .../protocolPB/OzoneManagerRequestHandler.java |  12 +
 .../hadoop/ozone/om/TestKeyDeletingService.java|   2 +
 .../apache/hadoop/ozone/om/TestKeyManagerImpl.java |  13 +-
 24 files changed, 837 insertions(+), 233 deletions(-)

diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
index cb6ac53..48968a4 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
@@ -405,15 +405,7 @@ public class RpcClient implements ClientProtocol, 
KeyProviderTokenIssuer {
   .setKeyName(bucketArgs.getEncryptionKey()).build();
 }
 
-List<OzoneAcl> listOfAcls = new ArrayList<>();
-//User ACL
-listOfAcls.add(new OzoneAcl(ACLIdentityType.USER,
-ugi.getUserName(), userRights));
-//Group ACLs of the User
-List<String> userGroups = Arrays.asList(UserGroupInformation
-.createRemoteUser(ugi.getUserName()).getGroupNames());
-userGroups.stream().forEach((group) -> listOfAcls.add(
-new OzoneAcl(ACLIdentityType.GROUP, group, groupRights)));
+List<OzoneAcl> listOfAcls = getAclList();
 //ACLs from BucketArgs
 if(bucketArgs.getAcls() != null) {
   listOfAcls.addAll(bucketArgs.getAcls());
@@ -437,6 +429,16 @@ public class RpcClient implements ClientProtocol, 
KeyProviderTokenIssuer {
 ozoneManagerClient.createBucket(builder.build());
   }
 
+  /**
+   * Helper function to get default acl list for current user.
+   *
+   * @return listOfAcls
+   */
+  private List<OzoneAcl> getAclList() {
+return OzoneUtils.getAclList(ugi.getUserName(), ugi.getGroups(),
+userRights, groupRights);
+  }
+
   @Override
   public void addBucketAcls(
String volumeName, String bucketName, List<OzoneAcl> addAcls)
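The removed block above and the new OzoneUtils.getAclList encode the same default: one USER ACL for the caller plus one GROUP ACL per group. A hedged equivalent, with a hypothetical method name:

    // Sketch of the default-ACL helper this commit extracts.
    private List<OzoneAcl> defaultAcls(UserGroupInformation ugi) {
      List<OzoneAcl> acls = new ArrayList<>();
      acls.add(new OzoneAcl(ACLIdentityType.USER, ugi.getUserName(), userRights));
      for (String group : ugi.getGroupNames()) {
        acls.add(new OzoneAcl(ACLIdentityType.GROUP, group, groupRights));
      }
      return acls;
    }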
@@ -629,6 +631,7 @@ public class RpcClient implements ClientProtocol, 
KeyProviderTokenIssuer {
 .setType(HddsProtos.ReplicationType.valueOf(type.toString()))
 .setFactor(HddsProtos.ReplicationFactor.valueOf(factor.getValue()))
 .addAllMetadata(metadata)
+.setAcls(getAclList())
 .build();
 
 OpenKeySession openKey = ozoneManagerClient.openKey(keyArgs);
@@ -819,6 +822,7 @@ public class RpcClient implements ClientProtocol, 
KeyProviderTokenIssuer {
 .setKeyName(keyName)
 .setType(HddsProtos.ReplicationType.valueOf(type.toString()))
 .setFactor(HddsProtos.ReplicationFactor.valueOf(factor.getValue()))
+.setAcls(getAclList())
 .build();
 OmMultipartInfo multipartInfo = ozoneManagerClient
 .initiateMultipartUpload(keyArgs);
@@ -848,6 +852,7 @@ public class RpcClient implements ClientProtocol, 
KeyProviderTokenIssuer {
 .setIsMultipartKey(true)
 .setMultipartUploadID(uploadID)
 .setMultipartUploadPartNumber(partNumber)
+.setAcls(getAclList())
 .build();
 
 OpenKeySession openKey = 

[hadoop] branch HDFS-13891 updated: HDFS-13404. Addendum: RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fail. Contributed by Takanobu Asanuma.

2019-06-05 Thread ayushsaxena
This is an automated email from the ASF dual-hosted git repository.

ayushsaxena pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new f3e25bb  HDFS-13404. Addendum: RBF: 
TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fail. Contributed 
by Takanobu Asanuma.
f3e25bb is described below

commit f3e25bb23383dcef4fbd6e0712324f772ade944f
Author: Ayush Saxena 
AuthorDate: Wed Jun 5 22:20:26 2019 +0530

HDFS-13404. Addendum: RBF: 
TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fail. Contributed 
by Takanobu Asanuma.
---
 .../java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
index 02a8996..a9fb117 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
@@ -137,7 +137,7 @@ public abstract class AbstractContractAppendTest extends 
AbstractFSContractTestB
  // Some filesystems like WebHDFS don't assure sequential consistency.
  // In such a case, a delay is needed. Given that we cannot check the lease
  // because it is closed in the client-side package, simply add a sleep.
-  Thread.sleep(10);
+  Thread.sleep(100);
 }
 outputStream.write(dataset);
 Path renamed = new Path(testPath, "renamed");
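Since the lease cannot be inspected here, a longer fixed sleep is the pragmatic fix. Where a condition can be polled, Hadoop tests usually prefer GenericTestUtils.waitFor; a hedged sketch (fs, target and expectedLen are hypothetical):

    // Poll instead of sleeping a fixed time, when a condition is observable.
    GenericTestUtils.waitFor(() -> {
      try {
        return fs.getFileStatus(target).getLen() == expectedLen;
      } catch (IOException e) {
        return false;  // not visible yet; keep polling
      }
    }, 100 /* interval ms */, 10_000 /* timeout ms */);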





[hadoop] branch HDFS-13891 updated: HDFS-14526. RBF: Update the document of RBF related metrics. Contributed by Takanobu Asanuma.

2019-06-05 Thread ayushsaxena
This is an automated email from the ASF dual-hosted git repository.

ayushsaxena pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new 3344c95  HDFS-14526. RBF: Update the document of RBF related metrics. 
Contributed by  Takanobu Asanuma.
3344c95 is described below

commit 3344c95ff53ce65581d3bc70ad55f3196d22491d
Author: Ayush Saxena 
AuthorDate: Wed Jun 5 22:03:27 2019 +0530

HDFS-14526. RBF: Update the document of RBF related metrics. Contributed by 
 Takanobu Asanuma.
---
 .../hadoop-common/src/site/markdown/Metrics.md | 34 ++
 .../hdfs/server/federation/metrics/RBFMetrics.java |  2 ++
 .../src/site/markdown/HDFSRouterFederation.md  |  2 +-
 3 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
index 1ef2b44..3cff9ca 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
@@ -469,6 +469,40 @@ contains tags such as Hostname as additional information 
along with metrics.
 | `FileIoErrorRateNumOps` | The number of file io error operations within an 
interval time of metric |
 | `FileIoErrorRateAvgTime` | It measures the mean time in milliseconds from 
the start of an operation to hitting a failure |
 
+RBFMetrics
+
+RBFMetrics shows the metrics which are the aggregated values of sub-clusters' 
information in the Router-based federation.
+
+| Name | Description |
+|:---- |:---- |
+| `NumFiles` | Current number of files and directories |
+| `NumBlocks` | Current number of allocated blocks |
+| `NumOfBlocksPendingReplication` | Current number of blocks pending to be 
replicated |
+| `NumOfBlocksUnderReplicated` | Current number of blocks under replicated |
+| `NumOfBlocksPendingDeletion` | Current number of blocks pending deletion |
+| `ProvidedSpace` | The total remote storage capacity mounted in the federated 
cluster |
+| `NumInMaintenanceLiveDataNodes` | Number of live Datanodes which are in 
maintenance state |
+| `NumInMaintenanceDeadDataNodes` | Number of dead Datanodes which are in 
maintenance state |
+| `NumEnteringMaintenanceDataNodes` | Number of Datanodes that are entering 
the maintenance state |
+| `TotalCapacity` | Current raw capacity of DataNodes in bytes |
+| `UsedCapacity` | Current used capacity across all DataNodes in bytes |
+| `RemainingCapacity` | Current remaining capacity in bytes |
+| `NumOfMissingBlocks` | Current number of missing blocks |
+| `NumLiveNodes` | Number of datanodes which are currently live |
+| `NumDeadNodes` | Number of datanodes which are currently dead |
+| `NumStaleNodes` | Current number of DataNodes marked stale due to delayed 
heartbeat |
+| `NumDecomLiveNodes` | Number of datanodes which have been decommissioned and 
are now live |
+| `NumDecomDeadNodes` | Number of datanodes which have been decommissioned and 
are now dead |
+| `NumDecommissioningNodes` | Number of datanodes in decommissioning state |
+| `Namenodes` | Current information about all the namenodes |
+| `Nameservices` | Current information for each registered nameservice |
+| `MountTable` | The mount table for the federated filesystem |
+| `Routers` | Current information about all routers |
+| `NumNameservices` | Number of nameservices |
+| `NumNamenodes` | Number of namenodes |
+| `NumExpiredNamenodes` | Number of expired namenodes |
+| `NodeUsage` | Max, Median, Min and Standard Deviation of DataNodes usage |
+
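All of these metrics are published through the Router's JMX servlet, so they can be scraped without extra dependencies. A hedged probe, assuming the default Router HTTP port (50071) and a placeholder host:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RbfMetricsProbe {
      public static void main(String[] args) throws Exception {
        // /jmx returns JSON for every registered bean; filter with ?qry=...
        HttpRequest req = HttpRequest.newBuilder(
            URI.create("http://router-host:50071/jmx?qry=Hadoop:*")).GET().build();
        HttpResponse<String> rsp = HttpClient.newHttpClient()
            .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(rsp.body());
      }
    }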
 RouterRPCMetrics
 
 RouterRPCMetrics shows the statistics of the Router component in Router-based 
federation.
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java
index 9aa469d..4b33f80 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java
@@ -77,6 +77,7 @@ import 
org.apache.hadoop.hdfs.server.federation.store.records.MembershipStats;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
 import org.apache.hadoop.hdfs.server.federation.store.records.RouterState;
 import 
org.apache.hadoop.hdfs.server.federation.store.records.StateStoreVersion;
+import org.apache.hadoop.metrics2.annotation.Metrics;
 import org.apache.hadoop.metrics2.util.MBeans;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.StringUtils;
@@ -91,6 +92,7 @@ import com.google.common.annotations.VisibleForTesting;
 /**
  * Implementation of the Router 

[hadoop] branch trunk updated: HDDS-1637. Fix random test failure TestSCMContainerPlacementRackAware. Contributed by Sammi Chen. (#904)

2019-06-05 Thread xyao
This is an automated email from the ASF dual-hosted git repository.

xyao pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0b1e288  HDDS-1637. Fix random test failure 
TestSCMContainerPlacementRackAware. Contributed by Sammi Chen. (#904)
0b1e288 is described below

commit 0b1e288deb2c330521b9bb1d1803481afe49168b
Author: ChenSammi 
AuthorDate: Thu Jun 6 00:09:36 2019 +0800

HDDS-1637. Fix random test failure TestSCMContainerPlacementRackAware. 
Contributed by Sammi Chen. (#904)
---
 .../algorithms/SCMContainerPlacementRackAware.java  | 13 +
 1 file changed, 13 insertions(+)

diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRackAware.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRackAware.java
index ffebb84..e126f27 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRackAware.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRackAware.java
@@ -237,6 +237,7 @@ public final class SCMContainerPlacementRackAware extends 
SCMCommonPolicy {
   long sizeRequired) throws SCMException {
 int ancestorGen = RACK_LEVEL;
 int maxRetry = MAX_RETRY;
+List<Node> excludedNodesForCapacity = null;
 while(true) {
   Node node = networkTopology.chooseRandom(NetConstants.ROOT, null,
   excludedNodes, affinityNode, ancestorGen);
@@ -265,6 +266,9 @@ public final class SCMContainerPlacementRackAware extends 
SCMCommonPolicy {
   if (hasEnoughSpace((DatanodeDetails)node, sizeRequired)) {
 LOG.debug("Datanode {} is chosen. Required size is {}",
 node.toString(), sizeRequired);
+if (excludedNodes != null && excludedNodesForCapacity != null) {
+  excludedNodes.removeAll(excludedNodesForCapacity);
+}
 return node;
   } else {
 maxRetry--;
@@ -275,6 +279,15 @@ public final class SCMContainerPlacementRackAware extends 
SCMCommonPolicy {
   LOG.info(errMsg);
   throw new SCMException(errMsg, null);
 }
+if (excludedNodesForCapacity == null) {
+  excludedNodesForCapacity = new ArrayList<>();
+}
+excludedNodesForCapacity.add(node);
+if (excludedNodes == null) {
+  excludedNodes = excludedNodesForCapacity;
+} else {
+  excludedNodes.add(node);
+}
   }
 }
   }
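Distilled, the fix tracks capacity-based exclusions separately so they can be undone once a node is found; a hedged restatement of the loop, with hypothetical helpers:

    // Nodes rejected only for lack of capacity are remembered, then removed
    // from the caller's exclusion list after a successful pick, so later
    // placements are not permanently biased against them.
    List<Node> excludedForCapacity = new ArrayList<>();
    while (true) {
      Node node = chooseCandidate(excludedNodes);      // hypothetical helper
      if (hasEnoughSpace(node, sizeRequired)) {
        excludedNodes.removeAll(excludedForCapacity);  // undo temp exclusions
        return node;
      }
      excludedForCapacity.add(node);
      excludedNodes.add(node);                         // skip on next attempt
    }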





[hadoop] branch trunk updated: HDFS-14356. Implement HDFS cache on SCM with native PMDK libs. Contributed by Feilong He.

2019-06-05 Thread sammichen
This is an automated email from the ASF dual-hosted git repository.

sammichen pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d1aad44  HDFS-14356. Implement HDFS cache on SCM with native PMDK 
libs. Contributed by Feilong He.
d1aad44 is described below

commit d1aad444907e1fc5314e8e64529e57c51ed7561c
Author: Sammi Chen 
AuthorDate: Wed Jun 5 21:33:00 2019 +0800

HDFS-14356. Implement HDFS cache on SCM with native PMDK libs. Contributed 
by Feilong He.
---
 BUILDING.txt   |  28 +++
 dev-support/bin/dist-copynativelibs|   8 +
 hadoop-common-project/hadoop-common/pom.xml|   2 +
 .../hadoop-common/src/CMakeLists.txt   |  21 ++
 .../hadoop-common/src/config.h.cmake   |   1 +
 .../org/apache/hadoop/io/nativeio/NativeIO.java| 135 ++-
 .../src/org/apache/hadoop/io/nativeio/NativeIO.c   | 252 +
 .../src/org/apache/hadoop/io/nativeio/pmdk_load.c  | 106 +
 .../src/org/apache/hadoop/io/nativeio/pmdk_load.h  |  95 
 .../apache/hadoop/io/nativeio/TestNativeIO.java| 153 +
 .../datanode/fsdataset/impl/FsDatasetCache.java|  22 ++
 .../datanode/fsdataset/impl/FsDatasetImpl.java |   8 +
 .../datanode/fsdataset/impl/FsDatasetUtil.java |  22 ++
 .../datanode/fsdataset/impl/MappableBlock.java |   6 +
 .../fsdataset/impl/MappableBlockLoader.java|  11 +-
 .../fsdataset/impl/MappableBlockLoaderFactory.java |   4 +
 .../fsdataset/impl/MemoryMappableBlockLoader.java  |   8 +-
 .../datanode/fsdataset/impl/MemoryMappedBlock.java |   5 +
 ...der.java => NativePmemMappableBlockLoader.java} | 166 +++---
 ...MappedBlock.java => NativePmemMappedBlock.java} |  49 ++--
 .../fsdataset/impl/PmemMappableBlockLoader.java|  10 +-
 .../datanode/fsdataset/impl/PmemMappedBlock.java   |   5 +
 22 files changed, 1009 insertions(+), 108 deletions(-)

diff --git a/BUILDING.txt b/BUILDING.txt
index cc9ac17..8c57a1d 100644
--- a/BUILDING.txt
+++ b/BUILDING.txt
@@ -78,6 +78,8 @@ Optional packages:
   $ sudo apt-get install fuse libfuse-dev
 * ZStandard compression
 $ sudo apt-get install zstd
+* PMDK library for storage class memory (SCM) as HDFS cache backend
+  Please refer to http://pmem.io/ and https://github.com/pmem/pmdk
 
 
--
 Maven main modules:
@@ -262,6 +264,32 @@ Maven build goals:
invoke, run 'mvn dependency-check:aggregate'. Note that this plugin
requires maven 3.1.1 or greater.
 
+ PMDK library build options:
+
+   The Persistent Memory Development Kit (PMDK), formerly known as NVML, is a 
growing
+   collection of libraries which have been developed for various use cases, 
tuned,
+   validated to production quality, and thoroughly documented. These libraries 
are built
+   on the Direct Access (DAX) feature available in both Linux and Windows, 
which allows
+   applications direct load/store access to persistent memory by 
memory-mapping files
+   on a persistent memory aware file system.
+
+   It is currently an optional component, meaning that Hadoop can be built 
without
+   this dependency. Please note the library is used via a dynamic module. For
+   more details, please refer to the official sites:
+   http://pmem.io/ and https://github.com/pmem/pmdk.
+
+  * -Drequire.pmdk is used to build the project with PMDK libraries forcibly. 
With this
+option provided, the build will fail if the libpmem library is not found. If
+this option is not given, the build will generate a version of Hadoop with
+libhadoop.so, and storage class memory (SCM) backed HDFS cache is still
+supported without PMDK involved. Because PMDK can bring better caching
+write/read performance, it is recommended to build the project with this
+option if the user plans to use SCM backed HDFS cache.
+  * -Dpmdk.lib is used to specify a nonstandard location for PMDK libraries if 
they are not
+under /usr/lib or /usr/lib64.
+  * -Dbundle.pmdk is used to copy the specified libpmem libraries into the 
distribution tar
+package. This option requires that -Dpmdk.lib is specified. With 
-Dbundle.pmdk provided,
+the build will fail if -Dpmdk.lib is not specified.
+
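As an aside, a typical build invocation combining these flags might look like mvn clean package -Pdist,native -Drequire.pmdk -Dpmdk.lib=/usr/lib64 -Dbundle.pmdk -DskipTests; the library path here is a hypothetical example rather than anything prescribed by this commit.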
 
--
 Building components separately
 
diff --git a/dev-support/bin/dist-copynativelibs 
b/dev-support/bin/dist-copynativelibs
index 67d2edf..4a783f0 100755
--- a/dev-support/bin/dist-copynativelibs
+++ b/dev-support/bin/dist-copynativelibs
@@ -96,6 +96,12 @@ for i in "$@"; do
 --isalbundle=*)
   ISALBUNDLE=${i#*=}
 ;;
+--pmdklib=*)
+  PMDKLIB=${i#*=}
+;;
+--pmdkbundle=*)
+  PMDKBUNDLE=${i#*=}
+;;
 --opensslbinbundle=*)
   OPENSSLBINBUNDLE=${i#*=}
 ;;
@@ -153,6 

[hadoop] branch trunk updated (42cd861 -> 309501c)

2019-06-05 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 42cd861  HDDS-1628. Fix the execution and return code of smoketest 
executor shell script
 new 7724d80  Revert "HADOOP-16321: ITestS3ASSL+TestOpenSSLSocketFactory 
failing with java.lang.UnsatisfiedLinkErrors"
 new 309501c  Revert "HADOOP-16050: s3a SSL connections should use OpenSSL"

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 hadoop-common-project/hadoop-common/pom.xml| 10 ---
 .../security/ssl/TestOpenSSLSocketFactory.java | 57 
 hadoop-tools/hadoop-aws/pom.xml|  5 --
 .../java/org/apache/hadoop/fs/s3a/Constants.java   |  6 --
 .../java/org/apache/hadoop/fs/s3a/S3AUtils.java| 38 ++-
 .../java/org/apache/hadoop/fs/s3a/ITestS3ASSL.java | 75 --
 hadoop-tools/hadoop-azure/pom.xml  |  2 +-
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  4 +-
 .../constants/FileSystemConfigurations.java|  6 +-
 .../hadoop/fs/azurebfs/services/AbfsClient.java|  8 +--
 .../fs/azurebfs/services/AbfsHttpOperation.java|  4 +-
 .../fs/azurebfs/utils/SSLSocketFactoryEx.java  | 62 +-
 .../TestAbfsConfigurationFieldsValidation.java | 16 ++---
 .../fs/azurebfs/services/TestAbfsClient.java   |  6 +-
 14 files changed, 57 insertions(+), 242 deletions(-)
 delete mode 100644 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestOpenSSLSocketFactory.java
 delete mode 100644 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ASSL.java
 rename 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/OpenSSLSocketFactory.java
 => 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/SSLSocketFactoryEx.java
 (82%)





[hadoop] 01/02: Revert "HADOOP-16321: ITestS3ASSL+TestOpenSSLSocketFactory failing with java.lang.UnsatisfiedLinkErrors"

2019-06-05 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 7724d8031b3b8cf499c9777c837b5000db12ecee
Author: Steve Loughran 
AuthorDate: Wed Jun 5 12:42:45 2019 +0100

Revert "HADOOP-16321: ITestS3ASSL+TestOpenSSLSocketFactory failing with 
java.lang.UnsatisfiedLinkErrors"

This reverts commit 5906268f0dd63a93eb591ddccf70d23b15e5c2ed.
---
 .../org/apache/hadoop/security/ssl/TestOpenSSLSocketFactory.java  | 8 ++--
 .../src/test/java/org/apache/hadoop/fs/s3a/ITestS3ASSL.java   | 5 +
 2 files changed, 3 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestOpenSSLSocketFactory.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestOpenSSLSocketFactory.java
index 41ec3e4..ea881e9 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestOpenSSLSocketFactory.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestOpenSSLSocketFactory.java
@@ -35,10 +35,7 @@ public class TestOpenSSLSocketFactory {
 
   @Test
   public void testOpenSSL() throws IOException {
-assumeTrue("Unable to load native libraries",
-NativeCodeLoader.isNativeCodeLoaded());
-assumeTrue("Build was not compiled with support for OpenSSL",
-NativeCodeLoader.buildSupportsOpenssl());
+assumeTrue(NativeCodeLoader.buildSupportsOpenssl());
 OpenSSLSocketFactory.initializeDefaultFactory(
 OpenSSLSocketFactory.SSLChannelMode.OpenSSL);
 assertThat(OpenSSLSocketFactory.getDefaultFactory()
@@ -47,8 +44,7 @@ public class TestOpenSSLSocketFactory {
 
   @Test
   public void testJSEEJava8() throws IOException {
-assumeTrue("Not running on Java 8",
-System.getProperty("java.version").startsWith("1.8"));
+assumeTrue(System.getProperty("java.version").startsWith("1.8"));
 OpenSSLSocketFactory.initializeDefaultFactory(
 OpenSSLSocketFactory.SSLChannelMode.Default_JSSE);
 assertThat(Arrays.stream(OpenSSLSocketFactory.getDefaultFactory()
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ASSL.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ASSL.java
index 4232b0f..794bf80 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ASSL.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ASSL.java
@@ -40,10 +40,7 @@ public class ITestS3ASSL extends AbstractS3ATestBase {
 
   @Test
   public void testOpenSSL() throws IOException {
-assumeTrue("Unable to load native libraries",
-NativeCodeLoader.isNativeCodeLoaded());
-assumeTrue("Build was not compiled with support for OpenSSL",
-NativeCodeLoader.buildSupportsOpenssl());
+assumeTrue(NativeCodeLoader.buildSupportsOpenssl());
 Configuration conf = new Configuration(getConfiguration());
 conf.setEnum(Constants.SSL_CHANNEL_MODE,
 OpenSSLSocketFactory.SSLChannelMode.OpenSSL);





[hadoop] 02/02: Revert "HADOOP-16050: s3a SSL connections should use OpenSSL"

2019-06-05 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 309501c6fa1073f3cfd7e535a4207dbfb21165f9
Author: Steve Loughran 
AuthorDate: Wed Jun 5 12:43:36 2019 +0100

Revert "HADOOP-16050: s3a SSL connections should use OpenSSL"

This reverts commit b067f8acaa79b1230336900a5c62ba465b2adb28.

Change-Id: I584b050a56c0e6f70b11fa3f7db00d5ac46e7dd8
---
 hadoop-common-project/hadoop-common/pom.xml| 10 ---
 .../security/ssl/TestOpenSSLSocketFactory.java | 53 
 hadoop-tools/hadoop-aws/pom.xml|  5 --
 .../java/org/apache/hadoop/fs/s3a/Constants.java   |  6 --
 .../java/org/apache/hadoop/fs/s3a/S3AUtils.java| 38 ++--
 .../java/org/apache/hadoop/fs/s3a/ITestS3ASSL.java | 72 --
 hadoop-tools/hadoop-azure/pom.xml  |  2 +-
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  4 +-
 .../constants/FileSystemConfigurations.java|  6 +-
 .../hadoop/fs/azurebfs/services/AbfsClient.java|  8 +--
 .../fs/azurebfs/services/AbfsHttpOperation.java|  4 +-
 .../fs/azurebfs/utils/SSLSocketFactoryEx.java  | 62 +--
 .../TestAbfsConfigurationFieldsValidation.java | 16 ++---
 .../fs/azurebfs/services/TestAbfsClient.java   |  6 +-
 14 files changed, 57 insertions(+), 235 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/pom.xml 
b/hadoop-common-project/hadoop-common/pom.xml
index 6d15958..64e4d04 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -343,16 +343,6 @@
       <artifactId>dnsjava</artifactId>
       <scope>compile</scope>
     </dependency>
-    <dependency>
-      <groupId>org.wildfly.openssl</groupId>
-      <artifactId>wildfly-openssl</artifactId>
-      <scope>provided</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.assertj</groupId>
-      <artifactId>assertj-core</artifactId>
-      <scope>test</scope>
-    </dependency>
   </dependencies>

   <build>
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestOpenSSLSocketFactory.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestOpenSSLSocketFactory.java
deleted file mode 100644
index ea881e9..0000000
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestOpenSSLSocketFactory.java
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.security.ssl;
-
-import java.io.IOException;
-import java.util.Arrays;
-
-import org.junit.Test;
-
-import org.apache.hadoop.util.NativeCodeLoader;
-
-import static org.assertj.core.api.Assertions.assertThat;
-import static org.junit.Assume.assumeTrue;
-
-/**
- * Tests for {@link OpenSSLSocketFactory}.
- */
-public class TestOpenSSLSocketFactory {
-
-  @Test
-  public void testOpenSSL() throws IOException {
-assumeTrue(NativeCodeLoader.buildSupportsOpenssl());
-OpenSSLSocketFactory.initializeDefaultFactory(
-OpenSSLSocketFactory.SSLChannelMode.OpenSSL);
-assertThat(OpenSSLSocketFactory.getDefaultFactory()
-.getProviderName()).contains("openssl");
-  }
-
-  @Test
-  public void testJSEEJava8() throws IOException {
-assumeTrue(System.getProperty("java.version").startsWith("1.8"));
-OpenSSLSocketFactory.initializeDefaultFactory(
-OpenSSLSocketFactory.SSLChannelMode.Default_JSSE);
-assertThat(Arrays.stream(OpenSSLSocketFactory.getDefaultFactory()
-.getSupportedCipherSuites())).noneMatch("GCM"::contains);
-  }
-}
diff --git a/hadoop-tools/hadoop-aws/pom.xml b/hadoop-tools/hadoop-aws/pom.xml
index 880ae83..9dc0acc 100644
--- a/hadoop-tools/hadoop-aws/pom.xml
+++ b/hadoop-tools/hadoop-aws/pom.xml
@@ -418,11 +418,6 @@
       <scope>compile</scope>
     </dependency>
-    <dependency>
-      <groupId>org.wildfly.openssl</groupId>
-      <artifactId>wildfly-openssl</artifactId>
-      <scope>runtime</scope>
-    </dependency>
     <dependency>
       <groupId>junit</groupId>
       <artifactId>junit</artifactId>
       <scope>test</scope>
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
index 7a68794..18ed7b4 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java

[hadoop] branch trunk updated: HDDS-1628. Fix the execution and return code of smoketest executor shell script

2019-06-05 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 42cd861  HDDS-1628. Fix the execution and return code of smoketest 
executor shell script
42cd861 is described below

commit 42cd861be08767f2388d9efdc5047c4840312c2e
Author: Márton Elek 
AuthorDate: Wed Jun 5 14:04:17 2019 +0200

HDDS-1628. Fix the execution and return code of smoketest executor shell 
script

Closes #902
---
 hadoop-ozone/dev-support/checks/acceptance.sh  | 3 ++-
 hadoop-ozone/dist/src/main/compose/test-all.sh | 2 +-
 hadoop-ozone/dist/src/main/smoketest/test.sh   | 3 ++-
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/acceptance.sh 
b/hadoop-ozone/dev-support/checks/acceptance.sh
index 0a4c5d6..8de920f 100755
--- a/hadoop-ozone/dev-support/checks/acceptance.sh
+++ b/hadoop-ozone/dev-support/checks/acceptance.sh
@@ -13,6 +13,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 export HADOOP_VERSION=3
-hadoop-ozone/dist/target/ozone-*-SNAPSHOT/smoketest/test.sh
+"$DIR/../../../hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/test-all.sh"
 exit $?
diff --git a/hadoop-ozone/dist/src/main/compose/test-all.sh 
b/hadoop-ozone/dist/src/main/compose/test-all.sh
index 225acec..a17ef4d 100755
--- a/hadoop-ozone/dist/src/main/compose/test-all.sh
+++ b/hadoop-ozone/dist/src/main/compose/test-all.sh
@@ -34,7 +34,7 @@ for test in $(find $SCRIPT_DIR -name test.sh); do
 
   #required to read the .env file from the right location
   cd "$(dirname "$test")" || continue
-  $test
+  ./test.sh
   ret=$?
   if [[ $ret -ne 0 ]]; then
   RESULT=-1
diff --git a/hadoop-ozone/dist/src/main/smoketest/test.sh 
b/hadoop-ozone/dist/src/main/smoketest/test.sh
index b2cdfc3..e0a26b0 100755
--- a/hadoop-ozone/dist/src/main/smoketest/test.sh
+++ b/hadoop-ozone/dist/src/main/smoketest/test.sh
@@ -23,5 +23,6 @@ REPLACEMENT="$DIR/../compose/test-all.sh"
 echo "THIS SCRIPT IS DEPRECATED. Please use $REPLACEMENT instead."
 
 ${REPLACEMENT}
-
+RESULT=$?
 cp -r "$DIR/../compose/result" "$DIR"
+exit $RESULT





[hadoop] annotated tag submarine-0.2.0-RC0 created (now 526bbd4)

2019-06-05 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a change to annotated tag submarine-0.2.0-RC0
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at 526bbd4  (tag)
 tagging 4c49cae231beb7859d34c2171eb24d1fd5d51c4e (commit)
  by Sunil G
  on Wed Jun 5 08:06:12 2019 +

- Log -
Submarine 0.2.0 release
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAABAgAGBQJc93f0AAoJEPQxMxUmZSbLFLgP+wRIOiMP0qqERwp3H2fEGlr/
u5yWZhDr1BkEJxuC7hXJtNVv87OG/b84EbD1htdoB0i/jG+L9ibQ1n50mBJkCyNQ
ssMtC/x17m84BbQE3ZBVK0g0KJiz64wvSFsJqBF0q4M3aR2mdk8C9rtr8dw0wxs8
muy09X5QxjUYcIzqjCq+eooQfQY1cVnblIIfxDpdpUrb+TQoqm2e0N+WTJlNk1RF
qt5FsCHH0tgyYdoUbbgxZ/X8syN0W2HGHAduEgl4DQ49tCBeE4rNIOtN6TupGL7P
STePfSxz3mGb3xC8UIXGroq0qUKPJCgpCS8een1Z1F+vUCBAjsA10snAuOpPlVEp
DSjShKw/qmP9/3ts9IFM4uGkWS/iLOGc3x6ObFjPOtiRyInYvk3+VVqYmZrtgNzc
+yoBIuNZzUvn9Y6qi37L0hT1czLxO7FRBJz6P4JM0b2FlsExs2fQm3/l1hoVaYQw
hjsCRyMvT7IHLsUv1bZwLEMbk2f0iMacCfQmW53Ua+6yoNqTGaiNv4790I+sIUFt
QCWoNCpI8nfquPGTXQzlutExCPa1awI7/5GkPTYgGuBFAfd13zoE8lqcMAeY5YXx
0kpsMBRm1S3ZY4Ns+mNJo5hGlpdUlUT24KVwrSKlWc6JHPC/MSAMbI8p7BPXeKkV
G3hzGOp4LqLBcaFhBxeg
=4VmC
-END PGP SIGNATURE-
---

No new revisions were added by this update.

