[hadoop] branch trunk updated (bfe1dac -> e8e7d7b)

2019-09-23 Thread zhangduo
This is an automated email from the ASF dual-hosted git repository.

zhangduo pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from bfe1dac  HADOOP-16560. [YARN] use protobuf-maven-plugin to generate protobuf classes (#1496)
 add e8e7d7b  HADOOP-16561. [MAPREDUCE] use protobuf-maven-plugin to generate protobuf classes (#1500)

No new revisions were added by this update.

Summary of changes:
 .../hadoop-mapreduce-client-common/pom.xml | 34 --
 .../src/main/proto/HSAdminRefreshProtocol.proto|  3 +-
 .../src/main/proto/MRClientProtocol.proto  |  1 +
 .../src/main/proto/mr_protos.proto |  1 +
 .../src/main/proto/mr_service_protos.proto |  1 +
 .../hadoop-mapreduce-client-shuffle/pom.xml| 27 ++---
 .../src/main/proto/ShuffleHandlerRecovery.proto|  1 +
 7 files changed, 27 insertions(+), 41 deletions(-)





[hadoop] branch trunk updated (0a716bd -> bfe1dac)

2019-09-23 Thread zhangduo
This is an automated email from the ASF dual-hosted git repository.

zhangduo pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 0a716bd  HDDS-2159. Fix Race condition in ProfileServlet#pid.
 add bfe1dac  HADOOP-16560. [YARN] use protobuf-maven-plugin to generate protobuf classes (#1496)

No new revisions were added by this update.

Summary of changes:
 .../hadoop-yarn/hadoop-yarn-api/pom.xml| 41 +++--
 .../src/main/proto/YarnCsiAdaptor.proto|  1 +
 .../main/proto/application_history_client.proto|  1 +
 .../main/proto/applicationclient_protocol.proto|  1 +
 .../main/proto/applicationmaster_protocol.proto|  1 +
 .../src/main/proto/client_SCM_protocol.proto   |  1 +
 .../main/proto/containermanagement_protocol.proto  |  1 +
 .../src/main/proto/server/SCM_Admin_protocol.proto |  1 +
 .../proto/server/application_history_server.proto  |  1 +
 .../resourcemanager_administration_protocol.proto  |  1 +
 ...arn_server_resourcemanager_service_protos.proto |  1 +
 .../src/main/proto/yarn_csi_adaptor.proto  |  4 +-
 .../src/main/proto/yarn_protos.proto   |  1 +
 .../src/main/proto/yarn_service_protos.proto   |  1 +
 .../hadoop-yarn-services-core/pom.xml  | 22 ++-
 .../src/main/proto/ClientAMProtocol.proto  |  1 +
 .../hadoop-yarn/hadoop-yarn-client/pom.xml | 31 --
 .../src/test/proto/test_amrm_token.proto   |  1 +
 .../hadoop-yarn/hadoop-yarn-common/pom.xml | 41 +++--
 .../src/main/proto/yarn_security_token.proto   |  1 +
 .../pom.xml| 39 ++--
 .../yarn_server_timelineserver_recovery.proto  |  1 +
 .../hadoop-yarn-server-common/pom.xml  | 39 
 .../src/main/proto/ResourceTracker.proto   |  1 +
 .../src/main/proto/SCMUploader.proto   |  1 +
 .../main/proto/collectornodemanager_protocol.proto |  1 +
 .../proto/distributed_scheduling_am_protocol.proto |  2 +-
 .../src/main/proto/yarn_server_common_protos.proto |  1 +
 .../proto/yarn_server_common_service_protos.proto  |  1 +
 .../main/proto/yarn_server_federation_protos.proto |  1 +
 .../hadoop-yarn-server-nodemanager/pom.xml | 34 ---
 .../src/main/proto/LocalizationProtocol.proto  |  1 +
 .../proto/yarn_server_nodemanager_recovery.proto   |  1 +
 .../yarn_server_nodemanager_service_protos.proto   |  1 +
 .../hadoop-yarn-server-resourcemanager/pom.xml | 69 +-
 .../yarn_server_resourcemanager_recovery.proto |  1 +
 .../src/test/proto/test_client_tokens.proto|  1 +
 .../hadoop-yarn-server-tests/pom.xml   | 31 --
 .../src/test/proto/test_token.proto|  1 +
 39 files changed, 160 insertions(+), 221 deletions(-)





[hadoop] branch trunk updated: HDDS-2159. Fix Race condition in ProfileServlet#pid.

2019-09-23 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0a716bd  HDDS-2159. Fix Race condition in ProfileServlet#pid.
0a716bd is described below

commit 0a716bd3a5b38779bb07450acb3279e859bb7471
Author: Hanisha Koneru 
AuthorDate: Fri Sep 20 13:06:29 2019 -0700

HDDS-2159. Fix Race condition in ProfileServlet#pid.

Signed-off-by: Anu Engineer 
---
 .../java/org/apache/hadoop/hdds/server/ProfileServlet.java | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java
index 016445c..7cea582 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java
@@ -119,7 +119,7 @@ public class ProfileServlet extends HttpServlet {
   Pattern.compile(FILE_PREFIX + "[0-9]+-[0-9A-Za-z\\-_]+-[0-9]+\\.[a-z]+");
 
   private Lock profilerLock = new ReentrantLock();
-  private Integer pid;
+  private final Integer pid;
   private String asyncProfilerHome;
   private transient Process process;
 
@@ -208,11 +208,11 @@ public class ProfileServlet extends HttpServlet {
   return;
 }
 // if pid is explicitly specified, use it else default to current process
-pid = getInteger(req, "pid", pid);
+Integer processId = getInteger(req, "pid", pid);
 
 // if pid is not specified in query param and if current process pid
 // cannot be determined
-if (pid == null) {
+if (processId == null) {
   resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
   setResponseHeader(resp);
   resp.getWriter().write(
@@ -243,7 +243,7 @@ public class ProfileServlet extends HttpServlet {
 //Should be in sync with FILE_NAME_PATTERN
 File outputFile =
 OUTPUT_DIR.resolve(
-ProfileServlet.generateFileName(pid, output, event))
+ProfileServlet.generateFileName(processId, output, event))
 .toFile();
 List<String> cmd = new ArrayList<>();
 cmd.add(asyncProfilerHome + PROFILER_SCRIPT);
@@ -288,7 +288,7 @@ public class ProfileServlet extends HttpServlet {
 if (reverse) {
   cmd.add("--reverse");
 }
-cmd.add(pid.toString());
+cmd.add(processId.toString());
 process = runCmdAsync(cmd);
 
 // set response and set refresh header to output location
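
The essence of the fix: pid was a servlet-wide field that doGet() reassigned on
every request, so two concurrent requests could overwrite each other's value
between the assignment and its use. The patch makes the field final and keeps
the per-request value in a local variable. A minimal, self-contained sketch of
the same pattern follows; PidServlet is a hypothetical class, not the Hadoop
code itself:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class PidServlet extends HttpServlet {
      // Shared default, resolved once; final so no request can mutate it.
      private final Integer defaultPid = 1234;

      @Override
      protected void doGet(HttpServletRequest req, HttpServletResponse resp)
          throws IOException {
        // Racy version: assigning a field here would let concurrent requests
        // clobber each other. Request-scoped state stays in a local instead.
        String param = req.getParameter("pid");  // validation omitted
        Integer processId = (param != null) ? Integer.valueOf(param) : defaultPid;
        resp.getWriter().write("profiling pid " + processId);
      }
    }

Anything written to a servlet field is visible to every in-flight request;
locals are confined to the handling thread, which is all the fix relies on.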





[hadoop] branch trunk updated (3fd3d74 -> 6cbe5d3)

2019-09-23 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 3fd3d74  HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
 add 6cbe5d3  HDDS-2160. Add acceptance test for ozonesecure-mr compose. Contributed by Xiaoyu Yao. (#1490)

No new revisions were added by this update.

Summary of changes:
 .../compose/{ozone-mr/hadoop32 => ozonesecure-mr}/test.sh| 12 
 .../src/main/smoketest/{kinit.robot => kinit-hadoop.robot}   |  2 +-
 hadoop-ozone/dist/src/main/smoketest/kinit.robot |  5 -
 hadoop-ozone/dist/src/main/smoketest/mapreduce.robot |  2 +-
 4 files changed, 14 insertions(+), 7 deletions(-)
 copy hadoop-ozone/dist/src/main/compose/{ozone-mr/hadoop32 => ozonesecure-mr}/test.sh (83%)
 copy hadoop-ozone/dist/src/main/smoketest/{kinit.robot => kinit-hadoop.robot} (94%)





[hadoop] branch trunk updated: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable

2019-09-23 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3fd3d74  HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
3fd3d74 is described below

commit 3fd3d746fc4033cb4ab2265c7b9c9aaf8b39c10c
Author: dchitlangia 
AuthorDate: Fri Sep 20 18:06:30 2019 -0400

HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable

Signed-off-by: Anu Engineer 
---
 .../apache/hadoop/ozone/om/OMMetadataManager.java  |  6 +-
 .../ozone/om/codec/RepeatedOmKeyInfoCodec.java | 52 +
 .../hadoop/ozone/om/helpers/RepeatedOmKeyInfo.java | 91 ++
 .../src/main/proto/OzoneManagerProtocol.proto  |  4 +
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java | 47 +--
 .../hadoop/ozone/om/OmMetadataManagerImpl.java | 90 +++--
 .../ozone/om/request/key/OMKeyDeleteRequest.java   |  3 +-
 .../multipart/S3MultipartUploadAbortRequest.java   |  5 +-
 .../S3MultipartUploadCommitPartRequest.java|  4 +-
 .../ozone/om/response/key/OMKeyDeleteResponse.java | 27 ---
 .../multipart/S3MultipartUploadAbortResponse.java  | 21 +++--
 .../S3MultipartUploadCommitPartResponse.java   | 34 +---
 .../ozone/om/request/TestOMRequestUtils.java   | 14 +++-
 .../om/response/key/TestOMKeyDeleteResponse.java   | 20 ++---
 .../s3/multipart/TestS3MultipartResponse.java  |  3 +-
 .../TestS3MultipartUploadAbortResponse.java| 19 ++---
 16 files changed, 324 insertions(+), 116 deletions(-)

diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
index cc908fc..1d80f97 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
@@ -26,9 +26,11 @@ import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
 import org.apache.hadoop.ozone.om.helpers.OmMultipartKeyInfo;
 import org.apache.hadoop.ozone.om.helpers.OmPrefixInfo;
 import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
 import org.apache.hadoop.ozone.om.helpers.S3SecretValue;
 import org.apache.hadoop.ozone.om.lock.OzoneManagerLock;
-import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.VolumeList;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.VolumeList;
 import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
 import org.apache.hadoop.hdds.utils.db.DBStore;
 import org.apache.hadoop.hdds.utils.db.Table;
@@ -251,7 +253,7 @@ public interface OMMetadataManager {
*
* @return Deleted Table.
*/
-  Table<String, OmKeyInfo> getDeletedTable();
+  Table<String, RepeatedOmKeyInfo> getDeletedTable();
 
   /**
* Gets the OpenKeyTable.
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/RepeatedOmKeyInfoCodec.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/RepeatedOmKeyInfoCodec.java
new file mode 100644
index 0000000..a0ef4a5
--- /dev/null
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/RepeatedOmKeyInfoCodec.java
@@ -0,0 +1,52 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.codec;
+
+import com.google.common.base.Preconditions;
+import com.google.protobuf.InvalidProtocolBufferException;
+import org.apache.hadoop.hdds.utils.db.Codec;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.RepeatedKeyInfo;
+
+import java.io.IOException;
+
+/**
+ * Codec to encode RepeatedOmKeyInfo as byte array.
+ */
+public class RepeatedOmKeyInfoCodec implements Codec<RepeatedOmKeyInfo> {
+  @Override
+  public byte[] toPersistedFormat(RepeatedOmKeyInfo object)
+      throws IOException {
+    Preconditions.checkNotNull(object,
+        "Null object can't be converted to byte array.");
+    return
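
The archive truncates the new codec mid-method. For orientation, the general
shape of such a codec, serialize on write and parse on read, is sketched below
against a minimal stand-in interface. The stand-in types (KeyList,
KeyListCodec) are illustrative only; the real class converts RepeatedOmKeyInfo
to and from its protobuf message:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    // Minimal stand-in for the Ozone Codec contract shown in the diff above.
    interface Codec<T> {
      byte[] toPersistedFormat(T object) throws IOException;
      T fromPersistedFormat(byte[] rawData) throws IOException;
    }

    // Toy value type standing in for RepeatedOmKeyInfo.
    class KeyList {
      final String csv;
      KeyList(String csv) { this.csv = csv; }
    }

    class KeyListCodec implements Codec<KeyList> {
      @Override
      public byte[] toPersistedFormat(KeyList object) throws IOException {
        if (object == null) {
          throw new IOException("Null object can't be converted to byte array.");
        }
        return object.csv.getBytes(StandardCharsets.UTF_8);
      }

      @Override
      public KeyList fromPersistedFormat(byte[] rawData) throws IOException {
        return new KeyList(new String(rawData, StandardCharsets.UTF_8));
      }
    }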

[hadoop] branch branch-2 updated: YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar

2019-09-23 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 90fbfbb  YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar
90fbfbb is described below

commit 90fbfbbe710a092b6698b97410ac4ea82deeaf4f
Author: Jonathan Hung 
AuthorDate: Mon Sep 23 11:42:41 2019 -0700

YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar

(cherry picked from commit 3d78b1223d3fdc29d500803cefd2931b54f44928)
(cherry picked from commit a1fa9a8a7f79a1a711cd881b526724b502e03456)
(cherry picked from commit 6a1d2d56bd6b3cd2f535a732cc07a78ea52062f8)
---
 .../server/resourcemanager/ClientRMService.java|  6 ++-
 .../yarn/server/resourcemanager/RMAuditLogger.java | 53 --
 .../server/resourcemanager/TestRMAuditLogger.java  | 21 +++--
 3 files changed, 69 insertions(+), 11 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
index d85f83a..8c03526 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
@@ -668,13 +668,15 @@ public class ClientRMService extends AbstractService implements
   " submitted by user " + user);
   RMAuditLogger.logSuccess(user, AuditConstants.SUBMIT_APP_REQUEST,
   "ClientRMService", applicationId, callerContext,
-  submissionContext.getQueue());
+  submissionContext.getQueue(),
+  submissionContext.getNodeLabelExpression());
 } catch (YarnException e) {
   LOG.info("Exception in submitting " + applicationId, e);
   RMAuditLogger.logFailure(user, AuditConstants.SUBMIT_APP_REQUEST,
   e.getMessage(), "ClientRMService",
   "Exception in submitting application", applicationId, callerContext,
-  submissionContext.getQueue());
+  submissionContext.getQueue(),
+  submissionContext.getNodeLabelExpression());
   throw e;
 }
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
index af66229..0271964 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
@@ -275,6 +275,16 @@ public class RMAuditLogger {
 }
   }
 
+  public static void logSuccess(String user, String operation, String target,
+      ApplicationId appId, CallerContext callerContext, String queueName,
+      String partition) {
+    if (LOG.isInfoEnabled()) {
+      LOG.info(
+          createSuccessLog(user, operation, target, appId, null, null, null,
+              callerContext, Server.getRemoteIp(), queueName, partition));
+    }
+  }
+
   /**
* Create a readable and parseable audit log string for a successful event.
*
@@ -395,7 +405,8 @@ public class RMAuditLogger {
   static String createFailureLog(String user, String operation, String perm,
   String target, String description, ApplicationId appId,
   ApplicationAttemptId attemptId, ContainerId containerId,
-  Resource resource, CallerContext callerContext, String queueName) {
+  Resource resource, CallerContext callerContext, String queueName,
+  String partition) {
 StringBuilder b = createStringBuilderForFailureLog(user,
 operation, target, description, perm);
 if (appId != null) {
@@ -414,6 +425,10 @@ public class RMAuditLogger {
 if (queueName != null) {
   add(Keys.QUEUENAME, queueName, b);
 }
+    if (partition != null) {
+      add(Keys.NODELABEL, partition, b);
+    }
+
 return b.toString();
   }
 
@@ -424,7 +439,7 @@ public class RMAuditLogger {
   String target, String description, ApplicationId appId,
   ApplicationAttemptId attemptId, ContainerId containerId, Resource resource) {
 return createFailureLog(user, operation, perm, target, 

[hadoop] branch branch-3.2 updated: YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar

2019-09-23 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new a1fa9a8  YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar
a1fa9a8 is described below

commit a1fa9a8a7f79a1a711cd881b526724b502e03456
Author: Jonathan Hung 
AuthorDate: Mon Sep 23 11:42:41 2019 -0700

YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar

(cherry picked from commit 3d78b1223d3fdc29d500803cefd2931b54f44928)
---
 .../server/resourcemanager/ClientRMService.java|  6 ++-
 .../yarn/server/resourcemanager/RMAuditLogger.java | 53 --
 .../server/resourcemanager/TestRMAuditLogger.java  | 21 +++--
 3 files changed, 69 insertions(+), 11 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
index 3d1f01d..571add2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
@@ -694,13 +694,15 @@ public class ClientRMService extends AbstractService implements
   " submitted by user " + user);
   RMAuditLogger.logSuccess(user, AuditConstants.SUBMIT_APP_REQUEST,
   "ClientRMService", applicationId, callerContext,
-  submissionContext.getQueue());
+  submissionContext.getQueue(),
+  submissionContext.getNodeLabelExpression());
 } catch (YarnException e) {
   LOG.info("Exception in submitting " + applicationId, e);
   RMAuditLogger.logFailure(user, AuditConstants.SUBMIT_APP_REQUEST,
   e.getMessage(), "ClientRMService",
   "Exception in submitting application", applicationId, callerContext,
-  submissionContext.getQueue());
+  submissionContext.getQueue(),
+  submissionContext.getNodeLabelExpression());
   throw e;
 }
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
index 292aa8b..06ac64c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
@@ -275,6 +275,16 @@ public class RMAuditLogger {
 }
   }
 
+  public static void logSuccess(String user, String operation, String target,
+      ApplicationId appId, CallerContext callerContext, String queueName,
+      String partition) {
+    if (LOG.isInfoEnabled()) {
+      LOG.info(
+          createSuccessLog(user, operation, target, appId, null, null, null,
+              callerContext, Server.getRemoteIp(), queueName, partition));
+    }
+  }
+
   /**
* Create a readable and parseable audit log string for a successful event.
*
@@ -395,7 +405,8 @@ public class RMAuditLogger {
   static String createFailureLog(String user, String operation, String perm,
   String target, String description, ApplicationId appId,
   ApplicationAttemptId attemptId, ContainerId containerId,
-  Resource resource, CallerContext callerContext, String queueName) {
+  Resource resource, CallerContext callerContext, String queueName,
+  String partition) {
 StringBuilder b = createStringBuilderForFailureLog(user,
 operation, target, description, perm);
 if (appId != null) {
@@ -414,6 +425,10 @@ public class RMAuditLogger {
 if (queueName != null) {
   add(Keys.QUEUENAME, queueName, b);
 }
+    if (partition != null) {
+      add(Keys.NODELABEL, partition, b);
+    }
+
 return b.toString();
   }
 
@@ -424,7 +439,7 @@ public class RMAuditLogger {
   String target, String description, ApplicationId appId,
   ApplicationAttemptId attemptId, ContainerId containerId, Resource resource) {
 return createFailureLog(user, operation, perm, target, description, appId,
-attemptId, containerId, resource, null, null);
+attemptId, containerId, resource, null, null, null);
   }

[hadoop] branch branch-3.1 updated: YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar

2019-09-23 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 6a1d2d5  YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar
6a1d2d5 is described below

commit 6a1d2d56bd6b3cd2f535a732cc07a78ea52062f8
Author: Jonathan Hung 
AuthorDate: Mon Sep 23 11:42:41 2019 -0700

YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar

(cherry picked from commit 3d78b1223d3fdc29d500803cefd2931b54f44928)
(cherry picked from commit a1fa9a8a7f79a1a711cd881b526724b502e03456)
---
 .../server/resourcemanager/ClientRMService.java|  6 ++-
 .../yarn/server/resourcemanager/RMAuditLogger.java | 53 --
 .../server/resourcemanager/TestRMAuditLogger.java  | 21 +++--
 3 files changed, 69 insertions(+), 11 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
index e81a372..1c10e89 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
@@ -680,13 +680,15 @@ public class ClientRMService extends AbstractService implements
   " submitted by user " + user);
   RMAuditLogger.logSuccess(user, AuditConstants.SUBMIT_APP_REQUEST,
   "ClientRMService", applicationId, callerContext,
-  submissionContext.getQueue());
+  submissionContext.getQueue(),
+  submissionContext.getNodeLabelExpression());
 } catch (YarnException e) {
   LOG.info("Exception in submitting " + applicationId, e);
   RMAuditLogger.logFailure(user, AuditConstants.SUBMIT_APP_REQUEST,
   e.getMessage(), "ClientRMService",
   "Exception in submitting application", applicationId, callerContext,
-  submissionContext.getQueue());
+  submissionContext.getQueue(),
+  submissionContext.getNodeLabelExpression());
   throw e;
 }
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
index 292aa8b..06ac64c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
@@ -275,6 +275,16 @@ public class RMAuditLogger {
 }
   }
 
+  public static void logSuccess(String user, String operation, String target,
+      ApplicationId appId, CallerContext callerContext, String queueName,
+      String partition) {
+    if (LOG.isInfoEnabled()) {
+      LOG.info(
+          createSuccessLog(user, operation, target, appId, null, null, null,
+              callerContext, Server.getRemoteIp(), queueName, partition));
+    }
+  }
+
   /**
* Create a readable and parseable audit log string for a successful event.
*
@@ -395,7 +405,8 @@ public class RMAuditLogger {
   static String createFailureLog(String user, String operation, String perm,
   String target, String description, ApplicationId appId,
   ApplicationAttemptId attemptId, ContainerId containerId,
-  Resource resource, CallerContext callerContext, String queueName) {
+  Resource resource, CallerContext callerContext, String queueName,
+  String partition) {
 StringBuilder b = createStringBuilderForFailureLog(user,
 operation, target, description, perm);
 if (appId != null) {
@@ -414,6 +425,10 @@ public class RMAuditLogger {
 if (queueName != null) {
   add(Keys.QUEUENAME, queueName, b);
 }
+    if (partition != null) {
+      add(Keys.NODELABEL, partition, b);
+    }
+
 return b.toString();
   }
 
@@ -424,7 +439,7 @@ public class RMAuditLogger {
   String target, String description, ApplicationId appId,
   ApplicationAttemptId attemptId, ContainerId containerId, Resource resource) {
 return createFailureLog(user, operation, perm, target, description, appId,
-attemptId, containerId, resource, null, 

[hadoop] branch trunk updated: YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar

2019-09-23 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3d78b12  YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar
3d78b12 is described below

commit 3d78b1223d3fdc29d500803cefd2931b54f44928
Author: Jonathan Hung 
AuthorDate: Mon Sep 23 11:42:41 2019 -0700

YARN-9762. Add submission context label to audit logs. Contributed by Manoj Kumar
---
 .../server/resourcemanager/ClientRMService.java|  6 ++-
 .../yarn/server/resourcemanager/RMAuditLogger.java | 53 --
 .../server/resourcemanager/TestRMAuditLogger.java  | 21 +++--
 3 files changed, 69 insertions(+), 11 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
index 2b93ca7..f9681e0 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
@@ -695,13 +695,15 @@ public class ClientRMService extends AbstractService implements
   " submitted by user " + user);
   RMAuditLogger.logSuccess(user, AuditConstants.SUBMIT_APP_REQUEST,
   "ClientRMService", applicationId, callerContext,
-  submissionContext.getQueue());
+  submissionContext.getQueue(),
+  submissionContext.getNodeLabelExpression());
 } catch (YarnException e) {
   LOG.info("Exception in submitting " + applicationId, e);
   RMAuditLogger.logFailure(user, AuditConstants.SUBMIT_APP_REQUEST,
   e.getMessage(), "ClientRMService",
   "Exception in submitting application", applicationId, callerContext,
-  submissionContext.getQueue());
+  submissionContext.getQueue(),
+  submissionContext.getNodeLabelExpression());
   throw e;
 }
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
index b24cac9..854b6ca 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
@@ -271,6 +271,16 @@ public class RMAuditLogger {
 }
   }
 
+  public static void logSuccess(String user, String operation, String target,
+      ApplicationId appId, CallerContext callerContext, String queueName,
+      String partition) {
+    if (LOG.isInfoEnabled()) {
+      LOG.info(
+          createSuccessLog(user, operation, target, appId, null, null, null,
+              callerContext, Server.getRemoteIp(), queueName, partition));
+    }
+  }
+
   /**
* Create a readable and parseable audit log string for a successful event.
*
@@ -391,7 +401,8 @@ public class RMAuditLogger {
   static String createFailureLog(String user, String operation, String perm,
   String target, String description, ApplicationId appId,
   ApplicationAttemptId attemptId, ContainerId containerId,
-  Resource resource, CallerContext callerContext, String queueName) {
+  Resource resource, CallerContext callerContext, String queueName,
+  String partition) {
 StringBuilder b = createStringBuilderForFailureLog(user,
 operation, target, description, perm);
 if (appId != null) {
@@ -410,6 +421,10 @@ public class RMAuditLogger {
 if (queueName != null) {
   add(Keys.QUEUENAME, queueName, b);
 }
+    if (partition != null) {
+      add(Keys.NODELABEL, partition, b);
+    }
+
 return b.toString();
   }
 
@@ -420,7 +435,7 @@ public class RMAuditLogger {
   String target, String description, ApplicationId appId,
   ApplicationAttemptId attemptId, ContainerId containerId, Resource resource) {
 return createFailureLog(user, operation, perm, target, description, appId,
-attemptId, containerId, resource, null, null);
+attemptId, containerId, resource, null, null, null);
   }
 
   /**
@@ -492,7 +507,7 @@ public class RMAuditLogger {
   CallerContext 
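
Across all four branch commits the change is identical: the success and
failure log builders gain a partition argument that is appended as a NODELABEL
key only when non-null, so consumers of existing audit lines see no new field
unless a label was actually set. A rough illustration of that optional
key-value append; the names here are illustrative, not the actual
RMAuditLogger internals:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public final class AuditLineSketch {
      public static String build(String user, String operation, String queue,
          String nodeLabel) {
        Map<String, String> kv = new LinkedHashMap<>();
        kv.put("USER", user);
        kv.put("OPERATION", operation);
        // Optional keys are appended only when present, mirroring the
        // queueName != null and partition != null checks in the diff.
        if (queue != null) {
          kv.put("QUEUENAME", queue);
        }
        if (nodeLabel != null) {
          kv.put("NODELABEL", nodeLabel);
        }
        StringBuilder b = new StringBuilder();
        for (Map.Entry<String, String> e : kv.entrySet()) {
          if (b.length() > 0) {
            b.append('\t');
          }
          b.append(e.getKey()).append('=').append(e.getValue());
        }
        return b.toString();
      }
    }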

[hadoop] branch HDDS-1880-Decom updated (ee8f24c -> fd5e877)

2019-09-23 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a change to branch HDDS-1880-Decom
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from ee8f24c  HDDS-1982. Extend SCMNodeManager to support decommission and maintenance states. Contributed by Stephen O'Donnell.
 add 3f223be  HDFS-14844. Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream configurable. Contributed by Lisheng Sun.
 add 5363730  HDDS-2157. checkstyle: print filenames relative to project root (#1485)
 add d7d6ec8  HDDS-2128. Make ozone sh command work with OM HA service ids (#1445)
 add aa93866  HDFS-14833. RBF: Router Update Doesn't Sync Quota. Contributed by Ayush Saxena.
 add efed445  HADOOP-16589. [pb-upgrade] Update docker image to make 3.7.1 protoc as default (#1482). Contributed by Vinayakumar B.
 add dbdc612  HDDS-2163. Add 'Replication factor' to the output of list keys (#1493)
 add e02b102  HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB (#1332)
 add a94aa1f  HDDS-2150. Update dependency versions to avoid security vulnerabilities. (#1472)
 add 659c888  HDFS-14818. Check native pmdk lib by 'hadoop checknative' command. Contributed by Feilong He.
 add 4c0a7a9  Make upstream aware of 3.2.1 release.
 add 07c81e9  HADOOP-16558. [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes (#1494). Contributed by Vinayakumar B.
 add aa664d7  HADOOP-16138. hadoop fs mkdir / of nonexistent abfs container raises NPE (#1302). Contributed by Gabor Bota.
 add 2b5fc95  HADOOP-16591 Fix S3A ITest*MRjob failures.
 add c30e495  HDFS-14853. NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present. Contributed by Ranith Sardar.
 new fd5e877  Merge branch 'trunk' into HDDS-1880-Decom

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 dev-support/docker/Dockerfile  |  20 +-
 hadoop-common-project/hadoop-common/pom.xml|  66 +--
 .../hadoop-common/src/CMakeLists.txt   |   2 +-
 .../java/org/apache/hadoop/fs/shell/Mkdir.java |  11 +-
 .../org/apache/hadoop/io/nativeio/NativeIO.java|  28 +-
 .../apache/hadoop/util/NativeLibraryChecker.java   |  10 +
 .../src/org/apache/hadoop/io/nativeio/NativeIO.c   |  14 +-
 .../src/org/apache/hadoop/io/nativeio/pmdk_load.c  |  28 +-
 .../src/org/apache/hadoop/io/nativeio/pmdk_load.h  |   5 -
 .../hadoop-common/src/main/proto/FSProtos.proto|   2 +-
 .../src/main/proto/GenericRefreshProtocol.proto|   2 +-
 .../src/main/proto/GetUserMappingsProtocol.proto   |   2 +-
 .../src/main/proto/HAServiceProtocol.proto |   2 +-
 .../src/main/proto/IpcConnectionContext.proto  |   2 +-
 .../src/main/proto/ProtobufRpcEngine.proto |   2 +-
 .../src/main/proto/ProtocolInfo.proto  |   2 +-
 .../proto/RefreshAuthorizationPolicyProtocol.proto |   2 +-
 .../src/main/proto/RefreshCallQueueProtocol.proto  |   2 +-
 .../main/proto/RefreshUserMappingsProtocol.proto   |   2 +-
 .../hadoop-common/src/main/proto/RpcHeader.proto   |   2 +-
 .../hadoop-common/src/main/proto/Security.proto|   2 +-
 .../hadoop-common/src/main/proto/TraceAdmin.proto  |   2 +-
 .../src/main/proto/ZKFCProtocol.proto  |   2 +-
 .../site/markdown/release/3.2.1/CHANGELOG.3.2.1.md | 553 +
 .../markdown/release/3.2.1/RELEASENOTES.3.2.1.md   |  80 +++
 .../hadoop-common/src/test/proto/test.proto|   2 +-
 .../src/test/proto/test_rpc_service.proto  |   1 +
 hadoop-hdds/common/pom.xml |   2 +-
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml |  36 +-
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |   3 +
 .../hdfs/client/impl/BlockReaderFactory.java   |   2 +-
 .../hadoop/hdfs/client/impl/BlockReaderRemote.java |  11 +-
 .../src/main/proto/ClientDatanodeProtocol.proto|   2 +-
 .../src/main/proto/ClientNamenodeProtocol.proto|   2 +-
 .../src/main/proto/ReconfigurationProtocol.proto   |   2 +-
 .../hadoop-hdfs-client/src/main/proto/acl.proto|   2 +-
 .../src/main/proto/datatransfer.proto  |   2 +-
 .../src/main/proto/encryption.proto|   2 +-
 .../src/main/proto/erasurecoding.proto |   2 +-
 .../hadoop-hdfs-client/src/main/proto/hdfs.proto   |   2 +-
 .../src/main/proto/inotify.proto   |   2 +-
 .../hadoop-hdfs-client/src/main/proto/xattr.proto  |   2 +-
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml|  32 +-
 .../federation/router/RouterAdminServer.java   |  98 ++--
 .../src/main/proto/FederationProtocol.proto|   2 +-
 .../src/main/proto/RouterProtocol.proto|   2 +-
 

[hadoop] 01/01: Merge branch 'trunk' into HDDS-1880-Decom

2019-09-23 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch HDDS-1880-Decom
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit fd5e87750551e2fc3352c4d6acd7f43d5932cb32
Merge: ee8f24c c30e495
Author: Anu Engineer 
AuthorDate: Mon Sep 23 09:08:14 2019 -0700

Merge branch 'trunk' into HDDS-1880-Decom

 dev-support/docker/Dockerfile  |  20 +-
 hadoop-common-project/hadoop-common/pom.xml|  66 +-
 .../hadoop-common/src/CMakeLists.txt   |   2 +-
 .../java/org/apache/hadoop/fs/shell/Mkdir.java |  11 +-
 .../org/apache/hadoop/io/nativeio/NativeIO.java|  28 +-
 .../apache/hadoop/util/NativeLibraryChecker.java   |  10 +
 .../src/org/apache/hadoop/io/nativeio/NativeIO.c   |  14 +-
 .../src/org/apache/hadoop/io/nativeio/pmdk_load.c  |  28 +-
 .../src/org/apache/hadoop/io/nativeio/pmdk_load.h  |   5 -
 .../hadoop-common/src/main/proto/FSProtos.proto|   2 +-
 .../src/main/proto/GenericRefreshProtocol.proto|   2 +-
 .../src/main/proto/GetUserMappingsProtocol.proto   |   2 +-
 .../src/main/proto/HAServiceProtocol.proto |   2 +-
 .../src/main/proto/IpcConnectionContext.proto  |   2 +-
 .../src/main/proto/ProtobufRpcEngine.proto |   2 +-
 .../src/main/proto/ProtocolInfo.proto  |   2 +-
 .../proto/RefreshAuthorizationPolicyProtocol.proto |   2 +-
 .../src/main/proto/RefreshCallQueueProtocol.proto  |   2 +-
 .../main/proto/RefreshUserMappingsProtocol.proto   |   2 +-
 .../hadoop-common/src/main/proto/RpcHeader.proto   |   2 +-
 .../hadoop-common/src/main/proto/Security.proto|   2 +-
 .../hadoop-common/src/main/proto/TraceAdmin.proto  |   2 +-
 .../src/main/proto/ZKFCProtocol.proto  |   2 +-
 .../site/markdown/release/3.2.1/CHANGELOG.3.2.1.md | 553 +
 .../markdown/release/3.2.1/RELEASENOTES.3.2.1.md   |  80 +++
 .../hadoop-common/src/test/proto/test.proto|   2 +-
 .../src/test/proto/test_rpc_service.proto  |   1 +
 hadoop-hdds/common/pom.xml |   2 +-
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml |  36 +-
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |   3 +
 .../hdfs/client/impl/BlockReaderFactory.java   |   2 +-
 .../hadoop/hdfs/client/impl/BlockReaderRemote.java |  11 +-
 .../src/main/proto/ClientDatanodeProtocol.proto|   2 +-
 .../src/main/proto/ClientNamenodeProtocol.proto|   2 +-
 .../src/main/proto/ReconfigurationProtocol.proto   |   2 +-
 .../hadoop-hdfs-client/src/main/proto/acl.proto|   2 +-
 .../src/main/proto/datatransfer.proto  |   2 +-
 .../src/main/proto/encryption.proto|   2 +-
 .../src/main/proto/erasurecoding.proto |   2 +-
 .../hadoop-hdfs-client/src/main/proto/hdfs.proto   |   2 +-
 .../src/main/proto/inotify.proto   |   2 +-
 .../hadoop-hdfs-client/src/main/proto/xattr.proto  |   2 +-
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml|  32 +-
 .../federation/router/RouterAdminServer.java   |  98 ++-
 .../src/main/proto/FederationProtocol.proto|   2 +-
 .../src/main/proto/RouterProtocol.proto|   2 +-
 .../server/federation/router/TestRouterQuota.java  |   9 +
 .../dev-support/jdiff/Apache_Hadoop_HDFS_3.2.1.xml | 674 +
 hadoop-hdfs-project/hadoop-hdfs/pom.xml|  48 +-
 .../apache/hadoop/hdfs/net/DFSNetworkTopology.java |   3 +
 .../datanode/erasurecode/StripedBlockReader.java   |   2 +-
 .../datanode/fsdataset/impl/FsDatasetCache.java|  15 +-
 .../src/main/proto/AliasMapProtocol.proto  |   2 +-
 .../src/main/proto/DatanodeLifelineProtocol.proto  |   2 +-
 .../src/main/proto/DatanodeProtocol.proto  |   2 +-
 .../hadoop-hdfs/src/main/proto/HAZKInfo.proto  |   2 +-
 .../hadoop-hdfs/src/main/proto/HdfsServer.proto|   2 +-
 .../src/main/proto/InterDatanodeProtocol.proto |   2 +-
 .../src/main/proto/InterQJournalProtocol.proto |   2 +-
 .../src/main/proto/JournalProtocol.proto   |   2 +-
 .../src/main/proto/NamenodeProtocol.proto  |   2 +-
 .../src/main/proto/QJournalProtocol.proto  |   2 +-
 .../hadoop-hdfs/src/main/proto/editlog.proto   |   2 +-
 .../hadoop-hdfs/src/main/proto/fsimage.proto   |   2 +-
 .../src/main/resources/hdfs-default.xml|  12 +
 .../hadoop/hdfs/net/TestDFSNetworkTopology.java|  16 +
 .../org/apache/hadoop/ozone/client/OzoneKey.java   |  17 +-
 .../hadoop/ozone/client/OzoneKeyDetails.java   |   4 +-
 .../apache/hadoop/ozone/client/rpc/RpcClient.java  |   5 +-
 hadoop-ozone/dev-support/checks/checkstyle.sh  |  12 +-
 .../hadoop/ozone/ozShell/TestOzoneShellHA.java | 343 +++
 .../hadoop/ozone/web/ozShell/OzoneAddress.java |  17 +-
 .../hadoop/ozone/client/OzoneBucketStub.java   |   3 +-
 hadoop-project-dist/pom.xml|   2 +-
 hadoop-project/pom.xml |  52 +-
 

[hadoop] branch trunk updated: HDFS-14853. NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present. Contributed by Ranith Sardar.

2019-09-23 Thread ayushsaxena
This is an automated email from the ASF dual-hosted git repository.

ayushsaxena pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c30e495  HDFS-14853. NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present. Contributed by Ranith Sardar.
c30e495 is described below

commit c30e495557359b23681a61edbc90cfafafdb7dfe
Author: Ayush Saxena 
AuthorDate: Mon Sep 23 21:22:50 2019 +0530

HDFS-14853. NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present. Contributed by Ranith Sardar.
---
 .../org/apache/hadoop/hdfs/net/DFSNetworkTopology.java   |  3 +++
 .../apache/hadoop/hdfs/net/TestDFSNetworkTopology.java   | 16 
 2 files changed, 19 insertions(+)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java
index 7889ef4..0884fc0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java
@@ -226,6 +226,9 @@ public class DFSNetworkTopology extends NetworkTopology {
   String nodeLocation = excludedNode.getNetworkLocation()
   + "/" + excludedNode.getName();
   DatanodeDescriptor dn = (DatanodeDescriptor)getNode(nodeLocation);
+      if (dn == null) {
+        continue;
+      }
   availableCount -= dn.hasStorageType(type)? 1 : 0;
 } else {
   LOG.error("Unexpected node type: {}.", excludedNode.getClass());
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/net/TestDFSNetworkTopology.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/net/TestDFSNetworkTopology.java
index 42b1928..3360d68 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/net/TestDFSNetworkTopology.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/net/TestDFSNetworkTopology.java
@@ -23,6 +23,8 @@ import org.slf4j.LoggerFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.protocol.DatanodeID;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo.DatanodeInfoBuilder;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
 import org.apache.hadoop.net.Node;
@@ -37,9 +39,11 @@ import java.util.HashSet;
 import java.util.Set;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
+
 /**
  * This class tests the correctness of storage type info stored in
  * DFSNetworkTopology.
@@ -368,6 +372,18 @@ public class TestDFSNetworkTopology {
 }
   }
 
+  @Test
+  public void testChooseRandomWithStorageTypeWithExcludedforNullCheck()
+      throws Exception {
+    HashSet<Node> excluded = new HashSet<>();
+
+    excluded.add(new DatanodeInfoBuilder()
+        .setNodeID(DatanodeID.EMPTY_DATANODE_ID).build());
+    Node node = CLUSTER.chooseRandomWithStorageType("/", "/l1/d1/r1", excluded,
+        StorageType.ARCHIVE);
+
+    assertNotNull(node);
+  }
 
   /**
   * This test tests the wrapper method. The wrapper method only takes one scope

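The three added lines guard the lookup: getNode(nodeLocation) returns null
when the excluded node was never registered in, or has already been removed
from, the topology, and the old code dereferenced the result unconditionally.
A compact sketch of the guarded loop, with hypothetical stand-in types in
place of DatanodeDescriptor and the topology lookup:

    import java.util.List;
    import java.util.Map;

    class ExcludedNodeCountSketch {
      static int adjustAvailable(int availableCount,
          List<String> excludedLocations, Map<String, Boolean> topology) {
        for (String location : excludedLocations) {
          Boolean hasStorageType = topology.get(location);
          if (hasStorageType == null) {
            continue; // unknown to the topology: nothing to subtract
          }
          availableCount -= hasStorageType ? 1 : 0;
        }
        return availableCount;
      }
    }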




[hadoop] branch branch-2 updated: HADOOP-16581. Addendum: Remove use of Java 8 functionality. Contributed by Masatake Iwasaki.

2019-09-23 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 0050f43  HADOOP-16581. Addendum: Remove use of Java 8 functionality. Contributed by Masatake Iwasaki.
0050f43 is described below

commit 0050f4363ed909afb2e662cfc2ccb5f7ae224d45
Author: Erik Krogen 
AuthorDate: Mon Sep 23 08:06:15 2019 -0700

HADOOP-16581. Addendum: Remove use of Java 8 functionality. Contributed by Masatake Iwasaki.
---
 .../org/apache/hadoop/crypto/key/TestValueQueue.java  | 19 ---
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestValueQueue.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestValueQueue.java
index 55a9280..6dc0c76 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestValueQueue.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestValueQueue.java
@@ -32,6 +32,7 @@ import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.Assert;
 import org.junit.Test;
 
+import com.google.common.base.Supplier;
 import com.google.common.collect.Sets;
 
 public class TestValueQueue {
@@ -62,15 +63,19 @@ public class TestValueQueue {
 }
   }
 
-  private void waitForRefill(ValueQueue valueQueue, String queueName, int queueSize)
+  private void waitForRefill(final ValueQueue valueQueue,
+      final String queueName, final int queueSize)
       throws TimeoutException, InterruptedException {
-    GenericTestUtils.waitFor(() -> {
-      int size = valueQueue.getSize(queueName);
-      if (size != queueSize) {
-        LOG.info("Current ValueQueue size is " + size);
-        return false;
+    GenericTestUtils.waitFor(new Supplier<Boolean>() {
+      @Override
+      public Boolean get() {
+        int size = valueQueue.getSize(queueName);
+        if (size != queueSize) {
+          LOG.info("Current ValueQueue size is " + size);
+          return false;
+        }
+        return true;
       }
-      return true;
     }, 100, 3000);
   }
 

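branch-2 still compiles as Java 7, so the lambda passed to
GenericTestUtils.waitFor has to become an anonymous Supplier<Boolean>, with
captured locals declared final. A small sketch of the two equivalent forms,
assuming Guava's com.google.common.base.Supplier as in the import added above:

    import java.util.Queue;
    import com.google.common.base.Supplier;

    public class WaitConditionSketch {
      // Java 8+ (trunk): () -> queue.size() == expected
      // Java 7 (branch-2): the same predicate as an anonymous inner class.
      static Supplier<Boolean> sizeReached(final Queue<?> queue,
          final int expected) {
        return new Supplier<Boolean>() {
          @Override
          public Boolean get() {
            return queue.size() == expected;
          }
        };
      }
    }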




[hadoop] branch HDDS-2067 created (now 7256c69)

2019-09-23 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-2067
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at 7256c69  HDDS-2067. Create generic service facade with tracing/metrics/logging support

No new revisions were added by this update.





[hadoop] branch trunk updated: HADOOP-16591 Fix S3A ITest*MRjob failures.

2019-09-23 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2b5fc95  HADOOP-16591 Fix S3A ITest*MRjob failures.
2b5fc95 is described below

commit 2b5fc95851552599e33674d9a23e7e9af74a304e
Author: Siddharth Seth 
AuthorDate: Mon Sep 23 14:55:24 2019 +0100

HADOOP-16591 Fix S3A ITest*MRjob failures.

Contributed by Siddharth Seth.

Change-Id: I7f08201c9f7c0551514049389b5b398a84855191
---
 .../org/apache/hadoop/fs/s3a/commit/AbstractYarnClusterITest.java| 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractYarnClusterITest.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractYarnClusterITest.java
index 2501662..2e8f1f0 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractYarnClusterITest.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractYarnClusterITest.java
@@ -196,8 +196,9 @@ public abstract class AbstractYarnClusterITest extends AbstractCommitITest {
 
 
   protected Job createJob() throws IOException {
-Job mrJob = Job.getInstance(getClusterBinding().getConf(),
-getMethodName());
+Configuration jobConf = getClusterBinding().getConf();
+jobConf.addResource(getConfiguration());
+Job mrJob = Job.getInstance(jobConf, getMethodName());
 patchConfigurationForCommitter(mrJob.getConfiguration());
 return mrJob;
   }
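
The fix layers the per-test configuration onto the shared cluster
configuration before the job is created, so per-test S3A settings reach the
MR job. A sketch of that merge using only stock
org.apache.hadoop.conf.Configuration behavior; the class name is illustrative:

    import org.apache.hadoop.conf.Configuration;

    public class ConfMergeSketch {
      static Configuration jobConfFor(Configuration clusterConf,
          Configuration testConf) {
        // Copy the cluster conf, then overlay the test conf as a resource;
        // resources added later override values from earlier resources.
        Configuration jobConf = new Configuration(clusterConf);
        jobConf.addResource(testConf);
        return jobConf;
      }
    }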





[hadoop] branch trunk updated: HADOOP-16138. hadoop fs mkdir / of nonexistent abfs container raises NPE (#1302). Contributed by Gabor Bota.

2019-09-23 Thread gabota
This is an automated email from the ASF dual-hosted git repository.

gabota pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new aa664d7  HADOOP-16138. hadoop fs mkdir / of nonexistent abfs container raises NPE (#1302). Contributed by Gabor Bota.
aa664d7 is described below

commit aa664d72595ddfcb1a1bf082381bb222e59db354
Author: Gabor Bota 
AuthorDate: Mon Sep 23 13:29:01 2019 +0200

HADOOP-16138. hadoop fs mkdir / of nonexistent abfs container raises NPE (#1302). Contributed by Gabor Bota.

Change-Id: I2f637865c871e400b95fe7ddaa24bf99fa192023
---
 .../java/org/apache/hadoop/fs/shell/Mkdir.java | 11 +++-
 .../fs/azurebfs/ITestAzureBlobFileSystemCLI.java   | 65 ++
 2 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Mkdir.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Mkdir.java
index 5828b0b..1780cda 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Mkdir.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Mkdir.java
@@ -73,8 +73,17 @@ class Mkdir extends FsCommand {
   // we want a/b
   final Path itemPath = new Path(item.path.toString());
   final Path itemParentPath = itemPath.getParent();
+
+      if(itemParentPath == null) {
+        throw new PathNotFoundException(String.format(
+            "Item: %s parent's path is null. This can happen if mkdir is " +
+            "called on root, so there's no parent.", itemPath.toString()));
+      }
+
       if (!item.fs.exists(itemParentPath)) {
-        throw new PathNotFoundException(itemParentPath.toString());
+        throw new PathNotFoundException(String.format(
+            "mkdir failed for path: %s. Item parent path not found: %s.",
+            itemPath.toString(), itemParentPath.toString()));
   }
 }
 if (!item.fs.mkdirs(item.path)) {
diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCLI.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCLI.java
new file mode 100644
index 0000000..c88b545
--- /dev/null
+++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCLI.java
@@ -0,0 +1,65 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.util.UUID;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FsShell;
+import org.apache.hadoop.conf.Configuration;
+
+import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION;
+import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemUriSchemes.ABFS_SCHEME;
+import static 
org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_ABFS_ACCOUNT_NAME;
+
+/**
+ * Tests for Azure Blob FileSystem CLI.
+ */
+public class ITestAzureBlobFileSystemCLI extends AbstractAbfsIntegrationTest {
+
+  public ITestAzureBlobFileSystemCLI() throws Exception {
+    super();
+    final AbfsConfiguration conf = getConfiguration();
+    conf.setBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION,
+        false);
+  }
+
+  /**
+   * Test for HADOOP-16138: hadoop fs mkdir / of nonexistent abfs
+   * container raises NPE.
+   *
+   * The command should return with 1 exit status, but there should be no NPE.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testMkdirRootNonExistentContainer() throws Exception {
+    final Configuration rawConf = getRawConfiguration();
+    FsShell fsShell = new FsShell(rawConf);
+    final String account =
+        rawConf.get(FS_AZURE_ABFS_ACCOUNT_NAME, null);
+
+    String nonExistentContainer = "nonexistent-" + UUID.randomUUID();
+
+    int result = fsShell.run(new String[] { "-mkdir",
+        ABFS_SCHEME + "://" + nonExistentContainer + "@" + account + "/" });
+
+    assertEquals(1, result);
+  }
+}
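
The underlying behavior: Path.getParent() returns null exactly when the path
is a root, and the old Mkdir passed that null straight on, which surfaced as
an NPE against a nonexistent ABFS container. A two-line demonstration using
only org.apache.hadoop.fs.Path:

    import org.apache.hadoop.fs.Path;

    public class ParentOfRoot {
      public static void main(String[] args) {
        // null for a root path: the case the patched Mkdir now reports as a
        // PathNotFoundException instead of an NPE.
        System.out.println(new Path("abfs://container@account/").getParent());
        // prints /a for a non-root path
        System.out.println(new Path("/a/b").getParent());
      }
    }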



[hadoop] 01/03: HDDS-1577. Add default pipeline placement policy implementation. (#1366)

2019-09-23 Thread sammichen
This is an automated email from the ASF dual-hosted git repository.

sammichen pushed a commit to branch HDDS-1564
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 7f76469d50ebd7555a353514e3826198d14414a2
Author: Li Cheng 
AuthorDate: Thu Sep 5 11:51:40 2019 +0800

HDDS-1577. Add default pipeline placement policy implementation. (#1366)



(cherry picked from commit b640a5f6d53830aee4b9c2a7d17bf57c987962cd)
---
 .../org/apache/hadoop/hdds/scm/ScmConfigKeys.java  |   5 +
 .../common/src/main/resources/ozone-default.xml|   7 +
 .../apache/hadoop/hdds/scm/node/NodeManager.java   |  14 +
 .../hadoop/hdds/scm/node/NodeStateManager.java |   9 +
 .../hadoop/hdds/scm/node/SCMNodeManager.java   |  19 +-
 .../hdds/scm/node/states/Node2ObjectsMap.java  |   4 +-
 .../hdds/scm/node/states/Node2PipelineMap.java |  12 +-
 .../hdds/scm/pipeline/PipelinePlacementPolicy.java | 338 +
 .../hadoop/hdds/scm/container/MockNodeManager.java |  36 ++-
 .../scm/pipeline/TestPipelinePlacementPolicy.java  | 197 
 .../testutils/ReplicationNodeManagerMock.java  |  16 +
 11 files changed, 653 insertions(+), 4 deletions(-)

diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
index f00ecb2..ad7073e 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@@ -313,6 +313,11 @@ public final class ScmConfigKeys {
   public static final String OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT =
       "ozone.scm.pipeline.owner.container.count";
   public static final int OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT_DEFAULT = 3;
+  // Pipeline placement policy:
+  // the max number of pipelines a single datanode can be engaged in.
+  public static final String OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT =
+      "ozone.scm.datanode.max.pipeline.engagement";
+  public static final int OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT = 5;
 
   public static final String
       OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY =
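
For readers tracing how the new key is consumed: a minimal usage sketch, assuming any Hadoop Configuration instance named conf (the variable names are hypothetical, not from the patch):

    // Read the per-datanode pipeline engagement limit, falling back to the default of 5.
    int maxEngagement = conf.getInt(
        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
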
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index 9e4c5ea..c1e97a9 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -836,6 +836,13 @@
     </description>
   </property>
   <property>
+    <name>ozone.scm.datanode.max.pipeline.engagement</name>
+    <value>5</value>
+    <tag>OZONE, SCM, PIPELINE</tag>
+    <description>Max number of pipelines a datanode can be engaged in.
+    </description>
+  </property>
+  <property>
     <name>ozone.scm.container.size</name>
     <value>5GB</value>
     <tag>OZONE, PERFORMANCE, MANAGEMENT</tag>
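
If a cluster needs a different limit, the operator-facing override goes in ozone-site.xml in the usual way; a minimal sketch (the value 10 is purely illustrative):

    <property>
      <name>ozone.scm.datanode.max.pipeline.engagement</name>
      <value>10</value>
    </property>
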
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
index d8890fb..d638ee9 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
@@ -19,6 +19,7 @@ package org.apache.hadoop.hdds.scm.node;
 
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.NodeReportProto;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
 import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
@@ -118,6 +119,13 @@ public interface NodeManager extends StorageContainerNodeProtocol,
   Set<PipelineID> getPipelines(DatanodeDetails datanodeDetails);
 
   /**
+   * Get the count of pipelines a datanode is associated with.
+   * @param datanodeDetails DatanodeDetails
+   * @return The number of pipelines
+   */
+  int getPipelinesCount(DatanodeDetails datanodeDetails);
+
+  /**
    * Add pipeline information in the NodeManager.
    * @param pipeline - Pipeline to be added
    */
@@ -199,4 +207,10 @@ public interface NodeManager extends StorageContainerNodeProtocol,
    * @return the given datanode, or null if not found
    */
   DatanodeDetails getNodeByAddress(String address);
+
+  /**
+   * Get cluster map as in network topology for this node manager.
+   * @return cluster map
+   */
+  NetworkTopology getClusterNetworkTopologyMap();
 }
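
Taken together with the new config key above, these accessors give a topology-aware placement policy what it needs to skip over-subscribed nodes; a minimal illustrative helper, not the committed PipelinePlacementPolicy code (class and method names are hypothetical):

    // Illustrative only; assumes the NodeManager and DatanodeDetails types
    // imported in the diff above.
    final class EngagementCheck {
      private EngagementCheck() { }

      /** True while the datanode is below the configured pipeline limit. */
      static boolean canEngage(NodeManager nodeManager, DatanodeDetails dn,
          int maxEngagement) {
        return nodeManager.getPipelinesCount(dn) < maxEngagement;
      }
    }
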
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
index 954cb0e..9d2a9f2 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
@@ -284,6 +284,15 @@ public class NodeStateManager 

[hadoop] 03/03: HDDS-2089: Add createPipeline CLI. (#1418)

2019-09-23 Thread sammichen
This is an automated email from the ASF dual-hosted git repository.

sammichen pushed a commit to branch HDDS-1564
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 7b5a5fe74d45b536dec0997cdb8034159bbcbf79
Author: Li Cheng 
AuthorDate: Fri Sep 13 07:01:16 2019 +0800

HDDS-2089: Add createPipeline CLI. (#1418)

(cherry picked from commit 326b5acd4a63fe46821919322867f5daff30750c)
---
 .../org/apache/hadoop/ozone/audit/SCMAction.java   |  1 +
 ...inerLocationProtocolServerSideTranslatorPB.java | 10 ++-
 .../hdds/scm/pipeline/SimplePipelineProvider.java  |  2 +-
 .../hdds/scm/server/SCMClientProtocolServer.java   |  8 +--
 .../org/apache/hadoop/hdds/scm/cli/SCMCLI.java |  2 +
 .../scm/cli/pipeline/CreatePipelineSubcommand.java | 71 ++
 6 files changed, 87 insertions(+), 7 deletions(-)

diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/SCMAction.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/SCMAction.java
index d03ad15..f8b1bbf 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/SCMAction.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/SCMAction.java
@@ -31,6 +31,7 @@ public enum SCMAction implements AuditAction {
   GET_CONTAINER,
   GET_CONTAINER_WITH_PIPELINE,
   LIST_CONTAINER,
+  CREATE_PIPELINE,
   LIST_PIPELINE,
   CLOSE_PIPELINE,
   ACTIVATE_PIPELINE,
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerLocationProtocolServerSideTranslatorPB.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerLocationProtocolServerSideTranslatorPB.java
index 99c9e8d..092aba3 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerLocationProtocolServerSideTranslatorPB.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerLocationProtocolServerSideTranslatorPB.java
@@ -242,8 +242,14 @@ public final class StorageContainerLocationProtocolServerSideTranslatorPB
   public PipelineResponseProto allocatePipeline(
       RpcController controller, PipelineRequestProto request)
       throws ServiceException {
-    // TODO : Wiring this up requires one more patch.
-    return null;
+    try (Scope scope = TracingUtil
+        .importAndCreateScope("createPipeline", request.getTraceID())) {
+      impl.createReplicationPipeline(request.getReplicationType(),
+          request.getReplicationFactor(), request.getNodePool());
+      return PipelineResponseProto.newBuilder().build();
+    } catch (IOException e) {
+      throw new ServiceException(e);
+    }
   }
 
   @Override
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SimplePipelineProvider.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SimplePipelineProvider.java
index ab98dfa..54e2141 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SimplePipelineProvider.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SimplePipelineProvider.java
@@ -48,7 +48,7 @@ public class SimplePipelineProvider implements PipelineProvider {
       String e = String
           .format("Cannot create pipeline of factor %d using %d nodes.",
               factor.getNumber(), dns.size());
-      throw new IOException(e);
+      throw new InsufficientDatanodesException(e);
     }
 
     Collections.shuffle(dns);
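
With the narrower exception type, callers can treat a temporary shortage of healthy datanodes differently from a real I/O failure; an illustrative sketch, assuming a PipelineProvider named provider, a ReplicationFactor named factor, and a hypothetical scheduleRetry() helper:

    try {
      Pipeline pipeline = provider.create(factor);
    } catch (InsufficientDatanodesException e) {
      // The cluster is healthy but momentarily too small for this factor;
      // back off and retry later instead of failing hard.
      scheduleRetry();
    }
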
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
index 7d9cb3e..7708bed 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
@@ -390,10 +390,10 @@ public class SCMClientProtocolServer implements
   public Pipeline createReplicationPipeline(HddsProtos.ReplicationType type,
       HddsProtos.ReplicationFactor factor, HddsProtos.NodePool nodePool)
       throws IOException {
-    // TODO: will be addressed in future patch.
-    // This is needed only for debugging purposes to make sure cluster is
-    // working correctly.
-    return null;
+    Pipeline result = scm.getPipelineManager().createPipeline(type, factor);
+    AUDIT.logWriteSuccess(
+        buildAuditMessageForSuccess(SCMAction.CREATE_PIPELINE, null));
+    return result;
   }
 
   @Override
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java
index 1b95418..1246fae 100644
--- a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java

[hadoop] 02/03: HDDS-1571. Create an interface for pipeline placement policy to support network topologies. (#1395)

2019-09-23 Thread sammichen
This is an automated email from the ASF dual-hosted git repository.

sammichen pushed a commit to branch HDDS-1564
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 0001e1df5f8f55d1b41160dbc1a61ccc48421b06
Author: Li Cheng 
AuthorDate: Tue Sep 10 20:15:51 2019 +0800

HDDS-1571. Create an interface for pipeline placement policy to support network topologies. (#1395)

(cherry picked from commit 753fc6703a39154ed6013e44dbae572391748906)
---
 ...erPlacementPolicy.java => PlacementPolicy.java} | 12 +++
 .../placement/algorithms/package-info.java | 21 ---
 .../common/src/main/resources/ozone-default.xml|  6 ++--
 ...onPolicy.java => SCMCommonPlacementPolicy.java} | 23 ++--
 .../hdds/scm/container/ReplicationManager.java | 13 +++
 .../ContainerPlacementPolicyFactory.java   | 18 +-
 .../algorithms/SCMContainerPlacementCapacity.java  |  4 ++-
 .../algorithms/SCMContainerPlacementRackAware.java |  8 +++--
 .../algorithms/SCMContainerPlacementRandom.java|  6 ++--
 .../hdds/scm/pipeline/PipelinePlacementPolicy.java | 42 +-
 .../hdds/scm/pipeline/RatisPipelineProvider.java   | 14 
 .../hdds/scm/server/StorageContainerManager.java   |  4 +--
 .../hdds/scm/container/TestReplicationManager.java |  7 ++--
 .../algorithms/TestContainerPlacementFactory.java  |  7 ++--
 .../hdds/scm/node/TestContainerPlacement.java  |  5 ++-
 .../hdds/scm/safemode/TestSafeModeHandler.java |  5 ++-
 .../hadoop/ozone/TestContainerOperations.java  |  4 +--
 .../TestContainerStateMachineIdempotency.java  |  5 ++-
 .../hadoop/ozone/dn/scrubber/TestDataScrubber.java |  4 +--
 .../hadoop/ozone/scm/TestContainerSmallFile.java   |  4 +--
 .../scm/TestGetCommittedBlockLengthAndPutKey.java  |  5 ++-
 21 files changed, 105 insertions(+), 112 deletions(-)

diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/ContainerPlacementPolicy.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/PlacementPolicy.java
similarity index 80%
rename from hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/ContainerPlacementPolicy.java
rename to hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/PlacementPolicy.java
index 52ce796..f6a0e8b 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/ContainerPlacementPolicy.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/PlacementPolicy.java
@@ -15,7 +15,7 @@
  * the License.
  */
 
-package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+package org.apache.hadoop.hdds.scm;
 
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 
@@ -23,14 +23,14 @@ import java.io.IOException;
 import java.util.List;
 
 /**
- * A ContainerPlacementPolicy supports choosing datanodes to build replication
- * pipelines with specified constraints.
+ * A PlacementPolicy supports choosing datanodes to build
+ * pipelines or containers with specified constraints.
  */
-public interface ContainerPlacementPolicy {
+public interface PlacementPolicy {
 
   /**
-   * Given the replication factor and size required, return a set of datanodes
-   * that satisfies the nodes and size requirement.
+   * Given an initial set of datanodes and the size required,
+   * return a set of datanodes that satisfies the nodes and size requirement.
    *
    * @param excludedNodes - list of nodes to be excluded.
    * @param favoredNodes - list of nodes preferred.
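
After the rename, container and pipeline placement code program against the same interface. A minimal illustrative implementation follows; the trailing node-count and size parameters of chooseDatanodes are inferred from the javadoc above (the hunk is truncated before the signature), so the exact signature is an assumption, and java.util imports are elided:

    // Hypothetical trivial policy: hand back the first nodesRequired datanodes
    // that are not excluded; favoredNodes and sizeRequired are ignored.
    public class FirstFitPlacementPolicy implements PlacementPolicy {
      private final List<DatanodeDetails> healthyNodes;

      public FirstFitPlacementPolicy(List<DatanodeDetails> healthyNodes) {
        this.healthyNodes = healthyNodes;
      }

      @Override
      public List<DatanodeDetails> chooseDatanodes(
          List<DatanodeDetails> excludedNodes,
          List<DatanodeDetails> favoredNodes,
          int nodesRequired, long sizeRequired) throws IOException {
        List<DatanodeDetails> picked = new ArrayList<>(nodesRequired);
        for (DatanodeDetails dn : healthyNodes) {
          if (excludedNodes == null || !excludedNodes.contains(dn)) {
            picked.add(dn);
            if (picked.size() == nodesRequired) {
              return picked;
            }
          }
        }
        throw new IOException("Not enough datanodes: need " + nodesRequired);
      }
    }
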
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/package-info.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/package-info.java
deleted file mode 100644
index dac4752..000
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/package-info.java
+++ /dev/null
@@ -1,21 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hdds.scm.container.placement.algorithms;
-/**
- Contains container placement policy interface definition.
- **/
\ No newline at end of file

[hadoop] branch trunk updated (4c0a7a9 -> 07c81e9)

2019-09-23 Thread vinayakumarb
This is an automated email from the ASF dual-hosted git repository.

vinayakumarb pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 4c0a7a9  Make upstream aware of 3.2.1 release.
 add 07c81e9  HADOOP-16558. [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes (#1494). Contributed by Vinayakumar B.

No new revisions were added by this update.

Summary of changes:
 hadoop-common-project/hadoop-common/pom.xml| 66 +-
 .../hadoop-common/src/main/proto/FSProtos.proto|  2 +-
 .../src/main/proto/GenericRefreshProtocol.proto|  2 +-
 .../src/main/proto/GetUserMappingsProtocol.proto   |  2 +-
 .../src/main/proto/HAServiceProtocol.proto |  2 +-
 .../src/main/proto/IpcConnectionContext.proto  |  2 +-
 .../src/main/proto/ProtobufRpcEngine.proto |  2 +-
 .../src/main/proto/ProtocolInfo.proto  |  2 +-
 .../proto/RefreshAuthorizationPolicyProtocol.proto |  2 +-
 .../src/main/proto/RefreshCallQueueProtocol.proto  |  2 +-
 .../main/proto/RefreshUserMappingsProtocol.proto   |  2 +-
 .../hadoop-common/src/main/proto/RpcHeader.proto   |  2 +-
 .../hadoop-common/src/main/proto/Security.proto|  2 +-
 .../hadoop-common/src/main/proto/TraceAdmin.proto  |  2 +-
 .../src/main/proto/ZKFCProtocol.proto  |  2 +-
 .../hadoop-common/src/test/proto/test.proto|  2 +-
 .../src/test/proto/test_rpc_service.proto  |  1 +
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml | 36 +++-
 .../src/main/proto/ClientDatanodeProtocol.proto|  2 +-
 .../src/main/proto/ClientNamenodeProtocol.proto|  2 +-
 .../src/main/proto/ReconfigurationProtocol.proto   |  2 +-
 .../hadoop-hdfs-client/src/main/proto/acl.proto|  2 +-
 .../src/main/proto/datatransfer.proto  |  2 +-
 .../src/main/proto/encryption.proto|  2 +-
 .../src/main/proto/erasurecoding.proto |  2 +-
 .../hadoop-hdfs-client/src/main/proto/hdfs.proto   |  2 +-
 .../src/main/proto/inotify.proto   |  2 +-
 .../hadoop-hdfs-client/src/main/proto/xattr.proto  |  2 +-
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml| 32 ---
 .../src/main/proto/FederationProtocol.proto|  2 +-
 .../src/main/proto/RouterProtocol.proto|  2 +-
 hadoop-hdfs-project/hadoop-hdfs/pom.xml| 48 ++--
 .../src/main/proto/AliasMapProtocol.proto  |  2 +-
 .../src/main/proto/DatanodeLifelineProtocol.proto  |  2 +-
 .../src/main/proto/DatanodeProtocol.proto  |  2 +-
 .../hadoop-hdfs/src/main/proto/HAZKInfo.proto  |  2 +-
 .../hadoop-hdfs/src/main/proto/HdfsServer.proto|  2 +-
 .../src/main/proto/InterDatanodeProtocol.proto |  2 +-
 .../src/main/proto/InterQJournalProtocol.proto |  2 +-
 .../src/main/proto/JournalProtocol.proto   |  2 +-
 .../src/main/proto/NamenodeProtocol.proto  |  2 +-
 .../src/main/proto/QJournalProtocol.proto  |  2 +-
 .../hadoop-hdfs/src/main/proto/editlog.proto   |  2 +-
 .../hadoop-hdfs/src/main/proto/fsimage.proto   |  2 +-
 hadoop-project/pom.xml | 49 +++-
 45 files changed, 141 insertions(+), 169 deletions(-)
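
For readers unfamiliar with the plugin being adopted across these modules: the point is to have Maven invoke protoc directly instead of a hand-maintained exec step. A minimal, illustrative pom.xml fragment; the version property, the os.detected.classifier (which needs the os-maven-plugin extension), and the exact wiring are assumptions, not the precise Hadoop configuration:

    <plugin>
      <groupId>org.xolstice.maven.plugins</groupId>
      <artifactId>protobuf-maven-plugin</artifactId>
      <version>${protobuf-maven-plugin.version}</version>
      <configuration>
        <!-- Resolve a protoc binary matching the build platform. -->
        <protocArtifact>
          com.google.protobuf:protoc:${protobuf.version}:exe:${os.detected.classifier}
        </protocArtifact>
      </configuration>
      <executions>
        <execution>
          <goals>
            <!-- Generates Java sources from src/main/proto at build time. -->
            <goal>compile</goal>
          </goals>
        </execution>
      </executions>
    </plugin>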


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.2 updated: Make upstream aware of 3.2.1 release.

2019-09-23 Thread rohithsharmaks
This is an automated email from the ASF dual-hosted git repository.

rohithsharmaks pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new f9f0338  Make upstream aware of 3.2.1 release.
f9f0338 is described below

commit f9f0338104e81f1f67350083557dbccc9bd1bd80
Author: Rohith Sharma K S 
AuthorDate: Mon Sep 23 06:20:54 2019 +

Make upstream aware of 3.2.1 release.
---
 .../site/markdown/release/3.2.1/CHANGELOG.3.2.1.md | 553 +
 .../markdown/release/3.2.1/RELEASENOTES.3.2.1.md   |  80 +++
 .../dev-support/jdiff/Apache_Hadoop_HDFS_3.2.1.xml | 674 +
 hadoop-project-dist/pom.xml|   2 +-
 4 files changed, 1308 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.1/CHANGELOG.3.2.1.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.1/CHANGELOG.3.2.1.md
new file mode 100644
index 000..64e249e
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.1/CHANGELOG.3.2.1.md
@@ -0,0 +1,553 @@
+
+
+# Apache Hadoop Changelog
+
+## Release 3.2.1 - 2019-09-10
+
+### INCOMPATIBLE CHANGES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-15922](https://issues.apache.org/jira/browse/HADOOP-15922) | DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL |  Major | common, kms | He Xiaoqiao | He Xiaoqiao |
+
+### NEW FEATURES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-15950](https://issues.apache.org/jira/browse/HADOOP-15950) | Failover for LdapGroupsMapping |  Major | common, security | Lukas Majercak | Lukas Majercak |
+| [YARN-7055](https://issues.apache.org/jira/browse/YARN-7055) | YARN Timeline Service v.2: beta 1 / GA |  Major | timelineclient, timelinereader, timelineserver | Vrushali C |  |
+| [YARN-9761](https://issues.apache.org/jira/browse/YARN-9761) | Allow overriding application submissions based on server side configs |  Major | . | Jonathan Hung | pralabhkumar |
+
+
+### IMPROVEMENTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-15676](https://issues.apache.org/jira/browse/HADOOP-15676) | Cleanup TestSSLHttpServer |  Minor | common | Szilard Nemeth | Szilard Nemeth |
+| [YARN-8896](https://issues.apache.org/jira/browse/YARN-8896) | Limit the maximum number of container assignments per heartbeat |  Major | . | Weiwei Yang | Zhankun Tang |
+| [YARN-8618](https://issues.apache.org/jira/browse/YARN-8618) | Yarn Service: When all the components of a service have restart policy NEVER then initiation of service upgrade should fail |  Major | . | Chandni Singh | Chandni Singh |
+| [HADOOP-15804](https://issues.apache.org/jira/browse/HADOOP-15804) | upgrade to commons-compress 1.18 |  Major | . | PJ Fanning | Akira Ajisaka |
+| [YARN-8916](https://issues.apache.org/jira/browse/YARN-8916) | Define a constant "docker" string in "ContainerRuntimeConstants.java" for better maintainability |  Minor | . | Zhankun Tang | Zhankun Tang |
+| [YARN-8908](https://issues.apache.org/jira/browse/YARN-8908) | Fix errors in yarn-default.xml related to GPU/FPGA |  Major | . | Zhankun Tang | Zhankun Tang |
+| [HDFS-13941](https://issues.apache.org/jira/browse/HDFS-13941) | make storageId in BlockPoolTokenSecretManager.checkAccess optional |  Major | . | Ajay Kumar | Ajay Kumar |
+| [HDFS-14029](https://issues.apache.org/jira/browse/HDFS-14029) | Sleep in TestLazyPersistFiles should be put into a loop |  Trivial | hdfs | Adam Antal | Adam Antal |
+| [YARN-8915](https://issues.apache.org/jira/browse/YARN-8915) | Update the doc about the default value of "maximum-container-assignments" for capacity scheduler |  Minor | . | Zhankun Tang | Zhankun Tang |
+| [HADOOP-15855](https://issues.apache.org/jira/browse/HADOOP-15855) | Review hadoop credential doc, including object store details |  Minor | documentation, security | Steve Loughran | Steve Loughran |
+| [YARN-7225](https://issues.apache.org/jira/browse/YARN-7225) | Add queue and partition info to RM audit log |  Major | resourcemanager | Jonathan Hung | Eric Payne |
+| [HADOOP-15687](https://issues.apache.org/jira/browse/HADOOP-15687) | Credentials class should allow access to aliases |  Trivial | . | Lars Francke | Lars Francke |
+| [YARN-8969](https://issues.apache.org/jira/browse/YARN-8969) | AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting |  Major | . | Wanqiang Ji | Wanqiang Ji |
+| [YARN-8977](https://issues.apache.org/jira/browse/YARN-8977) | Remove unnecessary type casting when calling AbstractYarnScheduler#getSchedulerNode |  Trivial | . | 

[hadoop] branch trunk updated: Make upstream aware of 3.2.1 release.

2019-09-23 Thread rohithsharmaks
This is an automated email from the ASF dual-hosted git repository.

rohithsharmaks pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4c0a7a9  Make upstream aware of 3.2.1 release.
4c0a7a9 is described below

commit 4c0a7a9e13c68b24c94009927f9fd3987abd8144
Author: Rohith Sharma K S 
AuthorDate: Mon Sep 23 06:20:54 2019 +

Make upstream aware of 3.2.1 release.
---
 .../site/markdown/release/3.2.1/CHANGELOG.3.2.1.md | 553 +
 .../markdown/release/3.2.1/RELEASENOTES.3.2.1.md   |  80 +++
 .../dev-support/jdiff/Apache_Hadoop_HDFS_3.2.1.xml | 674 +
 hadoop-project-dist/pom.xml|   2 +-
 4 files changed, 1308 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.1/CHANGELOG.3.2.1.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.1/CHANGELOG.3.2.1.md
new file mode 100644
index 000..64e249e
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.1/CHANGELOG.3.2.1.md
@@ -0,0 +1,553 @@
+
+
+# Apache Hadoop Changelog
+
+## Release 3.2.1 - 2019-09-10
+
+### INCOMPATIBLE CHANGES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-15922](https://issues.apache.org/jira/browse/HADOOP-15922) | DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL |  Major | common, kms | He Xiaoqiao | He Xiaoqiao |
+
+
+### NEW FEATURES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-15950](https://issues.apache.org/jira/browse/HADOOP-15950) | Failover for LdapGroupsMapping |  Major | common, security | Lukas Majercak | Lukas Majercak |
+| [YARN-7055](https://issues.apache.org/jira/browse/YARN-7055) | YARN Timeline Service v.2: beta 1 / GA |  Major | timelineclient, timelinereader, timelineserver | Vrushali C |  |
+| [YARN-9761](https://issues.apache.org/jira/browse/YARN-9761) | Allow overriding application submissions based on server side configs |  Major | . | Jonathan Hung | pralabhkumar |
+
+
+### IMPROVEMENTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-15676](https://issues.apache.org/jira/browse/HADOOP-15676) | Cleanup TestSSLHttpServer |  Minor | common | Szilard Nemeth | Szilard Nemeth |
+| [YARN-8896](https://issues.apache.org/jira/browse/YARN-8896) | Limit the maximum number of container assignments per heartbeat |  Major | . | Weiwei Yang | Zhankun Tang |
+| [YARN-8618](https://issues.apache.org/jira/browse/YARN-8618) | Yarn Service: When all the components of a service have restart policy NEVER then initiation of service upgrade should fail |  Major | . | Chandni Singh | Chandni Singh |
+| [HADOOP-15804](https://issues.apache.org/jira/browse/HADOOP-15804) | upgrade to commons-compress 1.18 |  Major | . | PJ Fanning | Akira Ajisaka |
+| [YARN-8916](https://issues.apache.org/jira/browse/YARN-8916) | Define a constant "docker" string in "ContainerRuntimeConstants.java" for better maintainability |  Minor | . | Zhankun Tang | Zhankun Tang |
+| [YARN-8908](https://issues.apache.org/jira/browse/YARN-8908) | Fix errors in yarn-default.xml related to GPU/FPGA |  Major | . | Zhankun Tang | Zhankun Tang |
+| [HDFS-13941](https://issues.apache.org/jira/browse/HDFS-13941) | make storageId in BlockPoolTokenSecretManager.checkAccess optional |  Major | . | Ajay Kumar | Ajay Kumar |
+| [HDFS-14029](https://issues.apache.org/jira/browse/HDFS-14029) | Sleep in TestLazyPersistFiles should be put into a loop |  Trivial | hdfs | Adam Antal | Adam Antal |
+| [YARN-8915](https://issues.apache.org/jira/browse/YARN-8915) | Update the doc about the default value of "maximum-container-assignments" for capacity scheduler |  Minor | . | Zhankun Tang | Zhankun Tang |
+| [HADOOP-15855](https://issues.apache.org/jira/browse/HADOOP-15855) | Review hadoop credential doc, including object store details |  Minor | documentation, security | Steve Loughran | Steve Loughran |
+| [YARN-7225](https://issues.apache.org/jira/browse/YARN-7225) | Add queue and partition info to RM audit log |  Major | resourcemanager | Jonathan Hung | Eric Payne |
+| [HADOOP-15687](https://issues.apache.org/jira/browse/HADOOP-15687) | Credentials class should allow access to aliases |  Trivial | . | Lars Francke | Lars Francke |
+| [YARN-8969](https://issues.apache.org/jira/browse/YARN-8969) | AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting |  Major | . | Wanqiang Ji | Wanqiang Ji |
+| [YARN-8977](https://issues.apache.org/jira/browse/YARN-8977) | Remove unnecessary type casting when calling AbstractYarnScheduler#getSchedulerNode |  Trivial | . | Wanqiang Ji