[hadoop] branch branch-3.2 updated: HDFS-16207. Remove NN logs stack trace for non-existent xattr query (#3375)

2021-09-08 Thread cnauroth
This is an automated email from the ASF dual-hosted git repository.

cnauroth pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 1944e0d  HDFS-16207. Remove NN logs stack trace for non-existent xattr query (#3375)
1944e0d is described below

commit 1944e0d714dfa2d9aaad003aecc7fafdb352ed49
Author: Ahmed Hussein <50450311+amahuss...@users.noreply.github.com>
AuthorDate: Wed Sep 8 23:21:16 2021 -0500

HDFS-16207. Remove NN logs stack trace for non-existent xattr query (#3375)

Change-Id: Ibde523b20a6b8ac92991da52583e625a018d2ee6
---
 .../hdfs/protocol/XAttrNotFoundException.java  | 40 ++
 .../hadoop/hdfs/server/namenode/FSDirXAttrOp.java  |  7 ++--
 .../hdfs/server/namenode/NameNodeRpcServer.java|  4 ++-
 .../tools/offlineImageViewer/FSImageLoader.java|  4 +--
 .../java/org/apache/hadoop/hdfs/TestDFSShell.java  |  3 +-
 .../hdfs/server/namenode/FSXAttrBaseTest.java  |  5 +--
 6 files changed, 53 insertions(+), 10 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/XAttrNotFoundException.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/XAttrNotFoundException.java
new file mode 100644
index 000..d958491
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/XAttrNotFoundException.java
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocol;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * The exception that happens when you ask to get a non existing XAttr.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class XAttrNotFoundException extends IOException {
+  private static final long serialVersionUID = -6506239904158794057L;
+  public static final String DEFAULT_EXCEPTION_MSG =
+  "At least one of the attributes provided was not found.";
+  public XAttrNotFoundException() {
+this(DEFAULT_EXCEPTION_MSG);
+  }
+  public XAttrNotFoundException(String msg) {
+super(msg);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
index ff82610..88abec0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
@@ -29,6 +29,7 @@ import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.XAttrHelper;
+import org.apache.hadoop.hdfs.protocol.XAttrNotFoundException;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ReencryptionInfoProto;
 import org.apache.hadoop.hdfs.protocolPB.PBHelperClient;
@@ -114,8 +115,7 @@ class FSDirXAttrOp {
   return filteredAll;
 }
 if (filteredAll == null || filteredAll.isEmpty()) {
-  throw new IOException(
-  "At least one of the attributes provided was not found.");
+  throw new XAttrNotFoundException();
 }
 List<XAttr> toGet = Lists.newArrayListWithCapacity(xAttrs.size());
 for (XAttr xAttr : xAttrs) {
@@ -129,8 +129,7 @@ class FSDirXAttrOp {
 }
   }
   if (!foundIt) {
-throw new IOException(
-"At least one of the attributes provided was not found.");
+throw new XAttrNotFoundException();
   }
 }
 return toGet;
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
index dd21817..9ab7901 100644
--- 
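
The NameNodeRpcServer hunk above is cut off. The mechanism this change relies on is Hadoop's terse-exception list: the IPC server logs only the message, not the stack trace, for registered exception classes. A minimal sketch of that registration, assuming (the truncated diff does not show it) the commit wires the new exception in roughly this way; the surrounding class and method names are illustrative:

// Illustrative sketch only: registering XAttrNotFoundException as "terse"
// means a getXAttrs() call for a missing attribute no longer dumps a stack
// trace into the NameNode log, while the client still receives the full
// exception over RPC. addTerseExceptions is an existing
// org.apache.hadoop.ipc.Server API.
import org.apache.hadoop.hdfs.protocol.XAttrNotFoundException;
import org.apache.hadoop.ipc.RPC;

class TerseXAttrExceptionSketch {
  static void register(RPC.Server clientRpcServer) {
    clientRpcServer.addTerseExceptions(XAttrNotFoundException.class);
  }
}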

[hadoop] branch branch-3.3 updated: HDFS-16207. Remove NN logs stack trace for non-existent xattr query (#3375)

2021-09-08 Thread cnauroth
This is an automated email from the ASF dual-hosted git repository.

cnauroth pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 1f61944  HDFS-16207. Remove NN logs stack trace for non-existent xattr query (#3375)
1f61944 is described below

commit 1f61944e3be1f8efe273d44e77ed3f4126b08b95
Author: Ahmed Hussein <50450311+amahuss...@users.noreply.github.com>
AuthorDate: Wed Sep 8 23:21:16 2021 -0500

HDFS-16207. Remove NN logs stack trace for non-existent xattr query (#3375)

Change-Id: Ibde523b20a6b8ac92991da52583e625a018d2ee6
---
 .../hdfs/protocol/XAttrNotFoundException.java  | 40 ++
 .../hadoop/hdfs/server/namenode/FSDirXAttrOp.java  |  7 ++--
 .../hdfs/server/namenode/NameNodeRpcServer.java|  4 ++-
 .../tools/offlineImageViewer/FSImageLoader.java|  4 +--
 .../java/org/apache/hadoop/hdfs/TestDFSShell.java  |  3 +-
 .../hdfs/server/namenode/FSXAttrBaseTest.java  |  5 +--
 6 files changed, 53 insertions(+), 10 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/XAttrNotFoundException.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/XAttrNotFoundException.java
new file mode 100644
index 000..d958491
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/XAttrNotFoundException.java
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocol;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * The exception that happens when you ask to get a non existing XAttr.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class XAttrNotFoundException extends IOException {
+  private static final long serialVersionUID = -6506239904158794057L;
+  public static final String DEFAULT_EXCEPTION_MSG =
+  "At least one of the attributes provided was not found.";
+  public XAttrNotFoundException() {
+this(DEFAULT_EXCEPTION_MSG);
+  }
+  public XAttrNotFoundException(String msg) {
+super(msg);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
index ce78b5b..7f16910 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
@@ -29,6 +29,7 @@ import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.XAttrHelper;
+import org.apache.hadoop.hdfs.protocol.XAttrNotFoundException;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ReencryptionInfoProto;
 import org.apache.hadoop.hdfs.protocolPB.PBHelperClient;
@@ -114,8 +115,7 @@ class FSDirXAttrOp {
   return filteredAll;
 }
 if (filteredAll == null || filteredAll.isEmpty()) {
-  throw new IOException(
-  "At least one of the attributes provided was not found.");
+  throw new XAttrNotFoundException();
 }
 List<XAttr> toGet = Lists.newArrayListWithCapacity(xAttrs.size());
 for (XAttr xAttr : xAttrs) {
@@ -129,8 +129,7 @@ class FSDirXAttrOp {
 }
   }
   if (!foundIt) {
-throw new IOException(
-"At least one of the attributes provided was not found.");
+throw new XAttrNotFoundException();
   }
 }
 return toGet;
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
index 3adfa4df..6c196b0 100644
--- 

[hadoop] branch trunk updated (bddc9bf -> e708836)

2021-09-08 Thread sunchao
This is an automated email from the ASF dual-hosted git repository.

sunchao pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from bddc9bf  HDFS-16207. Remove NN logs stack trace for non-existent xattr query (#3375)
 add e708836  HADOOP-17887. Remove the wrapper class GzipOutputStream (#3377)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/io/compress/GzipCodec.java   | 92 --
 .../io/compress/zlib/BuiltInGzipCompressor.java| 11 +--
 .../org/apache/hadoop/io/compress/TestCodec.java   | 42 ++
 3 files changed, 43 insertions(+), 102 deletions(-)




[hadoop] branch trunk updated: HDFS-16207. Remove NN logs stack trace for non-existent xattr query (#3375)

2021-09-08 Thread cnauroth
This is an automated email from the ASF dual-hosted git repository.

cnauroth pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new bddc9bf  HDFS-16207. Remove NN logs stack trace for non-existent xattr query (#3375)
bddc9bf is described below

commit bddc9bf63c3adb3d7445547bd1f8272e53b40bf7
Author: Ahmed Hussein <50450311+amahuss...@users.noreply.github.com>
AuthorDate: Wed Sep 8 23:21:16 2021 -0500

HDFS-16207. Remove NN logs stack trace for non-existent xattr query (#3375)
---
 .../hdfs/protocol/XAttrNotFoundException.java  | 40 ++
 .../hadoop/hdfs/server/namenode/FSDirXAttrOp.java  |  7 ++--
 .../hdfs/server/namenode/NameNodeRpcServer.java|  4 ++-
 .../tools/offlineImageViewer/FSImageLoader.java|  4 +--
 .../java/org/apache/hadoop/hdfs/TestDFSShell.java  |  3 +-
 .../hdfs/server/namenode/FSXAttrBaseTest.java  |  5 +--
 6 files changed, 53 insertions(+), 10 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/XAttrNotFoundException.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/XAttrNotFoundException.java
new file mode 100644
index 000..d958491
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/XAttrNotFoundException.java
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocol;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * The exception that happens when you ask to get a non existing XAttr.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class XAttrNotFoundException extends IOException {
+  private static final long serialVersionUID = -6506239904158794057L;
+  public static final String DEFAULT_EXCEPTION_MSG =
+  "At least one of the attributes provided was not found.";
+  public XAttrNotFoundException() {
+this(DEFAULT_EXCEPTION_MSG);
+  }
+  public XAttrNotFoundException(String msg) {
+super(msg);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
index 96dfdf9..632cff9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
@@ -26,6 +26,7 @@ import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.XAttrHelper;
+import org.apache.hadoop.hdfs.protocol.XAttrNotFoundException;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ReencryptionInfoProto;
 import org.apache.hadoop.hdfs.protocolPB.PBHelperClient;
@@ -116,8 +117,7 @@ public class FSDirXAttrOp {
   return filteredAll;
 }
 if (filteredAll == null || filteredAll.isEmpty()) {
-  throw new IOException(
-  "At least one of the attributes provided was not found.");
+  throw new XAttrNotFoundException();
 }
 List<XAttr> toGet = Lists.newArrayListWithCapacity(xAttrs.size());
 for (XAttr xAttr : xAttrs) {
@@ -131,8 +131,7 @@ public class FSDirXAttrOp {
 }
   }
   if (!foundIt) {
-throw new IOException(
-"At least one of the attributes provided was not found.");
+throw new XAttrNotFoundException();
   }
 }
 return toGet;
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
index 580a991..e29fa67 100644
--- 

[hadoop] branch trunk updated: HDFS-16210. RBF: Add the option of refreshCallQueue to RouterAdmin (#3379)

2021-09-08 Thread ferhui
This is an automated email from the ASF dual-hosted git repository.

ferhui pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c0890e6  HDFS-16210. RBF: Add the option of refreshCallQueue to RouterAdmin (#3379)
c0890e6 is described below

commit c0890e6d04dda6f2716d07427816721fbdf9c3b4
Author: Symious 
AuthorDate: Thu Sep 9 09:57:27 2021 +0800

HDFS-16210. RBF: Add the option of refreshCallQueue to RouterAdmin (#3379)
---
 .../federation/router/RouterAdminServer.java   | 22 -
 .../hadoop/hdfs/tools/federation/RouterAdmin.java  | 37 ++
 .../federation/router/TestRouterAdminCLI.java  |  9 ++
 3 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
index b44142f..d2b20bc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
@@ -81,11 +81,15 @@ import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.ipc.ProtobufRpcEngine2;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.ipc.RPC.Server;
+import org.apache.hadoop.ipc.RefreshCallQueueProtocol;
 import org.apache.hadoop.ipc.RefreshRegistry;
 import org.apache.hadoop.ipc.RefreshResponse;
 import org.apache.hadoop.ipc.proto.GenericRefreshProtocolProtos;
+import org.apache.hadoop.ipc.proto.RefreshCallQueueProtocolProtos;
 import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolPB;
 import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolServerSideTranslatorPB;
+import org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolPB;
+import org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolServerSideTranslatorPB;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.ProxyUsers;
@@ -102,7 +106,7 @@ import org.apache.hadoop.thirdparty.protobuf.BlockingService;
  * router. It is created, started, and stopped by {@link Router}.
  */
 public class RouterAdminServer extends AbstractService
-implements RouterAdminProtocol {
+implements RouterAdminProtocol, RefreshCallQueueProtocol {
 
   private static final Logger LOG =
   LoggerFactory.getLogger(RouterAdminServer.class);
@@ -197,8 +201,16 @@ public class RouterAdminServer extends AbstractService
 GenericRefreshProtocolProtos.GenericRefreshProtocolService.
 newReflectiveBlockingService(genericRefreshXlator);
 
+RefreshCallQueueProtocolServerSideTranslatorPB refreshCallQueueXlator =
+new RefreshCallQueueProtocolServerSideTranslatorPB(this);
+BlockingService refreshCallQueueService =
+RefreshCallQueueProtocolProtos.RefreshCallQueueProtocolService.
+newReflectiveBlockingService(refreshCallQueueXlator);
+
 DFSUtil.addPBProtocol(conf, GenericRefreshProtocolPB.class,
 genericRefreshService, adminServer);
+DFSUtil.addPBProtocol(conf, RefreshCallQueueProtocolPB.class,
+refreshCallQueueService, adminServer);
   }
 
   /**
@@ -764,4 +776,12 @@ public class RouterAdminServer extends AbstractService
 ProxyUsers.refreshSuperUserGroupsConfiguration();
 return true;
   }
+
+  @Override // RefreshCallQueueProtocol
+  public void refreshCallQueue() throws IOException {
+LOG.info("Refreshing call queue.");
+
+Configuration configuration = new Configuration();
+router.getRpcServer().getServer().refreshCallQueue(configuration);
+  }
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index 7422989..deadf3d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -77,6 +77,8 @@ import org.apache.hadoop.ipc.RefreshResponse;
 import org.apache.hadoop.ipc.RemoteException;
 import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolClientSideTranslatorPB;
 import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolPB;
+import org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolPB;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.UserGroupInformation;
 import 
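
The RouterAdmin.java hunk above is cut off. On the server side the new option ends up in RouterAdminServer#refreshCallQueue (earlier in this diff), which delegates to the IPC server. A self-contained sketch of that delegation; the helper class below is illustrative, not part of the commit:

// Illustrative sketch: any org.apache.hadoop.ipc.Server can rebuild its
// call queue from freshly loaded configuration, which is what
// RouterAdminServer#refreshCallQueue does via
// router.getRpcServer().getServer().refreshCallQueue(configuration).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.Server;

final class CallQueueRefreshSketch {
  private CallQueueRefreshSketch() {
  }

  static void refresh(Server rpcServer) {
    // new Configuration() re-reads the *-site.xml resources, picking up
    // an edited call queue implementation or capacity.
    Configuration freshConf = new Configuration();
    rpcServer.refreshCallQueue(freshConf);
  }
}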

[hadoop] branch trunk updated: YARN-10829. Follow up: Adding null checks before merging ResourceUsage Report (#3252)

2021-09-08 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a186460  YARN-10829. Follow up: Adding null checks before merging ResourceUsage Report (#3252)
a186460 is described below

commit a1864600049f81fae434554003a2e7046d73ccb8
Author: Akshat Bordia <31816865+aksha...@users.noreply.github.com>
AuthorDate: Wed Sep 8 23:06:56 2021 +0530

YARN-10829. Follow up: Adding null checks before merging ResourceUsage Report (#3252)
---
 .../router/clientrm/RouterYarnClientUtils.java | 61 --
 .../router/clientrm/TestRouterYarnClientUtils.java | 42 +++
 2 files changed, 75 insertions(+), 28 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java
index 9c36f30..934636b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java
@@ -133,43 +133,48 @@ public final class RouterYarnClientUtils {
 ApplicationResourceUsageReport uamResourceReport =
 uam.getApplicationResourceUsageReport();
 
-amResourceReport.setNumUsedContainers(
-amResourceReport.getNumUsedContainers() +
-uamResourceReport.getNumUsedContainers());
+if (amResourceReport == null) {
+  am.setApplicationResourceUsageReport(uamResourceReport);
+} else if (uamResourceReport != null) {
 
-amResourceReport.setNumReservedContainers(
-amResourceReport.getNumReservedContainers() +
-uamResourceReport.getNumReservedContainers());
+  amResourceReport.setNumUsedContainers(
+  amResourceReport.getNumUsedContainers() +
+  uamResourceReport.getNumUsedContainers());
 
-amResourceReport.setUsedResources(Resources.add(
-amResourceReport.getUsedResources(),
-uamResourceReport.getUsedResources()));
+  amResourceReport.setNumReservedContainers(
+  amResourceReport.getNumReservedContainers() +
+  uamResourceReport.getNumReservedContainers());
 
-amResourceReport.setReservedResources(Resources.add(
-amResourceReport.getReservedResources(),
-uamResourceReport.getReservedResources()));
+  amResourceReport.setUsedResources(Resources.add(
+  amResourceReport.getUsedResources(),
+  uamResourceReport.getUsedResources()));
 
-amResourceReport.setNeededResources(Resources.add(
-amResourceReport.getNeededResources(),
-uamResourceReport.getNeededResources()));
+  amResourceReport.setReservedResources(Resources.add(
+  amResourceReport.getReservedResources(),
+  uamResourceReport.getReservedResources()));
 
-amResourceReport.setMemorySeconds(
-amResourceReport.getMemorySeconds() +
-uamResourceReport.getMemorySeconds());
+  amResourceReport.setNeededResources(Resources.add(
+  amResourceReport.getNeededResources(),
+  uamResourceReport.getNeededResources()));
 
-amResourceReport.setVcoreSeconds(
-amResourceReport.getVcoreSeconds() +
-uamResourceReport.getVcoreSeconds());
+  amResourceReport.setMemorySeconds(
+  amResourceReport.getMemorySeconds() +
+  uamResourceReport.getMemorySeconds());
 
-amResourceReport.setQueueUsagePercentage(
-amResourceReport.getQueueUsagePercentage() +
-uamResourceReport.getQueueUsagePercentage());
+  amResourceReport.setVcoreSeconds(
+  amResourceReport.getVcoreSeconds() +
+  uamResourceReport.getVcoreSeconds());
 
-amResourceReport.setClusterUsagePercentage(
-amResourceReport.getClusterUsagePercentage() +
-uamResourceReport.getClusterUsagePercentage());
+  amResourceReport.setQueueUsagePercentage(
+  amResourceReport.getQueueUsagePercentage() +
+  uamResourceReport.getQueueUsagePercentage());
 
-am.setApplicationResourceUsageReport(amResourceReport);
+  amResourceReport.setClusterUsagePercentage(
+  amResourceReport.getClusterUsagePercentage() +
+  uamResourceReport.getClusterUsagePercentage());
+
+  am.setApplicationResourceUsageReport(amResourceReport);
+}
   }
 
   /**
diff --git 

[hadoop] branch trunk updated: YARN-10870. Missing user filtering check -> yarn.webapp.filter-entity-list-by-user for RM Scheduler page. Contributed by Gergely Pollak

2021-09-08 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2ff3fc5  YARN-10870. Missing user filtering check -> yarn.webapp.filter-entity-list-by-user for RM Scheduler page. Contributed by Gergely Pollak
2ff3fc5 is described below

commit 2ff3fc50e4a9bc60a1ca968bd495a18728084eaa
Author: Szilard Nemeth 
AuthorDate: Wed Sep 8 18:01:39 2021 +0200

YARN-10870. Missing user filtering check -> yarn.webapp.filter-entity-list-by-user for RM Scheduler page. Contributed by Gergely Pollak
---
 .../webapp/FairSchedulerAppsBlock.java | 69 +++---
 1 file changed, 61 insertions(+), 8 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
index 14ad277..f6202cb 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
@@ -23,18 +23,21 @@ import static org.apache.hadoop.yarn.webapp.YarnWebParams.APP_STATE;
 import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_PROGRESSBAR;
 import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_PROGRESSBAR_VALUE;
 
-import java.util.Collection;
-import java.util.HashSet;
-import java.util.Map;
+import java.security.Principal;
+import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 
 import org.apache.commons.text.StringEscapeUtils;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.yarn.api.records.ApplicationAccessType;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.QueueACL;
 import org.apache.hadoop.yarn.api.records.YarnApplicationState;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState;
@@ -49,6 +52,8 @@ import org.apache.hadoop.yarn.webapp.view.HtmlBlock;
 
 import com.google.inject.Inject;
 
+import javax.servlet.http.HttpServletRequest;
+
 /**
  * Shows application information specific to the fair
  * scheduler as part of the fair scheduler page.
@@ -58,10 +63,19 @@ public class FairSchedulerAppsBlock extends HtmlBlock {
   final FairSchedulerInfo fsinfo;
   final Configuration conf;
   final ResourceManager rm;
+  final boolean filterAppsByUser;
+
   @Inject
   public FairSchedulerAppsBlock(ResourceManager rm, ViewContext ctx,
   Configuration conf) {
 super(ctx);
+this.conf = conf;
+this.rm = rm;
+
+this.filterAppsByUser  = conf.getBoolean(
+YarnConfiguration.FILTER_ENTITY_LIST_BY_USER,
+YarnConfiguration.DEFAULT_DISPLAY_APPS_FOR_LOGGED_IN_USER);
+
 FairScheduler scheduler = (FairScheduler) rm.getResourceScheduler();
 fsinfo = new FairSchedulerInfo(scheduler);
 apps = new ConcurrentHashMap<ApplicationId, RMApp>();
@@ -70,13 +84,52 @@ public class FairSchedulerAppsBlock extends HtmlBlock {
   if (!(RMAppState.NEW.equals(entry.getValue().getState())
   || RMAppState.NEW_SAVING.equals(entry.getValue().getState())
  || RMAppState.SUBMITTED.equals(entry.getValue().getState()))) {
-apps.put(entry.getKey(), entry.getValue());
+if (!filterAppsByUser || hasAccess(entry.getValue(),
+ctx.requestContext().getRequest())) {
+  apps.put(entry.getKey(), entry.getValue());
+}
   }
 }
-this.conf = conf;
-this.rm = rm;
   }
-  
+
+  private UserGroupInformation getCallerUserGroupInformation(
+  HttpServletRequest hsr, boolean usePrincipal) {
+String remoteUser = hsr.getRemoteUser();
+if (usePrincipal) {
+  Principal princ = hsr.getUserPrincipal();
+  remoteUser = princ == null ? null : princ.getName();
+}
+
+UserGroupInformation callerUGI = null;
+if (remoteUser != null) {
+  callerUGI = UserGroupInformation.createRemoteUser(remoteUser);
+}
+
+return callerUGI;
+  }
+
+  protected Boolean hasAccess(RMApp app, HttpServletRequest hsr) {
+// Check for the 
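
The diff is truncated in the middle of hasAccess(). For orientation, a sketch of how such a check typically completes in RM web blocks (compare the existing AppsBlock); the body and signatures below are assumptions, not the committed code:

// Hypothetical completion: show the app when the caller is unknown, or
// when the caller passes either the application view ACL or the queue
// admin ACL. Method shapes follow similar RM web UI blocks.
protected Boolean hasAccess(RMApp app, HttpServletRequest hsr) {
  // Check for the authorization.
  UserGroupInformation callerUGI = getCallerUserGroupInformation(hsr, true);
  return callerUGI == null
      || rm.getApplicationACLsManager().checkAccess(callerUGI,
          ApplicationAccessType.VIEW_APP, app.getUser(),
          app.getApplicationId())
      || rm.getQueueACLsManager().checkAccess(callerUGI,
          QueueACL.ADMINISTER_QUEUE, app, hsr.getRemoteAddr(), null);
}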

[hadoop] branch branch-3.3 updated: HADOOP-17894. CredentialProviderFactory.getProviders() recursion loading JCEKS file from S3A (#3393)

2021-09-08 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new a2242df  HADOOP-17894. CredentialProviderFactory.getProviders() recursion loading JCEKS file from S3A (#3393)
a2242df is described below

commit a2242df10a1999324db5b3c49ccdc90495e09ae5
Author: Steve Loughran 
AuthorDate: Tue Sep 7 15:29:37 2021 +0100

HADOOP-17894. CredentialProviderFactory.getProviders() recursion loading JCEKS file from S3A (#3393)

* CredentialProviderFactory to detect and report on recursion.
* S3AFS to remove incompatible providers.
* Integration Test for this.

Contributed by Steve Loughran.

Change-Id: Ia247b3c9fe8488ffdb7f57b40eb6e37c57e522ef
---
 .../security/alias/CredentialProviderFactory.java  |  35 +++-
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java|   6 +
 .../java/org/apache/hadoop/fs/s3a/S3AUtils.java|   2 +-
 .../apache/hadoop/fs/s3a/auth/ITestJceksIO.java| 190 +
 4 files changed, 225 insertions(+), 8 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProviderFactory.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProviderFactory.java
index 1b2ac41..8b39337 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProviderFactory.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProviderFactory.java
@@ -25,11 +25,13 @@ import java.util.ArrayList;
 import java.util.Iterator;
 import java.util.List;
 import java.util.ServiceLoader;
+import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.PathIOException;
 
 /**
 * A factory to create a list of CredentialProvider based on the path given in a
@@ -59,9 +61,18 @@ public abstract class CredentialProviderFactory {
 }
   }
 
+  /**
+   * Fail fast on any recursive load of credential providers, which can
+   * happen if the FS itself triggers the load.
+   * A simple boolean could be used here, as the synchronized block ensures
+   * that only one thread can be active at a time. An atomic is used
+   * for rigorousness.
+   */
+  private static final AtomicBoolean SERVICE_LOADER_LOCKED = new AtomicBoolean(false);
+
   public static List<CredentialProvider> getProviders(Configuration conf
 ) throws IOException {
-    List<CredentialProvider> result = new ArrayList<CredentialProvider>();
+    List<CredentialProvider> result = new ArrayList<>();
 for(String path: conf.getStringCollection(CREDENTIAL_PROVIDER_PATH)) {
   try {
 URI uri = new URI(path);
@@ -69,13 +80,23 @@ public abstract class CredentialProviderFactory {
 // Iterate serviceLoader in a synchronized block since
 // serviceLoader iterator is not thread-safe.
 synchronized (serviceLoader) {
-  for (CredentialProviderFactory factory : serviceLoader) {
-CredentialProvider kp = factory.createProvider(uri, conf);
-if (kp != null) {
-  result.add(kp);
-  found = true;
-  break;
+  try {
+if (SERVICE_LOADER_LOCKED.getAndSet(true)) {
+  throw new PathIOException(path,
+  "Recursive load of credential provider; " +
+  "if loading a JCEKS file, this means that the filesystem 
connector is " +
+  "trying to load the same file");
+}
+for (CredentialProviderFactory factory : serviceLoader) {
+  CredentialProvider kp = factory.createProvider(uri, conf);
+  if (kp != null) {
+result.add(kp);
+found = true;
+break;
+  }
 }
+  } finally {
+SERVICE_LOADER_LOCKED.set(false);
   }
 }
 if (!found) {
diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
index 06fd604..e0dfb56 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
@@ -186,6 +186,7 @@ import org.apache.hadoop.io.retry.RetryPolicies;
 import org.apache.hadoop.fs.store.EtagChecksum;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.BlockingThreadPoolExecutorService;
+import org.apache.hadoop.security.ProviderUtils;
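
The S3AFileSystem part of this diff is cut off above. The core of the CredentialProviderFactory change is a fail-fast recursion guard; a self-contained sketch of the pattern (names below are illustrative):

// Illustrative sketch of the guard used above. synchronized makes the
// section single-threaded, so observing true on entry can only mean the
// same thread re-entered: the provider load triggered itself, e.g. by
// instantiating a filesystem that loads the same JCEKS file.
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

final class RecursionGuardSketch {
  private static final AtomicBoolean LOADING = new AtomicBoolean(false);
  private static final Object LOCK = new Object();

  static void loadProvider(String path) throws IOException {
    synchronized (LOCK) {
      if (LOADING.getAndSet(true)) {
        throw new IOException("Recursive load of credential provider " + path);
      }
      try {
        // resolve the provider for 'path' here
      } finally {
        LOADING.set(false);
      }
    }
  }
}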
 

[hadoop] branch trunk updated: HADOOP-17857. Check real user ACLs in addition to proxied user ACLs. Contributed by Eric Payne

2021-09-08 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 5428d36  HADOOP-17857. Check real user ACLs in addition to proxied user ACLs. Contributed by Eric Payne
5428d36 is described below

commit 5428d36b56fab319ab68258139d6133ded9bbafc
Author: Szilard Nemeth 
AuthorDate: Wed Sep 8 17:27:22 2021 +0200

HADOOP-17857. Check real user ACLs in addition to proxied user ACLs. Contributed by Eric Payne
---
 .../hadoop/security/authorize/AccessControlList.java   | 12 +---
 .../security/authorize/TestAccessControlList.java  | 18 ++
 2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AccessControlList.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AccessControlList.java
index e86d918..aa5b01f 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AccessControlList.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AccessControlList.java
@@ -56,6 +56,7 @@ public class AccessControlList implements Writable {
   // Indicates an ACL string that represents access to all users
   public static final String WILDCARD_ACL_VALUE = "*";
   private static final int INITIAL_CAPACITY = 256;
+  public static final String USE_REAL_ACLS = "~";
 
   // Set of users who are granted access.
  private Collection<String> users;
@@ -224,9 +225,12 @@ public class AccessControlList implements Writable {
 
   /**
* Checks if a user represented by the provided {@link UserGroupInformation}
-   * is a member of the Access Control List
+   * is a member of the Access Control List. If user was proxied and
+   * USE_REAL_ACLS + the real user name is in the control list, then treat this
+   * case as if user were in the ACL list.
* @param ugi UserGroupInformation to check if contained in the ACL
-   * @return true if ugi is member of the list
+   * @return true if ugi is member of the list or if USE_REAL_ACLS + real user
+   * is in the list
*/
   public final boolean isUserInList(UserGroupInformation ugi) {
 if (allAllowed || users.contains(ugi.getShortUserName())) {
@@ -239,7 +243,9 @@ public class AccessControlList implements Writable {
 }
   }
 }
-return false;
+UserGroupInformation realUgi = ugi.getRealUser();
+return realUgi != null &&
+   users.contains(USE_REAL_ACLS + realUgi.getShortUserName());
   }
 
   public boolean isUserAllowed(UserGroupInformation ugi) {
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestAccessControlList.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestAccessControlList.java
index 8e1b82b..53ab275 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestAccessControlList.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestAccessControlList.java
@@ -471,4 +471,22 @@ public class TestAccessControlList {
 + " is incorrectly granted the access-control!!",
 acl.isUserAllowed(ugi));
   }
+
+  @Test
+  public void testUseRealUserAclsForProxiedUser() {
+String realUser = "realUser";
+AccessControlList acl = new AccessControlList(realUser);
+UserGroupInformation realUserUgi =
+UserGroupInformation.createRemoteUser(realUser);
+UserGroupInformation user1 =
+UserGroupInformation.createProxyUserForTesting("regularJane",
+realUserUgi, new String [] {"group1"});
+assertFalse("User " + user1 + " should not have been granted access.",
+acl.isUserAllowed(user1));
+
+acl = new AccessControlList(AccessControlList.USE_REAL_ACLS + realUser);
+
+assertTrue("User " + user1 + " should have access but was denied.",
+acl.isUserAllowed(user1));
+  }
 }
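
In operational terms, the change lets an ACL admit users who arrive through a trusted proxy. A usage sketch based on the test above; the user names and the ACL string are illustrative:

// "~superuser" (the USE_REAL_ACLS marker "~" + a real user name) matches
// any account proxied by "superuser"; "alice" is matched directly.
AccessControlList acl = new AccessControlList("alice,~superuser");
// - alice                          -> allowed (listed explicitly)
// - bob proxied through superuser  -> allowed (real user matches ~superuser)
// - bob connecting directly        -> denied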




[hadoop] branch trunk updated (4e209a3 -> 5e16689)

2021-09-08 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 4e209a3  YARN-10919. Remove LeafQueue#scheduler field (#3382)
 add 5e16689  YARN-10901. Permission checking error on an existing directory in LogAggregationFileController#verifyAndCreateRemoteLogDir (#3355)

No new revisions were added by this update.

Summary of changes:
 .../LogAggregationFileController.java  | 13 +-
 .../TestLogAggregationFileController.java  | 53 ++
 2 files changed, 64 insertions(+), 2 deletions(-)




[hadoop] branch trunk updated (40e639a -> 4e209a3)

2021-09-08 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 40e639a  YARN-10646. TestCapacitySchedulerWeightMode test descriptor comments doesnt reflect the correct scenario (#3339)
 add 4e209a3  YARN-10919. Remove LeafQueue#scheduler field (#3382)

No new revisions were added by this update.

Summary of changes:
 .../resourcemanager/scheduler/capacity/LeafQueue.java   | 13 +
 1 file changed, 5 insertions(+), 8 deletions(-)




[hadoop] branch trunk updated: YARN-10646. TestCapacitySchedulerWeightMode test descriptor comments doesnt reflect the correct scenario (#3339)

2021-09-08 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 40e639a  YARN-10646. TestCapacitySchedulerWeightMode test descriptor comments doesnt reflect the correct scenario (#3339)
40e639a is described below

commit 40e639ad078a5f6f0503f206c4a0c4c012793349
Author: Benjamin Teke 
AuthorDate: Wed Sep 8 16:11:04 2021 +0200

YARN-10646. TestCapacitySchedulerWeightMode test descriptor comments doesnt reflect the correct scenario (#3339)

Co-authored-by: Benjamin Teke 
---
 .../capacity/TestCapacitySchedulerWeightMode.java  | 24 +++---
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerWeightMode.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerWeightMode.java
index 171123a..a5d80dc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerWeightMode.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerWeightMode.java
@@ -198,9 +198,9 @@ public class TestCapacitySchedulerWeightMode {
*   a x(=100%), y(50%)   b y(=50%), z(=100%)
*    __
*  /   /  \
-   * a1 ([x,y]: w=100)b1(no)  b2([y,z]: w=100)
+   * a1 ([x,y]: w=1)b1(no)  b2([y,z]: w=1)
*
-   * Parent uses weight, child uses percentage
+   * Parent uses percentages, child uses weights
*/
   public static Configuration getCSConfWithLabelsParentUsePctChildUseWeight(
   Configuration config) {
@@ -210,9 +210,9 @@ public class TestCapacitySchedulerWeightMode {
 // Define top-level queues
 conf.setQueues(CapacitySchedulerConfiguration.ROOT,
 new String[] { "a", "b" });
-conf.setLabeledQueueWeight(CapacitySchedulerConfiguration.ROOT, "x", 100);
-conf.setLabeledQueueWeight(CapacitySchedulerConfiguration.ROOT, "y", 100);
-conf.setLabeledQueueWeight(CapacitySchedulerConfiguration.ROOT, "z", 100);
+conf.setCapacityByLabel(CapacitySchedulerConfiguration.ROOT, "x", 100);
+conf.setCapacityByLabel(CapacitySchedulerConfiguration.ROOT, "y", 100);
+conf.setCapacityByLabel(CapacitySchedulerConfiguration.ROOT, "z", 100);
 
 conf.setCapacityByLabel(A, RMNodeLabelsManager.NO_LABEL, 10);
 conf.setMaximumCapacity(A, 10);
@@ -228,23 +228,23 @@ public class TestCapacitySchedulerWeightMode {
 
 // Define 2nd-level queues
 conf.setQueues(A, new String[] { "a1" });
-conf.setCapacityByLabel(A1, RMNodeLabelsManager.NO_LABEL, 100);
+conf.setLabeledQueueWeight(A1, RMNodeLabelsManager.NO_LABEL, 1);
 conf.setMaximumCapacity(A1, 100);
 conf.setAccessibleNodeLabels(A1, toSet("x", "y"));
 conf.setDefaultNodeLabelExpression(A1, "x");
-conf.setCapacityByLabel(A1, "x", 100);
-conf.setCapacityByLabel(A1, "y", 100);
+conf.setLabeledQueueWeight(A1, "x", 1);
+conf.setLabeledQueueWeight(A1, "y", 1);
 
 conf.setQueues(B, new String[] { "b1", "b2" });
-conf.setCapacityByLabel(B1, RMNodeLabelsManager.NO_LABEL, 50);
+conf.setLabeledQueueWeight(B1, RMNodeLabelsManager.NO_LABEL, 1);
 conf.setMaximumCapacity(B1, 50);
 conf.setAccessibleNodeLabels(B1, RMNodeLabelsManager.EMPTY_STRING_SET);
 
-conf.setCapacityByLabel(B2, RMNodeLabelsManager.NO_LABEL, 50);
+conf.setLabeledQueueWeight(B2, RMNodeLabelsManager.NO_LABEL, 1);
 conf.setMaximumCapacity(B2, 50);
 conf.setAccessibleNodeLabels(B2, toSet("y", "z"));
-conf.setCapacityByLabel(B2, "y", 100);
-conf.setCapacityByLabel(B2, "z", 100);
+conf.setLabeledQueueWeight(B2, "y", 1);
+conf.setLabeledQueueWeight(B2, "z", 1);
 
 return conf;
   }




[hadoop] branch trunk updated: Add documentation for YARN-10623 auto refresh queue conf in CS (#3279)

2021-09-08 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3024a47  Add documentation for YARN-10623 auto refresh queue conf in CS (#3279)
3024a47 is described below

commit 3024a4702676632f3de71196fe5c2714eae1905f
Author: zhuqi <821684...@qq.com>
AuthorDate: Wed Sep 8 22:03:15 2021 +0800

Add documentation for YARN-10623 auto refresh queue conf in CS (#3279)
---
 .../hadoop-yarn-site/src/site/markdown/CapacityScheduler.md   | 8 
 1 file changed, 8 insertions(+)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
index 254c51a..d35869c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
@@ -831,6 +831,14 @@ Changing queue/scheduler properties and adding/removing queues can be done in tw

  Remove the queue configurations from the file and run refresh as described above
 
+### Enabling periodic configuration refresh
+Enabling queue configuration periodic refresh allows reloading and applying the configuration by editing the *conf/capacity-scheduler.xml*, without the necessity of calling yarn rmadmin -refreshQueues.
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.resourcemanager.scheduler.monitor.enable` | Enabling monitoring is necessary for the periodic refresh. Default value is false. |
+| `yarn.resourcemanager.scheduler.monitor.policies` | This is a configuration property that holds a list of classes. Adding more classes means more monitor tasks will be launched. Add `org.apache.hadoop.yarn.server.resourcemanager.capacity.QueueConfigurationAutoRefreshPolicy` to the policies list to enable the periodic refresh. Default value of this property is `org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy`, it means the preemption f [...]
+| `yarn.resourcemanager.queue.auto.refresh.monitoring-interval` | Adjusting the auto-refresh monitoring interval is possible with this configuration property. The value is in milliseconds. The default value is 5000 (5 seconds). |
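
A minimal yarn-site.xml sketch pulling the three properties above together (the policy list assumes the default preemption policy should be kept alongside the refresh policy):

<!-- Sketch: turn on periodic capacity-scheduler.xml refresh. -->
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.monitor.policies</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy,org.apache.hadoop.yarn.server.resourcemanager.capacity.QueueConfigurationAutoRefreshPolicy</value>
</property>
<property>
  <!-- optional: poll for changes every 5000 ms (the default) -->
  <name>yarn.resourcemanager.queue.auto.refresh.monitoring-interval</name>
  <value>5000</value>
</property>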
 ### Changing queue configuration via API
 
  Editing by API uses a backing store for the scheduler configuration. To enable this, the following parameters can be configured in yarn-site.xml.




[hadoop] branch trunk updated: YARN-10522. Document for Flexible Auto Queue Creation in Capacity Scheduler

2021-09-08 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d9cb698  YARN-10522. Document for Flexible Auto Queue Creation in Capacity Scheduler
d9cb698 is described below

commit d9cb69853b51e33d379e29df616689ce9e20
Author: Benjamin Teke 
AuthorDate: Wed Sep 8 15:43:57 2021 +0200

YARN-10522. Document for Flexible Auto Queue Creation in Capacity Scheduler
---
 .../src/site/markdown/CapacityScheduler.md | 72 +++---
 1 file changed, 65 insertions(+), 7 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
index 7e4c3bd..254c51a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
@@ -123,17 +123,25 @@ Configuration
 
 | Property | Description |
 |:---- |:---- |
-| `yarn.scheduler.capacity.<queue-path>.capacity` | Queue *capacity* in percentage (%) as a float (e.g. 12.5) OR as absolute resource queue minimum capacity. The sum of capacities for all queues, at each level, must be equal to 100. However if absolute resource is configured, sum of absolute resources of child queues could be less than it's parent absolute resource capacity. Applications in the queue may consume more resources than the queue's capacity if there are free resources, provid [...]
-| `yarn.scheduler.capacity.<queue-path>.maximum-capacity` | Maximum queue capacity in percentage (%) as a float OR as absolute resource queue maximum capacity. This limits the *elasticity* for applications in the queue. 1) Value is between 0 and 100. 2) Admin needs to make sure absolute maximum capacity >= absolute capacity for each queue. Also, setting this value to -1 sets maximum capacity to 100%. |
+| `yarn.scheduler.capacity.<queue-path>.capacity` | Queue *capacity* in percentage (%) as a float (e.g. 12.5), weight as a float with the postfix *w* (e.g. 2.0w) or as absolute resource queue minimum capacity. When using percentage values the sum of capacities for all queues, at each level, must be equal to 100. If absolute resource is configured, sum of absolute resources of child queues could be less than its parent absolute resource capacity. Applications in the queue may consume more [...]
+| `yarn.scheduler.capacity.<queue-path>.maximum-capacity` | Maximum queue capacity in percentage (%) as a float (when the *capacity* property is defined with either percentages or weights) or as absolute resource queue maximum capacity. This limits the *elasticity* for applications in the queue. 1) Value is between 0 and 100. 2) Admin needs to make sure absolute maximum capacity >= absolute capacity for each queue. Also, setting this value to -1 sets maximum capacity to 100%. |
 | `yarn.scheduler.capacity.<queue-path>.minimum-user-limit-percent` | Each queue enforces a limit on the percentage of resources allocated to a user at any given time, if there is demand for resources. The user limit can vary between a minimum and maximum value. The former (the minimum value) is set to this property value and the latter (the maximum value) depends on the number of users who have submitted applications. For e.g., suppose the value of this property is 25. If two users have [...]
 | `yarn.scheduler.capacity.<queue-path>.user-limit-factor` | User limit factor provides a way to control the max amount of resources that a single user can consume. It is the multiple of the queue's capacity. By default this is set to 1 which ensures that a single user can never take more than the queue's configured capacity irrespective of how idle the cluster is. Increasing it means a single user can use more than the minimum capacity of the cluster, while decreasing it results in lowe [...]
 | `yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb` | The per queue maximum limit of memory to allocate to each container request at the Resource Manager. This setting overrides the cluster configuration `yarn.scheduler.maximum-allocation-mb`. This value must be smaller than or equal to the cluster maximum. |
 | `yarn.scheduler.capacity.<queue-path>.maximum-allocation-vcores` | The per queue maximum limit of virtual cores to allocate to each container request at the Resource Manager. This setting overrides the cluster configuration `yarn.scheduler.maximum-allocation-vcores`. This value must be smaller than or equal to the cluster maximum. |
 | `yarn.scheduler.capacity.<queue-path>.user-settings.<user-name>.weight` | This floating point value is used when calculating the user limit resource values for users in a queue. This value will weight each user more or less than the other users in the queue. For example, if user A should receive 50% more resources in a 
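
To make the weight syntax above concrete, a short capacity-scheduler.xml sketch with two sibling queues sized 2:1 by weight (the queue names are illustrative):

<!-- Sketch: root.batch receives twice the share of root.adhoc. -->
<property>
  <name>yarn.scheduler.capacity.root.batch.capacity</name>
  <value>2.0w</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.adhoc.capacity</name>
  <value>1.0w</value>
</property>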

[hadoop] branch trunk updated: YARN-10576. Update Capacity Scheduler documentation with JSON-based placement mapping. Contributed by Benjamin Teke

2021-09-08 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 9c8fe1e  YARN-10576. Update Capacity Scheduler documentation with JSON-based placement mapping. Contributed by Benjamin Teke
9c8fe1e is described below

commit 9c8fe1e512df62be5dc994f07951c5c6d03690f3
Author: Szilard Nemeth 
AuthorDate: Wed Sep 8 15:17:27 2021 +0200

YARN-10576. Update Capacity Scheduler documentation with JSON-based placement mapping. Contributed by Benjamin Teke
---
 .../src/site/markdown/CapacityScheduler.md | 51 +-
 1 file changed, 40 insertions(+), 11 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
index ebec79f..7e4c3bd 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
@@ -261,7 +261,7 @@ Below example covers single mapping separately. In case of multiple mappings wit

  In order to make the queue mapping feature more versatile, a new format and evaluation engine has been added to Capacity Scheduler. The new engine is fully backwards compatible with the old one and adds several new features. Note that it can also parse the old format, but the new features are only available if you specify the mappings in JSON.
 
-  * Syntax
+Syntax
 
  Based on the current JSON schema, users can define mapping rules the following way:
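
The JSON example block itself is elided from this diff view. A sketch of such a rule chain, using the fields described in the settings table further below; treat the exact schema, in particular fallbackResult, as an assumption:

{
  "rules": [
    {
      "type": "user",
      "matches": "*",
      "policy": "user",
      "parentQueue": "root.users",
      "fallbackResult": "skip"
    }
  ]
}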
 
@@ -288,7 +288,27 @@ Below example covers single mapping separately. In case of multiple mappings wit

Rules are evaluated from top to bottom. Compared to the legacy mapping rule evaluator, what happens when evaluation stops without a matching rule can be adjusted much more flexibly.
 
-  * Rules
+How to enable JSON-based queue mapping
+
+The following properties control the format in which the new placement engine expects its rules.
+
+| Setting | Description |
+|:---- |:---- |
+| `yarn.scheduler.capacity.mapping-rule-format` | Allowed values are `legacy` or `json`. If it is not set, then the engine assumes that the old format might be in use so it also checks the value of `yarn.scheduler.capacity.queue-mappings`. Therefore, this must be set to `json` and cannot be left empty. |
+| `yarn.scheduler.capacity.mapping-rule-json` | The value of this property should contain the entire chain of rules inline. This is the preferred way of configuring Capacity Scheduler if you use the Mutation API, i.e. modify configuration in real time via the REST interface. |
+| `yarn.scheduler.capacity.mapping-rule-json-file` | Defines an absolute path to a JSON file which contains the rules. For example, `/opt/hadoop/config/mapping-rules.json`. |
+
+The property `yarn.scheduler.capacity.mapping-rule-json` takes precedence over `yarn.scheduler.capacity.mapping-rule-json-file`. If the format is set to `json` but you don't define either of these, then you'll get a warning but the initialization of Capacity Scheduler will not fail.
+
+Differences between legacy and flexible queue auto-creation modes
+
+To use the flexible Queue Auto-Creation under a parent, the queue capacities must be configured with weights. The flexible mode gives the user much more freedom to automatically create new leaf queues or entire queue hierarchies based on mapping rules. "Legacy" mode refers to either percentage-based configuration or where capacities are defined with absolute resources.
+
+In flexible Queue Auto-Creation mode, every parent queue can have dynamically created parent or leaf queues (if the `yarn.scheduler.capacity.<queue-path>.auto-queue-creation-v2.enabled` property is set to true), even if it already has static child queues. This also means that certain settings influence the outcome of the queue placement depending on how the scheduler is configured.
+
+When the mode is relevant, the document explains how certain settings or flags affect the overall logic.
+
+Rules
 
   Each mapping rule can have the following settings:
 
@@ -298,21 +318,28 @@ Rules are evaluated from top to bottom. Compared to the legacy mapping rule eval
 | `matches` | The string to match, or an asterisk "*" which means "all". For example, if the type is `user` and this string is "hadoop" then the rule will only be evaluated if the submitter user is "hadoop". The "*" does not work with groups. |
 | `policy` | Selects a list of pre-defined policies which defines where the application should be placed. This will be explained later in the "Policies" section. |
 | `parentQueue` | In case of `user`, `primaryGroup`, `primaryGroupUser`, `secondaryGroup`, `secondaryGroupUser` policies, this tells the engine where the 

[hadoop] branch branch-3.3 updated (5926ccd -> 76393e1)

2021-09-08 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a change to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 5926ccd  HADOOP-17897. Allow nested blocks in switch case in checkstyle settings. (#3394)
 add 76393e1  HADOOP-17899. Avoid using implicit dependency on junit-jupiter-api. (#3399)

No new revisions were added by this update.

Summary of changes:
 .../src/test/java/org/apache/hadoop/fs/http/TestHttpFileSystem.java  | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)




[hadoop] branch trunk updated (e183ec8 -> ce7a5bf)

2021-09-08 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from e183ec8  HADOOP-17897. Allow nested blocks in switch case in checkstyle settings. (#3394)
 add ce7a5bf  HADOOP-17899. Avoid using implicit dependency on junit-jupiter-api. (#3399)

No new revisions were added by this update.

Summary of changes:
 .../src/test/java/org/apache/hadoop/fs/http/TestHttpFileSystem.java  | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)




[hadoop] branch branch-2.10 updated: HADOOP-17897. Allow nested blocks in switch case in checkstyle settings. (#3394)

2021-09-08 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new ba86e01  HADOOP-17897. Allow nested blocks in switch case in checkstyle settings. (#3394)
ba86e01 is described below

commit ba86e013360641137c12e1be0ad26e7eaad9c7ec
Author: Masatake Iwasaki 
AuthorDate: Wed Sep 8 13:55:48 2021 +0900

HADOOP-17897. Allow nested blocks in switch case in checkstyle settings. (#3394)

(cherry picked from commit e183ec8998d0272884b73df7eb1a6da5adf1040a)
---
 hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml b/hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
index 5445b5d..1093984 100644
--- a/hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
+++ b/hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
@@ -153,7 +153,9 @@
 
 
 
-<module name="AvoidNestedBlocks"/>
+<module name="AvoidNestedBlocks">
+  <property name="allowInSwitchCase" value="true"/>
+</module>
 
 
 

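
For reference, this is the kind of code the relaxed rule permits: a nested block inside a switch case, giving each case its own variable scope (example code, not from the commit):

// With AvoidNestedBlocks' allowInSwitchCase=true, checkstyle stops
// flagging case bodies wrapped in braces.
class SwitchBlockExample {
  static int bufferSizeFor(int op) {
    switch (op) {
      case 0: {        // nested block scopes 'size' to this case
        int size = 4096;
        return size;
      }
      default: {
        int size = 8192;
        return size;
      }
    }
  }
}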



[hadoop] branch branch-3.2 updated: HADOOP-17897. Allow nested blocks in switch case in checkstyle settings. (#3394)

2021-09-08 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 34d443a  HADOOP-17897. Allow nested blocks in switch case in checkstyle settings. (#3394)
34d443a is described below

commit 34d443a2f063047df1a1e931629cc8da335ff5e1
Author: Masatake Iwasaki 
AuthorDate: Wed Sep 8 13:55:48 2021 +0900

HADOOP-17897. Allow nested blocks in switch case in checkstyle settings. (#3394)

(cherry picked from commit e183ec8998d0272884b73df7eb1a6da5adf1040a)
---
 hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml b/hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
index 753735e..adffe4e 100644
--- a/hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
+++ b/hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
@@ -161,7 +161,9 @@
 
 
 
-<module name="AvoidNestedBlocks"/>
+<module name="AvoidNestedBlocks">
+  <property name="allowInSwitchCase" value="true"/>
+</module>
 
 
 
