[GitHub] [hadoop] hadoop-yetus commented on pull request #5213: YARN-11391 Add yarn RM DNS support

2022-12-12 Thread GitBox


hadoop-yetus commented on PR #5213:
URL: https://github.com/apache/hadoop/pull/5213#issuecomment-1347845867

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
   || _ Prechecks _ ||
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  2s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  2s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
   || _ trunk Compile Tests _ ||
   | +0 :ok: |  mvndep  |  16m  2s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 25s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m  7s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5213/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   2m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
   || _ Patch Compile Tests _ ||
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javac  |  24m 33s | 
[/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5213/1/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 18 new + 2814 unchanged - 
0 fixed = 2832 total (was 2814)  |
   | +1 :green_heart: |  compile  |  21m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  javac  |  21m 39s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5213/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 18 new + 2611 unchanged - 0 
fixed = 2629 total (was 2611)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 55s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5213/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 21 new + 185 unchanged - 0 fixed = 206 total (was 
185)  |
   | +1 :green_heart: |  mvnsite  |   3m 24s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 59s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5213/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m  4s | 
[/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5213/1/artifact/out/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.html)
 |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  24m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |

[GitHub] [hadoop] lnbest0707 commented on a diff in pull request #5213: YARN-11391 Add yarn RM DNS support

2022-12-12 Thread GitBox


lnbest0707 commented on code in PR #5213:
URL: https://github.com/apache/hadoop/pull/5213#discussion_r1046726153


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java:
##
@@ -262,6 +301,179 @@ public static String getRMHAId(Configuration conf) {
     return currentRMId;
   }
 
+  /**
+   * This function resolves all RMIds with their addresses. For multi-A DNS
+   * records, it will resolve all of them, and generate a new Id for each of
+   * them.
+   *
+   * @param conf Configuration
+   * @return Map with RMId as key and its address as value
+   */
+  public static Map<String, InetSocketAddress> getResolvedRMIdPairs(
+      Configuration conf) {
+    boolean resolveNeeded = conf.getBoolean(
+        YarnConfiguration.RESOLVE_RM_ADDRESS_NEEDED_KEY,
+        YarnConfiguration.RESOLVE_RM_ADDRESS_NEEDED_DEFAULT);
+    boolean requireFQDN = conf.getBoolean(
+        YarnConfiguration.RESOLVE_RM_ADDRESS_TO_FQDN,
+        YarnConfiguration.RESOLVE_RM_ADDRESS_TO_FQDN_DEFAULT);
+    // In case the client uses DIFFERENT addresses for each service address,
+    // they need to be categorized first
+    Map<List<String>, List<String>> addressesConfigKeysMap = new HashMap<>();
+    Collection<String> rmIds = getOriginalRMHAIds(conf);
+    for (String configKey : RM_ADDRESS_CONFIG_KEYS) {
+      List<String> addresses = new ArrayList<>();
+      for (String rmId : rmIds) {
+        String keyToRead = addSuffix(configKey, rmId);
+        InetSocketAddress address = getInetSocketAddressFromString(
+            conf.get(keyToRead));
+        if (address != null) {
+          addresses.add(address.getHostName());
+        }
+      }
+      Collections.sort(addresses);
+      List<String> configKeysOfTheseAddresses =
+          addressesConfigKeysMap.get(addresses);
+      if (configKeysOfTheseAddresses == null) {
+        configKeysOfTheseAddresses = new ArrayList<>();
+        addressesConfigKeysMap.put(addresses, configKeysOfTheseAddresses);
+      }
+      configKeysOfTheseAddresses.add(configKey);
+    }
+    // We need to resolve and override by group (categorized by their input
+    // host). But since the function is called from "getRMHAId", it only
+    // returns the value corresponding to YarnConfiguration.RM_ADDRESS
+    Map<String, InetSocketAddress> ret = null;
+    for (List<String> configKeys : addressesConfigKeysMap.values()) {
+      Map<String, InetSocketAddress> res = getResolvedIdPairs(conf,
+          resolveNeeded, requireFQDN, getOriginalRMHAIds(conf),
+          configKeys.get(0), YarnConfiguration.RM_HA_IDS, configKeys);
+      if (configKeys.contains(YarnConfiguration.RM_ADDRESS)) {
+        ret = res;
+      }
+    }
+    return ret;
+  }
+
+  private static Map<String, InetSocketAddress> getResolvedIdPairs(
+      Configuration conf, boolean resolveNeeded, boolean requireFQDN,
+      Collection<String> ids, String configKey, String configKeyToReplace,
+      List<String> listOfConfigKeysToReplace) {
+    Map<String, InetSocketAddress> idAddressPairs = new HashMap<>();
+    Map<String, String> generatedIdToOriginalId = new HashMap<>();
+    for (String id : ids) {
+      String key = addSuffix(configKey, id);
+      String addr = conf.get(key); // string with port
+      InetSocketAddress address = getInetSocketAddressFromString(addr);
+      if (address == null) {
+        continue;
+      }
+      if (resolveNeeded) {
+        if (dnr == null) {
+          setDnrByConfiguration(conf);
+        }
+        // If the address needs to be resolved, get all of the IP addresses
+        // from this address and pass them into the map
+        LOG.info("Multi-A domain name " + addr +
+            " will be resolved by " + dnr.getClass().getName());
+        int port = address.getPort();
+        String[] resolvedHostNames;
+        try {
+          resolvedHostNames = dnr.getAllResolvedHostnameByDomainName(
+              address.getHostName(), requireFQDN);
+        } catch (UnknownHostException e) {
+          LOG.warn("Exception in resolving socket address "
+              + address.getHostName(), e);
+          continue;
+        }
+        LOG.info("Resolved addresses for " + addr +
+            " is " + Arrays.toString(resolvedHostNames));
+        if (resolvedHostNames == null || resolvedHostNames.length < 1) {
+          LOG.warn("Cannot resolve from address " + address.getHostName());
+        } else {
+          // If multiple addresses resolved, a corresponding id needs to be
+          // created for each of them
+          for (int i = 0; i < resolvedHostNames.length; i++) {
+            String generatedRMId = id + "_resolved_" + (i + 1);
+            idAddressPairs.put(generatedRMId,
+                new InetSocketAddress(resolvedHostNames[i], port));
+            generatedIdToOriginalId.put(generatedRMId, id);
+          }
+        }
+        overrideIdsInConfiguration(
+            idAddressPairs, generatedIdToOriginalId, configKeyToReplace,
+            listOfConfigKeysToReplace, conf);
+      } else {
+        idAddressPairs.put(id, address);
+      }
+    }
+    return idAddressPairs;
+  }
+
+  /**
+   * This function overrides all RMIds and their addresses by the input Map.
+   *
+   * @pa

[jira] [Commented] (HADOOP-18546) disable purging list of in progress reads in abfs stream closed

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646455#comment-17646455
 ] 

ASF GitHub Bot commented on HADOOP-18546:
-

mehakmeet commented on code in PR #5205:
URL: https://github.com/apache/hadoop/pull/5205#discussion_r1046686949


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -828,8 +828,10 @@ public IOStatistics getIOStatistics() {
   @Override
   public String toString() {
     final StringBuilder sb = new StringBuilder(super.toString());
+    sb.append("AbfsInputStream@(").append(this.hashCode()).append("){");
+    sb.append("[HADOOP-18546]")
+        .append(", ");
     if (streamStatistics != null) {
-      sb.append("AbfsInputStream@(").append(this.hashCode()).append("){");
       sb.append(streamStatistics.toString());
       sb.append("}");

Review Comment:
   The closing bracket of the log should be outside the statistics if block
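   
   For illustration, a minimal sketch of the change this comment asks for (the
   separator handling is an assumption, not the committed code):
   
   ```
   @Override
   public String toString() {
     final StringBuilder sb = new StringBuilder(super.toString());
     sb.append("AbfsInputStream@(").append(this.hashCode()).append("){");
     sb.append("[HADOOP-18546]");
     if (streamStatistics != null) {
       // statistics are optional; only the separator and body stay in the if
       sb.append(", ").append(streamStatistics.toString());
     }
     // closing brace now always emitted, outside the if block
     sb.append("}");
     return sb.toString();
   }
   ```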



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestReadBufferManager.java:
##
@@ -44,9 +44,23 @@
 import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_READ_AHEAD_QUEUE_DEPTH;
 import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.MIN_BUFFER_SIZE;
 import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.ONE_MB;
+import static org.apache.hadoop.test.LambdaTestUtils.eventually;
 
 public class ITestReadBufferManager extends AbstractAbfsIntegrationTest {
 
+  /**
+   * Time before the JUnit test times out for eventually() clauses
+   * to fail. This copes with slow network connections and debugging
+   * sessions, yet still allows for tests to fail with meaningful
+   * messages.
+   */
+  public static final int TIMEOUT_OFFSET = 5 * 60_000;
+
+  /**
+   * Interval between eventually preobes.

Review Comment:
   typo: "probes"



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java:
##
@@ -1636,6 +1636,11 @@ public boolean hasPathCapability(final Path path, final String capability)
       new TracingContext(clientCorrelationId, fileSystemId,
           FSOperationType.HAS_PATH_CAPABILITY, tracingHeaderFormat,
           listener));
+
+    // probe for presence of HADOOP-18546 fix.
+    case "hadoop-18546":

Review Comment:
   Naming the probe on a Hadoop Jira makes it a little difficult to understand 
it from the code directly. Should we have a general name for the probe related 
to the prefetch inconsistent reads and have the Hadoop jira mentioned in the 
comments only?
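   
   Whatever the probe ends up being named, client code would check it through
   the standard FileSystem capability API; a sketch (option name taken from the
   abfs configuration keys, the rest assumed):
   
   ```
   // Ask the filesystem whether the HADOOP-18546 fix is present before
   // trusting prefetch-on-close.
   FileSystem fs = FileSystem.get(uri, conf);
   if (!fs.hasPathCapability(new Path("/"), "hadoop-18546")) {
     // Vulnerable release: e.g. disable readahead entirely.
     conf.setInt("fs.azure.readaheadqueue.depth", 0);
   }
   ```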





> disable purging list of in progress reads in abfs stream closed
> ---
>
> Key: HADOOP-18546
> URL: https://issues.apache.org/jira/browse/HADOOP-18546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
>
> turn off the prune of in progress reads in 
> ReadBufferManager::purgeBuffersForStream
> this will ensure active prefetches for a closed stream complete. They will 
> then get to the completed list and hang around until evicted by timeout, but 
> at least prefetching will be safe.







[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5213: YARN-11391 Add yarn RM DNS support

2022-12-12 Thread GitBox


slfan1989 commented on code in PR #5213:
URL: https://github.com/apache/hadoop/pull/5213#discussion_r1046700624


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java:
##
@@ -262,6 +301,179 @@ public static String getRMHAId(Configuration conf) {
     return currentRMId;
   }
 
+  /**
+   * This function resolves all RMIds with their addresses. For multi-A DNS
+   * records, it will resolve all of them, and generate a new Id for each of
+   * them.
+   *
+   * @param conf Configuration
+   * @return Map with RMId as key and its address as value
+   */
+  public static Map<String, InetSocketAddress> getResolvedRMIdPairs(
+      Configuration conf) {
+    boolean resolveNeeded = conf.getBoolean(
+        YarnConfiguration.RESOLVE_RM_ADDRESS_NEEDED_KEY,
+        YarnConfiguration.RESOLVE_RM_ADDRESS_NEEDED_DEFAULT);
+    boolean requireFQDN = conf.getBoolean(
+        YarnConfiguration.RESOLVE_RM_ADDRESS_TO_FQDN,
+        YarnConfiguration.RESOLVE_RM_ADDRESS_TO_FQDN_DEFAULT);
+    // In case the client uses DIFFERENT addresses for each service address,
+    // they need to be categorized first
+    Map<List<String>, List<String>> addressesConfigKeysMap = new HashMap<>();
+    Collection<String> rmIds = getOriginalRMHAIds(conf);
+    for (String configKey : RM_ADDRESS_CONFIG_KEYS) {
+      List<String> addresses = new ArrayList<>();
+      for (String rmId : rmIds) {
+        String keyToRead = addSuffix(configKey, rmId);
+        InetSocketAddress address = getInetSocketAddressFromString(
+            conf.get(keyToRead));
+        if (address != null) {
+          addresses.add(address.getHostName());
+        }
+      }
+      Collections.sort(addresses);
+      List<String> configKeysOfTheseAddresses =
+          addressesConfigKeysMap.get(addresses);
+      if (configKeysOfTheseAddresses == null) {
+        configKeysOfTheseAddresses = new ArrayList<>();
+        addressesConfigKeysMap.put(addresses, configKeysOfTheseAddresses);
+      }
+      configKeysOfTheseAddresses.add(configKey);
+    }
+    // We need to resolve and override by group (categorized by their input
+    // host). But since the function is called from "getRMHAId", it only
+    // returns the value corresponding to YarnConfiguration.RM_ADDRESS
+    Map<String, InetSocketAddress> ret = null;
+    for (List<String> configKeys : addressesConfigKeysMap.values()) {
+      Map<String, InetSocketAddress> res = getResolvedIdPairs(conf,
+          resolveNeeded, requireFQDN, getOriginalRMHAIds(conf),
+          configKeys.get(0), YarnConfiguration.RM_HA_IDS, configKeys);
+      if (configKeys.contains(YarnConfiguration.RM_ADDRESS)) {
+        ret = res;
+      }
+    }
+    return ret;
+  }
+
+  private static Map<String, InetSocketAddress> getResolvedIdPairs(
+      Configuration conf, boolean resolveNeeded, boolean requireFQDN,
+      Collection<String> ids, String configKey, String configKeyToReplace,
+      List<String> listOfConfigKeysToReplace) {
+    Map<String, InetSocketAddress> idAddressPairs = new HashMap<>();
+    Map<String, String> generatedIdToOriginalId = new HashMap<>();
+    for (String id : ids) {
+      String key = addSuffix(configKey, id);
+      String addr = conf.get(key); // string with port
+      InetSocketAddress address = getInetSocketAddressFromString(addr);
+      if (address == null) {
+        continue;
+      }
+      if (resolveNeeded) {
+        if (dnr == null) {
+          setDnrByConfiguration(conf);
+        }
+        // If the address needs to be resolved, get all of the IP addresses
+        // from this address and pass them into the map
+        LOG.info("Multi-A domain name " + addr +
+            " will be resolved by " + dnr.getClass().getName());
+        int port = address.getPort();
+        String[] resolvedHostNames;
+        try {
+          resolvedHostNames = dnr.getAllResolvedHostnameByDomainName(
+              address.getHostName(), requireFQDN);
+        } catch (UnknownHostException e) {
+          LOG.warn("Exception in resolving socket address "
+              + address.getHostName(), e);
+          continue;
+        }
+        LOG.info("Resolved addresses for " + addr +
+            " is " + Arrays.toString(resolvedHostNames));
+        if (resolvedHostNames == null || resolvedHostNames.length < 1) {
+          LOG.warn("Cannot resolve from address " + address.getHostName());
+        } else {
+          // If multiple addresses resolved, a corresponding id needs to be
+          // created for each of them
+          for (int i = 0; i < resolvedHostNames.length; i++) {
+            String generatedRMId = id + "_resolved_" + (i + 1);
+            idAddressPairs.put(generatedRMId,
+                new InetSocketAddress(resolvedHostNames[i], port));
+            generatedIdToOriginalId.put(generatedRMId, id);
+          }
+        }
+        overrideIdsInConfiguration(
+            idAddressPairs, generatedIdToOriginalId, configKeyToReplace,
+            listOfConfigKeysToReplace, conf);
+      } else {
+        idAddressPairs.put(id, address);
+      }
+    }
+    return idAddressPairs;
+  }
+
+  /**
+   * This function overrides all RMIds and their addresses by the input Map.
+   *
+   * @par

[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5213: YARN-11391 Add yarn RM DNS support

2022-12-12 Thread GitBox


slfan1989 commented on code in PR #5213:
URL: https://github.com/apache/hadoop/pull/5213#discussion_r1046700480


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java:
##
@@ -262,6 +301,179 @@ public static String getRMHAId(Configuration conf) {
     return currentRMId;
   }
 
+  /**
+   * This function resolves all RMIds with their addresses. For multi-A DNS
+   * records, it will resolve all of them, and generate a new Id for each of
+   * them.
+   *
+   * @param conf Configuration
+   * @return Map with RMId as key and its address as value
+   */
+  public static Map<String, InetSocketAddress> getResolvedRMIdPairs(
+      Configuration conf) {
+    boolean resolveNeeded = conf.getBoolean(
+        YarnConfiguration.RESOLVE_RM_ADDRESS_NEEDED_KEY,
+        YarnConfiguration.RESOLVE_RM_ADDRESS_NEEDED_DEFAULT);
+    boolean requireFQDN = conf.getBoolean(
+        YarnConfiguration.RESOLVE_RM_ADDRESS_TO_FQDN,
+        YarnConfiguration.RESOLVE_RM_ADDRESS_TO_FQDN_DEFAULT);
+    // In case the client uses DIFFERENT addresses for each service address,
+    // they need to be categorized first
+    Map<List<String>, List<String>> addressesConfigKeysMap = new HashMap<>();
+    Collection<String> rmIds = getOriginalRMHAIds(conf);
+    for (String configKey : RM_ADDRESS_CONFIG_KEYS) {
+      List<String> addresses = new ArrayList<>();
+      for (String rmId : rmIds) {
+        String keyToRead = addSuffix(configKey, rmId);
+        InetSocketAddress address = getInetSocketAddressFromString(
+            conf.get(keyToRead));
+        if (address != null) {
+          addresses.add(address.getHostName());
+        }
+      }
+      Collections.sort(addresses);
+      List<String> configKeysOfTheseAddresses =
+          addressesConfigKeysMap.get(addresses);
+      if (configKeysOfTheseAddresses == null) {
+        configKeysOfTheseAddresses = new ArrayList<>();
+        addressesConfigKeysMap.put(addresses, configKeysOfTheseAddresses);
+      }
+      configKeysOfTheseAddresses.add(configKey);
+    }
+    // We need to resolve and override by group (categorized by their input
+    // host). But since the function is called from "getRMHAId", it only
+    // returns the value corresponding to YarnConfiguration.RM_ADDRESS
+    Map<String, InetSocketAddress> ret = null;
+    for (List<String> configKeys : addressesConfigKeysMap.values()) {
+      Map<String, InetSocketAddress> res = getResolvedIdPairs(conf,
+          resolveNeeded, requireFQDN, getOriginalRMHAIds(conf),
+          configKeys.get(0), YarnConfiguration.RM_HA_IDS, configKeys);
+      if (configKeys.contains(YarnConfiguration.RM_ADDRESS)) {
+        ret = res;
+      }
+    }
+    return ret;
+  }
+
+  private static Map<String, InetSocketAddress> getResolvedIdPairs(
+      Configuration conf, boolean resolveNeeded, boolean requireFQDN,
+      Collection<String> ids, String configKey, String configKeyToReplace,
+      List<String> listOfConfigKeysToReplace) {
+    Map<String, InetSocketAddress> idAddressPairs = new HashMap<>();
+    Map<String, String> generatedIdToOriginalId = new HashMap<>();
+    for (String id : ids) {
+      String key = addSuffix(configKey, id);
+      String addr = conf.get(key); // string with port
+      InetSocketAddress address = getInetSocketAddressFromString(addr);
+      if (address == null) {
+        continue;
+      }
+      if (resolveNeeded) {
+        if (dnr == null) {
+          setDnrByConfiguration(conf);
+        }
+        // If the address needs to be resolved, get all of the IP addresses
+        // from this address and pass them into the map
+        LOG.info("Multi-A domain name " + addr +

Review Comment:
   The log should use {} placeholders instead of string concatenation; we 
should follow the slf4j style.
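   
   For reference, a minimal sketch of the suggested slf4j parameterized form 
(same message, no concatenation):
   
   ```
   // slf4j expands the {} placeholders only when INFO is enabled, so the
   // message string is not built on the hot path otherwise.
   LOG.info("Multi-A domain name {} will be resolved by {}",
       addr, dnr.getClass().getName());
   ```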






[GitHub] [hadoop] Hexiaoqiao commented on pull request #5206: HDFS-16868 Fix audit log duplicate issue when an ACE occurs in FSNamesystem.

2022-12-12 Thread GitBox


Hexiaoqiao commented on PR #5206:
URL: https://github.com/apache/hadoop/pull/5206#issuecomment-1347731603

   Committed to trunk. @curie71 thanks for your contributions! @cnauroth Thanks 
for your reviews!





[GitHub] [hadoop] Hexiaoqiao merged pull request #5206: HDFS-16868 Fix audit log duplicate issue when an ACE occurs in FSNamesystem.

2022-12-12 Thread GitBox


Hexiaoqiao merged PR #5206:
URL: https://github.com/apache/hadoop/pull/5206





[GitHub] [hadoop] lnbest0707 closed pull request #5196: YARN-11391 Add yarn RM DNS support

2022-12-12 Thread GitBox


lnbest0707 closed pull request #5196: YARN-11391 Add yarn RM DNS support
URL: https://github.com/apache/hadoop/pull/5196





[GitHub] [hadoop] lnbest0707 commented on pull request #5196: YARN-11391 Add yarn RM DNS support

2022-12-12 Thread GitBox


lnbest0707 commented on PR #5196:
URL: https://github.com/apache/hadoop/pull/5196#issuecomment-1347678025

   Duplicate of #5213 
   
   @slfan1989 somehow my amending commit went to a new PR, as above. Please 
check that one and I will close this one. Sorry for the inconvenience.





[GitHub] [hadoop] lnbest0707 opened a new pull request, #5213: YARN-11391 Add yarn RM DNS support

2022-12-12 Thread GitBox


lnbest0707 opened a new pull request, #5213:
URL: https://github.com/apache/hadoop/pull/5213

   
   
   ### Description of PR
   The patch reuses the resolver introduced on the HDFS side and applies 
similar logic to resolving the YARN RM service addresses.
   To utilize the YARN DNS support and use a DNS name as the endpoint, simply 
upgrade the hadoop binary and revise the configs from, for example:
   ```
   <property>
     <name>yarn.resourcemanager.address.rm1</name>
     <value>rm1_address:8032</value>
   </property>
   <property>
     <name>yarn.resourcemanager.scheduler.address.rm1</name>
     <value>rm1_address:8030</value>
   </property>
   <property>
     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
     <value>rm1_address:8031</value>
   </property>
   <property>
     <name>yarn.resourcemanager.admin.address.rm1</name>
     <value>rm1_address:8033</value>
   </property>
   <property>
     <name>yarn.resourcemanager.webapp.address.rm1</name>
     <value>rm1_address:8088</value>
   </property>
   <property>
     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
     <value>rm1_address:8090</value>
   </property>
   <property>
     <name>yarn.resourcemanager.address.rm2</name>
     <value>rm2_address:8032</value>
   </property>
   <property>
     <name>yarn.resourcemanager.scheduler.address.rm2</name>
     <value>rm2_address:8030</value>
   </property>
   <property>
     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
     <value>rm2_address:8031</value>
   </property>
   <property>
     <name>yarn.resourcemanager.admin.address.rm2</name>
     <value>rm2_address:8033</value>
   </property>
   <property>
     <name>yarn.resourcemanager.webapp.address.rm2</name>
     <value>rm2_address:8088</value>
   </property>
   <property>
     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
     <value>rm2_address:8090</value>
   </property>
   <property>
     <name>yarn.resourcemanager.ha.rm-ids</name>
     <value>rm1,rm2</value>
   </property>
   ```
   
   to:
   ```
   <property>
     <name>yarn.resourcemanager.address.rm1</name>
     <value>rm_multi_a_dns:8032</value>
   </property>
   <property>
     <name>yarn.resourcemanager.scheduler.address.rm1</name>
     <value>rm_multi_a_dns:8030</value>
   </property>
   <property>
     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
     <value>rm_multi_a_dns:8031</value>
   </property>
   <property>
     <name>yarn.resourcemanager.admin.address.rm1</name>
     <value>rm_multi_a_dns:8033</value>
   </property>
   <property>
     <name>yarn.resourcemanager.webapp.address.rm1</name>
     <value>rm_multi_a_dns:8088</value>
   </property>
   <property>
     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
     <value>rm_multi_a_dns:8090</value>
   </property>
   <property>
     <name>yarn.resourcemanager.ha.rm-ids</name>
     <value>rm1</value>
   </property>
   <property>
     <name>yarn.resourcemanager.ha.resolve-needed</name>
     <value>true</value>
   </property>
   <property>
     <name>yarn.resourcemanager.ha.resolver.useFQDN</name>
     <value>true</value> <!-- required in secure mode -->
   </property>
   <property>
     <name>yarn.resourcemanager.ha.refresh-period-ms</name>
     <value>180000</value> <!-- 3 min -->
   </property>
   ```
   where rm_multi_a_dns is a multi-A DNS record pointing at rm1_address and 
rm2_address. This means the following output on the terminal:
   
   ```
   $ dig +short rm_multi_a_dns | xargs -n 1 dig +short -x | sort
   rm1_address
   rm2_address
   ```
   For the newly introduced flags, please refer to yarn-default.xml.
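   
   As background, the JDK itself already exposes all A records for a name; a 
standalone sketch (not part of the patch) of what multi-A resolution yields:
   
   ```
   import java.net.InetAddress;
   import java.net.UnknownHostException;
   
   public class MultiADnsDemo {
     public static void main(String[] args) throws UnknownHostException {
       // "rm_multi_a_dns" is the example record from the configs above; each
       // A record is one RM host, for which the patch generates ids such as
       // rm1_resolved_1, rm1_resolved_2, ...
       for (InetAddress a : InetAddress.getAllByName("rm_multi_a_dns")) {
         System.out.println(a.getHostAddress());
       }
     }
   }
   ```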
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[jira] [Commented] (HADOOP-18526) Leak of S3AInstrumentation instances via hadoop Metrics references

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646351#comment-17646351
 ] 

ASF GitHub Bot commented on HADOOP-18526:
-

mukund-thakur commented on code in PR #5144:
URL: https://github.com/apache/hadoop/pull/5144#discussion_r1046453289


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##
@@ -459,6 +458,13 @@ public void initialize(URI name, Configuration 
originalConf)
 AuditSpan span = null;
 try {
   LOG.debug("Initializing S3AFileSystem for {}", bucket);
+  if (LOG.isTraceEnabled()) {
+// log a full trace for deep diagnostics of where an object is created,
+// for tracking down memory leak issues.
+LOG.trace("Filesystem for {} created; fs.s3a.impl.disable.cache = {}",
+name, originalConf.getBoolean("fs.s3a.impl.disable.cache", false),
+new RuntimeException(super.toString()));

Review Comment:
   Why not just print it? I mean, I don't understand the reason behind wrapping 
it in a RuntimeException. 
   Also, the base FileSystem doesn't implement toString(), so there won't be 
anything there. Why not use this.toString()?
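   
   For context on the pattern under discussion: slf4j prints the stack trace of 
a Throwable passed as the final argument, which is presumably why the patch 
wraps the message in an exception; a sketch:
   
   ```
   // Nothing is thrown: the exception only carries the current call stack,
   // which slf4j prints because the Throwable is the last argument. TRACE
   // logs then show exactly where the filesystem instance was constructed.
   if (LOG.isTraceEnabled()) {
     LOG.trace("S3AFileSystem instance created",
         new RuntimeException("creation stack capture"));
   }
   ```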





> Leak of S3AInstrumentation instances via hadoop Metrics references
> --
>
> Key: HADOOP-18526
> URL: https://issues.apache.org/jira/browse/HADOOP-18526
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> A heap dump of a process running OOM shows that if a process creates then 
> destroys lots of S3AFS instances, you seem to run out of heap due to 
> references to S3AInstrumentation and the IOStatisticsStore kept via the 
> hadoop metrics registry
> It doesn't look like S3AInstrumentation.close() is being invoked in 
> S3AFS.close(). it should -with the IOStats being snapshotted to a local 
> reference before this happens. This allows for stats of a closed fs to be 
> examined.
> If you look at org.apache.hadoop.ipc.DecayRpcScheduler.MetricsProxy it uses a 
> WeakReference to refer back to the larger object. we should do the same for 
> abfs/s3a bindings. ideally do some template proxy class in hadoop common they 
> can both use.
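
A minimal sketch of the WeakReference pattern the description points to (class 
and field names are illustrative, not the DecayRpcScheduler code):

```
import java.lang.ref.WeakReference;
import org.apache.hadoop.metrics2.MetricsCollector;
import org.apache.hadoop.metrics2.MetricsSource;

/** Illustrative only; a shared proxy for abfs/s3a would look similar. */
final class WeakMetricsProxy implements MetricsSource {

  /** Weak link: the metrics registry no longer keeps the source alive. */
  private final WeakReference<MetricsSource> delegate;

  WeakMetricsProxy(MetricsSource source) {
    this.delegate = new WeakReference<>(source);
  }

  @Override
  public void getMetrics(MetricsCollector collector, boolean all) {
    MetricsSource source = delegate.get();
    if (source != null) {
      // Forward while the underlying instrumentation is still reachable.
      source.getMetrics(collector, all);
    }
    // Once the source is garbage collected, this proxy emits nothing and
    // can itself be unregistered from the metrics system.
  }
}
```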









[jira] [Commented] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646337#comment-17646337
 ] 

ASF GitHub Bot commented on HADOOP-18569:
-

hadoop-yetus commented on PR #5212:
URL: https://github.com/apache/hadoop/pull/5212#issuecomment-1347368268

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  17m 16s |  |  Docker mode activated.  |
   || _ Prechecks _ ||
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   || _ trunk Compile Tests _ ||
   | +1 :green_heart: |  mvninstall  |  39m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   || _ Patch Compile Tests _ ||
   | +1 :green_heart: |  mvninstall  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
   || _ Other Tests _ ||
   | +1 :green_heart: |  unit  |   1m  2s |  |  hadoop-nfs in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 198m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5212/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5212 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 065ae8968491 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 078506fe4581e559d59030044adf4e8a13332735 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5212/1/testReport/ |
   | Max. process+thread count | 560 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-nfs U: 
hadoop-common-project/hadoop-nfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5212/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
  


[jira] [Commented] (HADOOP-18546) disable purging list of in progress reads in abfs stream closed

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646332#comment-17646332
 ] 

ASF GitHub Bot commented on HADOOP-18546:
-

steveloughran commented on PR #5205:
URL: https://github.com/apache/hadoop/pull/5205#issuecomment-1347337424

   @snvijaya @mukund-thakur @mehakmeet can I get a review of this - I want this 
in so there is a programmatic check for the presence of the fix. I'm adding a 
"safeprefetch" command to cloudstore which will identify when an abfs release 
has the bug (everything with etag_aware), has the fix (the new probe) and, if 
vulnerable, review the options, printing out the correct settings in XML and 
Spark conf. We need this probe for it to see when things are good.
   
   
https://github.com/steveloughran/cloudstore/blob/trunk/src/main/java/org/apache/hadoop/fs/store/abfs/SafePrefetch.java




> disable purging list of in progress reads in abfs stream closed
> ---
>
> Key: HADOOP-18546
> URL: https://issues.apache.org/jira/browse/HADOOP-18546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
>
> turn off the prune of in progress reads in 
> ReadBufferManager::purgeBuffersForStream
> this will ensure active prefetches for a closed stream complete. They will 
> then get to the completed list and hang around until evicted by timeout, but 
> at least prefetching will be safe.









[jira] [Commented] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646325#comment-17646325
 ] 

ASF GitHub Bot commented on HADOOP-18569:
-

hadoop-yetus commented on PR #5211:
URL: https://github.com/apache/hadoop/pull/5211#issuecomment-1347284189

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  10m 54s |  |  Docker mode activated.  |
   || _ Prechecks _ ||
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   || _ branch-3.3 Compile Tests _ ||
   | +1 :green_heart: |  mvninstall  |  39m 17s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  18m 51s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  25m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
   || _ Patch Compile Tests _ ||
   | +1 :green_heart: |  mvninstall  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 59s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m 59s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
   || _ Other Tests _ ||
   | +1 :green_heart: |  unit  |   0m 50s |  |  hadoop-nfs in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 147m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5211/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5211 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9d6180c04353 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 3340c89772c4841cb49401225b0d607024f86d9a |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5211/1/testReport/ |
   | Max. process+thread count | 574 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-nfs U: 
hadoop-common-project/hadoop-nfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5211/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> NFS Gateway may release buffer too early
> 
>
> Key: HADOOP-18569
> URL: https://issues.apache.org/jira/browse/HADOOP-18569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 3.4.0, 3.3.5, 3.2.5, 3.3.9
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>  Labels: pull-request-available
>
> After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
> started crashing when writing data (can be easily reproduced by a few 10MB+ 
> files).  The problem was triggered by [reduced default chunk size in 
> PooledByteBufAllocator|https://github.com/netty/netty/commit/f650303911] (in 
> 4.1.75), but it turned out to be caused by a buffer released to




[jira] [Commented] (HADOOP-18073) Upgrade AWS SDK to v2

2022-12-12 Thread Mukund Thakur (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646318#comment-17646318
 ] 

Mukund Thakur commented on HADOOP-18073:


Looks good to me. Please re-run all the tests here 
[https://github.com/ahmarsuhail/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractVectoredRead.java]
 just to be sure. 

Also think about https://issues.apache.org/jira/browse/HADOOP-17338, an old 
related issue, as the response of getObject has changed.

> Upgrade AWS SDK to v2
> -
>
> Key: HADOOP-18073
> URL: https://issues.apache.org/jira/browse/HADOOP-18073
> Project: Hadoop Common
>  Issue Type: Task
>  Components: auth, fs/s3
>Affects Versions: 3.3.1
>Reporter: xiaowei sun
>Assignee: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
> Attachments: Upgrading S3A to SDKV2.pdf
>
>
> This task tracks upgrading Hadoop's AWS connector S3A from AWS SDK for Java 
> V1 to AWS SDK for Java V2.
> Original use case:
> {quote}We would like to access s3 with AWS SSO, which is supported in 
> software.amazon.awssdk:sdk-core:2.*.
> In particular, from 
> [https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html],
>  when setting 'fs.s3a.aws.credentials.provider', it must be 
> "com.amazonaws.auth.AWSCredentialsProvider". We would like to support 
> "software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider" which 
> supports AWS SSO, so users only need to authenticate once.
> {quote}
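
For illustration, a minimal sketch of what the requested workflow could look 
like once the connector accepts SDK v2 provider classes (the provider class 
name is quoted from the request above; accepting it via 
fs.s3a.aws.credentials.provider is the goal of this upgrade, not current 
behaviour):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SsoCredentialsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumption: post-upgrade S3A accepts v2 provider class names here.
    // Today the property only takes v1 AWSCredentialsProvider implementations.
    conf.set("fs.s3a.aws.credentials.provider",
        "software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider");
    try (FileSystem fs = FileSystem.get(URI.create("s3a://my-bucket/"), conf)) {
      fs.listStatus(new Path("/"));  // first call triggers authentication
    }
  }
}
{code}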



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18546) disable purging list of in progress reads in abfs stream closed

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646279#comment-17646279
 ] 

ASF GitHub Bot commented on HADOOP-18546:
-

steveloughran commented on PR #5205:
URL: https://github.com/apache/hadoop/pull/5205#issuecomment-1347074480

   Ignoring the javadocs, I believe this is ready. Please can I get reviews, as 
I consider this a blocker for the 3.3.5 release - I need that api probe




> disable purging list of in progress reads in abfs stream closed
> ---
>
> Key: HADOOP-18546
> URL: https://issues.apache.org/jira/browse/HADOOP-18546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
>
> turn off the prune of in progress reads in 
> ReadBufferManager::purgeBuffersForStream
> this will ensure active prefetches for a closed stream complete. they will 
> then get to the completed list and hang around until evicted by timeout, but 
> at least prefetching will be safe.
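
As a rough illustration of the behaviour described above (a sketch with made-up 
names, not the real ReadBufferManager code):

{code:java}
import java.util.ArrayList;
import java.util.List;

class PrefetchPurgeSketch {
  static final class ReadBuffer {
    final Object stream;  // the input stream that owns this buffer
    ReadBuffer(Object stream) { this.stream = stream; }
  }

  private final List<ReadBuffer> completedList = new ArrayList<>();

  /** Called when a stream is closed. */
  synchronized void purgeBuffersForStream(Object closedStream) {
    // Completed buffers are safe to drop: no worker thread touches them.
    completedList.removeIf(b -> b.stream == closedStream);
    // In-progress buffers are deliberately left alone: an active prefetch
    // is still writing into them. Each moves to completedList when its
    // read finishes and is evicted later by the normal timeout, which
    // wastes some memory for a while but never frees a buffer in use.
  }
}
{code}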



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5205: HADOOP-18546. log/probes of HADOOP-18546 presence.

2022-12-12 Thread GitBox


steveloughran commented on PR #5205:
URL: https://github.com/apache/hadoop/pull/5205#issuecomment-1347074480

   Ignoring the javadocs, I believe this is ready. Please can I get reviews, as 
I consider this a blocker for the 3.3.5 release - I need that api probe


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2022-12-12 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-18399:
--
Target Version/s: 3.4.0
  Status: Patch Available  (was: In Progress)

> SingleFilePerBlockCache to use LocalDirAllocator for file allocation
> 
>
> Key: HADOOP-18399
> URL: https://issues.apache.org/jira/browse/HADOOP-18399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> prefetching stream's SingleFilePerBlockCache uses Files.tempFile() to 
> allocate a temp file.
> it should be using LocalDirAllocator to allocate space from a list of dirs, 
> taking a config key to use. for s3a we will use the Constants.BUFFER_DIR 
> option, which on yarn deployments is fixed under the env.LOCAL_DIR path, so 
> automatically cleaned up on container exit
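
A minimal sketch of the proposed allocation (method and file names here are 
illustrative; the actual change is in the linked PR), using the existing 
org.apache.hadoop.fs.LocalDirAllocator API keyed off fs.s3a.buffer.dir:

{code:java}
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;

public class BlockFileAllocationSketch {
  // Constants.BUFFER_DIR in hadoop-aws
  private static final String BUFFER_DIR = "fs.s3a.buffer.dir";

  static File allocateBlockFile(Configuration conf, long blockSize)
      throws IOException {
    // Picks a directory with enough free space from the configured list;
    // under YARN these dirs live below the container's local dirs, so
    // the files are cleaned up automatically on container exit.
    LocalDirAllocator allocator = new LocalDirAllocator(BUFFER_DIR);
    return allocator.createTmpFileForWrite("prefetch-block", blockSize, conf);
  }
}
{code}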



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2022-12-12 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-18399:
--
Summary: SingleFilePerBlockCache to use LocalDirAllocator for file 
allocation  (was: SingleFilePerBlockCache to use LocalDirAllocator for file 
allocatoin)

> SingleFilePerBlockCache to use LocalDirAllocator for file allocation
> 
>
> Key: HADOOP-18399
> URL: https://issues.apache.org/jira/browse/HADOOP-18399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> prefetching stream's SingleFilePerBlockCache uses Files.tempFile() to 
> allocate a temp file.
> it should be using LocalDirAllocator to allocate space from a list of dirs, 
> taking a config key to use. for s3a we will use the Constants.BUFFER_DIR 
> option, which on yarn deployments is fixed under the env.LOCAL_DIR path, so 
> automatically cleaned up on container exit



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18568) Magic Committer optional clean up

2022-12-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646271#comment-17646271
 ] 

Steve Loughran commented on HADOOP-18568:
-

another thought - what if the {{loadAndCommit}} operation invoked on each task 
attempt manifest (to load that file and POST-commit all its pending uploads) 
did the delete of its task attempt dir as it went along? It'd be adding 1 LIST 
plus the (bulk) DELETE, so 2 write calls per file, but it would be incremental 
and not that serialized/paged deep tree delete.

interesting question as to where the threshold for switching between 
delete-in-job and delete-afterwards is reached. that single dir list is 1 LIST 
per 1000 objects and one bulk DELETE per 250 files (configurable BTW... set it 
to 1000 and there'd be fewer calls, but still 1000 write ops of capacity used 
up). the bulk delete is serialized now (it can overload the store, which is why 
we've never really tried to go overboard there, especially as with s3guard we 
had to handle partial failures too)

[~andre.amorimfons...@gmail.com] try a job with fs.s3a.bulk.delete.page.size 
set to 1000 and see how much faster it gets?
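
For anyone trying that suggestion, a minimal sketch of setting the option (the 
spark.hadoop. prefix is the usual way to pass Hadoop options through Spark):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class BulkDeletePageSizeSketch {
  public static Configuration tunedConf() {
    Configuration conf = new Configuration();
    // Default is 250 keys per bulk DELETE; 1000 is the S3 API maximum,
    // so a big cleanup issues a quarter as many delete calls.
    // Spark equivalent:
    //   --conf spark.hadoop.fs.s3a.bulk.delete.page.size=1000
    conf.setInt("fs.s3a.bulk.delete.page.size", 1000);
    return conf;
  }
}
{code}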

> Magic Committer optional clean up 
> --
>
> Key: HADOOP-18568
> URL: https://issues.apache.org/jira/browse/HADOOP-18568
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: fs/s3
>Affects Versions: 3.3.3
>Reporter: André F.
>Priority: Minor
>
> It seems that deleting the `__magic` folder, depending on the number of 
> tasks/partitions used on a given spark job, can take a really long time. I'm 
> having the following behavior on a given Spark job (processing ~30TB, with 
> ~420k tasks) using the magic committer:
> {code:java}
> 2022-12-10T21:25:19.629Z pool-3-thread-32 INFO MagicS3GuardCommitter: 
> Starting: Deleting magic directory s3a://my-bucket/random_hash/__magic
> 2022-12-10T21:52:03.250Z pool-3-thread-32 INFO MagicS3GuardCommitter: 
> Deleting magic directory s3a://my-bucket/random_hash/__magic: duration 
> 26:43.620s {code}
> I don't see a way out of it since the deletion of s3 objects needs to list 
> all objects under a prefix and this is what may be taking too much time. 
> Could we somehow make this cleanup optional? (the idea would be to delegate 
> it through s3 lifecycle policies in order to not create this overhead on the 
> commit phase).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646259#comment-17646259
 ] 

ASF GitHub Bot commented on HADOOP-18569:
-

adoroszlai opened a new pull request, #5212:
URL: https://github.com/apache/hadoop/pull/5212

   ## What changes were proposed in this pull request?
   
   cherry-picking df4812df65d01889ba93bce1415e01461500208d 
   
   https://issues.apache.org/jira/browse/HADOOP-18569




> NFS Gateway may release buffer too early
> 
>
> Key: HADOOP-18569
> URL: https://issues.apache.org/jira/browse/HADOOP-18569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 3.4.0, 3.3.5, 3.2.5, 3.3.9
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>  Labels: pull-request-available
>
> After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
> started crashing when writing data (can be easily reproduced by a few 10MB+ 
> files).  The problem was triggered by [reduced default chunk size in 
> PooledByteBufAllocator|https://github.com/netty/netty/commit/f650303911] (in 
> 4.1.75), but it turned out to be caused by a buffer released too early in NFS 
> Gateway.
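
As background on this class of bug (a sketch of the general pattern, not the 
actual fix in the PRs above; handler names are made up): with reference-counted 
Netty buffers, data handed to asynchronous work must stay retained until that 
work completes.

{code:java}
import java.util.concurrent.Executor;
import io.netty.buffer.ByteBuf;
import io.netty.util.ReferenceCountUtil;

final class AsyncWriteSketch {
  void handleWrite(ByteBuf data, Executor executor) {
    data.retain();  // keep the buffer alive across the async boundary
    executor.execute(() -> {
      try {
        processWrite(data);  // hypothetical worker that reads the bytes
      } finally {
        ReferenceCountUtil.release(data);  // release exactly once, when done
      }
    });
    // Without the retain(), an earlier release() can hand the buffer back
    // to the pool while processWrite() is still reading it. The smaller
    // default pool chunks in Netty 4.1.75+ make that reuse, and hence the
    // crash, far more likely to surface.
  }

  private void processWrite(ByteBuf data) { /* ... */ }
}
{code}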



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646258#comment-17646258
 ] 

ASF GitHub Bot commented on HADOOP-18569:
-

adoroszlai opened a new pull request, #5211:
URL: https://github.com/apache/hadoop/pull/5211

   ## What changes were proposed in this pull request?
   
   cherry-picking df4812df65d01889ba93bce1415e01461500208d 
   
   https://issues.apache.org/jira/browse/HADOOP-18569




> NFS Gateway may release buffer too early
> 
>
> Key: HADOOP-18569
> URL: https://issues.apache.org/jira/browse/HADOOP-18569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 3.4.0, 3.3.5, 3.2.5, 3.3.9
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>  Labels: pull-request-available
>
> After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
> started crashing when writing data (can be easily reproduced by a few 10MB+ 
> files).  The problem was triggered by [reduced default chunk size in 
> PooledByteBufAllocator|https://github.com/netty/netty/commit/f650303911] (in 
> 4.1.75), but it turned out to be caused by a buffer released too early in NFS 
> Gateway.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai opened a new pull request, #5212: HADOOP-18569. NFS Gateway may release buffer too early

2022-12-12 Thread GitBox


adoroszlai opened a new pull request, #5212:
URL: https://github.com/apache/hadoop/pull/5212

   ## What changes were proposed in this pull request?
   
   cherry-picking df4812df65d01889ba93bce1415e01461500208d 
   
   https://issues.apache.org/jira/browse/HADOOP-18569


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai opened a new pull request, #5211: HADOOP-18569. NFS Gateway may release buffer too early

2022-12-12 Thread GitBox


adoroszlai opened a new pull request, #5211:
URL: https://github.com/apache/hadoop/pull/5211

   ## What changes were proposed in this pull request?
   
   cherry-picking df4812df65d01889ba93bce1415e01461500208d 
   
   https://issues.apache.org/jira/browse/HADOOP-18569


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646257#comment-17646257
 ] 

ASF GitHub Bot commented on HADOOP-18569:
-

adoroszlai opened a new pull request, #5210:
URL: https://github.com/apache/hadoop/pull/5210

   ## What changes were proposed in this pull request?
   
   cherry-picking df4812df65d01889ba93bce1415e01461500208d 
   
   https://issues.apache.org/jira/browse/HADOOP-18569




> NFS Gateway may release buffer too early
> 
>
> Key: HADOOP-18569
> URL: https://issues.apache.org/jira/browse/HADOOP-18569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 3.4.0, 3.3.5, 3.2.5, 3.3.9
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>  Labels: pull-request-available
>
> After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
> started crashing when writing data (can be easily reproduced by a few 10MB+ 
> files).  The problem was triggered by [reduced default chunk size in 
> PooledByteBufAllocator|https://github.com/netty/netty/commit/f650303911] (in 
> 4.1.75), but it turned out to be caused by a buffer released too early in NFS 
> Gateway.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai opened a new pull request, #5210: HADOOP-18569. NFS Gateway may release buffer too early

2022-12-12 Thread GitBox


adoroszlai opened a new pull request, #5210:
URL: https://github.com/apache/hadoop/pull/5210

   ## What changes were proposed in this pull request?
   
   cherry-picking df4812df65d01889ba93bce1415e01461500208d 
   
   https://issues.apache.org/jira/browse/HADOOP-18569


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646252#comment-17646252
 ] 

ASF GitHub Bot commented on HADOOP-18569:
-

adoroszlai commented on PR #5207:
URL: https://github.com/apache/hadoop/pull/5207#issuecomment-1346967676

   Thanks @steveloughran, @szetszwo for the review.




> NFS Gateway may release buffer too early
> 
>
> Key: HADOOP-18569
> URL: https://issues.apache.org/jira/browse/HADOOP-18569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 3.4.0, 3.3.5, 3.2.5, 3.3.9
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>  Labels: pull-request-available
>
> After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
> started crashing when writing data (can be easily reproduced by a few 10MB+ 
> files).  The problem was triggered by [reduced default chunk size in 
> PooledByteBufAllocator|https://github.com/netty/netty/commit/f650303911] (in 
> 4.1.75), but it turned out to be caused by a buffer released too early in NFS 
> Gateway.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on pull request #5207: HADOOP-18569. NFS Gateway may release buffer too early

2022-12-12 Thread GitBox


adoroszlai commented on PR #5207:
URL: https://github.com/apache/hadoop/pull/5207#issuecomment-1346967676

   Thanks @steveloughran, @szetszwo for the review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646251#comment-17646251
 ] 

ASF GitHub Bot commented on HADOOP-18569:
-

adoroszlai merged PR #5207:
URL: https://github.com/apache/hadoop/pull/5207




> NFS Gateway may release buffer too early
> 
>
> Key: HADOOP-18569
> URL: https://issues.apache.org/jira/browse/HADOOP-18569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 3.4.0, 3.3.5, 3.2.5, 3.3.9
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>  Labels: pull-request-available
>
> After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
> started crashing when writing data (can be easily reproduced by a few 10MB+ 
> files).  The problem was triggered by [reduced default chunk size in 
> PooledByteBufAllocator|https://github.com/netty/netty/commit/f650303911] (in 
> 4.1.75), but it turned out to be caused by a buffer released too early in NFS 
> Gateway.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai merged pull request #5207: HADOOP-18569. NFS Gateway may release buffer too early

2022-12-12 Thread GitBox


adoroszlai merged PR #5207:
URL: https://github.com/apache/hadoop/pull/5207


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18546) disable purging list of in progress reads in abfs stream closed

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646242#comment-17646242
 ] 

ASF GitHub Bot commented on HADOOP-18546:
-

steveloughran commented on PR #5205:
URL: https://github.com/apache/hadoop/pull/5205#issuecomment-1346942996

   @slfan1989 wow, big problem...going to need a lot of changes in the code, 
even if it's only the package-info.java files that need them. 




> disable purging list of in progress reads in abfs stream closed
> ---
>
> Key: HADOOP-18546
> URL: https://issues.apache.org/jira/browse/HADOOP-18546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
>
> turn off the prune of in progress reads in 
> ReadBufferManager::purgeBuffersForStream
> this will ensure active prefetches for a closed stream complete. they will 
> then get to the completed list and hang around until evicted by timeout, but 
> at least prefetching will be safe.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5205: HADOOP-18546. log/probes of HADOOP-18546 presence.

2022-12-12 Thread GitBox


steveloughran commented on PR #5205:
URL: https://github.com/apache/hadoop/pull/5205#issuecomment-1346942996

   @slfan1989 wow, big problem...going to need a lot of changes in the code, 
even if it's only the package-info.java files that need them. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18329) Add support for IBM Semeru OE JRE 11.0.15.0 and greater

2022-12-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-18329:
---

Assignee: Jack

> Add support for IBM Semeru OE JRE 11.0.15.0 and greater
> ---
>
> Key: HADOOP-18329
> URL: https://issues.apache.org/jira/browse/HADOOP-18329
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, common
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.2.0, 3.0.2, 3.1.1, 3.0.3, 3.3.0, 
> 3.1.2, 3.2.1, 3.1.3, 3.1.4, 3.2.2, 3.3.1, 3.2.3, 3.3.2, 3.3.3
> Environment: Running Hadoop (or Apache Spark 3.2.1 or above) on IBM 
> Semeru runtimes open edition 11.0.15.0 or greater.
>Reporter: Jack
>Assignee: Jack
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
>   Original Estimate: 1h
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> There are checks within the PlatformName class that use the Vendor property 
> of the provided runtime JVM specifically looking for `IBM` within the name. 
> Whilst this check worked for IBM's [java technology 
> edition|https://www.ibm.com/docs/en/sdk-java-technology] it fails to work on 
> [Semeru|https://developer.ibm.com/languages/java/semeru-runtimes/] since 
> 11.0.15.0 due to the following change:
> h4. java.vendor system property
> In this release, the {{java.vendor}} system property has been changed from 
> "International Business Machines Corporation" to "IBM Corporation".
> Modules such as the below are not provided in these runtimes.
> com.ibm.security.auth.module.JAASLoginModule
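
A sketch of the detection this implies (illustrative only; the real logic lives 
in org.apache.hadoop.util.PlatformName): a vendor-name match alone no longer 
distinguishes runtimes that ship the IBM JAAS modules from Semeru, which now 
reports "IBM Corporation" but does not provide them, so probing for the class 
is the safer check.

{code:java}
final class IbmJvmDetectionSketch {
  private static final String VENDOR = System.getProperty("java.vendor", "");

  static final boolean HAS_IBM_JAAS =
      hasClass("com.ibm.security.auth.module.JAASLoginModule");

  // Treat the JVM as "IBM Java" only when the IBM-specific modules exist.
  static final boolean IBM_JAVA = VENDOR.contains("IBM") && HAS_IBM_JAAS;

  private static boolean hasClass(String name) {
    try {
      Class.forName(name, false,
          IbmJvmDetectionSketch.class.getClassLoader());
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    }
  }
}
{code}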



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18329) Add support for IBM Semeru OE JRE 11.0.15.0 and greater

2022-12-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18329.
-
Fix Version/s: 3.4.0
   3.3.5
   Resolution: Fixed

fixed in 3.3.5+

if you need it in branch-3.2, reopen this and submit a new PR

> Add support for IBM Semeru OE JRE 11.0.15.0 and greater
> ---
>
> Key: HADOOP-18329
> URL: https://issues.apache.org/jira/browse/HADOOP-18329
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, common
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.2.0, 3.0.2, 3.1.1, 3.0.3, 3.3.0, 
> 3.1.2, 3.2.1, 3.1.3, 3.1.4, 3.2.2, 3.3.1, 3.2.3, 3.3.2, 3.3.3
> Environment: Running Hadoop (or Apache Spark 3.2.1 or above) on IBM 
> Semeru runtimes open edition 11.0.15.0 or greater.
>Reporter: Jack
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
>   Original Estimate: 1h
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> There are checks within the PlatformName class that use the Vendor property 
> of the provided runtime JVM specifically looking for `IBM` within the name. 
> Whilst this check worked for IBM's [java technology 
> edition|https://www.ibm.com/docs/en/sdk-java-technology] it fails to work on 
> [Semeru|https://developer.ibm.com/languages/java/semeru-runtimes/] since 
> 11.0.15.0 due to the following change:
> h4. java.vendor system property
> In this release, the {{java.vendor}} system property has been changed from 
> "International Business Machines Corporation" to "IBM Corporation".
> Modules such as the below are not provided in these runtimes.
> com.ibm.security.auth.module.JAASLoginModule



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18329) Add support for IBM Semeru OE JRE 11.0.15.0 and greater

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646235#comment-17646235
 ] 

ASF GitHub Bot commented on HADOOP-18329:
-

steveloughran merged PR #5208:
URL: https://github.com/apache/hadoop/pull/5208




> Add support for IBM Semeru OE JRE 11.0.15.0 and greater
> ---
>
> Key: HADOOP-18329
> URL: https://issues.apache.org/jira/browse/HADOOP-18329
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, common
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.2.0, 3.0.2, 3.1.1, 3.0.3, 3.3.0, 
> 3.1.2, 3.2.1, 3.1.3, 3.1.4, 3.2.2, 3.3.1, 3.2.3, 3.3.2, 3.3.3
> Environment: Running Hadoop (or Apache Spark 3.2.1 or above) on IBM 
> Semeru runtimes open edition 11.0.15.0 or greater.
>Reporter: Jack
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> There are checks within the PlatformName class that use the Vendor property 
> of the provided runtime JVM specifically looking for `IBM` within the name. 
> Whilst this check worked for IBM's [java technology 
> edition|https://www.ibm.com/docs/en/sdk-java-technology] it fails to work on 
> [Semeru|https://developer.ibm.com/languages/java/semeru-runtimes/] since 
> 11.0.15.0 due to the following change:
> h4. java.vendor system property
> In this release, the {{java.vendor}} system property has been changed from 
> "International Business Machines Corporation" to "IBM Corporation".
> Modules such as the below are not provided in these runtimes.
> com.ibm.security.auth.module.JAASLoginModule



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #5208: HADOOP-18329 - Support for IBM Semeru JVM v>11.0.15.0 Vendor Name Changes

2022-12-12 Thread GitBox


steveloughran merged PR #5208:
URL: https://github.com/apache/hadoop/pull/5208


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4572: HADOOP-18330-S3AFileSystem removes Path when calling createS3Client

2022-12-12 Thread GitBox


steveloughran commented on PR #4572:
URL: https://github.com/apache/hadoop/pull/4572#issuecomment-1346920739

   look at the jira; coming in 3.3.5, which will be at release candidate 0 this 
week https://issues.apache.org/jira/browse/HADOOP-18330


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18329) Add support for IBM Semeru OE JRE 11.0.15.0 and greater

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646230#comment-17646230
 ] 

ASF GitHub Bot commented on HADOOP-18329:
-

steveloughran commented on PR #4537:
URL: https://github.com/apache/hadoop/pull/4537#issuecomment-1346920543

   get it into 3.3 and i will pull to 3.3.5, they are almost identical. if 
there *are* merge problems again, then we can worry about it




> Add support for IBM Semeru OE JRE 11.0.15.0 and greater
> ---
>
> Key: HADOOP-18329
> URL: https://issues.apache.org/jira/browse/HADOOP-18329
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, common
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.2.0, 3.0.2, 3.1.1, 3.0.3, 3.3.0, 
> 3.1.2, 3.2.1, 3.1.3, 3.1.4, 3.2.2, 3.3.1, 3.2.3, 3.3.2, 3.3.3
> Environment: Running Hadoop (or Apache Spark 3.2.1 or above) on IBM 
> Semeru runtimes open edition 11.0.15.0 or greater.
>Reporter: Jack
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> There are checks within the PlatformName class that use the Vendor property 
> of the provided runtime JVM specifically looking for `IBM` within the name. 
> Whilst this check worked for IBM's [java technology 
> edition|https://www.ibm.com/docs/en/sdk-java-technology] it fails to work on 
> [Semeru|https://developer.ibm.com/languages/java/semeru-runtimes/] since 
> 11.0.15.0 due to the following change:
> h4. java.vendor system property
> In this release, the {{java.vendor}} system property has been changed from 
> "International Business Machines Corporation" to "IBM Corporation".
> Modules such as the below are not provided in these runtimes.
> com.ibm.security.auth.module.JAASLoginModule



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18330) S3AFileSystem removes Path when calling createS3Client

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646231#comment-17646231
 ] 

ASF GitHub Bot commented on HADOOP-18330:
-

steveloughran commented on PR #4572:
URL: https://github.com/apache/hadoop/pull/4572#issuecomment-1346920739

   look at the jira; coming in 3.3.5, which will be at release candidate 0 this 
week https://issues.apache.org/jira/browse/HADOOP-18330




> S3AFileSystem removes Path when calling createS3Client
> --
>
> Key: HADOOP-18330
> URL: https://issues.apache.org/jira/browse/HADOOP-18330
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.3.1, 3.3.2, 3.3.3
>Reporter: Ashutosh Pant
>Assignee: Ashutosh Pant
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.5
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> when using hadoop and spark to read/write data from an s3 bucket like 
> s3a://bucket/path with a custom credentials provider, the path is removed 
> from the s3a URI and the credentials provider fails because the full path is 
> gone.
> In Spark 3.2 it was invoked as: s3 = 
> ReflectionUtils.newInstance(s3ClientFactoryClass, 
> conf).createS3Client(name, bucket, credentials);
> But in Spark 3.3.3 it is invoked as: s3 = 
> ReflectionUtils.newInstance(s3ClientFactoryClass, 
> conf).createS3Client(getUri(), parameters);
> the getUri() removes the path from the s3a URI
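
For context, a small self-contained demonstration of the URI normalization 
described above (plain java.net.URI; no Hadoop classes involved):

{code:java}
import java.net.URI;

public class S3aUriDemo {
  public static void main(String[] args) {
    URI full = URI.create("s3a://bucket/path/to/table");
    // FileSystem.getUri() keeps only scheme + authority, so a client
    // factory or credentials provider invoked with it sees:
    URI normalized =
        URI.create(full.getScheme() + "://" + full.getAuthority());
    System.out.println(full);        // s3a://bucket/path/to/table
    System.out.println(normalized);  // s3a://bucket
  }
}
{code}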



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4537: HADOOP-18329 - Support for IBM Semeru JVM v>11.0.15.0 Vendor Name Changes

2022-12-12 Thread GitBox


steveloughran commented on PR #4537:
URL: https://github.com/apache/hadoop/pull/4537#issuecomment-1346920543

   get it into 3.3 and i will pull to 3.3.5, they are almost identical. if 
there *are* merge problems again, then we can worry about it


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18146) ABFS: Add changes for expect hundred continue header with append requests

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646227#comment-17646227
 ] 

ASF GitHub Bot commented on HADOOP-18146:
-

steveloughran commented on code in PR #4039:
URL: https://github.com/apache/hadoop/pull/4039#discussion_r1045821625


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -314,18 +317,29 @@ public void sendRequest(byte[] buffer, int offset, int length) throws IOException
 if (this.isTraceEnabled) {
   startTime = System.nanoTime();
 }
-OutputStream outputStream;
+OutputStream outputStream = null;
 try {
   try {
 outputStream = this.connection.getOutputStream();
   } catch (IOException e) {
-// If getOutputStream fails with an exception due to 100-continue
-// enabled, we return back without throwing an exception.
-return;
+// If getOutputStream fails with an exception and 100-continue
+// is enabled, we return back without throwing an exception
+// because processResponse will give the correct status code
+// based on which the retry logic will come into place.
+String expectHeader = this.connection.getRequestProperty(EXPECT);
+if (expectHeader != null && expectHeader.equals(HUNDRED_CONTINUE)) {
+  return;

Review Comment:
   1. add a log @ debug here, including full stack. ideally, collect some 
iostats on how often it is received so we can understand it more.
   
   2. should we ever expect this if isExpectHeaderEnabled is false? if not, and 
we do get it, then what? same as here?
   
   3. javadocs need updating. sorry



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/InvalidAbfsRestOperationException.java:
##
@@ -30,14 +30,24 @@
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public class InvalidAbfsRestOperationException extends 
AbfsRestOperationException {
-  public InvalidAbfsRestOperationException(
-  final Exception innerException) {
-super(
-AzureServiceErrorCode.UNKNOWN.getStatusCode(),
-AzureServiceErrorCode.UNKNOWN.getErrorCode(),
-innerException != null
-? innerException.toString()
-: "InvalidAbfsRestOperationException",
-innerException);
-  }
+public InvalidAbfsRestOperationException(
+final Exception innerException) {
+super(
+AzureServiceErrorCode.UNKNOWN.getStatusCode(),
+AzureServiceErrorCode.UNKNOWN.getErrorCode(),
+innerException != null
+? innerException.toString()
+: "InvalidAbfsRestOperationException",
+innerException);
+}
+
+public InvalidAbfsRestOperationException(final Exception innerException, 
int retryCount) {
+super(
+AzureServiceErrorCode.UNKNOWN.getStatusCode(),
+AzureServiceErrorCode.UNKNOWN.getErrorCode(),
+innerException != null
+? innerException.toString()
+: "InvalidAbfsRestOperationException" + "RetryCount: " 
+ String.valueOf(retryCount),

Review Comment:
   needs a space. 



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/InvalidAbfsRestOperationException.java:
##
@@ -30,14 +30,24 @@
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public class InvalidAbfsRestOperationException extends 
AbfsRestOperationException {
-  public InvalidAbfsRestOperationException(
-  final Exception innerException) {
-super(
-AzureServiceErrorCode.UNKNOWN.getStatusCode(),
-AzureServiceErrorCode.UNKNOWN.getErrorCode(),
-innerException != null
-? innerException.toString()
-: "InvalidAbfsRestOperationException",
-innerException);
-  }
+public InvalidAbfsRestOperationException(
+final Exception innerException) {
+super(
+AzureServiceErrorCode.UNKNOWN.getStatusCode(),
+AzureServiceErrorCode.UNKNOWN.getErrorCode(),
+innerException != null
+? innerException.toString()
+: "InvalidAbfsRestOperationException",
+innerException);
+}
+
+public InvalidAbfsRestOperationException(final Exception innerException, 
int retryCount) {

Review Comment:
   add some javadoc



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -38,6 +38,7 @@
 import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.classification.VisibleForTesting;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;

Review Comment:
   put this down in the "real" apache imports; things have got a bit messed up 
with the move off guava. putting it below makes cherrypicking a lot easier

[GitHub] [hadoop] steveloughran commented on a diff in pull request #4039: HADOOP-18146: ABFS: Added changes for expect hundred continue header

2022-12-12 Thread GitBox


steveloughran commented on code in PR #4039:
URL: https://github.com/apache/hadoop/pull/4039#discussion_r1045821625


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -314,18 +317,29 @@ public void sendRequest(byte[] buffer, int offset, int length) throws IOException
 if (this.isTraceEnabled) {
   startTime = System.nanoTime();
 }
-OutputStream outputStream;
+OutputStream outputStream = null;
 try {
   try {
 outputStream = this.connection.getOutputStream();
   } catch (IOException e) {
-// If getOutputStream fails with an exception due to 100-continue
-// enabled, we return back without throwing an exception.
-return;
+// If getOutputStream fails with an exception and 100-continue
+// is enabled, we return back without throwing an exception
+// because processResponse will give the correct status code
+// based on which the retry logic will come into place.
+String expectHeader = this.connection.getRequestProperty(EXPECT);
+if (expectHeader != null && expectHeader.equals(HUNDRED_CONTINUE)) {
+  return;

Review Comment:
   1. add a log @ debug here, including full stack. ideally, collect some 
iostats on how often it is received so we can understand it more.
   
   2. should we ever expect this if isExpectHeaderEnabled is false? if not, and 
we do get it, then what? same as here?
   
   3. javadocs need updating. sorry



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/InvalidAbfsRestOperationException.java:
##
@@ -30,14 +30,24 @@
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public class InvalidAbfsRestOperationException extends 
AbfsRestOperationException {
-  public InvalidAbfsRestOperationException(
-  final Exception innerException) {
-super(
-AzureServiceErrorCode.UNKNOWN.getStatusCode(),
-AzureServiceErrorCode.UNKNOWN.getErrorCode(),
-innerException != null
-? innerException.toString()
-: "InvalidAbfsRestOperationException",
-innerException);
-  }
+public InvalidAbfsRestOperationException(
+final Exception innerException) {
+super(
+AzureServiceErrorCode.UNKNOWN.getStatusCode(),
+AzureServiceErrorCode.UNKNOWN.getErrorCode(),
+innerException != null
+? innerException.toString()
+: "InvalidAbfsRestOperationException",
+innerException);
+}
+
+public InvalidAbfsRestOperationException(final Exception innerException, 
int retryCount) {
+super(
+AzureServiceErrorCode.UNKNOWN.getStatusCode(),
+AzureServiceErrorCode.UNKNOWN.getErrorCode(),
+innerException != null
+? innerException.toString()
+: "InvalidAbfsRestOperationException" + "RetryCount: " 
+ String.valueOf(retryCount),

Review Comment:
   needs a space. 



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/InvalidAbfsRestOperationException.java:
##
@@ -30,14 +30,24 @@
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public class InvalidAbfsRestOperationException extends 
AbfsRestOperationException {
-  public InvalidAbfsRestOperationException(
-  final Exception innerException) {
-super(
-AzureServiceErrorCode.UNKNOWN.getStatusCode(),
-AzureServiceErrorCode.UNKNOWN.getErrorCode(),
-innerException != null
-? innerException.toString()
-: "InvalidAbfsRestOperationException",
-innerException);
-  }
+public InvalidAbfsRestOperationException(
+final Exception innerException) {
+super(
+AzureServiceErrorCode.UNKNOWN.getStatusCode(),
+AzureServiceErrorCode.UNKNOWN.getErrorCode(),
+innerException != null
+? innerException.toString()
+: "InvalidAbfsRestOperationException",
+innerException);
+}
+
+public InvalidAbfsRestOperationException(final Exception innerException, 
int retryCount) {

Review Comment:
   add some javadoc



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -38,6 +38,7 @@
 import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.classification.VisibleForTesting;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;

Review Comment:
   put this down in the "real" apache imports; things have got a bit messed up 
with the move off guava. putting it below makes cherrypicking a lot easier



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/except

[GitHub] [hadoop] omalley commented on a diff in pull request #5195: HDFS-16856: Refactor RouterAdmin to use the AdminHelper class.

2022-12-12 Thread GitBox


omalley commented on code in PR #5195:
URL: https://github.com/apache/hadoop/pull/5195#discussion_r1046122861


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java:
##
@@ -1182,6 +1200,19 @@ public static String popFirstNonOption(List<String> args) {
 }
 return null;
   }
+  /**
+   * From a list of command-line arguments, ensure that all of the arguments
+   * have been used except a possible "--".
+   *
+   * @param args  List of arguments.
+   * @throws IllegalArgumentException if some arguments were not used
+   */
+  public static void ensureAllUsed(List<String> args) throws IllegalArgumentException {
+    if (!args.isEmpty() && !(args.size() == 1 && "--".equals(args.get(0)))) {

Review Comment:
   I find writing code that depends overly on knowing the precedence for 
non-math operators leads to trouble, so I'd prefer to leave them in.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5193: YARN-11374. [Federation] Support refreshSuperUserGroupsConfiguration、refreshUserToGroupsMappings API's for Federation.

2022-12-12 Thread GitBox


hadoop-yetus commented on PR #5193:
URL: https://github.com/apache/hadoop/pull/5193#issuecomment-1346907241

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  0s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   8m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 51s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   5m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  cc  |   9m  4s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  cc  |   8m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   8m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 32s |  |  
hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 0 unchanged - 13 
fixed = 0 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   5m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 12s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 40s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 43s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 163m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5193/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5193 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint 
bufcompat |
   | uname | Linux 8c10068a3996 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 70e8a09813f96834c241f33d47d4c14f72fb9d68 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5193/6/testReport/ |
   | Max. process+thread count | 558 (vs. ulimit of 5500) |

[jira] [Commented] (HADOOP-18329) Add support for IBM Semeru OE JRE 11.0.15.0 and greater

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646195#comment-17646195
 ] 

ASF GitHub Bot commented on HADOOP-18329:
-

hadoop-yetus commented on PR #5208:
URL: https://github.com/apache/hadoop/pull/5208#issuecomment-1346830962

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  10m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 48s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  18m 46s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m  7s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  25m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 54s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m 54s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  hadoop-common-project: 
The patch generated 0 new + 3 unchanged - 1 fixed = 3 total (was 4)  |
   | +1 :green_heart: |  mvnsite  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   1m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 37s |  |  hadoop-minikdc in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 34s |  |  hadoop-auth in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 158m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5208/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5208 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b3f0d71badd5 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 3774de951867c1c2250851d6eeb21b0ef239e051 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5208/1/testReport/ |
   | Max. process+thread count | 527 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-minikdc 
hadoop-common-project/hadoop-auth U: hadoop-common-project |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5208/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Add support for IBM Semeru OE JRE 11.0.15.0 and greater
> ---
>
> Key: HADOOP-18329
> URL: https://issues.apache.org/jira/browse/HADOOP-18329
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, common
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.2.0, 3.0.2, 3.1.1, 3.0.3, 3.3.0, 
> 3.1.2, 3.2.1, 3.1.3, 3.1.4, 3.2.2, 3.3.1, 3.2.3, 3.3.2, 3.3.3
> Environment: Running Hadoop (or Apache Spark 3.2.1 or above) on IBM 
> Semeru runtimes open edition 11.0.15.0 or greater.
>Reporter: Jack
>  


[GitHub] [hadoop] hadoop-yetus commented on pull request #5209: MAPREDUCE-7428. Fix failures related to Junit 4 to Junit 5 upgrade in org.apache.hadoop.mapreduce.v2.app.webapp

2022-12-12 Thread GitBox


hadoop-yetus commented on PR #5209:
URL: https://github.com/apache/hadoop/pull/5209#issuecomment-1346824099

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 43s | 
[/branch-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5209/1/artifact/out/branch-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-mapreduce-client-app in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  62m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 23s | 
[/patch-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5209/1/artifact/out/patch-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-mapreduce-client-app in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  21m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 35s |  |  hadoop-mapreduce-client-app in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  88m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5209/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5209 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 76bc96ee6af4 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8bac4c3ad95272d5998490c85a65fb285738c7fd |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5209/1/testReport/ |
   | Max. process+thread count | 587 (vs. ulimit of 5500) |
   | modul

[jira] [Commented] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646173#comment-17646173
 ] 

ASF GitHub Bot commented on HADOOP-18569:
-

hadoop-yetus commented on PR #5207:
URL: https://github.com/apache/hadoop/pull/5207#issuecomment-1346729416

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  10m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3.5 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 38s |  |  branch-3.3.5 passed  |
   | +1 :green_heart: |  compile  |  17m 43s |  |  branch-3.3.5 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  |  branch-3.3.5 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  branch-3.3.5 passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  branch-3.3.5 passed  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  branch-3.3.5 passed  |
   | +1 :green_heart: |  shadedclient  |  24m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m  4s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m  4s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  0s |  |  hadoop-nfs in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 141m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5207/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5207 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d379b18343c5 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3.5 / 3be989682dd86e87905796d63af654b126e8d863 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5207/1/testReport/ |
   | Max. process+thread count | 550 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-nfs U: 
hadoop-common-project/hadoop-nfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5207/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> NFS Gateway may release buffer too early
> 
>
> Key: HADOOP-18569
> URL: https://issues.apache.org/jira/browse/HADOOP-18569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 3.4.0, 3.3.5, 3.2.5, 3.3.9
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>  Labels: pull-request-available
>
> After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
> started crashing when writing data (can be easily reproduced by a few 10MB+ 
> files).  The problem was triggered by [reduced default chunk size in 
> PooledByteBufAllocator|https://github.com/netty/netty/commit/f650303911] (in 
> 4.1.75), but it turned out to be caused by a buffer released too early in NFS
> Gateway.

[jira] [Commented] (HADOOP-18330) S3AFileSystem removes Path when calling createS3Client

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646162#comment-17646162
 ] 

ASF GitHub Bot commented on HADOOP-18330:
-

khancon commented on PR #4572:
URL: https://github.com/apache/hadoop/pull/4572#issuecomment-1346700452

   Greetings, I was wondering which version of Hadoop this change was merged 
into. Is it in 3.3.4, or will it come out with the 3.3.5 release?




> S3AFileSystem removes Path when calling createS3Client
> --
>
> Key: HADOOP-18330
> URL: https://issues.apache.org/jira/browse/HADOOP-18330
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.3.1, 3.3.2, 3.3.3
>Reporter: Ashutosh Pant
>Assignee: Ashutosh Pant
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.5
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When using Hadoop and Spark to read/write data from an S3 bucket such as
> s3a://bucket/path with a custom credentials provider, the path is removed
> from the s3a URI, and the credentials provider fails because the full path
> is gone.
> In Spark 3.2 it was invoked as:
> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass,
> conf).createS3Client(name, bucket, credentials);
> But in Spark 3.3.3 it is invoked as:
> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass,
> conf).createS3Client(getUri(), parameters);
> getUri() removes the path from the s3a URI.
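
To make the failure mode concrete, here is a minimal, self-contained sketch using plain java.net.URI (illustrative only, not the actual S3AFileSystem code) of how a filesystem-canonical URI drops the key path that a path-sensitive credentials provider needs:

```java
import java.net.URI;

public class UriPathStrippingDemo {
    public static void main(String[] args) {
        // The full location as supplied by the user, e.g. in a Spark job.
        URI full = URI.create("s3a://bucket/path/to/data");

        // A filesystem-canonical URI keeps only scheme + bucket, which is
        // roughly what getUri() hands to createS3Client() in the new code path.
        URI canonical = URI.create(full.getScheme() + "://" + full.getHost());

        System.out.println(full);      // s3a://bucket/path/to/data
        System.out.println(canonical); // s3a://bucket -- the path is gone
    }
}
```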



[jira] [Commented] (HADOOP-18329) Add support for IBM Semeru OE JRE 11.0.15.0 and greater

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646156#comment-17646156
 ] 

ASF GitHub Bot commented on HADOOP-18329:
-

JackBuggins commented on PR #4537:
URL: https://github.com/apache/hadoop/pull/4537#issuecomment-1346679972

   @steveloughran I've popped up a PR against branch 3.3; should I do the same 
for 3.3.5?




> Add support for IBM Semeru OE JRE 11.0.15.0 and greater
> ---
>
> Key: HADOOP-18329
> URL: https://issues.apache.org/jira/browse/HADOOP-18329
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, common
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.2.0, 3.0.2, 3.1.1, 3.0.3, 3.3.0, 
> 3.1.2, 3.2.1, 3.1.3, 3.1.4, 3.2.2, 3.3.1, 3.2.3, 3.3.2, 3.3.3
> Environment: Running Hadoop (or Apache Spark 3.2.1 or above) on IBM 
> Semeru runtimes open edition 11.0.15.0 or greater.
>Reporter: Jack
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> There are checks within the PlatformName class that use the Vendor property 
> of the provided runtime JVM specifically looking for `IBM` within the name. 
> Whilst this check worked for IBM's [java technology 
> edition|https://www.ibm.com/docs/en/sdk-java-technology] it fails to work on 
> [Semeru|https://developer.ibm.com/languages/java/semeru-runtimes/] since 
> 11.0.15.0 due to the following change:
> h4. java.vendor system property
> In this release, the {{java.vendor}} system property has been changed from 
> "International Business Machines Corporation" to "IBM Corporation".
> Modules such as the below are not provided in these runtimes.
> com.ibm.security.auth.module.JAASLoginModule



[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5056: YARN-11358. [Federation] Add FederationInterceptor#allow-partial-result config.

2022-12-12 Thread GitBox


slfan1989 commented on code in PR #5056:
URL: https://github.com/apache/hadoop/pull/5056#discussion_r1045952641


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java:
##
@@ -2105,9 +2115,10 @@ private  Map 
invokeConcurrent(Collection c
 if (response != null) {
   results.put(clusterId, response);
 }
-
-Exception exception = pair.getRight();
-if (exception != null) {

Review Comment:
   Thank you very much for your suggestion; I agree, and I will modify the 
code.






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5185: YARN-11225. [Federation] Add postDelegationToken postDelegationTokenExpiration cancelDelegationToken REST APIs for Router.

2022-12-12 Thread GitBox


slfan1989 commented on code in PR #5185:
URL: https://github.com/apache/hadoop/pull/5185#discussion_r1045950811


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java:
##
@@ -46,11 +47,15 @@
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.tuple.Pair;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AuthorizationException;
+import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.Sets;
 import org.apache.hadoop.util.Time;
 import org.apache.hadoop.util.concurrent.HadoopExecutors;
+import org.apache.hadoop.yarn.api.protocolrecords.*;

Review Comment:
   I will fix it.






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5185: YARN-11225. [Federation] Add postDelegationToken postDelegationTokenExpiration cancelDelegationToken REST APIs for Router.

2022-12-12 Thread GitBox


slfan1989 commented on code in PR #5185:
URL: https://github.com/apache/hadoop/pull/5185#discussion_r1045950620


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorREST.java:
##
@@ -156,6 +172,31 @@ public void setUp() {
   Assert.fail();
 }
 
+RouterClientRMService routerClientRMService = new RouterClientRMService();
+routerClientRMService.initUserPipelineMap(getConf());
+long secretKeyInterval = this.getConf().getLong(
+RM_DELEGATION_KEY_UPDATE_INTERVAL_KEY, 
RM_DELEGATION_KEY_UPDATE_INTERVAL_DEFAULT);
+long tokenMaxLifetime = this.getConf().getLong(

Review Comment:
   Thank you very much for helping to review the code; I will modify it.






[GitHub] [hadoop] slfan1989 commented on pull request #5131: YARN-11350. [Federation] Router Support DelegationToken With ZK.

2022-12-12 Thread GitBox


slfan1989 commented on PR #5131:
URL: https://github.com/apache/hadoop/pull/5131#issuecomment-1346671056

   @goiri Can you help merge this PR into the trunk branch? Thank you very 
much! We have fixed all javadoc issues for `hadoop-yarn-server-common`; the 
`hadoop-yarn-server-resourcemanager` module is not affected by our changes. 
After this PR is merged, I will improve YARN-11349 as soon as possible.





[GitHub] [hadoop] hadoop-yetus commented on pull request #5206: HDFS-16868 Audit log duplicate problem when an ACE occurs in FSNamesystem.

2022-12-12 Thread GitBox


hadoop-yetus commented on PR #5206:
URL: https://github.com/apache/hadoop/pull/5206#issuecomment-1346665236

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 385m 18s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 501m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5206/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5206 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0ff4c89012d0 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f6c9741455bfc3afbcf0b2923011b0bba261a366 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5206/1/testReport/ |
   | Max. process+thread count | 2194 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5206/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] slfan1989 commented on pull request #5209: MAPREDUCE-7428. Fix failures related to Junit 4 to Junit 5 upgrade in org.apache.hadoop.mapreduce.v2.app.webapp

2022-12-12 Thread GitBox


slfan1989 commented on PR #5209:
URL: https://github.com/apache/hadoop/pull/5209#issuecomment-1346657792

   LGTM.





[GitHub] [hadoop] goiri commented on a diff in pull request #5056: YARN-11358. [Federation] Add FederationInterceptor#allow-partial-result config.

2022-12-12 Thread GitBox


goiri commented on code in PR #5056:
URL: https://github.com/apache/hadoop/pull/5056#discussion_r1045925968


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java:
##
@@ -2105,9 +2115,10 @@ private  Map 
invokeConcurrent(Collection c
 if (response != null) {
   results.put(clusterId, response);
 }
-
-Exception exception = pair.getRight();
-if (exception != null) {

Review Comment:
   The old behavior was to fail the query if there was any exception.
   The default configuration setting therefore must not allow partial results.



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java:
##
@@ -4294,6 +4294,11 @@ public static boolean isAclEnabled(Configuration conf) {
   ROUTER_PREFIX + "webapp.cross-origin.enabled";
   public static final boolean DEFAULT_ROUTER_WEBAPP_ENABLE_CORS_FILTER = false;
 
+  /** Router Interceptor Allow Partial Result Enable. **/
+  public static final String ROUTER_INTERCEPTOR_ALLOW_PARTIAL_RESULT_ENABLED =
+  ROUTER_PREFIX + "interceptor.allow-partial-result.enable";
+  public static final boolean 
DEFAULT_ROUTER_INTERCEPTOR_ALLOW_PARTIAL_RESULT_ENABLED = true;

Review Comment:
   From the other comments, I think this needs to be false



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml:
##
@@ -5056,4 +5056,18 @@
 
   
 
+  <property>
+    <name>yarn.router.interceptor.allow-partial-result.enable</name>
+    <value>true</value>

Review Comment:
   The old behavior was to not allow partial results, so this needs to be false.






[GitHub] [hadoop] ashutoshcipher opened a new pull request, #5209: MAPREDUCE-7428. Fix failures related to Junit 4 to Junit 5 upgrade in org.apache.hadoop.mapreduce.v2.app.webapp

2022-12-12 Thread GitBox


ashutoshcipher opened a new pull request, #5209:
URL: https://github.com/apache/hadoop/pull/5209

   ### Description of PR
   
   Fixes test failures related to the JUnit 4 to JUnit 5 upgrade in 
org.apache.hadoop.mapreduce.v2.app.webapp; see
   
   
https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1071/testReport/
   
   JIRA - MAPREDUCE-7428.
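
For readers following along, a typical JUnit 4 to JUnit 5 migration of the kind this PR performs looks like the sketch below (a generic example; the actual failing webapp tests are not reproduced here):

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ExampleMigratedTest {
  private int value;

  @BeforeEach          // was @Before (org.junit.Before) in JUnit 4
  void setUp() {
    value = 42;
  }

  @Test                // now org.junit.jupiter.api.Test, not org.junit.Test
  void returnsConfiguredValue() {
    // JUnit 5 moves the failure message to the LAST argument.
    assertEquals(42, value, "value should be initialized in setUp");
  }
}
```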
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[GitHub] [hadoop] goiri commented on a diff in pull request #5185: YARN-11225. [Federation] Add postDelegationToken postDelegationTokenExpiration cancelDelegationToken REST APIs for Router.

2022-12-12 Thread GitBox


goiri commented on code in PR #5185:
URL: https://github.com/apache/hadoop/pull/5185#discussion_r1045908138


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorREST.java:
##
@@ -156,6 +172,31 @@ public void setUp() {
   Assert.fail();
 }
 
+RouterClientRMService routerClientRMService = new RouterClientRMService();
+routerClientRMService.initUserPipelineMap(getConf());
+long secretKeyInterval = this.getConf().getLong(
+RM_DELEGATION_KEY_UPDATE_INTERVAL_KEY, 
RM_DELEGATION_KEY_UPDATE_INTERVAL_DEFAULT);
+long tokenMaxLifetime = this.getConf().getLong(

Review Comment:
   Can't we make all of them getTimeDuration?
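
For context, `Configuration.getTimeDuration` parses human-readable unit suffixes and converts to the requested unit, which `getLong` cannot; a small sketch (assuming the standard RM delegation-key property name):

```java
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class TimeDurationDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Users may write "1d" instead of a bare millisecond count.
    conf.set("yarn.resourcemanager.delegation.key.update-interval", "1d");

    long intervalMs = conf.getTimeDuration(
        "yarn.resourcemanager.delegation.key.update-interval",
        86400000L, TimeUnit.MILLISECONDS);

    System.out.println(intervalMs); // 86400000 -- "1d" converted to millis
  }
}
```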



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java:
##
@@ -46,11 +47,15 @@
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.tuple.Pair;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AuthorizationException;
+import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.Sets;
 import org.apache.hadoop.util.Time;
 import org.apache.hadoop.util.concurrent.HadoopExecutors;
+import org.apache.hadoop.yarn.api.protocolrecords.*;

Review Comment:
   Expand






[GitHub] [hadoop] hadoop-yetus commented on pull request #5193: YARN-11374. [Federation] Support refreshSuperUserGroupsConfiguration、refreshUserToGroupsMappings API's for Federation.

2022-12-12 Thread GitBox


hadoop-yetus commented on PR #5193:
URL: https://github.com/apache/hadoop/pull/5193#issuecomment-1346587999

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  10m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   8m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  cc  |   9m 48s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  cc  |   8m 48s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   8m 48s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 37s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5193/5/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 0 unchanged - 
13 fixed = 1 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   5m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  3s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 22s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 35s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 168m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5193/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5193 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint 
bufcompat |
   | uname | Linux b1b60835e086 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7e1bcdc8ecf8bc54463e7825c637ce400a062462 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |

[jira] [Commented] (HADOOP-18329) Add support for IBM Semeru OE JRE 11.0.15.0 and greater

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646103#comment-17646103
 ] 

ASF GitHub Bot commented on HADOOP-18329:
-

JackBuggins opened a new pull request, #5208:
URL: https://github.com/apache/hadoop/pull/5208

   
   
   ### Description of PR
   
   Applies the patches from the original change request to branch-3.3 by 
cherry-picking a merge commit of the proposed changes, e.g.:
   ```
   git cherry-pick a46b20d25f12dfb6af1d89c6bd8e39220cc8c928 -m 1
   ```
   
   The original change request can be found at 
https://github.com/apache/hadoop/pull/4537
   
   ---
   There are checks within the PlatformName class that use the Vendor property 
of the provided runtime JVM specifically looking for `IBM` within the name. 
Whilst this check worked for IBM's [java technology 
edition](https://www.ibm.com/docs/en/sdk-java-technology) it fails to work on 
[Semeru](https://developer.ibm.com/languages/java/semeru-runtimes/) since 
11.0.15.0 due to the following change:
   
   **java.vendor system property**
   In this release, the java.vendor system property has been changed from 
"International Business Machines Corporation" to "IBM Corporation".
   
   Modules such as the below are not provided in these runtimes.
   com.ibm.security.auth.module.JAASLoginModule
   
   This change uses reflection to check that a class common to IBM JT runtimes 
exists, extending the vendor check, since IBM-vendored JVMs may not actually 
require special logic to use custom security modules. The same 3.3.3 versions 
were working correctly until the vendor name change was observed during 
routine upgrades by internal CI.
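
As a hedged sketch of that approach (the class name comes from the issue text; the control flow is illustrative rather than the exact PlatformName patch):

```java
public class IbmJvmProbe {
  // Class present in IBM Java Technology Edition but not in Semeru
  // >= 11.0.15.0, per the issue description.
  private static final String IBM_JT_CLASS =
      "com.ibm.security.auth.module.JAASLoginModule";

  public static void main(String[] args) {
    String vendor = System.getProperty("java.vendor", "");
    boolean vendorLooksIbm = vendor.contains("IBM");

    boolean hasIbmSecurityModule;
    try {
      Class.forName(IBM_JT_CLASS);
      hasIbmSecurityModule = true;
    } catch (ClassNotFoundException e) {
      hasIbmSecurityModule = false; // e.g. Semeru Open Edition
    }

    // Only treat the JVM as needing IBM-specific security modules when
    // both signals agree.
    System.out.println(vendorLooksIbm && hasIbmSecurityModule);
  }
}
```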
   
   ### How was this patch tested?
   
   - CI + unit tests
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Add support for IBM Semeru OE JRE 11.0.15.0 and greater
> ---
>
> Key: HADOOP-18329
> URL: https://issues.apache.org/jira/browse/HADOOP-18329
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, common
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.2.0, 3.0.2, 3.1.1, 3.0.3, 3.3.0, 
> 3.1.2, 3.2.1, 3.1.3, 3.1.4, 3.2.2, 3.3.1, 3.2.3, 3.3.2, 3.3.3
> Environment: Running Hadoop (or Apache Spark 3.2.1 or above) on IBM 
> Semeru runtimes open edition 11.0.15.0 or greater.
>Reporter: Jack
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> There are checks within the PlatformName class that use the Vendor property 
> of the provided runtime JVM specifically looking for `IBM` within the name. 
> Whilst this check worked for IBM's [java technology 
> edition|https://www.ibm.com/docs/en/sdk-java-technology] it fails to work on 
> [Semeru|https://developer.ibm.com/languages/java/semeru-runtimes/] since 
> 11.0.15.0 due to the following change:
> h4. java.vendor system property
> In this release, the {{java.vendor}} system property has been changed from 
> "International Business Machines Corporation" to "IBM Corporation".
> Modules such as the below are not provided in these runtimes.
> com.ibm.security.auth.module.JAASLoginModule



[jira] [Commented] (HADOOP-18546) disable purging list of in progress reads in abfs stream closed

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646098#comment-17646098
 ] 

ASF GitHub Bot commented on HADOOP-18546:
-

slfan1989 commented on PR #5205:
URL: https://github.com/apache/hadoop/pull/5205#issuecomment-1346494132

   @steveloughran While completing YARN-related PRs, I also encountered related 
issues and submitted a fix PR for each module.
   
   We can refer to https://github.com/apache/hadoop/pull/5182
   
   I added a comment before the code to address the related issues.
   
   
![image](https://user-images.githubusercontent.com/55643692/207055796-7714f141-8e81-42fe-8bed-247fcd072add.png)
   




> disable purging list of in progress reads in abfs stream closed
> ---
>
> Key: HADOOP-18546
> URL: https://issues.apache.org/jira/browse/HADOOP-18546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
>
> Turn off the pruning of in-progress reads in
> ReadBufferManager::purgeBuffersForStream.
> This will ensure active prefetches for a closed stream complete. They will
> then get to the completed list and hang around until evicted by timeout, but
> at least prefetching will be safe.
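
A hedged sketch of the purge behavior described above, with assumed names rather than the real ABFS ReadBufferManager internals:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class PurgeSketch {
  // Illustrative stand-in for the real ABFS ReadBuffer; fields are assumptions.
  static class ReadBuffer {
    final Object stream;
    final boolean inProgress;
    ReadBuffer(Object stream, boolean inProgress) {
      this.stream = stream;
      this.inProgress = inProgress;
    }
  }

  // On stream close, purge only buffers that are NOT currently being filled;
  // in-progress reads complete and later age out of the completed list by
  // timeout, as the issue describes.
  static void purgeBuffersForStream(List<ReadBuffer> buffers, Object closedStream) {
    Iterator<ReadBuffer> it = buffers.iterator();
    while (it.hasNext()) {
      ReadBuffer b = it.next();
      if (b.stream == closedStream && !b.inProgress) {
        it.remove(); // safe: no prefetch thread is writing into this buffer
      }
    }
  }

  public static void main(String[] args) {
    Object stream = new Object();
    List<ReadBuffer> buffers = new ArrayList<>();
    buffers.add(new ReadBuffer(stream, true));   // active prefetch: kept
    buffers.add(new ReadBuffer(stream, false));  // idle buffer: purged
    purgeBuffersForStream(buffers, stream);
    System.out.println(buffers.size()); // 1
  }
}
```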



[jira] [Updated] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18569:

Labels: pull-request-available  (was: )

> NFS Gateway may release buffer too early
> 
>
> Key: HADOOP-18569
> URL: https://issues.apache.org/jira/browse/HADOOP-18569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 3.4.0, 3.3.5, 3.2.5, 3.3.9
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>  Labels: pull-request-available
>
> After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
> started crashing when writing data (can be easily reproduced by a few 10MB+ 
> files).  The problem was triggered by [reduced default chunk size in 
> PooledByteBufAllocator|https://github.com/netty/netty/commit/f650303911] (in 
> 4.1.75), but it turned out to be caused by a buffer released too early in NFS 
> Gateway.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646090#comment-17646090
 ] 

ASF GitHub Bot commented on HADOOP-18569:
-

adoroszlai opened a new pull request, #5207:
URL: https://github.com/apache/hadoop/pull/5207

   ## What changes were proposed in this pull request?
   
   After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
started randomly crashing when writing data (can be easily reproduced by a few 
10MB+ files).  The problem was triggered by [reduced default chunk size in 
PooledByteBufAllocator](https://github.com/netty/netty/commit/f650303911) (in 
4.1.75), but it turned out to be caused by a buffer released too early in NFS 
Gateway (HADOOP-11245).
   
   https://issues.apache.org/jira/browse/HADOOP-18569
   
   ## How was this patch tested?
   
   Deployed cluster with the change, tested write/read via NFS mount.
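   
   For context, a minimal sketch of the ordering rule the fix enforces 
(illustrative names only; this is not the actual NFS Gateway code):
   
   ```java
   import io.netty.buffer.ByteBuf;
   import io.netty.channel.Channel;
   
   final class PooledWriteSketch {
     // Ownership of the pooled payload passes to Netty, which releases it
     // exactly once after the bytes have actually been written out.
     static void send(Channel channel, ByteBuf payload) {
       channel.writeAndFlush(payload);
       // No payload.release() here: releasing at this point ("too early")
       // lets PooledByteBufAllocator recycle the memory while the write is
       // still queued, corrupting the data that eventually goes out.
     }
   }
   ```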




> NFS Gateway may release buffer too early
> 
>
> Key: HADOOP-18569
> URL: https://issues.apache.org/jira/browse/HADOOP-18569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 3.4.0, 3.3.5, 3.2.5, 3.3.9
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>
> After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
> started crashing when writing data (can be easily reproduced by a few 10MB+ 
> files).  The problem was triggered by [reduced default chunk size in 
> PooledByteBufAllocator|https://github.com/netty/netty/commit/f650303911] (in 
> 4.1.75), but it turned out to be caused by a buffer released too early in NFS 
> Gateway.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai opened a new pull request, #5207: HADOOP-18569. NFS Gateway may release buffer too early

2022-12-12 Thread GitBox


adoroszlai opened a new pull request, #5207:
URL: https://github.com/apache/hadoop/pull/5207

   ## What changes were proposed in this pull request?
   
   After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
started randomly crashing when writing data (can be easily reproduced by a few 
10MB+ files).  The problem was triggered by [reduced default chunk size in 
PooledByteBufAllocator](https://github.com/netty/netty/commit/f650303911) (in 
4.1.75), but it turned out to be caused by a buffer released too early in NFS 
Gateway (HADOOP-11245).
   
   https://issues.apache.org/jira/browse/HADOOP-18569
   
   ## How was this patch tested?
   
   Deployed cluster with the change, tested write/read via NFS mount.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18526) Leak of S3AInstrumentation instances via hadoop Metrics references

2022-12-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646087#comment-17646087
 ] 

ASF GitHub Bot commented on HADOOP-18526:
-

steveloughran commented on code in PR #5144:
URL: https://github.com/apache/hadoop/pull/5144#discussion_r1043226728


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##
@@ -459,6 +458,13 @@ public void initialize(URI name, Configuration 
originalConf)
 AuditSpan span = null;
 try {
   LOG.debug("Initializing S3AFileSystem for {}", bucket);
+  if (LOG.isTraceEnabled()) {
+// log a full trace for deep diagnostics of where an object is created,
+// for tracking down memory leak issues.
+LOG.trace("Filesystem for {} created; fs.s3a.impl.disable.cache = {}",
+name, originalConf.getBoolean("fs.s3a.impl.disable.cache", false),
+new RuntimeException(super.toString()));

Review Comment:
   we don't throw it, just trace it. it can be anything. what is your 
suggestion?



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##
@@ -3999,22 +4005,18 @@ public void close() throws IOException {
 }
 isClosed = true;
 LOG.debug("Filesystem {} is closed", uri);
-if (getConf() != null) {
-  String iostatisticsLoggingLevel =
-  getConf().getTrimmed(IOSTATISTICS_LOGGING_LEVEL,
-  IOSTATISTICS_LOGGING_LEVEL_DEFAULT);
-  logIOStatisticsAtLevel(LOG, iostatisticsLoggingLevel, getIOStatistics());
-}
 try {
   super.close();
 } finally {
   stopAllServices();
-}
-// Log IOStatistics at debug.
-if (LOG.isDebugEnabled()) {
-  // robust extract and convert to string
-  LOG.debug("Statistics for {}: {}", uri,
-  IOStatisticsLogging.ioStatisticsToPrettyString(getIOStatistics()));
+  // log IO statistics, including of any file deletion during

Review Comment:
   it means "including iostatistics of any file deletion..." so IMO it's valid



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##
@@ -3999,22 +4005,18 @@ public void close() throws IOException {
 }
 isClosed = true;
 LOG.debug("Filesystem {} is closed", uri);
-if (getConf() != null) {
-  String iostatisticsLoggingLevel =
-  getConf().getTrimmed(IOSTATISTICS_LOGGING_LEVEL,
-  IOSTATISTICS_LOGGING_LEVEL_DEFAULT);
-  logIOStatisticsAtLevel(LOG, iostatisticsLoggingLevel, getIOStatistics());
-}
 try {
   super.close();
 } finally {
   stopAllServices();

Review Comment:
   not worried there. the system tests verify that you can still call 
instrumentation methods safely, it is just unregistered



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java:
##
@@ -257,7 +275,8 @@ private void registerAsMetricsSource(URI name) {
   number = ++metricsSourceNameCounter;
 }
 String msName = METRICS_SOURCE_BASENAME + number;
-metricsSourceName = msName + "-" + name.getHost();
+String metricsSourceName = msName + "-" + name.getHost();
+metricsSourceReference = new WeakRefMetricsSource(metricsSourceName, this);

Review Comment:
   not using this though, are we?





> Leak of S3AInstrumentation instances via hadoop Metrics references
> --
>
> Key: HADOOP-18526
> URL: https://issues.apache.org/jira/browse/HADOOP-18526
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> A heap dump of a process running out of memory shows that if a process 
> creates then destroys lots of S3AFS instances, you seem to run out of heap 
> due to references to S3AInstrumentation and the IOStatisticsStore kept via 
> the hadoop metrics registry.
> It doesn't look like S3AInstrumentation.close() is being invoked in 
> S3AFS.close(). It should be, with the IOStats being snapshotted to a local 
> reference before this happens. This allows the stats of a closed fs to be 
> examined.
> If you look at org.apache.hadoop.ipc.DecayRpcScheduler.MetricsProxy, it uses 
> a WeakReference to refer back to the larger object. We should do the same 
> for the abfs/s3a bindings; ideally some template proxy class in hadoop 
> common they can both use.
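
A minimal sketch of such a weak-reference proxy, assuming the hadoop metrics2 
MetricsSource interface (WeakRefMetricsSource is the name used in the patch, 
but this body is an illustration, not the committed code):

{code:java}
import java.lang.ref.WeakReference;
import org.apache.hadoop.metrics2.MetricsCollector;
import org.apache.hadoop.metrics2.MetricsSource;

// The metrics registry holds only this proxy, so registration no longer
// keeps the filesystem's instrumentation strongly reachable.
class WeakRefMetricsSource implements MetricsSource {
  private final WeakReference<MetricsSource> ref;

  WeakRefMetricsSource(MetricsSource source) {
    this.ref = new WeakReference<>(source);
  }

  @Override
  public void getMetrics(MetricsCollector collector, boolean all) {
    MetricsSource source = ref.get();
    if (source != null) {
      source.getMetrics(collector, all);  // delegate while still alive
    }
    // once collected, the registry entry is inert rather than a leak
  }
}
{code}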



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #5144: HADOOP-18526. Leak of S3AInstrumentation instances via hadoop Metrics references

2022-12-12 Thread GitBox


steveloughran commented on code in PR #5144:
URL: https://github.com/apache/hadoop/pull/5144#discussion_r1043226728


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##
@@ -459,6 +458,13 @@ public void initialize(URI name, Configuration 
originalConf)
 AuditSpan span = null;
 try {
   LOG.debug("Initializing S3AFileSystem for {}", bucket);
+  if (LOG.isTraceEnabled()) {
+// log a full trace for deep diagnostics of where an object is created,
+// for tracking down memory leak issues.
+LOG.trace("Filesystem for {} created; fs.s3a.impl.disable.cache = {}",
+name, originalConf.getBoolean("fs.s3a.impl.disable.cache", false),
+new RuntimeException(super.toString()));

Review Comment:
   we don't throw it, just trace it. it can be anything. what is your 
suggestion?



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##
@@ -3999,22 +4005,18 @@ public void close() throws IOException {
 }
 isClosed = true;
 LOG.debug("Filesystem {} is closed", uri);
-if (getConf() != null) {
-  String iostatisticsLoggingLevel =
-  getConf().getTrimmed(IOSTATISTICS_LOGGING_LEVEL,
-  IOSTATISTICS_LOGGING_LEVEL_DEFAULT);
-  logIOStatisticsAtLevel(LOG, iostatisticsLoggingLevel, getIOStatistics());
-}
 try {
   super.close();
 } finally {
   stopAllServices();
-}
-// Log IOStatistics at debug.
-if (LOG.isDebugEnabled()) {
-  // robust extract and convert to string
-  LOG.debug("Statistics for {}: {}", uri,
-  IOStatisticsLogging.ioStatisticsToPrettyString(getIOStatistics()));
+  // log IO statistics, including of any file deletion during

Review Comment:
   it means "including iostatistics of any file deletion..." so IMO it's valid



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##
@@ -3999,22 +4005,18 @@ public void close() throws IOException {
 }
 isClosed = true;
 LOG.debug("Filesystem {} is closed", uri);
-if (getConf() != null) {
-  String iostatisticsLoggingLevel =
-  getConf().getTrimmed(IOSTATISTICS_LOGGING_LEVEL,
-  IOSTATISTICS_LOGGING_LEVEL_DEFAULT);
-  logIOStatisticsAtLevel(LOG, iostatisticsLoggingLevel, getIOStatistics());
-}
 try {
   super.close();
 } finally {
   stopAllServices();

Review Comment:
   not worried there. the system tests verify that you can still call 
instrumentation methods safely, it is just unregistered



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java:
##
@@ -257,7 +275,8 @@ private void registerAsMetricsSource(URI name) {
   number = ++metricsSourceNameCounter;
 }
 String msName = METRICS_SOURCE_BASENAME + number;
-metricsSourceName = msName + "-" + name.getHost();
+String metricsSourceName = msName + "-" + name.getHost();
+metricsSourceReference = new WeakRefMetricsSource(metricsSourceName, this);

Review Comment:
   not using this though, are we?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646079#comment-17646079
 ] 

Steve Loughran commented on HADOOP-18569:
-

Added as a blocker for 3.3.5; can you do a patch ASAP? Pretty significant.

> NFS Gateway may release buffer too early
> 
>
> Key: HADOOP-18569
> URL: https://issues.apache.org/jira/browse/HADOOP-18569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 3.4.0, 3.3.5, 3.2.5, 3.3.9
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>
> After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
> started crashing when writing data (can be easily reproduced by a few 10MB+ 
> files).  The problem was triggered by [reduced default chunk size in 
> PooledByteBufAllocator|https://github.com/netty/netty/commit/f650303911] (in 
> 4.1.75), but it turned out to be caused by a buffer released too early in NFS 
> Gateway.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18569:

Priority: Blocker  (was: Major)

> NFS Gateway may release buffer too early
> 
>
> Key: HADOOP-18569
> URL: https://issues.apache.org/jira/browse/HADOOP-18569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 3.4.0, 3.3.5, 3.2.5, 3.3.9
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>
> After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway 
> started crashing when writing data (can be easily reproduced by a few 10MB+ 
> files).  The problem was triggered by [reduced default chunk size in 
> PooledByteBufAllocator|https://github.com/netty/netty/commit/f650303911] (in 
> 4.1.75), but it turned out to be caused by a buffer released too early in NFS 
> Gateway.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18568) Magic Committer optional clean up

2022-12-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646049#comment-17646049
 ] 

Steve Loughran commented on HADOOP-18568:
-

wow, that is a lot of tasks! your life would be a lot better if you could have 
fewer of them.

Your proposal makes sense.
Supply a PR with:
* a new option in CommitConstants, say "fs.s3a.cleanup.magic.enabled"
* a check for this in MagicS3GuardCommitter.cleanupStagingDirs(), as in the 
sketch below
* a test, or an extension of an existing one, which skips the cleanup and 
verifies the job dir still exists.
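
A minimal sketch of that check (the option name is the suggestion above; the 
surrounding method body, and access to the job configuration via getConf(), 
are assumptions rather than the final patch):

{code:java}
// In MagicS3GuardCommitter (sketch only):
@Override
protected void cleanupStagingDirs() {
  // proposed option; default true preserves today's behaviour
  if (!getConf().getBoolean("fs.s3a.cleanup.magic.enabled", true)) {
    LOG.info("Magic committer cleanup disabled; leaving the __magic "
        + "directory for an S3 lifecycle rule to remove");
    return;
  }
  // ... existing deletion of the __magic directory ...
}
{code}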

You have to be confident here that all your Spark jobs are creating unique job 
IDs. We've had problems there in the past, but recent Spark releases are all 
good.

I am surprised and impressed by the number of tasks. It's the sheer volume of 
tasks which is creating your problem, as we can only delete a few hundred 
entries at a time, and there will be two files (filename, filename + .pending) 
per file written, plus per-task files. Even listing and loading 420k files as 
a precursor to committing them is a major overhead.

We are about to do a 3.3.5 release with some major enhancements to the magic 
committer's performance when creating files (no overwrite checks, even when 
the parquet library requests them), in mkdirs (they all become no-ops) and 
elsewhere, plus more parallelism; see HADOOP-17833 for the work. It also tries 
to collect more IOStatistics on operations, but it looks like it omits the 
cleanup timings because we write the stats into the _SUCCESS file before 
starting that cleanup. Maybe for successful jobs we could kick off the cleanup 
before writing the file.

(Note: the 3.3.5 release adds the option to save the _SUCCESS files into a 
history dir elsewhere. If those files explicitly listed the job dir, then an 
internal script to list the files, read that field and delete the dirs would 
be straightforward.)

Looking forward to seeing your work. Afraid it has missed the 3.3.5 cut-off, 
but there will inevitably be a 3.3.6 release before long.

Oh, and any stats on job improvements on 3.3.5 RC0 would be nice; any 
regressions even more so!


> Magic Committer optional clean up 
> --
>
> Key: HADOOP-18568
> URL: https://issues.apache.org/jira/browse/HADOOP-18568
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: fs/s3
>Affects Versions: 3.3.3
>Reporter: André F.
>Priority: Minor
>
> It seems that deleting the `__magic` folder, depending on the number of 
> tasks/partitions used on a given Spark job, can take a really long time. I'm 
> seeing the following behavior on a given Spark job (processing ~30TB, with 
> ~420k tasks) using the magic committer:
> {code:java}
> 2022-12-10T21:25:19.629Z pool-3-thread-32 INFO MagicS3GuardCommitter: 
> Starting: Deleting magic directory s3a://my-bucket/random_hash/__magic
> 2022-12-10T21:52:03.250Z pool-3-thread-32 INFO MagicS3GuardCommitter: 
> Deleting magic directory s3a://my-bucket/random_hash/__magic: duration 
> 26:43.620s {code}
> I don't see a way out of it, since the deletion of S3 objects needs to list 
> all objects under a prefix, and this is what may be taking so much time. 
> Could we somehow make this cleanup optional? (The idea would be to delegate 
> it to S3 lifecycle policies so as not to create this overhead in the commit 
> phase.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18281) Tune S3A storage class support

2022-12-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646046#comment-17646046
 ] 

Steve Loughran commented on HADOOP-18281:
-

So the only remaining thing here is the idea of making this a createFile() 
option. Do we really perceive that as a need? (Alternatively: should these 
decisions be compiled into code?)
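
If it did become a createFile() option, usage could look something like this 
(the option key below is invented for illustration, not a committed name):

{code:java}
// Hypothetical builder option on the standard createFile() API.
FSDataOutputStream out = fs.createFile(path)
    .must("fs.s3a.create.storage.class", "REDUCED_REDUNDANCY")
    .build();
{code}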

> Tune S3A storage class support
> --
>
> Key: HADOOP-18281
> URL: https://issues.apache.org/jira/browse/HADOOP-18281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Minor
>
> Followup to HADOOP-12020, with work/review from rebasing HADOOP-17833 atop it.
> * Can we merge ITestS3AHugeFilesStorageClass into one of the existing test 
> cases? Just because it is slow... Ideally we want as few of those as 
> possible, even if by testing multiple things at the same time we break the 
> rules of testing.
> * move setting the storage class into 
> setOptionalMultipartUploadRequestParameters and 
> setOptionalPutRequestParameters
> * both newPutObjectRequest() calls to set the storage class
> * docs to list the valid option strings. I had to delve into the AWS SDK to 
> work them out
> Once HADOOP-17833 is in, make this a new option, something which can be 
> explicitly used in createFile().
> I've updated PutObjectOptions to pass a value around, and made sure it gets 
> down to the request factory. That leaves
> * setting the storage class from the options {{CreateFileBuilder}}
> * testing!
> * doc update



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17833) Improve Magic Committer Performance

2022-12-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17833:

Summary: Improve Magic Committer Performance  (was: Improve Magic Committer 
cleanup Performance)

> Improve Magic Committer Performance
> ---
>
> Key: HADOOP-17833
> URL: https://issues.apache.org/jira/browse/HADOOP-17833
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.5
>
>  Time Spent: 14h
>  Remaining Estimate: 0h
>
> Magic committer tasks can be slow because every file created with 
> overwrite=false triggers a HEAD (verify there's no file) and a LIST (that 
> there's no dir). And because of delayed manifestations, it may not behave as 
> expected.
> ParquetOutputFormat is one example of a library which does this.
> we could fix parquet to use overwrite=true, but (a) there may be surprises 
> in other uses, (b) it'd still leave the LIST and (c) it would do nothing for 
> other formats.
> Proposed: createFile() under a magic path skips all probes for a file/dir at 
> the end of the path.
> Only a single task attempt will be writing to that directory and it should 
> know what it is doing. If there are conflicting file names and parts across 
> tasks, that won't even get picked up at this point. Oh, and none of the 
> committers ever check for this: you'll get the last file manifested (s3a) or 
> renamed (file).
> If we skip the checks we will save 2 HTTP requests/file.
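
As a rough illustration of that proposal (the helper and method names here are 
assumptions, not the shipped code):

{code:java}
// Sketch: file creation under a __magic path skips the existence probes.
FSDataOutputStream createUnderPath(Path path) throws IOException {
  if (isMagicCommitPath(path)) {  // assumed helper
    // a single task attempt owns this directory, so skip the HEAD
    // (file exists?) and LIST (directory exists?) probes entirely:
    // two fewer HTTP requests per file created
    return innerCreateFile(path, false /* no existence probes */);
  }
  // normal path: full create semantics, including overwrite checks
  return innerCreateFile(path, true /* probe for file/dir */);
}
{code}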



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17833) Improve Magic Committer cleanup Performance

2022-12-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17833:

Summary: Improve Magic Committer cleanup Performance  (was: Improve Magic 
Committer Performance)

> Improve Magic Committer cleanup Performance
> ---
>
> Key: HADOOP-17833
> URL: https://issues.apache.org/jira/browse/HADOOP-17833
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.5
>
>  Time Spent: 14h
>  Remaining Estimate: 0h
>
> Magic committer tasks can be slow because every file created with 
> overwrite=false triggers a HEAD (verify there's no file) and a LIST (that 
> there's no dir). And because of delayed manifestations, it may not behave as 
> expected.
> ParquetOutputFormat is one example of a library which does this.
> we could fix parquet to use overwrite=true, but (a) there may be surprises 
> in other uses, (b) it'd still leave the LIST and (c) it would do nothing for 
> other formats.
> Proposed: createFile() under a magic path skips all probes for a file/dir at 
> the end of the path.
> Only a single task attempt will be writing to that directory and it should 
> know what it is doing. If there are conflicting file names and parts across 
> tasks, that won't even get picked up at this point. Oh, and none of the 
> committers ever check for this: you'll get the last file manifested (s3a) or 
> renamed (file).
> If we skip the checks we will save 2 HTTP requests/file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18571) Qualify the upgrade.

2022-12-12 Thread Ahmar Suhail (Jira)
Ahmar Suhail created HADOOP-18571:
-

 Summary: Qualify the upgrade. 
 Key: HADOOP-18571
 URL: https://issues.apache.org/jira/browse/HADOOP-18571
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmar Suhail


Run tests as per [qualifying an aws sdk 
update|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md#-qualifying-an-aws-sdk-update]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18570) Update region logic

2022-12-12 Thread Ahmar Suhail (Jira)
Ahmar Suhail created HADOOP-18570:
-

 Summary: Update region logic
 Key: HADOOP-18570
 URL: https://issues.apache.org/jira/browse/HADOOP-18570
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmar Suhail


SDK V2 will no longer resolve a bucket's region if it is not set when 
initialising the client.

The current logic will always make a head-bucket call on FS initialisation. We 
should review this. Possible solution (sketched below):
 * Warn if the region is not set.
 * If no region is set, try to resolve it; if resolution fails, throw an 
exception. Cache the region to optimise for short-lived FS instances.
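
A minimal sketch of that flow (the lookup helper is hypothetical; 
fs.s3a.endpoint.region is the existing S3A region option):

{code:java}
private static final ConcurrentHashMap<String, String> BUCKET_REGION_CACHE =
    new ConcurrentHashMap<>();

String regionForBucket(String bucket, Configuration conf) throws IOException {
  String region = conf.getTrimmed("fs.s3a.endpoint.region", "");
  if (!region.isEmpty()) {
    return region;  // explicitly configured: no lookup needed
  }
  LOG.warn("No region configured for bucket {}; attempting to resolve it",
      bucket);
  String resolved = BUCKET_REGION_CACHE.get(bucket);
  if (resolved == null) {
    resolved = lookupBucketRegion(bucket);  // hypothetical, e.g. HeadBucket
    if (resolved == null) {
      throw new IOException("Cannot determine region of bucket " + bucket);
    }
    // cache so short-lived filesystem instances skip the round trip
    BUCKET_REGION_CACHE.put(bucket, resolved);
  }
  return resolved;
}
{code}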



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18569) NFS Gateway may release buffer too early

2022-12-12 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HADOOP-18569:
-

 Summary: NFS Gateway may release buffer too early
 Key: HADOOP-18569
 URL: https://issues.apache.org/jira/browse/HADOOP-18569
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 3.4.0, 3.3.5, 3.2.5, 3.3.9
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


After upgrading Netty from 4.1.68 to 4.1.77 (HADOOP-18079), NFS Gateway started 
crashing when writing data (can be easily reproduced by a few 10MB+ files).  
The problem was triggered by [reduced default chunk size in 
PooledByteBufAllocator|https://github.com/netty/netty/commit/f650303911] (in 
4.1.75), but it turned out to be caused by a buffer released too early in NFS 
Gateway.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5184: HDFS-16861. RBF. Truncate API always fails when dirs use AllResolver order on Router

2022-12-12 Thread GitBox


hadoop-yetus commented on PR #5184:
URL: https://github.com/apache/hadoop/pull/5184#issuecomment-1346164098

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  32m  8s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 128m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5184/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5184 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 22de0d9734d4 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 13fea2b96ba86961503f8a64ae824090fa4289fa |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5184/6/testReport/ |
   | Max. process+thread count | 3559 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5184/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Created] (HADOOP-18568) Magic Committer optional clean up

2022-12-12 Thread Jira
André F. created HADOOP-18568:
-

 Summary: Magic Committer optional clean up 
 Key: HADOOP-18568
 URL: https://issues.apache.org/jira/browse/HADOOP-18568
 Project: Hadoop Common
  Issue Type: Wish
  Components: fs/s3
Affects Versions: 3.3.3
Reporter: André F.


It seems that deleting the `__magic` folder, depending on the number of 
tasks/partitions used on a given Spark job, can take a really long time. I'm 
seeing the following behavior on a given Spark job (processing ~30TB, with 
~420k tasks) using the magic committer:
{code:java}
2022-12-10T21:25:19.629Z pool-3-thread-32 INFO MagicS3GuardCommitter: Starting: 
Deleting magic directory s3a://my-bucket/random_hash/__magic
2022-12-10T21:52:03.250Z pool-3-thread-32 INFO MagicS3GuardCommitter: Deleting 
magic directory s3a://my-bucket/random_hash/__magic: duration 26:43.620s {code}
I don't see a way out of it, since the deletion of S3 objects needs to list all 
objects under a prefix, and this is what may be taking so much time. Could we 
somehow make this cleanup optional? (The idea would be to delegate it to S3 
lifecycle policies so as not to create this overhead in the commit phase.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org