[GitHub] [hadoop] hadoop-yetus commented on pull request #6018: HDFS-17178: BootstrapStandby needs to handle RollingUpgrade

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6018:
URL: https://github.com/apache/hadoop/pull/6018#issuecomment-1711096914

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 307 unchanged - 1 
fixed = 307 total (was 308)  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 221m 15s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6018/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 364m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6018/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6018 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 48d9cb677cb2 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e791f4e61f7e44d866140219c5593c6f766aa8cd |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6018/9/testReport/ |
   | Max. process+thread count | 3046 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6018/9/console |
   | versions | git=2.25.1 

[GitHub] [hadoop] dannytbecker commented on a diff in pull request #6018: HDFS-17178: BootstrapStandby needs to handle RollingUpgrade

2023-09-07 Thread via GitHub


dannytbecker commented on code in PR #6018:
URL: https://github.com/apache/hadoop/pull/6018#discussion_r1319368699


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java:
##
@@ -172,6 +185,104 @@ public void testDownloadingLaterCheckpoint() throws 
Exception {
 restartNameNodesFromIndex(1);
   }
 
+  /**
+   * Test for downloading a checkpoint while the cluster is in rolling upgrade.
+   */
+  @Test
+  public void testRollingUpgradeBootstrapStandby() throws Exception {

Review Comment:
   I created a draft PR to show my minimal changes for the test. I added a 
comment showing where the test fails, along with the output from the log.
   https://github.com/apache/hadoop/pull/6031/files



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dannytbecker commented on a diff in pull request #6031: Dannytbecker/confirm unit test

2023-09-07 Thread via GitHub


dannytbecker commented on code in PR #6031:
URL: https://github.com/apache/hadoop/pull/6031#discussion_r1319367767


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java:
##
@@ -172,6 +185,104 @@ public void testDownloadingLaterCheckpoint() throws 
Exception {
 restartNameNodesFromIndex(1);
   }
 
+  /**
+   * Test for downloading a checkpoint made at a later checkpoint
+   * from the active.
+   */
+  @Test
+  public void testRollingUpgradeBootstrapStandby() throws Exception {
+    removeStandbyNameDirs();
+
+    int futureVersion = NameNodeLayoutVersion.CURRENT_LAYOUT_VERSION - 1;
+
+    DistributedFileSystem fs = cluster.getFileSystem(0);
+    fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
+    fs.saveNamespace();
+    fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
+
+    // Setup BootstrapStandby to think it is a future NameNode version
+    BootstrapStandby bs = spy(new BootstrapStandby());
+    doAnswer(nsInfo -> {
+      NamespaceInfo nsInfoSpy = (NamespaceInfo) spy(nsInfo.callRealMethod());
+      doReturn(futureVersion).when(nsInfoSpy).getServiceLayoutVersion();
+      return nsInfoSpy;
+    }).when(bs).getProxyNamespaceInfo(any());
+
+    // BootstrapStandby should fail if the node has a future version
+    // and the cluster isn't in rolling upgrade
+    bs.setConf(cluster.getConfiguration(1));
+    assertEquals("BootstrapStandby should return ERR_CODE_INVALID_VERSION",
+        ERR_CODE_INVALID_VERSION, bs.run(new String[]{"-force"}));
+
+    // Start rolling upgrade
+    fs.rollingUpgrade(RollingUpgradeAction.PREPARE);
+    nn0 = spy(nn0);
+
+    // Make nn0 think it is a future version
+    doAnswer(fsImage -> {
+      FSImage fsImageSpy = (FSImage) spy(fsImage.callRealMethod());
+      doAnswer(storage -> {
+        NNStorage storageSpy = (NNStorage) spy(storage.callRealMethod());
+        doReturn(futureVersion).when(storageSpy).getServiceLayoutVersion();
+        return storageSpy;
+      }).when(fsImageSpy).getStorage();
+      return fsImageSpy;
+    }).when(nn0).getFSImage();
+
+    // Roll edit logs a few times to inflate txid
+    nn0.getRpcServer().rollEditLog();
+    nn0.getRpcServer().rollEditLog();
+    // Make checkpoint
+    NameNodeAdapter.enterSafeMode(nn0, false);
+    NameNodeAdapter.saveNamespace(nn0);
+    NameNodeAdapter.leaveSafeMode(nn0);
+
+    long expectedCheckpointTxId = NameNodeAdapter.getNamesystem(nn0)
+        .getFSImage().getMostRecentCheckpointTxId();
+    assertEquals(11, expectedCheckpointTxId);
+
+    for (int i = 1; i < maxNNCount; i++) {
+      // BootstrapStandby on Standby NameNode
+      bs.setConf(cluster.getConfiguration(i));
+      bs.run(new String[]{"-force"});

Review Comment:
   This returns a failure code, ERR_CODE_INVALID_VERSION (3). The 
assertNNHasCheckpoints on the next line throws an NPE because the name dir is 
empty. Here is the output from my test run:
   `2023-09-07T22:03:52,791 ERROR ha.BootstrapStandby 
(BootstrapStandby.java:doRun(217)) - Layout version on remote node (-67) does 
not match this node's layout version (-68)`
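   
   For reference, a minimal way to surface that failure directly in the test, instead of the later NPE from the empty name dir, would be to assert on the exit code inside the loop. This is only a sketch against the draft test above, not part of the patch:
   
   ```java
   for (int i = 1; i < maxNNCount; i++) {
     // BootstrapStandby on the standby NameNode; 0 is the success exit code,
     // so the version mismatch (ERR_CODE_INVALID_VERSION = 3) fails the test
     // right here rather than via the later NPE.
     bs.setConf(cluster.getConfiguration(i));
     assertEquals("BootstrapStandby should succeed during rolling upgrade",
         0, bs.run(new String[]{"-force"}));
   }
   ```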



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dannytbecker opened a new pull request, #6031: Dannytbecker/confirm unit test

2023-09-07 Thread via GitHub


dannytbecker opened a new pull request, #6031:
URL: https://github.com/apache/hadoop/pull/6031

   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Taher-Ghaleb commented on pull request #5982: HADOOP-18866. Refactor @Test(expected) with assertThrows

2023-09-07 Thread via GitHub


Taher-Ghaleb commented on PR #5982:
URL: https://github.com/apache/hadoop/pull/5982#issuecomment-1711080180

   Right, I agree. That is the ultimate goal of our research, to automatically 
improve test quality from various perspectives. Thanks @steveloughran.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #6030: YARN-11564. Fix wrong config in yarn-default.xml

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6030:
URL: https://github.com/apache/hadoop/pull/6030#issuecomment-1711078913

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  shadedclient  |  89m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  shadedclient  |  40m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   5m 21s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6030/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6030 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 4e8d2669691c 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 878a8e41e22bcfb4ae58ca32ed121c6b0d7388c0 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6030/1/testReport/ |
   | Max. process+thread count | 600 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6030/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hadoop] granewang commented on pull request #6026: YARN-11563. Fix word misspellings from CSAssignemnt to CSAssignment

2023-09-07 Thread via GitHub


granewang commented on PR #6026:
URL: https://github.com/apache/hadoop/pull/6026#issuecomment-1710997109

   A UT is not needed; this change just fixes misspellings.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhengchenyu opened a new pull request, #6030: YARN-11564. Fix wrong config in yarn-default.xml

2023-09-07 Thread via GitHub


zhengchenyu opened a new pull request, #6030:
URL: https://github.com/apache/hadoop/pull/6030

   ### Description of PR
   
   https://issues.apache.org/jira/browse/YARN-11564
   
   
   ### How was this patch tested?
   
   ### For code changes:
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] LiuGuH opened a new pull request, #6029: Add synchronized on lockLeakCheck() because threadCountMap is not thr…

2023-09-07 Thread via GitHub


LiuGuH opened a new pull request, #6029:
URL: https://github.com/apache/hadoop/pull/6029

   …ead safe.
   
   
   
   ### Description of PR
   threadCountMap is not thread-safe. The other functions that access it are 
protected by synchronized, except lockLeakCheck(). Add synchronized to the 
lockLeakCheck() function as well.
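   
   As a rough, self-contained illustration of the race being described (the class, fields, and methods other than `threadCountMap` and `lockLeakCheck()` are made up for the sketch; this is not the actual patch):
   
   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Illustrative only: every writer of the map is synchronized, so the
   // checker must be synchronized too, otherwise its iteration can race with
   // the writers and throw ConcurrentModificationException or read
   // inconsistent counts.
   class LockTracker {
     private final Map<Long, Integer> threadCountMap = new HashMap<>();

     synchronized void acquired(long threadId) {
       threadCountMap.merge(threadId, 1, Integer::sum);
     }

     synchronized void released(long threadId) {
       threadCountMap.merge(threadId, -1, Integer::sum);
     }

     synchronized void lockLeakCheck() {
       threadCountMap.forEach((threadId, count) -> {
         if (count != 0) {
           System.err.println("Possible lock leak: thread " + threadId
               + " count=" + count);
         }
       });
     }
   }
   ```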
   
   
   
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] KeeProMise commented on pull request #5917: HDFS-17139. RBF: For the doc of the class RouterAdminProtocolTranslatorPB, it describes the function of the class ClientNamenodeProtocolTr

2023-09-07 Thread via GitHub


KeeProMise commented on PR #5917:
URL: https://github.com/apache/hadoop/pull/5917#issuecomment-1710958664

   @goiri If there are no more comments here, please help merge it. Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] szilard-nemeth commented on a diff in pull request #6027: YARN-11468. Zookeeper SSL/TLS support

2023-09-07 Thread via GitHub


szilard-nemeth commented on code in PR #6027:
URL: https://github.com/apache/hadoop/pull/6027#discussion_r1319234289


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java:
##
@@ -853,6 +853,11 @@ public static boolean isAclEnabled(Configuration conf) {
   /** Zookeeper interaction configs */
   public static final String RM_ZK_PREFIX = RM_PREFIX + "zk-";
 
+  /** Enable Zookeeper SSL/TLS communication */
+  public static final String RM_ZK_CLIENT_SSL_ENABLED =
+  RM_ZK_PREFIX + "client-ssl.enabled";
+  public static final boolean DEFAULT_RM_ZK_CLIENT_SSL_ENABLED = Boolean.FALSE;

Review Comment:
   In this case I don't think it makes much sense to use the static 
`Boolean.FALSE` object, as the type is a primitive `boolean` anyway (see the 
sketch below).
   See: 
https://stackoverflow.com/questions/20090306/what-is-the-difference-between-false-and-boolean-false
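   
   For illustration, the plain literal form this comment is pointing at (a one-line sketch, not the actual diff):
   
   ```java
   // The primitive literal avoids the pointless auto-unboxing of Boolean.FALSE:
   public static final boolean DEFAULT_RM_ZK_CLIENT_SSL_ENABLED = false;
   ```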



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMStoreCommands.java:
##
@@ -101,6 +102,16 @@ public void testFormatConfStoreCmdForZK() throws Exception 
{
 }
   }
 
+  @Test
+  public void testSSLEnabledConfiguration() {
+    // Test if we can enable SSL/TLS for the ZK Curator Client in YARN.
+    Configuration conf = new Configuration();
+    conf.set(YarnConfiguration.RM_ZK_CLIENT_SSL_ENABLED, Boolean.TRUE.toString());
+
+    assertEquals("The " + YarnConfiguration.RM_ZK_CLIENT_SSL_ENABLED + " value should be true.",
+        conf.get(YarnConfiguration.RM_ZK_CLIENT_SSL_ENABLED), Boolean.TRUE.toString());
+  }

Review Comment:
   Here, you are only testing the behavior of the YarnConfiguration class: you set something on it and assert what comes back from `conf.get()`.
   In my view, this kind of test does not belong here but in the tests of `YarnConfiguration`, and I assume the `Configuration` class already has a similar or identical testcase for string values.
   
   Besides, is this config going to be a boolean-typed or string-typed config?
   There's also a method called `Configuration#setBoolean`.
   
   Can you add 2 testcases for the SSL settings? 
   One should test that ZKCuratorManager is started with SSL disabled by default. The other should check that if the SSL setting is set to true, ZKCuratorManager is started with SSL enabled. (See the small sketch below for the config-typing point.)
   I'm not sure which test class those 2 testcases would belong to, but I assume there's something where the Curator classes are mocked.
   If not, you can probably get away with a MockRM-based test, even if the Curator classes are real instances and not mocks.
   Between the two, I would prefer the former approach.
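   
   To illustrate the typing point only (a sketch using `Configuration#setBoolean`/`getBoolean`, which keep the flag boolean-typed; this is not one of the two ZKCuratorManager testcases requested above):
   
   ```java
   @Test
   public void testZkClientSslFlagIsBooleanTyped() {
     Configuration conf = new Configuration();
     // setBoolean/getBoolean avoid round-tripping the value through strings.
     conf.setBoolean(YarnConfiguration.RM_ZK_CLIENT_SSL_ENABLED, true);
     assertTrue(conf.getBoolean(YarnConfiguration.RM_ZK_CLIENT_SSL_ENABLED,
         YarnConfiguration.DEFAULT_RM_ZK_CLIENT_SSL_ENABLED));
   }
   ```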



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java:
##
@@ -853,6 +853,11 @@ public static boolean isAclEnabled(Configuration conf) {
   /** Zookeeper interaction configs */
   public static final String RM_ZK_PREFIX = RM_PREFIX + "zk-";
 
+  /** Enable Zookeeper SSL/TLS communication */
+  public static final String RM_ZK_CLIENT_SSL_ENABLED =
+  RM_ZK_PREFIX + "client-ssl.enabled";

Review Comment:
   Nit: Can fit into a single line.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #6011: YARN-11434. [Router] UGI conf doesn't read user overridden configurations on Router startup.

2023-09-07 Thread via GitHub


slfan1989 commented on PR #6011:
URL: https://github.com/apache/hadoop/pull/6011#issuecomment-1710898839

   @goiri Thank you very much for your help in reviewing the code!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 merged pull request #6011: YARN-11434. [Router] UGI conf doesn't read user overridden configurations on Router startup.

2023-09-07 Thread via GitHub


slfan1989 merged PR #6011:
URL: https://github.com/apache/hadoop/pull/6011


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] simbadzina commented on a diff in pull request #6018: HDFS-17178: BootstrapStandby needs to handle RollingUpgrade

2023-09-07 Thread via GitHub


simbadzina commented on code in PR #6018:
URL: https://github.com/apache/hadoop/pull/6018#discussion_r1319211387


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java:
##
@@ -172,6 +185,104 @@ public void testDownloadingLaterCheckpoint() throws 
Exception {
 restartNameNodesFromIndex(1);
   }
 
+  /**
+   * Test for downloading a checkpoint while the cluster is in rolling upgrade.
+   */
+  @Test
+  public void testRollingUpgradeBootstrapStandby() throws Exception {

Review Comment:
   Cool. Yeah, it would be great to verify that the test fails in ways beyond 
the `assertThrows` check and the non-zero error code.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] simbadzina commented on a diff in pull request #6018: HDFS-17178: BootstrapStandby needs to handle RollingUpgrade

2023-09-07 Thread via GitHub


simbadzina commented on code in PR #6018:
URL: https://github.com/apache/hadoop/pull/6018#discussion_r1319209867


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BootstrapStandby.java:
##
@@ -405,8 +423,14 @@ private boolean checkLogsAvailableForRead(FSImage image, 
long imageTxId,
 }
   }
 
-  private boolean checkLayoutVersion(NamespaceInfo nsInfo) throws IOException {
-    return (nsInfo.getLayoutVersion() == HdfsServerConstants.NAMENODE_LAYOUT_VERSION);
+  private boolean checkLayoutVersion(NamespaceInfo nsInfo, boolean isRollingUpgrade) {
+    if (isRollingUpgrade) {
+      // During a rolling upgrade the service layout versions may be different,
+      // but we should check that the layout version being sent is compatible
+      return nsInfo.getLayoutVersion() <=
+          HdfsServerConstants.MINIMUM_COMPATIBLE_NAMENODE_LAYOUT_VERSION;

Review Comment:
   Got you. Thanks for clarifying.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dannytbecker commented on a diff in pull request #6018: HDFS-17178: BootstrapStandby needs to handle RollingUpgrade

2023-09-07 Thread via GitHub


dannytbecker commented on code in PR #6018:
URL: https://github.com/apache/hadoop/pull/6018#discussion_r1319203417


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java:
##
@@ -172,6 +185,104 @@ public void testDownloadingLaterCheckpoint() throws 
Exception {
 restartNameNodesFromIndex(1);
   }
 
+  /**
+   * Test for downloading a checkpoint while the cluster is in rolling upgrade.
+   */
+  @Test
+  public void testRollingUpgradeBootstrapStandby() throws Exception {
+    removeStandbyNameDirs();
+
+    int futureVersion = NameNodeLayoutVersion.CURRENT_LAYOUT_VERSION - 1;
+
+    DistributedFileSystem fs = cluster.getFileSystem(0);
+    fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
+    fs.saveNamespace();
+    fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);

Review Comment:
   Changed



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dannytbecker commented on a diff in pull request #6018: HDFS-17178: BootstrapStandby needs to handle RollingUpgrade

2023-09-07 Thread via GitHub


dannytbecker commented on code in PR #6018:
URL: https://github.com/apache/hadoop/pull/6018#discussion_r1319189453


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java:
##
@@ -172,6 +185,104 @@ public void testDownloadingLaterCheckpoint() throws 
Exception {
 restartNameNodesFromIndex(1);
   }
 
+  /**
+   * Test for downloading a checkpoint while the cluster is in rolling upgrade.
+   */
+  @Test
+  public void testRollingUpgradeBootstrapStandby() throws Exception {

Review Comment:
   I added some comments to your draft change which would cause the test not to 
work. I will cherry-pick the tests to an older version to verify on my side 
that the test would fail before this change.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dannytbecker commented on a diff in pull request #6018: HDFS-17178: BootstrapStandby needs to handle RollingUpgrade

2023-09-07 Thread via GitHub


dannytbecker commented on code in PR #6018:
URL: https://github.com/apache/hadoop/pull/6018#discussion_r1319177733


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BootstrapStandby.java:
##
@@ -405,8 +423,14 @@ private boolean checkLogsAvailableForRead(FSImage image, 
long imageTxId,
 }
   }
 
-  private boolean checkLayoutVersion(NamespaceInfo nsInfo) throws IOException {
-    return (nsInfo.getLayoutVersion() == HdfsServerConstants.NAMENODE_LAYOUT_VERSION);
+  private boolean checkLayoutVersion(NamespaceInfo nsInfo, boolean isRollingUpgrade) {
+    if (isRollingUpgrade) {
+      // During a rolling upgrade the service layout versions may be different,
+      // but we should check that the layout version being sent is compatible
+      return nsInfo.getLayoutVersion() <=
+          HdfsServerConstants.MINIMUM_COMPATIBLE_NAMENODE_LAYOUT_VERSION;

Review Comment:
   The version numbers are negative, so 
`HdfsServerConstants.MINIMUM_COMPATIBLE_NAMENODE_LAYOUT_VERSION` is -61. We 
want the nsInfo's layout version to be a "higher" (newer) version than that 
minimum of -61, so we need to use `<=`, because a "higher" version is a more 
negative number, like -67.
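   
   To make the arithmetic concrete (just an illustration of the numbers above, not code from the patch):
   
   ```java
   // Layout versions are negative and newer versions are more negative:
   //   MINIMUM_COMPATIBLE_NAMENODE_LAYOUT_VERSION = -61  (oldest compatible)
   //   remote nsInfo layout version               = -67  ("higher", i.e. newer)
   // -67 <= -61 is true, so '<=' accepts any version at least as new as -61,
   // whereas '>=' would reject it.
   int minimumCompatible = -61;
   int remoteLayoutVersion = -67;
   boolean compatible = remoteLayoutVersion <= minimumCompatible;  // true
   ```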



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] simbadzina commented on a diff in pull request #6018: HDFS-17178: BootstrapStandby needs to handle RollingUpgrade

2023-09-07 Thread via GitHub


simbadzina commented on code in PR #6018:
URL: https://github.com/apache/hadoop/pull/6018#discussion_r1319140765


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java:
##
@@ -172,6 +185,104 @@ public void testDownloadingLaterCheckpoint() throws 
Exception {
 restartNameNodesFromIndex(1);
   }
 
+  /**
+   * Test for downloading a checkpoint while the cluster is in rolling upgrade.
+   */
+  @Test
+  public void testRollingUpgradeBootstrapStandby() throws Exception {
+    removeStandbyNameDirs();
+
+    int futureVersion = NameNodeLayoutVersion.CURRENT_LAYOUT_VERSION - 1;
+
+    DistributedFileSystem fs = cluster.getFileSystem(0);
+    fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
+    fs.saveNamespace();
+    fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);

Review Comment:
   `setSafeMode(HdfsConstants.SafeModeAction action)` is deprecated. There is a 
new method `setSafeMode(SafeModeAction)`



##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java:
##
@@ -172,6 +185,104 @@ public void testDownloadingLaterCheckpoint() throws 
Exception {
 restartNameNodesFromIndex(1);
   }
 
+  /**
+   * Test for downloading a checkpoint while the cluster is in rolling upgrade.
+   */
+  @Test
+  public void testRollingUpgradeBootstrapStandby() throws Exception {

Review Comment:
   Is it possible to have a unit test that reproduces the error you shared in 
the PR description, and is then resolved by your new code? Aside from the error 
code and the expected exception, this test passes with the old code: 
https://github.com/simbadzina/hadoop/pull/2/files



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BootstrapStandby.java:
##
@@ -405,8 +423,14 @@ private boolean checkLogsAvailableForRead(FSImage image, 
long imageTxId,
 }
   }
 
-  private boolean checkLayoutVersion(NamespaceInfo nsInfo) throws IOException {
-    return (nsInfo.getLayoutVersion() == HdfsServerConstants.NAMENODE_LAYOUT_VERSION);
+  private boolean checkLayoutVersion(NamespaceInfo nsInfo, boolean isRollingUpgrade) {
+    if (isRollingUpgrade) {
+      // During a rolling upgrade the service layout versions may be different,
+      // but we should check that the layout version being sent is compatible
+      return nsInfo.getLayoutVersion() <=
+          HdfsServerConstants.MINIMUM_COMPATIBLE_NAMENODE_LAYOUT_VERSION;

Review Comment:
   Shouldn't the comparison here be `>=`, to validate that the layoutVersion is 
at least the minimum compatible one.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on pull request #6018: HDFS-17178: BootstrapStandby needs to handle RollingUpgrade

2023-09-07 Thread via GitHub


goiri commented on PR #6018:
URL: https://github.com/apache/hadoop/pull/6018#issuecomment-1710839253

   @simbadzina any comments?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #6027: YARN-11468. Zookeeper SSL/TLS support

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6027:
URL: https://github.com/apache/hadoop/pull/6027#issuecomment-1710806621

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  18m 14s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   7m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   7m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   2m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   3m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   6m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   7m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   7m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 49s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6027/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 203 unchanged 
- 0 fixed = 204 total (was 203)  |
   | +1 :green_heart: |  mvnsite  |   3m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   6m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 18s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 55s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 101m 20s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 307m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6027/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6027 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 5608647ed35e 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3ece3935a2b941e7e2fdbe09acbffebba5afe360 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6027/1/testReport/ |
   | Max. process+thread 

[jira] [Updated] (HADOOP-18815) unnecessary NullPointerException encountered when starting HttpServer2 with prometheus enabled

2023-09-07 Thread ConfX (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ConfX updated HADOOP-18815:
---
Description: 
h2. What happened?

Attempt to start an {{HttpServer2}} failed due to an NPE thrown in 
{{{}MetricsSystemImpl{}}}.
h2. Where's the bug?

In line 1278 of {{{}HttpServer2{}}}, if the support for prometheus is enabled 
the server registers a prometheus sink:
{noformat}
        if (prometheusSupport) {
          DefaultMetricsSystem.instance()
              .register("prometheus", "Hadoop metrics prometheus exporter",
                  prometheusMetricsSink);
        }{noformat}
However, if the MetricsSystemImpl returned by DefaultMetricsSystem.instance has 
not been started or initialized, the config of the metrics system is still 
null, so the null check at the start of MetricsSystemImpl.registerSink fails. A 
better way of handling this would be to check in advance whether the metrics 
system has been initialized, and initialize it if it has not.
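
A minimal sketch of that guard, assuming it lives next to the existing prometheusSupport block in HttpServer2.start; the initialize() call here stands in for the "check and initialize if needed" step described above, the prefix string is illustrative, and this is not a tested patch:
{noformat}
        if (prometheusSupport) {
          // Ensure the metrics system has been initialized so that
          // MetricsSystemImpl.registerSink() does not see a null config.
          DefaultMetricsSystem.initialize("HttpServer2");
          DefaultMetricsSystem.instance()
              .register("prometheus", "Hadoop metrics prometheus exporter",
                  prometheusMetricsSink);
        }{noformat}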
h2. How to reproduce?

(1) set hadoop.prometheus.endpoint.enabled to true

(2) run org.apache.hadoop.http.TestHttpServer#testHttpResonseContainsDeny
h2. Stacktrace
{noformat}
java.io.IOException: Problem starting http server
        ...
Caused by: java.lang.NullPointerException: config
    at 
org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:899)
    at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSink(MetricsSystemImpl.java:298)
    at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:277)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1279)
    ... 34 more{noformat}
For an easy reproduction, run the reproduce.sh in the attachment.

We are happy to provide a patch if this issue is confirmed.

  was:
h2. What happened?

Attempt to start an {{HttpServer2}} failed due to an NPE thrown in 
{{{}MetricsSystemImpl{}}}.
h2. Where's the bug?

In line 1278 of {{{}HttpServer2{}}}, if the support for prometheus is enabled 
the server registers a prometheus sink:
{noformat}
        if (prometheusSupport) {
          DefaultMetricsSystem.instance()
              .register("prometheus", "Hadoop metrics prometheus exporter",
                  prometheusMetricsSink);
        }{noformat}
However, a problem is that if the MetricsSystemImpl returned by the 
DefaultMetricsSystem.instance has not been start nor init, the config of the 
metric system would be set to null, thus failing the nullity check at the start 
of MetricsSystemImpl.registerSink. A better way of handling this would be to 
check in advance if the metric system has been initialized and initialize it if 
it has not been initialized.
h2. How to reproduce?

(1) set hadoop.prometheus.endpoint.enabled to true

(2) run 
org.apache.hadoop.http.TestHttpServer#testHttpResonseContainsDenyStacktrace
{noformat}
java.io.IOException: Problem starting http server
        ...
Caused by: java.lang.NullPointerException: config
    at 
org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:899)
    at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSink(MetricsSystemImpl.java:298)
    at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:277)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1279)
    ... 34 more{noformat}
For an easy reproduction, run the reproduce.sh in the attachment.

We are happy to provide a patch if this issue is confirmed.


> unnecessary NullPointerException encountered when starting HttpServer2 with 
> prometheus enabled 
> ---
>
> Key: HADOOP-18815
> URL: https://issues.apache.org/jira/browse/HADOOP-18815
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.3
>Reporter: ConfX
>Priority: Critical
> Attachments: reproduce.sh
>
>
> h2. What happened?
> Attempt to start an {{HttpServer2}} failed due to an NPE thrown in 
> {{{}MetricsSystemImpl{}}}.
> h2. Where's the bug?
> In line 1278 of {{{}HttpServer2{}}}, if the support for prometheus is enabled 
> the server registers a prometheus sink:
> {noformat}
>         if (prometheusSupport) {
>           DefaultMetricsSystem.instance()
>               .register("prometheus", "Hadoop metrics prometheus exporter",
>                   prometheusMetricsSink);
>         }{noformat}
> However, a problem is that if the MetricsSystemImpl returned by the 
> DefaultMetricsSystem.instance has not been start nor init, the config of the 
> metric system would be set to null, thus failing the nullity check at the 
> start of MetricsSystemImpl.registerSink. A better way of handling this would 
> be to check in advance if the metric system 

[jira] [Updated] (HADOOP-18815) unnecessary NullPointerException encountered when starting HttpServer2 with prometheus enabled

2023-09-07 Thread ConfX (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ConfX updated HADOOP-18815:
---
Description: 
h2. What happened?

Attempt to start an {{HttpServer2}} failed due to an NPE thrown in 
{{{}MetricsSystemImpl{}}}.
h2. Where's the bug?

In line 1278 of {{{}HttpServer2{}}}, if the support for prometheus is enabled 
the server registers a prometheus sink:
{noformat}
        if (prometheusSupport) {
          DefaultMetricsSystem.instance()
              .register("prometheus", "Hadoop metrics prometheus exporter",
                  prometheusMetricsSink);
        }{noformat}
However, a problem is that if the MetricsSystemImpl returned by the 
DefaultMetricsSystem.instance has not been start nor init, the config of the 
metric system would be set to null, thus failing the nullity check at the start 
of MetricsSystemImpl.registerSink. A better way of handling this would be to 
check in advance if the metric system has been initialized and initialize it if 
it has not been initialized.
h2. How to reproduce?

(1) set hadoop.prometheus.endpoint.enabled to true

(2) run 
org.apache.hadoop.http.TestHttpServer#testHttpResonseContainsDenyStacktrace
{noformat}
java.io.IOException: Problem starting http server
        ...
Caused by: java.lang.NullPointerException: config
    at 
org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:899)
    at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSink(MetricsSystemImpl.java:298)
    at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:277)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1279)
    ... 34 more{noformat}
For an easy reproduction, run the reproduce.sh in the attachment.

We are happy to provide a patch if this issue is confirmed.

  was:
h2. What happened?

Attempt to start an {{HttpServer2}} failed due to an NPE thrown in 
{{{}MetricsSystemImpl{}}}.
h2. Where's the bug?

In line 1278 of {{{}HttpServer2{}}}, if the support for prometheus is enabled 
the server registers a prometheus sink:
{noformat}
        if (prometheusSupport) {
          DefaultMetricsSystem.instance()
              .register("prometheus", "Hadoop metrics prometheus exporter",
                  prometheusMetricsSink);
        }{noformat}
However, a problem is that if the MetricsSystemImpl returned by the 
DefaultMetricsSystem.instance has not been start nor init, the config of the 
metric system would be set to null, thus failing the nullity check at the start 
of MetricsSystemImpl.registerSink. A better way of handling this would be to 
check in advance if the metric system has been initialized and initialize it if 
it has not been initialized.How to reproduce?(1) set 
hadoop.prometheus.endpoint.enabled to true (2) run 
org.apache.hadoop.http.TestHttpServer#testHttpResonseContainsDenyStacktrace
{noformat}
java.io.IOException: Problem starting http server
        ...
Caused by: java.lang.NullPointerException: config
    at 
org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:899)
    at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSink(MetricsSystemImpl.java:298)
    at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:277)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1279)
    ... 34 more{noformat}
For an easy reproduction, run the reproduce.sh in the attachment.

We are happy to provide a patch if this issue is confirmed.


> unnecessary NullPointerException encountered when starting HttpServer2 with 
> prometheus enabled 
> ---
>
> Key: HADOOP-18815
> URL: https://issues.apache.org/jira/browse/HADOOP-18815
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.3
>Reporter: ConfX
>Priority: Critical
> Attachments: reproduce.sh
>
>
> h2. What happened?
> Attempt to start an {{HttpServer2}} failed due to an NPE thrown in 
> {{{}MetricsSystemImpl{}}}.
> h2. Where's the bug?
> In line 1278 of {{{}HttpServer2{}}}, if the support for prometheus is enabled 
> the server registers a prometheus sink:
> {noformat}
>         if (prometheusSupport) {
>           DefaultMetricsSystem.instance()
>               .register("prometheus", "Hadoop metrics prometheus exporter",
>                   prometheusMetricsSink);
>         }{noformat}
> However, a problem is that if the MetricsSystemImpl returned by the 
> DefaultMetricsSystem.instance has not been start nor init, the config of the 
> metric system would be set to null, thus failing the nullity check at the 
> start of MetricsSystemImpl.registerSink. A better way of handling this would 
> be to check in advance if the metric system has been 

[GitHub] [hadoop] hadoop-yetus commented on pull request #6028: HDFS-17180. HttpFS Add Support getTrashRoots API

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6028:
URL: https://github.com/apache/hadoop/pull/6028#issuecomment-1710686163

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   5m 54s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 141m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6028/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6028 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 88c8414e5b16 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2949e93cbc4cf6cb8498553e661ab6b1341d1083 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6028/1/testReport/ |
   | Max. process+thread count | 851 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6028/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] [hadoop] hadoop-yetus commented on pull request #5892: HDFS-17129. mis-order of ibr and fbr on datanode

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #5892:
URL: https://github.com/apache/hadoop/pull/5892#issuecomment-1710616242

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 238m  3s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5892/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 392m 54s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5892/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5892 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6c025fd83afd 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7be8614fcdd4342ac4b68f81c9b655eba66a6a79 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5892/4/testReport/ |
   | Max. process+thread count | 2335 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5892/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

[GitHub] [hadoop] hadoop-yetus commented on pull request #6024: HDFS-17177. ErasureCodingWork reconstruct ignore the block length is Long.MAX_VALUE

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6024:
URL: https://github.com/apache/hadoop/pull/6024#issuecomment-1710589948

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 224m 22s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 368m 27s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6024/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6024 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 69572b2943f7 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b9ea3fe053d7a63b0e723891a5e4737a64443272 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6024/2/testReport/ |
   | Max. process+thread count | 2903 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6024/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #6024: HDFS-17177. ErasureCodingWork reconstruct ignore the block length is Long.MAX_VALUE

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6024:
URL: https://github.com/apache/hadoop/pull/6024#issuecomment-1710588545

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 224m 27s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 365m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6024/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6024 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2242d61bea7b 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b9ea3fe053d7a63b0e723891a5e4737a64443272 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6024/3/testReport/ |
   | Max. process+thread count | 3179 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6024/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: 

[GitHub] [hadoop] steveloughran commented on a diff in pull request #5979: HADOOP-18861 ABFS: Fix failing tests for CPK

2023-09-07 Thread via GitHub


steveloughran commented on code in PR #5979:
URL: https://github.com/apache/hadoop/pull/5979#discussion_r1318937306


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestCustomerProvidedKey.java:
##
@@ -112,7 +112,7 @@ public ITestCustomerProvidedKey() throws Exception {
   @Test
   public void testReadWithCPK() throws Exception {
 final AzureBlobFileSystem fs = getAbfs(true);
-String fileName = path("/" + methodName.getMethodName()).toString();
+String fileName = path("/" + methodName.getMethodName()).toUri().getPath();

Review Comment:
   provide a single method for this, use it everywhere
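   
   A minimal sketch of what such a shared helper could look like, assuming the 
existing `path()` helper and the `methodName` rule already present in this test 
class (the helper name `testPathName` is illustrative, not the change actually 
made in the PR):
   ```java
   /** Store path for the current test method, without scheme or authority. */
   private String testPathName() throws Exception {
     // path(...) qualifies against the test filesystem; toUri().getPath()
     // keeps only the path component.
     return path("/" + methodName.getMethodName()).toUri().getPath();
   }
   ```
   Each test would then call `String fileName = testPathName();` instead of 
repeating the expression.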



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5982: HADOOP-18866. Refactor @Test(expected) with assertThrows

2023-09-07 Thread via GitHub


steveloughran commented on PR #5982:
URL: https://github.com/apache/hadoop/pull/5982#issuecomment-1710548213

   ok. now, one thing to consider there is: what stylecheckers etc. can we use 
to stop new PRs coming in which don't do all of this, or which lose stack traces 
when validating caught exceptions? All too often, the work of getting a PR in is 
the time spent teaching people how to write tests that meet my expectations 
(for me) and the time spent waiting for review, making the changes and 
repeating (for them). see #6003 as an example. if we have the CI tooling 
automatically imposing policies on tests, then everyone's time is better used.
   
   now, we do run checkstyle on all PRs, so if you have suggestions about how 
to do it there, or other maven plugins (better yet, prs with tests) then I'd be 
very happy. 
   
   put differently: let's automate enforcing test quality on new submissions 
before worrying about old tests which *appear to work ok*.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #6003: HADOOP-18869: [ABFS] Fixing Behavior of a File System APIs on root path

2023-09-07 Thread via GitHub


steveloughran commented on code in PR #6003:
URL: https://github.com/apache/hadoop/pull/6003#discussion_r1318926470


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java:
##
@@ -146,6 +148,30 @@ public void testCreateNonRecursive2() throws Exception {
 assertIsFile(fs, testFile);
   }
 
+  @Test
+  public void testCreateOnRoot() throws Exception {
+final AzureBlobFileSystem fs = getFileSystem();
+Path testFile = path(AbfsHttpConstants.ROOT_PATH);
+try {

Review Comment:
   you shouldn't need the double catch. intercept() will return the caught 
exception, typecast to the class of arg 1, so it can be asserted on. and if an 
exception is not the one expected, the stack trace is *too important to lose*, 
so rethrow it or use it as the cause of an assertion error:
   ```java
   AbfsRestOperationException e = intercept(...);
   if (e.getStatusCode() != HTTP_CONFLICT) {
     // rethrow if it's not the expected one.
     throw e;
   }
   ```
   



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java:
##
@@ -146,6 +149,26 @@ public void testCreateNonRecursive2() throws Exception {
 assertIsFile(fs, testFile);
   }
 
+  @Test
+  public void testCreateOnRoot() throws Exception {

Review Comment:
   aah. well, do that then. it's off by default to stop people accidentally 
deleting their local disk and then complaining. (this has never happened, 
but...)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhtttylz opened a new pull request, #6028: HDFS-17180. HttpFS Add Support getTrashRoots API

2023-09-07 Thread via GitHub


zhtttylz opened a new pull request, #6028:
URL: https://github.com/apache/hadoop/pull/6028

   JIRA: HDFS-17180. HttpFS Add Support getTrashRoots API


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5995: HADOOP-18818. Merge aws v2 upgrade feature branch into trunk

2023-09-07 Thread via GitHub


steveloughran commented on PR #5995:
URL: https://github.com/apache/hadoop/pull/5995#issuecomment-1710523411

   ok, this merge is *working*. 
   
   I will do the merge on Friday; I just want to make sure I've got the commit 
message right.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18884) [ABFS] Support VectorIO in ABFS Input Stream

2023-09-07 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18884:
---

 Summary: [ABFS] Support VectorIO in ABFS Input Stream
 Key: HADOOP-18884
 URL: https://issues.apache.org/jira/browse/HADOOP-18884
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.9
Reporter: Steve Loughran


the hadoop vectored IO APIs are supported in file:// and s3a://; there's a Hive 
ORC patch for this, and PARQUET-2171 adds it for Parquet, after which all apps 
using the library with a matching hadoop version and the feature enabled will 
get a significant speedup.

abfs needs to support it too, which needs support for parallel GET requests for 
different ranges.
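
As a hedged, application-side sketch of the API this issue is about (the path 
and range sizes below are illustrative): PositionedReadable#readVectored and 
FileRange already exist in hadoop-common, so the call works against abfs:// 
today, but it falls back to the default range-by-range reads until ABFS adds an 
optimized implementation with parallel GETs.

{code:java}
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AbfsVectoredReadSketch {
  public static void main(String[] args) throws Exception {
    // illustrative path; container/account names are placeholders
    Path file = new Path("abfs://container@account.dfs.core.windows.net/data/file.orc");
    try (FileSystem fs = FileSystem.get(file.toUri(), new Configuration());
         FSDataInputStream in = fs.open(file)) {
      // two non-contiguous ranges, e.g. a footer and one stripe of a columnar file
      List<FileRange> ranges = Arrays.asList(
          FileRange.createFileRange(0, 4096),
          FileRange.createFileRange(1_048_576, 65_536));
      // a store-specific implementation may issue the GETs in parallel;
      // the default implementation reads the ranges one by one
      in.readVectored(ranges, ByteBuffer::allocate);
      for (FileRange r : ranges) {
        ByteBuffer data = r.getData().get(); // completes when that range has been read
        System.out.println("read " + data.remaining() + " bytes @ " + r.getOffset());
      }
    }
  }
}
{code}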



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ferdelyi opened a new pull request, #6027: YARN-11468. Zookeeper SSL/TLS support

2023-09-07 Thread via GitHub


ferdelyi opened a new pull request, #6027:
URL: https://github.com/apache/hadoop/pull/6027

   
   
   ### New parameter introduced to enable SSL/TLS for the ZK Client for YARN HA, 
which takes effect when Curator is used (when 
yarn.resourcemanager.ha.curator-leader-elector.enabled is enabled).
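   
   A hedged configuration sketch of the combination described above. The only 
property name taken from this description is the curator leader-elector switch; 
the SSL/TLS property name below is a placeholder for the new parameter, not 
necessarily its real name:
   ```java
   import org.apache.hadoop.conf.Configuration;

   public class ZkSslYarnHaSketch {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       // existing switch quoted above: use the Curator-based leader elector
       conf.setBoolean("yarn.resourcemanager.ha.curator-leader-elector.enabled", true);
       // PLACEHOLDER name for the new SSL/TLS switch introduced by this PR
       conf.setBoolean("yarn.resourcemanager.zk-client-ssl.enabled", true);
       System.out.println("curator leader elector + ZK client TLS (sketch)");
     }
   }
   ```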
   
   
   ### How was this patch tested?
   Via an integration test on my cluster and also created a simple unit test to 
show that the new config setting works.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18447) Vectored IO: Threadpool should be closed on interrupts or during close calls

2023-09-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18447.
-
Resolution: Duplicate

HADOOP-18347 uses a bounded pool, so it is shut down in fs.close().
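
A generic sketch of the pattern referenced above, using only the JDK (this is 
not the actual S3AFileSystem code): the vectored-IO pool is bounded and owned 
by the filesystem instance, and closing the owner shuts the pool down so a 
cancelled query cannot leave range-read threads running in the background.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class BoundedVectoredIoOwner implements AutoCloseable {
  // bounded pool: a small, fixed number of range-read worker threads
  private final ExecutorService vectoredIoPool = Executors.newFixedThreadPool(4);

  @Override
  public void close() {
    // shut the workers down when the owning filesystem/stream is closed
    vectoredIoPool.shutdownNow();
  }
}
{code}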

> Vectored IO: Threadpool should be closed on interrupts or during close calls
> 
>
> Key: HADOOP-18447
> URL: https://issues.apache.org/jira/browse/HADOOP-18447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs, fs/adl, fs/s3
>Affects Versions: 3.3.5
>Reporter: Rajesh Balamohan
>Priority: Major
>  Labels: performance, stability
> Attachments: Screenshot 2022-09-08 at 9.22.07 AM.png
>
>
> The vectored IO threadpool should be closed on any interrupt or during 
> S3AFileSystem/S3AInputStream close() calls.
> E.g. a query got cancelled in the middle of its run; however, in the 
> background (e.g. LLAP) the vectored IO threads continued to run.
>  
> !Screenshot 2022-09-08 at 9.22.07 AM.png|width=537,height=164!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18447) Vectored IO: Threadpool should be closed on interrupts or during close calls

2023-09-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18447:

Affects Version/s: 3.3.5

> Vectored IO: Threadpool should be closed on interrupts or during close calls
> 
>
> Key: HADOOP-18447
> URL: https://issues.apache.org/jira/browse/HADOOP-18447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs, fs/adl, fs/s3
>Affects Versions: 3.3.5
>Reporter: Rajesh Balamohan
>Priority: Major
>  Labels: performance, stability
> Attachments: Screenshot 2022-09-08 at 9.22.07 AM.png
>
>
> The vectored IO threadpool should be closed on any interrupt or during 
> S3AFileSystem/S3AInputStream close() calls.
> E.g. a query got cancelled in the middle of its run; however, in the 
> background (e.g. LLAP) the vectored IO threads continued to run.
>  
> !Screenshot 2022-09-08 at 9.22.07 AM.png|width=537,height=164!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] saxenapranav commented on a diff in pull request #6025: Pass down eTag as part of accessCondition to SDK

2023-09-07 Thread via GitHub


saxenapranav commented on code in PR #6025:
URL: https://github.com/apache/hadoop/pull/6025#discussion_r1318832021


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java:
##
@@ -1242,7 +1242,10 @@ public void setEncodedKey(String anEncodedKey) {
  * when the stream is closed.
  */
 private void restoreKey() throws IOException {
-  store.rename(getEncodedKey(), getKey());
+  String key = getKey();
+  FileMetadata existingMetadata = store.retrieveMetadata(key);

Review Comment:
   As I understand from the code, there shouldn't be any write to the original 
file, and on close of the output stream on the tmpFile it is renamed back to 
the original file.
   Is there any flow which would be writing to the original file before the 
rename?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18691) Add a CallerContext getter on the Schedulable interface

2023-09-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762791#comment-17762791
 ] 

Steve Loughran commented on HADOOP-18691:
-

thanks, just wanted to cross-link so that the cross-project dependencies were 
known.

> Add a CallerContext getter on the Schedulable interface
> ---
>
> Key: HADOOP-18691
> URL: https://issues.apache.org/jira/browse/HADOOP-18691
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Christos Bisias
>Assignee: Christos Bisias
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.6
>
>
> We would like to add a default *{color:#00875a}CallerContext{color}* getter 
> on the *{color:#00875a}Schedulable{color}* interface
> {code:java}
> default public CallerContext getCallerContext() {
>   return null;  
> } {code}
> and then override it on the *{color:#00875a}ipc/Server.Call{color}* class
> {code:java}
> @Override
> public CallerContext getCallerContext() {  
>   return this.callerContext;
> } {code}
> to expose the already existing *{color:#00875a}callerContext{color}* field.
>  
> This change will help us access the *{color:#00875a}CallerContext{color}* on 
> an Apache Ozone *{color:#00875a}IdentityProvider{color}* implementation.
> On Ozone side the *{color:#00875a}FairCallQueue{color}* doesn't work with the 
> Ozone S3G, because all users are masked under a special S3G user and there is 
> no impersonation. Therefore, the FCQ reads only 1 user and becomes 
> ineffective. We can use the *{color:#00875a}CallerContext{color}* field to 
> store the current user and access it on the Ozone 
> {*}{color:#00875a}IdentityProvider{color}{*}.
>  
> This is a presentation with the proposed approach.
> [https://docs.google.com/presentation/d/1iChpCz_qf-LXiPyvotpOGiZ31yEUyxAdU4RhWMKo0c0/edit#slide=id.p]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18691) Add a CallerContext getter on the Schedulable interface

2023-09-07 Thread Christos Bisias (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762751#comment-17762751
 ] 

Christos Bisias commented on HADOOP-18691:
--

[~ste...@apache.org]

This is the ozone jira: HDDS-7319

Do you want me to extend the javadoc comment, above the code change, 
referencing the ozone Jira ticket?

https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Schedulable.java#L32-L41
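
For illustration, a hedged sketch of what such an extended javadoc could look 
like on the existing default method (the wording is mine, not the committed 
text, and the interface's other methods are omitted):

{code:java}
/**
 * Returns the CallerContext of this call, or null if none was set.
 * Exposed so that downstream IdentityProvider implementations, such as the
 * Apache Ozone S3 gateway case tracked in HDDS-7319, can read the real
 * end user behind a call.
 */
default CallerContext getCallerContext() {
  return null;
}
{code}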

> Add a CallerContext getter on the Schedulable interface
> ---
>
> Key: HADOOP-18691
> URL: https://issues.apache.org/jira/browse/HADOOP-18691
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Christos Bisias
>Assignee: Christos Bisias
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.6
>
>
> We would like to add a default *{color:#00875a}CallerContext{color}* getter 
> on the *{color:#00875a}Schedulable{color}* interface
> {code:java}
> default public CallerContext getCallerContext() {
>   return null;  
> } {code}
> and then override it on the *{color:#00875a}ipc/Server.Call{color}* class
> {code:java}
> @Override
> public CallerContext getCallerContext() {  
>   return this.callerContext;
> } {code}
> to expose the already existing *{color:#00875a}callerContext{color}* field.
>  
> This change will help us access the *{color:#00875a}CallerContext{color}* on 
> an Apache Ozone *{color:#00875a}IdentityProvider{color}* implementation.
> On Ozone side the *{color:#00875a}FairCallQueue{color}* doesn't work with the 
> Ozone S3G, because all users are masked under a special S3G user and there is 
> no impersonation. Therefore, the FCQ reads only 1 user and becomes 
> ineffective. We can use the *{color:#00875a}CallerContext{color}* field to 
> store the current user and access it on the Ozone 
> {*}{color:#00875a}IdentityProvider{color}{*}.
>  
> This is a presentation with the proposed approach.
> [https://docs.google.com/presentation/d/1iChpCz_qf-LXiPyvotpOGiZ31yEUyxAdU4RhWMKo0c0/edit#slide=id.p]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anmolanmol1234 commented on a diff in pull request #6025: Pass down eTag as part of accessCondition to SDK

2023-09-07 Thread via GitHub


anmolanmol1234 commented on code in PR #6025:
URL: https://github.com/apache/hadoop/pull/6025#discussion_r1318632229


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java:
##
@@ -1242,7 +1242,10 @@ public void setEncodedKey(String anEncodedKey) {
  * when the stream is closed.
  */
 private void restoreKey() throws IOException {
-  store.rename(getEncodedKey(), getKey());
+  String key = getKey();
+  FileMetadata existingMetadata = store.retrieveMetadata(key);

Review Comment:
   I did the eTag setting in an earlier iteration by passing down the 
initially fetched eTag, and it resulted in a 412 error. Hence an additional 
HEAD call would be needed.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anmolanmol1234 commented on a diff in pull request #6025: Pass down eTag as part of accessCondition to SDK

2023-09-07 Thread via GitHub


anmolanmol1234 commented on code in PR #6025:
URL: https://github.com/apache/hadoop/pull/6025#discussion_r1318630549


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java:
##
@@ -1242,7 +1242,10 @@ public void setEncodedKey(String anEncodedKey) {
  * when the stream is closed.
  */
 private void restoreKey() throws IOException {
-  store.rename(getEncodedKey(), getKey());
+  String key = getKey();
+  FileMetadata existingMetadata = store.retrieveMetadata(key);

Review Comment:
   We want the latest eTag on the path, not the initial eTag, so we need to 
fetch the latest eTag on that path. Simply passing the initial eTag won't help 
here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18879) Recommended Docker config file missing environment variable

2023-09-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762743#comment-17762743
 ] 

Steve Loughran commented on HADOOP-18879:
-

you got a PR here?

> Recommended Docker config file missing environment variable
> ---
>
> Key: HADOOP-18879
> URL: https://issues.apache.org/jira/browse/HADOOP-18879
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, scripts
>Affects Versions: 3.3.6
> Environment: config 
> docker-compose.yaml 
>Reporter: Konstantin Doulepov
>Priority: Major
>
> The Docker config is missing {*}HADOOP_HOME=/opt/hadoop{*}; the docker 
> environment references this variable, so the docker container can't run the 
> examples and behaves erratically.
> It is currently not set in the environment, yet the config uses it, e.g.
> MAPRED-SITE.XML_yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=$HADOOP_HOME
> but it is not set in the docker container.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #6026: YARN-11563. Fix word misspellings from CSAssignemnt to CSAssignment

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6026:
URL: https://github.com/apache/hadoop/pull/6026#issuecomment-1710166327

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  85m 54s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 174m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6026/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6026 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d1e27c395f25 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 55104e01a97ce0c034c43f7485c32844fa929470 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6026/1/testReport/ |
   | Max. process+thread count | 963 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6026/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.

[jira] [Commented] (HADOOP-18691) Add a CallerContext getter on the Schedulable interface

2023-09-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762740#comment-17762740
 ] 

Steve Loughran commented on HADOOP-18691:
-

[~xBis] could you add a cross reference to the ozone change which needed this? 
thanks

> Add a CallerContext getter on the Schedulable interface
> ---
>
> Key: HADOOP-18691
> URL: https://issues.apache.org/jira/browse/HADOOP-18691
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Christos Bisias
>Assignee: Christos Bisias
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.6
>
>
> We would like to add a default *{color:#00875a}CallerContext{color}* getter 
> on the *{color:#00875a}Schedulable{color}* interface
> {code:java}
> default public CallerContext getCallerContext() {
>   return null;  
> } {code}
> and then override it on the *{color:#00875a}ipc/Server.Call{color}* class
> {code:java}
> @Override
> public CallerContext getCallerContext() {  
>   return this.callerContext;
> } {code}
> to expose the already existing *{color:#00875a}callerContext{color}* field.
>  
> This change will help us access the *{color:#00875a}CallerContext{color}* on 
> an Apache Ozone *{color:#00875a}IdentityProvider{color}* implementation.
> On Ozone side the *{color:#00875a}FairCallQueue{color}* doesn't work with the 
> Ozone S3G, because all users are masked under a special S3G user and there is 
> no impersonation. Therefore, the FCQ reads only 1 user and becomes 
> ineffective. We can use the *{color:#00875a}CallerContext{color}* field to 
> store the current user and access it on the Ozone 
> {*}{color:#00875a}IdentityProvider{color}{*}.
>  
> This is a presentation with the proposed approach.
> [https://docs.google.com/presentation/d/1iChpCz_qf-LXiPyvotpOGiZ31yEUyxAdU4RhWMKo0c0/edit#slide=id.p]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #6022: HADOOP-18883. [ABFS]: Expect-100 JDK bug resolution: prevent multiple server calls

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6022:
URL: https://github.com/apache/hadoop/pull/6022#issuecomment-1710047550

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  51m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 145m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6022 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 781acc985df6 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 75c722adfa86288f813be37c8fd2760d017da2c3 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/4/testReport/ |
   | Max. process+thread count | 586 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #6022: HADOOP-18883. [ABFS]: Expect-100 JDK bug resolution: prevent multiple server calls

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6022:
URL: https://github.com/apache/hadoop/pull/6022#issuecomment-1710044740

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  1s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 141m 45s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6022 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b63840926cab 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 75c722adfa86288f813be37c8fd2760d017da2c3 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/5/testReport/ |
   | Max. process+thread count | 531 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hadoop] haiyang1987 commented on pull request #6024: HDFS-17177. ErasureCodingWork reconstruct ignore the block length is Long.MAX_VALUE

2023-09-07 Thread via GitHub


haiyang1987 commented on PR #6024:
URL: https://github.com/apache/hadoop/pull/6024#issuecomment-1710003005

   Thanks @ZanderXu @zhangshuyan0 for helping me review.
   
   yeah, using `getBlock().isDeleted()` more intuitively indicates that the 
current block has been deleted.
   In the current code, `ReplicationWork#chooseTargets` and 
`DatanodeManager#handleHeartbeat` could maybe also be changed to use 
`getBlock().isDeleted()`; what do you think?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #6025: Pass down eTag as part of accessCondition to SDK

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6025:
URL: https://github.com/apache/hadoop/pull/6025#issuecomment-1709987453

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 22s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/5/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 5 new + 67 unchanged - 0 
fixed = 72 total (was 67)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 10s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 129m 14s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6025 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux a1dd1d55d213 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fa86293ba347c1781557079936140a4c33e17c53 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/5/testReport/ |
   | Max. process+thread count | 734 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond 

[jira] [Created] (HADOOP-18883) Expect-100 JDK bug resolution: prevent multiple server calls

2023-09-07 Thread Pranav Saxena (Jira)
Pranav Saxena created HADOOP-18883:
--

 Summary: Expect-100 JDK bug resolution: prevent multiple server 
calls
 Key: HADOOP-18883
 URL: https://issues.apache.org/jira/browse/HADOOP-18883
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Reporter: Pranav Saxena
Assignee: Pranav Saxena
 Fix For: 3.4.0


This is in line with the JDK bug: [https://bugs.openjdk.org/browse/JDK-8314978].

With the current implementation of HttpURLConnection, if the server rejects the 
“Expect: 100-continue” handshake, a ‘java.net.ProtocolException’ is thrown from 
the 'expect100Continue()' method.

After the exception is thrown, if we call any other method on the same instance 
(e.g. getHeaderField() or getHeaderFields()), it will internally call 
getOutputStream(), which invokes writeRequests(), which makes the actual server 
call.

In AbfsHttpOperation, after sendRequest() we call the processResponse() method 
from AbfsRestOperation. Even if conn.getOutputStream() fails due to the 
expect-100 error, we consume the exception and let the code go ahead. So 
getHeaderField() / getHeaderFields() / getHeaderFieldLong() can be triggered 
after getOutputStream() has failed, and these invocations lead to extra server 
calls.
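
Below is a minimal, self-contained sketch of the guard this implies. The 
EXPECT_100_JDK_ERROR message and the expect100FailureReceived flag follow the 
linked PR; everything else (class name, streaming setup, URL handling) is 
illustrative, not the ABFS source.

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.ProtocolException;
import java.net.URL;

public class Expect100GuardSketch {
  // Message the JDK uses when the server rejects "Expect: 100-continue".
  private static final String EXPECT_100_JDK_ERROR = "Server rejected operation";

  static void putWithExpect100(URL url, byte[] body) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setRequestProperty("Expect", "100-continue");
    // Streaming mode lets the JDK run the 100-continue handshake inside getOutputStream().
    conn.setFixedLengthStreamingMode(body.length);

    boolean expect100FailureReceived = false;
    try {
      conn.getOutputStream().write(body);
    } catch (ProtocolException e) {
      if (!EXPECT_100_JDK_ERROR.equals(e.getMessage())) {
        throw e; // some other protocol problem: surface it to the caller / retry policy
      }
      // The server rejected the handshake; one round trip has already happened.
      expect100FailureReceived = true;
    }

    if (expect100FailureReceived) {
      // Skip getHeaderField()/getHeaderFields()/getHeaderFieldLong() on this connection:
      // they would call getOutputStream() -> writeRequests() again and hit the server twice.
      System.out.println("Expect-100 rejected by the server; treat this attempt as failed.");
      return;
    }
    System.out.println("Request sent, status: " + conn.getResponseCode());
  }
}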



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] saxenapranav commented on pull request #6022: Expect100resolution

2023-09-07 Thread via GitHub


saxenapranav commented on PR #6022:
URL: https://github.com/apache/hadoop/pull/6022#issuecomment-1709972382

   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 
Expected a 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException 
to be thrown, but got the result: : 
"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
   [INFO]
   [ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=120).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 572, Failures: 1, Errors: 1, Skipped: 54
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 
Expected a 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException 
to be thrown, but got the result: : 
"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
   [INFO]
   [ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=90).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 589, Failures: 1, Errors: 1, Skipped: 54
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 
Expected a 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException 
to be thrown, but got the result: : 
"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
   [INFO]
   [ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 11
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   
ITestAzureBlobFileSystemLease.testAcquireRetry:344->lambda$testAcquireRetry$6:345
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 589, Failures: 0, Errors: 1, Skipped: 277
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 
Expected a 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException 
to be thrown, but got the result: : 
"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
   [INFO]
   [ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   
ITestAzureBlobFileSystemLease.testAcquireRetry:329->Object.hashCode:-2 » 
TestTimedOut
   [INFO]
   [ERROR] Tests run: 572, Failures: 0, Errors: 1, Skipped: 54
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   Time taken: 51 mins 2 secs.
   azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
   commit 75c722adfa86288f813be37c8fd2760d017da2c3 (HEAD -> 
expect100resolution, origin/expect100resolution)
   Author: Pranav Saxena <>
   Date:   Thu Sep 7 02:50:45 2023 -0700
   
   refactor undo


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] saxenapranav commented on a diff in pull request #6025: Pass down eTag as part of accessCondition to SDK

2023-09-07 Thread via GitHub


saxenapranav commented on code in PR #6025:
URL: https://github.com/apache/hadoop/pull/6025#discussion_r1318448648


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java:
##
@@ -1242,7 +1242,10 @@ public void setEncodedKey(String anEncodedKey) {
  * when the stream is closed.
  */
 private void restoreKey() throws IOException {
-  store.rename(getEncodedKey(), getKey());
+  String key = getKey();
+  FileMetadata existingMetadata = store.retrieveMetadata(key);

Review Comment:
   Let's not make this call, as discussed in the other comment about how to get 
the etag.
   +
   Between the time the emptySrc file was created and this moment, some other 
process could have changed that path, and hence the etag would have changed. So 
if we read the etag here, we may get the wrong etag.



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java:
##
@@ -1858,7 +1881,13 @@ public void storeEmptyLinkFile(String key, String 
tempBlobKey,
   CloudBlobWrapper blob = getBlobReference(key);
   storePermissionStatus(blob, permissionStatus);
   storeLinkAttribute(blob, tempBlobKey);
-  openOutputStream(blob).close();
+  if (eTag != null) {
+AccessCondition accessCondition = new AccessCondition();
+accessCondition.setIfMatch(eTag);
+openOutputStream(blob, accessCondition).close();

Review Comment:
   We can get the etag here and pass it down to the point where we want to 
rename; no separate metadata call is required.
   
   `blob.getBlob().getProperties().getEtag();` will help. Why?
   `blob` here is a CloudBlockBlobWrapper, and `blob.getBlob()` is a CloudBlockBlob.
   Now, in `CloudBlockBlob.commitBlockListImpl.preProcessResponse`, the etag and 
LMT are updated in the CloudBlockBlob object.
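   A rough sketch of that flow against the v8 com.microsoft.azure.storage SDK 
(the helper names here are illustrative, not the AzureNativeFileSystemStore 
code):
   
import java.io.IOException;

import com.microsoft.azure.storage.AccessCondition;
import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

final class ETagPassDownSketch {

  // After the empty link file is committed, the SDK refreshes the blob's local
  // properties (etag, last-modified), so the eTag can be read from the same object
  // without an extra metadata round trip.
  static String storeEmptyFileAndCaptureETag(CloudBlockBlob blob)
      throws StorageException, IOException {
    blob.uploadText("");
    return blob.getProperties().getEtag();
  }

  // The captured eTag can later be turned into an If-Match condition for the rename
  // path, so the operation fails fast if another process changed the blob in between.
  static AccessCondition ifMatch(String eTag) {
    return AccessCondition.generateIfMatchCondition(eTag);
  }
}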



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #6025: Pass down eTag as part of accessCondition to SDK

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6025:
URL: https://github.com/apache/hadoop/pull/6025#issuecomment-1709929351

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   1m 21s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/4/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   4m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 22s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 5 new + 67 unchanged - 0 
fixed = 72 total (was 67)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | -1 :x: |  spotbugs  |   1m  3s | 
[/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/4/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   0m 20s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/4/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  92m 30s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unused field:NativeAzureFileSystem.java |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6025 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b1752cd6061b 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b5bea926653072fdbb4386ce14adcd76f640fd6f |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 

[GitHub] [hadoop] granewang opened a new pull request, #6026: YARN-11563. Fix word misspellings from CSAssignemnt to CSAssignment

2023-09-07 Thread via GitHub


granewang opened a new pull request, #6026:
URL: https://github.com/apache/hadoop/pull/6026

   Fix misspellings of CSAssignemnt, which should be CSAssignment.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5892: HDFS-17129. mis-order of ibr and fbr on datanode

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #5892:
URL: https://github.com/apache/hadoop/pull/5892#issuecomment-1709897902

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  42m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 240m  7s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5892/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 398m 24s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5892/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5892 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8c423e8252bc 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 771c97478b599c0514a1b012666569ef2b2e0f9b |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5892/3/testReport/ |
   | Max. process+thread count | 2672 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5892/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 

[GitHub] [hadoop] saxenapranav commented on a diff in pull request #6022: Expect100resolution

2023-09-07 Thread via GitHub


saxenapranav commented on code in PR #6022:
URL: https://github.com/apache/hadoop/pull/6022#discussion_r1318384739


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -85,6 +88,7 @@ public class AbfsHttpOperation implements AbfsPerfLoggable {
   private long sendRequestTimeMs;
   private long recvResponseTimeMs;
   private boolean shouldMask = false;
+  private boolean expect100failureReceived = false;

Review Comment:
   Have taken it. Refactored the variable name. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] saxenapranav commented on a diff in pull request #6022: Expect100resolution

2023-09-07 Thread via GitHub


saxenapranav commented on code in PR #6022:
URL: https://github.com/apache/hadoop/pull/6022#discussion_r1318384400


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java:
##
@@ -69,6 +69,7 @@ public final class AbfsHttpConstants {
* and should qualify for retry.
*/
   public static final int HTTP_CONTINUE = 100;
+  public static final String EXPECT_100_JDK_ERROR = "Server rejected 
operation";

Review Comment:
   For any IOException thrown by getOutputStream, no headers / inputStream will 
be parsed. The flow of the code is such that for any IOException other than the 
expect-100 error, the exception is thrown back to AbfsRestOperation, which 
retries again as per the retry policy.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] saxenapranav commented on a diff in pull request #6022: Expect100resolution

2023-09-07 Thread via GitHub


saxenapranav commented on code in PR #6022:
URL: https://github.com/apache/hadoop/pull/6022#discussion_r1318385635


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -340,8 +344,11 @@ public void sendRequest(byte[] buffer, int offset, int 
length) throws IOExceptio
If expect header is not enabled, we throw back the exception.
  */
 String expectHeader = getConnProperty(EXPECT);
-if (expectHeader != null && expectHeader.equals(HUNDRED_CONTINUE)) {
+if (expectHeader != null && expectHeader.equals(HUNDRED_CONTINUE)
+&& e instanceof ProtocolException
+&& EXPECT_100_JDK_ERROR.equals(e.getMessage())) {

Review Comment:
   Taken. connectionDisconnectedOnError is now set at the start of the catch 
block.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18882) HDFS defaults tls cipher to "no encryption" when keystore key is unset or empty

2023-09-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-18882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau updated HADOOP-18882:
--
Description: 
It looks like some hdfs servers default the cipher suite to not encrypt traffic 
when the keystore password is not set or set to an empty string.

Historically this has probably not often been an issue as java `keytool` 
refuses to create a keystore with less than 6 characters, so usually people 
would need to set passwords on the keystores (and hence in the config).

When using keystores without a password, we noticed that HDFS refuses to load 
keys from this keystore when `ssl.server.keystore.password` is unset or set to 
an empty string - and instead of erroring out sets the cipher suite for rpc 
connections to `TLS_NULL_WITH_NULL_NULL` which is basically TLS but without any 
encryption.

The impact varies depending on which communication channel we looked at, what 
we saw was:
 * JournalNodes seem to happily go along with this and NameNodes equally 
happily connect to the JournalNodes without any warnings - we do have tls 
enabled after all :)
 * NameNodes refuse connections with a handshake exception, so the real world 
impact of this should hopefully be small, but it does seem like less than ideal 
behavior.

 

So effectively, HDFS cannot use keystores without passwords, as connections 
cannot be established successfully.

  was:
It looks like some hdfs servers default the cipher suite to not encrypt traffic 
when the keystore password is not set or set to an empty string.

Historically this has probably not often been an issue as java `keytool` 
refuses to create a keystore with less than 6 characters, so usually people 
would need to set passwords on the keystores (and hence in the config).

When using keystores without a password, we noticed that HDFS refuses to load 
keys from this keystore when `ssl.server.keystore.password` is unset or set to 
an empty string - and instead of erroring out sets the cipher suite for rpc 
connections to `TLS_NULL_WITH_NULL_NULL` which is basically TLS but without any 
encryption.

The impact varies depending on which communication channel we looked at, what 
we saw was:
 * JournalNodes seem to happily go along with this and NameNodes equally 
happily connect to the JournalNodes without any warnings - we do have tls 
enabled after all :)
 * NameNodes refuse connections with a handshake exception, so the real world 
impact of this should hopefully be small, but it does seem like less than ideal 
behavior.


> HDFS defaults tls cipher to "no encryption" when keystore key is unset or 
> empty
> ---
>
> Key: HADOOP-18882
> URL: https://issues.apache.org/jira/browse/HADOOP-18882
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.3.4
> Environment: We saw this issue when running in a Kubernetes 
> environment.
> Hadoop was deployed using the [Stackable Operator for Apache 
> Hadoop|[https://github.com/stackabletech/hdfs-operator|http://example.com/]]. 
> The binaries contained in the deployed images are taken from the ASF mirrors, 
> not self-compiled.
>Reporter: Sönke Liebau
>Priority: Major
>
> It looks like some hdfs servers default the cipher suite to not encrypt 
> traffic when the keystore password is not set or set to an empty string.
> Historically this has probably not often been an issue as java `keytool` 
> refuses to create a keystore with less than 6 characters, so usually people 
> would need to set passwords on the keystores (and hence in the config).
> When using keystores without a password, we noticed that HDFS refuses to load 
> keys from this keystore when `ssl.server.keystore.password` is unset or set 
> to an empty string - and instead of erroring out sets the cipher suite for 
> rpc connections to `TLS_NULL_WITH_NULL_NULL` which is basically TLS but 
> without any encryption.
> The impact varies depending on which communication channel we looked at, what 
> we saw was:
>  * JournalNodes seem to happily go along with this and NameNodes equally 
> happily connect to the JournalNodes without any warnings - we do have tls 
> enabled after all :)
>  * NameNodes refuse connections with a handshake exception, so the real world 
> impact of this should hopefully be small, but it does seem like less than 
> ideal behavior.
>  
> So effectively, HDFS cannot use keystores without passwords, as connections 
> cannot be established successfully.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18870) CURATOR-599 change broke functionality introduced in HADOOP-18139 and HADOOP-18709

2023-09-07 Thread Ferenc Erdelyi (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762658#comment-17762658
 ] 

Ferenc Erdelyi commented on HADOOP-18870:
-

[~snemeth] you are correct. Fixed the description of the Jira. Thank you for 
spotting it and for the CR/merge.

> CURATOR-599 change broke functionality introduced in HADOOP-18139 and 
> HADOOP-18709
> --
>
> Key: HADOOP-18870
> URL: https://issues.apache.org/jira/browse/HADOOP-18870
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.4.0, 3.3.5
>Reporter: Ferenc Erdelyi
>Assignee: Ferenc Erdelyi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> [Curator PR#391 
> |https://github.com/apache/curator/pull/391/files#diff-687a4ed1252bfb4f56b3aeeb28bee4413b7df9bec4b969b72215587158ac875dR59]
>  introduced a default method in the ZooKeeperFactory interface, hence the 
> override of the 4-parameter NewZookeeper method in the HadoopZookeeperFactory 
> class is not taking effect due to this. 
> Proposing routing the 4-parameter method to a 5-parameter method, which 
> instantiates the ZKClientConfig as the 5th parameter. This is a non-breaking 
> change, as the ZKClientConfig is currently instantiated within the method.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18870) CURATOR-599 change broke functionality introduced in HADOOP-18139 and HADOOP-18709

2023-09-07 Thread Ferenc Erdelyi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Erdelyi updated HADOOP-18870:

Description: 
[Curator PR#391 
|https://github.com/apache/curator/pull/391/files#diff-687a4ed1252bfb4f56b3aeeb28bee4413b7df9bec4b969b72215587158ac875dR59]
 introduced a default method in the ZooKeeperFactory interface, hence the 
override of the 4-parameter NewZookeeper method in the HadoopZookeeperFactory 
class is not taking effect due to this. 

Proposing routing the 4-parameter method to a 5-parameter method, which 
instantiates the ZKClientConfig as the 5th parameter. This is a non-breaking 
change, as the ZKClientConfig is currently instantiated within the method.

  was:
[Curator PR#391 
|https://github.com/apache/curator/pull/391/files#diff-687a4ed1252bfb4f56b3aeeb28bee4413b7df9bec4b969b72215587158ac875dR59]
 introduced a default method in the ZooKeeperFactory interface, hence the 
override of the 4-parameter NewZookeeper method in the HadoopZookeeperFactory 
class is not taking effect due to this. 

Proposing routing the 4-parameter method to a 5-parameter method, which 
instantiates the ZKConfiguration as the 5th parameter. This is a non-breaking 
change, as the ZKConfiguration is currently instantiated within the method.


> CURATOR-599 change broke functionality introduced in HADOOP-18139 and 
> HADOOP-18709
> --
>
> Key: HADOOP-18870
> URL: https://issues.apache.org/jira/browse/HADOOP-18870
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.4.0, 3.3.5
>Reporter: Ferenc Erdelyi
>Assignee: Ferenc Erdelyi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> [Curator PR#391 
> |https://github.com/apache/curator/pull/391/files#diff-687a4ed1252bfb4f56b3aeeb28bee4413b7df9bec4b969b72215587158ac875dR59]
>  introduced a default method in the ZooKeeperFactory interface, hence the 
> override of the 4-parameter NewZookeeper method in the HadoopZookeeperFactory 
> class is not taking effect due to this. 
> Proposing routing the 4-parameter method to a 5-parameter method, which 
> instantiates the ZKClientConfig as the 5th parameter. This is a non-breaking 
> change, as the ZKClientConfig is currently instantiated within the method.
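
A minimal sketch of the proposed routing; the class and method names follow the 
description above, and it assumes Curator's 5-argument overload and ZooKeeper's 
ZKClientConfig constructor:

import org.apache.curator.utils.ZookeeperFactory;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.client.ZKClientConfig;

public class HadoopZookeeperFactory implements ZookeeperFactory {

  @Override
  public ZooKeeper newZooKeeper(String connectString, int sessionTimeout,
      Watcher watcher, boolean canBeReadOnly) throws Exception {
    // Route the legacy 4-parameter call to the 5-parameter variant so that
    // Curator (which now invokes the 5-parameter default) still hits our override.
    return newZooKeeper(connectString, sessionTimeout, watcher, canBeReadOnly,
        new ZKClientConfig());
  }

  @Override
  public ZooKeeper newZooKeeper(String connectString, int sessionTimeout,
      Watcher watcher, boolean canBeReadOnly, ZKClientConfig zkClientConfig)
      throws Exception {
    // Hadoop-specific client settings would be applied to zkClientConfig here.
    return new ZooKeeper(connectString, sessionTimeout, watcher, canBeReadOnly,
        zkClientConfig);
  }
}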



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18882) HDFS defaults tls cipher to "no encryption" when keystore key is unset or empty

2023-09-07 Thread Jira
Sönke Liebau created HADOOP-18882:
-

 Summary: HDFS defaults tls cipher to "no encryption" when keystore 
key is unset or empty
 Key: HADOOP-18882
 URL: https://issues.apache.org/jira/browse/HADOOP-18882
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.3.4
 Environment: We saw this issue when running in a Kubernetes 
environment.

Hadoop was deployed using the [Stackable Operator for Apache 
Hadoop|[https://github.com/stackabletech/hdfs-operator|http://example.com/]]. 

The binaries contained in the deployed images are taken from the ASF mirrors, 
not self-compiled.
Reporter: Sönke Liebau


It looks like some hdfs servers default the cipher suite to not encrypt traffic 
when the keystore password is not set or set to an empty string.

Historically this has probably not often been an issue as java `keytool` 
refuses to create a keystore with less than 6 characters, so usually people 
would need to set passwords on the keystores (and hence in the config).

When using keystores without a password, we noticed that HDFS refuses to load 
keys from this keystore when `ssl.server.keystore.password` is unset or set to 
an empty string - and instead of erroring out sets the cipher suite for rpc 
connections to `TLS_NULL_WITH_NULL_NULL` which is basically TLS but without any 
encryption.

The impact varies depending on which communication channel we looked at, what 
we saw was:
 * JournalNodes seem to happily go along with this and NameNodes equally 
happily connect to the JournalNodes without any warnings - we do have tls 
enabled after all :)
 * NameNodes refuse connections with a handshake exception, so the real world 
impact of this should hopefully be small, but it does seem like less than ideal 
behavior.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #6025: Pass down eTag as part of accessCondition to SDK

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6025:
URL: https://github.com/apache/hadoop/pull/6025#issuecomment-1709734385

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  50m 36s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/3/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 31s | 
[/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/3/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04.txt)
 |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04.  |
   | -1 :x: |  javadoc  |   0m 33s | 
[/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/3/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt)
 |  hadoop-azure in trunk failed with JDK Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05.  |
   | -1 :x: |  spotbugs  |   0m 34s | 
[/branch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/3/artifact/out/branch-spotbugs-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   4m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 24s | 
[/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/3/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 25s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/3/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04.  |
   | -1 :x: |  javac  |   0m 25s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/3/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 25s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/3/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05.  |
   | -1 :x: |  javac  |   0m 25s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/3/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 22s | 
[/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6025/3/artifact/out/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 24s | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #6018: HDFS-17178: BootstrapStandby needs to handle RollingUpgrade

2023-09-07 Thread via GitHub


hadoop-yetus commented on PR #6018:
URL: https://github.com/apache/hadoop/pull/6018#issuecomment-1709547902

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 307 unchanged - 1 
fixed = 307 total (was 308)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 219m 29s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 371m 57s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6018/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6018 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 393eac39eb37 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d93143ac056459af10b4227a681040934b99a73a |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6018/8/testReport/ |
   | Max. process+thread count | 3142 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6018/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the 

[GitHub] [hadoop] saxenapranav commented on a diff in pull request #6022: Expect100resolution

2023-09-07 Thread via GitHub


saxenapranav commented on code in PR #6022:
URL: https://github.com/apache/hadoop/pull/6022#discussion_r1318128092


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -85,6 +88,7 @@ public class AbfsHttpOperation implements AbfsPerfLoggable {
   private long sendRequestTimeMs;
   private long recvResponseTimeMs;
   private boolean shouldMask = false;
+  private boolean expect100failureReceived = false;

Review Comment:
   I believe getOutputStream can throw only two major types of exception:
   1. Expect-100 error
   2. Other IOException
   In the case of another IOException, we should immediately throw it back to 
AbfsRestOperation to do the retry. We can take the responseCode only in the case 
of the Expect-100 error, because a valid server call was made. The other 
IOExceptions can contain CT, connection-reset, etc.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org