[GitHub] [hadoop] lujiefsi opened a new pull request #2966: HDFS-16004.startLogSegment and journal in BackupNode lack Permission …
lujiefsi opened a new pull request #2966:
URL: https://github.com/apache/hadoop/pull/2966

I have some doubts about configuring secure HDFS. I know we have Service Level Authorization for protocols such as NamenodeProtocol and DatanodeProtocol, but after reading the code in HDFSPolicyProvider I could not find such authorization for JournalProtocol. If we do have it, how can I configure it?

Besides, even though NamenodeProtocol has Service Level Authorization, its methods still perform permission checks. Take startCheckpoint in NameNodeRpcServer (which implements NamenodeProtocol) for example:

    public NamenodeCommand startCheckpoint(NamenodeRegistration registration)
        throws IOException {
      String operationName = "startCheckpoint";
      checkNNStartup();
      namesystem.checkSuperuserPrivilege(operationName);
      ...

I found that the methods in BackupNodeRpcServer (which implements JournalProtocol) lack such a permission check. See below:

    public void startLogSegment(JournalInfo journalInfo, long epoch, long txid)
        throws IOException {
      namesystem.checkOperation(OperationCategory.JOURNAL);
      verifyJournalRequest(journalInfo);
      getBNImage().namenodeStartedLogSegment(txid);
    }

    @Override
    public void journal(JournalInfo journalInfo, long epoch, long firstTxId,
        int numTxns, byte[] records) throws IOException {
      namesystem.checkOperation(OperationCategory.JOURNAL);
      verifyJournalRequest(journalInfo);
      getBNImage().journal(firstTxId, numTxns, records);
    }

Do we need to add a permission check for them? Please point out my mistakes if I am wrong or have missed something.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
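[Editor's note] For illustration, a minimal sketch of the guard the report suggests, using plain-JDK stand-in types (Namesystem, AccessControlException here are simplified mocks, not the real Hadoop classes): the journal RPC methods would reject non-superuser callers the same way startCheckpoint already does.

```java
import java.io.IOException;

// Stand-in for org.apache.hadoop.security.AccessControlException (sketch only).
class AccessControlException extends IOException {
    AccessControlException(String msg) { super(msg); }
}

// Stand-in for FSNamesystem: whether the current RPC caller is the superuser.
class Namesystem {
    private final boolean callerIsSuperuser;
    Namesystem(boolean callerIsSuperuser) { this.callerIsSuperuser = callerIsSuperuser; }

    // Mirrors checkSuperuserPrivilege(String): reject unless superuser.
    void checkSuperuserPrivilege(String operationName) throws AccessControlException {
        if (!callerIsSuperuser) {
            throw new AccessControlException("Access denied for operation " + operationName);
        }
    }
}

public class Main {
    static void startLogSegment(Namesystem ns, long txid) throws IOException {
        // The proposed addition: the same guard startCheckpoint performs.
        ns.checkSuperuserPrivilege("startLogSegment");
        // ... then verifyJournalRequest(journalInfo) and the journal work itself.
    }

    public static void main(String[] args) throws IOException {
        startLogSegment(new Namesystem(true), 1L); // superuser: allowed
        try {
            startLogSegment(new Namesystem(false), 1L); // non-superuser: rejected
            throw new AssertionError("expected AccessControlException");
        } catch (AccessControlException expected) {
            System.out.println("non-superuser rejected: " + expected.getMessage());
        }
    }
}
```

The point of the sketch is only ordering: the privilege check runs before any journal state is touched, so an unauthorized caller cannot reach namenodeStartedLogSegment or journal.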
[jira] [Work logged] (HADOOP-17675) LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
[ https://issues.apache.org/jira/browse/HADOOP-17675?focusedWorklogId=591350&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591350 ]

ASF GitHub Bot logged work on HADOOP-17675:
-------------------------------------------

Author: ASF GitHub Bot
Created on: 30/Apr/21 03:28
Start Date: 30/Apr/21 03:28
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #2965:
URL: https://github.com/apache/hadoop/pull/2965#issuecomment-829780056

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 21m 39s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 13s | | trunk passed |
| +1 :green_heart: | compile | 22m 26s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 19m 5s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 58s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 30s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 21s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 17s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 53s | | the patch passed |
| +1 :green_heart: | compile | 21m 42s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 21m 42s | | the patch passed |
| +1 :green_heart: | compile | 18m 56s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 18m 56s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 57s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 26s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 32s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 :x: | spotbugs | 2m 31s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2965/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | shadedclient | 18m 24s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 17m 44s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2965/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. |
| | | | 207m 26s | |

| Reason | Tests |
|---:|:--|
| SpotBugs | module:hadoop-common-project/hadoop-common |
| | Load of known null value in org.apache.hadoop.security.LdapGroupsMapping.getDirContext() At LdapGroupsMapping.java:in org.apache.hadoop.security.LdapGroupsMapping.getDirContext() At LdapGroupsMapping.java:[line 664] |
| Failed junit tests | hadoop.security.ssl.TestReloadingX509TrustManager |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2965/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2965 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 4042e2ffc2aa
[GitHub] [hadoop] hadoop-yetus commented on pull request #2965: HADOOP-17675 LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
hadoop-yetus commented on pull request #2965:
URL: https://github.com/apache/hadoop/pull/2965#issuecomment-829780056

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 21m 39s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 13s | | trunk passed |
| +1 :green_heart: | compile | 22m 26s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 19m 5s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 58s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 30s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 21s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 17s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 53s | | the patch passed |
| +1 :green_heart: | compile | 21m 42s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 21m 42s | | the patch passed |
| +1 :green_heart: | compile | 18m 56s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 18m 56s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 57s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 26s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 32s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 :x: | spotbugs | 2m 31s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2965/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | shadedclient | 18m 24s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 17m 44s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2965/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. |
| | | | 207m 26s | |

| Reason | Tests |
|---:|:--|
| SpotBugs | module:hadoop-common-project/hadoop-common |
| | Load of known null value in org.apache.hadoop.security.LdapGroupsMapping.getDirContext() At LdapGroupsMapping.java:in org.apache.hadoop.security.LdapGroupsMapping.getDirContext() At LdapGroupsMapping.java:[line 664] |
| Failed junit tests | hadoop.security.ssl.TestReloadingX509TrustManager |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2965/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2965 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 4042e2ffc2aa 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 4ffc830cdf1c42442d0928715a857c7261fd5dcd |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04
[jira] [Work logged] (HADOOP-17653) Do not use guava's Files.createTempDir()
[ https://issues.apache.org/jira/browse/HADOOP-17653?focusedWorklogId=591335&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591335 ]

ASF GitHub Bot logged work on HADOOP-17653:
-------------------------------------------

Author: ASF GitHub Bot
Created on: 30/Apr/21 02:00
Start Date: 30/Apr/21 02:00
Worklog Time Spent: 10m

Work Description: jojochuang commented on pull request #2945:
URL: https://github.com/apache/hadoop/pull/2945#issuecomment-829743401

The TestRouterFederationRename failure looks to be caused by this PR. I'll check again.

Issue Time Tracking
-------------------

    Worklog Id:     (was: 591335)
    Time Spent: 2h 50m  (was: 2h 40m)

> Do not use guava's Files.createTempDir()
> ----------------------------------------
>
>                 Key: HADOOP-17653
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17653
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.4.0
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2h 50m
>  Remaining Estimate: 0h
>

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [hadoop] jojochuang commented on pull request #2945: HADOOP-17653. Do not use guava's Files.createTempDir().
jojochuang commented on pull request #2945:
URL: https://github.com/apache/hadoop/pull/2945#issuecomment-829743401

The TestRouterFederationRename failure looks to be caused by this PR. I'll check again.
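[Editor's note] For context on the change discussed above: the usual replacement for guava's deprecated Files.createTempDir() is the JDK's own java.nio.file.Files.createTempDirectory, which creates the directory atomically with restrictive permissions and throws IOException on failure instead of an unchecked error. A minimal sketch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class Main {
    public static void main(String[] args) throws IOException {
        // Instead of com.google.common.io.Files.createTempDir(), use the
        // JDK API. The prefix is arbitrary; the directory is created under
        // java.io.tmpdir unless a parent Path is given.
        Path tmp = Files.createTempDirectory("hadoop-test-");
        System.out.println(Files.isDirectory(tmp)); // true

        // Callers are responsible for cleanup, same as with the guava API.
        Files.delete(tmp);
    }
}
```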
[GitHub] [hadoop] jojochuang commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
jojochuang commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623513249

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java

## @@ -1527,34 +1535,49 @@ public Response delete(
       @QueryParam(RecursiveParam.NAME) @DefaultValue(RecursiveParam.DEFAULT)
           final RecursiveParam recursive,
       @QueryParam(SnapshotNameParam.NAME) @DefaultValue(SnapshotNameParam.DEFAULT)
-          final SnapshotNameParam snapshotName
+          final SnapshotNameParam snapshotName,
+      @QueryParam(DeleteSkipTrashParam.NAME)
+      @DefaultValue(DeleteSkipTrashParam.DEFAULT)
+          final DeleteSkipTrashParam skiptrash
       ) throws IOException, InterruptedException {
-    init(ugi, delegation, username, doAsUser, path, op, recursive, snapshotName);
+    init(ugi, delegation, username, doAsUser, path, op, recursive,
+        snapshotName, skiptrash);
-    return doAs(ugi, new PrivilegedExceptionAction<Response>() {
-      @Override
-      public Response run() throws IOException {
-        return delete(ugi, delegation, username, doAsUser,
-            path.getAbsolutePath(), op, recursive, snapshotName);
-      }
-    });
+    return doAs(ugi, () -> delete(
+        path.getAbsolutePath(), op, recursive, snapshotName, skiptrash));
   }

   protected Response delete(
-      final UserGroupInformation ugi,
-      final DelegationParam delegation,
-      final UserParam username,
-      final DoAsParam doAsUser,
       final String fullpath,
       final DeleteOpParam op,
       final RecursiveParam recursive,
-      final SnapshotNameParam snapshotName
-      ) throws IOException {
+      final SnapshotNameParam snapshotName,
+      final DeleteSkipTrashParam skipTrash) throws IOException {
     final ClientProtocol cp = getRpcClientProtocol();
     switch(op.getValue()) {
     case DELETE: {
+      Configuration conf =
+          (Configuration) context.getAttribute(JspHelper.CURRENT_CONF);
+      long trashInterval =
+          conf.getLong(FS_TRASH_INTERVAL_KEY, FS_TRASH_INTERVAL_DEFAULT);
+      if (trashInterval > 0 && !skipTrash.getValue()) {
+        LOG.info("{} is {} , trying to archive {} instead of removing",
+            FS_TRASH_INTERVAL_KEY, trashInterval, fullpath);
+        org.apache.hadoop.fs.Path path =
+            new org.apache.hadoop.fs.Path(fullpath);
+        boolean movedToTrash = Trash.moveToAppropriateTrash(
+            FileSystem.get(conf), path, conf);

Review comment:
    This could lead to OOM. We should not create a FileSystem object inside the NameNode. See https://issues.apache.org/jira/browse/HDFS-15052 for a similar problem.
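[Editor's note] A simplified model of the reviewer's OOM concern (plain JDK, not Hadoop code): FileSystem.get() caches instances, with a key that includes the calling user, and entries are not evicted. Inside a NameNode request handler, where each WebHDFS request may run as a different user via doAs(), the cache grows by one FileSystem per distinct user. The class and key below are hypothetical stand-ins for FileSystem.CACHE:

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // Simplified model of FileSystem's internal cache: the real key is
    // (scheme, authority, UserGroupInformation); a string stands in here.
    static final Map<String, Object> CACHE = new HashMap<>();

    // Models FileSystem.get(conf): return the cached instance for this
    // user's key, creating and caching one if absent. Nothing evicts.
    static Object get(String user) {
        return CACHE.computeIfAbsent("hdfs://nn:8020/" + user, k -> new Object());
    }

    public static void main(String[] args) {
        // Every request from a new remote user adds one more cached entry
        // that is never released -- the unbounded heap growth behind the
        // OOM concern (compare HDFS-15052).
        for (int i = 0; i < 10_000; i++) {
            get("user-" + i);
        }
        System.out.println(CACHE.size()); // 10000
    }
}
```

This is why long-lived server processes either reuse a single FileSystem, use uncached instances they close themselves, or explicitly release per-user instances when the request finishes.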
[GitHub] [hadoop] hadoop-yetus commented on pull request #2949: HADOOP-17657: implement StreamCapabilities in SequenceFile.Writer and fall back to flush, if hflush is not supported
hadoop-yetus commented on pull request #2949:
URL: https://github.com/apache/hadoop/pull/2949#issuecomment-829709523

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 36s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 34m 43s | | trunk passed |
| +1 :green_heart: | compile | 20m 43s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 18m 0s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 7s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 32s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 4s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 41s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 21s | | trunk passed |
| +1 :green_heart: | shadedclient | 15m 26s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 53s | | the patch passed |
| +1 :green_heart: | compile | 19m 58s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| -1 :x: | javac | 19m 58s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/10/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 2 new + 1985 unchanged - 0 fixed = 1987 total (was 1985) |
| +1 :green_heart: | compile | 18m 2s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| -1 :x: | javac | 18m 2s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/10/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 2 new + 1887 unchanged - 0 fixed = 1889 total (was 1887) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 5s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/10/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 3 new + 352 unchanged - 0 fixed = 355 total (was 352) |
| +1 :green_heart: | mvnsite | 1m 28s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 1s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 36s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 30s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 48s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 17m 23s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 53s | | The patch does not generate ASF License warnings. |
| | | | 178m 19s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2949 |
| JIRA Issue | HADOOP-17657 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 774c6c672fe5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 13251c98036e143d1505ec165a2f9e385f6acdc3 |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
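[Editor's note] The approach named in the PR title above can be sketched in plain JDK code. The StreamCapabilities interface below is a stand-in for org.apache.hadoop.fs.StreamCapabilities, and the method models (rather than reproduces) the SequenceFile.Writer change: probe the underlying stream for the "hflush" capability, and fall back to an ordinary flush() when it is not supported instead of failing.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Stand-in for org.apache.hadoop.fs.StreamCapabilities (sketch only).
interface StreamCapabilities {
    boolean hasCapability(String capability);
}

public class Main {

    // A stream that advertises "hflush", standing in for an HDFS stream.
    static class CapableStream extends ByteArrayOutputStream implements StreamCapabilities {
        @Override
        public boolean hasCapability(String c) { return "hflush".equals(c); }
    }

    // Durably sync when the stream advertises hflush; otherwise fall back
    // to flush() rather than throwing UnsupportedOperationException.
    static String flushOrHflush(OutputStream out) throws IOException {
        if (out instanceof StreamCapabilities
                && ((StreamCapabilities) out).hasCapability("hflush")) {
            // a real implementation would call the stream's hflush() here
            return "hflush";
        }
        out.flush();
        return "flush";
    }

    public static void main(String[] args) throws IOException {
        // A plain stream has no capabilities: the fallback path is taken.
        System.out.println(flushOrHflush(new ByteArrayOutputStream())); // flush
        // A capability-advertising stream takes the durable path.
        System.out.println(flushOrHflush(new CapableStream()));         // hflush
    }
}
```

The design point is that capability probing replaces instanceof checks against concrete stream classes, so wrappers can forward capabilities from the streams they wrap.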
[jira] [Commented] (HADOOP-17675) LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
[ https://issues.apache.org/jira/browse/HADOOP-17675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17336994#comment-17336994 ]

István Fajth commented on HADOOP-17675:
---------------------------------------

As stated in this article: https://www.infoworld.com/article/2077344/find-a-way-out-of-the-classloader-maze.html a native thread has its context classloader set to null by default.

If the context classloader that JNDI uses internally to load a class is null, then the bootstrap classloader is used, according to the apidoc here:
https://docs.oracle.com/javase/8/docs/api/java/lang/Class.html#forName-java.lang.String-boolean-java.lang.ClassLoader-
JNDI uses this form with the context classloader, as can be seen here:
https://github.com/openjdk/jdk11u/blob/master/src/java.naming/share/classes/com/sun/jndi/ldap/VersionHelper.java#L107
or here:
https://github.com/openjdk/jdk8u/blob/master/jdk/src/share/classes/com/sun/jndi/ldap/VersionHelper12.java#L72

In Impala this call happens from a thread created in native space. In that case the system/application classloader loads LdapSslSocketFactory fine in LdapGroupsMapping.getDirContext() while creating the environment, but the InitialDirContext constructor then gets to instantiating the LdapSslSocketFactory inside JNDI with the help of the linked VersionHelper implementations, and fails to load the class with the bootstrap classloader because the context classloader is null.

To solve this problem, we can safely use the classloader of the LdapGroupsMapping class, as it had to load the LdapSslSocketFactory class before.
> LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
> -------------------------------------------------------------
>
>                 Key: HADOOP-17675
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17675
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: common
>    Affects Versions: 3.2.2
>            Reporter: Tamas Mate
>            Assignee: István Fajth
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: stacktrace.txt
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Using LdapGroupsMapping with SSL enabled causes a ClassNotFoundException when it is called through native threads, as Apache Impala does.
> When a thread is attached to the VM, the current thread's context classloader is null, so when JNDI internally tries to use the current thread's context classloader to load the socket factory implementation, the Class.forName(String, boolean, ClassLoader) method gets null as the loader and uses the bootstrap classloader.
> Meanwhile, the LdapGroupsMapping class and the SslSocketFactory defined in it are loaded by the application classloader from its classpath.
> As the bootstrap classloader does not have hadoop-common on its classpath, when a native thread tries to use/load the LdapGroupsMapping class it can't, because the bootstrap loader can't load anything from hadoop-common. The correct solution seems to be to set the current thread's context classloader to the classloader of the LdapGroupsMapping class before initializing the JNDI internals, and then reset it to the original value after; with that we can ensure that the behaviour of everything else does not change, while this failure is avoided as well.
> Attached the complete stacktrace to this Jira.
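[Editor's note] The fix described in the issue (set the thread's context classloader before initializing JNDI, restore it afterwards) follows a standard save/set/restore pattern. A self-contained sketch, with the JNDI call replaced by a placeholder Runnable (the real code would run `new InitialDirContext(env)` there):

```java
public class Main {
    // Models the described fix: JNDI resolves classes through the thread
    // context classloader, which is null on natively attached threads, so
    // temporarily install the loader that loaded this class, and always
    // restore the original in a finally block.
    static ClassLoader runWithOwnClassLoader(Runnable jndiInit) {
        Thread t = Thread.currentThread();
        ClassLoader original = t.getContextClassLoader();
        try {
            t.setContextClassLoader(Main.class.getClassLoader());
            jndiInit.run(); // placeholder for the JNDI initialization
            return t.getContextClassLoader(); // what JNDI would have seen
        } finally {
            t.setContextClassLoader(original); // restore, even on failure
        }
    }

    public static void main(String[] args) {
        Thread t = Thread.currentThread();
        ClassLoader original = t.getContextClassLoader();

        // Simulate a natively attached thread: context classloader is null.
        t.setContextClassLoader(null);
        ClassLoader seenByJndi = runWithOwnClassLoader(() -> {});

        System.out.println(seenByJndi == Main.class.getClassLoader()); // true
        System.out.println(t.getContextClassLoader() == null);         // true (restored)

        t.setContextClassLoader(original);
    }
}
```

Restoring in `finally` is what makes the change safe for every other caller: the loader swap is invisible outside the guarded region.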
[GitHub] [hadoop] hadoop-yetus commented on pull request #2958: HDFS-15997. Implement dfsadmin -provisionSnapshotTrash -all
hadoop-yetus commented on pull request #2958:
URL: https://github.com/apache/hadoop/pull/2958#issuecomment-829704322

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 37s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 1s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 34m 39s | | trunk passed |
| +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 2s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 19s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 27s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 4s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 10s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 9s | | the patch passed |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| +1 :green_heart: | compile | 1m 5s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 1m 5s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 55s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 9s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 17s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 4s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 52s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 232m 9s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2958/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. |
| | | | 319m 10s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
| | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
| | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2958/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2958 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux ce7a005114d5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 16b8ad6aa7a83ad296d8b5cbfb4d90bde528af2b |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2958/2/testReport/ |
| Max. process+thread count | 3371 (vs. ulimit
[jira] [Commented] (HADOOP-17657) SequenceFile.Writer should implement StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17336995#comment-17336995 ]

Hadoop QA commented on HADOOP-17657:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} codespell {color} | {color:blue} 0m 1s{color} | | {color:blue} codespell was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 34m 43s{color} | | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 43s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 0s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 7s{color} | | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s{color} | | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 2m 21s{color} | | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 26s{color} | | {color:green} branch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 58s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 19m 58s{color} | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/10/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt] | {color:red} root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 2 new + 1985 unchanged - 0 fixed = 1987 total (was 1985) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 2s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 18m 2s{color} | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/10/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt] | {color:red} root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 2 new + 1887 unchanged - 0 fixed = 1889 total (was 1887) {color} |
| {color:green}+1{color} | {color:green} blanks {color} | {color:green} 0m 0s{color} | | {color:green} The patch has no blanks issues. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 5s{color} | [/results-checkstyle-hadoop-common-project_hadoop-common.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/10/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt] | {color:orange} hadoop-common-project/hadoop-common: The patch generated 3 new + 352 unchanged - 0 fixed = 355 total (was 352) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m
[jira] [Work logged] (HADOOP-17675) LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
[ https://issues.apache.org/jira/browse/HADOOP-17675?focusedWorklogId=591314&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591314 ] ASF GitHub Bot logged work on HADOOP-17675: --- Author: ASF GitHub Bot Created on: 29/Apr/21 23:59 Start Date: 29/Apr/21 23:59 Worklog Time Spent: 10m Work Description: fapifta opened a new pull request #2965: URL: https://github.com/apache/hadoop/pull/2965 As stated in this article: https://www.infoworld.com/article/2077344/find-a-way-out-of-the-classloader-maze.html A native thread has its context classloader set to null by default. If the context classloader which is used internally by JNDI to load a class is null, then the bootstrap classloader is used, according to the apidoc here: https://docs.oracle.com/javase/8/docs/api/java/lang/Class.html#forName-java.lang.String-boolean-java.lang.ClassLoader- JNDI uses this form with the context classloader as can be seen here: https://github.com/openjdk/jdk11u/blob/master/src/java.naming/share/classes/com/sun/jndi/ldap/VersionHelper.java#L107 or here: https://github.com/openjdk/jdk8u/blob/master/jdk/src/share/classes/com/sun/jndi/ldap/VersionHelper12.java#L72 In Impala this call happens from a Thread created in native space, so in that case the System/Application classloader loads LdapSslSocketFactory fine in LdapGroupsMapping.getDirContext() while creating the environment, but then the InitialDirContext constructor gets to instantiating the LdapSslSocketFactory inside JNDI with the help of the linked VersionHelper implementations, and fails to load the class with the bootstrap classloader, as the context classloader is null. In order to solve this problem, we can safely use the classloader of the LdapGroupsMapping class, as it had to load the LdapSslSocketFactory class before. ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. 
Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 591314) Remaining Estimate: 0h Time Spent: 10m > LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException > - > > Key: HADOOP-17675 > URL: https://issues.apache.org/jira/browse/HADOOP-17675 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2.2 >Reporter: Tamas Mate >Assignee: István Fajth >Priority: Major > Attachments: stacktrace.txt > > Time Spent: 10m > Remaining Estimate: 0h > > Using LdapGroupsMapping with SSL enabled causes ClassNotFoundException when > it is called through native threads, such as Apache Impala does. > When a thread is attached to the VM, the currentThread's context classloader > is null, so when JNDI internally tries to use the current thread's context > classloader to load the socket factory implementation, the > Class.forName(String, boolean, ClassLoader) method gets null as the loader > and uses the bootstrap classloader. > Meanwhile the LdapGroupsMapping class and the SslSocketFactory defined in it > are loaded by the application classloader from its classpath. > As the bootstrap classloader does not have hadoop-common in its classpath, > when a native thread tries to use/load the LdapGroupsMapping class it can't, > because the bootstrap loader can't load anything from hadoop-common. 
The > correct solution seems to be to set the currentThread's context classloader > to the classloader of the LdapGroupsMapping class before initializing the JNDI > internals, and then reset it to the original value after; with that we can > ensure that the behaviour of other things does not change, but this failure > can be avoided as well. > The complete stacktrace is attached to this Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
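The save-and-restore approach described in the report above can be sketched as follows. `withContextClassLoader` is a hypothetical helper written for illustration; it is not the code from PR #2965, which wires the equivalent logic into LdapGroupsMapping directly.

```java
import java.util.function.Supplier;

// Sketch of the context-classloader save-and-restore pattern described above.
// withContextClassLoader is a hypothetical helper, not the actual Hadoop patch.
public class ContextClassLoaderFix {

    /**
     * Runs the given action with the anchor class's own classloader installed
     * as the thread context classloader (so JNDI-style lookups that consult
     * the context loader can see the same classes), then restores whatever
     * loader was there before -- including null for natively attached threads.
     */
    public static <T> T withContextClassLoader(Class<?> anchor, Supplier<T> action) {
        Thread current = Thread.currentThread();
        ClassLoader original = current.getContextClassLoader(); // may be null
        try {
            current.setContextClassLoader(anchor.getClassLoader());
            return action.get();
        } finally {
            current.setContextClassLoader(original); // always restore
        }
    }

    public static void main(String[] args) {
        ClassLoader before = Thread.currentThread().getContextClassLoader();
        ClassLoader seen = withContextClassLoader(ContextClassLoaderFix.class,
                () -> Thread.currentThread().getContextClassLoader());
        // Inside the action the context loader is the anchor class's loader,
        // and afterwards the original loader is back in place.
        if (seen != ContextClassLoaderFix.class.getClassLoader()
                || Thread.currentThread().getContextClassLoader() != before) {
            throw new AssertionError("context classloader not handled correctly");
        }
    }
}
```

In the scenario from the report, the action would be the `new InitialDirContext(env)` call, with `LdapGroupsMapping.class` as the anchor, so that JNDI can resolve LdapSslSocketFactory even when the calling thread's context classloader is null.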
[GitHub] [hadoop] hadoop-yetus commented on pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
hadoop-yetus commented on pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#issuecomment-829705908 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | jshint | 0m 1s | | jshint was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 24s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 30s | | trunk passed | | +1 :green_heart: | compile | 5m 3s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 4m 44s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 20s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 41s | | trunk passed | | +1 :green_heart: | javadoc | 2m 43s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 35s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 8m 24s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 29s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 5s | | the patch passed | | +1 :green_heart: | compile | 5m 13s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 5m 13s | | the patch passed | | +1 :green_heart: | compile | 4m 55s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 4m 55s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 20s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/11/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 9 new + 621 unchanged - 6 fixed = 630 total (was 627) | | +1 :green_heart: | mvnsite | 3m 18s | | the patch passed | | +1 :green_heart: | javadoc | 2m 25s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 8s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 8m 33s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 31s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 22s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 234m 29s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | -1 :x: | unit | 5m 51s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt) | hadoop-hdfs-httpfs in the patch passed. 
| | -1 :x: | unit | 18m 10s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. | | | | 399m 18s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | |
[jira] [Updated] (HADOOP-17675) LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
[ https://issues.apache.org/jira/browse/HADOOP-17675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17675: Labels: pull-request-available (was: ) > LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException > - > > Key: HADOOP-17675 > URL: https://issues.apache.org/jira/browse/HADOOP-17675 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2.2 >Reporter: Tamas Mate >Assignee: István Fajth >Priority: Major > Labels: pull-request-available > Attachments: stacktrace.txt > > Time Spent: 10m > Remaining Estimate: 0h > > Using LdapGroupsMapping with SSL enabled causes ClassNotFoundException when > it is called through native threads, such as Apache Impala does. > When a thread is attached to the VM, the currentThread's context classloader > is null, so when JNDI internally tries to use the current thread's context > classloader to load the socket factory implementation, the > Class.forName(String, boolean, ClassLoader) method gets null as the loader > and uses the bootstrap classloader. > Meanwhile the LdapGroupsMapping class and the SslSocketFactory defined in it > are loaded by the application classloader from its classpath. > As the bootstrap classloader does not have hadoop-common in its classpath, > when a native thread tries to use/load the LdapGroupsMapping class it can't, > because the bootstrap loader can't load anything from hadoop-common. The > correct solution seems to be to set the currentThread's context classloader > to the classloader of the LdapGroupsMapping class before initializing the JNDI > internals, and then reset it to the original value after; with that we can > ensure that the behaviour of other things does not change, but this failure > can be avoided as well. > The complete stacktrace is attached to this Jira. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2925: HDFS-15982. Deleted data using HTTP API should be saved to the trash
hadoop-yetus commented on pull request #2925: URL: https://github.com/apache/hadoop/pull/2925#issuecomment-829712363 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 55s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | jshint | 0m 0s | | jshint was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ branch-3.3 Compile Tests _ | | +0 :ok: | mvndep | 15m 0s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 7s | | branch-3.3 passed | | +1 :green_heart: | compile | 4m 10s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 1m 6s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 3m 46s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 3m 57s | | branch-3.3 passed | | +1 :green_heart: | spotbugs | 8m 43s | | branch-3.3 passed | | +1 :green_heart: | shadedclient | 21m 2s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 33s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 48s | | the patch passed | | +1 :green_heart: | compile | 5m 6s | | the patch passed | | +1 :green_heart: | javac | 5m 6s | | the patch passed | | +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 1m 10s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/10/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 9 new + 618 unchanged - 6 fixed = 627 total (was 624) | | +1 :green_heart: | mvnsite | 4m 3s | | the patch passed | | +1 :green_heart: | javadoc | 3m 42s | | the patch passed | | +1 :green_heart: | spotbugs | 11m 29s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 28s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 24s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 249m 40s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 5m 7s | | hadoop-hdfs-httpfs in the patch passed. | | -1 :x: | unit | 15m 8s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. 
| | | | 411m 30s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestReencryption | | | hadoop.hdfs.server.namenode.TestFSEditLogLoader | | | hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport | | | hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives | | | hadoop.hdfs.server.namenode.TestNameEditsConfigs | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.TestNetworkTopologyServlet | | | hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile | | | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier | | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | | hadoop.hdfs.server.namenode.TestQuotaWithStripedBlocksWithRandomECPolicy | | | hadoop.cli.TestHDFSCLI | | | hadoop.hdfs.server.namenode.TestStoragePolicySatisfierWithHA | | | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | | hadoop.hdfs.server.datanode.TestBatchIbr | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.server.namenode.TestEditLog | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.server.federation.router.TestRouterRpc | | Subsystem |
[GitHub] [hadoop] fapifta opened a new pull request #2965: HADOOP-17675 LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
fapifta opened a new pull request #2965: URL: https://github.com/apache/hadoop/pull/2965 As stated in this article: https://www.infoworld.com/article/2077344/find-a-way-out-of-the-classloader-maze.html A native thread has its context classloader set to null by default. If the context classloader which is used internally by JNDI to load a class is null, then the bootstrap classloader is used, according to the apidoc here: https://docs.oracle.com/javase/8/docs/api/java/lang/Class.html#forName-java.lang.String-boolean-java.lang.ClassLoader- JNDI uses this form with the context classloader as can be seen here: https://github.com/openjdk/jdk11u/blob/master/src/java.naming/share/classes/com/sun/jndi/ldap/VersionHelper.java#L107 or here: https://github.com/openjdk/jdk8u/blob/master/jdk/src/share/classes/com/sun/jndi/ldap/VersionHelper12.java#L72 In Impala this call happens from a Thread created in native space, so in that case the System/Application classloader loads LdapSslSocketFactory fine in LdapGroupsMapping.getDirContext() while creating the environment, but then the InitialDirContext constructor gets to instantiating the LdapSslSocketFactory inside JNDI with the help of the linked VersionHelper implementations, and fails to load the class with the bootstrap classloader, as the context classloader is null. In order to solve this problem, we can safely use the classloader of the LdapGroupsMapping class, as it had to load the LdapSslSocketFactory class before. ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17657) SequenceFile.Writer should implement StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-17657?focusedWorklogId=591318&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591318 ] ASF GitHub Bot logged work on HADOOP-17657: --- Author: ASF GitHub Bot Created on: 30/Apr/21 00:11 Start Date: 30/Apr/21 00:11 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2949: URL: https://github.com/apache/hadoop/pull/2949#issuecomment-829709523 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 36s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 43s | | trunk passed | | +1 :green_heart: | compile | 20m 43s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 0s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 7s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 32s | | trunk passed | | +1 :green_heart: | javadoc | 1m 4s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 41s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 21s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 26s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 53s | | the patch passed | | +1 :green_heart: | compile | 19m 58s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | javac | 19m 58s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/10/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 2 new + 1985 unchanged - 0 fixed = 1987 total (was 1985) | | +1 :green_heart: | compile | 18m 2s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | javac | 18m 2s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/10/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 2 new + 1887 unchanged - 0 fixed = 1889 total (was 1887) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 1m 5s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/10/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 3 new + 352 unchanged - 0 fixed = 355 total (was 352) | | +1 :green_heart: | mvnsite | 1m 28s | | the patch passed | | +1 :green_heart: | javadoc | 1m 1s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 36s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 30s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 48s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 23s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 53s | | The patch does not generate ASF License warnings. | | | | 178m 19s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2949 | | JIRA Issue | HADOOP-17657 | | Optional Tests | dupname asflicense compile
[GitHub] [hadoop] hadoop-yetus commented on pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
hadoop-yetus commented on pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#issuecomment-829593545 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 36s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | jshint | 0m 1s | | jshint was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 7s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 26s | | trunk passed | | +1 :green_heart: | compile | 5m 22s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 4m 56s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 21s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 46s | | trunk passed | | +1 :green_heart: | javadoc | 2m 47s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 40s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 8m 0s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 19s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 4s | | the patch passed | | +1 :green_heart: | compile | 4m 43s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 4m 43s | | the patch passed | | +1 :green_heart: | compile | 4m 20s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 4m 20s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 11s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/10/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 9 new + 621 unchanged - 6 fixed = 630 total (was 627) | | +1 :green_heart: | mvnsite | 3m 3s | | the patch passed | | +1 :green_heart: | javadoc | 2m 10s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 2s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 7m 41s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 9s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 20s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 236m 12s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 6m 30s | | hadoop-hdfs-httpfs in the patch passed. 
| | -1 :x: | unit | 18m 33s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. | | | | 396m 53s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.server.federation.router.TestRouterRpc | | | hadoop.hdfs.server.federation.router.TestRouterAllResolver | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41
[GitHub] [hadoop] hadoop-yetus commented on pull request #2925: HDFS-15982. Deleted data using HTTP API should be saved to the trash
hadoop-yetus commented on pull request #2925: URL: https://github.com/apache/hadoop/pull/2925#issuecomment-829577178 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 5s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | jshint | 0m 0s | | jshint was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ branch-3.3 Compile Tests _ | | +0 :ok: | mvndep | 14m 37s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 18s | | branch-3.3 passed | | +1 :green_heart: | compile | 3m 48s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 1m 2s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 3m 22s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 3m 11s | | branch-3.3 passed | | +1 :green_heart: | spotbugs | 7m 38s | | branch-3.3 passed | | +1 :green_heart: | shadedclient | 18m 6s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 53s | | the patch passed | | +1 :green_heart: | compile | 3m 42s | | the patch passed | | +1 :green_heart: | javac | 3m 42s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 56s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/9/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 9 new + 618 unchanged - 6 fixed = 627 total (was 624) | | +1 :green_heart: | mvnsite | 2m 58s | | the patch passed | | +1 :green_heart: | javadoc | 2m 49s | | the patch passed | | +1 :green_heart: | spotbugs | 7m 58s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 52s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 9s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 222m 15s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 5m 58s | | hadoop-hdfs-httpfs in the patch passed. | | -1 :x: | unit | 17m 14s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. 
| | | | 365m 48s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestFileCreation | | | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload | | | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/9/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2925 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell jshint markdownlint | | uname | Linux 600c3befd35a 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3 / d91c0608a3d16c83381a3cbe4560ed971c1f5657 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/9/testReport/ | | Max. process+thread count | 2018 (vs. ulimit of 5500) | | modules | C:
[jira] [Created] (HADOOP-17676) Restrict imports from org.apache.curator.shaded
Viraj Jasani created HADOOP-17676:
-------------------------------------

             Summary: Restrict imports from org.apache.curator.shaded
                 Key: HADOOP-17676
                 URL: https://issues.apache.org/jira/browse/HADOOP-17676
             Project: Hadoop Common
          Issue Type: Task
            Reporter: Viraj Jasani
            Assignee: Viraj Jasani

Once HADOOP-17653 gets in, we should ban "org.apache.curator.shaded" imports, as discussed on PR#2945. We can use an enforcer rule to restrict imports so that the mvn build fails if they are ever used. Thanks for the suggestion [~weichiu] [~aajisaka] [~ste...@apache.org] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
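As a sketch of the enforcer approach proposed above: the `restrict-imports-enforcer-rule` extension (from the de.skuzzle.enforcer project) plugs into maven-enforcer-plugin and fails the build on banned imports. The version number and the exact `pom.xml` placement below are illustrative assumptions, not taken from this thread:

```xml
<!-- Illustrative sketch: fail the build if any source file imports
     org.apache.curator.shaded.*. Version is a placeholder. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <dependencies>
    <dependency>
      <groupId>de.skuzzle.enforcer</groupId>
      <artifactId>restrict-imports-enforcer-rule</artifactId>
      <version>2.1.0</version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <id>banned-illegal-imports</id>
      <phase>process-sources</phase>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <restrictImports implementation="de.skuzzle.enforcer.restrictimports.rule.RestrictImports">
            <reason>Do not import classes from the curator shaded namespace</reason>
            <bannedImports>
              <bannedImport>org.apache.curator.shaded.**</bannedImport>
            </bannedImports>
          </restrictImports>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this rule in place, `mvn verify` aborts during `process-sources` whenever a banned import appears, which matches the "mvn build fails" behavior described in the issue.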
[GitHub] [hadoop] hadoop-yetus commented on pull request #2949: HADOOP-17657: implement StreamCapabilities in SequenceFile.Writer and fall back to flush, if hflush is not supported
hadoop-yetus commented on pull request #2949: URL: https://github.com/apache/hadoop/pull/2949#issuecomment-829515094

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:-------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 41s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 35m 46s | | trunk passed |
| +1 :green_heart: | compile | 23m 21s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 19m 14s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 4s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 31s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 2s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 23s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 53s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 58s | | the patch passed |
| +1 :green_heart: | compile | 20m 30s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| -1 :x: | javac | 20m 30s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/9/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 2 new + 1983 unchanged - 0 fixed = 1985 total (was 1983) |
| +1 :green_heart: | compile | 21m 19s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| -1 :x: | javac | 21m 19s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/9/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 2 new + 1883 unchanged - 0 fixed = 1885 total (was 1883) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 5s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/9/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 3 new + 352 unchanged - 0 fixed = 355 total (was 352) |
| +1 :green_heart: | mvnsite | 1m 35s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 38s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 50s | | the patch passed |
| +1 :green_heart: | shadedclient | 19m 11s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 19m 30s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/9/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 57s | | The patch does not generate ASF License warnings. |
| | | | 193m 33s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.metrics2.source.TestJvmMetrics |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2949 |
| JIRA Issue | HADOOP-17657 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 7ee77576d905 4.15.0-58-generic
[GitHub] [hadoop] smengcl commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
smengcl commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r623268777

## File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md

## @@ -462,7 +462,7 @@ See also: [`destination`](#Destination), [FileSystem](../../api/org/apache/hadoo
* Submit a HTTP DELETE request.

        curl -i -X DELETE "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE
-                              [&recursive=<true|false>]"
+                              [&recursive=<true|false>][&skiptrash=<true|false>]"

Review comment: Yes this would be good enough. Thanks!

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] mooons commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
mooons commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r623263562

## File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md

## @@ -462,7 +462,7 @@ See also: [`destination`](#Destination), [FileSystem](../../api/org/apache/hadoo
* Submit a HTTP DELETE request.

        curl -i -X DELETE "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE
-                              [&recursive=<true|false>]"
+                              [&recursive=<true|false>][&skiptrash=<true|false>]"

Review comment: Looks good. Thanks!
[GitHub] [hadoop] hadoop-yetus commented on pull request #2949: HADOOP-17657: implement StreamCapabilities in SequenceFile.Writer and fall back to flush, if hflush is not supported
hadoop-yetus commented on pull request #2949: URL: https://github.com/apache/hadoop/pull/2949#issuecomment-829458899

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:-------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 33s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 34m 51s | | trunk passed |
| +1 :green_heart: | compile | 20m 43s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 18m 1s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 8s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 28s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 22s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 32s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 57s | | the patch passed |
| +1 :green_heart: | compile | 21m 26s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| -1 :x: | javac | 21m 26s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/8/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 1 new + 1985 unchanged - 0 fixed = 1986 total (was 1985) |
| +1 :green_heart: | compile | 19m 44s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| -1 :x: | javac | 19m 44s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/8/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1887 unchanged - 0 fixed = 1888 total (was 1887) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 1s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/8/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 1 new + 352 unchanged - 0 fixed = 353 total (was 352) |
| +1 :green_heart: | mvnsite | 1m 36s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 34s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 39s | | the patch passed |
| +1 :green_heart: | shadedclient | 17m 1s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 17m 20s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/8/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. |
| | | | 183m 10s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.io.TestSequenceFile |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/8/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2949 |
| JIRA Issue | HADOOP-17657 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 20266c118a02 4.15.0-58-generic #64-Ubuntu SMP
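The "fall back to flush, if hflush is not supported" behavior in PR #2949's title can be sketched outside Hadoop. Hadoop's real `StreamCapabilities` interface exposes `hasCapability(String)`; the `Capabilities` interface and `CapabilityAwareWriter` class below are self-contained stand-ins, not the actual SequenceFile code:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Stand-in for org.apache.hadoop.fs.StreamCapabilities: streams advertise
// which durability operations they actually support.
interface Capabilities {
    boolean hasCapability(String capability);
}

// A writer that probes its underlying stream: if "hflush" is advertised it
// takes the hflush path; otherwise it degrades to a plain flush().
class CapabilityAwareWriter {
    private final OutputStream out;

    CapabilityAwareWriter(OutputStream out) {
        this.out = out;
    }

    /** Returns which operation was taken, so the fallback is observable. */
    String sync() throws IOException {
        if (out instanceof Capabilities
                && ((Capabilities) out).hasCapability("hflush")) {
            // A real HDFS stream would call hflush() here.
            out.flush();
            return "hflush";
        }
        out.flush(); // fallback path for streams without hflush
        return "flush";
    }
}

public class Main {
    public static void main(String[] args) throws IOException {
        // ByteArrayOutputStream advertises no capabilities, so the
        // writer falls back to flush().
        CapabilityAwareWriter w =
                new CapabilityAwareWriter(new ByteArrayOutputStream());
        System.out.println(w.sync()); // prints "flush"
    }
}
```

The point of the pattern is that callers never get an `UnsupportedOperationException` from a stream that cannot hflush; they silently receive the weaker flush semantics instead.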
[GitHub] [hadoop] virajjasani commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
virajjasani commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r623254234 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md ## @@ -462,7 +462,7 @@ See also: [`destination`](#Destination), [FileSystem](../../api/org/apache/hadoo * Submit a HTTP DELETE request. curl -i -X DELETE "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE - [&recursive=<true|false>]" + [&recursive=<true|false>][&skiptrash=<true|false>]" Review comment: Done. @smengcl does this look good? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
virajjasani commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r623247100 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md ## @@ -462,7 +462,7 @@ See also: [`destination`](#Destination), [FileSystem](../../api/org/apache/hadoo * Submit a HTTP DELETE request. curl -i -X DELETE "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE - [&recursive=<true|false>]" + [&recursive=<true|false>][&skiptrash=<true|false>]" Review comment: Unfortunately, making these values bold by `**` is not working as this text is covered by scrollable (Insert code mode). However, let me make special note of default value right below the curl command. Thanks for the suggestion.
[GitHub] [hadoop] smengcl commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
smengcl commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r623234638 ## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteSkipTrashParam.java ## @@ -0,0 +1,50 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs.web.resources; + +/** + * SkipTrash param to be used by DELETE query. + */ +public class DeleteSkipTrashParam extends BooleanParam { + + public static final String NAME = "skiptrash"; + public static final String DEFAULT = FALSE; Review comment: . Let's include the incompatible change note in 3.3.1 and 3.4.0 release notes.
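For context on the parameter under review, here is a minimal sketch (the class and host names are hypothetical, not the Hadoop API) of how a client might assemble the WebHDFS DELETE query with the `skiptrash` parameter this patch introduces:

```java
// Hypothetical helper illustrating the DELETE query shape from the patch.
// The /webhdfs/v1 prefix and op=DELETE come from the WebHDFS REST API;
// everything else here is illustrative only.
class WebHdfsDeleteUrl {
    static String build(String host, int port, String path,
                        boolean recursive, boolean skipTrash) {
        // WebHDFS prefixes file system paths with /webhdfs/v1.
        return String.format(
            "http://%s:%d/webhdfs/v1%s?op=DELETE&recursive=%b&skiptrash=%b",
            host, port, path, recursive, skipTrash);
    }
}
```

With the default of `skiptrash=false` discussed above, a DELETE that omits the parameter would move the path to the trash rather than removing it immediately.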
[jira] [Commented] (HADOOP-17198) Support S3 Access Points
[ https://issues.apache.org/jira/browse/HADOOP-17198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17335635#comment-17335635 ] Bogdan Stolojan commented on HADOOP-17198: -- Great, thanks for the tips! Also, glad to see this ticket for removing S3Guard exists https://issues.apache.org/jira/browse/HADOOP-17409 > Support S3 Access Points > > > Key: HADOOP-17198 > URL: https://issues.apache.org/jira/browse/HADOOP-17198 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Major > > Improve VPC integration by supporting access points for buckets > https://docs.aws.amazon.com/AmazonS3/latest/dev/access-points.html > Not sure how to do this *at all*; -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
virajjasani commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r623196324 ## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteSkipTrashParam.java ## @@ -0,0 +1,50 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs.web.resources; + +/** + * SkipTrash param to be used by DELETE query. + */ +public class DeleteSkipTrashParam extends BooleanParam { + + public static final String NAME = "skiptrash"; + public static final String DEFAULT = FALSE; Review comment: I understand your concerns @smengcl. @jojochuang's [comment](https://issues.apache.org/jira/browse/HDFS-15982?focusedCommentId=17331521&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17331521) from Jira that you might be interested to look at: ``` This is a big incompatible change. If we think this should be part of 3.4.0, risking our compatibility guarantee (which I think makes sense, given how many times I was involved in accidental data deletion), I think it can be part of 3.3.1. 
We traditionally regard 3.3.0 as non-production ready, so making an incompat change in 3.3.1 probably is justifiable. ```
[GitHub] [hadoop] smengcl commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
smengcl commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r623196326 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md ## @@ -462,7 +462,7 @@ See also: [`destination`](#Destination), [FileSystem](../../api/org/apache/hadoo * Submit a HTTP DELETE request. curl -i -X DELETE "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE - [&recursive=<true|false>]" + [&recursive=<true|false>][&skiptrash=<true|false>]" Review comment: nit: if we can emphasize the default value of `recursive` (`false`) and `skiptrash` here in the doc it would be great! Try bold font: `[&recursive=<true|**false**>]`
[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context
[ https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=591089&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591089 ] ASF GitHub Bot logged work on HADOOP-17511: --- Author: ASF GitHub Bot Created on: 29/Apr/21 16:03 Start Date: 29/Apr/21 16:03 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-829365518 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | patch | 0m 19s | | https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/2807 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/18/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 591089) Time Spent: 16h 10m (was: 16h) > Add an Audit plugin point for S3A auditing/context > -- > > Key: HADOOP-17511 > URL: https://issues.apache.org/jira/browse/HADOOP-17511 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.3.1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 16h 10m > Remaining Estimate: 0h > > Add a way for auditing tools to correlate S3 object calls with Hadoop FS API > calls. > Initially just to log/forward to an auditing service. 
> Later: let us attach them as parameters in S3 requests, such as opentrace > headers or (my initial idea: http referrer header -where it will get into > the log) > Challenges > * ensuring the audit span is created for every public entry point. That will > have to include those used in s3guard tools, some defacto public APIs > * and not re-entered for active spans. s3A code must not call back into the > FS API points > * Propagation across worker threads
[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector
hadoop-yetus commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-829365518 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | patch | 0m 19s | | https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/2807 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/18/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[jira] [Work logged] (HADOOP-17657) SequenceFile.Writer should implement StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-17657?focusedWorklogId=591088&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591088 ] ASF GitHub Bot logged work on HADOOP-17657: --- Author: ASF GitHub Bot Created on: 29/Apr/21 16:02 Start Date: 29/Apr/21 16:02 Worklog Time Spent: 10m Work Description: hadoop-yetus removed a comment on pull request #2949: URL: https://github.com/apache/hadoop/pull/2949#issuecomment-828814785 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 591088) Time Spent: 3h 10m (was: 3h) > SequenceFile.Writer should implement StreamCapabilities > -- > > Key: HADOOP-17657 > URL: https://issues.apache.org/jira/browse/HADOOP-17657 > Project: Hadoop Common > Issue Type: Bug >Reporter: Kishen Das >Assignee: Kishen Das >Priority: Major > Labels: pull-request-available > Time Spent: 3h 10m > Remaining Estimate: 0h > > Following exception is thrown whenever we invoke ProtoMessageWriter.hflush on > S3 from Tez, which internally calls > org.apache.hadoop.io.SequenceFile$Writer.hflush -> org.apache.hadoop.fs.FSDataOutputStream.hflush -> S3ABlockOutputStream.hflush which is not > implemented and throws java.lang.UnsupportedOperationException. 
> bdffe22d96ae [mdc@18060 class="yarn.YarnUncaughtExceptionHandler" > level="ERROR" thread="HistoryEventHandlingThread"] Thread > Thread[HistoryEventHandlingThread, 5,main] threw an > Exception.^Mjava.lang.UnsupportedOperationException: S3A streams are not > Syncable^M at > org.apache.hadoop.fs.s3a.S3ABlockOutputStream.hflush(S3ABlockOutputStream.java:657)^M > at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:136)^M at > org.apache.hadoop.io.SequenceFile$Writer.hflush(SequenceFile.java:1367)^M at > org.apache.tez.dag.history.logging.proto.ProtoMessageWriter.hflush(ProtoMessageWriter.java:64)^M at > org.apache.tez.dag.history.logging.proto.ProtoHistoryLoggingService.finishCurrentDag(ProtoHistoryLoggingService.java:239)^M > at org.apache.tez.dag.history.logging.proto.ProtoHistoryLoggingService.handleEvent(ProtoHistoryLoggingService.java:198)^M at > org.apache.tez.dag.history.logging.proto.ProtoHistoryLoggingService.loop(ProtoHistoryLoggingService.java:153)^M > at java.lang.Thread.run(Thread.java:748)^M > In order to fix this issue we should implement StreamCapabilities in > SequenceFile.Writer. Also, we should fall back to flush(), if hflush() is not > supported.
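The fallback described in the issue can be sketched as follows. `Capabilities` is a simplified local stand-in for Hadoop's `StreamCapabilities` interface (and `NonSyncableStream` stands in for a stream like `S3ABlockOutputStream` that cannot hflush), used only so the snippet compiles without Hadoop on the classpath:

```java
import java.io.ByteArrayOutputStream;
import java.io.Flushable;
import java.io.IOException;

// Simplified stand-in for org.apache.hadoop.fs.StreamCapabilities.
interface Capabilities {
    boolean hasCapability(String capability);
}

// Models a stream, like S3ABlockOutputStream, that does not support hflush().
class NonSyncableStream extends ByteArrayOutputStream implements Capabilities {
    @Override
    public boolean hasCapability(String capability) {
        return false;
    }
}

class WriterSketch {
    // Mirrors the proposed behavior: only call hflush() when the stream
    // advertises the capability, otherwise fall back to plain flush().
    // Returns which path was taken, for illustration.
    static String hflushOrFlush(Flushable out) throws IOException {
        if (out instanceof Capabilities
                && ((Capabilities) out).hasCapability("hflush")) {
            // A real implementation would cast to Syncable and call hflush().
            return "hflush";
        }
        out.flush();
        return "flush";
    }
}
```

Probing the capability before calling hflush() is what keeps SequenceFile.Writer from hitting the `UnsupportedOperationException` in the stack trace above.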
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2949: HADOOP-17657: implement StreamCapabilities in SequenceFile.Writer and fall back to flush, if hflush is not supported
hadoop-yetus removed a comment on pull request #2949: URL: https://github.com/apache/hadoop/pull/2949#issuecomment-828814785
[GitHub] [hadoop] smengcl commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
smengcl commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r623184950 ## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteSkipTrashParam.java ## @@ -0,0 +1,50 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs.web.resources; + +/** + * SkipTrash param to be used by DELETE query. + */ +public class DeleteSkipTrashParam extends BooleanParam { + + public static final String NAME = "skiptrash"; + public static final String DEFAULT = FALSE; Review comment: Ah I just noticed target version includes 3.3.1, backporting to 3.3.x might be a problem if this is an incompatible change. According to the [compatibility guideline](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html#REST_APIs): > Each API has an API-specific version number. Any incompatible changes MUST increment the API version number. 
How about this: - Change `skiptrash=true` default to be compatible with WebHDFS v1, backport this to 3.3.1 - Set `skiptrash=false` in a separate jira for 3.4.0, which will be an incompatible change Or: - Increment WebHDFS REST API version to v2 which has `skiptrash=false` as default for DELETE
[GitHub] [hadoop] hadoop-yetus commented on pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
hadoop-yetus commented on pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#issuecomment-829358556 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 41s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | jshint | 0m 0s | | jshint was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 48s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 22s | | trunk passed | | +1 :green_heart: | compile | 5m 30s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 4m 57s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 41s | | trunk passed | | +1 :green_heart: | javadoc | 2m 42s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 31s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 8m 22s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 14s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 6s | | the patch passed | | +1 :green_heart: | compile | 5m 1s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 5m 1s | | the patch passed | | +1 :green_heart: | compile | 4m 50s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 4m 50s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 19s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/8/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 9 new + 621 unchanged - 6 fixed = 630 total (was 627) | | +1 :green_heart: | mvnsite | 3m 18s | | the patch passed | | +1 :green_heart: | javadoc | 2m 26s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 12s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 8m 2s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 7s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 25s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 239m 55s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 6m 16s | | hadoop-hdfs-httpfs in the patch passed. | | +1 :green_heart: | unit | 18m 15s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. 
| | | | 405m 13s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2927 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell jshint markdownlint | | uname | Linux e22b80e9a479 4.15.0-112-generic #113-Ubuntu
[GitHub] [hadoop] hadoop-yetus commented on pull request #2925: HDFS-15982. Deleted data using HTTP API should be saved to the trash
hadoop-yetus commented on pull request #2925: URL: https://github.com/apache/hadoop/pull/2925#issuecomment-829354581 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 23m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | jshint | 0m 0s | | jshint was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ branch-3.3 Compile Tests _ | | +0 :ok: | mvndep | 11m 41s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 31s | | branch-3.3 passed | | +1 :green_heart: | compile | 4m 36s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 1m 8s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 3m 44s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 3m 21s | | branch-3.3 passed | | +1 :green_heart: | spotbugs | 8m 41s | | branch-3.3 passed | | +1 :green_heart: | shadedclient | 18m 26s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 17s | | the patch passed | | +1 :green_heart: | compile | 4m 26s | | the patch passed | | +1 :green_heart: | javac | 4m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 56s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/8/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 9 new + 619 unchanged - 6 fixed = 628 total (was 625) | | +1 :green_heart: | mvnsite | 3m 22s | | the patch passed | | +1 :green_heart: | javadoc | 3m 5s | | the patch passed | | +1 :green_heart: | spotbugs | 9m 2s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 26s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 11s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 225m 39s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 5m 42s | | hadoop-hdfs-httpfs in the patch passed. | | +1 :green_heart: | unit | 16m 27s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. 
| | | | 395m 58s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes | | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2925 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell jshint markdownlint | | uname | Linux 58188c9b6b03 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3 / 4ee943682cebc28bbe37b40d29cced08eb7fd968 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/8/testReport/ | | Max. process+thread count | 2110 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/8/console | | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT
[GitHub] [hadoop] hadoop-yetus commented on pull request #2925: HDFS-15982. Deleted data using HTTP API should be saved to the trash
hadoop-yetus commented on pull request #2925: URL: https://github.com/apache/hadoop/pull/2925#issuecomment-829352574 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 24m 22s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | jshint | 0m 1s | | jshint was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ branch-3.3 Compile Tests _ | | +0 :ok: | mvndep | 11m 40s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 21s | | branch-3.3 passed | | +1 :green_heart: | compile | 4m 41s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 1m 8s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 3m 44s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 3m 20s | | branch-3.3 passed | | +1 :green_heart: | spotbugs | 8m 41s | | branch-3.3 passed | | +1 :green_heart: | shadedclient | 18m 4s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 17s | | the patch passed | | +1 :green_heart: | compile | 4m 39s | | the patch passed | | +1 :green_heart: | javac | 4m 39s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 58s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/7/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 9 new + 619 unchanged - 6 fixed = 628 total (was 625) | | +1 :green_heart: | mvnsite | 3m 21s | | the patch passed | | +1 :green_heart: | javadoc | 3m 1s | | the patch passed | | +1 :green_heart: | spotbugs | 9m 1s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 10s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 17s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 223m 40s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 5m 51s | | hadoop-hdfs-httpfs in the patch passed. | | -1 :x: | unit | 16m 19s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 52s | | The patch does not generate ASF License warnings. 
| | | | 394m 21s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination | | | hadoop.hdfs.server.federation.router.TestRouterRpc | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2925 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell jshint markdownlint | | uname | Linux 0e6778b1e820 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3 / 4ee943682cebc28bbe37b40d29cced08eb7fd968 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/7/testReport/ | | Max. process+thread count | 2404 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs
[GitHub] [hadoop] hadoop-yetus commented on pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
hadoop-yetus commented on pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#issuecomment-829344056 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 35s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | jshint | 0m 1s | | jshint was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 42s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 21s | | trunk passed | | +1 :green_heart: | compile | 4m 55s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 4m 26s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 17s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 39s | | trunk passed | | +1 :green_heart: | javadoc | 2m 43s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 28s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 7m 28s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 22s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 55s | | the patch passed | | +1 :green_heart: | compile | 4m 39s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 4m 39s | | the patch passed | | +1 :green_heart: | compile | 4m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 4m 21s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 10s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/9/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 9 new + 621 unchanged - 6 fixed = 630 total (was 627) | | +1 :green_heart: | mvnsite | 3m 2s | | the patch passed | | +1 :green_heart: | javadoc | 2m 13s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 57s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 7m 38s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 19s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 21s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 231m 19s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 6m 10s | | hadoop-hdfs-httpfs in the patch passed. 
| | -1 :x: | unit | 18m 19s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. | | | | 384m 4s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.TestReconstructStripedFileWithValidator | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.server.federation.router.TestRouterRpc | | | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
[GitHub] [hadoop] virajjasani edited a comment on pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
virajjasani edited a comment on pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#issuecomment-829285381 @smengcl #2925 is the `hadoop-3.3` backport of this PR, and I have kept it up to date with this PR while addressing review comments. Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
virajjasani commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r623098996 ## File path: hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java ## @@ -756,6 +760,10 @@ public JSONObject execute(FileSystem fs) throws IOException { return toJSON( StringUtils.toLowerCase(HttpFSFileSystem.DELETE_JSON), true); } +// Same is the behavior with Delete shell command. +// If moveToAppropriateTrash() returns false, file deletion +// is attempted rather than throwing Error. +LOG.error("Could not move {} to Trash, attempting removal", path); Review comment: Sure, sounds good. Let me do it right away. Thanks
[GitHub] [hadoop] Hexiaoqiao merged pull request #2954: HDFS-15561. RBF: Remove NPE when local namenode is not configured
Hexiaoqiao merged pull request #2954: URL: https://github.com/apache/hadoop/pull/2954
[GitHub] [hadoop] smengcl commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
smengcl commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r623028802 ## File path: hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java ## @@ -756,6 +760,10 @@ public JSONObject execute(FileSystem fs) throws IOException { return toJSON( StringUtils.toLowerCase(HttpFSFileSystem.DELETE_JSON), true); } +// Same is the behavior with Delete shell command. +// If moveToAppropriateTrash() returns false, file deletion +// is attempted rather than throwing Error. +LOG.error("Could not move {} to Trash, attempting removal", path); Review comment: Let's lower this log level to `debug` instead **if we decide to make skiptrash default to false**. `error` could generate a lot of noise if trash is not enabled here. When skiptrash defaults to true then I'm fine with `error`. But `warn` might still be better.
[GitHub] [hadoop] smengcl commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
smengcl commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r623036457 ## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteSkipTrashParam.java ## @@ -0,0 +1,50 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs.web.resources; + +/** + * SkipTrash param to be used by DELETE query. + */ +public class DeleteSkipTrashParam extends BooleanParam { + + public static final String NAME = "skiptrash"; + public static final String DEFAULT = FALSE; Review comment: Alright, if the jira has the incompatible label I'm fine with skiptrash=false default. :) @jojochuang
[GitHub] [hadoop] smengcl commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
smengcl commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r623031345 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java ## @@ -1527,34 +1535,49 @@ public Response delete( @QueryParam(RecursiveParam.NAME) @DefaultValue(RecursiveParam.DEFAULT) final RecursiveParam recursive, @QueryParam(SnapshotNameParam.NAME) @DefaultValue(SnapshotNameParam.DEFAULT) - final SnapshotNameParam snapshotName + final SnapshotNameParam snapshotName, + @QueryParam(DeleteSkipTrashParam.NAME) + @DefaultValue(DeleteSkipTrashParam.DEFAULT) + final DeleteSkipTrashParam skiptrash ) throws IOException, InterruptedException { -init(ugi, delegation, username, doAsUser, path, op, recursive, snapshotName); +init(ugi, delegation, username, doAsUser, path, op, recursive, +snapshotName, skiptrash); -return doAs(ugi, new PrivilegedExceptionAction() { - @Override - public Response run() throws IOException { - return delete(ugi, delegation, username, doAsUser, - path.getAbsolutePath(), op, recursive, snapshotName); - } -}); +return doAs(ugi, () -> delete( +path.getAbsolutePath(), op, recursive, snapshotName, skiptrash)); } protected Response delete( - final UserGroupInformation ugi, - final DelegationParam delegation, - final UserParam username, - final DoAsParam doAsUser, final String fullpath, final DeleteOpParam op, final RecursiveParam recursive, - final SnapshotNameParam snapshotName - ) throws IOException { + final SnapshotNameParam snapshotName, + final DeleteSkipTrashParam skipTrash) throws IOException { final ClientProtocol cp = getRpcClientProtocol(); switch(op.getValue()) { case DELETE: { + Configuration conf = + (Configuration) context.getAttribute(JspHelper.CURRENT_CONF); + long trashInterval = + conf.getLong(FS_TRASH_INTERVAL_KEY, FS_TRASH_INTERVAL_DEFAULT); + if (trashInterval > 0 && !skipTrash.getValue()) { +LOG.info("{} is {} , trying to archive {} 
instead of removing", +FS_TRASH_INTERVAL_KEY, trashInterval, fullpath); +org.apache.hadoop.fs.Path path = +new org.apache.hadoop.fs.Path(fullpath); +boolean movedToTrash = Trash.moveToAppropriateTrash( +FileSystem.get(conf), path, conf); +if (movedToTrash) { + final String js = JsonUtil.toJsonString("boolean", true); + return Response.ok(js).type(MediaType.APPLICATION_JSON).build(); +} +// Same is the behavior with Delete shell command. +// If moveToAppropriateTrash() returns false, file deletion +// is attempted rather than throwing Error. +LOG.error("Could not move {} to Trash, attempting removal", fullpath); Review comment: Same as above
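The delete flow quoted in the review above can be summarized as: when trash is enabled (positive trash interval) and the caller did not pass skiptrash, try to archive the path in trash first; if that move fails, fall back to a plain delete, mirroring the `hdfs dfs -rm` shell behavior. A minimal sketch of that control flow, where the `Fs` interface is a hypothetical stand-in for Hadoop's `FileSystem`/`Trash` API rather than the real classes:

```java
// Sketch of the WebHDFS delete-with-trash fallback, assuming a simplified
// filesystem interface (hypothetical, not Hadoop's actual API).
public class TrashFallbackSketch {

    interface Fs {
        boolean moveToTrash(String path);        // false when trash cannot take it
        boolean delete(String path, boolean recursive);
    }

    /** Returns true when the path is gone, whether archived in trash or deleted. */
    static boolean deleteWithTrash(Fs fs, String path, long trashIntervalMinutes,
                                   boolean skipTrash, boolean recursive) {
        if (trashIntervalMinutes > 0 && !skipTrash) {
            if (fs.moveToTrash(path)) {
                return true;                     // archived under the user's trash dir
            }
            // Move to trash failed: same as the Delete shell command, attempt
            // direct removal instead of raising an error (log and fall through).
        }
        return fs.delete(path, recursive);
    }

    public static void main(String[] args) {
        Fs noTrash = new Fs() {
            public boolean moveToTrash(String p) { return false; }
            public boolean delete(String p, boolean r) { return true; }
        };
        // Trash disabled (interval 0): goes straight to delete.
        System.out.println(deleteWithTrash(noTrash, "/tmp/a", 0, false, false));
    }
}
```

The review discussion above is about the log level of the fall-through branch, which only fires when the trash move was attempted and failed.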
[jira] [Updated] (HADOOP-17675) LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
[ https://issues.apache.org/jira/browse/HADOOP-17675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Mate updated HADOOP-17675: Description: Using LdapGroupsMapping with SSL enabled causes ClassNotFoundException when it is called through native threads, such as Apache Impala does. When a thread is attached to the VM, the currentThread's context classloader is null, so when jndi internally tries to use the current thread's context classloader to load the socket factory implementation, the Class.forname(String, boolean, ClassLoader) method gets a null as the loader uses the bootstrap classloader. Meanwhile the LdapGroupsMapping class and the SslSocketFactory defined in it is loaded by the application classloader from its classpath. As the bootstrap classloader does not have hadoop-common in its classpath, when a native thread tries to use/load the LdapGroupsMapping class it can't because the bootstrap loader can't load anything from hadoop-common. The correct solution seems to be to set the currentThread's context classloader to the classloader of LdapGroupsMapping class before initializing the jndi internals, and then reset to the original value after, with that we can ensure that the behaviour of other things does not change, but this failure can be avoided as well. Attached the complete stacktrace to this Jira. was: Using LdapGroupsMapping with SSL enabled causes ClassNotFoundException when the it is called through native threads, such as Apache Impala does. When a thread is attached to the VM, the currentThread's context classloader is null, so when jndi internally tries to use the current thread's context classloader to load the socket factory implementation, the Class.forname(String, boolean, ClassLoader) method gets a null as the loader, and uses the bootstrap classloader. Meanwhile the LdapGroupsMapping class and the SslSocketFactory defined in it is loaded by the application classloader from its classpath. 
As the bootstrap classloader does not have hadoop-common in its classpath, when a native thread tries to use/load the LdapGroupsMapping class it can't because the bootstrap loader can't load anything from hadoop-common. The correct solution seems to be to set the currentThread's context classloader to the classloader of LdapGroupsMapping class before initializing the jndi internals, and then reset to the original value after, with that we can ensure that the behaviour of other things does not change, but this failure can be avoided as well. Attached the complete stacktrace to this Jira. > LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException > - > > Key: HADOOP-17675 > URL: https://issues.apache.org/jira/browse/HADOOP-17675 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2.2 >Reporter: Tamas Mate >Assignee: István Fajth >Priority: Major > Attachments: stacktrace.txt > > > Using LdapGroupsMapping with SSL enabled causes ClassNotFoundException when > it is called through native threads, such as Apache Impala does. > When a thread is attached to the VM, the currentThread's context classloader > is null, so when jndi internally tries to use the current thread's context > classloader to load the socket factory implementation, the > Class.forname(String, boolean, ClassLoader) method gets a null as the loader > uses the bootstrap classloader. > Meanwhile the LdapGroupsMapping class and the SslSocketFactory defined in it > is loaded by the application classloader from its classpath. > As the bootstrap classloader does not have hadoop-common in its classpath, > when a native thread tries to use/load the LdapGroupsMapping class it can't > because the bootstrap loader can't load anything from hadoop-common. 
The > correct solution seems to be to set the currentThread's context classloader > to the classloader of LdapGroupsMapping class before initializing the jndi > internals, and then reset to the original value after, with that we can > ensure that the behaviour of other things does not change, but this failure > can be avoided as well. > Attached the complete stacktrace to this Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
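The fix HADOOP-17675 proposes — set the current thread's context classloader to the loader of the mapping class before initializing the JNDI internals, then restore the original — follows a common save/swap/restore pattern. A small illustrative sketch, where the `Supplier` stands in for the real JNDI initialization call (e.g. constructing the directory context); the names are illustrative, not the actual `LdapGroupsMapping` code:

```java
import java.util.function.Supplier;

// Save/swap/restore of the thread context classloader, so that JNDI's
// Class.forName lookups can see classes (e.g. an inner SSL socket factory)
// loaded by the application classloader, even on a natively attached thread
// whose context classloader is null.
public class ContextClassLoaderSwap {

    static <T> T runWithLoaderOf(Class<?> anchor, Supplier<T> jndiInit) {
        Thread current = Thread.currentThread();
        ClassLoader saved = current.getContextClassLoader();
        try {
            current.setContextClassLoader(anchor.getClassLoader());
            return jndiInit.get();               // e.g. the directory-context setup
        } finally {
            current.setContextClassLoader(saved); // restore even on failure
        }
    }

    public static void main(String[] args) {
        ClassLoader before = Thread.currentThread().getContextClassLoader();
        ClassLoader inside = runWithLoaderOf(ContextClassLoaderSwap.class,
                () -> Thread.currentThread().getContextClassLoader());
        System.out.println(inside == ContextClassLoaderSwap.class.getClassLoader());
        System.out.println(before == Thread.currentThread().getContextClassLoader());
    }
}
```

Restoring the saved loader in `finally` is what guarantees "the behaviour of other things does not change", as the issue description puts it.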
[jira] [Work logged] (HADOOP-17657) SequeneFile.Writer should implement StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-17657?focusedWorklogId=590931=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590931 ] ASF GitHub Bot logged work on HADOOP-17657: --- Author: ASF GitHub Bot Created on: 29/Apr/21 10:58 Start Date: 29/Apr/21 10:58 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2949: URL: https://github.com/apache/hadoop/pull/2949#issuecomment-829134626 I do like the new test, now there's just the little detail that it's not quite working yet ``` [INFO] Running org.apache.hadoop.io.TestSequenceFile [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.818 s <<< FAILURE! - in org.apache.hadoop.io.TestSequenceFile [ERROR] testSequenceFileWriter(org.apache.hadoop.io.TestSequenceFile) Time elapsed: 0.614 s <<< ERROR! java.io.IOException: wrong key class: org.apache.hadoop.io.LongWritable is not class org.apache.hadoop.io.NullWritable at org.apache.hadoop.io.SequenceFile$RecordCompressWriter.append(SequenceFile.java:1508) at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1425) at org.apache.hadoop.io.TestSequenceFile.testSequenceFileWriter(TestSequenceFile.java:745) ``` + minor checkstyles ``` ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java:737: Path p = new Path(GenericTestUtils.getTempPath("testSequenceFileWriter.seq"));: Line is longer than 80 characters (found 82). [LineLength] ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java:745: writer.append(key,value);:24: ',' is not followed by whitespace. [WhitespaceAfter] ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 590931) Time Spent: 3h (was: 2h 50m) > SequeneFile.Writer should implement StreamCapabilities > -- > > Key: HADOOP-17657 > URL: https://issues.apache.org/jira/browse/HADOOP-17657 > Project: Hadoop Common > Issue Type: Bug >Reporter: Kishen Das >Assignee: Kishen Das >Priority: Major > Labels: pull-request-available > Time Spent: 3h > Remaining Estimate: 0h > > Following exception is thrown whenever we invoke ProtoMessageWriter.hflush on > S3 from Tez, which internally calls > org.apache.hadoop.io.SequenceFile$Writer.hflush -> org.apache.hadoop.fs.FS > DataOutputStream.hflush -> S3ABlockOutputStream.hflush which is not > implemented and throws java.lang.UnsupportedOperationException. > bdffe22d96ae [mdc@18060 class="yarn.YarnUncaughtExceptionHandler" > level="ERROR" thread="HistoryEventHandlingThread"] Thread > Thread[HistoryEventHandlingThread, 5,main] threw an > Exception.^Mjava.lang.UnsupportedOperationException: S3A streams are not > Syncable^M at > org.apache.hadoop.fs.s3a.S3ABlockOutputStream.hflush(S3ABlockOutputStream.java:657)^M > at org.apache.hadoop.fs.FS > DataOutputStream.hflush(FSDataOutputStream.java:136)^M at > org.apache.hadoop.io.SequenceFile$Writer.hflush(SequenceFile.java:1367)^M at > org.apache.tez.dag.history.logging.proto.ProtoMessageWriter.hflush(ProtoMessageWr > iter.java:64)^M at > org.apache.tez.dag.history.logging.proto.ProtoHistoryLoggingService.finishCurrentDag(ProtoHistoryLoggingService.java:239)^M > at org.apache.tez.dag.history.logging.proto.ProtoHistoryLoggingService.han > dleEvent(ProtoHistoryLoggingService.java:198)^M at > org.apache.tez.dag.history.logging.proto.ProtoHistoryLoggingService.loop(ProtoHistoryLoggingService.java:153)^M > at java.lang.Thread.run(Thread.java:748)^M > In order to fix this issue we should implement StreamCapabilities in > SequenceFile.Writer. 
Also, we should fall back to flush(), if hflush() is not > supported.
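The behavior HADOOP-17657 asks for — probe the underlying stream via a `StreamCapabilities`-style `hasCapability()` check and fall back to `flush()` when `hflush()` is unsupported (as with S3A streams) — can be sketched as below. The `Out` interface is a hypothetical stand-in for the writer's output stream, not Hadoop's actual API:

```java
// Capability probe with graceful fallback: sync durably when the stream
// supports hflush, otherwise do a best-effort flush instead of throwing
// UnsupportedOperationException up to the caller.
public class HflushFallbackSketch {

    interface Out {
        void flush();
        default boolean hasCapability(String capability) { return false; }
        default void hflush() {
            throw new UnsupportedOperationException("stream is not Syncable");
        }
    }

    /** Returns which operation was actually performed. */
    static String syncIfPossible(Out out) {
        if (out.hasCapability("hflush")) {
            out.hflush();    // durable sync on streams that support it
            return "hflush";
        }
        out.flush();         // fallback, e.g. for S3A block output streams
        return "flush";
    }

    public static void main(String[] args) {
        Out s3aLike = () -> { };                      // supports flush() only
        System.out.println(syncIfPossible(s3aLike));  // prints "flush"
    }
}
```

This is the pattern that lets callers such as Tez's ProtoMessageWriter avoid the `UnsupportedOperationException` quoted in the issue when the file lives on S3.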
[jira] [Work logged] (HADOOP-17657) SequeneFile.Writer should implement StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-17657?focusedWorklogId=590929&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590929 ]

ASF GitHub Bot logged work on HADOOP-17657:
---
Author: ASF GitHub Bot
Created on: 29/Apr/21 10:56
Start Date: 29/Apr/21 10:56
Worklog Time Spent: 10m

Work Description: hadoop-yetus removed a comment on pull request #2949: URL: https://github.com/apache/hadoop/pull/2949#issuecomment-827158640

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 34s |  | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s |  | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s |  | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s |  | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s |  | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 34m 24s |  | trunk passed |
| +1 :green_heart: | compile | 20m 33s |  | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 19m 28s |  | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 8s |  | trunk passed |
| +1 :green_heart: | mvnsite | 1m 36s |  | trunk passed |
| +1 :green_heart: | javadoc | 1m 8s |  | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 36s |  | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 23s |  | trunk passed |
| +1 :green_heart: | shadedclient | 15m 36s |  | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 53s |  | the patch passed |
| +1 :green_heart: | compile | 20m 4s |  | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| -1 :x: | javac | 20m 4s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/4/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 4 new + 1940 unchanged - 0 fixed = 1944 total (was 1940) |
| +1 :green_heart: | compile | 19m 31s |  | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| -1 :x: | javac | 19m 31s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/4/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 4 new + 1835 unchanged - 0 fixed = 1839 total (was 1835) |
| +1 :green_heart: | blanks | 0m 0s |  | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 22s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/4/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 4 new + 352 unchanged - 0 fixed = 356 total (was 352) |
| +1 :green_heart: | mvnsite | 1m 41s |  | the patch passed |
| +1 :green_heart: | javadoc | 1m 4s |  | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 41s |  | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 51s |  | the patch passed |
| +1 :green_heart: | shadedclient | 19m 30s |  | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 19m 8s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 1m 3s |  | The patch does not generate ASF License warnings. |
|  |  | 187m 7s |  |  |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.metrics2.source.TestJvmMetrics |
|  | hadoop.ipc.TestCallQueueManager |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2949/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2949 |
| JIRA Issue | HADOOP-17657 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
[jira] [Work logged] (HADOOP-17657) SequenceFile.Writer should implement StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-17657?focusedWorklogId=590928&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590928 ]

ASF GitHub Bot logged work on HADOOP-17657:
---
Author: ASF GitHub Bot
Created on: 29/Apr/21 10:56
Start Date: 29/Apr/21 10:56
Worklog Time Spent: 10m

Work Description: hadoop-yetus removed a comment on pull request #2949: URL: https://github.com/apache/hadoop/pull/2949#issuecomment-826074516

Issue Time Tracking
---
Worklog Id: (was: 590928)
Time Spent: 2h 40m (was: 2.5h)

> SequenceFile.Writer should implement StreamCapabilities
> --
>
> Key: HADOOP-17657
> URL: https://issues.apache.org/jira/browse/HADOOP-17657
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Kishen Das
> Assignee: Kishen Das
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2h 40m
> Remaining Estimate: 0h
>
> The following exception is thrown whenever we invoke ProtoMessageWriter.hflush on S3 from Tez, which internally calls org.apache.hadoop.io.SequenceFile$Writer.hflush -> org.apache.hadoop.fs.FSDataOutputStream.hflush -> S3ABlockOutputStream.hflush, which is not implemented and throws java.lang.UnsupportedOperationException.
> bdffe22d96ae [mdc@18060 class="yarn.YarnUncaughtExceptionHandler" level="ERROR" thread="HistoryEventHandlingThread"] Thread Thread[HistoryEventHandlingThread,5,main] threw an Exception.
> java.lang.UnsupportedOperationException: S3A streams are not Syncable
> 	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.hflush(S3ABlockOutputStream.java:657)
> 	at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:136)
> 	at org.apache.hadoop.io.SequenceFile$Writer.hflush(SequenceFile.java:1367)
> 	at org.apache.tez.dag.history.logging.proto.ProtoMessageWriter.hflush(ProtoMessageWriter.java:64)
> 	at org.apache.tez.dag.history.logging.proto.ProtoHistoryLoggingService.finishCurrentDag(ProtoHistoryLoggingService.java:239)
> 	at org.apache.tez.dag.history.logging.proto.ProtoHistoryLoggingService.handleEvent(ProtoHistoryLoggingService.java:198)
> 	at org.apache.tez.dag.history.logging.proto.ProtoHistoryLoggingService.loop(ProtoHistoryLoggingService.java:153)
> 	at java.lang.Thread.run(Thread.java:748)
>
> In order to fix this issue we should implement StreamCapabilities in SequenceFile.Writer. Also, we should fall back to flush(), if hflush() is not supported.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
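The fix the issue proposes — probe the stream's capabilities and fall back to flush() when hflush() is unsupported — can be sketched without any Hadoop dependencies. In this sketch, `StreamCapabilities` is a local stand-in for `org.apache.hadoop.fs.StreamCapabilities`, `NonSyncableStream` plays the role of `S3ABlockOutputStream`, and the class and method names are illustrative assumptions, not the actual patch.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch of the proposed fix: before calling hflush(), ask the wrapped
// stream whether it actually supports it, and fall back to flush() otherwise.
public class HflushFallback {

    // Local stand-in for org.apache.hadoop.fs.StreamCapabilities.
    interface StreamCapabilities {
        boolean hasCapability(String capability);
    }

    // Behaves like S3ABlockOutputStream: flushable, but not Syncable.
    static final class NonSyncableStream extends ByteArrayOutputStream
            implements StreamCapabilities {
        @Override
        public boolean hasCapability(String capability) {
            return false; // neither "hflush" nor "hsync"
        }
    }

    // What a writer can do instead of calling hflush() unconditionally.
    public static String flushSafely(OutputStream out) throws IOException {
        if (out instanceof StreamCapabilities
                && ((StreamCapabilities) out).hasCapability("hflush")) {
            return "hflush"; // a real Hadoop stream would call out.hflush() here
        }
        out.flush(); // fallback: no UnsupportedOperationException
        return "flush";
    }

    /** Demonstrates the fallback path on a non-syncable stream. */
    public static String demo() {
        try {
            return flushSafely(new NonSyncableStream());
        } catch (IOException e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "flush"
    }
}
```

With this shape, callers such as Tez's ProtoMessageWriter get a plain flush() on S3 instead of an UnsupportedOperationException.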
[jira] [Updated] (HADOOP-17675) LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
[ https://issues.apache.org/jira/browse/HADOOP-17675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Mate updated HADOOP-17675:
Attachment: stacktrace.txt

> LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
> -
>
> Key: HADOOP-17675
> URL: https://issues.apache.org/jira/browse/HADOOP-17675
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Affects Versions: 3.2.2
> Reporter: Tamas Mate
> Assignee: István Fajth
> Priority: Major
> Attachments: stacktrace.txt
>
> Using LdapGroupsMapping with SSL enabled causes a ClassNotFoundException when it is called through native threads, as Apache Impala does.
> When a thread is attached to the VM, the current thread's context classloader is null, so when JNDI internally tries to use the current thread's context classloader to load the socket factory implementation, the Class.forName(String, boolean, ClassLoader) method gets null as the loader and uses the bootstrap classloader. Meanwhile, the LdapGroupsMapping class and the SslSocketFactory defined in it are loaded by the application classloader from its classpath.
> As the bootstrap classloader does not have hadoop-common on its classpath, when a native thread tries to use/load the LdapGroupsMapping class it can't, because the bootstrap loader can't load anything from hadoop-common. The correct solution seems to be to set the current thread's context classloader to the classloader of the LdapGroupsMapping class before initializing the JNDI internals, and then reset it to the original value afterwards; with that we can ensure that the behaviour of other code does not change, while this failure is avoided as well.
> Attached the complete stacktrace to this Jira.
[jira] [Updated] (HADOOP-17675) LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
[ https://issues.apache.org/jira/browse/HADOOP-17675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

István Fajth updated HADOOP-17675:
--
Description: Using LdapGroupsMapping with SSL enabled causes a ClassNotFoundException when it is called through native threads, as Apache Impala does.

When a thread is attached to the VM, the current thread's context classloader is null, so when JNDI internally tries to use the current thread's context classloader to load the socket factory implementation, the Class.forName(String, boolean, ClassLoader) method gets null as the loader and uses the bootstrap classloader. Meanwhile, the LdapGroupsMapping class and the SslSocketFactory defined in it are loaded by the application classloader from its classpath.

As the bootstrap classloader does not have hadoop-common on its classpath, when a native thread tries to use/load the LdapGroupsMapping class it can't, because the bootstrap loader can't load anything from hadoop-common. The correct solution seems to be to set the current thread's context classloader to the classloader of the LdapGroupsMapping class before initializing the JNDI internals, and then reset it to the original value afterwards; with that we can ensure that the behaviour of other code does not change, while this failure is avoided as well.

Attached the complete stacktrace to this Jira.

was: Using LdapGroupsMapping with SSL enabled causes a ClassNotFoundException when it is called through native threads, as Apache Impala does. When a thread is attached to the VM, the context classloader is the bootstrap loader, while the LdapGroupsMapping class is loaded by the application classloader from its classpath. Therefore, when a native thread tries to use/load the LdapGroupsMapping class it can't find it, because it was loaded by another classloader, and it can't load it either, because the jar is not on the bootstrap classloader's classpath. Attached the complete stacktrace to this Jira.

> LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
> -
>
> Key: HADOOP-17675
> URL: https://issues.apache.org/jira/browse/HADOOP-17675
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Affects Versions: 3.2.2
> Reporter: Tamas Mate
> Assignee: István Fajth
> Priority: Major
>
> Using LdapGroupsMapping with SSL enabled causes a ClassNotFoundException when it is called through native threads, as Apache Impala does.
> When a thread is attached to the VM, the current thread's context classloader is null, so when JNDI internally tries to use the current thread's context classloader to load the socket factory implementation, the Class.forName(String, boolean, ClassLoader) method gets null as the loader and uses the bootstrap classloader. Meanwhile, the LdapGroupsMapping class and the SslSocketFactory defined in it are loaded by the application classloader from its classpath.
> As the bootstrap classloader does not have hadoop-common on its classpath, when a native thread tries to use/load the LdapGroupsMapping class it can't, because the bootstrap loader can't load anything from hadoop-common. The correct solution seems to be to set the current thread's context classloader to the classloader of the LdapGroupsMapping class before initializing the JNDI internals, and then reset it to the original value afterwards; with that we can ensure that the behaviour of other code does not change, while this failure is avoided as well.
> Attached the complete stacktrace to this Jira.
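The save/set/restore dance the updated description proposes can be sketched with plain JDK calls. The helper name below is hypothetical, not the actual patch; the point is that the original context classloader (which is null for natively attached threads) is always restored in a finally block.

```java
import java.util.function.Supplier;

// Sketch of the fix described above: temporarily set the current thread's
// context classloader to the classloader that loaded this class (the one
// JNDI needs to find the socket-factory implementation), then restore the
// previous loader so other code is unaffected.
public class ContextClassLoaderFix {

    public static <T> T withOwnClassLoader(Supplier<T> action) {
        Thread t = Thread.currentThread();
        ClassLoader previous = t.getContextClassLoader(); // may be null
        t.setContextClassLoader(ContextClassLoaderFix.class.getClassLoader());
        try {
            return action.get(); // e.g. the JNDI initialization would run here
        } finally {
            t.setContextClassLoader(previous); // restore, even on exception
        }
    }

    public static void main(String[] args) {
        ClassLoader before = Thread.currentThread().getContextClassLoader();
        ClassLoader inside = withOwnClassLoader(
                () -> Thread.currentThread().getContextClassLoader());
        // Inside the action the context loader is this class's own loader...
        System.out.println(inside == ContextClassLoaderFix.class.getClassLoader());
        // ...and afterwards the original loader is back in place.
        System.out.println(Thread.currentThread().getContextClassLoader() == before);
    }
}
```

Because the loader of `LdapGroupsMapping` can see hadoop-common, a lookup of `LdapGroupsMapping$LdapSslSocketFactory` made inside the action succeeds even when the calling thread arrived with a null context classloader.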
[jira] [Assigned] (HADOOP-17675) LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
[ https://issues.apache.org/jira/browse/HADOOP-17675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

István Fajth reassigned HADOOP-17675:
-
Assignee: István Fajth

> LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
> -
>
> Key: HADOOP-17675
> URL: https://issues.apache.org/jira/browse/HADOOP-17675
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Affects Versions: 3.2.2
> Reporter: Tamas Mate
> Assignee: István Fajth
> Priority: Major
>
> Using LdapGroupsMapping with SSL enabled causes a ClassNotFoundException when it is called through native threads, as Apache Impala does.
> When a thread is attached to the VM, the context classloader is the bootstrap loader, while the LdapGroupsMapping class is loaded by the application classloader from its classpath. Therefore, when a native thread tries to use/load the LdapGroupsMapping class it can't find it, because it was loaded by another classloader, and it can't load it either, because the jar is not on the bootstrap classloader's classpath.
> Attached the complete stacktrace to this Jira.
[jira] [Created] (HADOOP-17675) LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
Tamas Mate created HADOOP-17675:
---
Summary: LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
Key: HADOOP-17675
URL: https://issues.apache.org/jira/browse/HADOOP-17675
Project: Hadoop Common
Issue Type: Improvement
Components: common
Affects Versions: 3.2.2
Reporter: Tamas Mate

Using LdapGroupsMapping with SSL enabled causes a ClassNotFoundException when it is called through native threads, as Apache Impala does.

When a thread is attached to the VM, the context classloader is the bootstrap loader, while the LdapGroupsMapping class is loaded by the application classloader from its classpath. Therefore, when a native thread tries to use/load the LdapGroupsMapping class it can't find it, because it was loaded by another classloader, and it can't load it either, because the jar is not on the bootstrap classloader's classpath.

Attached the complete stacktrace to this Jira.
[GitHub] [hadoop] bshashikant commented on pull request #2958: HDFS-15997. Implement dfsadmin -provisionSnapshotTrash -all
bshashikant commented on pull request #2958: URL: https://github.com/apache/hadoop/pull/2958#issuecomment-829061453

@smengcl , please check the checkstyle and other failures if any.
[jira] [Updated] (HADOOP-16989) Update JaegerTracing
[ https://issues.apache.org/jira/browse/HADOOP-16989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-16989:
-
Target Version/s: thirdparty-1.2.0 (was: thirdparty-1.1.0)

> Update JaegerTracing
>
> Key: HADOOP-16989
> URL: https://issues.apache.org/jira/browse/HADOOP-16989
> Project: Hadoop Common
> Issue Type: Task
> Components: hadoop-thirdparty
> Affects Versions: thirdparty-1.0.0
> Reporter: Wei-Chiu Chuang
> Priority: Major
>
> We currently use JaegerTracing 0.34.0. The latest is 1.2.0. We are several versions behind and should update. Note this update requires the latest version of OpenTracing and has several breaking changes.
[GitHub] [hadoop] virajjasani commented on a change in pull request #2927: HDFS-15982. Deleted data using HTTP API should be saved to the trash
virajjasani commented on a change in pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#discussion_r622855584

## File path: hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java

```
@@ -743,6 +748,15 @@ public FSDelete(String path, boolean recursive) {
    */
   @Override
   public JSONObject execute(FileSystem fs) throws IOException {
+    if (!skipTrash) {
+      boolean movedToTrash = Trash.moveToAppropriateTrash(fs, path,
+          fs.getConf());
+      if (movedToTrash) {
+        HttpFSServerWebApp.getMetrics().incrOpsDelete();
+        return toJSON(
+            StringUtils.toLowerCase(HttpFSFileSystem.DELETE_JSON), true);
+      }
```

Review comment: Sure thing. I put a comment on `NamenodeWebHdfsMethods` but somehow missed it here.

## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteSkipTrashParam.java

```
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.web.resources;
+
+/**
+ * SkipTrash param to be used by DELETE query.
+ */
+public class DeleteSkipTrashParam extends BooleanParam {
+
+  public static final String NAME = "skiptrash";
+  public static final String DEFAULT = FALSE;
```

Review comment: Thanks for the suggestion. In fact, there was a similar discussion on the Jira as well, and so far the consensus was to keep it `false` by default. Because of that, this will be an incompatible change w.r.t. the default behaviour of the DELETE API. Hence, the decision was to mark the Jira an incompatible change, and we can still go ahead with this new behaviour starting with the 3.3.1/3.4.0 releases. However, I am fine changing this to `true` as well if that's where the majority would like to go. FYI @jojochuang
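The compatibility question in the review comes down to how a boolean query parameter with a default behaves for clients that never send it. The sketch below is plain Java, not HttpFS code; `parse` is an illustrative stand-in for the `BooleanParam` pattern that `DeleteSkipTrashParam` follows with `NAME = "skiptrash"` and `DEFAULT = "false"`.

```java
// Sketch of a WebHDFS-style boolean query parameter with a default value.
public class SkipTrashParamDemo {

    /** Parse a boolean query parameter, applying the default when absent. */
    public static boolean parse(String rawValue, String defaultValue) {
        String v = (rawValue == null || rawValue.isEmpty()) ? defaultValue : rawValue;
        if (!v.equals("true") && !v.equals("false")) {
            throw new IllegalArgumentException("Invalid value for boolean: " + v);
        }
        return Boolean.parseBoolean(v);
    }

    public static void main(String[] args) {
        // With DEFAULT = "false", an old client that never sends ?skiptrash
        // now gets skipTrash == false, i.e. trash-enabled deletes -- the
        // change in default behaviour discussed in the review.
        System.out.println(parse(null, "false"));   // absent  -> move to trash
        System.out.println(parse("true", "false")); // "true"  -> permanent delete
    }
}
```

Flipping `DEFAULT` to `"true"` would preserve the old permanent-delete behaviour for unaware clients, which is exactly the trade-off the reviewers are weighing.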
[GitHub] [hadoop] hadoop-yetus commented on pull request #2954: HDFS-15561. RBF: Remove NPE when local namenode is not configured
hadoop-yetus commented on pull request #2954: URL: https://github.com/apache/hadoop/pull/2954#issuecomment-828994479

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 39s |  | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s |  | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s |  | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s |  | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s |  | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 34m 29s |  | trunk passed |
| +1 :green_heart: | compile | 0m 43s |  | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 0m 37s |  | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 26s |  | trunk passed |
| +1 :green_heart: | mvnsite | 0m 43s |  | trunk passed |
| +1 :green_heart: | javadoc | 0m 41s |  | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 53s |  | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 19s |  | trunk passed |
| +1 :green_heart: | shadedclient | 14m 16s |  | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 33s |  | the patch passed |
| +1 :green_heart: | compile | 0m 33s |  | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 0m 33s |  | the patch passed |
| +1 :green_heart: | compile | 0m 29s |  | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 0m 29s |  | the patch passed |
| +1 :green_heart: | blanks | 0m 0s |  | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 18s |  | the patch passed |
| +1 :green_heart: | mvnsite | 0m 31s |  | the patch passed |
| +1 :green_heart: | javadoc | 0m 31s |  | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 47s |  | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 16s |  | the patch passed |
| +1 :green_heart: | shadedclient | 14m 17s |  | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 18m 31s |  | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s |  | The patch does not generate ASF License warnings. |
|  |  | 94m 37s |  |  |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2954/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2954 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 25436df36b00 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 10230955dab780cb961459ab66cdf9e40258c1bf |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2954/4/testReport/ |
| Max. process+thread count | 2395 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2954/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] jojochuang merged pull request #2955: HDFS-15624. fix the function of setting quota by storage type
jojochuang merged pull request #2955: URL: https://github.com/apache/hadoop/pull/2955
[GitHub] [hadoop] jojochuang commented on pull request #2955: HDFS-15624. fix the function of setting quota by storage type
jojochuang commented on pull request #2955: URL: https://github.com/apache/hadoop/pull/2955#issuecomment-828985536

Failed tests do not reproduce locally. Merging the PR.
[jira] [Resolved] (HADOOP-11245) Update NFS gateway to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-11245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang resolved HADOOP-11245.
--
Fix Version/s: 3.4.0
Resolution: Fixed

We're finally able to put a closure on this. I'll leave it in trunk for a while, doing more tests/checks before backport.

> Update NFS gateway to use Netty4
>
> Key: HADOOP-11245
> URL: https://issues.apache.org/jira/browse/HADOOP-11245
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: nfs
> Reporter: Brandon Li
> Assignee: Wei-Chiu Chuang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Time Spent: 3h 20m
> Remaining Estimate: 0h
[jira] [Work logged] (HADOOP-11245) Update NFS gateway to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-11245?focusedWorklogId=590855&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590855 ]

ASF GitHub Bot logged work on HADOOP-11245:
---
Author: ASF GitHub Bot
Created on: 29/Apr/21 06:43
Start Date: 29/Apr/21 06:43
Worklog Time Spent: 10m

Work Description: jojochuang commented on pull request #2832: URL: https://github.com/apache/hadoop/pull/2832#issuecomment-82898

Thanks Nicholas!

Issue Time Tracking
---
Worklog Id: (was: 590855)
Time Spent: 3h 20m (was: 3h 10m)

> Update NFS gateway to use Netty4
>
> Key: HADOOP-11245
> URL: https://issues.apache.org/jira/browse/HADOOP-11245
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: nfs
> Reporter: Brandon Li
> Assignee: Wei-Chiu Chuang
> Priority: Major
> Labels: pull-request-available
> Time Spent: 3h 20m
> Remaining Estimate: 0h
[jira] [Work logged] (HADOOP-11245) Update NFS gateway to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-11245?focusedWorklogId=590854&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590854 ]

ASF GitHub Bot logged work on HADOOP-11245:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 29/Apr/21 06:43
            Start Date: 29/Apr/21 06:43
    Worklog Time Spent: 10m

Work Description: jojochuang merged pull request #2832:
URL: https://github.com/apache/hadoop/pull/2832

Issue Time Tracking
-------------------
    Worklog Id: (was: 590854)
    Time Spent: 3h 10m (was: 3h)
[jira] [Work logged] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs
[ https://issues.apache.org/jira/browse/HADOOP-17618?focusedWorklogId=590853&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590853 ]

ASF GitHub Bot logged work on HADOOP-17618:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 29/Apr/21 06:30
            Start Date: 29/Apr/21 06:30
    Worklog Time Spent: 10m

Work Description: sumangala-patki commented on pull request #2845:
URL: https://github.com/apache/hadoop/pull/2845#issuecomment-828973979

    Hi @steveloughran, thanks for the review. Have addressed the comments, please take a look. Thank you!

Issue Time Tracking
-------------------
    Worklog Id: (was: 590853)
    Time Spent: 7h 20m (was: 7h 10m)

> ABFS: Partially obfuscate SAS object IDs in Logs
> ------------------------------------------------
>
>                 Key: HADOOP-17618
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17618
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.1
>            Reporter: Sumangala Patki
>            Assignee: Sumangala Patki
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> Delegation SAS tokens are created using various parameters for specifying
> details such as permissions and validity. The requests are logged, along with
> values of all the query parameters. This change will partially mask values
> logged for the following object IDs representing the security principal:
> skoid, saoid, suoid
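The partial masking described in HADOOP-17618 (log only a prefix of the skoid/saoid/suoid query-parameter values instead of the full principal IDs) can be sketched roughly as below. This is a hypothetical illustration under stated assumptions, not the actual ABFS change: the class and method names, the four-character prefix length, and the `****` placeholder are all made up for the example.

```java
// Hypothetical sketch: partially mask SAS object-ID query parameters
// (skoid, saoid, suoid) before a request's query string reaches a log line.
// Names, prefix length, and placeholder are illustrative assumptions.
import java.util.Arrays;
import java.util.List;

public class SasLogMasker {

    // Query parameters whose values identify the security principal.
    static final List<String> MASKED_PARAMS = Arrays.asList("skoid", "saoid", "suoid");

    // Keep a short prefix of the value and replace the remainder with a placeholder,
    // so log lines stay correlatable without exposing the full object ID.
    static String maskValue(String value) {
        if (value == null || value.length() <= 4) {
            return value;
        }
        return value.substring(0, 4) + "****";
    }

    // Rewrite a query string, masking only the listed object-ID parameters
    // and leaving all other parameters untouched.
    static String maskQueryString(String query) {
        StringBuilder out = new StringBuilder();
        String[] pairs = query.split("&");
        for (int i = 0; i < pairs.length; i++) {
            String pair = pairs[i];
            int eq = pair.indexOf('=');
            if (eq >= 0 && MASKED_PARAMS.contains(pair.substring(0, eq))) {
                pair = pair.substring(0, eq + 1) + maskValue(pair.substring(eq + 1));
            }
            if (i > 0) {
                out.append('&');
            }
            out.append(pair);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String query = "skoid=a1b2c3d4-1111-2222-3333-444455556666&sv=2020-02-10&sp=r";
        // skoid is reduced to its first four characters; sv and sp are untouched
        System.out.println(maskQueryString(query));
        // → skoid=a1b2****&sv=2020-02-10&sp=r
    }
}
```

Masking only the value (not the parameter name) keeps the logged request shape intact for debugging while withholding most of the principal identifier, which is the trade-off the issue description asks for.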