[jira] [Commented] (HDFS-16259) Catch and re-throw sub-classes of AccessControlException thrown by any permission provider plugins (eg Ranger)
[ https://issues.apache.org/jira/browse/HDFS-16259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425951#comment-17425951 ]

Ayush Saxena commented on HDFS-16259:
-

Hmm, I think for this problem we can go ahead with changing to an ACE, and take up the unwrap stuff separately, isolating it to trunk only. We should try to keep the actual exception in the cause if possible, so it doesn't get lost.

> Catch and re-throw sub-classes of AccessControlException thrown by any
> permission provider plugins (eg Ranger)
> --
>
> Key: HDFS-16259
> URL: https://issues.apache.org/jira/browse/HDFS-16259
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Reporter: Stephen O'Donnell
> Assignee: Stephen O'Donnell
> Priority: Major
>
> When a permission provider plugin is enabled (e.g. Ranger) there are some
> scenarios where it can throw a sub-class of AccessControlException (e.g.
> RangerAccessControlException). If this exception is allowed to propagate up
> the stack, it can cause problems in the HDFS client when it unwraps the
> remote exception containing the AccessControlException sub-class.
> Ideally, we would make AccessControlException final so it cannot be
> sub-classed, but that would be a breaking change at this point. Therefore I
> believe the safest thing to do is to catch any AccessControlException that
> comes out of the permission enforcer plugin and re-throw an
> AccessControlException instead.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
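The catch-and-re-throw approach described above, with the original exception preserved as the cause, can be sketched roughly as follows. This is illustrative only, not the actual patch: the nested exception classes stand in for `org.apache.hadoop.security.AccessControlException` and a plugin subclass such as `RangerAccessControlException`, and the `PermissionEnforcer` interface and method names are hypothetical.

```java
// Sketch: flatten any AccessControlException subclass thrown by a
// permission-provider plugin into the base type, keeping the original
// exception as the cause so it is not lost.
public class AceRewrapSketch {

    // Stand-in for org.apache.hadoop.security.AccessControlException.
    static class AccessControlException extends Exception {
        AccessControlException(String msg) { super(msg); }
        AccessControlException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Stand-in for a plugin-specific subclass, e.g. RangerAccessControlException.
    static class PluginAccessControlException extends AccessControlException {
        PluginAccessControlException(String msg) { super(msg); }
    }

    // Hypothetical view of the permission enforcer plugin hook.
    interface PermissionEnforcer {
        void checkPermission(String path) throws AccessControlException;
    }

    // Wraps the enforcer call: any subclass is re-thrown as the base type,
    // with the subclass instance attached as the cause.
    static void checkWithRewrap(PermissionEnforcer enforcer, String path)
            throws AccessControlException {
        try {
            enforcer.checkPermission(path);
        } catch (AccessControlException ace) {
            if (ace.getClass() != AccessControlException.class) {
                throw new AccessControlException(ace.getMessage(), ace);
            }
            throw ace;
        }
    }

    public static void main(String[] args) {
        PermissionEnforcer denying = path -> {
            throw new PluginAccessControlException("denied: " + path);
        };
        try {
            checkWithRewrap(denying, "/data");
        } catch (AccessControlException ace) {
            // The client-facing type is the plain base class...
            System.out.println(ace.getClass().getSimpleName());
            // ...while the plugin exception survives as the cause.
            System.out.println(ace.getCause().getClass().getSimpleName());
        }
    }
}
```

The key point of the sketch is the `getClass()` check: a plain AccessControlException passes through untouched, so only subclasses pay the re-wrap cost.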
[jira] [Work logged] (HDFS-16263) Add CMakeLists for hdfs_allowSnapshot
[ https://issues.apache.org/jira/browse/HDFS-16263?focusedWorklogId=662488&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662488 ]

ASF GitHub Bot logged work on HDFS-16263:
-

Author: ASF GitHub Bot
Created on: 08/Oct/21 03:50
Start Date: 08/Oct/21 03:50
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3531:
URL: https://github.com/apache/hadoop/pull/3531#issuecomment-938321114

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:---:|---:|:---|:---:|:---:|
| +0 :ok: | reexec | 0m 52s | | Docker mode activated. |
_ Prechecks _
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
_ trunk Compile Tests _
| +1 :green_heart: | mvninstall | 20m 38s | | trunk passed |
| +1 :green_heart: | compile | 2m 48s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 2m 50s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | mvnsite | 0m 28s | | trunk passed |
| +1 :green_heart: | shadedclient | 45m 14s | | branch has no errors when building and testing our client artifacts. |
_ Patch Compile Tests _
| +1 :green_heart: | mvninstall | 0m 17s | | the patch passed |
| +1 :green_heart: | compile | 2m 39s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | cc | 2m 39s | | the patch passed |
| +1 :green_heart: | golang | 2m 39s | | the patch passed |
| +1 :green_heart: | javac | 2m 39s | | the patch passed |
| +1 :green_heart: | compile | 2m 44s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | cc | 2m 44s | | the patch passed |
| +1 :green_heart: | golang | 2m 44s | | the patch passed |
| +1 :green_heart: | javac | 2m 44s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 0m 20s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 13s | | patch has no errors when building and testing our client artifacts. |
_ Other Tests _
| +1 :green_heart: | unit | 31m 57s | | hadoop-hdfs-native-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | 105m 10s | | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3531 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang |
| uname | Linux d2dc607cad78 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 4c53fb6ad18631cb120a2c3885d0dab6d7828522 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/testReport/ |
| Max. process+thread count | 568 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 662488)
Time Spent: 1h 10m (was: 1h)

> Add CMakeLists for hdfs_allowSnapshot
> --
[jira] [Work logged] (HDFS-16263) Add CMakeLists for hdfs_allowSnapshot
[ https://issues.apache.org/jira/browse/HDFS-16263?focusedWorklogId=662459&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662459 ]

ASF GitHub Bot logged work on HDFS-16263:
-

Author: ASF GitHub Bot
Created on: 08/Oct/21 02:05
Start Date: 08/Oct/21 02:05
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3531:
URL: https://github.com/apache/hadoop/pull/3531#issuecomment-938280005

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:---:|---:|:---|:---:|:---:|
| +0 :ok: | reexec | 12m 27s | | Docker mode activated. |
_ Prechecks _
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
_ trunk Compile Tests _
| +1 :green_heart: | mvninstall | 28m 37s | | trunk passed |
| +1 :green_heart: | compile | 2m 46s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 27s | | trunk passed |
| +1 :green_heart: | shadedclient | 59m 18s | | branch has no errors when building and testing our client artifacts. |
_ Patch Compile Tests _
| +1 :green_heart: | mvninstall | 0m 17s | | the patch passed |
| +1 :green_heart: | compile | 2m 42s | | the patch passed |
| +1 :green_heart: | cc | 2m 42s | | the patch passed |
| +1 :green_heart: | golang | 2m 42s | | the patch passed |
| +1 :green_heart: | javac | 2m 42s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 0m 34s | | the patch passed |
| +1 :green_heart: | shadedclient | 27m 57s | | patch has no errors when building and testing our client artifacts. |
_ Other Tests _
| +1 :green_heart: | unit | 31m 33s | | hadoop-hdfs-native-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. |
| | | 137m 43s | | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3531 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang |
| uname | Linux 5d632eb9ad7d 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 4c53fb6ad18631cb120a2c3885d0dab6d7828522 |
| Default Java | Debian-11.0.12+7-post-Debian-2deb10u1 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/testReport/ |
| Max. process+thread count | 747 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/console |
| versions | git=2.20.1 maven=3.6.0 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

Issue Time Tracking
---
Worklog Id: (was: 662459)
Time Spent: 1h (was: 50m)

> Add CMakeLists for hdfs_allowSnapshot
> -
>
> Key: HDFS-16263
> URL: https://issues.apache.org/jira/browse/HDFS-16263
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client, libhdfs++, tools
> Affects Versions: 3.4.0
> Reporter: Gautham Banasandra
> Assignee: Gautham Banasandra
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> Currently, hdfs_allowSnapshot is built in its [parent directory's
> CMakeLists.txt|https://github.com/apache/hadoop/blob/95b537ee6a9ff3082c9ad9bc773f86fd4be04e50/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/CMakeLists.txt#L83-L89].
> Need to mo
[jira] [Commented] (HDFS-16243) The available disk space is less than the reserved space, and no log message is displayed
[ https://issues.apache.org/jira/browse/HDFS-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425911#comment-17425911 ]

Hadoop QA commented on HDFS-16243:
--

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 0m 0s | | Docker mode activated. |
| -1 | patch | 0m 12s | | HDFS-16243 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-16243 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13034674/HDFS-16243.0.patch |
| Console output | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/720/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

> The available disk space is less than the reserved space, and no log message
> is displayed
> -
>
> Key: HDFS-16243
> URL: https://issues.apache.org/jira/browse/HDFS-16243
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 2.7.2
> Reporter: Hualong Zhang
> Priority: Major
> Fix For: 2.7.2
>
> Attachments: HDFS-16243.0.patch
>
> When I submitted a task to the Hadoop test cluster, it reported "could only
> be replicated to 0 nodes instead of minReplication (=1)".
> I checked the NameNode and DataNode logs and did not find any error logs. It
> was not until I used dfsadmin -report that I saw the available capacity was 0
> and realized it might be a configuration problem.
> Checking the configuration, I found that the value of
> "dfs.datanode.du.reserved" is greater than the available disk space of HDFS,
> which caused this problem.
> It seems that there should be some warnings or errors in the log.
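The check proposed in HDFS-16243 above could look roughly like the following. This is a hypothetical sketch, not the actual DataNode code: the class, method names, and the string-returning "log" hook are all illustrative; a real patch would call `LOG.warn(...)` from the volume's available-space accounting.

```java
// Sketch: when dfs.datanode.du.reserved swallows all usable capacity on a
// volume, surface a warning instead of silently reporting 0 available bytes.
public class ReservedSpaceCheck {

    // Simplified form of the DataNode volume formula:
    // available = capacity - reserved - used, floored at zero.
    static long available(long capacityBytes, long reservedBytes, long usedBytes) {
        return Math.max(0, capacityBytes - reservedBytes - usedBytes);
    }

    // Returns a warning message when the reservation leaves no space,
    // null otherwise. A real implementation would log instead of returning.
    static String checkReservation(long capacityBytes, long reservedBytes, long usedBytes) {
        if (available(capacityBytes, reservedBytes, usedBytes) == 0) {
            return "dfs.datanode.du.reserved (" + reservedBytes
                + " bytes) leaves no available space on a volume with capacity "
                + capacityBytes + " bytes (" + usedBytes + " bytes used)";
        }
        return null;
    }

    public static void main(String[] args) {
        // 100 GB disk with 120 GB reserved: the misconfiguration from the report.
        String warning = checkReservation(100L << 30, 120L << 30, 0);
        System.out.println(warning != null ? warning : "ok");
    }
}
```

With such a warning in the DataNode log, the "could only be replicated to 0 nodes" symptom would point directly at the misconfigured reservation instead of requiring a dfsadmin -report investigation.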
[jira] [Work logged] (HDFS-16262) Async refresh of cached locations in DFSInputStream
[ https://issues.apache.org/jira/browse/HDFS-16262?focusedWorklogId=662456&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662456 ]

ASF GitHub Bot logged work on HDFS-16262:
-

Author: ASF GitHub Bot
Created on: 08/Oct/21 01:58
Start Date: 08/Oct/21 01:58
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3527:
URL: https://github.com/apache/hadoop/pull/3527#issuecomment-938277521

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:---:|---:|:---|:---:|:---:|
| +0 :ok: | reexec | 0m 42s | | Docker mode activated. |
_ Prechecks _
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
_ trunk Compile Tests _
| +0 :ok: | mvndep | 12m 54s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 38s | | trunk passed |
| +1 :green_heart: | compile | 5m 5s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 4m 39s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 14s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 21s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 6s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 5m 35s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 19s | | branch has no errors when building and testing our client artifacts. |
_ Patch Compile Tests _
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 2s | | the patch passed |
| +1 :green_heart: | compile | 4m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 4m 49s | | the patch passed |
| +1 :green_heart: | compile | 4m 33s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 4m 33s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 7s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3527/5/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 18 new + 105 unchanged - 0 fixed = 123 total (was 105) |
| +1 :green_heart: | mvnsite | 2m 5s | | the patch passed |
| +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 56s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 5m 45s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 53s | | patch has no errors when building and testing our client artifacts. |
_ Other Tests _
| +1 :green_heart: | unit | 2m 21s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | unit | 226m 9s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. |
| | | 350m 53s | | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3527/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3527 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
| uname | Linux 78e514d2a3be 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / ceba806587e37fac6b309c9e788eb6cb428f5f75 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20
[jira] [Updated] (HDFS-16243) The available disk space is less than the reserved space, and no log message is displayed
[ https://issues.apache.org/jira/browse/HDFS-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang updated HDFS-16243:
-
Attachment: HDFS-16243.0.patch
Fix Version/s: 2.7.2
Target Version/s: 2.7.2
Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-16243) The available disk space is less than the reserved space, and no log message is displayed
[ https://issues.apache.org/jira/browse/HDFS-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang updated HDFS-16243:
-
Attachment: (was: HDFS-16243.patch)
[jira] [Updated] (HDFS-16243) The available disk space is less than the reserved space, and no log message is displayed
[ https://issues.apache.org/jira/browse/HDFS-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang updated HDFS-16243:
-
Flags: Patch
[jira] [Work logged] (HDFS-16262) Async refresh of cached locations in DFSInputStream
[ https://issues.apache.org/jira/browse/HDFS-16262?focusedWorklogId=662448&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662448 ]

ASF GitHub Bot logged work on HDFS-16262:
-

Author: ASF GitHub Bot
Created on: 08/Oct/21 01:29
Start Date: 08/Oct/21 01:29
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3527:
URL: https://github.com/apache/hadoop/pull/3527#issuecomment-938268341

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:---:|---:|:---|:---:|:---:|
| +0 :ok: | reexec | 0m 50s | | Docker mode activated. |
_ Prechecks _
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
_ trunk Compile Tests _
| +0 :ok: | mvndep | 12m 57s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 21m 54s | | trunk passed |
| +1 :green_heart: | compile | 5m 26s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 5m 0s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 11s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 16s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 5s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 5m 38s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 24s | | branch has no errors when building and testing our client artifacts. |
_ Patch Compile Tests _
| +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 2s | | the patch passed |
| +1 :green_heart: | compile | 5m 27s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 5m 27s | | the patch passed |
| +1 :green_heart: | compile | 5m 0s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 5m 0s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 9s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3527/4/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 18 new + 105 unchanged - 0 fixed = 123 total (was 105) |
| +1 :green_heart: | mvnsite | 2m 11s | | the patch passed |
| +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 1m 26s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 55s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 5m 47s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 10s | | patch has no errors when building and testing our client artifacts. |
_ Other Tests _
| +1 :green_heart: | unit | 2m 22s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | unit | 236m 0s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 41s | | The patch does not generate ASF License warnings. |
| | | 365m 24s | | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3527/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3527 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
| uname | Linux 0f16589d8394 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / f932816ed48cada018af35ff6bd859847f4a1a0d |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.2
[jira] [Work logged] (HDFS-16263) Add CMakeLists for hdfs_allowSnapshot
[ https://issues.apache.org/jira/browse/HDFS-16263?focusedWorklogId=662425&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662425 ]

ASF GitHub Bot logged work on HDFS-16263:
-

Author: ASF GitHub Bot
Created on: 07/Oct/21 23:47
Start Date: 07/Oct/21 23:47
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3531:
URL: https://github.com/apache/hadoop/pull/3531#issuecomment-938231474

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:---:|---:|:---|:---:|:---:|
| +0 :ok: | reexec | 24m 31s | | Docker mode activated. |
_ Prechecks _
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
_ trunk Compile Tests _
| +1 :green_heart: | mvninstall | 24m 49s | | trunk passed |
| +1 :green_heart: | compile | 3m 11s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 41s | | trunk passed |
| +1 :green_heart: | shadedclient | 50m 17s | | branch has no errors when building and testing our client artifacts. |
_ Patch Compile Tests _
| +1 :green_heart: | mvninstall | 0m 25s | | the patch passed |
| +1 :green_heart: | compile | 2m 44s | | the patch passed |
| +1 :green_heart: | cc | 2m 44s | | the patch passed |
| +1 :green_heart: | golang | 2m 44s | | the patch passed |
| +1 :green_heart: | javac | 2m 44s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 3s | | patch has no errors when building and testing our client artifacts. |
_ Other Tests _
| +1 :green_heart: | unit | 40m 26s | | hadoop-hdfs-native-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. |
| | | 143m 10s | | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3531 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang |
| uname | Linux 0c1817a8aa55 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 4c53fb6ad18631cb120a2c3885d0dab6d7828522 |
| Default Java | Red Hat, Inc.-1.8.0_302-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/testReport/ |
| Max. process+thread count | 615 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/console |
| versions | git=2.27.0 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

Issue Time Tracking
---
Worklog Id: (was: 662425)
Time Spent: 50m (was: 40m)

> Add CMakeLists for hdfs_allowSnapshot
> -
>
> Key: HDFS-16263
> URL: https://issues.apache.org/jira/browse/HDFS-16263
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client, libhdfs++, tools
> Affects Versions: 3.4.0
> Reporter: Gautham Banasandra
> Assignee: Gautham Banasandra
> Priority: Major
> Labels: pull-request-available
> Time Spent: 50m
> Remaining Estimate: 0h
>
> Currently, hdfs_allowSnapshot is built in its [parent directory's
> CMakeLists.txt|https://github.com/apache/hadoop/blob/95b537ee6a9ff3082c9ad9bc773f86fd4be04e50/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/CMakeLists.txt#L83-L89].
> Need
[jira] [Commented] (HDFS-16261) Configurable grace period around deletion of invalidated blocks
[ https://issues.apache.org/jira/browse/HDFS-16261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425866#comment-17425866 ] Bryan Beaudreault commented on HDFS-16261: -- I've verified that setting "dfs.namenode.redundancy.interval.seconds" to, for example, 5 minutes and setting the DFSClient block location refresh to 10 seconds (https://issues.apache.org/jira/browse/HDFS-16262) results in zero ReplicaNotFoundExceptions even when all the primary replica for all blocks are shuffled to do different hosts. Enabling debug logging of the refresh thread, I can see that while blocks are being shuffled the refresh thread will trigger for files whose blocks have moved and then once all block moves are finished the refresh thread will settle down to 0 blocks refreshed. I'm going to dig more into the above comment tomorrow, but wanted to test the simple change just to prove the concept. That appears to have been a success. > Configurable grace period around deletion of invalidated blocks > --- > > Key: HDFS-16261 > URL: https://issues.apache.org/jira/browse/HDFS-16261 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Bryan Beaudreault >Assignee: Bryan Beaudreault >Priority: Major > > When a block is moved with REPLACE_BLOCK, the new location is recorded in the > NameNode and the NameNode instructs the old host to in invalidate the block > using DNA_INVALIDATE. As it stands today, this invalidation is async but > tends to happen relatively quickly. > I'm working on a feature for HBase which enables efficient healing of > locality through Balancer-style low level block moves (HBASE-26250). One > issue is that HBase tends to keep open long running DFSInputStreams and > moving blocks from under them causes lots of warns in the RegionServer and > increases long tail latencies due to the necessary retries in the DFSClient. > One way I'd like to fix this is to provide a configurable grace period on > async invalidations. 
This would give the DFSClient enough time > to refresh block locations before hitting any errors.
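The proposed grace period boils down to deferring the replica deletion instead of queueing it immediately. The sketch below is purely illustrative (the class name and method names are hypothetical, not part of HDFS): it shows the shape of the change, holding each invalidation on a scheduler for a configurable delay so clients have a window to refresh cached block locations.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only -- GracePeriodInvalidator is a hypothetical class,
// not actual HDFS code. Instead of running DNA_INVALIDATE work immediately,
// each invalidation is held for a configurable grace period, giving
// DFSClients time to refresh their cached block locations first.
public class GracePeriodInvalidator {
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    private final long gracePeriodMs;

    public GracePeriodInvalidator(long gracePeriodMs) {
        this.gracePeriodMs = gracePeriodMs;
    }

    // Defer the actual deletion of the replica by the grace period.
    public ScheduledFuture<?> scheduleInvalidate(Runnable deleteReplica) {
        return scheduler.schedule(deleteReplica, gracePeriodMs, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}
```

A grace period of a few client-refresh intervals (per the experiment above, e.g. well above the 10-second refresh setting) would be enough for the refresh thread to observe the move before the old replica disappears.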
[jira] [Work logged] (HDFS-16262) Async refresh of cached locations in DFSInputStream
[ https://issues.apache.org/jira/browse/HDFS-16262?focusedWorklogId=662350&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662350 ] ASF GitHub Bot logged work on HDFS-16262: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:37 Start Date: 07/Oct/21 23:37 Worklog Time Spent: 10m Work Description: bbeaudreault opened a new pull request #3527: URL: https://github.com/apache/hadoop/pull/3527 ### Description of PR Refactor refreshing of cached block locations so that it happens as part of an async process, with rate limiting. Add the ability to refresh only those DFSInputStreams that need it. This defaults to false to preserve backwards compatibility with the old behavior from https://issues.apache.org/jira/browse/HDFS-15119 See https://issues.apache.org/jira/browse/HDFS-16262 ### How was this patch tested? I added a new test class TestLocatedBlocksRefresher. I am in the process of deploying this internally on one of our hadoop-3.3 clusters, and will report back. ### For code changes: - [x] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 662350) Time Spent: 1h 20m (was: 1h 10m) > Async refresh of cached locations in DFSInputStream > --- > > Key: HDFS-16262 > URL: https://issues.apache.org/jira/browse/HDFS-16262 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bryan Beaudreault >Assignee: Bryan Beaudreault >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > HDFS-15119 added the ability to invalidate cached block locations in > DFSInputStream. As written, the feature will affect all DFSInputStreams > regardless of whether they need it or not. The invalidation also only applies > on the next request, so the next request will pay the cost of calling > openInfo before reading the data. > I'm working on a feature for HBase which enables efficient healing of > locality through Balancer-style low level block moves (HBASE-26250). I'd like > to utilize the idea started in HDFS-15119 in order to update DFSInputStreams > after blocks have been moved to local hosts. > I was considering using the feature as is, but some of our clusters are quite > large and I'm concerned about the impact on the namenode: > * We have some clusters with over 350k StoreFiles, so that'd be 350k > DFSInputStreams. With such a large number and very active usage, having the > refresh be in-line makes it too hard to ensure we don't DDOS the NameNode. > * Currently we need to pay the price of openInfo the next time a > DFSInputStream is invoked. Moving that async would minimize the latency hit. > Also, some StoreFiles might be far less frequently accessed, so they may live > on for a long time before ever refreshing. We'd like to be able to know that > all DFSInputStreams are refreshed by a given time. 
> * We may have 350k files, but only a small percentage of them are ever > non-local at a given time. Refreshing only if necessary will save a lot of > work. > In order to make this as painless to end users as possible, I'd like to: > * Update the implementation to utilize an async thread for managing > refreshes. This will give more control over rate limiting across all > DFSInputStreams in a DFSClient, and also ensure that all DFSInputStreams are > refreshed. > * Only refresh files which are lacking a local replica or have known > deadNodes to be cleaned up
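The design described in the issue, one background thread per DFSClient, refreshing only streams that need it, with a cap on work per pass, can be sketched as follows. This is not the HDFS-16262 patch; every name here is hypothetical, and a generic type stands in for DFSInputStream.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Illustrative sketch only -- not the actual HDFS-16262 implementation.
// One background thread per client walks every registered stream, refreshes
// only the ones that report needing it, and caps the refreshes per pass so a
// large number of streams cannot flood the NameNode.
public class LocatedBlocksRefresher<S> {
    private final Set<S> streams = ConcurrentHashMap.newKeySet();
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    private final Predicate<S> needsRefresh; // e.g. "missing a local replica or has deadNodes"
    private final Consumer<S> refresh;       // e.g. re-fetch located blocks from the NameNode
    private final int maxRefreshesPerPass;   // crude rate limit

    public LocatedBlocksRefresher(Predicate<S> needsRefresh, Consumer<S> refresh,
                                  int maxRefreshesPerPass) {
        this.needsRefresh = needsRefresh;
        this.refresh = refresh;
        this.maxRefreshesPerPass = maxRefreshesPerPass;
    }

    public void register(S stream) { streams.add(stream); }
    public void unregister(S stream) { streams.remove(stream); }

    // One pass over all registered streams; returns how many were refreshed.
    public int runOnePass() {
        int refreshed = 0;
        for (S s : streams) {
            if (refreshed >= maxRefreshesPerPass) break; // defer the rest to the next pass
            if (needsRefresh.test(s)) {
                refresh.accept(s);
                refreshed++;
            }
        }
        return refreshed;
    }

    public void start(long intervalMs) {
        scheduler.scheduleWithFixedDelay(this::runOnePass, intervalMs, intervalMs,
            TimeUnit.MILLISECONDS);
    }

    public void stop() { scheduler.shutdown(); }
}
```

Centralizing refreshes this way addresses both concerns above: the per-pass cap bounds NameNode load even with 350k streams, and the periodic sweep guarantees every stream is eventually refreshed rather than waiting for its next read.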
[jira] [Work logged] (HDFS-11045) TestDirectoryScanner#testThrottling fails: Throttle is too permissive
[ https://issues.apache.org/jira/browse/HDFS-11045?focusedWorklogId=662303&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662303 ] ASF GitHub Bot logged work on HDFS-11045: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:33 Start Date: 07/Oct/21 23:33 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3448: URL: https://github.com/apache/hadoop/pull/3448#issuecomment-937475429 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 46s | | trunk passed | | +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 25s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 18s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 33s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 13s | | the patch passed | | +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 18s | | the patch passed | | +1 :green_heart: | compile | 1m 8s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 8s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 49s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3448/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 23 unchanged - 0 fixed = 27 total (was 23) | | +1 :green_heart: | mvnsite | 1m 12s | | the patch passed | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 19s | | the patch passed | | +1 :green_heart: | shadedclient | 28m 8s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 238m 26s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3448/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. 
| | | | 340m 47s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3448/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3448 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 81fc94348930 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9c5c5bc2e63dc2b57d98df55675f7d7de0c3ac64 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr
[jira] [Work logged] (HDFS-15979) Move within EZ fails and cannot remove nested EZs
[ https://issues.apache.org/jira/browse/HDFS-15979?focusedWorklogId=662266&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662266 ] ASF GitHub Bot logged work on HDFS-15979: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:30 Start Date: 07/Oct/21 23:30 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2919: URL: https://github.com/apache/hadoop/pull/2919#issuecomment-937907197 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 9s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 40m 23s | | trunk passed | | +1 :green_heart: | compile | 1m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 10s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 40s | | trunk passed | | +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 4m 13s | | trunk passed | | +1 :green_heart: | shadedclient | 29m 22s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 46s | | the patch passed | | +1 :green_heart: | compile | 1m 36s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 36s | | the patch passed | | +1 :green_heart: | compile | 1m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 28s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 1s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 37s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 1m 7s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 50s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 4m 11s | | the patch passed | | +1 :green_heart: | shadedclient | 25m 59s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 349m 38s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2919/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. 
| | | | 470m 52s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestHDFSFileSystemContract | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2919/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2919 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux c68e33dd6dcc 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / dcee377d8001638015e01acab762ca1f4667dbf8 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2919/1/testReport/ | | Max. proces
[jira] [Work logged] (HDFS-15516) Add info for create flags in NameNode audit logs
[ https://issues.apache.org/jira/browse/HDFS-15516?focusedWorklogId=662255&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662255 ] ASF GitHub Bot logged work on HDFS-15516: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:29 Start Date: 07/Oct/21 23:29 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2281: URL: https://github.com/apache/hadoop/pull/2281#issuecomment-938074701 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 43s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 31s | | trunk passed | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 40s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 21s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 53s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 131 unchanged - 0 fixed = 134 total (was 131) | | +1 :green_heart: | mvnsite | 1m 13s | | the patch passed | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 16s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 7s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 46s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 237m 59s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. 
| | | | 336m 43s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2281 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux ad79a91b588f 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / e667f64ad54aa013f5a9a1a3b7e2dcdb4a7f63b7 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/1/testReport/ | | Max. process+thread count | 3470 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U:
[jira] [Work logged] (HDFS-16257) [HDFS] [RBF] Guava cache performance issue in Router MountTableResolver
[ https://issues.apache.org/jira/browse/HDFS-16257?focusedWorklogId=662215&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662215 ] ASF GitHub Bot logged work on HDFS-16257: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:26 Start Date: 07/Oct/21 23:26 Worklog Time Spent: 10m Work Description: symious commented on a change in pull request #3524: URL: https://github.com/apache/hadoop/pull/3524#discussion_r72379 ## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java ## @@ -138,6 +138,8 @@ public MountTableResolver(Configuration conf, Router routerService, FEDERATION_MOUNT_TABLE_MAX_CACHE_SIZE, FEDERATION_MOUNT_TABLE_MAX_CACHE_SIZE_DEFAULT); this.locationCache = CacheBuilder.newBuilder() + // To warkaround guava bug https://github.com/google/guava/issues/1055 Review comment: Updated, please help to check. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 662215) Time Spent: 2h 40m (was: 2.5h) > [HDFS] [RBF] Guava cache performance issue in Router MountTableResolver > --- > > Key: HDFS-16257 > URL: https://issues.apache.org/jira/browse/HDFS-16257 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.10.1 >Reporter: Janus Chow >Assignee: Janus Chow >Priority: Major > Labels: pull-request-available > Time Spent: 2h 40m > Remaining Estimate: 0h > > Branch 2.10.1 uses guava version of 11.0.2, which has a bug which affects the > performance of cache, which was mentioned in HDFS-13821. 
> Since upgrading the guava version would be too disruptive, this ticket adds > a configuration setting when initializing the cache to work around this issue.
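The patch under review tunes how Guava's CacheBuilder shards its internal table (the excerpt above shows only the comment line, not the exact builder call, so no attempt is made to reproduce it here). Conceptually, reducing a cache's internal concurrency trades write parallelism for a strict, accurately ordered LRU, which is what a single-lock LinkedHashMap provides. A minimal stdlib sketch of that end of the trade-off (hypothetical, not the Router code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch, not the MountTableResolver code: a strict LRU cache behind
// a single lock. Guava's CacheBuilder instead shards its table by
// concurrencyLevel; constraining that sharding moves its behavior closer to
// this strict, accurately ordered LRU at the cost of write parallelism.
public class StrictLruCache<K, V> {
    private final Map<K, V> map;

    public StrictLruCache(final int maxSize) {
        // accessOrder=true makes iteration order least-recently-used first
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public synchronized V get(K key) { return map.get(key); }
    public synchronized void put(K key, V value) { map.put(key, value); }
    public synchronized int size() { return map.size(); }
}
```

For a read-heavy structure like the mount table location cache, giving up some write concurrency for correct eviction ordering is usually a reasonable trade.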
[jira] [Work logged] (HDFS-16263) Add CMakeLists for hdfs_allowSnapshot
[ https://issues.apache.org/jira/browse/HDFS-16263?focusedWorklogId=662167&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662167 ] ASF GitHub Bot logged work on HDFS-16263: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:22 Start Date: 07/Oct/21 23:22 Worklog Time Spent: 10m Work Description: GauthamBanasandra opened a new pull request #3531: URL: https://github.com/apache/hadoop/pull/3531 ### Description of PR * Currently, hdfs_allowSnapshot is built in its parent directory's CMakeLists.txt. * Need to move this into a separate CMakeLists.txt file under hdfs-allow-snapshot so that it's more modular. ### How was this patch tested? Unit tests ran successfully. ### For code changes: - [x] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 662167) Time Spent: 40m (was: 0.5h) > Add CMakeLists for hdfs_allowSnapshot > - > > Key: HDFS-16263 > URL: https://issues.apache.org/jira/browse/HDFS-16263 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client, libhdfs++, tools >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > Currently, hdfs_allowSnapshot is built in its [parent directory's > CMakeLists.txt|https://github.com/apache/hadoop/blob/95b537ee6a9ff3082c9ad9bc773f86fd4be04e50/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/CMakeLists.txt#L83-L89]. > Need to move this into a separate CMakeLists.txt file under > hdfs-allow-snapshot so that it's more modular.
[jira] [Work logged] (HDFS-16262) Async refresh of cached locations in DFSInputStream
[ https://issues.apache.org/jira/browse/HDFS-16262?focusedWorklogId=662156&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662156 ] ASF GitHub Bot logged work on HDFS-16262: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:21 Start Date: 07/Oct/21 23:21 Worklog Time Spent: 10m Work Description: bbeaudreault commented on pull request #3527: URL: https://github.com/apache/hadoop/pull/3527#issuecomment-938136446 I've had this running in one of our test clusters, under load and with block moves occurring. I had it tuned to a short interval of 10s just to put it in an extreme condition. It works really well. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 662156) Time Spent: 1h 10m (was: 1h) > Async refresh of cached locations in DFSInputStream > --- > > Key: HDFS-16262 > URL: https://issues.apache.org/jira/browse/HDFS-16262 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bryan Beaudreault >Assignee: Bryan Beaudreault >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > HDFS-15119 added the ability to invalidate cached block locations in > DFSInputStream. As written, the feature will affect all DFSInputStreams > regardless of whether they need it or not. The invalidation also only applies > on the next request, so the next request will pay the cost of calling > openInfo before reading the data. > I'm working on a feature for HBase which enables efficient healing of > locality through Balancer-style low level block moves (HBASE-26250). I'd like > to utilize the idea started in HDFS-15119 in order to update DFSInputStreams > after blocks have been moved to local hosts. 
> I was considering using the feature as is, but some of our clusters are quite > large and I'm concerned about the impact on the namenode: > * We have some clusters with over 350k StoreFiles, so that'd be 350k > DFSInputStreams. With such a large number and very active usage, having the > refresh be in-line makes it too hard to ensure we don't DDOS the NameNode. > * Currently we need to pay the price of openInfo the next time a > DFSInputStream is invoked. Moving that async would minimize the latency hit. > Also, some StoreFiles might be far less frequently accessed, so they may live > on for a long time before ever refreshing. We'd like to be able to know that > all DFSInputStreams are refreshed by a given time. > * We may have 350k files, but only a small percentage of them are ever > non-local at a given time. Refreshing only if necessary will save a lot of > work. > In order to make this as painless to end users as possible, I'd like to: > * Update the implementation to utilize an async thread for managing > refreshes. This will give more control over rate limiting across all > DFSInputStreams in a DFSClient, and also ensure that all DFSInputStreams are > refreshed. > * Only refresh files which are lacking a local replica or have known > deadNodes to be cleaned up
[jira] [Work logged] (HDFS-15042) Add more tests for ByteBufferPositionedReadable
[ https://issues.apache.org/jira/browse/HDFS-15042?focusedWorklogId=662144&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662144 ] ASF GitHub Bot logged work on HDFS-15042: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:20 Start Date: 07/Oct/21 23:20 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #1747: URL: https://github.com/apache/hadoop/pull/1747#issuecomment-937687471 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 12m 13s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 1s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 55s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 50s | | trunk passed | | +1 :green_heart: | compile | 21m 14s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 18m 28s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 43s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 27s | | trunk passed | | +1 :green_heart: | javadoc | 3m 16s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 4m 17s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 8m 24s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 0s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 56s | | the patch passed | | +1 :green_heart: | compile | 20m 42s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 20m 42s | | the patch passed | | +1 :green_heart: | compile | 18m 31s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 18m 31s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 3m 37s | | root: The patch generated 0 new + 45 unchanged - 5 fixed = 45 total (was 50) | | +1 :green_heart: | mvnsite | 4m 22s | | the patch passed | | +1 :green_heart: | javadoc | 3m 13s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 4m 13s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 8m 53s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 2s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 40s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 39s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 228m 10s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 6s | | The patch does not generate ASF License warnings. 
| | | | 473m 52s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1747/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1747 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux b225aedf019b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 44494c7fb289a8935135d70350c4bf5148f1ef6d | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1747/1/testReport
[jira] [Work logged] (HDFS-16251) Make hdfs_cat tool cross platform
[ https://issues.apache.org/jira/browse/HDFS-16251?focusedWorklogId=662117&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662117 ] ASF GitHub Bot logged work on HDFS-16251: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:18 Start Date: 07/Oct/21 23:18 Worklog Time Spent: 10m Work Description: goiri merged pull request #3523: URL: https://github.com/apache/hadoop/pull/3523 Issue Time Tracking --- Worklog Id: (was: 662117) Time Spent: 1h 20m (was: 1h 10m) > Make hdfs_cat tool cross platform > - > > Key: HDFS-16251 > URL: https://issues.apache.org/jira/browse/HDFS-16251 > Project: Hadoop HDFS > Issue Type: Improvement > Components: libhdfs++, tools >Affects Versions: 3.4.0 > Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > The source files for hdfs_cat use *getopt* for parsing the command line > arguments. getopt is available only on Linux and thus isn't cross-platform. > We need to replace getopt with *boost::program_options* to make this cross > platform.
[jira] [Work logged] (HDFS-16263) Add CMakeLists for hdfs_allowSnapshot
[ https://issues.apache.org/jira/browse/HDFS-16263?focusedWorklogId=662115&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662115 ] ASF GitHub Bot logged work on HDFS-16263: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:17 Start Date: 07/Oct/21 23:17 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3531: URL: https://github.com/apache/hadoop/pull/3531#issuecomment-938166189 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 54s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 26s | | trunk passed | | +1 :green_heart: | compile | 3m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 24s | | trunk passed | | +1 :green_heart: | shadedclient | 56m 1s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 15s | | the patch passed | | +1 :green_heart: | compile | 2m 36s | | the patch passed | | +1 :green_heart: | cc | 2m 36s | | the patch passed | | +1 :green_heart: | golang | 2m 36s | | the patch passed | | +1 :green_heart: | javac | 2m 36s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 18s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 33m 54s | | hadoop-hdfs-native-client in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. | | | | 114m 51s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3531 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux d803562655aa 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4c53fb6ad18631cb120a2c3885d0dab6d7828522 | | Default Java | Red Hat, Inc.-1.8.0_302-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/testReport/ | | Max. process+thread count | 598 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/console | | versions | git=2.9.5 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 662115) Time Spent: 0.5h (was: 20m) > Add CMakeLists for hdfs_allowSnapshot > - > > Key: HDFS-16263 > URL: https://issues.apache.org/jira/browse/HDFS-16263 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client, libhdfs++, tools >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Currently, hdfs_allowSnapshot is built in its [parent directory's > CMakeLists.txt|https://github.com/apache/hadoop/blob/95b537ee6a9ff3082c9ad9bc773f86fd4be04e50/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/CMakeLists.txt#L83-L89]. > Need to move this into a separate CMakeLists.txt file under > hdfs-allow-snapshot so that it's more modular.
[jira] [Work logged] (HDFS-15987) Improve oiv tool to parse fsimage file in parallel with delimited format
[ https://issues.apache.org/jira/browse/HDFS-15987?focusedWorklogId=662112&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662112 ] ASF GitHub Bot logged work on HDFS-15987: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:17 Start Date: 07/Oct/21 23:17 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2918: URL: https://github.com/apache/hadoop/pull/2918#issuecomment-937920307 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 4s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 28s | | trunk passed | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 59s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 24s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 22s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 16s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 18s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 19s | | the patch passed | | +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 11s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 52s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2918/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 39 unchanged - 0 fixed = 40 total (was 39) | | +1 :green_heart: | mvnsite | 1m 19s | | the patch passed | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 12s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 382m 35s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2918/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 486m 38s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestSnapshotCommands | | | hadoop.hdfs.TestHDFSFileSystemContract | | | hadoop.hdfs.web.TestWebHdfsFileSystemContract | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2918/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2918 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux f5ed3d4b1bd5 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 66502f901c3d5ec3410965ea5fdef2b31947d816 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions
[jira] [Work logged] (HDFS-16257) [HDFS] [RBF] Guava cache performance issue in Router MountTableResolver
[ https://issues.apache.org/jira/browse/HDFS-16257?focusedWorklogId=662047&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662047 ] ASF GitHub Bot logged work on HDFS-16257: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:11 Start Date: 07/Oct/21 23:11 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3524: URL: https://github.com/apache/hadoop/pull/3524#issuecomment-937409811 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 662047) Time Spent: 2.5h (was: 2h 20m) > [HDFS] [RBF] Guava cache performance issue in Router MountTableResolver > --- > > Key: HDFS-16257 > URL: https://issues.apache.org/jira/browse/HDFS-16257 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.10.1 >Reporter: Janus Chow >Assignee: Janus Chow >Priority: Major > Labels: pull-request-available > Time Spent: 2.5h > Remaining Estimate: 0h > > Branch 2.10.1 uses guava version 11.0.2, which has a bug that affects cache > performance, as mentioned in HDFS-13821. > Since upgrading the guava version seems too disruptive, this ticket adds > a configuration setting when initializing the cache to work around this issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16262) Async refresh of cached locations in DFSInputStream
[ https://issues.apache.org/jira/browse/HDFS-16262?focusedWorklogId=661958&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661958 ] ASF GitHub Bot logged work on HDFS-16262: - Author: ASF GitHub Bot Created on: 07/Oct/21 23:04 Start Date: 07/Oct/21 23:04 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3527: URL: https://github.com/apache/hadoop/pull/3527#issuecomment-937438221 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 661958) Time Spent: 1h (was: 50m) > Async refresh of cached locations in DFSInputStream > --- > > Key: HDFS-16262 > URL: https://issues.apache.org/jira/browse/HDFS-16262 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bryan Beaudreault >Assignee: Bryan Beaudreault >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > HDFS-15119 added the ability to invalidate cached block locations in > DFSInputStream. As written, the feature will affect all DFSInputStreams > regardless of whether they need it or not. The invalidation also only applies > on the next request, so the next request will pay the cost of calling > openInfo before reading the data. > I'm working on a feature for HBase which enables efficient healing of > locality through Balancer-style low level block moves (HBASE-26250). I'd like > to utilize the idea started in HDFS-15119 in order to update DFSInputStreams > after blocks have been moved to local hosts. 
> I was considering using the feature as is, but some of our clusters are quite > large and I'm concerned about the impact on the namenode: > * We have some clusters with over 350k StoreFiles, so that'd be 350k > DFSInputStreams. With such a large number and very active usage, having the > refresh be in-line makes it too hard to ensure we don't DDOS the NameNode. > * Currently we need to pay the price of openInfo the next time a > DFSInputStream is invoked. Moving that async would minimize the latency hit. > Also, some StoreFiles might be far less frequently accessed, so they may live > on for a long time before ever refreshing. We'd like to be able to know that > all DFSInputStreams are refreshed by a given time. > * We may have 350k files, but only a small percentage of them are ever > non-local at a given time. Refreshing only if necessary will save a lot of > work. > In order to make this as painless to end users as possible, I'd like to: > * Update the implementation to utilize an async thread for managing > refreshes. This will give more control over rate limiting across all > DFSInputStreams in a DFSClient, and also ensure that all DFSInputStreams are > refreshed. > * Only refresh files which are lacking a local replica or have known > deadNodes to be cleaned up > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
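The bullets above describe the intended shape of the change: one shared async refresher per DFSClient that rate-limits NameNode calls across all DFSInputStreams and only refreshes streams that lack a local replica or have known dead nodes. As a rough illustration (not the actual HDFS-16262 patch — `StreamState`, `LocationRefresher`, and the refresh bookkeeping are invented names), such a refresher might look like:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for the per-stream state a DFSClient would track.
class StreamState {
    final String file;
    volatile boolean hasLocalReplica;
    volatile boolean hasDeadNodes;
    volatile int refreshCount = 0;

    StreamState(String file, boolean hasLocalReplica, boolean hasDeadNodes) {
        this.file = file;
        this.hasLocalReplica = hasLocalReplica;
        this.hasDeadNodes = hasDeadNodes;
    }
}

// One refresher shared by all open streams of a client: it bounds NameNode
// load to at most maxRefreshesPerRound location lookups per interval and
// skips streams that are already local and healthy.
class LocationRefresher {
    private final Set<StreamState> streams = ConcurrentHashMap.newKeySet();
    private final ScheduledExecutorService pool =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "location-refresher");
            t.setDaemon(true);
            return t;
        });
    private final int maxRefreshesPerRound;

    LocationRefresher(long intervalMs, int maxRefreshesPerRound) {
        this.maxRefreshesPerRound = maxRefreshesPerRound;
        pool.scheduleWithFixedDelay(this::runOneRound, intervalMs, intervalMs,
            TimeUnit.MILLISECONDS);
    }

    void register(StreamState s)   { streams.add(s); }
    void unregister(StreamState s) { streams.remove(s); }

    // Visible for testing; normally invoked only by the scheduler.
    void runOneRound() {
        int budget = maxRefreshesPerRound;
        for (StreamState s : streams) {
            if (budget == 0) break;                              // rate limit
            if (s.hasLocalReplica && !s.hasDeadNodes) continue;  // nothing to heal
            refresh(s);
            budget--;
        }
    }

    private void refresh(StreamState s) {
        // The real code would re-fetch block locations from the NameNode
        // (openInfo); here we only record that a refresh happened.
        s.refreshCount++;
        s.hasDeadNodes = false;
    }
}
```

Because the budget is enforced per round rather than per stream, a cluster with 350k StoreFiles still produces a predictable, bounded NameNode request rate regardless of how many streams are stale at once.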
[jira] [Commented] (HDFS-16261) Configurable grace period around deletion of invalidated blocks
[ https://issues.apache.org/jira/browse/HDFS-16261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425805#comment-17425805 ] Bryan Beaudreault commented on HDFS-16261: -- I'm looking at this now. I don't have much experience in this area, but am looking into two possibilities at a high level: handling this on the namenode or handling in the datanode. h2. Handling in the NameNode When a DataNode receives a block, it notifies the namenode via notifyNamenodeReceivedBlock. This sends a RECEIVED_BLOCK to the namenode along with a "delHint", which tells the namenode to invalidate that block on the old host. Tracing that delHint through on the namenode side through a bunch of layers, you eventually land in BlockManager.processExtraRedundancyBlock, which eventually lands in processChosenExcessRedundancy. processChosenExcessRedundancy adds the block to an excessRedundancyMap and to a nodeToBlocks map in InvalidateBlocks. There is a RedundancyChore which periodically checks InvalidateBlocks, pulling a configurable number of blocks and adding them to the DatanodeDescriptor's invalidateBlocks map. One quick option might be to configure the RedundancyChore with dfs.namenode.redundancy.interval.seconds, though that's not exactly what we want, which is a per-block grace period. Next time a DataNode sends a heartbeat, the Namenode processes various state for that datanode and sends back a series of commands. Here the NameNode pulls a configurable number of blocks from the DatanodeDescriptor's invalidateBlocks and sends them to the DataNode as part of a DNA_INVALIDATE command. If we were to handle this in the NameNode, we could potentially hook in a couple of places: * When adding to nodeToBlocks, we could include a timestamp. The RedundancyChore could only add blocks to the Descriptor's invalidateBlocks map if older than a threshold. * When adding to Descriptor's invalidateBlocks, we could add a timestamp. 
When processing heartbeats, we could only send blocks via DNA_INVALIDATE which have been in invalidateBlocks for more than a threshold * As mentioned above, we could try tuning dfs.namenode.redundancy.interval.seconds, though that isn't perfect because a block could be added right before the chore runs and thus get immediately invalidated. h2. Handling in the DataNode When a DataNode gets a request for a block, it looks that up in its FsDatasetImpl volumeMap. If the block does not exist, a ReplicaNotFoundException is thrown. The DataNode receives the list of blocks to invalidate from the DNA_INVALIDATE command, which is processed by BPOfferService. This is immediately handed off to FsDatasetImpl.invalidate, which validates the request and immediately removes the block from volumeMap. At this point, the data still exists on disk but requests for the block would throw a ReplicaNotFoundException per above. Once removed from volumeMap, the deletion of data is handled by the FsDatasetAsyncDiskService. The processing is done async, but is immediately handed off to a ThreadPoolExecutor which should execute fairly quickly. A couple options: * Defer the call to FsDatasetImpl.invalidate, at the highest level. This could be passed off to a thread pool to be executed after a delay. In this case, the block would remain in the volumeMap until the task is executed. * Execute invalidate immediately, but defer the data deletion. We're already using a thread pool here, so it might be easier to execute after a delay. It's worth noting that there are other actions taken around the volumeMap removal. We'd need to verify whether those need to be synchronized with removal from volumeMap. In this case we'd need to either: ** relocate the volumeMap.remove call to within the FsDatasetAsyncDiskService. This seems like somewhat of a leaky abstraction. ** Add a pendingDeletion map and add to that when removing from volumeMap. 
The FsDatasetAsyncDiskService would remove from pendingDeletion once completed. We'd need to update our block fetch code to check volumeMap _or_ pendingDeletion. This separation might give us opportunities in the future, such as including a flag in the response that instructs the DFSClient "this block may go away soon". I'm doing more investigation and specifically want to look into what would happen if the handling service died before invalidating blocks. I'm assuming this is already handled since this process is very async already, but it will be good to know. I also want to do a bit more thinking of the pros and cons of each option above, and some experimenting with the easiest option of tuning the redundancy chore. I'll report back when I have some more information, and also open to other opinions or suggestions. > Configurable grace period around deletion of invalidated blocks > --- > > Key: H
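The second DataNode option sketched above (invalidate immediately, defer the on-disk delete, and keep a `pendingDeletion` map that block lookups also consult) can be illustrated with a small stand-alone model. All names here (`BlockStore`, `graceMs`, `canServe`) are hypothetical; the real change would live in FsDatasetImpl and FsDatasetAsyncDiskService, and the scheduled task would delete the actual block and meta files.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative stand-in for the DataNode's replica bookkeeping.
class BlockStore {
    final Map<Long, String> volumeMap = new ConcurrentHashMap<>();       // blockId -> path
    final Map<Long, String> pendingDeletion = new ConcurrentHashMap<>();
    private final ScheduledExecutorService deleter =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "deferred-block-deleter");
            t.setDaemon(true);
            return t;
        });
    private final long graceMs;

    BlockStore(long graceMs) { this.graceMs = graceMs; }

    // DNA_INVALIDATE handling: the block leaves volumeMap right away, but the
    // bytes on disk survive for graceMs so clients holding stale cached
    // locations can still read instead of hitting ReplicaNotFoundException.
    void invalidate(long blockId) {
        String path = volumeMap.remove(blockId);
        if (path == null) return;                 // already gone
        pendingDeletion.put(blockId, path);
        deleter.schedule(() -> {
            pendingDeletion.remove(blockId);
            // real code: delete the block and meta files at 'path' here
        }, graceMs, TimeUnit.MILLISECONDS);
    }

    // Reads consult volumeMap _or_ pendingDeletion, per the lookup change
    // described above.
    boolean canServe(long blockId) {
        return volumeMap.containsKey(blockId) || pendingDeletion.containsKey(blockId);
    }
}
```

A split like this is also where a "this block may go away soon" flag could be surfaced to the DFSClient later, since the store knows exactly which replicas are in their grace window.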
[jira] [Work logged] (HDFS-16263) Add CMakeLists for hdfs_allowSnapshot
[ https://issues.apache.org/jira/browse/HDFS-16263?focusedWorklogId=661930&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661930 ] ASF GitHub Bot logged work on HDFS-16263: - Author: ASF GitHub Bot Created on: 07/Oct/21 21:24 Start Date: 07/Oct/21 21:24 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3531: URL: https://github.com/apache/hadoop/pull/3531#issuecomment-938166189 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 54s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 26s | | trunk passed | | +1 :green_heart: | compile | 3m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 24s | | trunk passed | | +1 :green_heart: | shadedclient | 56m 1s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 15s | | the patch passed | | +1 :green_heart: | compile | 2m 36s | | the patch passed | | +1 :green_heart: | cc | 2m 36s | | the patch passed | | +1 :green_heart: | golang | 2m 36s | | the patch passed | | +1 :green_heart: | javac | 2m 36s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 18s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 33m 54s | | hadoop-hdfs-native-client in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. | | | | 114m 51s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3531 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux d803562655aa 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4c53fb6ad18631cb120a2c3885d0dab6d7828522 | | Default Java | Red Hat, Inc.-1.8.0_302-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/testReport/ | | Max. process+thread count | 598 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3531/1/console | | versions | git=2.9.5 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 661930) Time Spent: 20m (was: 10m) > Add CMakeLists for hdfs_allowSnapshot > - > > Key: HDFS-16263 > URL: https://issues.apache.org/jira/browse/HDFS-16263 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client, libhdfs++, tools >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Currently, hdfs_allowSnapshot is built in its [parent directory's > CMakeLists.txt|https://github.com/apache/hadoop/blob/95b537ee6a9ff3082c9ad9bc773f86fd4be04e50/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/CMakeLists.txt#L83-L89]. > Need to move this into a separate CMakeLists.txt file under > hdfs-allow-snapshot so that it's more modular.
[jira] [Commented] (HDFS-16262) Async refresh of cached locations in DFSInputStream
[ https://issues.apache.org/jira/browse/HDFS-16262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425784#comment-17425784 ] Bryan Beaudreault commented on HDFS-16262: -- PR submitted: [https://github.com/apache/hadoop/pull/3527] I've had it running in one of our test clusters (hadoop 3.3), under load and with block moves occurring. I had it tuned to a short interval of 10s just to put it in an extreme condition. It works really well. [~kihwal] [~ahussein] just wanted to tag you both because you worked on the original issue. Thanks for the inspiration and I tried to implement this in a way that is backwards compatible with your original intention. > Async refresh of cached locations in DFSInputStream > --- > > Key: HDFS-16262 > URL: https://issues.apache.org/jira/browse/HDFS-16262 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bryan Beaudreault >Assignee: Bryan Beaudreault >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > HDFS-15119 added the ability to invalidate cached block locations in > DFSInputStream. As written, the feature will affect all DFSInputStreams > regardless of whether they need it or not. The invalidation also only applies > on the next request, so the next request will pay the cost of calling > openInfo before reading the data. > I'm working on a feature for HBase which enables efficient healing of > locality through Balancer-style low level block moves (HBASE-26250). I'd like > to utilize the idea started in HDFS-15119 in order to update DFSInputStreams > after blocks have been moved to local hosts. > I was considering using the feature as is, but some of our clusters are quite > large and I'm concerned about the impact on the namenode: > * We have some clusters with over 350k StoreFiles, so that'd be 350k > DFSInputStreams. 
With such a large number and very active usage, having the > refresh be in-line makes it too hard to ensure we don't DDOS the NameNode. > * Currently we need to pay the price of openInfo the next time a > DFSInputStream is invoked. Moving that async would minimize the latency hit. > Also, some StoreFiles might be far less frequently accessed, so they may live > on for a long time before ever refreshing. We'd like to be able to know that > all DFSInputStreams are refreshed by a given time. > * We may have 350k files, but only a small percentage of them are ever > non-local at a given time. Refreshing only if necessary will save a lot of > work. > In order to make this as painless to end users as possible, I'd like to: > * Update the implementation to utilize an async thread for managing > refreshes. This will give more control over rate limiting across all > DFSInputStreams in a DFSClient, and also ensure that all DFSInputStreams are > refreshed. > * Only refresh files which are lacking a local replica or have known > deadNodes to be cleaned up > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16262) Async refresh of cached locations in DFSInputStream
[ https://issues.apache.org/jira/browse/HDFS-16262?focusedWorklogId=661897&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661897 ] ASF GitHub Bot logged work on HDFS-16262: - Author: ASF GitHub Bot Created on: 07/Oct/21 20:35 Start Date: 07/Oct/21 20:35 Worklog Time Spent: 10m Work Description: bbeaudreault commented on pull request #3527: URL: https://github.com/apache/hadoop/pull/3527#issuecomment-938136446 I've had this running in one of our test clusters, under load and with block moves occurring. I had it tuned to a short interval of 10s just to put it in an extreme condition. It works really well. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 661897) Time Spent: 50m (was: 40m) > Async refresh of cached locations in DFSInputStream > --- > > Key: HDFS-16262 > URL: https://issues.apache.org/jira/browse/HDFS-16262 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bryan Beaudreault >Assignee: Bryan Beaudreault >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > HDFS-15119 added the ability to invalidate cached block locations in > DFSInputStream. As written, the feature will affect all DFSInputStreams > regardless of whether they need it or not. The invalidation also only applies > on the next request, so the next request will pay the cost of calling > openInfo before reading the data. > I'm working on a feature for HBase which enables efficient healing of > locality through Balancer-style low level block moves (HBASE-26250). I'd like > to utilize the idea started in HDFS-15119 in order to update DFSInputStreams > after blocks have been moved to local hosts. 
> I was considering using the feature as is, but some of our clusters are quite > large and I'm concerned about the impact on the namenode: > * We have some clusters with over 350k StoreFiles, so that'd be 350k > DFSInputStreams. With such a large number and very active usage, having the > refresh be in-line makes it too hard to ensure we don't DDOS the NameNode. > * Currently we need to pay the price of openInfo the next time a > DFSInputStream is invoked. Moving that async would minimize the latency hit. > Also, some StoreFiles might be far less frequently accessed, so they may live > on for a long time before ever refreshing. We'd like to be able to know that > all DFSInputStreams are refreshed by a given time. > * We may have 350k files, but only a small percentage of them are ever > non-local at a given time. Refreshing only if necessary will save a lot of > work. > In order to make this as painless to end users as possible, I'd like to: > * Update the implementation to utilize an async thread for managing > refreshes. This will give more control over rate limiting across all > DFSInputStreams in a DFSClient, and also ensure that all DFSInputStreams are > refreshed. > * Only refresh files which are lacking a local replica or have known > deadNodes to be cleaned up > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16263) Add CMakeLists for hdfs_allowSnapshot
[ https://issues.apache.org/jira/browse/HDFS-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-16263: -- Labels: pull-request-available (was: ) > Add CMakeLists for hdfs_allowSnapshot > - > > Key: HDFS-16263 > URL: https://issues.apache.org/jira/browse/HDFS-16263 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client, libhdfs++, tools >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Currently, hdfs_allowSnapshot is built in its [parent directory's > CMakeLists.txt|https://github.com/apache/hadoop/blob/95b537ee6a9ff3082c9ad9bc773f86fd4be04e50/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/CMakeLists.txt#L83-L89]. > Need to move this into a separate CMakeLists.txt file under > hdfs-allow-snapshot so that it's more modular. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16263) Add CMakeLists for hdfs_allowSnapshot
[ https://issues.apache.org/jira/browse/HDFS-16263?focusedWorklogId=661865&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661865 ] ASF GitHub Bot logged work on HDFS-16263: - Author: ASF GitHub Bot Created on: 07/Oct/21 19:27 Start Date: 07/Oct/21 19:27 Worklog Time Spent: 10m Work Description: GauthamBanasandra opened a new pull request #3531: URL: https://github.com/apache/hadoop/pull/3531 ### Description of PR * Currently, hdfs_allowSnapshot is built in its parent directory's CMakeLists.txt. * Need to move this into a separate CMakeLists.txt file under hdfs-allow-snapshot so that it's more modular. ### How was this patch tested? Unit tests ran successfully. ### For code changes: - [x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 661865) Remaining Estimate: 0h Time Spent: 10m > Add CMakeLists for hdfs_allowSnapshot > - > > Key: HDFS-16263 > URL: https://issues.apache.org/jira/browse/HDFS-16263 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client, libhdfs++, tools >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Currently, hdfs_allowSnapshot is built in its [parent directory's > CMakeLists.txt|https://github.com/apache/hadoop/blob/95b537ee6a9ff3082c9ad9bc773f86fd4be04e50/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/CMakeLists.txt#L83-L89]. > We need to move this into a separate CMakeLists.txt file under > hdfs-allow-snapshot so that it's more modular. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15516) Add info for create flags in NameNode audit logs
[ https://issues.apache.org/jira/browse/HDFS-15516?focusedWorklogId=661853&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661853 ] ASF GitHub Bot logged work on HDFS-15516: - Author: ASF GitHub Bot Created on: 07/Oct/21 19:05 Start Date: 07/Oct/21 19:05 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2281: URL: https://github.com/apache/hadoop/pull/2281#issuecomment-938074701 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 43s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 31s | | trunk passed | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 40s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 21s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 53s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 131 unchanged - 0 fixed = 134 total (was 131) | | +1 :green_heart: | mvnsite | 1m 13s | | the patch passed | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 16s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 7s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 46s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 237m 59s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. 
| | | | 336m 43s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2281 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux ad79a91b588f 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / e667f64ad54aa013f5a9a1a3b7e2dcdb4a7f63b7 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/1/testReport/ | | Max. process+thread count | 3470 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U:
[jira] [Created] (HDFS-16263) Add CMakeLists for hdfs_allowSnapshot
Gautham Banasandra created HDFS-16263: - Summary: Add CMakeLists for hdfs_allowSnapshot Key: HDFS-16263 URL: https://issues.apache.org/jira/browse/HDFS-16263 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs-client, libhdfs++, tools Affects Versions: 3.4.0 Reporter: Gautham Banasandra Assignee: Gautham Banasandra Currently, hdfs_allowSnapshot is built in its [parent directory's CMakeLists.txt|https://github.com/apache/hadoop/blob/95b537ee6a9ff3082c9ad9bc773f86fd4be04e50/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/CMakeLists.txt#L83-L89]. We need to move this into a separate CMakeLists.txt file under hdfs-allow-snapshot so that it's more modular. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16251) Make hdfs_cat tool cross platform
[ https://issues.apache.org/jira/browse/HDFS-16251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425719#comment-17425719 ] Íñigo Goiri commented on HDFS-16251: Thanks [~gautham] for the work. Merged PR 3523. > Make hdfs_cat tool cross platform > - > > Key: HDFS-16251 > URL: https://issues.apache.org/jira/browse/HDFS-16251 > Project: Hadoop HDFS > Issue Type: Improvement > Components: libhdfs++, tools >Affects Versions: 3.4.0 > Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The source files for hdfs_cat use *getopt* for parsing the command line > arguments. getopt is available only on Linux and thus, isn't cross platform. > We need to replace getopt with *boost::program_options* to make this cross > platform. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-16251) Make hdfs_cat tool cross platform
[ https://issues.apache.org/jira/browse/HDFS-16251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri resolved HDFS-16251. Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed > Make hdfs_cat tool cross platform > - > > Key: HDFS-16251 > URL: https://issues.apache.org/jira/browse/HDFS-16251 > Project: Hadoop HDFS > Issue Type: Improvement > Components: libhdfs++, tools >Affects Versions: 3.4.0 > Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The source files for hdfs_cat use *getopt* for parsing the command line > arguments. getopt is available only on Linux and thus, isn't cross platform. > We need to replace getopt with *boost::program_options* to make this cross > platform. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16251) Make hdfs_cat tool cross platform
[ https://issues.apache.org/jira/browse/HDFS-16251?focusedWorklogId=661805&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661805 ] ASF GitHub Bot logged work on HDFS-16251: - Author: ASF GitHub Bot Created on: 07/Oct/21 17:57 Start Date: 07/Oct/21 17:57 Worklog Time Spent: 10m Work Description: goiri merged pull request #3523: URL: https://github.com/apache/hadoop/pull/3523 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 661805) Time Spent: 1h 10m (was: 1h) > Make hdfs_cat tool cross platform > - > > Key: HDFS-16251 > URL: https://issues.apache.org/jira/browse/HDFS-16251 > Project: Hadoop HDFS > Issue Type: Improvement > Components: libhdfs++, tools >Affects Versions: 3.4.0 > Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > The source files for hdfs_cat use *getopt* for parsing the command line > arguments. getopt is available only on Linux and thus, isn't cross platform. > We need to replace getopt with *boost::program_options* to make this cross > platform. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15987) Improve oiv tool to parse fsimage file in parallel with delimited format
[ https://issues.apache.org/jira/browse/HDFS-15987?focusedWorklogId=661724&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661724 ] ASF GitHub Bot logged work on HDFS-15987: - Author: ASF GitHub Bot Created on: 07/Oct/21 15:47 Start Date: 07/Oct/21 15:47 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2918: URL: https://github.com/apache/hadoop/pull/2918#issuecomment-937920307 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 4s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 28s | | trunk passed | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 59s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 24s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 22s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 16s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 18s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 19s | | the patch passed | | +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 11s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 52s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2918/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 39 unchanged - 0 fixed = 40 total (was 39) | | +1 :green_heart: | mvnsite | 1m 19s | | the patch passed | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 12s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 382m 35s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2918/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 486m 38s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestSnapshotCommands | | | hadoop.hdfs.TestHDFSFileSystemContract | | | hadoop.hdfs.web.TestWebHdfsFileSystemContract | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2918/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2918 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux f5ed3d4b1bd5 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 66502f901c3d5ec3410965ea5fdef2b31947d816 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions
[jira] [Work logged] (HDFS-15979) Move within EZ fails and cannot remove nested EZs
[ https://issues.apache.org/jira/browse/HDFS-15979?focusedWorklogId=661719&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661719 ] ASF GitHub Bot logged work on HDFS-15979: - Author: ASF GitHub Bot Created on: 07/Oct/21 15:32 Start Date: 07/Oct/21 15:32 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2919: URL: https://github.com/apache/hadoop/pull/2919#issuecomment-937907197 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 9s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 40m 23s | | trunk passed | | +1 :green_heart: | compile | 1m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 10s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 40s | | trunk passed | | +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 4m 13s | | trunk passed | | +1 :green_heart: | shadedclient | 29m 22s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 46s | | the patch passed | | +1 :green_heart: | compile | 1m 36s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 36s | | the patch passed | | +1 :green_heart: | compile | 1m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 28s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 1s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 37s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 1m 7s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 50s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 4m 11s | | the patch passed | | +1 :green_heart: | shadedclient | 25m 59s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 349m 38s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2919/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. 
| | | | 470m 52s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestHDFSFileSystemContract | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2919/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2919 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux c68e33dd6dcc 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / dcee377d8001638015e01acab762ca1f4667dbf8 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2919/1/testReport/ | | Max. proces
[jira] [Work logged] (HDFS-16257) [HDFS] [RBF] Guava cache performance issue in Router MountTableResolver
[ https://issues.apache.org/jira/browse/HDFS-16257?focusedWorklogId=661631&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661631 ] ASF GitHub Bot logged work on HDFS-16257: - Author: ASF GitHub Bot Created on: 07/Oct/21 14:07 Start Date: 07/Oct/21 14:07 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3524: URL: https://github.com/apache/hadoop/pull/3524#issuecomment-937827956 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 36s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-2.10 Compile Tests _ | | +1 :green_heart: | mvninstall | 14m 32s | branch-2.10 passed | | +1 :green_heart: | compile | 0m 35s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | compile | 0m 30s | branch-2.10 passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | +1 :green_heart: | checkstyle | 0m 21s | branch-2.10 passed | | +1 :green_heart: | mvnsite | 0m 47s | branch-2.10 passed | | +1 :green_heart: | javadoc | 0m 51s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 37s | branch-2.10 passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | +0 :ok: | spotbugs | 3m 48s | Both FindBugs and SpotBugs are enabled, using SpotBugs. 
| | +1 :green_heart: | spotbugs | 1m 3s | branch-2.10 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 28s | the patch passed | | +1 :green_heart: | compile | 0m 31s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javac | 0m 31s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | +1 :green_heart: | javac | 0m 23s | the patch passed | | +1 :green_heart: | checkstyle | 0m 14s | the patch passed | | +1 :green_heart: | mvnsite | 0m 29s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | javadoc | 0m 40s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 31s | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | +1 :green_heart: | spotbugs | 1m 8s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 17m 0s | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 26s | The patch does not generate ASF License warnings. 
| | | | 45m 55s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3524/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3524 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle | | uname | Linux 023779c2cd06 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-2.10 / dc03afc | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3524/8/testReport/ | | Max. process+thread count | 1295 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3524/8/console | | versions | git=2.7.4 maven=3.3.9 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
[jira] [Commented] (HDFS-16259) Catch and re-throw sub-classes of AccessControlException thrown by any permission provider plugins (eg Ranger)
[ https://issues.apache.org/jira/browse/HDFS-16259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425573#comment-17425573 ] Stephen O'Donnell commented on HDFS-16259: -- {quote} What do you think about Compatibility? I think even if you unwrap at DfsClient or convert to ACE at Namenode, Compatibility guidelines would definitely break {quote} This is why I think catching the enforcer exceptions in the Namenode and throwing a plain AccessControlException is the safest bet, at least for the 3.3 and 3.2 branches. Perhaps we should do something different on trunk that may be incompatible, eg change the client. Nothing that calls the DFS Client should depend on a Ranger or other plugin defined exception coming out of the DFS client, and the way the client has been coded, it doesn't expect it either, as it only unwraps specific exceptions right now. {quote} Why we would just need to unwrap only a selective Exceptions {quote} Yea I agree, this was a strange decision. It means that sometimes you get a useful exception, and others you get a RemoteException, and with RemoteException you cannot even call "getCause()" on it to get the real exception. It would probably have been better to unwrap the remote exception always and just return the real cause to the caller. > Catch and re-throw sub-classes of AccessControlException thrown by any > permission provider plugins (eg Ranger) > -- > > Key: HDFS-16259 > URL: https://issues.apache.org/jira/browse/HDFS-16259 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Stephen O'Donnell >Assignee: Stephen O'Donnell >Priority: Major > > When a permission provider plugin is enabled (eg Ranger) there are some > scenarios where it can throw a sub-class of an AccessControlException (eg > RangerAccessControlException). 
If this exception is allowed to propagate up > the stack, it can give problems in the HDFS Client, when it unwraps the > remote exception containing the AccessControlException sub-class. > Ideally, we should make AccessControlException final so it cannot be > sub-classed, but that would be a breaking change at this point. Therefore I > believe the safest thing to do, is to catch any AccessControlException that > comes out of the permission enforcer plugin, and re-throw an > AccessControlException instead. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
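[Editor's note] The pattern discussed in HDFS-16259 — catch any AccessControlException subclass at the NameNode boundary and rethrow the plain base class, keeping the original exception as the cause so it is not lost — can be sketched as below. This is a simplified illustration only: the exception classes here are stand-ins for org.apache.hadoop.security.AccessControlException and a plugin subclass such as RangerAccessControlException, and the checkPermission() hook is hypothetical, not the real enforcer API.

```java
// Stand-in for org.apache.hadoop.security.AccessControlException (illustrative only).
class AccessControlException extends Exception {
    AccessControlException(String msg) { super(msg); }
}

// Stand-in for a plugin-defined subclass, e.g. RangerAccessControlException.
class PluginAccessControlException extends AccessControlException {
    PluginAccessControlException(String msg) { super(msg); }
}

public class RethrowSketch {
    // Hypothetical enforcer call that may throw a plugin-defined subclass.
    static void checkPermission() throws AccessControlException {
        throw new PluginAccessControlException("Permission denied by plugin");
    }

    // Catch any AccessControlException subclass at the boundary and rethrow
    // the base class, preserving the original exception as the cause.
    static void checkPermissionWithTranslation() throws AccessControlException {
        try {
            checkPermission();
        } catch (AccessControlException ace) {
            AccessControlException plain =
                new AccessControlException(ace.getMessage());
            plain.initCause(ace);
            throw plain;
        }
    }

    public static void main(String[] args) {
        try {
            checkPermissionWithTranslation();
        } catch (AccessControlException ace) {
            // The caller now sees exactly the base class, so client-side
            // unwrapping never encounters an unknown subclass, while the
            // plugin exception survives in the cause chain.
            System.out.println(ace.getClass().getSimpleName());          // AccessControlException
            System.out.println(ace.getCause().getClass().getSimpleName()); // PluginAccessControlException
        }
    }
}
```

This mirrors the trade-off in the thread: callers lose the subclass type (an intentional compatibility choice), but nothing is lost for debugging because getCause() still yields the original plugin exception.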
[jira] [Work logged] (HDFS-16257) [HDFS] [RBF] Guava cache performance issue in Router MountTableResolver
[ https://issues.apache.org/jira/browse/HDFS-16257?focusedWorklogId=661626&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661626 ] ASF GitHub Bot logged work on HDFS-16257: - Author: ASF GitHub Bot Created on: 07/Oct/21 14:01 Start Date: 07/Oct/21 14:01 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3524: URL: https://github.com/apache/hadoop/pull/3524#issuecomment-937822584 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-2.10 Compile Tests _ | | +1 :green_heart: | mvninstall | 14m 33s | branch-2.10 passed | | +1 :green_heart: | compile | 0m 35s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | compile | 0m 31s | branch-2.10 passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | +1 :green_heart: | checkstyle | 0m 20s | branch-2.10 passed | | +1 :green_heart: | mvnsite | 0m 37s | branch-2.10 passed | | +1 :green_heart: | javadoc | 0m 48s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 33s | branch-2.10 passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | +0 :ok: | spotbugs | 3m 34s | Both FindBugs and SpotBugs are enabled, using SpotBugs. 
| | +1 :green_heart: | spotbugs | 1m 7s | branch-2.10 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | the patch passed | | +1 :green_heart: | compile | 0m 31s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javac | 0m 31s | the patch passed | | +1 :green_heart: | compile | 0m 25s | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | +1 :green_heart: | javac | 0m 25s | the patch passed | | +1 :green_heart: | checkstyle | 0m 14s | the patch passed | | +1 :green_heart: | mvnsite | 0m 30s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | javadoc | 0m 42s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 30s | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | +1 :green_heart: | spotbugs | 1m 10s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 17m 26s | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. 
| | | | 46m 22s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3524/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3524 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle | | uname | Linux e47a6fe08c90 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-2.10 / dc03afc | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3524/7/testReport/ | | Max. process+thread count | 1389 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3524/7/console | | versions | git=2.7.4 maven=3.3.9 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment
[jira] [Work logged] (HDFS-16262) Async refresh of cached locations in DFSInputStream
[ https://issues.apache.org/jira/browse/HDFS-16262?focusedWorklogId=661552&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661552 ]

ASF GitHub Bot logged work on HDFS-16262:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 07/Oct/21 11:41
            Start Date: 07/Oct/21 11:41
    Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3527:
URL: https://github.com/apache/hadoop/pull/3527#issuecomment-937711713

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 0s | | Docker mode activated. |
| -1 :x: | patch | 0m 19s | | https://github.com/apache/hadoop/pull/3527 does not apply to trunk. Rebase required? Wrong branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3527/3/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------
    Worklog Id: (was: 661552)
    Time Spent: 40m  (was: 0.5h)

> Async refresh of cached locations in DFSInputStream
> ---------------------------------------------------
>
>                 Key: HDFS-16262
>                 URL: https://issues.apache.org/jira/browse/HDFS-16262
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Bryan Beaudreault
>            Assignee: Bryan Beaudreault
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> HDFS-15119 added the ability to invalidate cached block locations in
> DFSInputStream. As written, the feature affects all DFSInputStreams,
> whether they need it or not. The invalidation also only applies on the
> next request, so that request pays the cost of calling openInfo before
> reading the data.
> I'm working on a feature for HBase which enables efficient healing of
> locality through Balancer-style low-level block moves (HBASE-26250). I'd
> like to build on the idea started in HDFS-15119 to update DFSInputStreams
> after blocks have been moved to local hosts.
> I considered using the feature as is, but some of our clusters are quite
> large and I'm concerned about the impact on the NameNode:
> * We have some clusters with over 350k StoreFiles, so that would be 350k
> DFSInputStreams. With such a large number and very active usage, an
> in-line refresh makes it too hard to ensure we don't DDOS the NameNode.
> * Currently we pay the price of openInfo the next time a DFSInputStream
> is invoked. Moving that async would minimize the latency hit. Also, some
> StoreFiles might be far less frequently accessed, so they may live on for
> a long time before ever refreshing. We'd like to be able to know that all
> DFSInputStreams are refreshed by a given time.
> * We may have 350k files, but only a small percentage of them are ever
> non-local at a given time. Refreshing only when necessary saves a lot of
> work.
> To make this as painless for end users as possible, I'd like to:
> * Update the implementation to use an async thread for managing
> refreshes. This gives more control over rate limiting across all
> DFSInputStreams in a DFSClient, and also ensures that all DFSInputStreams
> are refreshed.
> * Only refresh files which are lacking a local replica or have known
> deadNodes to be cleaned up

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
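The refresh strategy proposed above (one shared background task per client, rate-limited, touching only streams that lack a local replica or carry dead nodes) can be sketched roughly as below. This is a hypothetical illustration, not the actual HDFS-16262 patch: `LocationRefresher` and `RefreshableStream` are invented names standing in for a `DFSClient`-level service and the relevant slice of `DFSInputStream`.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: a single scheduled task refreshes cached block
// locations for all registered streams, capped per round so a client with
// hundreds of thousands of open streams cannot flood the NameNode.
public class LocationRefresher {
  /** Minimal stand-in for a DFSInputStream's cached-location state. */
  public interface RefreshableStream {
    boolean hasLocalReplica();     // do all blocks have a local replica?
    boolean hasDeadNodes();        // are any cached nodes marked dead?
    void refreshBlockLocations();  // re-fetch locations (the openInfo cost)
  }

  private final Set<RefreshableStream> streams = ConcurrentHashMap.newKeySet();
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final int maxRefreshesPerRound;

  public LocationRefresher(long periodMs, int maxRefreshesPerRound) {
    this.maxRefreshesPerRound = maxRefreshesPerRound;
    scheduler.scheduleWithFixedDelay(
        this::refreshRound, periodMs, periodMs, TimeUnit.MILLISECONDS);
  }

  public void register(RefreshableStream s) { streams.add(s); }
  public void unregister(RefreshableStream s) { streams.remove(s); }

  /** One round: refresh at most maxRefreshesPerRound streams that need it;
   *  streams that are already fully local and clean are skipped for free. */
  public void refreshRound() {
    int done = 0;
    for (RefreshableStream s : streams) {
      if (done >= maxRefreshesPerRound) {
        break;  // rate limit: defer the rest to the next round
      }
      if (s.hasLocalReplica() && !s.hasDeadNodes()) {
        continue;  // healthy stream, nothing to do
      }
      s.refreshBlockLocations();
      done++;
    }
  }

  public void shutdown() { scheduler.shutdownNow(); }
}
```

Because every round visits the whole set, all streams are guaranteed to be refreshed within a bounded number of periods, which is the "refreshed by a given time" property the comment asks for.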
[jira] [Commented] (HDFS-16259) Catch and re-throw sub-classes of AccessControlException thrown by any permission provider plugins (eg Ranger)
[ https://issues.apache.org/jira/browse/HDFS-16259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425497#comment-17425497 ]

Ayush Saxena commented on HDFS-16259:
-------------------------------------

{quote}I think it can be argued both ways. HDFS should have made AccessControlException final so it was clear what Ranger should do
{quote}
Definitely yes, that is what the Ranger folks will tell us if we blame them. :)
{quote}but we cannot do that now as it will break Ranger
{quote}
HDFS would break as well; we too have {{SnapshotAccessControlException}} and {{TraverseAccessControlException}}. I am not sure how this inconsistency got introduced or what the reasons for it were; we would need to pull in the author of that code, and I am not sure how old it is.

What do you think about compatibility? Whether you unwrap at the DFSClient or convert to an ACE at the NameNode, I think the compatibility guidelines would be broken either way.

Regarding changing the DFSClient: a couple of APIs still call {{re.unwrapRemoteException();}} (eg. the snapshot ones), whether we do it for this issue or not. I think we should do this someday, even though it is a breaking change; why should we unwrap only a selective set of exceptions?

> Catch and re-throw sub-classes of AccessControlException thrown by any
> permission provider plugins (eg Ranger)
> ----------------------------------------------------------------------
>
>                 Key: HDFS-16259
>                 URL: https://issues.apache.org/jira/browse/HDFS-16259
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Stephen O'Donnell
>            Assignee: Stephen O'Donnell
>            Priority: Major
>
> When a permission provider plugin is enabled (eg Ranger) there are some
> scenarios where it can throw a sub-class of AccessControlException (eg
> RangerAccessControlException). If this exception is allowed to propagate up
> the stack, it can cause problems in the HDFS client when it unwraps the
> remote exception containing the AccessControlException sub-class.
> Ideally, we should make AccessControlException final so it cannot be
> sub-classed, but that would be a breaking change at this point. Therefore I
> believe the safest thing to do is to catch any AccessControlException that
> comes out of the permission enforcer plugin and re-throw an
> AccessControlException instead.
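The unwrap behaviour the thread keeps referring to lives in `org.apache.hadoop.ipc.RemoteException#unwrapRemoteException` in the real codebase; the stripped-down sketch below (hypothetical classes, not the Hadoop code) illustrates why a server-side sub-class such as `RangerAccessControlException` is a problem: the wire carries only a class name, and if that name matches none of the classes the caller listed, the exception stays wrapped as a raw `RemoteException`.

```java
import java.io.IOException;

// Hypothetical, simplified illustration of client-side exception unwrapping.
// Not the actual org.apache.hadoop.ipc.RemoteException implementation.
public class UnwrapDemo {
  /** Stand-in for org.apache.hadoop.security.AccessControlException. */
  public static class AccessControlException extends IOException {
    public AccessControlException(String msg) { super(msg); }
  }

  /** Stand-in for the RPC layer's RemoteException: it records only the
   *  class NAME of the server-side exception, not the class itself. */
  public static class RemoteException extends IOException {
    private final String className;

    public RemoteException(String className, String msg) {
      super(msg);
      this.className = className;
    }

    /** If the recorded class name matches one of the expected types,
     *  reconstruct a typed exception; otherwise return this unchanged.
     *  A plugin sub-class (e.g. RangerAccessControlException) matches
     *  nothing the client expects, so it never becomes a typed ACE. */
    public IOException unwrapRemoteException(Class<?>... lookupTypes) {
      for (Class<?> t : lookupTypes) {
        if (t.getName().equals(className)
            && t == AccessControlException.class) {
          return new AccessControlException(getMessage());
        }
      }
      return this; // unknown class name: caller sees a RemoteException
    }
  }
}
```

This is exactly why converting to the base `AccessControlException` on the NameNode side (or, separately, unwrapping by inheritance on the client side) fixes the symptom: the class name on the wire then always matches a type the client knows.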
[jira] [Work logged] (HDFS-15042) Add more tests for ByteBufferPositionedReadable
[ https://issues.apache.org/jira/browse/HDFS-15042?focusedWorklogId=661533&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661533 ] ASF GitHub Bot logged work on HDFS-15042: - Author: ASF GitHub Bot Created on: 07/Oct/21 11:06 Start Date: 07/Oct/21 11:06 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #1747: URL: https://github.com/apache/hadoop/pull/1747#issuecomment-937687471 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 12m 13s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 1s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 55s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 50s | | trunk passed | | +1 :green_heart: | compile | 21m 14s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 18m 28s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 43s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 27s | | trunk passed | | +1 :green_heart: | javadoc | 3m 16s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 4m 17s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 8m 24s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 0s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 56s | | the patch passed | | +1 :green_heart: | compile | 20m 42s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 20m 42s | | the patch passed | | +1 :green_heart: | compile | 18m 31s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 18m 31s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 3m 37s | | root: The patch generated 0 new + 45 unchanged - 5 fixed = 45 total (was 50) | | +1 :green_heart: | mvnsite | 4m 22s | | the patch passed | | +1 :green_heart: | javadoc | 3m 13s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 4m 13s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 8m 53s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 2s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 40s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 39s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 228m 10s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 6s | | The patch does not generate ASF License warnings. 
| | | | 473m 52s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1747/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1747 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux b225aedf019b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 44494c7fb289a8935135d70350c4bf5148f1ef6d | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1747/1/testReport
[jira] [Work logged] (HDFS-16262) Async refresh of cached locations in DFSInputStream
[ https://issues.apache.org/jira/browse/HDFS-16262?focusedWorklogId=661457&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661457 ] ASF GitHub Bot logged work on HDFS-16262: - Author: ASF GitHub Bot Created on: 07/Oct/21 09:12 Start Date: 07/Oct/21 09:12 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3527: URL: https://github.com/apache/hadoop/pull/3527#issuecomment-937604480 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 59s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 43s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 26m 19s | | trunk passed | | +1 :green_heart: | compile | 5m 33s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 5m 10s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 16s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 30s | | trunk passed | | +1 :green_heart: | javadoc | 1m 44s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 10s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 6m 20s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 38s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 3s | | the patch passed | | +1 :green_heart: | compile | 4m 51s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 4m 51s | [/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3527/2/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 644 unchanged - 1 fixed = 645 total (was 645) | | +1 :green_heart: | compile | 4m 31s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 4m 31s | [/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3527/2/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 624 unchanged - 1 fixed = 625 total (was 625) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 1m 5s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3527/2/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 15 new + 105 unchanged - 0 fixed = 120 total (was 105) | | +1 :green_heart: | mvnsite | 2m 7s | | the patch passed | | +1 :green_heart: | javadoc | 1m 23s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 54s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 38s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 56s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 22s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 227m 0s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3527/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 39s |
[jira] [Commented] (HDFS-16258) HDFS-13671 breaks TestBlockManager in branch-3.2
[ https://issues.apache.org/jira/browse/HDFS-16258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425395#comment-17425395 ]

Wei-Chiu Chuang commented on HDFS-16258:
----------------------------------------

It's reproducible even prior to this change. I guess it's just an environmental issue for me.

> HDFS-13671 breaks TestBlockManager in branch-3.2
> ------------------------------------------------
>
>                 Key: HDFS-16258
>                 URL: https://issues.apache.org/jira/browse/HDFS-16258
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.2.3
>            Reporter: Wei-Chiu Chuang
>            Priority: Blocker
>
> TestBlockManager in branch-3.2 has two failed tests:
> * testDeleteCorruptReplicaWithStaleStorages
> * testBlockManagerMachinesArray
> Looks like they were broken by HDFS-13671. CC: [~brahmareddy]
> Branch-3.3 seems fine.
[jira] [Commented] (HDFS-16259) Catch and re-throw sub-classes of AccessControlException thrown by any permission provider plugins (eg Ranger)
[ https://issues.apache.org/jira/browse/HDFS-16259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425383#comment-17425383 ]

Stephen O'Donnell commented on HDFS-16259:
------------------------------------------

I think it can be argued both ways. HDFS should have made AccessControlException final so it was clear what Ranger should do, but we cannot do that now, as it would break Ranger and any other plugins that use this interface.

The HDFS client currently unwraps specific exceptions, so the change you suggested above may need to be made in quite a few places, and it could also change what the client returns in some circumstances. To me it seems safer to ensure that the access plugin's internal exceptions never reach the client by catching them at the NameNode. There is already some code that does this in FSPermissionChecker:

{code}
  void checkPermission(INode inode, int snapshotId, FsAction access)
      throws AccessControlException {
    byte[][] pathComponents = inode.getPathComponents();
    INodeAttributes nodeAttributes = getINodeAttrs(pathComponents,
        pathComponents.length - 1, inode, snapshotId);
    try {
      INodeAttributes[] iNodeAttr = {nodeAttributes};
      AccessControlEnforcer enforcer = getAccessControlEnforcer();
      String opType = operationType.get();
      if (this.authorizeWithContext && opType != null) {
        INodeAttributeProvider.AuthorizationContext.Builder builder =
            new INodeAttributeProvider.AuthorizationContext.Builder();
        builder.fsOwner(fsOwner)
            .supergroup(supergroup)
            .callerUgi(callerUgi)
            .inodeAttrs(iNodeAttr) // single inode attr in the array
            .inodes(new INode[] { inode }) // single inode in the array
            .pathByNameArr(pathComponents)
            .snapshotId(snapshotId)
            .path(null)
            .ancestorIndex(-1) // this will skip checkTraverse()
                               // because not checking ancestor here
            .doCheckOwner(false)
            .ancestorAccess(null)
            .parentAccess(null)
            .access(access) // the target access to be checked against the inode
            .subAccess(null) // passing null sub access avoids checking children
            .ignoreEmptyDir(false)
            .operationName(opType)
            .callerContext(CallerContext.getCurrent());
        enforcer.checkPermissionWithContext(builder.build());
      } else {
        enforcer.checkPermission(
            fsOwner, supergroup, callerUgi,
            iNodeAttr, // single inode attr in the array
            new INode[]{inode}, // single inode in the array
            pathComponents, snapshotId, null,
            -1, // this will skip checkTraverse() because
                // not checking ancestor here
            false, null, null,
            access, // the target access to be checked against the inode
            null, // passing null sub access avoids checking children
            false);
      }
    } catch (AccessControlException ace) {
      throw new AccessControlException(
          toAccessControlString(nodeAttributes, inode.getFullPathName(), access));
    }
  }
{code}

The enforcer is also called from this method:

{code}
  void checkPermission(INodesInPath inodesInPath, boolean doCheckOwner,
      FsAction ancestorAccess, FsAction parentAccess, FsAction access,
      FsAction subAccess, boolean ignoreEmptyDir)
      throws AccessControlException {
{code}

which does not catch it, so right now the behaviour is inconsistent across calls to the enforcer.

> Catch and re-throw sub-classes of AccessControlException thrown by any
> permission provider plugins (eg Ranger)
> ----------------------------------------------------------------------
>
>                 Key: HDFS-16259
>                 URL: https://issues.apache.org/jira/browse/HDFS-16259
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Stephen O'Donnell
>            Assignee: Stephen O'Donnell
>            Priority: Major
>
> When a permission provider plugin is enabled (eg Ranger) there are some
> scenarios where it can throw a sub-class of AccessControlException (eg
> RangerAccessControlException). If this exception is allowed to propagate up
> the stack, it can cause problems in the HDFS client when it unwraps the
> remote exception containing the AccessControlException sub-class.
> Ideally, we should make AccessControlException final so it cannot be
> sub-classed, but that would be a breaking change at this point. Therefore I
> believe the safest thing to do is to catch any AccessControlException that
> comes out of the permission enforcer plugin and re-throw an
> AccessControlException instead.
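The catch-and-rethrow approach described above, combined with the earlier suggestion to keep the original exception as the cause so it is not lost, can be sketched as follows. This is a hypothetical, self-contained illustration with stand-in classes, not the actual FSPermissionChecker patch.

```java
import java.io.IOException;

// Hypothetical sketch: any AccessControlException sub-class thrown by a
// permission-enforcer plugin is caught at the NameNode boundary and
// re-thrown as the plain base type, with the plugin's exception preserved
// as the cause so no detail is lost.
public class EnforcerBoundary {
  /** Stand-in for org.apache.hadoop.security.AccessControlException. */
  public static class AccessControlException extends IOException {
    public AccessControlException(String msg) { super(msg); }
  }

  /** Stand-in for a plugin sub-class like RangerAccessControlException. */
  public static class PluginAccessControlException extends AccessControlException {
    public PluginAccessControlException(String msg) { super(msg); }
  }

  /** Stand-in for the AccessControlEnforcer plugin interface. */
  public interface Enforcer {
    void checkPermission(String path) throws AccessControlException;
  }

  /** Wraps the plugin call so only the base type ever crosses the RPC
   *  boundary; sub-classes the client cannot load never escape. */
  public static void checkWithPlugin(Enforcer plugin, String path)
      throws AccessControlException {
    try {
      plugin.checkPermission(path);
    } catch (AccessControlException ace) {
      if (ace.getClass() == AccessControlException.class) {
        throw ace; // already the base type, pass through unchanged
      }
      AccessControlException rethrown =
          new AccessControlException(ace.getMessage());
      rethrown.initCause(ace); // keep the original exception as the cause
      throw rethrown;
    }
  }
}
```

Doing this once at the boundary also removes the inconsistency noted above, where only some of the `checkPermission` overloads catch the enforcer's exception.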
[jira] [Work logged] (HDFS-16257) [HDFS] [RBF] Guava cache performance issue in Router MountTableResolver
[ https://issues.apache.org/jira/browse/HDFS-16257?focusedWorklogId=661362&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-661362 ] ASF GitHub Bot logged work on HDFS-16257: - Author: ASF GitHub Bot Created on: 07/Oct/21 07:25 Start Date: 07/Oct/21 07:25 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3524: URL: https://github.com/apache/hadoop/pull/3524#issuecomment-937526316 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-2.10 Compile Tests _ | | +1 :green_heart: | mvninstall | 14m 56s | branch-2.10 passed | | +1 :green_heart: | compile | 0m 35s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | compile | 0m 29s | branch-2.10 passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | +1 :green_heart: | checkstyle | 0m 21s | branch-2.10 passed | | +1 :green_heart: | mvnsite | 0m 35s | branch-2.10 passed | | +1 :green_heart: | javadoc | 0m 49s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 36s | branch-2.10 passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | +0 :ok: | spotbugs | 3m 33s | Both FindBugs and SpotBugs are enabled, using SpotBugs. 
| | +1 :green_heart: | spotbugs | 1m 5s | branch-2.10 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 27s | the patch passed | | +1 :green_heart: | compile | 0m 30s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javac | 0m 30s | the patch passed | | +1 :green_heart: | compile | 0m 24s | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | +1 :green_heart: | javac | 0m 24s | the patch passed | | +1 :green_heart: | checkstyle | 0m 14s | the patch passed | | +1 :green_heart: | mvnsite | 0m 28s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | javadoc | 0m 41s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 29s | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | +1 :green_heart: | spotbugs | 1m 10s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 17m 6s | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 25s | The patch does not generate ASF License warnings. 
| | | | 46m 4s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3524/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3524 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle | | uname | Linux 2c1fa729faf1 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-2.10 / dc03afc | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3524/6/testReport/ | | Max. process+thread count | 1434 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3524/6/console | | versions | git=2.7.4 maven=3.3.9 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment