[jira] [Work logged] (HDFS-16539) RBF: Support refreshing/changing router fairness policy controller without rebooting router
[ https://issues.apache.org/jira/browse/HDFS-16539?focusedWorklogId=770043&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-770043 ]

ASF GitHub Bot logged work on HDFS-16539:
Author: ASF GitHub Bot
Created on: 13/May/22 06:54
Start Date: 13/May/22 06:54
Worklog Time Spent: 10m

Work Description: kokonguyen191 opened a new pull request, #4307:
URL: https://github.com/apache/hadoop/pull/4307

### Description of PR
Add a `DynamicRouterRpcFairnessPolicyController` class that resizes permit capacity periodically based on traffic to namespaces, plus minor fixes to make it work with HDFS-16539.

### How was this patch tested?
Unit tests, local deployment, and modelling of the performance improvement.

### For code changes:
Added a few fixes in comparison with https://github.com/apache/hadoop/pull/4168:
- Fix the dead executor when the dynamic controller is created more than once
- Fix controller refresh, which did not work with the dynamic controller

Issue Time Tracking
Worklog Id: (was: 770043)
Time Spent: 2h 10m (was: 2h)

> RBF: Support refreshing/changing router fairness policy controller without rebooting router
>
> Key: HDFS-16539
> URL: https://issues.apache.org/jira/browse/HDFS-16539
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Felix N
> Assignee: Felix N
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.4.0
> Time Spent: 2h 10m
> Remaining Estimate: 0h
>
> Add support for refreshing/changing the router fairness policy controller without the need to reboot a router.

--
This message was sent by Atlassian Jira (v8.20.7#820007)
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
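[Editorial note: the PR description above sketches a controller that periodically resizes per-namespace permit capacity from observed traffic. The following is a minimal, hypothetical Java sketch of that idea only; the names (`PermitPool`, `recordCall`, `rebalance`, `availablePermits`) are illustrative and are not the actual `DynamicRouterRpcFairnessPolicyController` API. The single scheduler that is explicitly shut down echoes the "dead executor when created more than once" fix mentioned in the PR.]

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: one Semaphore of RPC permits per nameservice,
// resized periodically in proportion to the traffic seen since the
// last refresh. Not the real Hadoop implementation.
public class PermitPool {
    private final int totalPermits;
    private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();
    private final Map<String, Long> callCounts = new ConcurrentHashMap<>();
    private final ScheduledExecutorService refresher =
        Executors.newSingleThreadScheduledExecutor();

    public PermitPool(int totalPermits, long refreshIntervalMs) {
        this.totalPermits = totalPermits;
        refresher.scheduleAtFixedRate(this::rebalance,
            refreshIntervalMs, refreshIntervalMs, TimeUnit.MILLISECONDS);
    }

    // Called on every proxied RPC to attribute traffic to a nameservice.
    public void recordCall(String ns) {
        callCounts.merge(ns, 1L, Long::sum);
        permits.computeIfAbsent(ns, k -> new Semaphore(1));
    }

    // Re-create each namespace's semaphore sized by its share of traffic,
    // then reset the counters for the next window.
    public synchronized void rebalance() {
        long total = callCounts.values().stream().mapToLong(Long::longValue).sum();
        if (total == 0) {
            return;
        }
        callCounts.forEach((ns, count) -> {
            int share = (int) Math.max(1, (long) totalPermits * count / total);
            permits.put(ns, new Semaphore(share));
        });
        callCounts.replaceAll((ns, c) -> 0L);
    }

    public int availablePermits(String ns) {
        Semaphore s = permits.get(ns);
        return s == null ? 0 : s.availablePermits();
    }

    // Shut the scheduler down when the controller is replaced, so a
    // re-created controller does not leak a dead executor.
    public void shutdown() {
        refresher.shutdownNow();
    }
}
```

With 100 total permits and a 30/10 call split between `ns0` and `ns1`, a rebalance would leave `ns0` with 75 permits and `ns1` with 25.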
[jira] [Work logged] (HDFS-14750) RBF: Improved isolation for downstream name nodes. {Dynamic}
[ https://issues.apache.org/jira/browse/HDFS-14750?focusedWorklogId=770041&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-770041 ]

ASF GitHub Bot logged work on HDFS-14750:
Author: ASF GitHub Bot
Created on: 13/May/22 06:47
Start Date: 13/May/22 06:47
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4306:
URL: https://github.com/apache/hadoop/pull/4306#issuecomment-1125712338

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:-------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 0s | | Docker mode activated. |
| -1 :x: | patch | 0m 20s | | https://github.com/apache/hadoop/pull/4306 does not apply to trunk. Rebase required? Wrong branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/4306 |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4306/1/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

Issue Time Tracking
Worklog Id: (was: 770041)
Time Spent: 3h 10m (was: 3h)

> RBF: Improved isolation for downstream name nodes. {Dynamic}
>
> Key: HDFS-14750
> URL: https://issues.apache.org/jira/browse/HDFS-14750
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: CR Hota
> Assignee: CR Hota
> Priority: Major
> Labels: pull-request-available
> Time Spent: 3h 10m
> Remaining Estimate: 0h
>
> This Jira tracks the work around dynamic allocation of resources in routers for downstream hdfs clusters.
[jira] [Work logged] (HDFS-14750) RBF: Improved isolation for downstream name nodes. {Dynamic}
[ https://issues.apache.org/jira/browse/HDFS-14750?focusedWorklogId=770039&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-770039 ]

ASF GitHub Bot logged work on HDFS-14750:
Author: ASF GitHub Bot
Created on: 13/May/22 06:44
Start Date: 13/May/22 06:44
Worklog Time Spent: 10m

Work Description: ferhui opened a new pull request, #4306:
URL: https://github.com/apache/hadoop/pull/4306
Reverts apache/hadoop#4199

Issue Time Tracking
Worklog Id: (was: 770039)
Time Spent: 2h 50m (was: 2h 40m)
[jira] [Work logged] (HDFS-14750) RBF: Improved isolation for downstream name nodes. {Dynamic}
[ https://issues.apache.org/jira/browse/HDFS-14750?focusedWorklogId=770040&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-770040 ]

ASF GitHub Bot logged work on HDFS-14750:
Author: ASF GitHub Bot
Created on: 13/May/22 06:44
Start Date: 13/May/22 06:44
Worklog Time Spent: 10m

Work Description: ferhui merged PR #4306:
URL: https://github.com/apache/hadoop/pull/4306

Issue Time Tracking
Worklog Id: (was: 770040)
Time Spent: 3h (was: 2h 50m)
[jira] [Work logged] (HDFS-14750) RBF: Improved isolation for downstream name nodes. {Dynamic}
[ https://issues.apache.org/jira/browse/HDFS-14750?focusedWorklogId=770036&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-770036 ]

ASF GitHub Bot logged work on HDFS-14750:
Author: ASF GitHub Bot
Created on: 13/May/22 06:28
Start Date: 13/May/22 06:28
Worklog Time Spent: 10m

Work Description: ferhui merged PR #4199:
URL: https://github.com/apache/hadoop/pull/4199

Issue Time Tracking
Worklog Id: (was: 770036)
Time Spent: 2h 40m (was: 2.5h)
[jira] [Work logged] (HDFS-16570) RBF: The router using MultipleDestinationMountTableResolver remove Multiple subcluster data under the mount point failed
[ https://issues.apache.org/jira/browse/HDFS-16570?focusedWorklogId=769998&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769998 ]

ASF GitHub Bot logged work on HDFS-16570:
Author: ASF GitHub Bot
Created on: 13/May/22 02:31
Start Date: 13/May/22 02:31
Worklog Time Spent: 10m

Work Description: zhangxiping1 commented on PR #4269:
URL: https://github.com/apache/hadoop/pull/4269#issuecomment-1125597219
@goiri @ferhui Can you take a look at the PR? Thanks

Issue Time Tracking
Worklog Id: (was: 769998)
Time Spent: 1h (was: 50m)

> RBF: The router using MultipleDestinationMountTableResolver remove Multiple subcluster data under the mount point failed
>
> Key: HDFS-16570
> URL: https://issues.apache.org/jira/browse/HDFS-16570
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: rbf
> Reporter: Xiping Zhang
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> Please look at the following example:
> hadoop> hdfs dfsrouteradmin -add /home/data ns0,ns1 /home/data -order RANDOM
> Successfully removed mount point /home/data
> hadoop> hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source: /home/data  Destinations: ns0->/home/data,ns1->/home/data  Owner: zhangxiping  Group: Administrators  Mode: rwxr-xr-x  Quota/Usage: [NsQuota: -/-, SsQuota: -/-]
> hadoop> hdfs dfs -touch hdfs://ns0/home/data/test/fileNs0.txt
> hadoop> hdfs dfs -touch hdfs://ns1/home/data/test/fileNs1.txt
> hadoop> hdfs dfs -ls hdfs://ns0/home/data/test/fileNs0.txt
> -rw-r--r--   3 zhangxiping supergroup   0 2022-05-06 18:01 hdfs://ns0/home/data/test/fileNs0.txt
> hadoop> hdfs dfs -ls hdfs://ns1/home/data/test/fileNs1.txt
> -rw-r--r--   3 zhangxiping supergroup   0 2022-05-06 18:01 hdfs://ns1/home/data/test/fileNs1.txt
> hadoop> hdfs dfs -ls hdfs://127.0.0.1:40250/home/data/test
> Found 2 items
> -rw-r--r--   3 zhangxiping supergroup   0 2022-05-06 18:01 hdfs://127.0.0.1:40250/home/data/test/fileNs0.txt
> -rw-r--r--   3 zhangxiping supergroup   0 2022-05-06 18:01 hdfs://127.0.0.1:40250/home/data/test/fileNs1.txt
> hadoop> hdfs dfs -rm -r hdfs://127.0.0.1:40250/home/data/test
> rm: Failed to move to trash: hdfs://127.0.0.1:40250/home/data/test: rename destination parent /user/zhangxiping/.Trash/Current/home/data/test not found.
[jira] [Updated] (HDFS-16272) Int overflow in computing safe length during EC block recovery
[ https://issues.apache.org/jira/browse/HDFS-16272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDFS-16272:
Component/s: ec, erasure-coding

> Int overflow in computing safe length during EC block recovery
>
> Key: HDFS-16272
> URL: https://issues.apache.org/jira/browse/HDFS-16272
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: ec, erasure-coding
> Affects Versions: 3.3.0, 3.3.1
> Environment: Cluster settings: EC RS-8-2-256k, Block Size 1GiB.
> Reporter: daimin
> Assignee: daimin
> Priority: Critical
> Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.2
> Time Spent: 1.5h
> Remaining Estimate: 0h
>
> There exists an int overflow problem in StripedBlockUtil#getSafeLength, which can produce a negative or zero length:
> 1. With a negative length, it fails the later >= 0 check and crashes the BlockRecoveryWorker thread, which makes the lease recovery operation unable to finish.
> 2. With a zero length, it passes the check and directly truncates the block size to zero, leading to data loss.
> If you are using any of the default EC policies (3-2, 6-3 or 10-4) and the default HDFS block size of 128 MB, then you will not be impacted by this issue.
> To be impacted, the EC dataNumber * blockSize has to be larger than the Java max int of 2,147,483,647.
> For example, 10-4 with 128 MB blocks is 10 * 134,217,728 = 1,342,177,280, which is OK.
> However, 10-4 with 256 MB blocks is 2,684,354,560, which overflows the int and causes the problem.
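[Editorial note: the overflow arithmetic described above is easy to reproduce. The following is a small, self-contained Java illustration of the bug class and the usual fix (widening to `long` before multiplying); it is not the actual `StripedBlockUtil#getSafeLength` code, and the method names here are illustrative.]

```java
public class SafeLengthOverflow {
    // Multiplying the EC data-unit count by the block size in int
    // arithmetic wraps around once the product exceeds
    // Integer.MAX_VALUE (2,147,483,647).
    static int unsafeProduct(int dataNumber, int blockSize) {
        return dataNumber * blockSize; // wraps negative for 10 * 256 MiB
    }

    // The fix: widen one operand to long before multiplying.
    static long safeProduct(int dataNumber, int blockSize) {
        return (long) dataNumber * blockSize;
    }

    public static void main(String[] args) {
        int block128 = 128 * 1024 * 1024; // 134,217,728 bytes
        int block256 = 256 * 1024 * 1024; // 268,435,456 bytes
        System.out.println(unsafeProduct(10, block128)); // 1342177280, still fits
        System.out.println(unsafeProduct(10, block256)); // wraps to -1610612736
        System.out.println(safeProduct(10, block256));   // 2684354560, correct
    }
}
```

The negative wrapped value is what trips the `>= 0` check and crashes the recovery worker, exactly as the report describes.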
[jira] [Work logged] (HDFS-16540) Data locality is lost when DataNode pod restarts in kubernetes
[ https://issues.apache.org/jira/browse/HDFS-16540?focusedWorklogId=769499&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769499 ]

ASF GitHub Bot logged work on HDFS-16540:
Author: ASF GitHub Bot
Created on: 12/May/22 09:26
Start Date: 12/May/22 09:26
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4246:
URL: https://github.com/apache/hadoop/pull/4246#issuecomment-1124741659

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:-------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 38s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | _ branch-3.3 Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 36m 42s | | branch-3.3 passed |
| +1 :green_heart: | compile | 1m 33s | | branch-3.3 passed |
| +1 :green_heart: | checkstyle | 1m 13s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 1m 40s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 1m 58s | | branch-3.3 passed |
| +1 :green_heart: | spotbugs | 3m 39s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 27m 1s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 1m 20s | | the patch passed |
| +1 :green_heart: | compile | 1m 16s | | the patch passed |
| +1 :green_heart: | javac | 1m 16s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 50s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 23s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 26s | | the patch passed |
| +1 :green_heart: | spotbugs | 3m 19s | | the patch passed |
| +1 :green_heart: | shadedclient | 26m 13s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 193m 17s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 15s | | The patch does not generate ASF License warnings. |
| | | 302m 10s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/11/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4246 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 14ee2742708c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / 53773ea019ca5ed793d36035c7adbfe589f5926c |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/11/testReport/ |
| Max. process+thread count | 2983 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/11/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

Issue Time Tracking
Worklog Id: (was: 769499)
Time Spent: 6h (was: 5h 50m)

> Data locality is lost when DataNode pod restarts in kubernetes
>
> Key: HDFS-16540
> URL: https://issues.apache.org/jira/browse/HDFS-16540
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 3.3.2
> Reporter: Huaxiang Sun
> Assignee: Huaxiang
[jira] [Work logged] (HDFS-16525) System.err should be used when error occurs in multiple methods in DFSAdmin class
[ https://issues.apache.org/jira/browse/HDFS-16525?focusedWorklogId=769498&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769498 ]

ASF GitHub Bot logged work on HDFS-16525:
Author: ASF GitHub Bot
Created on: 12/May/22 09:16
Start Date: 12/May/22 09:16
Worklog Time Spent: 10m

Work Description: singer-bin commented on PR #4122:
URL: https://github.com/apache/hadoop/pull/4122#issuecomment-1124732420
Thanks @ferhui

Issue Time Tracking
Worklog Id: (was: 769498)
Time Spent: 2h 10m (was: 2h)

> System.err should be used when error occurs in multiple methods in DFSAdmin class
>
> Key: HDFS-16525
> URL: https://issues.apache.org/jira/browse/HDFS-16525
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: dfsadmin
> Affects Versions: 3.3.2
> Reporter: yanbin.zhang
> Assignee: yanbin.zhang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
> Time Spent: 2h 10m
> Remaining Estimate: 0h
>
> System.err should be used when an error occurs in multiple methods of the DFSAdmin class, as follows:
> {code:java}
> // DFSAdmin#refreshCallQueue
> ...
> try {
>   proxy.getProxy().refreshCallQueue();
>   System.out.println("Refresh call queue successful for " + proxy.getAddress());
> } catch (IOException ioe) {
>   System.out.println("Refresh call queue failed for " + proxy.getAddress());
>   exceptions.add(ioe);
> }
> ...
> {code}
> The test method closed first in TestDFSAdminWithHA also needs to be modified, otherwise an error will be reported, similar to the following:
> {code:java}
> [ERROR] Failures:
> [ERROR] TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Up:726->assertOutputMatches:77
> Expected output to match 'Refresh call queue failed for.*
> Refresh call queue successful for.*
> ' but err_output was:
> Refresh call queue failed for localhost/127.0.0.1:12876
> refreshCallQueue: Call From h110/10.1.234.110 to localhost:12876 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused and output was:
> Refresh call queue successful for localhost/127.0.0.1:12878{code}
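[Editorial note: the fix this issue describes amounts to routing failure messages to the error stream so that the test's err_output matching sees them. Below is a minimal runnable sketch of the corrected pattern; the `refresh`/`refreshCallQueue` stand-ins are hypothetical and replace the real DFSAdmin proxy plumbing.]

```java
import java.io.IOException;

public class RefreshCallQueueDemo {
    // Hypothetical stand-in for the RPC call to a NameNode proxy.
    static void refresh(boolean fail) throws IOException {
        if (fail) {
            throw new IOException("Connection refused");
        }
    }

    // The corrected pattern: success goes to stdout, failure to stderr.
    static void refreshCallQueue(String addr, boolean fail) {
        try {
            refresh(fail);
            System.out.println("Refresh call queue successful for " + addr);
        } catch (IOException ioe) {
            System.err.println("Refresh call queue failed for " + addr);
        }
    }

    public static void main(String[] args) {
        refreshCallQueue("localhost/127.0.0.1:12878", false); // stdout
        refreshCallQueue("localhost/127.0.0.1:12876", true);  // stderr
    }
}
```

Keeping the two streams separate is what lets a test assert on expected failures in err_output while still matching successes in regular output.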
[jira] [Resolved] (HDFS-16525) System.err should be used when error occurs in multiple methods in DFSAdmin class
[ https://issues.apache.org/jira/browse/HDFS-16525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hui Fei resolved HDFS-16525.
Fix Version/s: 3.4.0
Resolution: Fixed
[jira] [Work logged] (HDFS-16525) System.err should be used when error occurs in multiple methods in DFSAdmin class
[ https://issues.apache.org/jira/browse/HDFS-16525?focusedWorklogId=769489&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769489 ]

ASF GitHub Bot logged work on HDFS-16525:
Author: ASF GitHub Bot
Created on: 12/May/22 08:54
Start Date: 12/May/22 08:54
Worklog Time Spent: 10m

Work Description: ferhui commented on PR #4122:
URL: https://github.com/apache/hadoop/pull/4122#issuecomment-1124709825
@singer-bin Thanks for your contribution! @ayushtkn @tomscut Thanks for your reviews! Merged

Issue Time Tracking
Worklog Id: (was: 769489)
Time Spent: 2h (was: 1h 50m)
[jira] [Work logged] (HDFS-16525) System.err should be used when error occurs in multiple methods in DFSAdmin class
[ https://issues.apache.org/jira/browse/HDFS-16525?focusedWorklogId=769488&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769488 ]

ASF GitHub Bot logged work on HDFS-16525:
Author: ASF GitHub Bot
Created on: 12/May/22 08:53
Start Date: 12/May/22 08:53
Worklog Time Spent: 10m

Work Description: ferhui merged PR #4122:
URL: https://github.com/apache/hadoop/pull/4122

Issue Time Tracking
Worklog Id: (was: 769488)
Time Spent: 1h 50m (was: 1h 40m)
[jira] [Work logged] (HDFS-16456) EC: Decommission a rack with only on dn will fail when the rack number is equal with replication
[ https://issues.apache.org/jira/browse/HDFS-16456?focusedWorklogId=769465&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769465 ]

ASF GitHub Bot logged work on HDFS-16456:
Author: ASF GitHub Bot
Created on: 12/May/22 08:14
Start Date: 12/May/22 08:14
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4304:
URL: https://github.com/apache/hadoop/pull/4304#issuecomment-1124671782

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:-------:|:-------:|:-------:|
| +0 :ok: | reexec | 11m 3s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| | _ branch-3.3 Compile Tests _ | | | |
| +0 :ok: | mvndep | 15m 42s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 23m 44s | | branch-3.3 passed |
| +1 :green_heart: | compile | 21m 44s | | branch-3.3 passed |
| +1 :green_heart: | checkstyle | 3m 47s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 4m 25s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 4m 33s | | branch-3.3 passed |
| +1 :green_heart: | spotbugs | 6m 51s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 28m 26s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +0 :ok: | mvndep | 0m 32s | | Maven dependency ordering for patch |
| -1 :x: | mvninstall | 0m 49s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4304/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| -1 :x: | compile | 2m 50s | [/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4304/1/artifact/out/patch-compile-root.txt) | root in the patch failed. |
| -1 :x: | javac | 2m 50s | [/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4304/1/artifact/out/patch-compile-root.txt) | root in the patch failed. |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 2m 29s | | the patch passed |
| -1 :x: | mvnsite | 0m 54s | [/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4304/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | javadoc | 3m 13s | | the patch passed |
| -1 :x: | spotbugs | 0m 50s | [/patch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4304/1/artifact/out/patch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| -1 :x: | shadedclient | 12m 40s | | patch has errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | unit | 17m 52s | | hadoop-common in the patch passed. |
| -1 :x: | unit | 0m 50s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4304/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. |
| | | 167m 55s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4304/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4304 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 8d3f7941228c 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / c387e506e8d0fb76d000ec507f9b44bcaee4fd69 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4304/1/testReport/ |
| Max. process+thread count | 1267 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common
[jira] [Work logged] (HDFS-13522) RBF: Support observer node from Router-Based Federation
[ https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=769463&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769463 ]

ASF GitHub Bot logged work on HDFS-13522:
Author: ASF GitHub Bot
Created on: 12/May/22 08:09
Start Date: 12/May/22 08:09
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4127:
URL: https://github.com/apache/hadoop/pull/4127#issuecomment-1124665943

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:-------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 50s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 12 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +0 :ok: | mvndep | 15m 39s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 27m 37s | | trunk passed |
| +1 :green_heart: | compile | 25m 6s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 21m 43s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 4m 41s | | trunk passed |
| +1 :green_heart: | mvnsite | 6m 37s | | trunk passed |
| -1 :x: | javadoc | 1m 36s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/8/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-common in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 6m 36s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 11m 51s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 57s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 1s | | the patch passed |
| +1 :green_heart: | compile | 24m 5s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 24m 5s | | the patch passed |
| +1 :green_heart: | compile | 21m 34s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 21m 34s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 4m 55s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/8/artifact/out/results-checkstyle-root.txt) | root: The patch generated 2 new + 340 unchanged - 1 fixed = 342 total (was 341) |
| +1 :green_heart: | mvnsite | 7m 16s | | the patch passed |
| +1 :green_heart: | xml | 0m 3s | | The patch has no ill-formed XML file. |
| -1 :x: | javadoc | 1m 47s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/8/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-common in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 7m 35s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 13m 11s | | the patch passed |
| +1 :green_heart: | shadedclient | 25m 6s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | unit | 18m 16s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 2m 55s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 364m 41s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | unit | 34m 57s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 1m 35s | | The patch