[jira] [Updated] (HDFS-15053) RBF: Add permission check for safemode operation
[ https://issues.apache.org/jira/browse/HDFS-15053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He updated HDFS-15053: --- Attachment: HDFS-15053.002.patch > RBF: Add permission check for safemode operation > > > Key: HDFS-15053 > URL: https://issues.apache.org/jira/browse/HDFS-15053 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15053.001.patch, HDFS-15053.002.patch > > > Propose to add superuser permission check for safemode operation. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15058) TestFsck.testFsckListCorruptFilesBlocks and TestFsck.testFsckListCorruptSnapshotFiles fail some times
[ https://issues.apache.org/jira/browse/HDFS-15058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995435#comment-16995435 ] hemanthboyina commented on HDFS-15058: -- Hi [~seanlau], this issue was already raised in HDFS-15038. > TestFsck.testFsckListCorruptFilesBlocks and > TestFsck.testFsckListCorruptSnapshotFiles fail some times > - > > Key: HDFS-15058 > URL: https://issues.apache.org/jira/browse/HDFS-15058 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: liusheng >Priority: Major > > When I try to run the HDFS tests, the > *TestFsck.testFsckListCorruptFilesBlocks* and > *TestFsck.testFsckListCorruptSnapshotFiles* tests fail easily; see: > {code:java} > 06:26:38 [ERROR] Failures: > 06:26:38 [ERROR] TestFsck.testFsckListCorruptFilesBlocks:1167 > 06:26:38 [ERROR] TestFsck.testFsckListCorruptSnapshotFiles:2167 > 06:26:38 [INFO] > 06:26:38 [ERROR] Tests run: 33, Failures: 2, Errors: 0, Skipped: 0 > {code} > Both test failures occur mainly because the tests check the > number of corrupt files after sleeping *1000 ms*, and the count does not match the > expected value. See: > {noformat} > blk_1073741825 /corruptData/8117051706407353421 > blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 > The filesystem under path '/corruptData' has 2 CORRUPT files2. 
bad fsck > include snapshot out: The list of corrupt files under path '/corruptData' are: > blk_1073741825 /corruptData/8117051706407353421 > blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 > The filesystem under path '/corruptData' has 2 CORRUPT files2019-12-13 > 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster > (MiniDFSCluster.java:shutdown(2067)) - Shutting down the Mini HDFS Cluster > 2019-12-13 06:26:35,808 [Listener at localhost/44367] INFO > hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNode(2115)) - Shutting > down DataNode 0 > {noformat} > To fix these two tests, we need to enlarge the 1000 ms sleep time; > according to my testing, enlarging it to *5000* ms makes the tests > pass every time. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15058) TestFsck.testFsckListCorruptFilesBlocks and TestFsck.testFsckListCorruptSnapshotFiles fail some times
[ https://issues.apache.org/jira/browse/HDFS-15058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liusheng updated HDFS-15058: Description: When I try to run the HDFS tests, the *TestFsck.testFsckListCorruptFilesBlocks* and *TestFsck.testFsckListCorruptSnapshotFiles* tests fail easily; see: {code:java} 06:26:38 [ERROR] Failures: 06:26:38 [ERROR] TestFsck.testFsckListCorruptFilesBlocks:1167 06:26:38 [ERROR] TestFsck.testFsckListCorruptSnapshotFiles:2167 06:26:38 [INFO] 06:26:38 [ERROR] Tests run: 33, Failures: 2, Errors: 0, Skipped: 0 {code} Both test failures occur mainly because the tests check the number of corrupt files after sleeping *1000 ms*, and the count does not match the expected value. See: {noformat} blk_1073741825 /corruptData/8117051706407353421 blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 The filesystem under path '/corruptData' has 2 CORRUPT files2. bad fsck include snapshot out: The list of corrupt files under path '/corruptData' are: blk_1073741825 /corruptData/8117051706407353421 blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 The filesystem under path '/corruptData' has 2 CORRUPT files2019-12-13 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(2067)) - Shutting down the Mini HDFS Cluster 2019-12-13 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNode(2115)) - Shutting down DataNode 0 {noformat} To fix these two tests, we need to enlarge the 1000 ms sleep time; according to my testing, enlarging it to *5000* ms makes the tests pass every time. 
was: when I am try to run the tests of HDFS, the *TestFsck.testFsckListCorruptFilesBlocks* and *TestFsck.testFsckListCorruptSnapshotFiles* tests are easy to fail, see: {code:java} 06:26:38 [ERROR] Failures: 06:26:38 [ERROR] TestFsck.testFsckListCorruptFilesBlocks:1167 06:26:38 [ERROR] TestFsck.testFsckListCorruptSnapshotFiles:2167 06:26:38 [INFO] 06:26:38 [ERROR] Tests run: 33, Failures: 2, Errors: 0, Skipped: 0 {code} Both of these two tests failures are mainly because the tests will check the number of corrupt files after sleep *1000 ms* and the number is not equal to expected. see: {noformat} blk_1073741825 /corruptData/8117051706407353421 blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 The filesystem under path '/corruptData' has 2 CORRUPT files2. bad fsck include snapshot out: The list of corrupt files under path '/corruptData' are: blk_1073741825 /corruptData/8117051706407353421 blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 The filesystem under path '/corruptData' has 2 CORRUPT files2019-12-13 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(2067)) - Shutting down the Mini HDFS Cluster 2019-12-13 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNode(2115)) - Shutting down DataNode 0 {noformat} To fix these two tests, we need to enlarge the sleep time of 1000 ms, according to my testing, enlaging the time to *5000* ms can make the tests passed every times. 
> TestFsck.testFsckListCorruptFilesBlocks and > TestFsck.testFsckListCorruptSnapshotFiles fail some times > - > > Key: HDFS-15058 > URL: https://issues.apache.org/jira/browse/HDFS-15058 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: liusheng >Priority: Major > > When I try to run the HDFS tests, the > *TestFsck.testFsckListCorruptFilesBlocks* and > *TestFsck.testFsckListCorruptSnapshotFiles* tests fail easily; see: > {code:java} > 06:26:38 [ERROR] Failures: > 06:26:38 [ERROR] TestFsck.testFsckListCorruptFilesBlocks:1167 > 06:26:38 [ERROR] TestFsck.testFsckListCorruptSnapshotFiles:2167 > 06:26:38 [INFO] > 06:26:38 [ERROR] Tests run: 33, Failures: 2, Errors: 0, Skipped: 0 > {code} > Both test failures occur mainly because the tests check the > number of corrupt files after sleeping *1000 ms*, and the count does not match the > expected value. See: > {noformat} > blk_1073741825 /corruptData/8117051706407353421 > blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 > The filesystem under path '/corruptData' has 2 CORRUPT files2. bad fsck > include snapshot out: The list of corrupt files under path '/corruptData' are: > blk_1073741825 /corruptData/8117051706407353421 > blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 > The filesystem under path '/corruptData' has 2 CORRUPT files2019-12-13 > 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster > (MiniDFSCluster.java:shutdown(2067)) - Shutting
[jira] [Created] (HDFS-15058) TestFsck.testFsckListCorruptFilesBlocks and TestFsck.testFsckListCorruptSnapshotFiles fail some times
liusheng created HDFS-15058: --- Summary: TestFsck.testFsckListCorruptFilesBlocks and TestFsck.testFsckListCorruptSnapshotFiles fail some times Key: HDFS-15058 URL: https://issues.apache.org/jira/browse/HDFS-15058 Project: Hadoop HDFS Issue Type: Bug Reporter: liusheng When I try to run the HDFS tests, the *TestFsck.testFsckListCorruptFilesBlocks* and *TestFsck.testFsckListCorruptSnapshotFiles* tests fail easily; see: {code:java} 06:26:38 [ERROR] Failures: 06:26:38 [ERROR] TestFsck.testFsckListCorruptFilesBlocks:1167 06:26:38 [ERROR] TestFsck.testFsckListCorruptSnapshotFiles:2167 06:26:38 [INFO] 06:26:38 [ERROR] Tests run: 33, Failures: 2, Errors: 0, Skipped: 0 {code} Both test failures occur mainly because the tests check the number of corrupt files after sleeping *1000 ms*, and the count does not match the expected value. See: {noformat} blk_1073741825 /corruptData/8117051706407353421 blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 The filesystem under path '/corruptData' has 2 CORRUPT files2. bad fsck include snapshot out: The list of corrupt files under path '/corruptData' are: blk_1073741825 /corruptData/8117051706407353421 blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 The filesystem under path '/corruptData' has 2 CORRUPT files2019-12-13 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(2067)) - Shutting down the Mini HDFS Cluster 2019-12-13 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNode(2115)) - Shutting down DataNode 0 {noformat} To fix these two tests, we need to enlarge the 1000 ms sleep time; according to my testing, enlarging it to 5000 ms ensures the tests pass every time. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
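Rather than enlarging a fixed sleep (which only moves the race), flaky timing checks like this are usually deflaked by polling the condition until it holds or a deadline passes; Hadoop's test utilities provide GenericTestUtils.waitFor for exactly this. A minimal self-contained sketch of such a polling helper, under the assumption that the test can re-evaluate the corrupt-file count cheaply (the class and method names here are illustrative, not the actual test code):

```java
import java.util.function.BooleanSupplier;

/** Illustrative polling helper: wait until a condition holds or a timeout expires. */
public class WaitFor {
    public static boolean waitFor(BooleanSupplier condition,
                                  long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;            // condition met before the deadline
            }
            Thread.sleep(intervalMs);   // back off briefly, then re-check
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }
}
```

The test could then poll the corrupt-file count every 100 ms with a 5000 ms deadline instead of sleeping a fixed 1000 ms: fast machines pass quickly, slow machines still pass within the budget.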
[jira] [Updated] (HDFS-15058) TestFsck.testFsckListCorruptFilesBlocks and TestFsck.testFsckListCorruptSnapshotFiles fail some times
[ https://issues.apache.org/jira/browse/HDFS-15058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liusheng updated HDFS-15058: Description: When I try to run the HDFS tests, the *TestFsck.testFsckListCorruptFilesBlocks* and *TestFsck.testFsckListCorruptSnapshotFiles* tests fail easily; see: {code:java} 06:26:38 [ERROR] Failures: 06:26:38 [ERROR] TestFsck.testFsckListCorruptFilesBlocks:1167 06:26:38 [ERROR] TestFsck.testFsckListCorruptSnapshotFiles:2167 06:26:38 [INFO] 06:26:38 [ERROR] Tests run: 33, Failures: 2, Errors: 0, Skipped: 0 {code} Both test failures occur mainly because the tests check the number of corrupt files after sleeping *1000 ms*, and the count does not match the expected value. See: {noformat} blk_1073741825 /corruptData/8117051706407353421 blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 The filesystem under path '/corruptData' has 2 CORRUPT files2. bad fsck include snapshot out: The list of corrupt files under path '/corruptData' are: blk_1073741825 /corruptData/8117051706407353421 blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 The filesystem under path '/corruptData' has 2 CORRUPT files2019-12-13 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(2067)) - Shutting down the Mini HDFS Cluster 2019-12-13 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNode(2115)) - Shutting down DataNode 0 {noformat} To fix these two tests, we need to enlarge the 1000 ms sleep time; according to my testing, enlarging it to *5000* ms makes the tests pass every time. 
was: when I am try to run the tests of HDFS, the *TestFsck.testFsckListCorruptFilesBlocks* and *TestFsck.testFsckListCorruptSnapshotFiles* tests are easy to fail, see: {code:java} 06:26:38 [ERROR] Failures: 06:26:38 [ERROR] TestFsck.testFsckListCorruptFilesBlocks:1167 06:26:38 [ERROR] TestFsck.testFsckListCorruptSnapshotFiles:2167 06:26:38 [INFO] 06:26:38 [ERROR] Tests run: 33, Failures: 2, Errors: 0, Skipped: 0 {code} Both of these two tests failures are mainly because the tests will check the number of corrupt files after sleep *1000 ms* and the number is not equal to expected. see: {noformat} blk_1073741825 /corruptData/8117051706407353421 blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 The filesystem under path '/corruptData' has 2 CORRUPT files2. bad fsck include snapshot out: The list of corrupt files under path '/corruptData' are: blk_1073741825 /corruptData/8117051706407353421 blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 The filesystem under path '/corruptData' has 2 CORRUPT files2019-12-13 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(2067)) - Shutting down the Mini HDFS Cluster 2019-12-13 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNode(2115)) - Shutting down DataNode 0 {noformat} To fix these two tests, we need to enlarge the sleep time of 1000 ms, according to my testing, enlaging the time to 5000 ms can ensure tests passed every times. 
> TestFsck.testFsckListCorruptFilesBlocks and > TestFsck.testFsckListCorruptSnapshotFiles fail some times > - > > Key: HDFS-15058 > URL: https://issues.apache.org/jira/browse/HDFS-15058 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: liusheng >Priority: Major > > When I try to run the HDFS tests, the > *TestFsck.testFsckListCorruptFilesBlocks* and > *TestFsck.testFsckListCorruptSnapshotFiles* tests fail easily; see: > {code:java} > 06:26:38 [ERROR] Failures: > 06:26:38 [ERROR] TestFsck.testFsckListCorruptFilesBlocks:1167 > 06:26:38 [ERROR] TestFsck.testFsckListCorruptSnapshotFiles:2167 > 06:26:38 [INFO] > 06:26:38 [ERROR] Tests run: 33, Failures: 2, Errors: 0, Skipped: 0 > {code} > Both test failures occur mainly because the tests check the > number of corrupt files after sleeping *1000 ms*, and the count does not match the > expected value. See: > {noformat} > blk_1073741825 /corruptData/8117051706407353421 > blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 > The filesystem under path '/corruptData' has 2 CORRUPT files2. bad fsck > include snapshot out: The list of corrupt files under path '/corruptData' are: > blk_1073741825 /corruptData/8117051706407353421 > blk_1073741825 /corruptData/.snapshot/mySnapShot/8117051706407353421 > The filesystem under path '/corruptData' has 2 CORRUPT files2019-12-13 > 06:26:35,808 [Listener at localhost/44367] INFO hdfs.MiniDFSCluster > (MiniDFSCluster.java:shutdown(2067)) - Shutting
[jira] [Commented] (HDFS-15048) Fix findbug in DirectoryScanner
[ https://issues.apache.org/jira/browse/HDFS-15048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995410#comment-16995410 ] Hadoop QA commented on HDFS-15048: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 17s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 41s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 | | JIRA Issue | HDFS-15048 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12988758/HDFS-15048.001.patch | | Optional Tests | dupname asflicense xml | | uname | Linux 2661c2f0cf0f 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 65c4660 | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 346 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/28522/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Fix findbug in DirectoryScanner > --- > > Key: HDFS-15048 > URL: https://issues.apache.org/jira/browse/HDFS-15048 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Takanobu Asanuma >Assignee: Masatake Iwasaki >Priority: Major > Attachments: HDFS-15048.001.patch > > > There is a findbug in DirectoryScanner. 
> {noformat} > Multithreaded correctness Warnings > org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() calls > Thread.sleep() with a lock held > Bug type SWL_SLEEP_WITH_LOCK_HELD (click for details) > In class org.apache.hadoop.hdfs.server.datanode.DirectoryScanner > In method org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() > At DirectoryScanner.java:[line 441] > {noformat} > https://builds.apache.org/job/PreCommit-HDFS-Build/28498/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15048) Fix findbug in DirectoryScanner
[ https://issues.apache.org/jira/browse/HDFS-15048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HDFS-15048: Status: Patch Available (was: Open) > Fix findbug in DirectoryScanner > --- > > Key: HDFS-15048 > URL: https://issues.apache.org/jira/browse/HDFS-15048 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Takanobu Asanuma >Assignee: Masatake Iwasaki >Priority: Major > Attachments: HDFS-15048.001.patch > > > There is a findbug in DirectoryScanner. > {noformat} > Multithreaded correctness Warnings > org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() calls > Thread.sleep() with a lock held > Bug type SWL_SLEEP_WITH_LOCK_HELD (click for details) > In class org.apache.hadoop.hdfs.server.datanode.DirectoryScanner > In method org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() > At DirectoryScanner.java:[line 441] > {noformat} > https://builds.apache.org/job/PreCommit-HDFS-Build/28498/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15048) Fix findbug in DirectoryScanner
[ https://issues.apache.org/jira/browse/HDFS-15048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995388#comment-16995388 ] Masatake Iwasaki commented on HDFS-15048: - {quote}it is ok since there is supposed to be just one DirectoryScanner in a DataNode except for a test case. {quote} Sure. The problem is that {{DataNodeTestUtils#runDirectoryScanner}} calls {{DirectoryScanner#reconcile}} from outside. I'd rather not change DirectoryScanner significantly just for test utilities; suppressing the warning should be enough. > Fix findbug in DirectoryScanner > --- > > Key: HDFS-15048 > URL: https://issues.apache.org/jira/browse/HDFS-15048 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Takanobu Asanuma >Assignee: Masatake Iwasaki >Priority: Major > Attachments: HDFS-15048.001.patch > > > There is a findbug in DirectoryScanner. > {noformat} > Multithreaded correctness Warnings > org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() calls > Thread.sleep() with a lock held > Bug type SWL_SLEEP_WITH_LOCK_HELD (click for details) > In class org.apache.hadoop.hdfs.server.datanode.DirectoryScanner > In method org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() > At DirectoryScanner.java:[line 441] > {noformat} > https://builds.apache.org/job/PreCommit-HDFS-Build/28498/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
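For context on the SWL_SLEEP_WITH_LOCK_HELD warning discussed above: sleeping while holding a monitor blocks every other thread that needs that monitor for the whole sleep. When the code cannot simply be suppressed, the usual restructuring is to keep the critical section short and sleep outside the lock. A self-contained illustrative sketch (not the actual DirectoryScanner code):

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrates fixing SWL_SLEEP_WITH_LOCK_HELD: throttle outside the lock. */
public class ThrottledWorker {
    private final Object lock = new Object();
    private final List<Integer> items = new ArrayList<>();

    // FindBugs flags the pattern: synchronized (lock) { work(); Thread.sleep(n); }
    // The fix: hold the lock only while touching shared state, sleep unlocked.
    public void processAllThrottled(long pauseMs) throws InterruptedException {
        while (true) {
            Integer item;
            synchronized (lock) {          // short critical section
                if (items.isEmpty()) {
                    return;
                }
                item = items.remove(0);
            }
            process(item);                 // work done outside the lock
            Thread.sleep(pauseMs);         // throttle without blocking other threads
        }
    }

    public void add(int v) { synchronized (lock) { items.add(v); } }
    public int size() { synchronized (lock) { return items.size(); } }
    private void process(int v) { /* placeholder for real per-item work */ }
}
```

In a test-only caller like DataNodeTestUtils#runDirectoryScanner, suppressing the warning (as the comment proposes) is a reasonable alternative to this restructuring.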
[jira] [Updated] (HDFS-15048) Fix findbug in DirectoryScanner
[ https://issues.apache.org/jira/browse/HDFS-15048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HDFS-15048: Attachment: HDFS-15048.001.patch > Fix findbug in DirectoryScanner > --- > > Key: HDFS-15048 > URL: https://issues.apache.org/jira/browse/HDFS-15048 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Takanobu Asanuma >Assignee: Masatake Iwasaki >Priority: Major > Attachments: HDFS-15048.001.patch > > > There is a findbug in DirectoryScanner. > {noformat} > Multithreaded correctness Warnings > org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() calls > Thread.sleep() with a lock held > Bug type SWL_SLEEP_WITH_LOCK_HELD (click for details) > In class org.apache.hadoop.hdfs.server.datanode.DirectoryScanner > In method org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() > At DirectoryScanner.java:[line 441] > {noformat} > https://builds.apache.org/job/PreCommit-HDFS-Build/28498/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HDFS-15041) Make MAX_LOCK_HOLD_MS and full queue size configurable
[ https://issues.apache.org/jira/browse/HDFS-15041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuqi updated HDFS-15041: - Comment: was deleted (was: Thanks for [~hexiaoqiao] to help to cc [~weichiu]. Now i am the Hadoop YARN Contributor, could you help me to add to Hadoop HDFS Contributor. It's my honor to contribute to Hadoop HDFS.) > Make MAX_LOCK_HOLD_MS and full queue size configurable > -- > > Key: HDFS-15041 > URL: https://issues.apache.org/jira/browse/HDFS-15041 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 3.2.0 >Reporter: zhuqi >Assignee: zhuqi >Priority: Major > Attachments: HDFS-15041.001.patch, HDFS-15041.002.patch > > > Now the MAX_LOCK_HOLD_MS and the full queue size are fixed. But different > cluster have different need for the latency and the queue health standard. > We'd better to make the two parameter configurable. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15057) NFS: Error 'E72: Close error on swap file' occur when vi a file
WangZhichao created HDFS-15057: -- Summary: NFS: Error 'E72: Close error on swap file' occur when vi a file Key: HDFS-15057 URL: https://issues.apache.org/jira/browse/HDFS-15057 Project: Hadoop HDFS Issue Type: Bug Components: nfs Affects Versions: 3.2.1 Reporter: WangZhichao 10.43.183.108 is nfs-hdfs-gateway HQxDAP-161 is nfs-hdfs-client Steps are as follows: [root@HQxDAP-161 mnt]# mount -t nfs -o vers=3,proto=tcp,nolock 10.43.183.108:/ /mnt/test [root@HQxDAP-161 mnt]# cd /mnt/test [root@HQxDAP-161 test]# ls 1.txt [root@HQxDAP-161 test]# vi 1.txt E72: Close error on swap file[root@HQxDAP-161 test]# -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15016) RBF: getDatanodeReport() should return the latest update
[ https://issues.apache.org/jira/browse/HDFS-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995311#comment-16995311 ] Íñigo Goiri commented on HDFS-15016: [~ayushtkn] can you take a look? > RBF: getDatanodeReport() should return the latest update > > > Key: HDFS-15016 > URL: https://issues.apache.org/jira/browse/HDFS-15016 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-15016.000.patch, HDFS-15016.001.patch, > HDFS-15016.002.patch, HDFS-15016.003.patch > > > Currently, when the Router calls getDatanodeReport() (or > getDatanodeStorageReport()) and the DN is in multiple clusters, it just takes > the one that comes first. It should consider the latest update. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
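When the same datanode shows up in reports from multiple subclusters, deduplicating by the most recent heartbeat is the natural rule the issue describes. A hedged sketch with a simplified report type (the real patch operates on Hadoop's DatanodeInfo and its last-update timestamp; the names below are illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Simplified stand-in for a per-datanode report entry. */
public class DnReport {
    public final String datanodeId;
    public final long lastUpdate;   // heartbeat time in millis

    public DnReport(String datanodeId, long lastUpdate) {
        this.datanodeId = datanodeId;
        this.lastUpdate = lastUpdate;
    }

    /** Keep only the most recently updated entry for each datanode. */
    public static Map<String, DnReport> latestPerDatanode(List<DnReport> reports) {
        Map<String, DnReport> latest = new HashMap<>();
        for (DnReport r : reports) {
            // merge keeps the existing entry unless the new one is fresher
            latest.merge(r.datanodeId, r,
                (oldR, newR) -> oldR.lastUpdate >= newR.lastUpdate ? oldR : newR);
        }
        return latest;
    }
}
```

Compared with "take the first occurrence", this makes the merged report deterministic regardless of the order subcluster responses arrive in.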
[jira] [Updated] (HDFS-15056) NFS: Error 'Stale file handle' caused by executing 'mount' command in the mount directory after mounting to nfs-hdfs-gateway
[ https://issues.apache.org/jira/browse/HDFS-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] WangZhichao updated HDFS-15056: --- Description: 10.43.33.246 is nfs-hdfs-gateway, centos87 is nfs-hdfs-client, The reproduction steps are as follows: [root@centos87 ~]# mount -t nfs -o vers=3,proto=tcp,nolock 10.43.33.246:/ /var/data/share/am [root@centos87 ~]# cd /var/data/share/am [root@centos87 am]# ls csvFile hbase oneminer solr sparkSQL spark-tmp testaie3 user zaip_data_87 hadoop hive saveModelPath spark sparkSQL-tmp test time_series_demo wzc [root@centos87 am]# mount | grep 10.43 10.43.33.246:/ on /var/data/share/am type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.43.33.246,mountvers=3,mountport=4242,mountproto=tcp,local_lock=all,addr=10.43.33.246) [root@centos87 am]# ls -lrt ls: cannot open directory .: Stale file handle Key point: 1. The operating system of the node (centos87) where the nfs-client is located is CentOS Linux release 7.7.1908 (Core). The problem can be reproduced with RedHat 7, but not with RedHat 6. 2. After the 'mount' command is executed in the mount directory ('/var/data/share/am'), the problem occurs. If it is executed in other directories, the problem does not occur. 
was: 10.43.33.246 is nfs-hdfs-gateway, centos87 is nfs-hdfs-client, The reproduction steps are as follows: [root@centos87 ~]# mount -t nfs -o vers=3,proto=tcp,nolock 10.43.33.246:/ /var/data/share/am [root@centos87 ~]# cd /var/data/share/am [root@centos87 am]# ls csvFile hbase oneminer solr sparkSQL spark-tmp testaie3 user zaip_data_87 hadoop hive saveModelPath spark sparkSQL-tmp test time_series_demo wzc [root@centos87 am]# mount | grep 10.43 10.43.33.246:/ on /var/data/share/am type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.43.33.246,mountvers=3,mountport=4242,mountproto=tcp,local_lock=all,addr=10.43.33.246) [root@centos87 am]# ls -lrt ls: cannot open directory .: Stale file handle Key point: 1. The operating system of the node(centos87) where nfs-client is located is CentOS Linux release 7.7.1908 (Core). It can be reproduced with Redhat7, but it can not be reproduced with Redhat6. 2. After the command 'mount' is executed in the mount directory('/var/data/share/am'), the problem will recur. If it is executed in other directories, the problem will not recur. 
> NFS: Error 'Stale file handle' caused by executing 'mount' command in the > mount directory after mounting to nfs-hdfs-gateway > > > Key: HDFS-15056 > URL: https://issues.apache.org/jira/browse/HDFS-15056 > Project: Hadoop HDFS > Issue Type: Bug > Components: nfs >Affects Versions: 3.2.1 >Reporter: WangZhichao >Priority: Major > > 10.43.33.246 is nfs-hdfs-gateway, > centos87 is nfs-hdfs-client, > The reproduction steps are as follows: > [root@centos87 ~]# mount -t nfs -o vers=3,proto=tcp,nolock 10.43.33.246:/ > /var/data/share/am > [root@centos87 ~]# cd /var/data/share/am > [root@centos87 am]# ls > csvFile hbase oneminer solr sparkSQL spark-tmp testaie3 user zaip_data_87 > hadoop hive saveModelPath spark sparkSQL-tmp test time_series_demo wzc > [root@centos87 am]# mount | grep 10.43 > 10.43.33.246:/ on /var/data/share/am type nfs > (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.43.33.246,mountvers=3,mountport=4242,mountproto=tcp,local_lock=all,addr=10.43.33.246) > [root@centos87 am]# ls -lrt > ls: cannot open directory .: Stale file handle > > Key point: > 1. The operating system of the node (centos87) where the nfs-client is located is > CentOS Linux release 7.7.1908 (Core). The problem can be reproduced with RedHat 7, but > not with RedHat 6. > 2. After the 'mount' command is executed in the mount > directory ('/var/data/share/am'), the problem occurs. If it is executed in > other directories, the problem does not occur. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15056) NFS: Error 'Stale file handle' caused by executing 'mount' command in the mount directory after mounting to nfs-hdfs-gateway
WangZhichao created HDFS-15056: -- Summary: NFS: Error 'Stale file handle' caused by executing 'mount' command in the mount directory after mounting to nfs-hdfs-gateway Key: HDFS-15056 URL: https://issues.apache.org/jira/browse/HDFS-15056 Project: Hadoop HDFS Issue Type: Bug Components: nfs Affects Versions: 3.2.1 Reporter: WangZhichao 10.43.33.246 is nfs-hdfs-gateway, centos87 is nfs-hdfs-client, The reproduction steps are as follows: [root@centos87 ~]# mount -t nfs -o vers=3,proto=tcp,nolock 10.43.33.246:/ /var/data/share/am [root@centos87 ~]# cd /var/data/share/am [root@centos87 am]# ls csvFile hbase oneminer solr sparkSQL spark-tmp testaie3 user zaip_data_87 hadoop hive saveModelPath spark sparkSQL-tmp test time_series_demo wzc [root@centos87 am]# mount | grep 10.43 10.43.33.246:/ on /var/data/share/am type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.43.33.246,mountvers=3,mountport=4242,mountproto=tcp,local_lock=all,addr=10.43.33.246) [root@centos87 am]# ls -lrt ls: cannot open directory .: Stale file handle Key point: 1. The operating system of the node (centos87) where the nfs-client is located is CentOS Linux release 7.7.1908 (Core). The problem can be reproduced with RedHat 7, but it cannot be reproduced with RedHat 6. 2. After the 'mount' command is executed in the mount directory ('/var/data/share/am'), the problem occurs. If it is executed in other directories, the problem does not occur. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15053) RBF: Add permission check for safemode operation
[ https://issues.apache.org/jira/browse/HDFS-15053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995291#comment-16995291 ] Ayush Saxena commented on HDFS-15053: - Thanks [~hexiaoqiao] for working on this. I think we can refactor this into a single method {{checkSuperuserPrivilege()}}, as it is in {{FSNamesystem}}. For the test, setting back to the superuser should be done in a finally block; otherwise, if the test fails before that point, it won't be reset. Apart from that, LGTM. > RBF: Add permission check for safemode operation > > > Key: HDFS-15053 > URL: https://issues.apache.org/jira/browse/HDFS-15053 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15053.001.patch > > > Propose to add superuser permission check for safemode operation. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
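The review point above about resetting the superuser in a finally block can be sketched in isolation. The helper below is hypothetical (it is not the actual RBF test code); it only illustrates why the reset must live in finally, so that a failed assertion cannot leave the fake user in place:

```java
// Sketch: restore the previous user in a finally block so a failure inside
// the test body cannot leave the non-privileged user active. All names here
// are hypothetical stand-ins for the real test utilities.
public class FinallyResetSketch {
    static String currentUser = "superuser";

    static void setUser(String u) { currentUser = u; }

    // Runs the body as another user, always restoring the previous user,
    // even if the body throws. Returns whether the body succeeded.
    static boolean runAsUser(String user, Runnable body) {
        String previous = currentUser;
        setUser(user);
        try {
            body.run();
            return true;
        } catch (RuntimeException e) {
            return false;
        } finally {
            setUser(previous); // reset happens on success AND failure
        }
    }

    public static void main(String[] args) {
        boolean ok = runAsUser("normaluser", () -> {
            throw new RuntimeException("simulated test failure");
        });
        System.out.println(ok + " " + currentUser); // prints "false superuser"
    }
}
```

Without the finally, a failing assertion would leave "normaluser" active and poison every later test in the class.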
[jira] [Updated] (HDFS-15055) Hedging clones client's buffer
[ https://issues.apache.org/jira/browse/HDFS-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-15055: -- Priority: Minor (was: Major) > Hedging clones client's buffer > -- > > Key: HDFS-15055 > URL: https://issues.apache.org/jira/browse/HDFS-15055 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.9.2, 3.3.0, 3.2.1, 2.9.3, 3.2.2 >Reporter: Lukas Majercak >Priority: Minor > > Currently, DFSInputStream clones the buffer passed from the caller for every > request, this can have severe impact on the performance. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Reopened] (HDFS-15055) Hedging clones client's buffer
[ https://issues.apache.org/jira/browse/HDFS-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak reopened HDFS-15055: --- > Hedging clones client's buffer > -- > > Key: HDFS-15055 > URL: https://issues.apache.org/jira/browse/HDFS-15055 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.9.2, 3.3.0, 3.2.1, 2.9.3, 3.2.2 >Reporter: Lukas Majercak >Priority: Major > > Currently, DFSInputStream clones the buffer passed from the caller for every > request, this can have severe impact on the performance. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15055) Hedging clones client's buffer
[ https://issues.apache.org/jira/browse/HDFS-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995256#comment-16995256 ] Lukas Majercak commented on HDFS-15055: --- I feel like this could still be an issue, though: potentially we will create a buffer of up to a block size for every single hedged request. > Hedging clones client's buffer > -- > > Key: HDFS-15055 > URL: https://issues.apache.org/jira/browse/HDFS-15055 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.9.2, 3.3.0, 3.2.1, 2.9.3, 3.2.2 >Reporter: Lukas Majercak >Priority: Major > > Currently, DFSInputStream clones the buffer passed from the caller for every > request, this can have severe impact on the performance. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15055) Hedging clones client's buffer
[ https://issues.apache.org/jira/browse/HDFS-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995254#comment-16995254 ] Lukas Majercak commented on HDFS-15055: --- Closing as we actually create a separate buffer for the length requested. > Hedging clones client's buffer > -- > > Key: HDFS-15055 > URL: https://issues.apache.org/jira/browse/HDFS-15055 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.9.2, 3.3.0, 3.2.1, 2.9.3, 3.2.2 >Reporter: Lukas Majercak >Priority: Major > > Currently, DFSInputStream clones the buffer passed from the caller for every > request, this can have severe impact on the performance. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
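For context on the resolution above: the point is that a hedged read need not duplicate the caller's entire (possibly very large) buffer, only allocate scratch space sized to the bytes actually requested. The helper below is a hypothetical illustration of that sizing rule, not the actual DFSInputStream code:

```java
import java.nio.ByteBuffer;

public class HedgedBufferSketch {
    // Hypothetical helper: allocate a scratch buffer sized to the requested
    // length, bounded by what the caller's buffer can accept, instead of
    // cloning the caller's whole buffer for each hedged request.
    static ByteBuffer scratchFor(ByteBuffer callerBuf, int lenRequested) {
        return ByteBuffer.allocate(Math.min(lenRequested, callerBuf.remaining()));
    }

    // Convenience wrapper: scratch capacity for the given sizes.
    static int scratchCapacity(int callerRemaining, int lenRequested) {
        return scratchFor(ByteBuffer.allocate(callerRemaining), lenRequested).capacity();
    }

    public static void main(String[] args) {
        // A 1 MiB caller buffer with a 128-byte request needs only 128 bytes.
        System.out.println(scratchCapacity(1 << 20, 128)); // prints 128
    }
}
```

The cost is therefore proportional to the read length, not the caller's buffer size, which is why the issue was closed; the follow-up comment notes the read length can still be up to a block size per hedged request.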
[jira] [Resolved] (HDFS-15055) Hedging clones client's buffer
[ https://issues.apache.org/jira/browse/HDFS-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak resolved HDFS-15055. --- Resolution: Not A Problem > Hedging clones client's buffer > -- > > Key: HDFS-15055 > URL: https://issues.apache.org/jira/browse/HDFS-15055 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.9.2, 3.3.0, 3.2.1, 2.9.3, 3.2.2 >Reporter: Lukas Majercak >Priority: Major > > Currently, DFSInputStream clones the buffer passed from the caller for every > request, this can have severe impact on the performance. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15055) Hedging clones client's buffer
[ https://issues.apache.org/jira/browse/HDFS-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-15055: -- Description: Currently, DFSInputStream clones the buffer passed from the caller for every request, this can have severe impact on the performance. (was: Currently, DFSInputStream clones the buffer passed from the caller for every request, this can have severe impact on the performance (imagine cloning a 1GB buffer).) > Hedging clones client's buffer > -- > > Key: HDFS-15055 > URL: https://issues.apache.org/jira/browse/HDFS-15055 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.9.2, 3.3.0, 3.2.1, 2.9.3, 3.2.2 >Reporter: Lukas Majercak >Priority: Major > > Currently, DFSInputStream clones the buffer passed from the caller for every > request, this can have severe impact on the performance. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15055) Hedging clones client's buffer
[ https://issues.apache.org/jira/browse/HDFS-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-15055: -- Priority: Major (was: Critical) > Hedging clones client's buffer > -- > > Key: HDFS-15055 > URL: https://issues.apache.org/jira/browse/HDFS-15055 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.9.2, 3.3.0, 3.2.1, 2.9.3, 3.2.2 >Reporter: Lukas Majercak >Priority: Major > > Currently, DFSInputStream clones the buffer passed from the caller for every > request, this can have severe impact on the performance (imagine cloning a > 1GB buffer). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15055) Hedging clones client's buffer
[ https://issues.apache.org/jira/browse/HDFS-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-15055: -- Description: Currently, DFSInputStream clones the buffer passed from the caller for every request, this can have severe impact on the performance (imagine cloning a 1GB buffer). (was: _emphasized text_) > Hedging clones client's buffer > -- > > Key: HDFS-15055 > URL: https://issues.apache.org/jira/browse/HDFS-15055 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.9.2, 3.3.0, 3.2.1, 2.9.3, 3.2.2 >Reporter: Lukas Majercak >Priority: Critical > > Currently, DFSInputStream clones the buffer passed from the caller for every > request, this can have severe impact on the performance (imagine cloning a > 1GB buffer). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15055) Hedging clones client's buffer
Lukas Majercak created HDFS-15055: - Summary: Hedging clones client's buffer Key: HDFS-15055 URL: https://issues.apache.org/jira/browse/HDFS-15055 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs-client Affects Versions: 3.2.1, 2.9.2, 3.3.0, 2.9.3, 3.2.2 Reporter: Lukas Majercak -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15055) Hedging clones client's buffer
[ https://issues.apache.org/jira/browse/HDFS-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-15055: -- Description: _emphasized text_ > Hedging clones client's buffer > -- > > Key: HDFS-15055 > URL: https://issues.apache.org/jira/browse/HDFS-15055 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.9.2, 3.3.0, 3.2.1, 2.9.3, 3.2.2 >Reporter: Lukas Majercak >Priority: Critical > > _emphasized text_ -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15048) Fix findbug in DirectoryScanner
[ https://issues.apache.org/jira/browse/HDFS-15048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995244#comment-16995244 ] Wei-Chiu Chuang commented on HDFS-15048: It's a bad practice (blame me for not catching it), but from a correctness standpoint it is OK, since there is supposed to be just one DirectoryScanner in a DataNode, except for a test case. It is good to fix it, though. > Fix findbug in DirectoryScanner > --- > > Key: HDFS-15048 > URL: https://issues.apache.org/jira/browse/HDFS-15048 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Takanobu Asanuma >Assignee: Masatake Iwasaki >Priority: Major > > There is a findbug in DirectoryScanner. > {noformat} > Multithreaded correctness Warnings > org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() calls > Thread.sleep() with a lock held > Bug type SWL_SLEEP_WITH_LOCK_HELD (click for details) > In class org.apache.hadoop.hdfs.server.datanode.DirectoryScanner > In method org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() > At DirectoryScanner.java:[line 441] > {noformat} > https://builds.apache.org/job/PreCommit-HDFS-Build/28498/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
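The SWL_SLEEP_WITH_LOCK_HELD warning above is typically cleared by moving the sleep outside the locked region, so other threads are not blocked while the scanner throttles itself. A minimal sketch of that pattern (not the actual DirectoryScanner code; the class and flag below are illustrative only):

```java
// Sketch of the fix pattern for SWL_SLEEP_WITH_LOCK_HELD: do the locked
// work, exit the synchronized block, and only then sleep.
public class ScannerSleepSketch {
    private final Object lock = new Object();
    volatile boolean lockHeldDuringSleep = true;

    void reconcileOnce() {
        synchronized (lock) {
            // ... apply one batch of scan differences under the lock ...
        }
        // By construction the lock is released before we sleep.
        lockHeldDuringSleep = Thread.holdsLock(lock); // false here
        try {
            Thread.sleep(5); // throttle without holding the lock
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
        }
    }

    public static void main(String[] args) {
        ScannerSleepSketch s = new ScannerSleepSketch();
        s.reconcileOnce();
        System.out.println(s.lockHeldDuringSleep); // prints false
    }
}
```

Sleeping with the lock held is only "correct" here because there is a single scanner per DataNode, as the comment says, but it still stalls anything else contending for that monitor.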
[jira] [Commented] (HDFS-15046) Backport HDFS-7060 to branch-2.10
[ https://issues.apache.org/jira/browse/HDFS-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995242#comment-16995242 ] Hadoop QA commented on HDFS-15046: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 19s{color} | {color:red} Docker failed to build yetus/hadoop:f555aa740b5. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-15046 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12988644/HDFS-15046.branch-2.001.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/28520/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Backport HDFS-7060 to branch-2.10 > - > > Key: HDFS-15046 > URL: https://issues.apache.org/jira/browse/HDFS-15046 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Lisheng Sun >Priority: Major > Attachments: HDFS-15046.branch-2.001.patch, > HDFS-15046.branch-2.9.001.patch, HDFS-15046.branch-2.9.002.patch > > > Not sure why it didn't get backported in 2.x before, but looks like a good > improvement overall. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15046) Backport HDFS-7060 to branch-2.10
[ https://issues.apache.org/jira/browse/HDFS-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995240#comment-16995240 ] Wei-Chiu Chuang commented on HDFS-15046: HADOOP-16754 is in branch-2.9 now. Retrigger the precommit. > Backport HDFS-7060 to branch-2.10 > - > > Key: HDFS-15046 > URL: https://issues.apache.org/jira/browse/HDFS-15046 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Lisheng Sun >Priority: Major > Attachments: HDFS-15046.branch-2.001.patch, > HDFS-15046.branch-2.9.001.patch, HDFS-15046.branch-2.9.002.patch > > > Not sure why it didn't get backported in 2.x before, but looks like a good > improvement overall. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15044) [Dynamometer] Show the line of audit log when parsing it unsuccessfully
[ https://issues.apache.org/jira/browse/HDFS-15044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995238#comment-16995238 ] Takanobu Asanuma commented on HDFS-15044: - Thanks for reviewing and committing it, [~xkrogen]! > [Dynamometer] Show the line of audit log when parsing it unsuccessfully > --- > > Key: HDFS-15044 > URL: https://issues.apache.org/jira/browse/HDFS-15044 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: tools >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Fix For: 3.3.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14989) Add a 'swapBlockList' operation to Namenode.
[ https://issues.apache.org/jira/browse/HDFS-14989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated HDFS-14989: - Status: Patch Available (was: In Progress) > Add a 'swapBlockList' operation to Namenode. > > > Key: HDFS-14989 > URL: https://issues.apache.org/jira/browse/HDFS-14989 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Major > > Borrowing from the design doc. > bq. The swapBlockList takes two parameters, a source file and a destination > file. This operation swaps the blocks belonging to the source and the > destination atomically. > bq. The namespace metadata of interest is the INodeFile class. A file > (INodeFile) contains a header composed of PREFERRED_BLOCK_SIZE, > BLOCK_LAYOUT_AND_REDUNDANCY and STORAGE_POLICY_ID. In addition, an INodeFile > contains a list of blocks (BlockInfo[]). The operation will swap > BLOCK_LAYOUT_AND_REDUNDANCY header bits and the block lists. But it will not > touch other fields. To avoid complication, this operation will abort if > either file is open (isUnderConstruction() == true) > bq. Additionally, this operation introduces a new opcode OP_SWAP_BLOCK_LIST > to record the change persistently. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
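The swap described in the design text above can be sketched with a toy model. The header mask and field layout below are assumptions chosen for illustration; the real INodeFile header encoding for BLOCK_LAYOUT_AND_REDUNDANCY may differ:

```java
public class SwapBlockListSketch {
    // Hypothetical bit range for the BLOCK_LAYOUT_AND_REDUNDANCY field
    // inside the packed header; illustrative only.
    static final long LAYOUT_MASK = 0x00FF000000000000L;

    // Toy model of an INodeFile: a packed header plus a block list.
    static class FileModel {
        long header;            // block size | layout/redundancy | policy bits
        long[] blocks;          // stand-in for BlockInfo[]
        boolean underConstruction;
    }

    // Mirrors the design text: abort if either file is open, then swap the
    // layout/redundancy header bits and the block lists, leaving the other
    // header fields untouched.
    static void swapBlockList(FileModel a, FileModel b) {
        if (a.underConstruction || b.underConstruction) {
            throw new IllegalStateException("cannot swap: file is under construction");
        }
        long la = a.header & LAYOUT_MASK;
        long lb = b.header & LAYOUT_MASK;
        a.header = (a.header & ~LAYOUT_MASK) | lb;
        b.header = (b.header & ~LAYOUT_MASK) | la;
        long[] tmp = a.blocks;
        a.blocks = b.blocks;
        b.blocks = tmp;
    }
}
```

The under-construction check comes first so the operation is all-or-nothing, matching the atomicity requirement; persisting a new OP_SWAP_BLOCK_LIST edit-log opcode is outside this sketch.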
[jira] [Updated] (HDFS-15036) Active NameNode should not silently fail the image transfer
[ https://issues.apache.org/jira/browse/HDFS-15036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-15036: -- Fix Version/s: 3.2.2 3.1.4 > Active NameNode should not silently fail the image transfer > --- > > Key: HDFS-15036 > URL: https://issues.apache.org/jira/browse/HDFS-15036 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.10.0 >Reporter: Konstantin Shvachko >Assignee: Chen Liang >Priority: Major > Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1 > > Attachments: HDFS-15036.001.patch, HDFS-15036.002.patch, > HDFS-15036.003.patch > > > Image transfer from Standby NameNode to Active silently fails on Active, > without any logging and not notifying the receiver side. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-15036) Active NameNode should not silently fail the image transfer
[ https://issues.apache.org/jira/browse/HDFS-15036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995038#comment-16995038 ] Chen Liang edited comment on HDFS-15036 at 12/12/19 11:42 PM: -- Thanks [~shv]! I've committed to trunk and branch-2, will commit to branch-3.2 and branch-3.1 shortly as well. was (Author: vagarychen): Thanks [~shv]! I've committed to trunk and branch-2. > Active NameNode should not silently fail the image transfer > --- > > Key: HDFS-15036 > URL: https://issues.apache.org/jira/browse/HDFS-15036 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.10.0 >Reporter: Konstantin Shvachko >Assignee: Chen Liang >Priority: Major > Fix For: 3.3.0, 2.10.1 > > Attachments: HDFS-15036.001.patch, HDFS-15036.002.patch, > HDFS-15036.003.patch > > > Image transfer from Standby NameNode to Active silently fails on Active, > without any logging and not notifying the receiver side. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15038) TestFsck testFsckListCorruptSnapshotFiles is failing in trunk
[ https://issues.apache.org/jira/browse/HDFS-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995179#comment-16995179 ] Hadoop QA commented on HDFS-15038: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 31s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 29s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 49s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 46s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}171m 51s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDeadNodeDetection | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 | | JIRA Issue | HDFS-15038 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12988722/HDFS-15038.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 164dca16a347 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 65c4660 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_222 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/28519/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/28519/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/28519/testReport/ | | Max. process+thread count | 2675
[jira] [Commented] (HDFS-15036) Active NameNode should not silently fail the image transfer
[ https://issues.apache.org/jira/browse/HDFS-15036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995038#comment-16995038 ] Chen Liang commented on HDFS-15036: --- Thanks [~shv]! I've committed to trunk and branch-2. > Active NameNode should not silently fail the image transfer > --- > > Key: HDFS-15036 > URL: https://issues.apache.org/jira/browse/HDFS-15036 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.10.0 >Reporter: Konstantin Shvachko >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-15036.001.patch, HDFS-15036.002.patch, > HDFS-15036.003.patch > > > Image transfer from Standby NameNode to Active silently fails on Active, > without any logging and not notifying the receiver side. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15036) Active NameNode should not silently fail the image transfer
[ https://issues.apache.org/jira/browse/HDFS-15036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-15036: -- Fix Version/s: 2.10.1 3.3.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Active NameNode should not silently fail the image transfer > --- > > Key: HDFS-15036 > URL: https://issues.apache.org/jira/browse/HDFS-15036 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.10.0 >Reporter: Konstantin Shvachko >Assignee: Chen Liang >Priority: Major > Fix For: 3.3.0, 2.10.1 > > Attachments: HDFS-15036.001.patch, HDFS-15036.002.patch, > HDFS-15036.003.patch > > > Image transfer from Standby NameNode to Active silently fails on Active, > without any logging and not notifying the receiver side. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15038) TestFsck testFsckListCorruptSnapshotFiles is failing in trunk
[ https://issues.apache.org/jira/browse/HDFS-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994999#comment-16994999 ] Íñigo Goiri commented on HDFS-15038: This looks much better. A couple minor comments: * Probably passing the path as a final argument to the function instead of setting "/corruptData" internally. * Use lambda to define the function (() -> ). * I would log any exception instead of ignoring right away. * Make the javadoc a full doc with arguments, etc. * Reduce the time to check from 500 to 200 or 100. > TestFsck testFsckListCorruptSnapshotFiles is failing in trunk > - > > Key: HDFS-15038 > URL: https://issues.apache.org/jira/browse/HDFS-15038 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-15038.001.patch > > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28481/testReport/] > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28482/testReport/] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
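The review suggestions above amount to replacing a fixed sleep with a polling wait on the corrupt-file count. Below is a minimal stand-in for a waitFor-style helper; the exact signature of Hadoop's GenericTestUtils is not reproduced here:

```java
import java.util.function.Supplier;

public class WaitForSketch {
    // Minimal waitFor-style helper: poll the condition every intervalMs
    // until it holds or timeoutMs elapses. Returns whether it ever held.
    static boolean waitFor(Supplier<Boolean> check, long intervalMs, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            if (Boolean.TRUE.equals(check.get())) {
                return true;
            }
            if (System.currentTimeMillis() >= deadline) {
                return false;
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // give up on interrupt
                return false;
            }
        }
    }

    public static void main(String[] args) {
        int[] polls = {0};
        // The condition becomes true on the third poll.
        boolean ok = waitFor(() -> ++polls[0] >= 3, 10, 1000);
        System.out.println(ok + " after " + polls[0] + " polls");
    }
}
```

In the test, the lambda would run fsck and compare the reported corrupt-file count to the expected value, logging (not swallowing) any exception, with a short interval such as 100-200 ms as suggested.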
[jira] [Commented] (HDFS-15038) TestFsck testFsckListCorruptSnapshotFiles is failing in trunk
[ https://issues.apache.org/jira/browse/HDFS-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994981#comment-16994981 ] hemanthboyina commented on HDFS-15038: -- Updated the patch, please review. In some of the builds, TestFsck#testFsckListCorruptFilesBlocks was failing with the same issue; I have updated the patch for that as well. For reference: [https://builds.apache.org/job/PreCommit-HDFS-Build/28482/testReport/] > TestFsck testFsckListCorruptSnapshotFiles is failing in trunk > - > > Key: HDFS-15038 > URL: https://issues.apache.org/jira/browse/HDFS-15038 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-15038.001.patch > > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28481/testReport/] > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28482/testReport/] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15038) TestFsck testFsckListCorruptSnapshotFiles is failing in trunk
[ https://issues.apache.org/jira/browse/HDFS-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HDFS-15038: - Attachment: (was: HDFS-15038.001.patch) > TestFsck testFsckListCorruptSnapshotFiles is failing in trunk > - > > Key: HDFS-15038 > URL: https://issues.apache.org/jira/browse/HDFS-15038 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-15038.001.patch > > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28481/testReport/] > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28482/testReport/] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15038) TestFsck testFsckListCorruptSnapshotFiles is failing in trunk
[ https://issues.apache.org/jira/browse/HDFS-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HDFS-15038: - Attachment: HDFS-15038.001.patch Status: Patch Available (was: Open) > TestFsck testFsckListCorruptSnapshotFiles is failing in trunk > - > > Key: HDFS-15038 > URL: https://issues.apache.org/jira/browse/HDFS-15038 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-15038.001.patch > > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28481/testReport/] > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28482/testReport/] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15038) TestFsck testFsckListCorruptSnapshotFiles is failing in trunk
[ https://issues.apache.org/jira/browse/HDFS-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HDFS-15038: - Attachment: HDFS-15038.001.patch > TestFsck testFsckListCorruptSnapshotFiles is failing in trunk > - > > Key: HDFS-15038 > URL: https://issues.apache.org/jira/browse/HDFS-15038 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-15038.001.patch > > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28481/testReport/] > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28482/testReport/] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
[ https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994972#comment-16994972 ] Íñigo Goiri commented on HDFS-15051: My use case is that we have federated subfolders (e.g., /user) and we want to allow a user to update mount points in subfolders (e.g., /user/user1). I thought this was covered by the current ACLs, but it looks like it is not. My vote is to fix those and allow teams to control their own mount points without being a super user. So ideally, if user1 wants to let his group change a mount point, it should be allowed. In general, the concept of letting users manage mount points is very powerful for them. Obviously, we need to have proper security for them. Could you post a unit test with a case that shouldn't happen? > RBF: Propose to revoke WRITE MountTableEntry privilege to super user only > - > > Key: HDFS-15051 > URL: https://issues.apache.org/jira/browse/HDFS-15051 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15051.001.patch > > > The current permission checker of #MountTableStoreImpl is not very strict. > In some cases, any user could add/update/remove a MountTableEntry without the > expected permission checking. > The following code segment tries to check permissions when operating on a > MountTableEntry; however, the mountTable object comes from the Client/RouterAdmin > ({{MountTable mountTable = request.getEntry();}}), and a user could pass any mode, > which could bypass the permission checker. 
> {code:java} > public void checkPermission(MountTable mountTable, FsAction access) > throws AccessControlException { > if (isSuperUser()) { > return; > } > FsPermission mode = mountTable.getMode(); > if (getUser().equals(mountTable.getOwnerName()) > && mode.getUserAction().implies(access)) { > return; > } > if (isMemberOfGroup(mountTable.getGroupName()) > && mode.getGroupAction().implies(access)) { > return; > } > if (!getUser().equals(mountTable.getOwnerName()) > && !isMemberOfGroup(mountTable.getGroupName()) > && mode.getOtherAction().implies(access)) { > return; > } > throw new AccessControlException( > "Permission denied while accessing mount table " > + mountTable.getSourcePath() > + ": user " + getUser() + " does not have " + access.toString() > + " permissions."); > } > {code} > I propose to revoke the WRITE MountTableEntry privilege and restrict it to the super user only. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
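The bypass described in the issue can be illustrated with a self-contained sketch. All names below are hypothetical stand-ins, not the actual RBF MountTable or Router classes: because the checker trusts the mode carried inside the client-supplied entry, a caller can grant itself write access simply by sending an entry that claims a permissive mode.

```java
// Hypothetical, simplified model of the bypass (not the actual RBF classes):
// the checker trusts the mode field of the client-supplied entry, so any
// caller can grant itself access by sending an entry with a permissive mode.
import java.util.Set;

public class MountEntryBypassSketch {
    // Simplified stand-in for a mount table entry: owner, group, octal mode.
    static class Entry {
        final String owner;
        final String group;
        final int mode;
        Entry(String owner, String group, int mode) {
            this.owner = owner;
            this.group = group;
            this.mode = mode;
        }
    }

    // Mirrors the shape of the checkPermission() quoted above (write = bit 2),
    // minus the superuser branch.
    static boolean canWrite(Entry e, String user, Set<String> groups) {
        if (user.equals(e.owner) && ((e.mode >> 6) & 2) != 0) {
            return true;
        }
        if (groups.contains(e.group) && ((e.mode >> 3) & 2) != 0) {
            return true;
        }
        return (e.mode & 2) != 0; // "other" write bit
    }

    public static void main(String[] args) {
        // An entry owned by someone else with mode 755: write denied.
        Entry locked = new Entry("admin", "admins", 0755);
        System.out.println(canWrite(locked, "mallory", Set.of("users"))); // false

        // The bypass: the entry comes from the client, so the client can
        // claim any mode it likes, here world-writable, and the check passes.
        Entry forged = new Entry("admin", "admins", 0757);
        System.out.println(canWrite(forged, "mallory", Set.of("users"))); // true
    }
}
```

This is exactly why the check is meaningless as an access control: the data being checked is authored by the party being checked, which motivates the proposal to restrict WRITE to the super user.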
[jira] [Commented] (HDFS-15003) RBF: Make Router support storage type quota.
[ https://issues.apache.org/jira/browse/HDFS-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994968#comment-16994968 ] Íñigo Goiri commented on HDFS-15003: +1 on [^HDFS-15003.005.patch]. > RBF: Make Router support storage type quota. > > > Key: HDFS-15003 > URL: https://issues.apache.org/jira/browse/HDFS-15003 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Attachments: HDFS-15003.001.patch, HDFS-15003.002.patch, > HDFS-15003.003.patch, HDFS-15003.004.patch, HDFS-15003.005.patch > > > Make Router support storage type quota. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15036) Active NameNode should not silently fail the image transfer
[ https://issues.apache.org/jira/browse/HDFS-15036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994950#comment-16994950 ] Hudson commented on HDFS-15036: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17758 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17758/]) HDFS-15036. Active NameNode should not silently fail the image transfer. (cliang: rev 65c4660bcd897e139fc175ca438cff75ec0c6be8) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java > Active NameNode should not silently fail the image transfer > --- > > Key: HDFS-15036 > URL: https://issues.apache.org/jira/browse/HDFS-15036 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.10.0 >Reporter: Konstantin Shvachko >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-15036.001.patch, HDFS-15036.002.patch, > HDFS-15036.003.patch > > > Image transfer from Standby NameNode to Active silently fails on Active, > without any logging and not notifying the receiver side. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong
[ https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994942#comment-16994942 ] Hadoop QA commented on HDFS-14519: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.10 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 39s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} branch-2.10 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} branch-2.10 passed with JDK v1.8.0_222 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} branch-2.10 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} branch-2.10 passed with JDK v1.8.0_222 
{color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}100m 16s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:a969cad0a12 | | JIRA Issue | HDFS-14519 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12988709/HDFS-14519-branch-2.10-03.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d6cf82683049 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-2.10 / b91fda7 | | maven | version: Apache
[jira] [Commented] (HDFS-15038) TestFsck testFsckListCorruptSnapshotFiles is failing in trunk
[ https://issues.apache.org/jira/browse/HDFS-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994921#comment-16994921 ] Íñigo Goiri commented on HDFS-15038: As usual, instead of waiting blindly, we should use a GenericTestUtils#waitFor(). > TestFsck testFsckListCorruptSnapshotFiles is failing in trunk > - > > Key: HDFS-15038 > URL: https://issues.apache.org/jira/browse/HDFS-15038 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28481/testReport/] > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28482/testReport/] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15053) RBF: Add permission check for safemode operation
[ https://issues.apache.org/jira/browse/HDFS-15053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994917#comment-16994917 ] Íñigo Goiri commented on HDFS-15053: +1 on [^HDFS-15053.001.patch]. > RBF: Add permission check for safemode operation > > > Key: HDFS-15053 > URL: https://issues.apache.org/jira/browse/HDFS-15053 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15053.001.patch > > > Propose to add superuser permission check for safemode operation. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15038) TestFsck testFsckListCorruptSnapshotFiles is failing in trunk
[ https://issues.apache.org/jira/browse/HDFS-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994879#comment-16994879 ] hemanthboyina commented on HDFS-15038: -- {code:java} // wait for the namenode to see the corruption final NamenodeProtocols namenode = cluster.getNameNodeRpc(); CorruptFileBlocks corruptFileBlocks = namenode .listCorruptFileBlocks("/corruptData", null); int numCorrupt = corruptFileBlocks.getFiles().length;{code} We should wait for the file blocks to be reported as corrupt, but the existing waiting time was not sufficient. I have added a Thread.sleep(5000) and tested; it works fine now. > TestFsck testFsckListCorruptSnapshotFiles is failing in trunk > - > > Key: HDFS-15038 > URL: https://issues.apache.org/jira/browse/HDFS-15038 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28481/testReport/] > > [https://builds.apache.org/job/PreCommit-HDFS-Build/28482/testReport/] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
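As suggested elsewhere in this thread, GenericTestUtils#waitFor is preferable to a fixed Thread.sleep(). Below is a minimal, Hadoop-free sketch of the polling pattern that utility implements; the class and method names are illustrative, not the actual Hadoop API.

```java
// Minimal sketch of the polling pattern behind GenericTestUtils#waitFor:
// re-check the condition every checkEveryMillis until it holds or
// waitForMillis elapses, instead of sleeping once for a fixed time.
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BooleanSupplier;

public class WaitForSketch {
    public static void waitFor(BooleanSupplier check,
                               long checkEveryMillis,
                               long waitForMillis)
            throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + waitForMillis;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException(
                    "Condition not met within " + waitForMillis + " ms");
            }
            Thread.sleep(checkEveryMillis);
        }
    }

    public static void main(String[] args) throws Exception {
        // Example: wait until a counter, bumped by another thread, reaches 3
        // (standing in for "the namenode has reported N corrupt files").
        AtomicInteger corruptFiles = new AtomicInteger();
        new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                try { Thread.sleep(50); } catch (InterruptedException ignored) { }
                corruptFiles.incrementAndGet();
            }
        }).start();
        waitFor(() -> corruptFiles.get() >= 3, 20, 5000);
        System.out.println("saw " + corruptFiles.get() + " corrupt files");
    }
}
```

Polling makes the test return as soon as the condition holds and fail with a clear timeout otherwise, instead of either wasting 5 seconds on every run or flaking when the cluster is slower than the chosen sleep.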
[jira] [Commented] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong
[ https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994870#comment-16994870 ] Erik Krogen commented on HDFS-14519: Thanks for the update :) I am +1 on the backport as long as the Jenkins report comes back clean. > NameQuota is not update after concat operation, so namequota is wrong > - > > Key: HDFS-14519 > URL: https://issues.apache.org/jira/browse/HDFS-14519 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ranith Sardar >Assignee: Ranith Sardar >Priority: Major > Fix For: 3.3.0, 3.1.4, 3.2.2 > > Attachments: HDFS-14519-branch-2.10-003.patch, > HDFS-14519-branch-2.10-03.patch, HDFS-14519.001.patch, HDFS-14519.002.patch, > HDFS-14519.003.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong
[ https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-14519: Attachment: HDFS-14519-branch-2.10-03.patch > NameQuota is not update after concat operation, so namequota is wrong > - > > Key: HDFS-14519 > URL: https://issues.apache.org/jira/browse/HDFS-14519 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ranith Sardar >Assignee: Ranith Sardar >Priority: Major > Fix For: 3.3.0, 3.1.4, 3.2.2 > > Attachments: HDFS-14519-branch-2.10-003.patch, > HDFS-14519-branch-2.10-03.patch, HDFS-14519.001.patch, HDFS-14519.002.patch, > HDFS-14519.003.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13682) Cannot create encryption zone after KMS auth token expires
[ https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994851#comment-16994851 ] Nanda kumar commented on HDFS-13682: This change is breaking externally managed subjects. Even if the {{currentUGI}} (which is managed externally) has access, we go ahead and return {{UserGroupInformation.getLoginUser()}} from {{KMSClientProvider#getActualUgi}}. When the {{LoginUser}} doesn't have access, we get "{{GSSException: No valid credentials provided}}." As UGI.shouldRelogin() depends on isHadoopLogin(), it will break externally managed subjects. > Cannot create encryption zone after KMS auth token expires > -- > > Key: HDFS-13682 > URL: https://issues.apache.org/jira/browse/HDFS-13682 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption, kms, namenode >Affects Versions: 3.0.0 >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Critical > Fix For: 3.2.0, 3.1.1, 3.0.4 > > Attachments: HDFS-13682.01.patch, HDFS-13682.02.patch, > HDFS-13682.03.patch, HDFS-13682.dirty.repro.branch-2.patch, > HDFS-13682.dirty.repro.patch > > > Our internal testing reported this behavior recently. 
> {noformat} > [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt > /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d > [root@nightly6x-1 ~]# sudo -u hdfs klist > Ticket cache: FILE:/tmp/krb5cc_994 > Default principal: h...@gce.cloudera.com > Valid starting Expires Service principal > 06/12/2018 03:24:09 07/12/2018 03:24:09 > krbtgt/gce.cloudera@gce.cloudera.com > [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 > -path /user/systest/ez > RemoteException: > org.apache.hadoop.security.authentication.client.AuthenticationException: > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt) > {noformat} > Upon further investigation, it's due to the KMS client (cached in HDFS NN) > cannot authenticate with the server after the authentication token (which is > cached by KMSCP) expires, even if the HDFS client RPC has valid kerberos > credentials. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong
[ https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994820#comment-16994820 ] Erik Krogen commented on HDFS-14519: Hi [~ayushtkn], it looks like the {{branch-2.10-003}} patch you uploaded still has {{Assert.assertEquals}} so it is failing to compile. Did you attach the wrong file perhaps? > NameQuota is not update after concat operation, so namequota is wrong > - > > Key: HDFS-14519 > URL: https://issues.apache.org/jira/browse/HDFS-14519 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ranith Sardar >Assignee: Ranith Sardar >Priority: Major > Fix For: 3.3.0, 3.1.4, 3.2.2 > > Attachments: HDFS-14519-branch-2.10-003.patch, HDFS-14519.001.patch, > HDFS-14519.002.patch, HDFS-14519.003.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15044) [Dynamometer] Show the line of audit log when parsing it unsuccessfully
[ https://issues.apache.org/jira/browse/HDFS-15044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994818#comment-16994818 ] Hudson commented on HDFS-15044: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17757 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17757/]) HDFS-15044. [Dynamometer] Show the line of audit log when parsing it (xkrogen: rev c210cede5ce143a0c12646d82d657863f0ec96b6) * (edit) hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditLogDirectParser.java > [Dynamometer] Show the line of audit log when parsing it unsuccessfully > --- > > Key: HDFS-15044 > URL: https://issues.apache.org/jira/browse/HDFS-15044 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: tools >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Fix For: 3.3.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15047) Document the new decommission monitor (HDFS-14854)
[ https://issues.apache.org/jira/browse/HDFS-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HDFS-15047: Fix Version/s: 3.3.0 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Committed this. Thanks for the review, [~inigoiri]. > Document the new decommission monitor (HDFS-14854) > -- > > Key: HDFS-15047 > URL: https://issues.apache.org/jira/browse/HDFS-15047 > Project: Hadoop HDFS > Issue Type: Task > Components: documentation >Affects Versions: 3.3.0 >Reporter: Wei-Chiu Chuang >Assignee: Masatake Iwasaki >Priority: Major > Fix For: 3.3.0 > > > We can document HDFS-14854, add it to > https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html > and mark it as an experimental feature. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15044) [Dynamometer] Show the line of audit log when parsing it unsuccessfully
[ https://issues.apache.org/jira/browse/HDFS-15044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-15044: --- Resolution: Fixed Status: Resolved (was: Patch Available) > [Dynamometer] Show the line of audit log when parsing it unsuccessfully > --- > > Key: HDFS-15044 > URL: https://issues.apache.org/jira/browse/HDFS-15044 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: tools >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Fix For: 3.3.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15044) [Dynamometer] Show the line of audit log when parsing it unsuccessfully
[ https://issues.apache.org/jira/browse/HDFS-15044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-15044: --- Fix Version/s: 3.3.0 > [Dynamometer] Show the line of audit log when parsing it unsuccessfully > --- > > Key: HDFS-15044 > URL: https://issues.apache.org/jira/browse/HDFS-15044 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: tools >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Fix For: 3.3.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15047) Document the new decommission monitor (HDFS-14854)
[ https://issues.apache.org/jira/browse/HDFS-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994801#comment-16994801 ] Hudson commented on HDFS-15047: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17756 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17756/]) HDFS-15047. Document the new decommission monitor (HDFS-14854). (#1755) (github: rev bdd00f10b46c1c856433e2948906f36c70d3a0be) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md > Document the new decommission monitor (HDFS-14854) > -- > > Key: HDFS-15047 > URL: https://issues.apache.org/jira/browse/HDFS-15047 > Project: Hadoop HDFS > Issue Type: Task > Components: documentation >Affects Versions: 3.3.0 >Reporter: Wei-Chiu Chuang >Assignee: Masatake Iwasaki >Priority: Major > > We can document HDFS-14854, add it to > https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html > and mark it as an experimental feature. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15044) [Dynamometer] Show the line of audit log when parsing it unsuccessfully
[ https://issues.apache.org/jira/browse/HDFS-15044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994803#comment-16994803 ] Erik Krogen commented on HDFS-15044: Just committed this to trunk. Thanks for the contribution [~tasanuma]! > [Dynamometer] Show the line of audit log when parsing it unsuccessfully > --- > > Key: HDFS-15044 > URL: https://issues.apache.org/jira/browse/HDFS-15044 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: tools >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14854) Create improved decommission monitor implementation
[ https://issues.apache.org/jira/browse/HDFS-14854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994802#comment-16994802 ] Hudson commented on HDFS-14854: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17756 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17756/]) HDFS-15047. Document the new decommission monitor (HDFS-14854). (#1755) (github: rev bdd00f10b46c1c856433e2948906f36c70d3a0be) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md > Create improved decommission monitor implementation > --- > > Key: HDFS-14854 > URL: https://issues.apache.org/jira/browse/HDFS-14854 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 3.3.0 >Reporter: Stephen O'Donnell >Assignee: Stephen O'Donnell >Priority: Major > Fix For: 3.3.0 > > Attachments: 012_to_013_changes.diff, > Decommission_Monitor_V2_001.pdf, HDFS-14854.001.patch, HDFS-14854.002.patch, > HDFS-14854.003.patch, HDFS-14854.004.patch, HDFS-14854.005.patch, > HDFS-14854.006.patch, HDFS-14854.007.patch, HDFS-14854.008.patch, > HDFS-14854.009.patch, HDFS-14854.010.patch, HDFS-14854.011.patch, > HDFS-14854.012.patch, HDFS-14854.013.patch, HDFS-14854.014.patch > > > In HDFS-13157, we discovered a series of problems with the current > decommission monitor implementation, such as: > * Blocks are replicated sequentially disk by disk and node by node, and > hence the load is not spread well across the cluster > * Adding a node for decommission can cause the namenode write lock to be > held for a long time. > * Decommissioning nodes floods the replication queue, and under-replicated > blocks from a future node or disk failure may wait for a long time before they > are replicated. 
> * Blocks pending replication are checked many times under a write lock > before they are sufficiently replicated, wasting resources > In this Jira I propose to create a new implementation of the decommission > monitor that resolves these issues. As it will be difficult to prove one > implementation is better than another, the new implementation can be enabled > or disabled, giving the option of using either the existing implementation or the new one. > I will attach a pdf with some more details on the design and then a version 1 > patch shortly. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-15048) Fix findbug in DirectoryScanner
[ https://issues.apache.org/jira/browse/HDFS-15048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki reassigned HDFS-15048: --- Assignee: Masatake Iwasaki > Fix findbug in DirectoryScanner > --- > > Key: HDFS-15048 > URL: https://issues.apache.org/jira/browse/HDFS-15048 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Takanobu Asanuma >Assignee: Masatake Iwasaki >Priority: Major > > There is a findbug in DirectoryScanner. > {noformat} > Multithreaded correctness Warnings > org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() calls > Thread.sleep() with a lock held > Bug type SWL_SLEEP_WITH_LOCK_HELD (click for details) > In class org.apache.hadoop.hdfs.server.datanode.DirectoryScanner > In method org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() > At DirectoryScanner.java:[line 441] > {noformat} > https://builds.apache.org/job/PreCommit-HDFS-Build/28498/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15052) WebHDFS getTrashRoot leads to OOM due to FileSystem object creation
[ https://issues.apache.org/jira/browse/HDFS-15052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HDFS-15052: Status: Patch Available (was: Open) > WebHDFS getTrashRoot leads to OOM due to FileSystem object creation > --- > > Key: HDFS-15052 > URL: https://issues.apache.org/jira/browse/HDFS-15052 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.0.0-alpha2, 2.9.0 >Reporter: Wei-Chiu Chuang >Assignee: Masatake Iwasaki >Priority: Major > > Quoting [~daryn] in HDFS-10756 : > {quote}Surprised nobody has discovered this will lead to an inevitable OOM in > the NN. The NN should not be creating filesystems to itself, and must never > create filesystems in a remote user's context or the cache will explode. > {quote} > I guess the problem lies inside NamenodeWebHdfsMethods#getTrashRoot > {code:java} > private static String getTrashRoot(String fullPath, > Configuration conf) throws IOException { > FileSystem fs = FileSystem.get(conf != null ? conf : new > Configuration()); > return fs.getTrashRoot( > new org.apache.hadoop.fs.Path(fullPath)).toUri().getPath(); > } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
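The "cache will explode" behaviour quoted above can be illustrated with a self-contained sketch. The class and key shape below are hypothetical simplifications, not Hadoop's actual FileSystem cache: the point is only that when the cache key includes the calling user, serving each distinct remote user creates and permanently retains a new filesystem instance.

```java
// Sketch of why per-user FileSystem.get() calls can grow without bound:
// if the cache key includes the user, every distinct remote user that
// hits the endpoint pins a new filesystem object in the cache forever.
import java.util.HashMap;
import java.util.Map;

public class FsCacheSketch {
    // Stand-in for a cached filesystem instance.
    static class Fs { }

    // Simplified cache keyed by (scheme, authority, user); hypothetical,
    // meant only to mirror the general shape of a per-user cache key.
    static final Map<String, Fs> CACHE = new HashMap<>();

    static Fs get(String scheme, String authority, String user) {
        return CACHE.computeIfAbsent(scheme + "://" + authority + "@" + user,
                                     k -> new Fs());
    }

    public static void main(String[] args) {
        // Same user twice: one cached instance, as intended.
        Fs a = get("hdfs", "nn:8020", "alice");
        Fs b = get("hdfs", "nn:8020", "alice");
        System.out.println(a == b); // true

        // One entry per distinct remote user: N users -> N cached instances,
        // which in a long-lived NameNode process is an eventual OOM.
        for (int i = 0; i < 10_000; i++) {
            get("hdfs", "nn:8020", "user" + i);
        }
        System.out.println(CACHE.size());
    }
}
```

This is why the quoted advice says the NameNode must never create filesystems in a remote user's context: the cache is designed for a small, stable set of callers, not for one entry per end user.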
[jira] [Updated] (HDFS-15054) Delete Snapshot not updating new modification time
[ https://issues.apache.org/jira/browse/HDFS-15054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HDFS-15054: - Description: On creating a snapshot, we set the modification time for the snapshot, and along with that we update the modification time of the directory on which the snapshot was created {code:java} snapshotRoot.updateModificationTime(now, Snapshot.CURRENT_STATE_ID); s.getRoot().setModificationTime(now, Snapshot.CURRENT_STATE_ID); {code} So on deleting a snapshot, we should update the modification time of the directory on which the snapshot was created. > Delete Snapshot not updating new modification time > -- > > Key: HDFS-15054 > URL: https://issues.apache.org/jira/browse/HDFS-15054 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > > On creating a snapshot, we set the modification time for the snapshot, and along with > that we update the modification time of the directory on which the snapshot was created > {code:java} > snapshotRoot.updateModificationTime(now, Snapshot.CURRENT_STATE_ID); > s.getRoot().setModificationTime(now, Snapshot.CURRENT_STATE_ID); {code} > So on deleting a snapshot, we should update the modification time of the directory > on which the snapshot was created. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15054) Delete Snapshot not updating new modification time
hemanthboyina created HDFS-15054: Summary: Delete Snapshot not updating new modification time Key: HDFS-15054 URL: https://issues.apache.org/jira/browse/HDFS-15054 Project: Hadoop HDFS Issue Type: Bug Reporter: hemanthboyina Assignee: hemanthboyina -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15053) RBF: Add permission check for safemode operation
[ https://issues.apache.org/jira/browse/HDFS-15053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994759#comment-16994759 ] Hadoop QA commented on HDFS-15053: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 42s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 15s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 62m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 | | JIRA Issue | HDFS-15053 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12988697/HDFS-15053.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f21db3f6b660 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0e28cd8 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_222 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/28517/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/28517/testReport/ | | Max. process+thread count | 2738 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/28517/console | | Powered by | Apache Yetus
[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
[ https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994693 ] Xiaoqiao He commented on HDFS-15051: Adding a permission check so that only the super user has the privilege to operate safemode (at interface #RouterStateManager) does not seem related to other modules, so I filed a separate JIRA (HDFS-15053) to work on it. > RBF: Propose to revoke WRITE MountTableEntry privilege to super user only > - > > Key: HDFS-15051 > URL: https://issues.apache.org/jira/browse/HDFS-15051 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15051.001.patch > > > The current permission checker of #MountTableStoreImpl is not very strict. > In some cases, any user can add/update/remove a MountTableEntry without the > expected permission check. > The following code segment tries to check permissions when operating on a > MountTableEntry; however, the mountTable object comes from Client/RouterAdmin > ({{MountTable mountTable = request.getEntry();}}), so a user can pass any mode > and bypass the permission checker.
> {code:java}
> public void checkPermission(MountTable mountTable, FsAction access)
>     throws AccessControlException {
>   if (isSuperUser()) {
>     return;
>   }
>   FsPermission mode = mountTable.getMode();
>   if (getUser().equals(mountTable.getOwnerName())
>       && mode.getUserAction().implies(access)) {
>     return;
>   }
>   if (isMemberOfGroup(mountTable.getGroupName())
>       && mode.getGroupAction().implies(access)) {
>     return;
>   }
>   if (!getUser().equals(mountTable.getOwnerName())
>       && !isMemberOfGroup(mountTable.getGroupName())
>       && mode.getOtherAction().implies(access)) {
>     return;
>   }
>   throw new AccessControlException(
>       "Permission denied while accessing mount table "
>       + mountTable.getSourcePath()
>       + ": user " + getUser() + " does not have " + access.toString()
>       + " permissions.");
> }
> {code}
> I propose to revoke the WRITE MountTableEntry privilege to the super user only.
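The quoted comment and code above can be made concrete with a minimal, self-contained sketch. The class and method names below are hypothetical stand-ins (not the real MountTableStoreImpl): the point is that a mode-based check consults data the client itself supplied, so a caller can grant itself WRITE, while a superuser-only check cannot be influenced by the request.

```java
// Minimal sketch of the bypass (hypothetical names, not the real Router
// classes): the mount-table mode travels in the client's request, so a
// caller can submit a world-writable entry and pass the mode-based check;
// a superuser-only check ignores the client-supplied mode entirely.
public class MountTablePermissionSketch {

    // Mode-based check: otherAction comes from the entry the client
    // itself submitted, e.g. 'w' grants WRITE to everyone.
    static boolean modeBasedCheck(String user, String owner, char otherAction) {
        return user.equals(owner) || otherAction == 'w';
    }

    // Proposed check: only the configured super user may write.
    static boolean superUserOnlyCheck(String user, String superUser) {
        return user.equals(superUser);
    }

    public static void main(String[] args) {
        // Attacker submits a world-writable entry it does not own.
        System.out.println(modeBasedCheck("mallory", "hdfs", 'w'));  // true: bypassed
        System.out.println(superUserOnlyCheck("mallory", "hdfs"));   // false: denied
    }
}
```

This is why the proposal drops the mode-based branches rather than trying to validate the client-supplied mode.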
[jira] [Updated] (HDFS-15053) RBF: Add permission check for safemode operation
[ https://issues.apache.org/jira/browse/HDFS-15053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He updated HDFS-15053: --- Attachment: HDFS-15053.001.patch Status: Patch Available (was: Open) Submitted the initial patch adding a permission check for safemode operations; only the super user has this privilege. > RBF: Add permission check for safemode operation > > > Key: HDFS-15053 > URL: https://issues.apache.org/jira/browse/HDFS-15053 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15053.001.patch > > > Propose to add a superuser permission check for the safemode operation.
[jira] [Created] (HDFS-15053) RBF: Add permission check for safemode operation
Xiaoqiao He created HDFS-15053: -- Summary: RBF: Add permission check for safemode operation Key: HDFS-15053 URL: https://issues.apache.org/jira/browse/HDFS-15053 Project: Hadoop HDFS Issue Type: Sub-task Components: rbf Reporter: Xiaoqiao He Assignee: Xiaoqiao He Propose to add superuser permission check for safemode operation.
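As a rough illustration of what a superuser-only gate on safemode operations could look like (all names here are hypothetical stand-ins, not the HDFS-15053 patch; Hadoop's real AccessControlException lives in org.apache.hadoop.security):

```java
// Stand-in exception class so the sketch is self-contained.
class AccessControlException extends Exception {
    AccessControlException(String msg) { super(msg); }
}

// Hedged sketch: reject safemode requests from anyone but the super user.
public class SafemodeGuardSketch {

    static void checkSuperuserPrivilege(String remoteUser, String superUser)
            throws AccessControlException {
        if (!remoteUser.equals(superUser)) {
            throw new AccessControlException(
                "Access denied for user " + remoteUser
                    + ": superuser privilege is required");
        }
    }

    // Called at the safemode RPC entry point before doing anything.
    static boolean enterSafeMode(String remoteUser, String superUser) {
        try {
            checkSuperuserPrivilege(remoteUser, superUser);
            return true;  // request accepted
        } catch (AccessControlException e) {
            return false; // request rejected for non-superusers
        }
    }
}
```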
[jira] [Updated] (HDFS-15050) Optimize log information when DFSInputStream meet CannotObtainBlockLengthException
[ https://issues.apache.org/jira/browse/HDFS-15050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-15050: --- Fix Version/s: 2.10.1 2.9.3 > Optimize log information when DFSInputStream meet > CannotObtainBlockLengthException > -- > > Key: HDFS-15050 > URL: https://issues.apache.org/jira/browse/HDFS-15050 > Project: Hadoop HDFS > Issue Type: Improvement > Components: dfsclient >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Fix For: 3.3.0, 2.9.3, 3.1.4, 3.2.2, 2.10.1 > > Attachments: HDFS-15050.001.patch > > > We cannot easily identify which file is affected when DFSInputStream meets > CannotObtainBlockLengthException, as in the following exception log. Suggest > logging the file path when we meet CannotObtainBlockLengthException.
> {code:java}
> Caused by: java.io.IOException: Cannot obtain block length for
> LocatedBlock{BP-***:blk_***_***; getBlockSize()=690504; corrupt=false;
> offset=1811939328;
> locs=[DatanodeInfoWithStorage[*:50010,DS-2bcadcc4-458a-45c6-a91b-8461bf7cdd71,DISK],
> DatanodeInfoWithStorage[*:50010,DS-8f2bb259-ecb2-4839-8769-4a0523360d58,DISK],
> DatanodeInfoWithStorage[*:50010,DS-69f4de6f-2428-42ff-9486-98c2544b1ada,DISK]]}
>     at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:402)
>     at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:345)
>     at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:280)
>     at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:272)
>     at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1664)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:300)
>     at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>     at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:266)
>     at org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:481)
>     at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:828)
>     at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
>     at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>     at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
>     ... 16 more
> {code}
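The proposed improvement boils down to carrying the file path in the exception message so an operator can tell which file is affected. A hedged sketch with simplified stand-in types (not the actual DFSInputStream change; `src` is an assumed variable name for the file path):

```java
import java.io.IOException;

// Stand-in exception class so the sketch is self-contained; the real one
// is org.apache.hadoop.hdfs.CannotObtainBlockLengthException.
class CannotObtainBlockLengthException extends IOException {
    CannotObtainBlockLengthException(String msg) { super(msg); }
}

public class BlockLengthLogSketch {

    // Before: the message names only the block, not the file.
    static IOException without(String locatedBlock) {
        return new CannotObtainBlockLengthException(
            "Cannot obtain block length for " + locatedBlock);
    }

    // After: the message also names the file being opened, so the log
    // line in the stack trace above becomes actionable.
    static IOException with(String locatedBlock, String src) {
        return new CannotObtainBlockLengthException(
            "Cannot obtain block length for " + locatedBlock + " of " + src);
    }
}
```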
[jira] [Updated] (HDFS-13511) Provide specialized exception when block length cannot be obtained
[ https://issues.apache.org/jira/browse/HDFS-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-13511: --- Fix Version/s: 2.10.1 2.9.3 > Provide specialized exception when block length cannot be obtained > -- > > Key: HDFS-13511 > URL: https://issues.apache.org/jira/browse/HDFS-13511 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Gabor Bota >Priority: Major > Fix For: 3.2.0, 3.1.1, 2.9.3, 2.10.1 > > Attachments: HDFS-13511.001.patch, HDFS-13511.002.patch, > HDFS-13511.003.patch > > > In a downstream project, I saw the following code:
> {code}
> FSDataInputStream inputStream = hdfs.open(new Path(path));
> ...
> if (options.getRecoverFailedOpen() && dfs != null && e.getMessage().toLowerCase()
>     .startsWith("cannot obtain block length for")) {
> {code}
> The above tightly depends on the following in DFSInputStream#readBlockLength:
> {code}
> throw new IOException("Cannot obtain block length for " + locatedblock);
> {code}
> The check based on string matching is brittle in production deployments.
> After discussing with [~ste...@apache.org], the better approach is to introduce a
> specialized IOException, e.g. CannotObtainBlockLengthException, so that
> downstream projects don't have to rely on string matching.
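The brittleness of the string match quoted above can be shown directly: if the message wording ever changes, the matcher silently stops recovering, while a catch or instanceof keyed to a dedicated exception type keeps working. A minimal sketch with a stand-in exception class (not the real org.apache.hadoop.hdfs one):

```java
import java.io.IOException;

// Stand-in class, not the real org.apache.hadoop.hdfs exception.
class CannotObtainBlockLengthException extends IOException {
    CannotObtainBlockLengthException(String msg) { super(msg); }
}

public class RecoveryDecisionSketch {

    // Brittle: mirrors the downstream code quoted above; breaks the
    // moment the message wording changes.
    static boolean recoverByMessage(IOException e) {
        return e.getMessage().toLowerCase()
            .startsWith("cannot obtain block length for");
    }

    // Robust: keyed to the exception type, not its text.
    static boolean recoverByType(IOException e) {
        return e instanceof CannotObtainBlockLengthException;
    }

    public static void main(String[] args) {
        // Same failure, but with reworded message text.
        IOException reworded =
            new CannotObtainBlockLengthException("Unable to determine block length");
        System.out.println(recoverByMessage(reworded)); // false: matcher misses it
        System.out.println(recoverByType(reworded));    // true: type still matches
    }
}
```

A catch clause on the specialized type (`catch (CannotObtainBlockLengthException e)`) achieves the same thing without any message inspection.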
[jira] [Commented] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong
[ https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994528#comment-16994528 ] Hadoop QA commented on HDFS-14519: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 5m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.10 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 4s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} branch-2.10 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} branch-2.10 passed with JDK v1.8.0_222 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 26s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s{color} | {color:green} branch-2.10 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} branch-2.10 passed with JDK v1.8.0_222 
{color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 53s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 0s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 0s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 0s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_222. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 0s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_222. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 1m 0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 59s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:a969cad0a12 | | JIRA Issue | HDFS-14519 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12988571/HDFS-14519-branch-2.10-003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 766857fd97ec 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-2.10 / a969cad | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_222 | | Multi-JDK versions | /usr/lib/jvm/
[jira] [Commented] (HDFS-15050) Optimize log information when DFSInputStream meet CannotObtainBlockLengthException
[ https://issues.apache.org/jira/browse/HDFS-15050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994506#comment-16994506 ] Hudson commented on HDFS-15050: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17755 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17755/]) HDFS-15050. Optimize log information when DFSInputStream meet (weichiu: rev 0e28cd8f63615ed2f1183f27efb5c2aaf6aa) * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/CannotObtainBlockLengthException.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java > Optimize log information when DFSInputStream meet > CannotObtainBlockLengthException > -- > > Key: HDFS-15050 > URL: https://issues.apache.org/jira/browse/HDFS-15050 > Project: Hadoop HDFS > Issue Type: Improvement > Components: dfsclient >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Fix For: 3.3.0, 3.1.4, 3.2.2 > > Attachments: HDFS-15050.001.patch > > > We cannot easily identify which file is affected when DFSInputStream meets > CannotObtainBlockLengthException, as in the following exception log. Suggest > logging the file path when we meet CannotObtainBlockLengthException.
> {code:java}
> Caused by: java.io.IOException: Cannot obtain block length for
> LocatedBlock{BP-***:blk_***_***; getBlockSize()=690504; corrupt=false;
> offset=1811939328;
> locs=[DatanodeInfoWithStorage[*:50010,DS-2bcadcc4-458a-45c6-a91b-8461bf7cdd71,DISK],
> DatanodeInfoWithStorage[*:50010,DS-8f2bb259-ecb2-4839-8769-4a0523360d58,DISK],
> DatanodeInfoWithStorage[*:50010,DS-69f4de6f-2428-42ff-9486-98c2544b1ada,DISK]]}
>     at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:402)
>     at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:345)
>     at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:280)
>     at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:272)
>     at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1664)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:300)
>     at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>     at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:266)
>     at org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:481)
>     at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:828)
>     at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
>     at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>     at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
>     ... 16 more
> {code}
[jira] [Commented] (HDFS-15003) RBF: Make Router support storage type quota.
[ https://issues.apache.org/jira/browse/HDFS-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994500#comment-16994500 ] Hadoop QA commented on HDFS-15003: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 56s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 43s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 27s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 | | JIRA Issue | HDFS-15003 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12988657/HDFS-15003.005.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c977ea84c085 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 93bb368 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_222 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/28515/testReport/ | | Max. process+thread count | 2789 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/28515/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > RBF: Make Router support storage type quota. > > > Key: HDFS-15003 > URL: http
[jira] [Updated] (HDFS-15050) Optimize log information when DFSInputStream meet CannotObtainBlockLengthException
[ https://issues.apache.org/jira/browse/HDFS-15050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-15050: --- Fix Version/s: 3.2.2 3.1.4 3.3.0 Resolution: Fixed Status: Resolved (was: Patch Available) Thanks [~hexiaoqiao] for contributing the patch. I've committed the patch to trunk, branch-3.2, and branch-3.1. > Optimize log information when DFSInputStream meet > CannotObtainBlockLengthException > -- > > Key: HDFS-15050 > URL: https://issues.apache.org/jira/browse/HDFS-15050 > Project: Hadoop HDFS > Issue Type: Improvement > Components: dfsclient >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Fix For: 3.3.0, 3.1.4, 3.2.2 > > Attachments: HDFS-15050.001.patch > > > We cannot easily identify which file is affected when DFSInputStream meets > CannotObtainBlockLengthException, as in the following exception log. Suggest > logging the file path when we meet CannotObtainBlockLengthException.
> {code:java}
> Caused by: java.io.IOException: Cannot obtain block length for
> LocatedBlock{BP-***:blk_***_***; getBlockSize()=690504; corrupt=false;
> offset=1811939328;
> locs=[DatanodeInfoWithStorage[*:50010,DS-2bcadcc4-458a-45c6-a91b-8461bf7cdd71,DISK],
> DatanodeInfoWithStorage[*:50010,DS-8f2bb259-ecb2-4839-8769-4a0523360d58,DISK],
> DatanodeInfoWithStorage[*:50010,DS-69f4de6f-2428-42ff-9486-98c2544b1ada,DISK]]}
>     at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:402)
>     at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:345)
>     at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:280)
>     at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:272)
>     at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1664)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:300)
>     at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>     at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:266)
>     at org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:481)
>     at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:828)
>     at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
>     at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>     at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
>     ... 16 more
> {code}
[jira] [Commented] (HDFS-15012) NN fails to parse Edit logs after applying HDFS-13101
[ https://issues.apache.org/jira/browse/HDFS-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994397#comment-16994397 ] Hadoop QA commented on HDFS-15012: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 13s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}102m 24s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}164m 6s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 | | JIRA Issue | HDFS-15012 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12988642/HDFS-15012.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5b7f69772808 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 93bb368 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_222 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/28513/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/28513/testReport/ | | Max. process+thread count | 2814 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/28513/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message w
[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
[ https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994393#comment-16994393 ] Xiaoqiao He commented on HDFS-15051:

+ RouterStateManager, which has no access control at all.

> RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
> -------------------------------------------------------------------------
>
>                 Key: HDFS-15051
>                 URL: https://issues.apache.org/jira/browse/HDFS-15051
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rbf
>            Reporter: Xiaoqiao He
>            Assignee: Xiaoqiao He
>            Priority: Major
>         Attachments: HDFS-15051.001.patch
>
> The current permission checker of #MountTableStoreImpl is not very strict. In some cases, any user could add/update/remove a MountTableEntry without the expected permission check.
> The following code segment tries to check permission when operating on a MountTableEntry; however, the mountTable object comes from the Client/RouterAdmin ({{MountTable mountTable = request.getEntry();}}), so the user can pass any mode and thereby bypass the permission checker.
> {code:java}
> public void checkPermission(MountTable mountTable, FsAction access)
>     throws AccessControlException {
>   if (isSuperUser()) {
>     return;
>   }
>   FsPermission mode = mountTable.getMode();
>   if (getUser().equals(mountTable.getOwnerName())
>       && mode.getUserAction().implies(access)) {
>     return;
>   }
>   if (isMemberOfGroup(mountTable.getGroupName())
>       && mode.getGroupAction().implies(access)) {
>     return;
>   }
>   if (!getUser().equals(mountTable.getOwnerName())
>       && !isMemberOfGroup(mountTable.getGroupName())
>       && mode.getOtherAction().implies(access)) {
>     return;
>   }
>   throw new AccessControlException(
>       "Permission denied while accessing mount table "
>           + mountTable.getSourcePath()
>           + ": user " + getUser() + " does not have " + access.toString()
>           + " permissions.");
> }
> {code}
> I propose to revoke the WRITE MountTableEntry privilege to super user only.
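To make the bypass described above concrete, here is a minimal, self-contained Java sketch. All class, enum, and field names here are invented simplifications for illustration, not the real org.apache.hadoop RBF types; the point is only that a check which trusts the mode stored in a caller-supplied entry can be defeated by crafting a world-writable mode:

```java
// Hypothetical simplification of the flawed mount-table permission check.
// Names (MountPermissionDemo, MountEntry, Action) are invented for this
// sketch and do not exist in Hadoop.
public class MountPermissionDemo {

    // 3-bit rwx-style actions, mirroring FsAction semantics.
    enum Action {
        NONE(0), WRITE(2), READ(4), READ_WRITE(6), ALL(7);
        final int bits;
        Action(int bits) { this.bits = bits; }
        boolean implies(Action other) { return (bits & other.bits) == other.bits; }
    }

    // The entry carries owner, group, and mode -- all supplied by the client
    // in the RPC request, which is exactly the problem.
    static class MountEntry {
        final String owner, group;
        final Action user, groupAction, other;
        MountEntry(String owner, String group, Action u, Action g, Action o) {
            this.owner = owner; this.group = group;
            this.user = u; this.groupAction = g; this.other = o;
        }
    }

    // Mirrors the flawed logic: it evaluates the mode stored inside the
    // entry that the caller itself constructed (group membership elided).
    static boolean checkPermission(String caller, MountEntry e, Action access) {
        if (caller.equals(e.owner) && e.user.implies(access)) {
            return true;
        }
        if (!caller.equals(e.owner) && e.other.implies(access)) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // A non-owner crafts an entry that claims to be owned by "admin"
        // but carries a world-writable mode, so the "other" branch lets
        // the WRITE through.
        MountEntry crafted = new MountEntry("admin", "admins",
                Action.ALL, Action.READ, Action.ALL);
        System.out.println(checkPermission("mallory", crafted, Action.WRITE));
    }
}
```

Running this prints `true`: the write is allowed even though "mallory" is neither the owner nor a superuser, which is why the patch restricts WRITE on mount table entries to the superuser instead of trusting the entry's own mode.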
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15003) RBF: Make Router support storage type quota.
[ https://issues.apache.org/jira/browse/HDFS-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16994379#comment-16994379 ] Jinglun commented on HDFS-15003:

Hi [~elgoiri], sorry for my late response. Yes, the failed unit test is related. I fixed it and uploaded v05.

> RBF: Make Router support storage type quota.
> --------------------------------------------
>
>                 Key: HDFS-15003
>                 URL: https://issues.apache.org/jira/browse/HDFS-15003
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Jinglun
>            Assignee: Jinglun
>            Priority: Major
>         Attachments: HDFS-15003.001.patch, HDFS-15003.002.patch, HDFS-15003.003.patch, HDFS-15003.004.patch, HDFS-15003.005.patch
>
> Make Router support storage type quota.
[jira] [Updated] (HDFS-15003) RBF: Make Router support storage type quota.
[ https://issues.apache.org/jira/browse/HDFS-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinglun updated HDFS-15003:
---
    Attachment: HDFS-15003.005.patch

> RBF: Make Router support storage type quota.
> --------------------------------------------
>
>                 Key: HDFS-15003
>                 URL: https://issues.apache.org/jira/browse/HDFS-15003
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Jinglun
>            Assignee: Jinglun
>            Priority: Major
>         Attachments: HDFS-15003.001.patch, HDFS-15003.002.patch, HDFS-15003.003.patch, HDFS-15003.004.patch, HDFS-15003.005.patch
>
> Make Router support storage type quota.