[jira] [Commented] (HDFS-11779) Ozone: KSM: add listBuckets
[ https://issues.apache.org/jira/browse/HDFS-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024291#comment-16024291 ]

Weiwei Yang commented on HDFS-11779:
------------------------------------

Thanks [~xyao] for the quick reply, I am fine with that. Let's stick to the design then.

> Ozone: KSM: add listBuckets
> ---
>
> Key: HDFS-11779
> URL: https://issues.apache.org/jira/browse/HDFS-11779
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Anu Engineer
> Assignee: Weiwei Yang
>
> Lists buckets of a given volume. Similar to listVolumes, paging supported via
> prevKey, prefix and maxKeys.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11421) Make WebHDFS' ACLs RegEx configurable
[ https://issues.apache.org/jira/browse/HDFS-11421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HDFS-11421:
-----------------------------
    Attachment: HDFS-11421.branch-2.003.patch

branch-2 patch 3 has [a green jenkins run on HDFS-11876|https://issues.apache.org/jira/browse/HDFS-11876?focusedCommentId=16024235&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16024235] (with the asf license -1 being unrelated), so I think it's good to commit. But since I made some minor style changes, could you take a look, [~qwertymaniac]? Thanks a lot!

> Make WebHDFS' ACLs RegEx configurable
> ---
>
> Key: HDFS-11421
> URL: https://issues.apache.org/jira/browse/HDFS-11421
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: webhdfs
> Affects Versions: 2.6.0
> Reporter: Harsh J
> Assignee: Harsh J
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11421.000.patch, HDFS-11421-branch-2.000.patch,
> HDFS-11421.branch-2.001.patch, HDFS-11421.branch-2.003.patch
>
>
> Part of HDFS-5608 added support for GET/SET ACLs over WebHDFS. This currently
> identifies the passed arguments via a hard-coded regex that mandates certain
> group and user naming styles.
> A similar limitation had existed before for CHOWN and other User/Group set
> related operations of WebHDFS, where it was then made configurable via
> HDFS-11391 + HDFS-4983.
> Such configurability should be allowed for the ACL operations too.
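The idea in HDFS-11421 is to replace a hard-coded validation regex with one read from configuration, falling back to the old pattern as the default. The sketch below illustrates that pattern in plain Java; the key name, default regex, and use of `java.util.Properties` in place of Hadoop's `Configuration` are all illustrative assumptions, not the actual patch.

```java
import java.util.Properties;
import java.util.regex.Pattern;

// Hypothetical sketch of a configurable ACL-spec regex: the key name and
// default pattern below are illustrative, not the real WebHDFS ones.
public class AclRegexConfig {
  static final String KEY = "dfs.webhdfs.acl.permission.pattern"; // hypothetical key
  static final String DEFAULT =
      "(default:)?(user|group|mask|other):[\\w-]*:([rwx-]{3})?";

  // Reads the pattern from configuration, falling back to the old
  // hard-coded regex when the key is unset.
  public static Pattern aclPattern(Properties conf) {
    return Pattern.compile(conf.getProperty(KEY, DEFAULT));
  }
}
```

A deployment with unusual user or group naming conventions could then override the key instead of being rejected by the built-in pattern.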
[jira] [Commented] (HDFS-11779) Ozone: KSM: add listBuckets
[ https://issues.apache.org/jira/browse/HDFS-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024276#comment-16024276 ]

Xiaoyu Yao commented on HDFS-11779:
-----------------------------------

Thanks [~cheersyang] for picking up this work. According to the design spec, the Ozone REST APIs for volume list and bucket list are both defined with prefix-based filtering and paging support.

In the latest KSM spec, we don't have a restriction on the number of buckets per volume, so it is good to have paging and filtering to scale to a large number of buckets. The test implementation OzoneMetadataManager#listBuckets does have a TODO for that, but I think we should implement it for KSM.

{code}
// TODO : Query using Prefix and PrevKey
{code}
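The prevKey/prefix/maxKeys paging discussed above can be sketched over a lexicographically sorted bucket index. This is a minimal illustration with hypothetical names, not the KSM implementation: `prevKey` resumes the scan strictly after the last returned name, `prefix` filters, and `maxKeys` caps the page size.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableSet;
import java.util.TreeSet;

// Illustrative sketch of prefix + prevKey + maxKeys paging over a sorted
// set of bucket names (stand-in for a sorted metadata store scan).
public class BucketPager {
  public static List<String> listBuckets(TreeSet<String> buckets,
      String prefix, String prevKey, int maxKeys) {
    List<String> page = new ArrayList<>();
    // Resume strictly after prevKey so a client can page through results.
    NavigableSet<String> tail =
        (prevKey == null) ? buckets : buckets.tailSet(prevKey, false);
    for (String name : tail) {
      if (prefix != null && !name.startsWith(prefix)) {
        continue; // skip names outside the requested prefix
      }
      page.add(name);
      if (page.size() >= maxKeys) {
        break; // caller passes the last name back as the next prevKey
      }
    }
    return page;
  }
}
```

A client would call this repeatedly, feeding the last bucket name of each page back in as `prevKey` until the returned page is smaller than `maxKeys`.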
[jira] [Updated] (HDFS-11876) Make WebHDFS' ACLs RegEx configurable Testing
[ https://issues.apache.org/jira/browse/HDFS-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HDFS-11876:
-----------------------------
    Resolution: Duplicate
        Status: Resolved  (was: Patch Available)

Closing as dup now that jenkins is back.

> Make WebHDFS' ACLs RegEx configurable Testing
> ---
>
> Key: HDFS-11876
> URL: https://issues.apache.org/jira/browse/HDFS-11876
> Project: Hadoop HDFS
> Issue Type: Test
> Reporter: Xiao Chen
> Assignee: Xiao Chen
> Priority: Trivial
> Attachments: HDFS-11421.branch-2.001.patch,
> HDFS-11421.branch-2.003.patch, HDFS-11876.branch-2.002.patch
>
>
> See HDFS-11421, running branch-2 test here. (Because we can't seem to trigger
> the asf bot to run branch-2 tests if a jira is associated with github.)
[jira] [Commented] (HDFS-11882) TestDFSRSDefault10x4StripedOutputStreamWithFailure and TestDFSStripedOutputStreamWithFailure010 fail
[ https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024266#comment-16024266 ]

Akira Ajisaka commented on HDFS-11882:
--------------------------------------

Note that the precommit Jenkins job does not pick up the two failing tests, because the tests are in the hadoop-hdfs module and that module is not modified by the patch.

> TestDFSRSDefault10x4StripedOutputStreamWithFailure and
> TestDFSStripedOutputStreamWithFailure010 fail
> ---
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: erasure-coding, test
> Reporter: Akira Ajisaka
> Assignee: Akira Ajisaka
> Attachments: HDFS-11882.01.patch
>
>
> {noformat}
> Running org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec <<< FAILURE!
>  - in org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure)
>   Time elapsed: 38.831 sec <<< ERROR!
> java.lang.IllegalStateException: null
> 	at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
> 	at org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
> 	at org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
> 	at org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
> 	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
> 	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> 	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> 	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
> 	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381)
> 	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}
[jira] [Commented] (HDFS-11882) TestDFSRSDefault10x4StripedOutputStreamWithFailure and TestDFSStripedOutputStreamWithFailure010 fail
[ https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024253#comment-16024253 ]

Hadoop QA commented on HDFS-11882:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 25s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 15m 40s | trunk passed |
| +1 | compile | 0m 31s | trunk passed |
| +1 | checkstyle | 0m 18s | trunk passed |
| +1 | mvnsite | 0m 34s | trunk passed |
| +1 | mvneclipse | 0m 14s | trunk passed |
| +1 | findbugs | 1m 23s | trunk passed |
| +1 | javadoc | 0m 21s | trunk passed |
| +1 | mvninstall | 0m 31s | the patch passed |
| +1 | compile | 0m 28s | the patch passed |
| +1 | javac | 0m 28s | the patch passed |
| +1 | checkstyle | 0m 12s | the patch passed |
| +1 | mvnsite | 0m 31s | the patch passed |
| +1 | mvneclipse | 0m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 27s | the patch passed |
| +1 | javadoc | 0m 18s | the patch passed |
| +1 | unit | 1m 10s | hadoop-hdfs-client in the patch passed. |
| +1 | asflicense | 0m 18s | The patch does not generate ASF License warnings. |
| | | 25m 54s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11882 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869805/HDFS-11882.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 9cb1b31d6ba2 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d049bd2 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19609/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19609/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDFS-11883) [SPS] : Handle NPE in BlockStorageMovementTracker when dropSPSWork() called
[ https://issues.apache.org/jira/browse/HDFS-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024249#comment-16024249 ]

Rakesh R commented on HDFS-11883:
---------------------------------

bq. I feel one null check is enough for blocksMoving in BlockStorageMovementTracker#run()

Good catch. You are correct: the {{moverTaskFutures}} map is getting cleared during {{#dropSPSWork()}}. Please go ahead with the fix.

> [SPS] : Handle NPE in BlockStorageMovementTracker when dropSPSWork() called
> ---
>
> Key: HDFS-11883
> URL: https://issues.apache.org/jira/browse/HDFS-11883
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode
> Affects Versions: HDFS-10285
> Reporter: Surendra Singh Lilhore
> Assignee: Surendra Singh Lilhore
>
> {noformat}
> Exception in thread "BlockStorageMovementTracker"
> java.lang.NullPointerException
> 	at org.apache.hadoop.hdfs.server.datanode.BlockStorageMovementTracker.run(BlockStorageMovementTracker.java:91)
> 	at java.lang.Thread.run(Thread.java:745)
> {noformat}
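The race discussed above can be sketched in isolation: `dropSPSWork()` clears the shared futures map while the tracker thread is still polling it, so the tracker must null-check the entry it fetches. This is an illustrative stand-in with hypothetical method names, not the actual BlockStorageMovementTracker code.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Future;

// Minimal sketch of the guard discussed above: when dropSPSWork() clears
// the shared map, the tracker's polling loop may get a null entry, and a
// null check avoids the NullPointerException.
public class TrackerSketch {
  private final Map<Long, List<Future<Void>>> moverTaskFutures =
      new ConcurrentHashMap<>();

  public void addTrack(long trackId, List<Future<Void>> futures) {
    moverTaskFutures.put(trackId, futures);
  }

  // One iteration of the tracker loop for a given track id.
  public String pollOnce(long trackId) {
    List<Future<Void>> blocksMoving = moverTaskFutures.get(trackId);
    if (blocksMoving == null) {
      // dropSPSWork() cleared the map concurrently; nothing to track.
      return "dropped";
    }
    return "tracking " + blocksMoving.size() + " moves";
  }

  public void dropSPSWork() {
    moverTaskFutures.clear();
  }
}
```

The single null check on the fetched list covers both the "never tracked" and the "cleared mid-flight" cases, which matches the suggestion that one check in `run()` is enough.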
[jira] [Comment Edited] (HDFS-11779) Ozone: KSM: add listBuckets
[ https://issues.apache.org/jira/browse/HDFS-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024219#comment-16024219 ]

Weiwei Yang edited comment on HDFS-11779 at 5/25/17 6:09 AM:
-------------------------------------------------------------

I studied Amazon S3 a bit. According to the S3 documentation [http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html#listBuckets--], the {{listBuckets}} API simply returns a list of buckets.

{code}
public List listBuckets() throws SdkClientException, AmazonServiceException
{code}

Do we really need to support pagination for buckets? There won't be too many buckets in a volume, so I propose using a signature similar to S3's.

{code}
/**
 * Returns a list of all buckets in the given volume.
 *
 * @param volumeName name of the volume.
 * @return a list of buckets.
 * @throws IOException
 */
public List listBuckets(String volumeName) throws IOException;
{code}

[~xyao], [~anu], does this make sense to you? Thank you.
[jira] [Updated] (HDFS-11882) TestDFSRSDefault10x4StripedOutputStreamWithFailure and TestDFSStripedOutputStreamWithFailure010 fail
[ https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HDFS-11882:
---------------------------------
    Summary: TestDFSRSDefault10x4StripedOutputStreamWithFailure and TestDFSStripedOutputStreamWithFailure010 fail  (was: TestDFSRSDefault10x4StripedOutputStreamWithFailure fails)
[jira] [Commented] (HDFS-11882) TestDFSRSDefault10x4StripedOutputStreamWithFailure and TestDFSStripedOutputStreamWithFailure010 fail
[ https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024237#comment-16024237 ]

Akira Ajisaka commented on HDFS-11882:
--------------------------------------

This patch fixes org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010.test9 as well.
[jira] [Commented] (HDFS-11876) Make WebHDFS' ACLs RegEx configurable Testing
[ https://issues.apache.org/jira/browse/HDFS-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024235#comment-16024235 ]

Hadoop QA commented on HDFS-11876:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 20s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|  0 | mvndep | 2m 9s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 12s | branch-2 passed |
| +1 | compile | 1m 18s | branch-2 passed with JDK v1.8.0_131 |
| +1 | compile | 1m 22s | branch-2 passed with JDK v1.7.0_131 |
| +1 | checkstyle | 0m 32s | branch-2 passed |
| +1 | mvnsite | 1m 22s | branch-2 passed |
| +1 | mvneclipse | 0m 24s | branch-2 passed |
| +1 | findbugs | 3m 42s | branch-2 passed |
| +1 | javadoc | 0m 54s | branch-2 passed with JDK v1.8.0_131 |
| +1 | javadoc | 1m 20s | branch-2 passed with JDK v1.7.0_131 |
|  0 | mvndep | 0m 7s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 8s | the patch passed |
| +1 | compile | 1m 11s | the patch passed with JDK v1.8.0_131 |
| +1 | javac | 1m 11s | hadoop-hdfs-project-jdk1.8.0_131 with JDK v1.8.0_131 generated 0 new + 79 unchanged - 2 fixed = 79 total (was 81) |
| +1 | compile | 1m 18s | the patch passed with JDK v1.7.0_131 |
| +1 | javac | 1m 18s | hadoop-hdfs-project-jdk1.7.0_131 with JDK v1.7.0_131 generated 0 new + 81 unchanged - 2 fixed = 81 total (was 83) |
| +1 | checkstyle | 0m 30s | hadoop-hdfs-project: The patch generated 0 new + 168 unchanged - 1 fixed = 168 total (was 169) |
| +1 | mvnsite | 1m 17s | the patch passed |
| +1 | mvneclipse | 0m 23s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | findbugs | 4m 1s | the patch passed |
| +1 | javadoc | 0m 51s | the patch passed with JDK v1.8.0_131 |
| +1 | javadoc | 1m 14s | the patch passed with JDK v1.7.0_131 |
| +1 | unit | 1m 16s | hadoop-hdfs-client in the patch passed with JDK v1.7.0_131. |
| -1 | unit | 56m 1s | hadoop-hdfs in the patch failed with JDK v1.7.0_131. |
| -1 | asflicense | 0m 19s | The patch generated 1 ASF License warnings. |
[jira] [Commented] (HDFS-11883) [SPS] : Handle NPE in BlockStorageMovementTracker when dropSPSWork() called
[ https://issues.apache.org/jira/browse/HDFS-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024233#comment-16024233 ]

Surendra Singh Lilhore commented on HDFS-11883:
-----------------------------------------------

I feel one null check is enough for {{blocksMoving}} in BlockStorageMovementTracker#run(). [~umamaheswararao] and [~rakeshr], what do you think?
[jira] [Created] (HDFS-11883) [SPS] : Handle NPE in BlockStorageMovementTracker when dropSPSWork() called
Surendra Singh Lilhore created HDFS-11883:
------------------------------------------

             Summary: [SPS] : Handle NPE in BlockStorageMovementTracker when dropSPSWork() called
                 Key: HDFS-11883
                 URL: https://issues.apache.org/jira/browse/HDFS-11883
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: datanode
    Affects Versions: HDFS-10285
            Reporter: Surendra Singh Lilhore
            Assignee: Surendra Singh Lilhore

{noformat}
Exception in thread "BlockStorageMovementTracker" java.lang.NullPointerException
	at org.apache.hadoop.hdfs.server.datanode.BlockStorageMovementTracker.run(BlockStorageMovementTracker.java:91)
	at java.lang.Thread.run(Thread.java:745)
{noformat}
[jira] [Commented] (HDFS-11771) Ozone: KSM: Add checkVolumeAccess
[ https://issues.apache.org/jira/browse/HDFS-11771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024226#comment-16024226 ]

Xiaoyu Yao commented on HDFS-11771:
-----------------------------------

Thanks [~msingh] for working on this. The current implementation assumes only the volume owner can pass checkVolumeAccess. As documented in the KSM spec, that was only an implementation used for testing purposes: "We need to support the full ACL set that is specified in the rest protocol. The entities supported are user, group and world."

> Ozone: KSM: Add checkVolumeAccess
> ---
>
> Key: HDFS-11771
> URL: https://issues.apache.org/jira/browse/HDFS-11771
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Anu Engineer
> Assignee: Mukul Kumar Singh
> Attachments: HDFS-11771-HDFS-7240.001.patch
>
>
> Checks if the caller has access to a given volume. This call supports the
> ACLs specified in the ozone rest protocol documentation.
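The user/group/world semantics quoted above can be sketched as a simple match over a volume's ACL entries. This is an illustrative stand-in (hypothetical types and names), not the KSM implementation: access is granted if any entry names the caller, one of the caller's groups, or world.

```java
import java.util.List;

// Illustrative sketch of checkVolumeAccess semantics for the three ACL
// entity kinds described in the comment above: user, group, and world.
public class VolumeAclCheck {
  public enum Type { USER, GROUP, WORLD }

  public static final class Acl {
    final Type type;
    final String name; // unused for WORLD entries

    public Acl(Type type, String name) {
      this.type = type;
      this.name = name;
    }
  }

  public static boolean checkVolumeAccess(List<Acl> volumeAcls,
      String user, List<String> groups) {
    for (Acl acl : volumeAcls) {
      switch (acl.type) {
      case WORLD:
        return true; // a world entry grants access to everyone
      case USER:
        if (acl.name.equals(user)) {
          return true; // caller is a named user
        }
        break;
      case GROUP:
        if (groups.contains(acl.name)) {
          return true; // caller belongs to a named group
        }
        break;
      }
    }
    return false; // no matching entry: deny
  }
}
```

A real implementation would also carry the requested access bits (read/write/etc.) per entry; this sketch only shows the entity-matching step.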
[jira] [Commented] (HDFS-11882) TestDFSRSDefault10x4StripedOutputStreamWithFailure fails
[ https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024224#comment-16024224 ] Akira Ajisaka commented on HDFS-11882: -- Hi [~andrew.wang], [~zhz] and [~walter.k.su], would you review this patch because you worked on HDFS-9342? > TestDFSRSDefault10x4StripedOutputStreamWithFailure fails > > > Key: HDFS-11882 > URL: https://issues.apache.org/jira/browse/HDFS-11882 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding, test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HDFS-11882.01.patch > > > {noformat} > Running > org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure > Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec > <<< FAILURE! - in > org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure > testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure) > Time elapsed: 38.831 sec <<< ERROR! 
> java.lang.IllegalStateException: null > at > com.google.common.base.Preconditions.checkState(Preconditions.java:129) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034) > at > org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842) > at > org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) > at > org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) > at > org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472) > at > org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381) > at > org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {noformat}
[jira] [Updated] (HDFS-11882) TestDFSRSDefault10x4StripedOutputStreamWithFailure fails
[ https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-11882: - Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-11882) TestDFSRSDefault10x4StripedOutputStreamWithFailure fails
[ https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-11882: - Attachment: HDFS-11882.01.patch Attaching a patch that skips using ackedBytes when ackedBytes is greater than sentBytes.
[jira] [Assigned] (HDFS-11882) TestDFSRSDefault10x4StripedOutputStreamWithFailure fails
[ https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reassigned HDFS-11882: Assignee: Akira Ajisaka
[jira] [Commented] (HDFS-11779) Ozone: KSM: add listBuckets
[ https://issues.apache.org/jira/browse/HDFS-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024219#comment-16024219 ] Weiwei Yang commented on HDFS-11779: I studied Amazon S3 a bit; according to the S3 documentation [http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html#listBuckets--], it simply returns a list of buckets. {code} public List<Bucket> listBuckets() throws SdkClientException, AmazonServiceException {code} Do we really need to support pagination for buckets? There won't be too many buckets in a volume, so I propose using a signature similar to S3's. {code} /** * Returns a list of all buckets in the given volume. * * @param volumeName name of the volume. * @return a list of buckets. * @throws IOException */ public List listBuckets(String volumeName) throws IOException; {code} [~xyao], [~anu], does this make sense to you? Thank you. > Ozone: KSM: add listBuckets > --- > > Key: HDFS-11779 > URL: https://issues.apache.org/jira/browse/HDFS-11779 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Weiwei Yang > > Lists buckets of a given volume. Similar to listVolumes, paging supported via > prevKey, prefix and maxKeys.
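The issue description proposes paging via prevKey, prefix and maxKeys. A minimal sketch of how a client could drain such a paged listing; the interface and method names are hypothetical, not the real KSM API:

```java
import java.util.ArrayList;
import java.util.List;

public class ListBucketsPaging {
  interface BucketLister {
    // Returns up to maxKeys bucket names after prevKey that start with prefix;
    // prevKey == null means start from the beginning.
    List<String> listBuckets(String volume, String prevKey, String prefix, int maxKeys);
  }

  // Drains all pages into one list, the way a CLI client might.
  static List<String> listAll(BucketLister lister, String volume, String prefix) {
    List<String> all = new ArrayList<>();
    String prevKey = null;
    while (true) {
      List<String> page = lister.listBuckets(volume, prevKey, prefix, 100);
      if (page.isEmpty()) {
        break;
      }
      all.addAll(page);
      prevKey = page.get(page.size() - 1);   // resume after the last key seen
    }
    return all;
  }
}
```

As the description notes, such a listing is neither atomic nor consistent, so each page reflects the state of the volume at the time of that call.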
[jira] [Comment Edited] (HDFS-11779) Ozone: KSM: add listBuckets
[ https://issues.apache.org/jira/browse/HDFS-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024219#comment-16024219 ] Weiwei Yang edited comment on HDFS-11779 at 5/25/17 5:36 AM: - I studied Amazon S3 a bit; according to the S3 documentation [http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html#listBuckets--], the {{listBuckets}} API simply returns a list of buckets. {code} public List<Bucket> listBuckets() throws SdkClientException, AmazonServiceException {code} Do we really need to support pagination for buckets? There won't be too many buckets in a volume, so I propose using a signature similar to S3's. {code} /** * Returns a list of all buckets in the given volume. * * @param volumeName name of the volume. * @return a list of buckets. * @throws IOException */ public List listBuckets(String volumeName) throws IOException; {code} [~xyao], [~anu], does this make sense to you? Thank you.
[jira] [Assigned] (HDFS-11773) Ozone: KSM : add listVolumes
[ https://issues.apache.org/jira/browse/HDFS-11773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao reassigned HDFS-11773: - Assignee: Weiwei Yang (was: Mukul Kumar Singh) > Ozone: KSM : add listVolumes > > > Key: HDFS-11773 > URL: https://issues.apache.org/jira/browse/HDFS-11773 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Weiwei Yang > > The list volume call can be used in three different contexts. One is for > administrators to list all volumes in a cluster. Second is for an > administrator to list the volumes owned by a specific user. Third is a user > listing the volumes owned by himself/herself. > Since these calls can return a large number of entries, the rest protocol > supports paging. Paging is supported by the use of prevKey, prefix and > maxKeys. The caller is aware that this call is neither atomic nor consistent, > so we can iterate over the list even while changes are happening to it.
[jira] [Commented] (HDFS-11708) positional read will fail if replicas moved to different DNs after stream is opened
[ https://issues.apache.org/jira/browse/HDFS-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024209#comment-16024209 ] Hadoop QA commented on HDFS-11708: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}137m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11708 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869785/HDFS-11708-05.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux abb01436237a 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d049bd2 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19607/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19607/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-h
[jira] [Commented] (HDFS-11669) [SPS]: Add option in "setStoragePolicy" command to satisfy the policy.
[ https://issues.apache.org/jira/browse/HDFS-11669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024201#comment-16024201 ] Surendra Singh Lilhore commented on HDFS-11669: --- bq. how about postponing this task to next phase once we finished merging the existing code to trunk code? Sure, I am OK with this. bq. One idea to reduce the locking time during recursive iteration is, acquire & release lock for each sub-dirs rather than holding lock at the root till all the sub-dirs are visited. Can we do the same as for storage policy? Set the xattr on one directory and schedule it for SPS. Later the SPS thread will recursively find all the sub-directories/files and satisfy the policy. That way we can avoid setting the xattr recursively. > [SPS]: Add option in "setStoragePolicy" command to satisfy the policy. > -- > > Key: HDFS-11669 > URL: https://issues.apache.org/jira/browse/HDFS-11669 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode, shell >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Attachments: HDFS-11669-HDFS-10285.001.patch > > > Add one new option {{-satisfypolicy}} in the {{setStoragePolicy}} command to > satisfy the storage policy. > {noformat} > hdfs storagepolicies -setStoragePolicy -path -policy > -satisfypolicy > {noformat}
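A minimal sketch of the approach suggested above: record a single "satisfy" marker (the xattr) on the root directory only, and let a background SPS thread walk the tree. All names here are illustrative assumptions, not the HDFS-10285 code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SpsSketch {
  static class Dir {
    final Map<String, Dir> children = new HashMap<>();
    boolean satisfyMarker;   // set once on the root, never on descendants
  }

  // The satisfier thread's traversal: starting from a marked root, visit every
  // directory underneath and collect work items, without having tagged each
  // inode with an xattr up front. Returns the number of directories visited.
  static int collectWork(Dir root, List<Dir> out) {
    Deque<Dir> stack = new ArrayDeque<>();
    stack.push(root);
    int visited = 0;
    while (!stack.isEmpty()) {
      Dir d = stack.pop();
      visited++;
      out.add(d);
      stack.addAll(d.children.values());
    }
    return visited;
  }
}
```

The design trade-off being discussed: one marker plus lazy traversal keeps the setStoragePolicy call cheap and avoids holding the namesystem lock across a recursive xattr update, at the cost of the satisfier discovering the subtree later.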
[jira] [Commented] (HDFS-11882) TestDFSRSDefault10x4StripedOutputStreamWithFailure fails
[ https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024199#comment-16024199 ] Akira Ajisaka commented on HDFS-11882: -- {code:title=DFSStripedOutputStream.java} // should update the block group length based on the acked length final long sentBytes = currentBlockGroup.getNumBytes(); final long ackedBytes = getNumAckedStripes() * cellSize * numDataBlocks; Preconditions.checkState(ackedBytes <= sentBytes); {code} In the above code, ackedBytes can be greater than sentBytes when some DataNodes are failing. When sentBytes is 18*64k and the cellSize is 64k, DN1~8 will have two 64k data blocks, DN9~10 will have one 64k data block, and DN11~14 will have two 64k parity blocks. In this situation, {{getNumAckedStripes()}} will return 2 if DN9 and DN10 are failing. Thus, in the test case, ackedBytes becomes 20*64k, which is greater than sentBytes.
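The failing precondition can be reproduced as plain arithmetic. This is only a worked sketch of the scenario described in the comment above (RS-10-4 default policy, 64k cells, 18 cells sent, DN9 and DN10 failing); the class name is ours:

```java
public class AckedBytesCheck {
  public static void main(String[] args) {
    final long cellSize = 64 * 1024;       // 64k cells
    final int numDataBlocks = 10;          // RS-10-4 default policy
    final long sentBytes = 18 * cellSize;  // 18 data cells written in total

    // DN1..DN8 each hold two cells; DN9 and DN10 hold one each. If DN9 and
    // DN10 fail, every surviving streamer has acked 2 stripes, so the acked
    // stripe count computed over live streamers is 2.
    final long ackedStripes = 2;
    final long ackedBytes = ackedStripes * cellSize * numDataBlocks; // 20 cells

    // 20*64k > 18*64k, which is exactly the condition that makes
    // Preconditions.checkState(ackedBytes <= sentBytes) throw in
    // DFSStripedOutputStream.updatePipeline.
    System.out.println("sentBytes=" + sentBytes + " ackedBytes=" + ackedBytes
        + " violates=" + (ackedBytes > sentBytes));
  }
}
```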
[jira] [Commented] (HDFS-11869) Backport HDFS-11078 to branch 2.7
[ https://issues.apache.org/jira/browse/HDFS-11869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024188#comment-16024188 ] Hadoop QA commented on HDFS-11869: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 58s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 55s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1265 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 33s{color} | {color:red} The patch 70 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 22s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}151m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_131 Failed junit tests | hadoop.hdfs.server.blockmanagement.TestNodeCount | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot | | | hadoop.hdfs.TestDatanodeRegistration | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | JDK v1.7.0_131 Failed junit tests | hadoop.hdfs.server.bloc
[jira] [Commented] (HDFS-11655) Ozone: CLI: Guarantees user runs SCM commands has appropriate permission
[ https://issues.apache.org/jira/browse/HDFS-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024185#comment-16024185 ] Hadoop QA commented on HDFS-11655: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 55s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 6s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}139m 12s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure | | | hadoop.tracing.TestTracing | | | hadoop.cblock.TestBufferManager | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11655 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869783/HDFS-11655-HDFS-7240.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 3473aa2cae50 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 67da8be | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/19605/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19605/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19605/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19605/console | | Powered by | Apache Yetus 0.5.0-
[jira] [Created] (HDFS-11882) TestDFSRSDefault10x4StripedOutputStreamWithFailure fails
Akira Ajisaka created HDFS-11882: Summary: TestDFSRSDefault10x4StripedOutputStreamWithFailure fails Key: HDFS-11882 URL: https://issues.apache.org/jira/browse/HDFS-11882 Project: Hadoop HDFS Issue Type: Bug Components: erasure-coding, test Reporter: Akira Ajisaka {noformat} Running org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure) Time elapsed: 38.831 sec <<< ERROR! java.lang.IllegalStateException: null at com.google.common.base.Preconditions.checkState(Preconditions.java:129) at org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780) at org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664) at org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034) at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11642) Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup
[ https://issues.apache.org/jira/browse/HDFS-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-11642: Fix Version/s: HDFS-7240 > Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup > > > Key: HDFS-11642 > URL: https://issues.apache.org/jira/browse/HDFS-11642 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Fix For: HDFS-7240 > > Attachments: HDFS-11642-HDFS-7240.001.patch > > > This was found in recent Jenkins run on HDFS-7240. > The cblock service RPC binding port (9810) was not cleaned up after test. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11642) Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup
[ https://issues.apache.org/jira/browse/HDFS-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024156#comment-16024156 ] Brahma Reddy Battula commented on HDFS-11642: - Updated the fix version. > Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup > > > Key: HDFS-11642 > URL: https://issues.apache.org/jira/browse/HDFS-11642 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Fix For: HDFS-7240 > > Attachments: HDFS-11642-HDFS-7240.001.patch > > > This was found in recent Jenkins run on HDFS-7240. > The cblock service RPC binding port (9810) was not cleaned up after test. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11832) Switch leftover logs to slf4j format in BlockManager.java
[ https://issues.apache.org/jira/browse/HDFS-11832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024124#comment-16024124 ] Hui Xu commented on HDFS-11832: --- Hi, I want to know what to do next. Should I just wait for a committer to merge the patch into 3.0.0-alpha3? Thanks! > Switch leftover logs to slf4j format in BlockManager.java > - > > Key: HDFS-11832 > URL: https://issues.apache.org/jira/browse/HDFS-11832 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.7.0, 2.8.0, 3.0.0-alpha1 >Reporter: Hui Xu >Assignee: Chen Liang >Priority: Minor > Attachments: HDFS-11832.001.patch, HDFS-11832.002.patch, > HDFS-11832.003.patch, HDFS-11832.004.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > HDFS-7706 Switch BlockManager logging to use slf4j. But the logging formats > were not modified appropriately. For example: > if (LOG.isDebugEnabled()) { > LOG.debug("blocks = " + java.util.Arrays.asList(blocks)); > } > This code should be modified to: > LOG.debug("blocks = {}", java.util.Arrays.asList(blocks)); -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
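The guard-versus-parameterized distinction from the issue description can be sketched without the slf4j dependency. `MiniLogger` below is a hypothetical stand-in for an slf4j `Logger`, used only to show why the parameterized form makes an explicit `isDebugEnabled()` guard redundant: the `{}` substitution (and the argument's `toString`) runs only when the level is enabled. Note one caveat: the argument expression itself (here `Arrays.asList`) is still evaluated at the call site; only the string formatting is deferred.

```java
import java.util.Arrays;

// Hypothetical stand-in for an slf4j Logger, for illustration only.
class MiniLogger {
    private final boolean debugEnabled;
    String lastMessage; // captured for demonstration instead of printing

    MiniLogger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }

    // slf4j-style parameterized logging: the placeholder is substituted
    // (and the argument stringified) only when debug logging is enabled.
    void debug(String format, Object arg) {
        if (!debugEnabled) {
            return; // no formatting, no toString() on arg
        }
        lastMessage = format.replace("{}", String.valueOf(arg));
    }
}

class Slf4jStyleDemo {
    public static void main(String[] args) {
        Integer[] blocks = {1, 2, 3};
        MiniLogger log = new MiniLogger(true);
        // The unguarded, parameterized call shape from the issue description:
        log.debug("blocks = {}", Arrays.asList(blocks));
        System.out.println(log.lastMessage); // blocks = [1, 2, 3]
    }
}
```

With a disabled logger the same call does no string work at all, which is exactly the effect the old `if (LOG.isDebugEnabled())` guard was hand-coding.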
[jira] [Commented] (HDFS-11876) Make WebHDFS' ACLs RegEx configurable Testing
[ https://issues.apache.org/jira/browse/HDFS-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024116#comment-16024116 ] Hadoop QA commented on HDFS-11876: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 9m 32s{color} | {color:red} Docker failed to build yetus/hadoop:8515d35. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-11876 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869784/HDFS-11421.branch-2.003.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19606/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Make WebHDFS' ACLs RegEx configurable Testing > - > > Key: HDFS-11876 > URL: https://issues.apache.org/jira/browse/HDFS-11876 > Project: Hadoop HDFS > Issue Type: Test >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Trivial > Attachments: HDFS-11421.branch-2.001.patch, > HDFS-11421.branch-2.003.patch, HDFS-11876.branch-2.002.patch > > > See HDFS-11421, running branch-2 test here. (Because can't seem to trigger > asf bot to run branch-2 test if a jira is associated with github) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11708) positional read will fail if replicas moved to different DNs after stream is opened
[ https://issues.apache.org/jira/browse/HDFS-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HDFS-11708: - Attachment: HDFS-11708-05.patch Attached the rebased patch. {{TestPread.testPreadFailureWithChangedBlockLocations()}} fails without the change. > positional read will fail if replicas moved to different DNs after stream is > opened > --- > > Key: HDFS-11708 > URL: https://issues.apache.org/jira/browse/HDFS-11708 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.3 >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Critical > Labels: release-blocker > Attachments: HDFS-11708-01.patch, HDFS-11708-02.patch, > HDFS-11708-03.patch, HDFS-11708-04.patch, HDFS-11708-05.patch > > > Scenario: > 1. File was written to DN1, DN2 with RF=2 > 2. File stream opened to read and kept. Block Locations are [DN1,DN2] > 3. One of the replica (DN2) moved to another datanode (DN3) due to datanode > dead/balancing/etc. > 4. Latest block locations in NameNode will be DN1 and DN3 in the 'same order' > 5. DN1 went down, but not yet detected as dead in NameNode. > 6. Client start reading using positional read api "read(pos, buf[], offset, > length)" -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
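The six-step scenario in the issue description can be sketched in a few lines. This is an illustrative model only, not the actual `DFSInputStream` code: all names (`chooseDataNode`, `servesReplica`, the DN strings) are hypothetical stand-ins. The point it shows is the shape of the fix: when every cached location fails, a positional read should re-fetch block locations from the NameNode rather than failing with "Could not obtain block".

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.function.Supplier;

// Sketch of the failure mode: a pread that retries only the block
// locations cached at open() time fails once those replicas have moved,
// even though the NameNode already knows the fresh locations.
class StaleLocationSketch {
    static String chooseDataNode(List<String> cachedLocations,
                                 Supplier<List<String>> refreshFromNameNode,
                                 Predicate<String> servesReplica) {
        // First pass: the locations cached when the stream was opened.
        for (String dn : cachedLocations) {
            if (servesReplica.test(dn)) {
                return dn;
            }
        }
        // All cached replicas failed: re-fetch locations from the NameNode
        // instead of giving up immediately.
        for (String dn : refreshFromNameNode.get()) {
            if (servesReplica.test(dn)) {
                return dn;
            }
        }
        throw new IllegalStateException("Could not obtain block");
    }
}
```

In the scenario's terms: the cached list is [DN1, DN2], DN1 is down and DN2 no longer holds the replica, but a refresh returns [DN1, DN3] and the read can succeed against DN3.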
[jira] [Updated] (HDFS-11876) Make WebHDFS' ACLs RegEx configurable Testing
[ https://issues.apache.org/jira/browse/HDFS-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-11876: - Attachment: HDFS-11421.branch-2.003.patch > Make WebHDFS' ACLs RegEx configurable Testing > - > > Key: HDFS-11876 > URL: https://issues.apache.org/jira/browse/HDFS-11876 > Project: Hadoop HDFS > Issue Type: Test >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Trivial > Attachments: HDFS-11421.branch-2.001.patch, > HDFS-11421.branch-2.003.patch, HDFS-11876.branch-2.002.patch > > > See HDFS-11421, running branch-2 test here. (Because can't seem to trigger > asf bot to run branch-2 test if a jira is associated with github) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11868) Backport HDFS-8674 to branch 2.7
[ https://issues.apache.org/jira/browse/HDFS-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024068#comment-16024068 ] Hadoop QA commented on HDFS-11868: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 3m 4s{color} | {color:red} Docker failed to build yetus/hadoop:c420dfe. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-11868 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869356/HDFS-8674-branch-2.7.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19604/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Backport HDFS-8674 to branch 2.7 > > > Key: HDFS-11868 > URL: https://issues.apache.org/jira/browse/HDFS-11868 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HDFS-8674-branch-2.7.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11708) positional read will fail if replicas moved to different DNs after stream is opened
[ https://issues.apache.org/jira/browse/HDFS-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024087#comment-16024087 ] Brahma Reddy Battula commented on HDFS-11708: - Yes, after HDFS-9807 the patch will not apply cleanly; I missed this change. *org.apache.hadoop.hdfs.TestPread#testPreadFailureWithChangedBlockLocations {color:red}will fail without the changes{color}. The stack trace follows:* {noformat} org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-586358626-10.18.246.125-1495677968099:blk_1073741825_1001 file=/test at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:843) at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:962) at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1324) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1283) at org.apache.hadoop.hdfs.TestPread.doPreadTestWithChangedLocations(TestPread.java:684) at org.apache.hadoop.hdfs.TestPread.testPreadFailureWithChangedBlockLocations(TestPread.java:566) {noformat} *org.apache.hadoop.hdfs.TestPread#testPreadHedgedFailureWithChangedBlockLocations* The hedged read would pass even without these changes, because the ignoredNodes map is maintained without ever being cleared, so there is still a chance to connect to a valid node and read the replica. This is addressed in HDFS-11738; [~vinayrpet] already mentioned it. Can we add the HDFS-11738 changes here, or can you move the hedged-read changes to HDFS-11738 itself? 
> positional read will fail if replicas moved to different DNs after stream is > opened > --- > > Key: HDFS-11708 > URL: https://issues.apache.org/jira/browse/HDFS-11708 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.3 >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Critical > Labels: release-blocker > Attachments: HDFS-11708-01.patch, HDFS-11708-02.patch, > HDFS-11708-03.patch, HDFS-11708-04.patch > > > Scenario: > 1. File was written to DN1, DN2 with RF=2 > 2. File stream opened to read and kept. Block Locations are [DN1,DN2] > 3. One of the replica (DN2) moved to another datanode (DN3) due to datanode > dead/balancing/etc. > 4. Latest block locations in NameNode will be DN1 and DN3 in the 'same order' > 5. DN1 went down, but not yet detected as dead in NameNode. > 6. Client start reading using positional read api "read(pos, buf[], offset, > length)" -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11655) Ozone: CLI: Guarantees user runs SCM commands has appropriate permission
[ https://issues.apache.org/jira/browse/HDFS-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-11655: --- Attachment: HDFS-11655-HDFS-7240.004.patch Attached v4 patch to fix the checkstyle issue. > Ozone: CLI: Guarantees user runs SCM commands has appropriate permission > > > Key: HDFS-11655 > URL: https://issues.apache.org/jira/browse/HDFS-11655 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: command-line, security > Attachments: HDFS-11655-HDFS-7240.001.patch, > HDFS-11655-HDFS-7240.002.patch, HDFS-11655-HDFS-7240.003.patch, > HDFS-11655-HDFS-7240.004.patch > > > We need to add a permission check module for ozone command line utilities, to > make sure users run commands with proper privileges. For now, commands in > [design doc| > https://issues.apache.org/jira/secure/attachment/12861478/storage-container-manager-cli-v002.pdf] > all require admin privilege. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11655) Ozone: CLI: Guarantees user runs SCM commands has appropriate permission
[ https://issues.apache.org/jira/browse/HDFS-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-11655: --- Attachment: HDFS-11655-HDFS-7240.003.patch > Ozone: CLI: Guarantees user runs SCM commands has appropriate permission > > > Key: HDFS-11655 > URL: https://issues.apache.org/jira/browse/HDFS-11655 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: command-line, security > Attachments: HDFS-11655-HDFS-7240.001.patch, > HDFS-11655-HDFS-7240.002.patch, HDFS-11655-HDFS-7240.003.patch > > > We need to add a permission check module for ozone command line utilities, to > make sure users run commands with proper privileges. For now, commands in > [design doc| > https://issues.apache.org/jira/secure/attachment/12861478/storage-container-manager-cli-v002.pdf] > all require admin privilege. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11868) Backport HDFS-8674 to branch 2.7
[ https://issues.apache.org/jira/browse/HDFS-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri updated HDFS-11868: --- Status: Patch Available (was: In Progress) > Backport HDFS-8674 to branch 2.7 > > > Key: HDFS-11868 > URL: https://issues.apache.org/jira/browse/HDFS-11868 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HDFS-8674-branch-2.7.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11868) Backport HDFS-8674 to branch 2.7
[ https://issues.apache.org/jira/browse/HDFS-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri updated HDFS-11868: --- Status: In Progress (was: Patch Available) > Backport HDFS-8674 to branch 2.7 > > > Key: HDFS-11868 > URL: https://issues.apache.org/jira/browse/HDFS-11868 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HDFS-8674-branch-2.7.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11655) Ozone: CLI: Guarantees user runs SCM commands has appropriate permission
[ https://issues.apache.org/jira/browse/HDFS-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-11655: --- Attachment: (was: HDFS-11655-HDFS-7240.003.patch) > Ozone: CLI: Guarantees user runs SCM commands has appropriate permission > > > Key: HDFS-11655 > URL: https://issues.apache.org/jira/browse/HDFS-11655 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: command-line, security > Attachments: HDFS-11655-HDFS-7240.001.patch, > HDFS-11655-HDFS-7240.002.patch, HDFS-11655-HDFS-7240.003.patch > > > We need to add a permission check module for ozone command line utilities, to > make sure users run commands with proper privileges. For now, commands in > [design doc| > https://issues.apache.org/jira/secure/attachment/12861478/storage-container-manager-cli-v002.pdf] > all require admin privilege. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11873) Ozone: Object store handler cannot serve multiple requests from single http client
[ https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-11873: --- Summary: Ozone: Object store handler cannot serve multiple requests from single http client (was: Ozone: Object store handler cannot serve requests from same http client) > Ozone: Object store handler cannot serve multiple requests from single http > client > -- > > Key: HDFS-11873 > URL: https://issues.apache.org/jira/browse/HDFS-11873 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Critical > Attachments: HDFS-11873-HDFS-7240.testcase.patch > > > This issue was found when I worked on HDFS-11846. Instead of creating a new > http client instance per request, I tried to reuse the {{CloseableHttpClient}} in > the {{OzoneClient}} class via a {{PoolingHttpClientConnectionManager}}. However, > every second request from the http client hangs and never gets > dispatched to {{ObjectStoreJerseyContainer}}. There seems to be something > wrong in the netty pipeline; this jira aims to 1) fix the problem on the > server side and 2) use the pool for client http clients to reduce the resource > overhead. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
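The intended behavior being debugged here — one shared http client instance serving many sequential requests — can be demonstrated with a self-contained sketch. This uses the JDK's built-in `java.net.http.HttpClient` and a throwaway local server, not the Apache `CloseableHttpClient`/`PoolingHttpClientConnectionManager` pair that `OzoneClient` actually uses, so it only illustrates the reuse pattern, not the Ozone code path.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;

// The bug symptom is that a reused client's *second* request hangs.
// Correct behavior: every sequential request through one shared client
// instance succeeds.
class SharedClientDemo {
    // Starts a local server, sends n GETs through ONE client instance,
    // and returns the status codes received.
    static List<Integer> sendSequential(int n) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            HttpClient client = HttpClient.newHttpClient(); // created once, reused
            URI uri = URI.create("http://127.0.0.1:"
                    + server.getAddress().getPort() + "/");
            List<Integer> codes = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                HttpResponse<String> resp = client.send(
                        HttpRequest.newBuilder(uri).GET().build(),
                        HttpResponse.BodyHandlers.ofString());
                codes.add(resp.statusCode());
            }
            return codes;
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sendSequential(3)); // [200, 200, 200]
    }
}
```

A test case against the Ozone handler would follow the same shape: reuse one client, assert that request two (and three) complete instead of hanging.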
[jira] [Updated] (HDFS-11869) Backport HDFS-11078 to branch 2.7
[ https://issues.apache.org/jira/browse/HDFS-11869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri updated HDFS-11869: --- Status: Patch Available (was: In Progress) > Backport HDFS-11078 to branch 2.7 > - > > Key: HDFS-11869 > URL: https://issues.apache.org/jira/browse/HDFS-11869 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HDFS-11078-branch-2.7.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11868) Backport HDFS-8674 to branch 2.7
[ https://issues.apache.org/jira/browse/HDFS-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024050#comment-16024050 ] Konstantin Shvachko commented on HDFS-11868: Nope, didn't work. It must be in "Patch Available" status. > Backport HDFS-8674 to branch 2.7 > > > Key: HDFS-11868 > URL: https://issues.apache.org/jira/browse/HDFS-11868 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HDFS-8674-branch-2.7.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11868) Backport HDFS-8674 to branch 2.7
[ https://issues.apache.org/jira/browse/HDFS-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024051#comment-16024051 ] Inigo Goiri commented on HDFS-11868: Moved to PA. > Backport HDFS-8674 to branch 2.7 > > > Key: HDFS-11868 > URL: https://issues.apache.org/jira/browse/HDFS-11868 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HDFS-8674-branch-2.7.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11868) Backport HDFS-8674 to branch 2.7
[ https://issues.apache.org/jira/browse/HDFS-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri updated HDFS-11868: --- Status: Patch Available (was: In Progress) > Backport HDFS-8674 to branch 2.7 > > > Key: HDFS-11868 > URL: https://issues.apache.org/jira/browse/HDFS-11868 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HDFS-8674-branch-2.7.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11869) Backport HDFS-11078 to branch 2.7
[ https://issues.apache.org/jira/browse/HDFS-11869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024049#comment-16024049 ] Konstantin Shvachko commented on HDFS-11869: Yet again "IN PROGRESS" status doesn't trigger Jenkins. > Backport HDFS-11078 to branch 2.7 > - > > Key: HDFS-11869 > URL: https://issues.apache.org/jira/browse/HDFS-11869 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HDFS-11078-branch-2.7.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11868) Backport HDFS-8674 to branch 2.7
[ https://issues.apache.org/jira/browse/HDFS-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024044#comment-16024044 ] Konstantin Shvachko commented on HDFS-11868: [~elgoiri] you need to make it patch available. I think you get a different workflow if you create it as a task. Started the [Jenkins build|https://builds.apache.org/job/PreCommit-HADOOP-Build/12387/] manually. > Backport HDFS-8674 to branch 2.7 > > > Key: HDFS-11868 > URL: https://issues.apache.org/jira/browse/HDFS-11868 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HDFS-8674-branch-2.7.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-6291) FSImage may be left unclosed in BootstrapStandby#doRun()
[ https://issues.apache.org/jira/browse/HDFS-6291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-6291: -- Fix Version/s: 2.7.4 Just committed this to branch-2.7. Thank you [~elgoiri] for the backport. > FSImage may be left unclosed in BootstrapStandby#doRun() > > > Key: HDFS-6291 > URL: https://issues.apache.org/jira/browse/HDFS-6291 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha >Reporter: Ted Yu >Assignee: Sanghyun Yun >Priority: Minor > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1 > > Attachments: HDFS-6291.2.patch, HDFS-6291.patch > > > At around line 203: > {code} > if (!checkLogsAvailableForRead(image, imageTxId, curTxId)) { > return ERR_CODE_LOGS_UNAVAILABLE; > } > {code} > If we return following the above check, image is not closed. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
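The quoted snippet shows the leak shape: an early `return` on the error path skips closing the image. The sketch below illustrates the general pattern and the usual fix; `FakeImage` is a stand-in for `FSImage` (the real BootstrapStandby fix may use Hadoop's own cleanup helpers rather than exactly this shape).

```java
// FakeImage stands in for FSImage; openCount tracks unclosed instances.
class FakeImage implements AutoCloseable {
    static int openCount = 0;
    FakeImage() { openCount++; }
    @Override public void close() { openCount--; }
}

class LeakDemo {
    static final int ERR_CODE_LOGS_UNAVAILABLE = 6;

    // Buggy shape from the issue: the early return skips image.close().
    static int doRunLeaky(boolean logsAvailable) {
        FakeImage image = new FakeImage();
        if (!logsAvailable) {
            return ERR_CODE_LOGS_UNAVAILABLE; // image is never closed
        }
        image.close();
        return 0;
    }

    // Fixed shape: try-with-resources closes the image on every exit path,
    // including the early error return.
    static int doRunSafe(boolean logsAvailable) {
        try (FakeImage image = new FakeImage()) {
            if (!logsAvailable) {
                return ERR_CODE_LOGS_UNAVAILABLE;
            }
            return 0;
        }
    }
}
```

The same error code is returned either way; the difference is only whether the resource survives the early-return path.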
[jira] [Resolved] (HDFS-11867) Backport HDFS-6291 to branch 2.7
[ https://issues.apache.org/jira/browse/HDFS-11867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko resolved HDFS-11867. Resolution: Fixed Hadoop Flags: Reviewed Just committed this to branch-2.7. Thank you [~elgoiri] > Backport HDFS-6291 to branch 2.7 > > > Key: HDFS-11867 > URL: https://issues.apache.org/jira/browse/HDFS-11867 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HDFS-6291-branch-2.7.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11787) After HDFS-11515, -du still throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/HDFS-11787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024004#comment-16024004 ] Wei-Chiu Chuang commented on HDFS-11787: Reverted it based on: 1) I had to revert it before reverting HDFS-11661. 2) HDFS-11515 does not fix the bug for all cases. > After HDFS-11515, -du still throws ConcurrentModificationException > -- > > Key: HDFS-11787 > URL: https://issues.apache.org/jira/browse/HDFS-11787 > Project: Hadoop HDFS > Issue Type: Bug > Components: snapshots, tools >Affects Versions: 3.0.0-alpha3, 2.8.1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Fix For: 3.0.0-alpha3, 2.8.1 > > > I ran a modified NameNode that was patched against HDFS-11515 on a production > cluster fsimage, and I am still seeing ConcurrentModificationException. > It seems that there are corner cases not covered by HDFS-11515. Filing this > jira to discuss how to proceed. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
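For readers unfamiliar with the exception class involved, a minimal reproduction helps: `ConcurrentModificationException` is thrown by a fail-fast iterator when the underlying collection is structurally modified mid-iteration. The HDFS case is subtler (a snapshot diff structure changing while `-du` computes a content summary), but the mechanism is the same; the sketch below is generic Java, not the NameNode code.

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

class CmeDemo {
    // Mutating a list while an enhanced-for loop iterates it trips the
    // iterator's modification check on the next element fetch.
    static boolean mutateWhileIterating() {
        List<Integer> sizes = new ArrayList<>(List.of(1, 2, 3));
        try {
            for (int s : sizes) {
                if (s == 1) {
                    sizes.remove(Integer.valueOf(1)); // structural change mid-iteration
                }
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true; // the fail-fast iterator detected the modification
        }
    }

    // One defensive fix: iterate a snapshot copy so the live list is free
    // to change while the sum is computed.
    static int sumOverSnapshotCopy() {
        List<Integer> sizes = new ArrayList<>(List.of(1, 2, 3));
        int total = 0;
        for (int s : new ArrayList<>(sizes)) {
            sizes.remove(Integer.valueOf(s)); // live list mutates safely
            total += s;
        }
        return total;
    }
}
```

Copying is only one of several possible fixes (locking or restructuring the traversal are others); which one is right for the `-du` path is exactly what this jira is discussing.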
[jira] [Commented] (HDFS-11846) Ozone: Fix Http connection leaks in ozone clients
[ https://issues.apache.org/jira/browse/HDFS-11846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024003#comment-16024003 ] Weiwei Yang commented on HDFS-11846: Thank you [~xyao] :). > Ozone: Fix Http connection leaks in ozone clients > - > > Key: HDFS-11846 > URL: https://issues.apache.org/jira/browse/HDFS-11846 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Fix For: HDFS-7240 > > Attachments: HDFS-11846-HDFS-7240.001.patch, > HDFS-11846-HDFS-7240.002.patch > > > There are several problems: > # Http clients in {{OzoneVolume}}, {{OzoneBucket}} and {{OzoneClient}} are > created per request; per the [Reuse of HttpClient instance|http://hc.apache.org/httpclient-3.x/performance.html#Reuse_of_HttpClient_instance] > doc, it is proposed to reuse the http client instance to reduce the overhead. > # Some resources in these classes were not properly cleaned up, e.g. the http > connection and HttpGet/HttpPost requests. > > This jira's purpose is to fix these issues and investigate how we can improve > the client.
[jira] [Commented] (HDFS-11515) -du throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/HDFS-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024001#comment-16024001 ] Hudson commented on HDFS-11515: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11776 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11776/]) Revert "HDFS-11515. -du throws ConcurrentModificationException. (weichiu: rev 2cba5612282509001a221b9751e1fd36c084807f) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java > -du throws ConcurrentModificationException > -- > > Key: HDFS-11515 > URL: https://issues.apache.org/jira/browse/HDFS-11515 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode, shell >Affects Versions: 2.8.0, 3.0.0-alpha2 >Reporter: Wei-Chiu Chuang >Assignee: Istvan Fajth > Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11515.001.patch, HDFS-11515.002.patch, > HDFS-11515.003.patch, HDFS-11515.004.patch, HDFS-11515.test.patch > > > HDFS-10797 fixed a disk summary (-du) bug, but it introduced a new bug. 
> The bug can be reproduced running the following commands: > {noformat} > bash-4.1$ hdfs dfs -mkdir /tmp/d0 > bash-4.1$ hdfs dfsadmin -allowSnapshot /tmp/d0 > Allowing snaphot on /tmp/d0 succeeded > bash-4.1$ hdfs dfs -touchz /tmp/d0/f4 > bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1 > bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s1 > Created snapshot /tmp/d0/.snapshot/s1 > bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2 > bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3 > bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2/d4 > bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3/d5 > bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s2 > Created snapshot /tmp/d0/.snapshot/s2 > bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2/d4 > bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2 > bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3/d5 > bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3 > bash-4.1$ hdfs dfs -du -h /tmp/d0 > du: java.util.ConcurrentModificationException > 0 0 /tmp/d0/f4 > {noformat} > A ConcurrentModificationException forced du to terminate abruptly. > Correspondingly, NameNode log has the following error: > {noformat} > 2017-03-08 14:32:17,673 WARN org.apache.hadoop.ipc.Server: IPC Server handler > 4 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getContentSummary from 10.0.0.198:49957 Call#2 Retry#0 > java.util.ConcurrentModificationException > at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922) > at java.util.HashMap$KeyIterator.next(HashMap.java:956) > at > org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext.tallyDeletedSnapshottedINodes(ContentSummaryComputationContext.java:209) > at > org.apache.hadoop.hdfs.server.namenode.INode.computeAndConvertContentSummary(INode.java:507) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.getContentSummary(FSDirectory.java:2302) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:4535) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1087) > at > 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getContentSummary(AuthorizationProviderProxyClientProtocol.java:563) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:873) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210) > {noformat} > The bug is due to an improper use of HashSet, not concurrent operations. > Basically, a HashSet cannot be updated while an iterator is traversing it.
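The "HashSet cannot be updated while an iterator is traversing it" failure mode above is easy to reproduce outside of Hadoop. A generic Java sketch (not the NameNode code) showing the fail-fast iterator behavior:

```java
import java.util.ConcurrentModificationException;
import java.util.HashSet;
import java.util.Set;

// Generic reproduction: structurally modifying a HashSet while iterating it
// makes the iterator's next() throw ConcurrentModificationException, even
// with a single thread -- no concurrency is required.
public class CmeDemo {
    public static boolean triggersCme() {
        Set<String> visited = new HashSet<>();
        visited.add("a");
        visited.add("b");
        try {
            for (String s : visited) {
                visited.add(s + "-copy"); // structural modification mid-iteration
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true; // the fail-fast iterator detected the modification
        }
    }

    public static void main(String[] args) {
        System.out.println("throws CME: " + triggersCme());
    }
}
```

The usual remedies are to collect additions into a separate set and merge after the loop, or to iterate over a snapshot copy of the set.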
[jira] [Commented] (HDFS-10797) Disk usage summary of snapshots causes renamed blocks to get counted twice
[ https://issues.apache.org/jira/browse/HDFS-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024002#comment-16024002 ] Hudson commented on HDFS-10797: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11776 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11776/]) Revert "HDFS-10797. Disk usage summary of snapshots causes renamed (weichiu: rev b8b69d797aed8dfeb65ea462c2856f62e9aa1023) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/Snapshot.java > Disk usage summary of snapshots causes renamed blocks to get counted twice > -- > > Key: HDFS-10797 > URL: https://issues.apache.org/jira/browse/HDFS-10797 > Project: Hadoop HDFS > Issue Type: Bug > Components: snapshots >Affects Versions: 2.8.0 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > Fix For: 2.8.0, 3.0.0-alpha2 > > 
Attachments: HDFS-10797.001.patch, HDFS-10797.002.patch, > HDFS-10797.003.patch, HDFS-10797.004.patch, HDFS-10797.005.patch, > HDFS-10797.006.patch, HDFS-10797.007.patch, HDFS-10797.008.patch, > HDFS-10797.009.patch, HDFS-10797.010.patch, HDFS-10797.010.patch > > > DirectoryWithSnapshotFeature.computeContentSummary4Snapshot calculates how > much disk usage is used by a snapshot by tallying up the files in the > snapshot that have since been deleted (that way it won't overlap with regular > files whose disk usage is computed separately). However that is determined > from a diff that shows moved (to Trash or otherwise) or renamed files as a > deletion and a creation operation that may overlap with the list of blocks. > Only the deletion operation is taken into consideration, and this causes > those blocks to get represented twice in the disk usage tallying.
[jira] [Resolved] (HDFS-11787) After HDFS-11515, -du still throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/HDFS-11787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HDFS-11787. Resolution: Fixed Assignee: Wei-Chiu Chuang Fix Version/s: 2.8.1 3.0.0-alpha3 Release Note: Reverted HDFS-11515. I reverted HDFS-11515 from branch-2.8, branch-2 and trunk. > After HDFS-11515, -du still throws ConcurrentModificationException > -- > > Key: HDFS-11787 > URL: https://issues.apache.org/jira/browse/HDFS-11787 > Project: Hadoop HDFS > Issue Type: Bug > Components: snapshots, tools >Affects Versions: 3.0.0-alpha3, 2.8.1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Fix For: 3.0.0-alpha3, 2.8.1 > > > I ran a modified NameNode that was patched against HDFS-11515 on a production > cluster fsimage, and I am still seeing ConcurrentModificationException. > It seems that there are corner cases not covered by HDFS-11515. Filing this > jira to discuss how to proceed.
[jira] [Resolved] (HDFS-11661) GetContentSummary uses excessive amounts of memory
[ https://issues.apache.org/jira/browse/HDFS-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HDFS-11661. Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.1 3.0.0-alpha3 Release Note: Reverted HDFS-10797 to fix a scalability regression brought by the commit. Based on multiple +1, I reverted the commit from branch-2.8, branch-2 and trunk. Thanks to [~nroberts] for reporting the issue, and comments from [~kihwal], [~mackrorysd], [~xiaochen] [~djp] [~andrew.wang] [~shahrs87] [~yzhangal] and [~daryn]. [~daryn] thanks for your effort trying to fix the bug. Please file a new jira for your patch. Thanks! > GetContentSummary uses excessive amounts of memory > -- > > Key: HDFS-11661 > URL: https://issues.apache.org/jira/browse/HDFS-11661 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.8.0, 3.0.0-alpha2 >Reporter: Nathan Roberts >Assignee: Wei-Chiu Chuang >Priority: Blocker > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11661.001.patch, HDFs-11661.002.patch, Heap > growth.png > > > ContentSummaryComputationContext::nodeIncluded() is being used to keep track > of all INodes visited during the current content summary calculation. This > can be all of the INodes in the filesystem, making for a VERY large hash > table. This simply won't work on large filesystems. > We noticed this after upgrading: a namenode with ~100 million filesystem > objects was spending significantly more time in GC. Fortunately this system > had some memory breathing room; other clusters we have will not run with this > additional demand on memory. > This was added as part of HDFS-10797 as a way of keeping track of INodes that > have already been accounted for, to avoid double counting.
[jira] [Commented] (HDFS-11878) Fix journal missing log httpServerUrl address in JournalNodeSyncer
[ https://issues.apache.org/jira/browse/HDFS-11878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023982#comment-16023982 ] Hadoop QA commented on HDFS-11878: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 8s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 91m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.balancer.TestBalancerRPCDelay | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11878 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869744/HDFS-11878.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux a68e99ec98fb 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0e83ed5 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19602/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19602/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19602/console | | Powered by | Apache
[jira] [Commented] (HDFS-11823) Extend TestDFSStripedInputStream/TestDFSStripedOutputStream with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023976#comment-16023976 ] Takanobu Asanuma commented on HDFS-11823: - Thanks for reviewing and committing, [~jingzhao]! > Extend TestDFSStripedInputStream/TestDFSStripedOutputStream with a random EC > policy > > > Key: HDFS-11823 > URL: https://issues.apache.org/jira/browse/HDFS-11823 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11823.1.patch > > > From the discussion in HDFS-7866 and HDFS-9962, in addition to the default ec > policy, it would be good if we add a random ec policy to each test.
[jira] [Commented] (HDFS-11876) Make WebHDFS' ACLs RegEx configurable Testing
[ https://issues.apache.org/jira/browse/HDFS-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023972#comment-16023972 ] Hadoop QA commented on HDFS-11876: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 55s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 29s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 46s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} 
| | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 29s{color} | {color:green} hadoop-hdfs-project-jdk1.8.0_131 with JDK v1.8.0_131 generated 0 new + 79 unchanged - 2 fixed = 79 total (was 81) {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s{color} | {color:green} hadoop-hdfs-project-jdk1.7.0_131 with JDK v1.7.0_131 generated 0 new + 81 unchanged - 2 fixed = 81 total (was 83) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 35s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 168 unchanged - 1 fixed = 169 total (was 169) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.7.0_131. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 8s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:blac
[jira] [Updated] (HDFS-11867) Backport HDFS-6291 to branch 2.7
[ https://issues.apache.org/jira/browse/HDFS-11867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-11867: --- Issue Type: Bug (was: Task) > Backport HDFS-6291 to branch 2.7 > > > Key: HDFS-11867 > URL: https://issues.apache.org/jira/browse/HDFS-11867 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HDFS-6291-branch-2.7.patch > >
[jira] [Commented] (HDFS-11878) Fix journal missing log httpServerUrl address in JournalNodeSyncer
[ https://issues.apache.org/jira/browse/HDFS-11878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023931#comment-16023931 ] Arpit Agarwal commented on HDFS-11878: -- +1 pending Jenkins. > Fix journal missing log httpServerUrl address in JournalNodeSyncer > -- > > Key: HDFS-11878 > URL: https://issues.apache.org/jira/browse/HDFS-11878 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11878.001.patch > > > JournalNodeSyncer should build the httpServerUrl, for downloading a missing > log, from the JN address it has rather than from the fromUrl field of the > getEditLogManifest response. The response might have the default http address > (0.0.0.0) as the fromUrl address, whereas the httpServerUrl requires the > host address of the JN to download missing log segments.
[jira] [Commented] (HDFS-11877) FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
[ https://issues.apache.org/jira/browse/HDFS-11877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023918#comment-16023918 ] Hudson commented on HDFS-11877: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11775 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11775/]) HDFS-11877. FileJournalManager#getLogFile should ignore in progress edit (arp: rev 0e83ed5e7372c801c9fee01df91b6b56de467ab1) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java > FileJournalManager#getLogFile should ignore in progress edit logs during JN > sync > > > Key: HDFS-11877 > URL: https://issues.apache.org/jira/browse/HDFS-11877 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11877.001.patch > > > Due to synchronization introduced in HDFS-4025, a journal might have an edit > log and an in progress edit log with the same start tx id. This would create > an exception if GetJournalEditServlet tries to download an edit with that start > tx id from FileJournalManager. JournalNodeSyncer can fail when trying to > fetch an edit log in this scenario. > FileJournalManager#getLogFile should ignore in progress edit logs for JN sync > downloads.
[jira] [Commented] (HDFS-11865) Ozone: Do not initialize Ratis cluster during datanode startup
[ https://issues.apache.org/jira/browse/HDFS-11865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023912#comment-16023912 ] Xiaoyu Yao commented on HDFS-11865: --- Thanks [~szetszwo] for working on this. The patch looks excellent to me. I just have a few minor issues: OzoneConfigKeys.java Can you add some comments on how to configure and use dfs.container.ratis.server.id (e.g. via RaftClient#reinitialize) in a Ratis cluster? XceiverServerRatis.java Line 63: on a Datanode without dfs.container.ratis.server.id configured, the id will be null. Will that cause an NPE on line 86 when running RaftPeerId.valueOf(id)? Maybe this is an invalid configuration. Can we add some logic to handle it gracefully? TestContainerServer.java Line 106: Can you elaborate on the reason to disable the test with more than 1 node {{runTestClientServerRatis(NETTY, 3);}}? Maybe add a TODO to fix it later? > Ozone: Do not initialize Ratis cluster during datanode startup > -- > > Key: HDFS-11865 > URL: https://issues.apache.org/jira/browse/HDFS-11865 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: HDFS-11865-HDFS-7240.20170522.patch, > HDFS-11865-HDFS-7240.20170523.patch > > > During a datanode startup, we currently pass dfs.container.ratis.conf so that > the datanode is bound to a particular Ratis cluster. > In this JIRA, we change the datanode so that it is no longer bound to any > Ratis cluster during startup. We use the Ratis reinitialize request > (RATIS-86) to set up a Ratis cluster later on.
[jira] [Commented] (HDFS-11822) Block Storage: Fix TestCBlockCLI, failing because of "Address already in use"
[ https://issues.apache.org/jira/browse/HDFS-11822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023901#comment-16023901 ] Chen Liang commented on HDFS-11822: --- Thanks [~msingh] for the patch! One question though: it looks like {{cblockServiceRpcAddress}} and {{cblockServerRpcAddress}} in {{CBlockManager.java}} are only used to print the two log messages? In that case there seems to be no need to have them as class member variables. Or is there anything missing here? > Block Storage: Fix TestCBlockCLI, failing because of "Address already in use" > -- > > Key: HDFS-11822 > URL: https://issues.apache.org/jira/browse/HDFS-11822 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Attachments: HDFS-11822-HDFS-7240.001.patch, > HDFS-11822-HDFS-7240.002.patch > > > TestCBlockCLI is failing because of a bind error. > https://builds.apache.org/job/PreCommit-HDFS-Build/19429/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt > {code} > org.apache.hadoop.cblock.TestCBlockCLI Time elapsed: 0.668 sec <<< ERROR! 
> java.net.BindException: Problem binding to [0.0.0.0:9810] > java.net.BindException: Address already in use; For more details see: > http://wiki.apache.org/hadoop/BindException > at sun.nio.ch.Net.bind0(Native Method) > at sun.nio.ch.Net.bind(Net.java:433) > at sun.nio.ch.Net.bind(Net.java:425) > at > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) > at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) > at org.apache.hadoop.ipc.Server.bind(Server.java:543) > at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1033) > at org.apache.hadoop.ipc.Server.<init>(Server.java:2791) > at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:960) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:420) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:341) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:802) > at > org.apache.hadoop.cblock.CBlockManager.startRpcServer(CBlockManager.java:215) > at org.apache.hadoop.cblock.CBlockManager.<init>(CBlockManager.java:131) > at org.apache.hadoop.cblock.TestCBlockCLI.setup(TestCBlockCLI.java:57) > {code}
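Port conflicts like the one above are common when tests hard-code a port such as 9810. One conventional mitigation (a general sketch, not the actual HDFS-11822 patch, which may take a different approach) is to bind to port 0 so the OS assigns a free ephemeral port:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Ask the kernel for a free ephemeral port by binding to port 0, then release
// it so the server under test can bind it. A small race window remains between
// release and reuse, but it avoids fixed-port collisions across test runs.
public class EphemeralPort {
    public static int pickFreePort() throws IOException {
        try (ServerSocket s = new ServerSocket(0)) { // 0 = let the OS choose
            return s.getLocalPort();                 // pass this to the server under test
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("free port: " + pickFreePort());
    }
}
```

Even simpler, when the server itself supports it, is to configure it to listen on port 0 directly and query the bound port afterward, which eliminates the race entirely.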
[jira] [Resolved] (HDFS-11731) Balancer.run() prints redundant included, excluded, source nodes.
[ https://issues.apache.org/jira/browse/HDFS-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko resolved HDFS-11731. Resolution: Invalid Yes, I agree there is no redundancy. Not sure now where I saw it. [~vrushalic] thank you for verifying. Closing. > Balancer.run() prints redundant included, excluded, source nodes. > - > > Key: HDFS-11731 > URL: https://issues.apache.org/jira/browse/HDFS-11731 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.8.0 >Reporter: Konstantin Shvachko > Labels: newbie > > Included, excluded, and source nodes are printed twice by the Balancer. First > as part of {{BalancerParameters.toString()}} in > {code} > LOG.info("parameters = " + p); > {code} > And then separately > {code} > LOG.info("included nodes = " + p.getIncludedNodes()); > LOG.info("excluded nodes = " + p.getExcludedNodes()); > LOG.info("source nodes = " + p.getSourceNodes()); > {code} > The latter can be removed. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11780) Ozone: KSM : Add putKey
[ https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023884#comment-16023884 ] Xiaoyu Yao commented on HDFS-11780: --- Thanks [~vagarychen] for working on this. The patch looks good to me overall. Here are some comments:
KsmKeyBlock.java
NIT: KsmKeyBlock -> KsmKeyInfo to be consistent with Volume/Bucket?
NIT: Line 35: blockKey -> blockId? Can you update the comment: "name of the block scm specified" -> "name of the block id SCM assigned for the key"
KeySpaceManagerProtocol.java
NIT: allocateKeyBlock -> allocateKey; "Allocate a block to a container, the block is returned to the client" -> "Allocate a key, the block/container information is returned to the client"
KeySpaceManagerProtocolClientSideTranslatorPB.java
Line 327-328: javadocs
KeySpaceManagerProtocol.proto
KeyInfo -> KeyArg, KeyBlockInfo -> KeyInfo, getKeyBlock -> createKey to be consistent
KeyManagerImpl.java
Line 76: AllocatedBlock has a field called shouldCreateContainer. This needs to be included in KsmKeyInfo (currently called KsmKeyBlock) so that the client can handle the case where the container needs to be created before the write.
DistributedStorageHandler.java
Line 295: related to the previous comment, ChunkOutputStream needs to be updated to handle shouldCreateContainer when the key is assigned to a container that has not yet been created on the SCM datanodes.
TestKeySpaceManager.java
Line 222: the OutputStream needs to be closed to trigger writeChunkToContainer(). This can be done either by calling DistributedStorageHandler#commitKey() or by using try-with-resources for the stream. Otherwise, the write is just written to a ByteBuffer (see ChunkOutputStream#flushBufferToTrunk()). 
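The last review point — that buffered bytes only reach the backing store when the stream is flushed or closed — is the standard buffered-stream contract. A minimal, self-contained illustration using plain JDK streams (not the Ozone classes themselves):

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class FlushOnClose {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // Buffer larger than the write, so nothing reaches the sink eagerly.
        BufferedOutputStream out = new BufferedOutputStream(sink, 1024);
        out.write("hello".getBytes());
        System.out.println(sink.size()); // still 0: bytes sit in the buffer
        out.close();                     // close() flushes the buffer to the sink
        System.out.println(sink.size()); // now 5
    }
}
```

Try-with-resources guarantees the {{close()}} (and hence the flush) even when the test body throws, which is why the review suggests it for the stream in TestKeySpaceManager.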
> Ozone: KSM : Add putKey > --- > > Key: HDFS-11780 > URL: https://issues.apache.org/jira/browse/HDFS-11780 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Chen Liang > Attachments: HDFS-11780-HDFS-7240.001.patch, > HDFS-11780-HDFS-7240.002.patch, HDFS-11780-HDFS-7240.003.patch > > > Support putting a key into an Ozone bucket.
[jira] [Updated] (HDFS-11878) Fix journal missing log httpServerUrl address in JournalNodeSyncer
[ https://issues.apache.org/jira/browse/HDFS-11878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11878: -- Status: Patch Available (was: Open) > Fix journal missing log httpServerUrl address in JournalNodeSyncer > -- > > Key: HDFS-11878 > URL: https://issues.apache.org/jira/browse/HDFS-11878 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11878.001.patch > > > JournalNodeSyncer should build the httpServerUrl, for downloading a missing > log, from the JN address it has rather than from the fromUrl field of the > getEditLogManifest response. The response might have the default http address > (0.0.0.0) as the fromUrl address, whereas the httpServerUrl requires the > host address of the JN to download missing log segments.
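The idea in the description — build the download URL from the host the syncer already knows rather than trusting a possibly-wildcard address reported by the remote — can be sketched generically. Names here are illustrative, not the actual JournalNodeSyncer API:

```java
import java.net.URL;

// Sketch: prefer the known JN host when the reported address is the
// wildcard bind address 0.0.0.0, which is not reachable as a host.
public class HttpUrlFromKnownHost {
    public static URL serverUrl(String reportedHost, String knownJnHost, int httpPort)
            throws Exception {
        String host = "0.0.0.0".equals(reportedHost) ? knownJnHost : reportedHost;
        return new URL("http", host, httpPort, "/getJournal");
    }
}
```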
[jira] [Updated] (HDFS-11878) Fix journal missing log httpServerUrl address in JournalNodeSyncer
[ https://issues.apache.org/jira/browse/HDFS-11878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11878: -- Attachment: HDFS-11878.001.patch > Fix journal missing log httpServerUrl address in JournalNodeSyncer > -- > > Key: HDFS-11878 > URL: https://issues.apache.org/jira/browse/HDFS-11878 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11878.001.patch > > > JournalNodeSyncer should build the httpServerUrl, for downloading a missing > log, from the JN address it has rather than from the fromUrl field of the > getEditLogManifest response. The response might have the default http address > (0.0.0.0) as the fromUrl address, whereas the httpServerUrl requires the > host address of the JN to download missing log segments.
[jira] [Commented] (HDFS-11877) FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
[ https://issues.apache.org/jira/browse/HDFS-11877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023861#comment-16023861 ] Hanisha Koneru commented on HDFS-11877: --- Thank you [~arpitagarwal] for reviewing and committing the patch. > FileJournalManager#getLogFile should ignore in progress edit logs during JN > sync > > > Key: HDFS-11877 > URL: https://issues.apache.org/jira/browse/HDFS-11877 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11877.001.patch > > > Due to synchronization introduced in HDFS-4025, a journal might have an edit > log and an in progress edit log with the same start tx id. This would create > an exception if GetJournalEditServlet tries to download an edit log with that start > tx id from FileJournalManager. JournalNodeSyncer can fail when trying to > fetch an edit log in this scenario. > FileJournalManager#getLogFile should ignore in progress edit logs for JN sync > downloads.
[jira] [Updated] (HDFS-11877) FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
[ https://issues.apache.org/jira/browse/HDFS-11877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11877: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha3 Status: Resolved (was: Patch Available) Committed this to trunk. Verified that the failed unit tests are not related to this patch. Thanks for the contribution [~hanishakoneru]. > FileJournalManager#getLogFile should ignore in progress edit logs during JN > sync > > > Key: HDFS-11877 > URL: https://issues.apache.org/jira/browse/HDFS-11877 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11877.001.patch > > > Due to synchronization introduced in HDFS-4025, a journal might have an edit > log and an in progress edit log with the same start tx id. This would create > an exception if GetJournalEditServlet tries to download edit with that start > tx id from FileJournalManager. JournalNodeSyncer can fail when trying to > fetch an edit log in this scenario. > FileJournalManager#getLogFile should ignore in progress edit logs for JN sync > downloads. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11708) positional read will fail if replicas moved to different DNs after stream is opened
[ https://issues.apache.org/jira/browse/HDFS-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023853#comment-16023853 ] Konstantin Shvachko commented on HDFS-11708: Hey [~vinayrpet], # Current patch does not apply cleanly to trunk any more. # When I merge the test part of 004 patch, but exclude your changes to {{DFSInputStream}}, which I assume are the actual fix, then {{TestPread}} succeeds. It should fail if it captures the bug, right? > positional read will fail if replicas moved to different DNs after stream is > opened > --- > > Key: HDFS-11708 > URL: https://issues.apache.org/jira/browse/HDFS-11708 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.3 >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Critical > Labels: release-blocker > Attachments: HDFS-11708-01.patch, HDFS-11708-02.patch, > HDFS-11708-03.patch, HDFS-11708-04.patch > > > Scenario: > 1. File was written to DN1, DN2 with RF=2 > 2. File stream opened to read and kept. Block Locations are [DN1,DN2] > 3. One of the replica (DN2) moved to another datanode (DN3) due to datanode > dead/balancing/etc. > 4. Latest block locations in NameNode will be DN1 and DN3 in the 'same order' > 5. DN1 went down, but not yet detected as dead in NameNode. > 6. Client start reading using positional read api "read(pos, buf[], offset, > length)" -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11776) Ozone: KSM: add SetBucketProperty
[ https://issues.apache.org/jira/browse/HDFS-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023837#comment-16023837 ] Hadoop QA commented on HDFS-11776: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 38s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 48s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 31s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 2 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 7s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}123m 53s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestEncryptionZones | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11776 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869712/HDFS-11776-HDFS-7240.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 3fa20947f1b2 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64
[jira] [Commented] (HDFS-11877) FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
[ https://issues.apache.org/jira/browse/HDFS-11877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023831#comment-16023831 ] Hadoop QA commented on HDFS-11877: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}123m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | Timed out junit tests | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11877 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869716/HDFS-11877.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 727635f8d0e4 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1c8dd6d | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19598/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19598/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19598/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > FileJournalManager#getLogFile should ignore in progress edit logs during JN > sync > -
[jira] [Commented] (HDFS-11780) Ozone: KSM : Add putKey
[ https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023824#comment-16023824 ] Hadoop QA commented on HDFS-11780: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 41s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 2 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 36s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 19s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 5s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}108m 52s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11780 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869719/HDFS-11780-HDFS-7240.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 03d7fb7485ae 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Buil
[jira] [Commented] (HDFS-11776) Ozone: KSM: add SetBucketProperty
[ https://issues.apache.org/jira/browse/HDFS-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023780#comment-16023780 ] Chen Liang commented on HDFS-11776: --- Thanks [~nandakumar131] for working on this! v000 patch LGTM, pending jenkins. Some thoughts though, I was wondering, given {{setBucketProperty(KsmBucketArgs args)}}, is it really useful at all to have the other set methods {{setBucketVersioning(BucketArgs args)}}, {{void setBucketStorageClass(BucketArgs args)}} and {{void setBucketAcls(BucketArgs args)}}? Seems to me that setBucketProperty(KsmBucketArgs args) is the union of the three. If we are to keep the three methods though, I think it is probably better to change the signature, say, change {{setBucketStorageClass(BucketArgs args)}} to {{setBucketStorageClass(String volumeName, String bucketName, StorageType type)}} which makes it a wrapper exposed to client. Any thoughts? Since this seems to come from older code, @[~anu]. > Ozone: KSM: add SetBucketProperty > - > > Key: HDFS-11776 > URL: https://issues.apache.org/jira/browse/HDFS-11776 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Nandakumar > Attachments: HDFS-11776-HDFS-7240.000.patch > > > Allows changing the properties of an existing bucket. Properties supported by > this call are > # ACLs - Allows changing ACLs on a existing bucket. > # StorageType - Allows users to control where the bucket should live. we > ignore this for the time being, since SCM does not expose APIs for this yet. > # Versioning - Enables versioning on buckets. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
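The suggestion above — keep a single union-style {{setBucketProperty}} and turn the per-property methods into thin client-facing wrappers with explicit parameters — can be sketched generically. The class and property names below are hypothetical, not the actual KSM code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed shape: one union setter plus convenience wrappers.
class BucketStore {
    private final Map<String, Map<String, String>> props = new HashMap<>();

    // Union setter: applies whichever properties are present in args.
    void setBucketProperty(String volume, String bucket, Map<String, String> args) {
        props.computeIfAbsent(volume + "/" + bucket, k -> new HashMap<>()).putAll(args);
    }

    // Convenience wrapper exposed to clients, with explicit parameters
    // instead of a pre-built args object, as suggested in the comment.
    void setBucketStorageClass(String volume, String bucket, String storageType) {
        setBucketProperty(volume, bucket, Map.of("storageType", storageType));
    }

    String get(String volume, String bucket, String key) {
        return props.getOrDefault(volume + "/" + bucket, Map.of()).get(key);
    }
}
```

The wrapper keeps the client API ergonomic while the union method remains the single point where properties are actually applied.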
[jira] [Commented] (HDFS-11741) Long running balancer may fail due to expired DataEncryptionKey
[ https://issues.apache.org/jira/browse/HDFS-11741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023750#comment-16023750 ] Wei-Chiu Chuang commented on HDFS-11741: Thanks [~xiaochen]. I like your suggestion. I initially just wanted to maintain parity with DFSClient#newDataEncryptionKey. But that's actually not needed: DFSClient does not have access to the block key, so it has to ask the NameNode for the DEK. The balancer's KeyManager has access to the block key, so it can generate the DEK on its own, with no extra overhead for the NN. > Long running balancer may fail due to expired DataEncryptionKey > --- > > Key: HDFS-11741 > URL: https://issues.apache.org/jira/browse/HDFS-11741 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover > Environment: CDH5.8.2, Kerberos, Data transfer encryption enabled. > Balancer login using keytab >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-11741.001.patch, HDFS-11741.002.patch, > HDFS-11741.003.patch, HDFS-11741.004.patch > > > We found a long running balancer may fail despite using keytab, because > KeyManager returns expired DataEncryptionKey, and it throws the following > exception: > {noformat} > 2017-04-30 05:03:58,661 WARN [pool-1464-thread-10] balancer.Dispatcher > (Dispatcher.java:dispatch(325)) - Failed to move blk_1067352712_3913241 with > size=546650 from 10.0.0.134:50010:DISK to 10.0.0.98:50010:DISK through > 10.0.0.134:50010 > org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: > Can't re-compute encryption key for nonce, since the required block key > (keyID=1005215027) doesn't exist. 
Current key: 1005215030 > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiatedCipherOption(DataTransferSaslUtil.java:417) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:474) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:299) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:242) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183) > at > org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:311) > at > org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$2300(Dispatcher.java:182) > at > org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:899) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {noformat} > This bug is similar in nature to HDFS-10609. While balancer KeyManager > actively synchronizes itself with NameNode w.r.t block keys, it does not > update DataEncryptionKey accordingly. > In a specific cluster, with Kerberos ticket life time 10 hours, and default > block token expiration/life time 10 hours, a long running balancer failed > after 20~30 hours. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11876) Make WebHDFS' ACLs RegEx configurable Testing
[ https://issues.apache.org/jira/browse/HDFS-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-11876: - Attachment: HDFS-11876.branch-2.002.patch > Make WebHDFS' ACLs RegEx configurable Testing > - > > Key: HDFS-11876 > URL: https://issues.apache.org/jira/browse/HDFS-11876 > Project: Hadoop HDFS > Issue Type: Test >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Trivial > Attachments: HDFS-11421.branch-2.001.patch, > HDFS-11876.branch-2.002.patch > > > See HDFS-11421, running branch-2 test here. (Because can't seem to trigger > asf bot to run branch-2 test if a jira is associated with github)
[jira] [Commented] (HDFS-11741) Long running balancer may fail due to expired DataEncryptionKey
[ https://issues.apache.org/jira/browse/HDFS-11741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023691#comment-16023691 ] Xiao Chen commented on HDFS-11741: -- Thanks [~jojochuang] for reporting the issue and working on the fix, and others for reviewing. Just want to make sure I understand correctly: the problem is the {{KeyManager}} instance in the {{Dispatcher}} uses a version of {{encryptionKey}}, which is associated with a {{BlockKey}} that is larger than {{2 * keyUpdateInterval + tokenLifetime}} old. So the balancer side of {{BlockTokenSecretManager}} cannot find that {{BlockKey}}, and this is because the {{encryptionKey}} object isn't updated. If above is correct, can we go with the route to have KM's {{BlockKeyUpdater}} (or a new EKUpdater) to update the {{encryptionKey}} periodically (say, tokenLifetime / 2, or /4) as well? I think this is more future proof because {{KeyManager}} is associated with {{NameNodeConnector}} - it seems dispatcher is the only place that retrieves this KM, but I feel the problem exists with NNC. > Long running balancer may fail due to expired DataEncryptionKey > --- > > Key: HDFS-11741 > URL: https://issues.apache.org/jira/browse/HDFS-11741 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover > Environment: CDH5.8.2, Kerberos, Data transfer encryption enabled. 
> Balancer login using keytab >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-11741.001.patch, HDFS-11741.002.patch, > HDFS-11741.003.patch, HDFS-11741.004.patch > > > We found a long running balancer may fail despite using keytab, because > KeyManager returns expired DataEncryptionKey, and it throws the following > exception: > {noformat} > 2017-04-30 05:03:58,661 WARN [pool-1464-thread-10] balancer.Dispatcher > (Dispatcher.java:dispatch(325)) - Failed to move blk_1067352712_3913241 with > size=546650 from 10.0.0.134:50010:DISK to 10.0.0.98:50010:DISK through > 10.0.0.134:50010 > org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: > Can't re-compute encryption key for nonce, since the required block key > (keyID=1005215027) doesn't exist. Current key: 1005215030 > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiatedCipherOption(DataTransferSaslUtil.java:417) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:474) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:299) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:242) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183) > at > org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:311) > at > org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$2300(Dispatcher.java:182) > at > org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:899) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {noformat} > This bug is similar in nature to HDFS-10609. While balancer KeyManager > actively synchronizes itself with NameNode w.r.t block keys, it does not > update DataEncryptionKey accordingly. > In a specific cluster, with Kerberos ticket life time 10 hours, and default > block token expiration/life time 10 hours, a long running balancer failed > after 20~30 hours. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
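The age bound and refresh cadence discussed in the comment above can be sketched as follows. This is an illustration of the proposal only: the class and method names are hypothetical stand-ins, not the actual {{KeyManager}} API, and the refresh body merely simulates re-fetching a fresh DataEncryptionKey from the NameNode.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the refresh idea: the underlying BlockKey is rejected once it is
// older than 2 * keyUpdateInterval + tokenLifetime, so a background task
// refreshes the encryption key well before that bound is reached.
public class EncryptionKeyRefreshSketch {

    // Oldest age (ms) at which the peer still accepts the block key.
    static long maxKeyAgeMs(long keyUpdateIntervalMs, long tokenLifetimeMs) {
        return 2 * keyUpdateIntervalMs + tokenLifetimeMs;
    }

    // The comment suggests refreshing every tokenLifetime / 2 or / 4.
    static long refreshPeriodMs(long tokenLifetimeMs) {
        return tokenLifetimeMs / 4;
    }

    private final AtomicLong keyCreationTime =
        new AtomicLong(System.currentTimeMillis());
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    void startRefresher(long tokenLifetimeMs) {
        long period = refreshPeriodMs(tokenLifetimeMs);
        scheduler.scheduleAtFixedRate(
            // Stand-in for "fetch a fresh DataEncryptionKey from the NameNode".
            () -> keyCreationTime.set(System.currentTimeMillis()),
            period, period, TimeUnit.MILLISECONDS);
    }

    void stop() {
        scheduler.shutdownNow();
    }
}
```

With the 10-hour intervals mentioned in the description, the key would be refreshed every 2.5 hours, comfortably inside the roughly 30-hour window after which the reported failures appear.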
[jira] [Commented] (HDFS-5042) Completed files lost after power failure
[ https://issues.apache.org/jira/browse/HDFS-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023687#comment-16023687 ] Hadoop QA commented on HDFS-5042: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 23s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 41s{color} | {color:green} root: The patch generated 0 new + 307 unchanged - 1 fixed = 307 total (was 308) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 42s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 13s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}132m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-5042 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869701/HDFS-5042-04.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux a31fc119ab69 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1c8dd6d | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/19595/artifact
[jira] [Commented] (HDFS-11877) FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
[ https://issues.apache.org/jira/browse/HDFS-11877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023683#comment-16023683 ] Arpit Agarwal commented on HDFS-11877: -- +1 pending Jenkins. > FileJournalManager#getLogFile should ignore in progress edit logs during JN > sync > > > Key: HDFS-11877 > URL: https://issues.apache.org/jira/browse/HDFS-11877 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11877.001.patch > > > Due to the synchronization introduced in HDFS-4025, a journal might have an edit > log and an in-progress edit log with the same start tx id. This would cause an > exception if GetJournalEditServlet tries to download an edit with that start > tx id from FileJournalManager. JournalNodeSyncer can fail when trying to > fetch an edit log in this scenario. > FileJournalManager#getLogFile should ignore in-progress edit logs for JN sync > downloads. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
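The lookup behavior the issue above proposes can be sketched as filtering by finalization state. The types here are simplified stand-ins, not the real FileJournalManager classes.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Sketch of the proposed behavior: when two edit-log segments share a start
// txid (one finalized, one in-progress), a JN-sync download should pick only
// the finalized one. EditLog is a simplified stand-in type.
public class LogFileLookupSketch {

    record EditLog(long startTxId, boolean inProgress) {}

    // Return the finalized log starting at startTxId, ignoring in-progress ones.
    static Optional<EditLog> getFinalizedLogFile(List<EditLog> logs, long startTxId) {
        return logs.stream()
                   .filter(l -> l.startTxId() == startTxId && !l.inProgress())
                   .findFirst();
    }

    public static void main(String[] args) {
        List<EditLog> logs = Arrays.asList(
            new EditLog(1, false),   // finalized segment starting at txid 1
            new EditLog(1, true),    // in-progress segment with the same start
            new EditLog(100, true)); // current in-progress segment
        System.out.println(getFinalizedLogFile(logs, 1));   // the finalized one
        System.out.println(getFinalizedLogFile(logs, 100)); // empty Optional
    }
}
```

Returning an empty result for a txid that only has an in-progress segment is what lets the syncer skip it cleanly instead of hitting an exception.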
[jira] [Commented] (HDFS-11682) TestBalancer#testBalancerWithStripedFile is flaky
[ https://issues.apache.org/jira/browse/HDFS-11682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023660#comment-16023660 ] Andrew Wang commented on HDFS-11682: Thanks for working on this Eddy! I'm loath to make the balancer tests run even longer; 5 retries on what's already a 40s wait is really long. If the issue is out-of-date NN information, can we manually trigger heartbeats / incremental block reports / full block reports (HBs / IBRs / BRs) instead? > TestBalancer#testBalancerWithStripedFile is flaky > - > > Key: HDFS-11682 > URL: https://issues.apache.org/jira/browse/HDFS-11682 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Lei (Eddy) Xu > Attachments: HDFS-11682.00.patch, HDFS-11682.01.patch, > IndexOutOfBoundsException.log, timeout.log > > > Saw this fail in two different ways on a precommit run, but pass locally. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11881) NameNode consumes a lot of memory for snapshot diff report generation
[ https://issues.apache.org/jira/browse/HDFS-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-11881: -- Description: *Problem:* HDFS supports a snapshot diff tool which can generate a [detailed report | https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html#Get_Snapshots_Difference_Report] of modified, created, deleted and renamed files between any 2 snapshots. {noformat} hdfs snapshotDiff {noformat} However, if the diff list between 2 snapshots happens to be huge, in the order of millions, then NameNode can consume a lot of memory while generating the huge diff report. In a few cases, we are seeing NameNode getting into a long GC lasting a few minutes to make room for this burst in memory requirement during snapshot diff report generation. *RootCause:* * NameNode tries to generate the diff report with all diff entries at once, which puts undue pressure on memory * Each diff report entry has, at a minimum, the diff type (enum), a source path byte array, and a destination path byte array. Let's take the file-deletion use case. For file deletions, there would be only source or destination paths in the diff report entry. Let's assume these deleted files on average take 128 bytes for the path. 4 million file deletions captured in the diff report will thus need 512MB of memory * The snapshot diff report uses a simple Java ArrayList, which doubles its backing contiguous memory chunk every time the usage factor crosses the capacity threshold. So, a 512MB memory requirement might be internally asking for a much larger contiguous memory chunk *Proposal:* * Make the NameNode snapshot diff report service follow the batch model (like the directory listing service). Clients (the hdfs snapshotDiff command) will then receive the diff report in small batches, and need to iterate several times to get the full list. 
* Additionally, snap diff report service in the NameNode can make use of ChunkedArrayList data structure instead of the current ArrayList so as to avoid the curse of fragmentation and large contiguous memory requirement. was: Problem: HDFS supports a snapshot diff tool which can generate a [detailed report | https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html#Get_Snapshots_Difference_Report] of modified, created, deleted and renamed files between any 2 snapshots. {noformat} hdfs snapshotDiff {noformat} However, if the diff list between 2 snapshots happens to be huge, in the order of millions, then NameNode can consume a lot of memory while generating the huge diff report. In a few cases, we are seeing NameNode getting into a long GC lasting for few minutes to make room for this burst in memory requirement during snapshot diff report generation. RootCause: * NameNode tries to generate the diff report with all diff entries at once which puts undue pressure * Each diff report entry has the diff type (enum), source path byte array, and destination path byte array to the minimum. Let's take file deletions use case. For file deletions, there would be only source or destination paths in the diff report entry. Let's assume these deleted files on average take 128Bytes for the path. 4 million file deletion captured in diff report will thus need 512MB of memory * The snapshot diff report uses simple java ArrayList which tries to double its backing contiguous memory chunk every time the usage factor crosses the capacity threshold. So, a 512MB memory requirement might be internally asking for a much larger contiguous memory chunk Proposal: * Make NameNode snapshot diff report service follow the batch model (like directory listing service). Clients (hdfs snapshotDiff command) will then receive diff report in small batches, and need to iterate several times to get the full list. 
* Additionally, snap diff report service in the NameNode can make use of ChunkedArrayList data structure instead of the current ArrayList so as to avoid the curse of fragmentation and large contiguous memory requirement. > NameNode consumes a lot of memory for snapshot diff report generation > - > > Key: HDFS-11881 > URL: https://issues.apache.org/jira/browse/HDFS-11881 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, snapshots >Affects Versions: 3.0.0-alpha1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > > *Problem:* > HDFS supports a snapshot diff tool which can generate a [detailed report | > https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html#Get_Snapshots_Difference_Report] > of modified, created, deleted and renamed files between any 2 snapshots. > {noformat} > hdfs snapshotDiff > {noformat} > However, if the
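The ChunkedArrayList idea in the proposal above can be sketched with a few lines: growth adds a new fixed-size chunk instead of doubling one contiguous array, so a multi-million-entry diff list never needs a single huge contiguous allocation. This is a minimal illustration, not Hadoop's actual ChunkedArrayList implementation, and the chunk size is an arbitrary choice.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal chunked-list sketch: elements live in bounded chunks, so appending
// never triggers a copy of the whole backing array the way ArrayList's
// doubling growth does.
public class ChunkedListSketch<T> {
    private static final int CHUNK_SIZE = 1024; // arbitrary bound per chunk

    private final List<List<T>> chunks = new ArrayList<>();
    private int size = 0;

    public void add(T element) {
        if (chunks.isEmpty() || chunks.get(chunks.size() - 1).size() == CHUNK_SIZE) {
            // Allocate a small, bounded chunk rather than resizing in place.
            chunks.add(new ArrayList<>(CHUNK_SIZE));
        }
        chunks.get(chunks.size() - 1).add(element);
        size++;
    }

    public T get(int index) {
        return chunks.get(index / CHUNK_SIZE).get(index % CHUNK_SIZE);
    }

    public int size() {
        return size;
    }

    public static void main(String[] args) {
        ChunkedListSketch<Integer> list = new ChunkedListSketch<>();
        for (int i = 0; i < 3000; i++) {
            list.add(i);
        }
        System.out.println(list.size() + " " + list.get(2999)); // 3000 2999
    }
}
```

Under the description's estimate (4 million entries at ~128 bytes each, ~512MB total), the worst single allocation here is one 1024-slot chunk, versus ArrayList asking the heap for a contiguous region larger than the whole list during a doubling resize.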
[jira] [Commented] (HDFS-11780) Ozone: KSM : Add putKey
[ https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023630#comment-16023630 ] Hadoop QA commented on HDFS-11780: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 15s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 43s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 2 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 39s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 27s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 22s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}113m 37s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11780 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869707/HDFS-11780-HDFS-7240.002.
[jira] [Commented] (HDFS-11682) TestBalancer#testBalancerWithStripedFile is flaky
[ https://issues.apache.org/jira/browse/HDFS-11682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023618#comment-16023618 ] Hadoop QA commented on HDFS-11682: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 201 unchanged - 4 fixed = 201 total (was 205) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 56s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 91m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11682 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869700/HDFS-11682.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1065820e723b 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1c8dd6d | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19594/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19594/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19594/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestBalancer#testBalancerWithStripedFile is flaky > - > > Key: HDFS-11682 > URL: https://issues.apache.org/jira/browse/HDFS-11682 > Proje
[jira] [Updated] (HDFS-11780) Ozone: KSM : Add putKey
[ https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11780: -- Attachment: HDFS-11780-HDFS-7240.003.patch Posted the v003 patch to rebase and fix the name of the newly added test. > Ozone: KSM : Add putKey > --- > > Key: HDFS-11780 > URL: https://issues.apache.org/jira/browse/HDFS-11780 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Chen Liang > Attachments: HDFS-11780-HDFS-7240.001.patch, > HDFS-11780-HDFS-7240.002.patch, HDFS-11780-HDFS-7240.003.patch > > > Support putting a key into an Ozone bucket. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11659) TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten fail due to no DataNode available for pipeline recovery.
[ https://issues.apache.org/jira/browse/HDFS-11659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023582#comment-16023582 ] Lei (Eddy) Xu commented on HDFS-11659: -- Hi [~jojochuang], it is not because the DN fails; it is because the client / pipeline put this DN into the excludeNodes list. So it then adds a DN to help with pipeline recovery. > TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten fail due to no > DataNode available for pipeline recovery. > > > Key: HDFS-11659 > URL: https://issues.apache.org/jira/browse/HDFS-11659 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.3, 3.0.0-alpha2 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu > Attachments: HDFS-11659.000.patch > > > The test fails after the following error messages: > {code} > java.io.IOException: Failed to replace a bad datanode on the existing > pipeline due to no more good datanodes being available to try. (Nodes: > current=[DatanodeInfoWithStorage[127.0.0.1:57377,DS-b4ec61fc-657c-4e2a-9dc3-8d93b7769a2b,DISK], > > DatanodeInfoWithStorage[127.0.0.1:47448,DS-18bca8d7-048d-4d7f-9594-d2df16096a3d,DISK]], > > original=[DatanodeInfoWithStorage[127.0.0.1:57377,DS-b4ec61fc-657c-4e2a-9dc3-8d93b7769a2b,DISK], > > DatanodeInfoWithStorage[127.0.0.1:47448,DS-18bca8d7-048d-4d7f-9594-d2df16096a3d,DISK]]). > The current failed datanode replacement policy is DEFAULT, and a client may > configure this via > 'dfs.client.block.write.replace-datanode-on-failure.policy' in its > configuration. 
> at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1280) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1354) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1512) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1236) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:721) > {code} > In such case, the DataNode that has removed can not be used in the pipeline > recovery. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
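The DEFAULT policy mentioned in the exception above can be sketched as a small predicate. This is a simplified paraphrase of how the hdfs-default.xml documentation describes the DEFAULT replace-datanode-on-failure behavior, not the actual client code, and the method and parameter names are illustrative.

```java
// Sketch of the DEFAULT replace-datanode-on-failure condition: roughly, a
// replacement datanode is requested when the file is replicated 3+ ways and
// either half or fewer of the original pipeline nodes remain, or the stream
// was opened for append / has been hflushed. Simplified illustration only.
public class ReplacePolicySketch {

    static boolean shouldReplace(int replication, int remainingNodes,
                                 boolean appendOrHflushed) {
        return replication >= 3
            && (remainingNodes <= replication / 2 || appendOrHflushed);
    }

    public static void main(String[] args) {
        // 3-way replicated pipeline down to 1 live node: ask for a replacement.
        System.out.println(shouldReplace(3, 1, false)); // true
        // 2-way replicated file: the DEFAULT policy never replaces.
        System.out.println(shouldReplace(2, 1, false)); // false
    }
}
```

In the test scenario above, the predicate fires but no spare DataNode exists in the mini-cluster to satisfy it, which is exactly why the pipeline recovery throws instead of proceeding.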
[jira] [Updated] (HDFS-11597) Ozone: Add Ratis management API
[ https://issues.apache.org/jira/browse/HDFS-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-11597: --- Attachment: HDFS-11597-HDFS-7240.20170524.patch HDFS-11597-HDFS-7240.20170524.patch: some minor changes. > Ozone: Add Ratis management API > --- > > Key: HDFS-11597 > URL: https://issues.apache.org/jira/browse/HDFS-11597 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: HDFS-11597-HDFS-7240.20170522.patch, > HDFS-11597-HDFS-7240.20170523.patch, HDFS-11597-HDFS-7240.20170524.patch > > > We need APIs to manage Ratis clusters for the following operations: > - create cluster; > - close cluster; > - get members; and > - update members. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11876) Make WebHDFS' ACLs RegEx configurable Testing
[ https://issues.apache.org/jira/browse/HDFS-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023577#comment-16023577 ] Hadoop QA commented on HDFS-11876:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| 0 | mvndep | 0m 24s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 30s | branch-2 passed |
| +1 | compile | 1m 18s | branch-2 passed with JDK v1.8.0_131 |
| +1 | compile | 1m 18s | branch-2 passed with JDK v1.7.0_121 |
| +1 | checkstyle | 0m 29s | branch-2 passed |
| +1 | mvnsite | 1m 19s | branch-2 passed |
| +1 | mvneclipse | 0m 23s | branch-2 passed |
| +1 | findbugs | 3m 27s | branch-2 passed |
| +1 | javadoc | 0m 51s | branch-2 passed with JDK v1.8.0_131 |
| +1 | javadoc | 1m 19s | branch-2 passed with JDK v1.7.0_121 |
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 10s | the patch passed |
| +1 | compile | 1m 7s | the patch passed with JDK v1.8.0_131 |
| +1 | javac | 1m 7s | hadoop-hdfs-project-jdk1.8.0_131 with JDK v1.8.0_131 generated 0 new + 79 unchanged - 2 fixed = 79 total (was 81) |
| +1 | compile | 1m 21s | the patch passed with JDK v1.7.0_121 |
| +1 | javac | 1m 21s | hadoop-hdfs-project-jdk1.7.0_121 with JDK v1.7.0_121 generated 0 new + 82 unchanged - 2 fixed = 82 total (was 84) |
| -0 | checkstyle | 0m 27s | hadoop-hdfs-project: The patch generated 3 new + 170 unchanged - 0 fixed = 173 total (was 170) |
| +1 | mvnsite | 1m 10s | the patch passed |
| +1 | mvneclipse | 0m 21s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | findbugs | 3m 44s | the patch passed |
| +1 | javadoc | 0m 49s | the patch passed with JDK v1.8.0_131 |
| +1 | javadoc | 1m 11s | the patch passed with JDK v1.7.0_121 |
| +1 | unit | 1m 10s | hadoop-hdfs-client in the patch passed with JDK v1.7.0_121. |
| -1 | unit | 50m 16s | hadoop-hdfs in the patch failed with JDK v1.7.0_121. |
| +1 | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
[jira] [Commented] (HDFS-11780) Ozone: KSM : Add putKey
[ https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023557#comment-16023557 ] Hadoop QA commented on HDFS-11780:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 15s | HDFS-11780 does not apply to HDFS-7240. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11780 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869707/HDFS-11780-HDFS-7240.002.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19597/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Ozone: KSM : Add putKey
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Anu Engineer
> Assignee: Chen Liang
> Attachments: HDFS-11780-HDFS-7240.001.patch, HDFS-11780-HDFS-7240.002.patch
>
> Support putting a key into an Ozone bucket.

-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11881) NameNode consumes a lot of memory for snapshot diff report generation
Manoj Govindassamy created HDFS-11881:

Summary: NameNode consumes a lot of memory for snapshot diff report generation
Key: HDFS-11881
URL: https://issues.apache.org/jira/browse/HDFS-11881
Project: Hadoop HDFS
Issue Type: Improvement
Components: hdfs, snapshots
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy

Problem: HDFS supports a snapshot diff tool which can generate a [detailed report | https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html#Get_Snapshots_Difference_Report] of modified, created, deleted and renamed files between any 2 snapshots.
{noformat}
hdfs snapshotDiff
{noformat}
However, if the diff list between 2 snapshots happens to be huge, in the order of millions, then the NameNode can consume a lot of memory while generating the huge diff report. In a few cases, we are seeing the NameNode get into a long GC lasting a few minutes to make room for this burst in memory requirement during snapshot diff report generation.

Root cause:
* The NameNode tries to generate the diff report with all diff entries at once, which puts undue pressure on memory.
* Each diff report entry holds, at a minimum, the diff type (enum), a source path byte array, and a destination path byte array. Take the file deletion use case: for file deletions, there would be only source or destination paths in the diff report entry. Assume these deleted files on average take 128 bytes for the path; 4 million file deletions captured in the diff report will thus need 512MB of memory.
* The snapshot diff report uses a simple java ArrayList, which doubles its backing contiguous memory chunk every time the usage factor crosses the capacity threshold. So a 512MB memory requirement might internally ask for a much larger contiguous memory chunk.

Proposal:
* Make the NameNode snapshot diff report service follow the batch model (like the directory listing service). Clients (the hdfs snapshotDiff command) will then receive the diff report in small batches, and need to iterate several times to get the full list.
* Additionally, the snapshot diff report service in the NameNode can use the ChunkedArrayList data structure instead of the current ArrayList, so as to avoid fragmentation and the large contiguous memory requirement.
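The ChunkedArrayList idea in the proposal can be sketched in plain Java. This is an illustrative stand-in (the class name and chunk size are made up for the example), not Hadoop's actual org.apache.hadoop.util.ChunkedArrayList: growth allocates one new fixed-size chunk at a time rather than doubling a single contiguous backing array.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a chunked list: elements live in fixed-size chunks,
// so adding an element never triggers one large contiguous re-allocation
// the way ArrayList's capacity doubling does.
public class ChunkedListSketch<T> {
    private static final int CHUNK_SIZE = 1024; // illustrative chunk size
    private final List<List<T>> chunks = new ArrayList<>();
    private int size = 0;

    public void add(T element) {
        // Start a new fixed-size chunk when the last one is full.
        if (chunks.isEmpty()
                || chunks.get(chunks.size() - 1).size() == CHUNK_SIZE) {
            chunks.add(new ArrayList<>(CHUNK_SIZE));
        }
        chunks.get(chunks.size() - 1).add(element);
        size++;
    }

    public T get(int index) {
        // Chunks are filled completely before a new one starts,
        // so simple div/mod addressing works.
        return chunks.get(index / CHUNK_SIZE).get(index % CHUNK_SIZE);
    }

    public int size() {
        return size;
    }

    public static void main(String[] args) {
        ChunkedListSketch<String> diffEntries = new ChunkedListSketch<>();
        for (int i = 0; i < 5000; i++) {
            diffEntries.add("M ./file" + i); // one entry per changed path
        }
        System.out.println(diffEntries.size());    // 5000
        System.out.println(diffEntries.get(4999)); // M ./file4999
    }
}
```

With 128-byte entries, 4 million additions would be spread across ~4000 independent 1024-entry chunks instead of one 512MB-plus contiguous array, which is the fragmentation point the proposal makes.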
[jira] [Updated] (HDFS-11877) FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
[ https://issues.apache.org/jira/browse/HDFS-11877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11877:
Attachment: HDFS-11877.001.patch

> FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
>
> Key: HDFS-11877
> URL: https://issues.apache.org/jira/browse/HDFS-11877
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Reporter: Hanisha Koneru
> Assignee: Hanisha Koneru
> Attachments: HDFS-11877.001.patch
>
> Due to synchronization introduced in HDFS-4025, a journal might have an edit log and an in-progress edit log with the same start tx id. This would cause an exception if GetJournalEditServlet tries to download an edit log with that start tx id from FileJournalManager. JournalNodeSyncer can fail when trying to fetch an edit log in this scenario.
> FileJournalManager#getLogFile should ignore in-progress edit logs for JN sync downloads.
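The proposed getLogFile behavior can be illustrated with a small sketch. The file names follow HDFS edit log naming conventions (edits_<startTxId>-<endTxId> for finalized segments, edits_inprogress_<startTxId> for open ones, with 19-digit zero-padded txids), but the helper class itself is hypothetical and not FileJournalManager's real code.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Sketch of the fix: when resolving a log segment by start transaction id
// for a JournalNode sync download, skip in-progress segments so that a
// finalized segment with the same start txid is chosen instead.
public class EditLogSelector {
    static final String IN_PROGRESS_PREFIX = "edits_inprogress_";

    static boolean isInProgress(String segmentName) {
        return segmentName.startsWith(IN_PROGRESS_PREFIX);
    }

    static Optional<String> getFinalizedLogFile(List<String> segments,
                                                long startTxId) {
        // Finalized segments are named edits_<19-digit start>-<19-digit end>.
        String finalizedPrefix = String.format("edits_%019d-", startTxId);
        return segments.stream()
                .filter(s -> !isInProgress(s)) // ignore in-progress segments
                .filter(s -> s.startsWith(finalizedPrefix))
                .findFirst();
    }

    public static void main(String[] args) {
        // The HDFS-4025 scenario: both segments share start txid 101.
        List<String> segments = Arrays.asList(
                "edits_inprogress_0000000000000000101",
                "edits_0000000000000000101-0000000000000000200");
        System.out.println(getFinalizedLogFile(segments, 101).orElse("none"));
        // prints edits_0000000000000000101-0000000000000000200
    }
}
```

Without the in-progress filter, both names match start txid 101 and the lookup is ambiguous, which is the exception path described above.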
[jira] [Updated] (HDFS-11877) FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
[ https://issues.apache.org/jira/browse/HDFS-11877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11877:
Status: Patch Available (was: Open)

> FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
>
> Key: HDFS-11877
> URL: https://issues.apache.org/jira/browse/HDFS-11877
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Reporter: Hanisha Koneru
> Assignee: Hanisha Koneru
> Attachments: HDFS-11877.001.patch
>
> Due to synchronization introduced in HDFS-4025, a journal might have an edit log and an in-progress edit log with the same start tx id. This would cause an exception if GetJournalEditServlet tries to download an edit log with that start tx id from FileJournalManager. JournalNodeSyncer can fail when trying to fetch an edit log in this scenario.
> FileJournalManager#getLogFile should ignore in-progress edit logs for JN sync downloads.
[jira] [Updated] (HDFS-11776) Ozone: KSM: add SetBucketProperty
[ https://issues.apache.org/jira/browse/HDFS-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-11776:
Status: Patch Available (was: Open)

> Ozone: KSM: add SetBucketProperty
>
> Key: HDFS-11776
> URL: https://issues.apache.org/jira/browse/HDFS-11776
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Anu Engineer
> Assignee: Nandakumar
> Attachments: HDFS-11776-HDFS-7240.000.patch
>
> Allows changing the properties of an existing bucket. Properties supported by this call are:
> # ACLs - Allows changing ACLs on an existing bucket.
> # StorageType - Allows users to control where the bucket should live. We ignore this for the time being, since SCM does not expose APIs for this yet.
> # Versioning - Enables versioning on buckets.
[jira] [Updated] (HDFS-11877) FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
[ https://issues.apache.org/jira/browse/HDFS-11877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11877:
Description:
Due to synchronization introduced in HDFS-4025, a journal might have an edit log and an in-progress edit log with the same start tx id. This would cause an exception if GetJournalEditServlet tries to download an edit log with that start tx id from FileJournalManager. JournalNodeSyncer can fail when trying to fetch an edit log in this scenario. FileJournalManager#getLogFile should ignore in-progress edit logs for JN sync downloads.

was:
Due to synchronization introduced in HDFS-4025, a journal might have an edit log and an in-progress edit log with the same start tx id. This would cause an exception if GetJournalEditServlet tries to download an edit log with that start tx id from FileJournalManager. JournalNodeSyncer can fail when trying to fetch an edit log in this scenario. FileJournalManager#getLogFile should have an option to specify whether in-progress edit logs should be considered or not.

> FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
>
> Key: HDFS-11877
> URL: https://issues.apache.org/jira/browse/HDFS-11877
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Reporter: Hanisha Koneru
> Assignee: Hanisha Koneru
>
> Due to synchronization introduced in HDFS-4025, a journal might have an edit log and an in-progress edit log with the same start tx id. This would cause an exception if GetJournalEditServlet tries to download an edit log with that start tx id from FileJournalManager. JournalNodeSyncer can fail when trying to fetch an edit log in this scenario.
> FileJournalManager#getLogFile should ignore in-progress edit logs for JN sync downloads.
[jira] [Assigned] (HDFS-11853) Ozone: KSM: Add getKey
[ https://issues.apache.org/jira/browse/HDFS-11853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao reassigned HDFS-11853:
Assignee: Chen Liang (was: Xiaoyu Yao)

> Ozone: KSM: Add getKey
>
> Key: HDFS-11853
> URL: https://issues.apache.org/jira/browse/HDFS-11853
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Xiaoyu Yao
> Assignee: Chen Liang
>
> Support reading the content (object) of the key.
[jira] [Updated] (HDFS-11877) FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
[ https://issues.apache.org/jira/browse/HDFS-11877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11877:
Summary: FileJournalManager#getLogFile should ignore in progress edit logs during JN sync (was: FileJournalManager#getLogFile should have an option to ignore in progress edit logs)

> FileJournalManager#getLogFile should ignore in progress edit logs during JN sync
>
> Key: HDFS-11877
> URL: https://issues.apache.org/jira/browse/HDFS-11877
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Reporter: Hanisha Koneru
> Assignee: Hanisha Koneru
>
> Due to synchronization introduced in HDFS-4025, a journal might have an edit log and an in-progress edit log with the same start tx id. This would cause an exception if GetJournalEditServlet tries to download an edit log with that start tx id from FileJournalManager. JournalNodeSyncer can fail when trying to fetch an edit log in this scenario.
> FileJournalManager#getLogFile should have an option to specify whether in-progress edit logs should be considered or not.
[jira] [Updated] (HDFS-11846) Ozone: Fix Http connection leaks in ozone clients
[ https://issues.apache.org/jira/browse/HDFS-11846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-11846:
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
Status: Resolved (was: Patch Available)

Thanks [~cheersyang] for the contribution and all for the reviews. I've committed the fix to the feature branch.

> Ozone: Fix Http connection leaks in ozone clients
>
> Key: HDFS-11846
> URL: https://issues.apache.org/jira/browse/HDFS-11846
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
> Attachments: HDFS-11846-HDFS-7240.001.patch, HDFS-11846-HDFS-7240.002.patch
>
> There are several problems:
> # Http clients in {{OzoneVolume}}, {{OzoneBucket}} and {{OzoneClient}} are created per request; per the [Reuse of HttpClient instance|http://hc.apache.org/httpclient-3.x/performance.html#Reuse_of_HttpClient_instance] doc, it is proposed to reuse the http client instance to reduce the overhead.
> # Some resources in these classes were not properly cleaned up, e.g. the http connection and the HttpGet/HttpPost requests.
>
> This jira's purpose is to fix these issues and investigate how we can improve the client.
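The reuse half of the fix boils down to a simple ownership change, sketched below with a fake client so the example stays self-contained. FakeHttpClient and OzoneBucketSketch are stand-ins invented for this illustration; they are not Ozone's or Apache HttpClient's actual classes.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the fix's core idea: the volume/bucket object owns one HTTP
// client for its lifetime and reuses it across requests, instead of
// constructing (and often leaking) a new client per request.
public class ClientReuseSketch {
    static final AtomicInteger CLIENTS_CREATED = new AtomicInteger();

    // Stand-in for a pooled HTTP client (think Apache HttpClient).
    static class FakeHttpClient implements AutoCloseable {
        FakeHttpClient() { CLIENTS_CREATED.incrementAndGet(); }
        String execute(String request) { return "200 OK for " + request; }
        @Override public void close() { /* release pooled connections */ }
    }

    // Leaky pattern (before the fix): a new client per request,
    // with nothing guaranteed to close it.
    static String leakyGet(String path) {
        return new FakeHttpClient().execute("GET " + path);
    }

    // Fixed pattern: one client per bucket object, closed exactly once
    // by the caller via try-with-resources.
    static class OzoneBucketSketch implements AutoCloseable {
        private final FakeHttpClient http = new FakeHttpClient();
        String get(String path) { return http.execute("GET " + path); }
        @Override public void close() { http.close(); }
    }

    public static void main(String[] args) {
        leakyGet("/vol/bucket/a");
        leakyGet("/vol/bucket/b");
        int afterLeaky = CLIENTS_CREATED.get(); // 2 clients for 2 requests

        try (OzoneBucketSketch bucket = new OzoneBucketSketch()) {
            bucket.get("/vol/bucket/a");
            bucket.get("/vol/bucket/b");        // 1 client for 2 requests
        }
        System.out.println(afterLeaky + " then " + CLIENTS_CREATED.get());
        // prints 2 then 3
    }
}
```

The second cleanup item in the description follows the same shape: per-request objects (responses, HttpGet/HttpPost) get closed in try-with-resources or finally blocks, while only the long-lived client survives across calls.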
[jira] [Updated] (HDFS-11846) Ozone: Fix Http connection leaks in ozone clients
[ https://issues.apache.org/jira/browse/HDFS-11846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-11846:
Summary: Ozone: Fix Http connection leaks in ozone clients (was: Ozone: Potential http connection leaks in ozone clients)

> Ozone: Fix Http connection leaks in ozone clients
>
> Key: HDFS-11846
> URL: https://issues.apache.org/jira/browse/HDFS-11846
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Attachments: HDFS-11846-HDFS-7240.001.patch, HDFS-11846-HDFS-7240.002.patch
>
> There are several problems:
> # Http clients in {{OzoneVolume}}, {{OzoneBucket}} and {{OzoneClient}} are created per request; per the [Reuse of HttpClient instance|http://hc.apache.org/httpclient-3.x/performance.html#Reuse_of_HttpClient_instance] doc, it is proposed to reuse the http client instance to reduce the overhead.
> # Some resources in these classes were not properly cleaned up, e.g. the http connection and the HttpGet/HttpPost requests.
>
> This jira's purpose is to fix these issues and investigate how we can improve the client.
[jira] [Created] (HDFS-11880) To remove protobuf formats such as StorageTypeProto and OzoneAclInfo from KSM wrappers
Nandakumar created HDFS-11880:

Summary: To remove protobuf formats such as StorageTypeProto and OzoneAclInfo from KSM wrappers
Key: HDFS-11880
URL: https://issues.apache.org/jira/browse/HDFS-11880
Project: Hadoop HDFS
Issue Type: Sub-task
Components: ozone
Affects Versions: HDFS-7240
Reporter: Nandakumar
Assignee: Nandakumar

KSM wrappers like KsmBucketInfo and KsmBucketArgs are using protobuf formats such as StorageTypeProto and OzoneAclInfo; this jira is to remove that dependency and use {{StorageType}} and {{OzoneAcl}} instead.
[jira] [Updated] (HDFS-11880) Ozone: KSM: Remove protobuf formats such as StorageTypeProto and OzoneAclInfo from KSM wrappers
[ https://issues.apache.org/jira/browse/HDFS-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-11880:
Summary: Ozone: KSM: Remove protobuf formats such as StorageTypeProto and OzoneAclInfo from KSM wrappers (was: To remove protobuf formats such as StorageTypeProto and OzoneAclInfo from KSM wrappers)

> Ozone: KSM: Remove protobuf formats such as StorageTypeProto and OzoneAclInfo from KSM wrappers
>
> Key: HDFS-11880
> URL: https://issues.apache.org/jira/browse/HDFS-11880
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Nandakumar
> Assignee: Nandakumar
>
> KSM wrappers like KsmBucketInfo and KsmBucketArgs are using protobuf formats such as StorageTypeProto and OzoneAclInfo; this jira is to remove that dependency and use {{StorageType}} and {{OzoneAcl}} instead.
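The wrapper change described above can be sketched as follows. Every type here is a stand-in invented for the example (the real code would use the protobuf-generated StorageTypeProto/OzoneAclInfo and the domain StorageType/OzoneAcl types); the point is that the wrapper exposes only domain types and converts at the protobuf boundary.

```java
// Sketch: a KSM-style wrapper that stores the domain type and performs
// protobuf conversion only when crossing the RPC/serialization boundary,
// so callers never see generated protobuf classes.
public class WrapperConversionSketch {
    // Stand-in for a protobuf-generated enum such as StorageTypeProto.
    enum StorageTypeProto { DISK, SSD }
    // Stand-in for the domain type (think org.apache.hadoop.fs.StorageType).
    enum StorageType { DISK, SSD }

    // Wrapper in the spirit of KsmBucketInfo: domain types only.
    static class BucketInfoSketch {
        private final StorageType storageType;

        BucketInfoSketch(StorageType storageType) {
            this.storageType = storageType;
        }

        StorageType getStorageType() {
            return storageType;
        }

        // Conversion to the wire format, done at the boundary.
        StorageTypeProto toProto() {
            return StorageTypeProto.valueOf(storageType.name());
        }

        // Conversion from the wire format when deserializing.
        static BucketInfoSketch fromProto(StorageTypeProto proto) {
            return new BucketInfoSketch(StorageType.valueOf(proto.name()));
        }
    }

    public static void main(String[] args) {
        BucketInfoSketch info =
                BucketInfoSketch.fromProto(StorageTypeProto.SSD);
        System.out.println(info.getStorageType() + " " + info.toProto());
        // prints SSD SSD
    }
}
```

Keeping the generated types out of the wrapper's public API means callers of KsmBucketInfo/KsmBucketArgs no longer compile against protobuf artifacts, which is the dependency the jira removes.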
[jira] [Updated] (HDFS-11778) Ozone: KSM: add getBucketInfo
[ https://issues.apache.org/jira/browse/HDFS-11778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-11778:
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
Status: Resolved (was: Patch Available)

+1 for the 002 patch. Thanks [~nandakumar131] for the contribution and all for the reviews. I've committed the patch to the HDFS-7240 branch.

> Ozone: KSM: add getBucketInfo
>
> Key: HDFS-11778
> URL: https://issues.apache.org/jira/browse/HDFS-11778
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Anu Engineer
> Assignee: Nandakumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-11778-HDFS-7240.000.patch, HDFS-11778-HDFS-7240.001.patch, HDFS-11778-HDFS-7240.002.patch
>
> Returns the bucket information if the bucket exists.