[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user
[ https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156498#comment-16156498 ] Yongjun Zhang commented on HDFS-12357: -- Thanks [~manojg] for the review. Uploaded rev7 to address all comments except "u4", which is already covered by a pre-existing case, as I stated earlier. Good point about checking permissions in addition to checking the CALLED map in the test; added that. > Let NameNode to bypass external attribute provider for special user > --- > > Key: HDFS-12357 > URL: https://issues.apache.org/jira/browse/HDFS-12357 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > Attachments: HDFS-12357.001a.patch, HDFS-12357.001b.patch, > HDFS-12357.001.patch, HDFS-12357.002.patch, HDFS-12357.003.patch, > HDFS-12357.004.patch, HDFS-12357.005.patch, HDFS-12357.006.patch, > HDFS-12357.007.patch > > > This is a third proposal to solve the problem described in HDFS-12202. > The problem is, when we do distcp from one cluster to another (or within the > same cluster), in addition to copying file data, we copy the metadata from > source to target. If external attribute provider is enabled, the metadata may > be read from the provider, thus provider data read from source may be saved > to target HDFS. > We want to avoid saving metadata from external provider to HDFS, so we want > to bypass external provider when doing the distcp (or hadoop fs -cp) > operation. > Two alternative approaches were proposed earlier, one in HDFS-12202, the > other in HDFS-12294. The proposal here is the third one. > The idea is, we introduce a new config, that specifies a special user (or a > list of users), and let NN bypass external provider when the current user is > a special user. > If we run applications as the special user that need data from external > attribute provider, then it won't work. 
So the constraint on this approach > is, the special users here should not run applications that need data from > external provider. > Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], > [~manojg] for the discussions in the other jiras. > I'm creating this one to discuss further. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
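The proposal above boils down to a small check: parse the configured user list once, and consult the external attribute provider only when the current user is not on it. The sketch below models that logic with stdlib types only; the class and config value are hypothetical, not the actual HDFS-12357 patch.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Simplified model of the proposed special-user bypass. In the real NameNode
// this decision would sit in front of the INodeAttributeProvider lookup.
public class AttributeProviderBypass {
    private final Set<String> specialUsers;

    public AttributeProviderBypass(String configValue) {
        // The proposed config holds a comma-separated list of user names.
        specialUsers = new HashSet<>(Arrays.asList(configValue.trim().split("\\s*,\\s*")));
    }

    /** Returns true when the external attribute provider should be consulted. */
    public boolean useExternalProvider(String currentUser) {
        return !specialUsers.contains(currentUser);
    }

    public static void main(String[] args) {
        AttributeProviderBypass b = new AttributeProviderBypass("distcp-svc, backup-svc");
        System.out.println(b.useExternalProvider("alice"));      // regular user: provider consulted
        System.out.println(b.useExternalProvider("distcp-svc")); // special user: bypassed
    }
}
```

This also makes the stated constraint concrete: any application run as a listed user sees only native HDFS metadata, never provider data.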
[jira] [Updated] (HDFS-12402) Refactor ErasureCodingPolicyManager
[ https://issues.apache.org/jira/browse/HDFS-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-12402: - Attachment: HDFS-12402.002.patch Patch refined. > Refactor ErasureCodingPolicyManager > --- > > Key: HDFS-12402 > URL: https://issues.apache.org/jira/browse/HDFS-12402 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha4 >Reporter: SammiChen >Assignee: SammiChen > Attachments: HDFS-12402.001.patch, HDFS-12402.002.patch > > > 1. Correct message string grammar error > 2. Use HadoopIllegalArgumentException instead of IllegalECPolicyException > 3. Use HadoopIllegalArgumentException instead of IllegalArgumentException > 4. Remove IllegalECPolicyException -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
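Items 2-4 of the refactor collapse several exception types into one. The sketch below shows the resulting shape with a stand-in exception class (Hadoop's real `HadoopIllegalArgumentException` likewise extends `IllegalArgumentException`); the validation method and message are illustrative, not the actual patch.

```java
import java.util.Set;

// Sketch of the HDFS-12402 refactor: callers that previously threw a dedicated
// IllegalECPolicyException (or a plain IllegalArgumentException) throw a single
// common exception type instead.
public class PolicyValidation {
    /** Stand-in for org.apache.hadoop.HadoopIllegalArgumentException. */
    static class HadoopIllegalArgumentException extends IllegalArgumentException {
        HadoopIllegalArgumentException(String message) { super(message); }
    }

    /** Reject unknown EC policy names with the one shared exception type. */
    static void checkPolicyName(String name, Set<String> known) {
        if (!known.contains(name)) {
            throw new HadoopIllegalArgumentException(
                "The policy name " + name + " does not exist");
        }
    }
}
```

Because the new type is still an `IllegalArgumentException`, existing catch blocks keep working while the dedicated `IllegalECPolicyException` class can be deleted.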
[jira] [Updated] (HDFS-12357) Let NameNode to bypass external attribute provider for special user
[ https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang updated HDFS-12357: - Attachment: HDFS-12357.007.patch > Let NameNode to bypass external attribute provider for special user > --- > > Key: HDFS-12357 > URL: https://issues.apache.org/jira/browse/HDFS-12357 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > Attachments: HDFS-12357.001a.patch, HDFS-12357.001b.patch, > HDFS-12357.001.patch, HDFS-12357.002.patch, HDFS-12357.003.patch, > HDFS-12357.004.patch, HDFS-12357.005.patch, HDFS-12357.006.patch, > HDFS-12357.007.patch > > > This is a third proposal to solve the problem described in HDFS-12202. > The problem is, when we do distcp from one cluster to another (or within the > same cluster), in addition to copying file data, we copy the metadata from > source to target. If external attribute provider is enabled, the metadata may > be read from the provider, thus provider data read from source may be saved > to target HDFS. > We want to avoid saving metadata from external provider to HDFS, so we want > to bypass external provider when doing the distcp (or hadoop fs -cp) > operation. > Two alternative approaches were proposed earlier, one in HDFS-12202, the > other in HDFS-12294. The proposal here is the third one. > The idea is, we introduce a new config, that specifies a special user (or a > list of users), and let NN bypass external provider when the current user is > a special user. > If we run applications as the special user that need data from external > attribute provider, then it won't work. So the constraint on this approach > is, the special users here should not run applications that need data from > external provider. > Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], > [~manojg] for the discussions in the other jiras. > I'm creating this one to discuss further. 
[jira] [Updated] (HDFS-12402) Refactor ErasureCodingPolicyManager
[ https://issues.apache.org/jira/browse/HDFS-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-12402: - Description: 1. Correct message string grammar error 2. Use HadoopIllegalArgumentException instead of IllegalECPolicyException 3. Use HadoopIllegalArgumentException instead of IllegalArgumentException 4. Remove IllegalECPolicyException was: 1. Correct message string grammar error 2. Use HadoopIllegalArgumentException instead of IllegalECPolicyException 3. Use HadoopIllegalArgumentException instead of IllegalArgumentException > Refactor ErasureCodingPolicyManager > --- > > Key: HDFS-12402 > URL: https://issues.apache.org/jira/browse/HDFS-12402 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha4 >Reporter: SammiChen >Assignee: SammiChen > Attachments: HDFS-12402.001.patch > > > 1. Correct message string grammar error > 2. Use HadoopIllegalArgumentException instead of IllegalECPolicyException > 3. Use HadoopIllegalArgumentException instead of IllegalArgumentException > 4. Remove IllegalECPolicyException -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12402) Refactor ErasureCodingPolicyManager
[ https://issues.apache.org/jira/browse/HDFS-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-12402: - Description: 1. Correct message string grammar error 2. Use HadoopIllegalArgumentException instead of IllegalECPolicyException 3. Use HadoopIllegalArgumentException instead of IllegalArgumentException was: 1. Correct message string grammar error 2. Use HadoopIllegalECPolicyException instead of IllegalECPolicyException > Refactor ErasureCodingPolicyManager > --- > > Key: HDFS-12402 > URL: https://issues.apache.org/jira/browse/HDFS-12402 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha4 >Reporter: SammiChen >Assignee: SammiChen > Attachments: HDFS-12402.001.patch > > > 1. Correct message string grammar error > 2. Use HadoopIllegalArgumentException instead of IllegalECPolicyException > 3. Use HadoopIllegalArgumentException instead of IllegalArgumentException -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12402) Refactor ErasureCodingPolicyManager
[ https://issues.apache.org/jira/browse/HDFS-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156486#comment-16156486 ] SammiChen commented on HDFS-12402: -- Yes, [~rakeshr], thanks for the reminder. I will upload a new patch. > Refactor ErasureCodingPolicyManager > --- > > Key: HDFS-12402 > URL: https://issues.apache.org/jira/browse/HDFS-12402 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha4 >Reporter: SammiChen >Assignee: SammiChen > Attachments: HDFS-12402.001.patch > > > 1. Correct message string grammar error > 2. Use HadoopIllegalECPolicyException instead of IllegalECPolicyException -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12399) Improve erasure coding codec through add more unit tests
[ https://issues.apache.org/jira/browse/HDFS-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-12399: - Description: Improve erasure coding codec through add more unit tests (was: Handle corner case when erasure coding codec change after name node restart, cluster migration, cluster downgrade or upgrade.) > Improve erasure coding codec through add more unit tests > - > > Key: HDFS-12399 > URL: https://issues.apache.org/jira/browse/HDFS-12399 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > > Improve erasure coding codec through add more unit tests -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12399) Improve erasure coding codec through add more unit tests
[ https://issues.apache.org/jira/browse/HDFS-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-12399: - Summary: Improve erasure coding codec through add more unit tests (was: Handle erasure coding codec change after name node restart) > Improve erasure coding codec through add more unit tests > - > > Key: HDFS-12399 > URL: https://issues.apache.org/jira/browse/HDFS-12399 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > > Handle corner case when erasure coding codec change after name node restart, > cluster migration, cluster downgrade or upgrade. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-12401) Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout
[ https://issues.apache.org/jira/browse/HDFS-12401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang reassigned HDFS-12401: -- Assignee: Weiwei Yang > Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout > -- > > Key: HDFS-12401 > URL: https://issues.apache.org/jira/browse/HDFS-12401 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Xiaoyu Yao >Assignee: Weiwei Yang > > {code} > testBlockDeletionTimeout(org.apache.hadoop.ozone.container.common.TestBlockDeletingService) > Time elapsed: 100.383 sec <<< ERROR! > java.util.concurrent.TimeoutException: Timed out waiting for condition. > Thread diagnostics: > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12401) Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout
[ https://issues.apache.org/jira/browse/HDFS-12401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156477#comment-16156477 ] Weiwei Yang commented on HDFS-12401: Hi [~xyao], I will take a look at this failure shortly, thanks for filing this. > Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout > -- > > Key: HDFS-12401 > URL: https://issues.apache.org/jira/browse/HDFS-12401 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Xiaoyu Yao >Assignee: Weiwei Yang > > {code} > testBlockDeletionTimeout(org.apache.hadoop.ozone.container.common.TestBlockDeletingService) > Time elapsed: 100.383 sec <<< ERROR! > java.util.concurrent.TimeoutException: Timed out waiting for condition. > Thread diagnostics: > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11991) Ozone: Ozone shell: the root is assumed to hdfs
[ https://issues.apache.org/jira/browse/HDFS-11991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156476#comment-16156476 ] Weiwei Yang commented on HDFS-11991: Thanks [~anu], [~nandakumar131], sounds good to me. > Ozone: Ozone shell: the root is assumed to hdfs > --- > > Key: HDFS-11991 > URL: https://issues.apache.org/jira/browse/HDFS-11991 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Weiwei Yang > Labels: ozoneMerge > Fix For: HDFS-7240 > > > *hdfs oz* command, or ozone shell has a command like option to run some > commands as root easily by specifying _--root_ as a command line option. > But after HDFS-11655 that assumption is no longer true. We need to detect the > user that started the scm/ksm service and _root_ should be mapped to that > user. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12402) Refactor ErasureCodingPolicyManager
[ https://issues.apache.org/jira/browse/HDFS-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-12402: Component/s: erasure-coding > Refactor ErasureCodingPolicyManager > --- > > Key: HDFS-12402 > URL: https://issues.apache.org/jira/browse/HDFS-12402 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha4 >Reporter: SammiChen >Assignee: SammiChen > Attachments: HDFS-12402.001.patch > > > 1. Correct message string grammar error > 2. Use HadoopIllegalECPolicyException instead of IllegalECPolicyException -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode
[ https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-7859: Attachment: HDFS-7859.015.patch Refined the patch after offline discussion with Kai. > Erasure Coding: Persist erasure coding policies in NameNode > --- > > Key: HDFS-7859 > URL: https://issues.apache.org/jira/browse/HDFS-7859 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, > HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, > HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, > HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, > HDFS-7859.013.patch, HDFS-7859.014.patch, HDFS-7859.015.patch, > HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, > HDFS-7859-HDFS-7285.003.patch > > > In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we > persist EC schemas in NameNode centrally and reliably, so that EC zones can > reference them by name efficiently. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11715) Ozone: SCM : Add priority for datanode commands
[ https://issues.apache.org/jira/browse/HDFS-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156469#comment-16156469 ] Weiwei Yang commented on HDFS-11715: Hi [~anu], I don't think this is required before the merge, this can be a post-merge task. Thanks > Ozone: SCM : Add priority for datanode commands > --- > > Key: HDFS-11715 > URL: https://issues.apache.org/jira/browse/HDFS-11715 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Anu Engineer > Labels: OzonePostMerge, tocheck > > While reviewing HDFS-11493, [~cheersyang] commented that it would be a good > idea to support priority for datanode commands send from SCM. > bq. The queue seems to be time ordered, I think it will be better to support > priority as well. Commands may have different priority, for example, > replicate a container priority is usually higher than delete a container > replica; replicate a container also may have different priorities according > to the number of replicas > This jira tracks that work item. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12402) Refactor ErasureCodingPolicyManager
[ https://issues.apache.org/jira/browse/HDFS-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156444#comment-16156444 ] Rakesh R commented on HDFS-12402: - Thanks [~Sammi] for the patch. It looks like the patch has modified {{HadoopIllegalArgumentException}} and {{IllegalECPolicyException}} is still unchanged. > Refactor ErasureCodingPolicyManager > --- > > Key: HDFS-12402 > URL: https://issues.apache.org/jira/browse/HDFS-12402 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: SammiChen >Assignee: SammiChen > Attachments: HDFS-12402.001.patch > > > 1. Correct message string grammar error > 2. Use HadoopIllegalECPolicyException instead of IllegalECPolicyException -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12402) Refactor ErasureCodingPolicyManager
[ https://issues.apache.org/jira/browse/HDFS-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-12402: - Status: Open (was: Patch Available) > Refactor ErasureCodingPolicyManager > --- > > Key: HDFS-12402 > URL: https://issues.apache.org/jira/browse/HDFS-12402 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: SammiChen >Assignee: SammiChen > Attachments: HDFS-12402.001.patch > > > 1. Correct message string grammar error > 2. Use HadoopIllegalECPolicyException instead of IllegalECPolicyException -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12402) Refactor ErasureCodingPolicyManager
[ https://issues.apache.org/jira/browse/HDFS-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-12402: - Status: Patch Available (was: Open) > Refactor ErasureCodingPolicyManager > --- > > Key: HDFS-12402 > URL: https://issues.apache.org/jira/browse/HDFS-12402 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: SammiChen >Assignee: SammiChen > Attachments: HDFS-12402.001.patch > > > 1. Correct message string grammar error > 2. Use HadoopIllegalECPolicyException instead of IllegalECPolicyException -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12402) Refactor ErasureCodingPolicyManager
[ https://issues.apache.org/jira/browse/HDFS-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-12402: - Status: Patch Available (was: Open) > Refactor ErasureCodingPolicyManager > --- > > Key: HDFS-12402 > URL: https://issues.apache.org/jira/browse/HDFS-12402 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: SammiChen >Assignee: SammiChen > Attachments: HDFS-12402.001.patch > > > 1. Correct message string grammar error > 2. Use HadoopIllegalECPolicyException instead of IllegalECPolicyException -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12402) Refactor ErasureCodingPolicyManager
[ https://issues.apache.org/jira/browse/HDFS-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-12402: - Attachment: HDFS-12402.001.patch Initial patch > Refactor ErasureCodingPolicyManager > --- > > Key: HDFS-12402 > URL: https://issues.apache.org/jira/browse/HDFS-12402 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: SammiChen >Assignee: SammiChen > Attachments: HDFS-12402.001.patch > > > 1. Correct message string grammar error > 2. Use HadoopIllegalECPolicyException instead of IllegalECPolicyException -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Moved] (HDFS-12403) The size of dataQueue and ackQueue in DataStreamer has no limit when writer thread is interrupted
[ https://issues.apache.org/jira/browse/HDFS-12403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S moved YARN-7168 to HDFS-12403: Component/s: (was: client) Key: HDFS-12403 (was: YARN-7168) Project: Hadoop HDFS (was: Hadoop YARN) > The size of dataQueue and ackQueue in DataStreamer has no limit when writer > thread is interrupted > - > > Key: HDFS-12403 > URL: https://issues.apache.org/jira/browse/HDFS-12403 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Jiandan Yang > Attachments: mat.jpg > >
> In our cluster, we found that NodeManager does frequent Full GC while being decommissioned, and the biggest object is the dataQueue of DataStreamer: it holds almost 60,000 (6w) DFSPackets of about 64 KB each, as shown below.
> The root cause is that the size of dataQueue and ackQueue in DataStreamer has no limit when the writer thread is interrupted: DFSOutputStream#waitAndQueuePacket does not wait when the writer thread is interrupted. I know NodeManager may stop writing when interrupted, but DFSOutputStream could also do something to avoid infinite growth of dataQueue.
> {code:java}
> try {
>   while (!streamerClosed && dataQueue.size() + ackQueue.size() >
>       dfsClient.getConf().getWriteMaxPackets()) {
>     if (firstWait) {
>       Span span = Tracer.getCurrentSpan();
>       if (span != null) {
>         span.addTimelineAnnotation("dataQueue.wait");
>       }
>       firstWait = false;
>     }
>     try {
>       dataQueue.wait();
>     } catch (InterruptedException e) {
>       // If we get interrupted while waiting to queue data, we still need to
>       // get rid of the current packet. This is because we have an invariant
>       // that if currentPacket gets full, it will get queued before the next
>       // writeChunk.
>       //
>       // Rather than wait around for space in the queue, we should instead
>       // try to return to the caller as soon as possible, even though we
>       // slightly overrun the MAX_PACKETS length.
>       Thread.currentThread().interrupt();
>       break;
>     }
>   }
> } finally {
>   Span span = Tracer.getCurrentSpan();
>   if ((span != null) && (!firstWait)) {
>     span.addTimelineAnnotation("end.wait");
>   }
> }
> {code}
> !mat.jpg|memory_analysis!
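The mitigation the reporter hints at can be modeled as a queue that stops accepting packets once the writer thread is interrupted, instead of overrunning the limit indefinitely. The class and method names below are illustrative stand-ins, not the DataStreamer API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified model: when the queue is at its limit and the caller has been
// interrupted, reject the packet rather than letting the queue grow unbounded.
public class BoundedPacketQueue {
    private final Deque<byte[]> dataQueue = new ArrayDeque<>();
    private final int maxPackets;

    public BoundedPacketQueue(int maxPackets) {
        this.maxPackets = maxPackets;
    }

    /** Returns false when the packet was rejected (queue full, caller interrupted). */
    public synchronized boolean offerPacket(byte[] packet) {
        while (dataQueue.size() >= maxPackets) {
            if (Thread.currentThread().isInterrupted()) {
                return false; // drop instead of overrunning the limit
            }
            try {
                wait(100); // re-check periodically for space
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        dataQueue.addLast(packet);
        notifyAll();
        return true;
    }

    public synchronized int size() {
        return dataQueue.size();
    }
}
```

The key difference from the quoted code is that the interrupted path returns a rejection to the caller instead of enqueuing the current packet anyway.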
[jira] [Created] (HDFS-12402) Refactor ErasureCodingPolicyManager
SammiChen created HDFS-12402: Summary: Refactor ErasureCodingPolicyManager Key: HDFS-12402 URL: https://issues.apache.org/jira/browse/HDFS-12402 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 3.0.0-alpha4 Reporter: SammiChen Assignee: SammiChen 1. Correct message string grammar error 2. Use HadoopIllegalECPolicyException instead of IllegalECPolicyException
[jira] [Commented] (HDFS-12392) Writing striped file failed due to different cell size
[ https://issues.apache.org/jira/browse/HDFS-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156360#comment-16156360 ] SammiChen commented on HDFS-12392: -- Thanks [~drankye] for helping review and commit the patch! > Writing striped file failed due to different cell size > -- > > Key: HDFS-12392 > URL: https://issues.apache.org/jira/browse/HDFS-12392 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12392.001.patch, HDFS-12392.002.patch, > HDFS-12392.003.patch > > > Root cause: The buffer size returned by ElasticByteBufferPool.getBuffer() is > more than caller expected. > Exception stack: > org.apache.hadoop.HadoopIllegalArgumentException: Invalid buffer, not of > length 4096 > at > org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.checkBuffers(ByteBufferEncodingState.java:99) > at > org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.(ByteBufferEncodingState.java:46) > at > org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder.encode(RawErasureEncoder.java:67) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.encode(DFSStripedOutputStream.java:368) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.writeParityCells(DFSStripedOutputStream.java:942) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:547) > at > org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217) > at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:125) > at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111) > at > org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57) > at java.io.DataOutputStream.write(DataOutputStream.java:107) > at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94) > at 
org.apache.hadoop.hdfs.DFSTestUtil.writeFile(DFSTestUtil.java:834) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
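The root cause above (a pooled buffer with more capacity than the requested cell size, rejected by a codec that validates `remaining()`) suggests trimming the buffer before handing it to the encoder. The sketch below is a generic `ByteBuffer` illustration of that fix pattern, not the actual HDFS-12392 patch.

```java
import java.nio.ByteBuffer;

// A buffer pool may return a buffer whose capacity exceeds the requested size.
// Capping the limit and slicing yields a view of exactly cellSize bytes, which
// satisfies a codec that checks remaining() == cellSize.
public class CellBuffer {
    /** Trim a possibly over-sized pooled buffer to exactly cellSize bytes. */
    public static ByteBuffer toCell(ByteBuffer pooled, int cellSize) {
        pooled.clear();
        pooled.limit(cellSize);
        return pooled.slice(); // remaining() == cellSize regardless of pool capacity
    }

    public static void main(String[] args) {
        ByteBuffer oversized = ByteBuffer.allocate(8192); // pool returned more than 4096
        ByteBuffer cell = toCell(oversized, 4096);
        System.out.println(cell.remaining()); // 4096
    }
}
```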
[jira] [Commented] (HDFS-12390) Support to refresh DNS to switch mapping
[ https://issues.apache.org/jira/browse/HDFS-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156311#comment-16156311 ] Jiandan Yang commented on HDFS-12390: -- [~aw] Rewriting the script to Java will not start a child process? Could you give me more details? > Support to refresh DNS to switch mapping > > > Key: HDFS-12390 > URL: https://issues.apache.org/jira/browse/HDFS-12390 > Project: Hadoop HDFS > Issue Type: New Feature > Components: hdfs, hdfs-client >Reporter: Jiandan Yang >Assignee: Jiandan Yang > Attachments: HDFS-12390.001.patch, HDFS-12390.002.patch, > HDFS-12390-branch-2.8.2.001.patch > > > As described in > [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], > ScriptBasedMapping may drive NN CPU to 100%. ScriptBasedMapping runs a > sub-process to get the rack info of a DN/Client, so we think it is a little > heavy. We plan to use TableMapping, but TableMapping does not support > refresh and cannot reload the rack info of newly added DataNodes. > So we implement refreshDNSToSwitch in dfsadmin.
[jira] [Comment Edited] (HDFS-12390) Support to refresh DNS to switch mapping
[ https://issues.apache.org/jira/browse/HDFS-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156311#comment-16156311 ] Jiandan Yang edited comment on HDFS-12390 at 9/7/17 1:54 AM: -- [~aw] Rewriting the script to Java will not start a child process? Could you give me more details? was (Author: yangjiandan): [~aw] Rewriting the script to Java will not starts child process? Could you give me more details? > Support to refresh DNS to switch mapping > > > Key: HDFS-12390 > URL: https://issues.apache.org/jira/browse/HDFS-12390 > Project: Hadoop HDFS > Issue Type: New Feature > Components: hdfs, hdfs-client >Reporter: Jiandan Yang >Assignee: Jiandan Yang > Attachments: HDFS-12390.001.patch, HDFS-12390.002.patch, > HDFS-12390-branch-2.8.2.001.patch > > > As described in > [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], > ScriptBasedMapping may drive NN CPU to 100%. ScriptBasedMapping runs a > sub-process to get the rack info of a DN/Client, so we think it is a little > heavy. We plan to use TableMapping, but TableMapping does not support > refresh and cannot reload the rack info of newly added DataNodes. > So we implement refreshDNSToSwitch in dfsadmin.
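The behavior HDFS-12390 wants from TableMapping can be modeled as an atomically swappable host-to-rack table. This is a minimal stdlib sketch with illustrative names; real Hadoop code would re-read the topology file inside `refresh` rather than take the new table as a parameter.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Minimal model of a refreshable rack mapping: lookups read an immutable
// snapshot, and refresh swaps in a new snapshot so newly added DataNodes
// resolve without restarting the NameNode.
public class RefreshableRackMapping {
    private final AtomicReference<Map<String, String>> table =
        new AtomicReference<>(new HashMap<>());

    public String resolve(String host) {
        return table.get().getOrDefault(host, "/default-rack");
    }

    /** Atomically replace the table, picking up newly added DataNodes. */
    public void refresh(Map<String, String> newTable) {
        table.set(new HashMap<>(newTable));
    }
}
```

An admin command like the proposed `refreshDNSToSwitch` would simply invoke `refresh` on the NameNode's mapping instance.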
[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode
[ https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156269#comment-16156269 ] Kai Zheng commented on HDFS-7859: - HDFS-12395 handles edit log related changes as part of this work. > Erasure Coding: Persist erasure coding policies in NameNode > --- > > Key: HDFS-7859 > URL: https://issues.apache.org/jira/browse/HDFS-7859 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, > HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, > HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, > HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, > HDFS-7859.013.patch, HDFS-7859.014.patch, HDFS-7859-HDFS-7285.002.patch, > HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch > > > In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we > persist EC schemas in NameNode centrally and reliably, so that EC zones can > reference them by name efficiently. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12395) Support erasure coding policy operation changes in namenode edit log
[ https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HDFS-12395: - Summary: Support erasure coding policy operation changes in namenode edit log (was: Support add, remove, disable and enable erasure coding policy operations in edit log) > Support erasure coding policy operation changes in namenode edit log > > > Key: HDFS-12395 > URL: https://issues.apache.org/jira/browse/HDFS-12395 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12395.001.patch > > > Support add, remove, disable, enable erasure coding policy operation in edit > log. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode
[ https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156260#comment-16156260 ] Kai Zheng commented on HDFS-7859: - Unfortunately I still found many not-so-relevant changes here. The changes look good to have, but they should go into separate task(s) under HDFS-7337 instead of being mixed in here; this jira should focus on persisting EC policies in the NameNode and nothing more, so that folks can take a quick glance at exactly what changes we introduce to the NameNode/fsimage. Please revise one more time, thanks. > Erasure Coding: Persist erasure coding policies in NameNode > --- > > Key: HDFS-7859 > URL: https://issues.apache.org/jira/browse/HDFS-7859 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, > HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, > HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, > HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, > HDFS-7859.013.patch, HDFS-7859.014.patch, HDFS-7859-HDFS-7285.002.patch, > HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch > > > In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we > persist EC schemas in NameNode centrally and reliably, so that EC zones can > reference them by name efficiently.
[jira] [Commented] (HDFS-12353) Modify Dfsuse percent of dfsadmin report inconsistent with Dfsuse percent of datanode reports.
[ https://issues.apache.org/jira/browse/HDFS-12353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156237#comment-16156237 ] Lei (Eddy) Xu commented on HDFS-12353: -- Good catch, [~steven-wugang]. The patch LGTM. A small request, could you add a test to enforce this fix remain correctly in the future? Thanks! > Modify Dfsuse percent of dfsadmin report inconsistent with Dfsuse percent of > datanode reports. > -- > > Key: HDFS-12353 > URL: https://issues.apache.org/jira/browse/HDFS-12353 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: steven-wugang >Assignee: steven-wugang > Attachments: HDFS-12353.patch > > > use command "hdfs dfsadmin -report",as follows: > [hdfs@zhd2-3 sbin]$ hdfs dfsadmin -report > Configured Capacity: 157497375621120 (143.24 TB) > Present Capacity: 148541284228197 (135.10 TB) > DFS Remaining: 56467228499968 (51.36 TB) > DFS Used: 92074055728229 (83.74 TB) > DFS Used%: 61.99% > Under replicated blocks: 1 > Blocks with corrupt replicas: 3 > Missing blocks: 0 > Missing blocks (with replication factor 1): 0 > - > Live datanodes (4): > Name: 172.168.129.1:50010 (zhd2-1) > Hostname: zhd2-1 > Decommission Status : Normal > Configured Capacity: 39374343905280 (35.81 TB) > DFS Used: 23560170107046 (21.43 TB) > Non DFS Used: 609684660058 (567.81 GB) > DFS Remaining: 15204489138176 (13.83 TB) > DFS Used%: 59.84% > DFS Remaining%: 38.62% > Configured Cache Capacity: 60 (5.59 GB) > Cache Used: 0 (0 B) > Cache Remaining: 60 (5.59 GB) > Cache Used%: 0.00% > Cache Remaining%: 100.00% > Xceivers: 36 > Last contact: Fri Aug 25 10:06:50 CST 2017 > Name: 172.168.129.3:50010 (zhd2-3) > Hostname: zhd2-3 > Decommission Status : Normal > Configured Capacity: 39374343905280 (35.81 TB) > DFS Used: 23463410242057 (21.34 TB) > Non DFS Used: 620079140343 (577.49 GB) > DFS Remaining: 15290854522880 (13.91 TB) > DFS Used%: 59.59% > DFS Remaining%: 38.83% > Configured Cache Capacity: 60 (5.59 GB) > Cache Used: 0 (0 B) > Cache Remaining: 60 (5.59 GB) 
> Cache Used%: 0.00% > Cache Remaining%: 100.00% > Xceivers: 30 > Last contact: Fri Aug 25 10:06:50 CST 2017 > Name: 172.168.129.4:50010 (zhd2-4) > Hostname: zhd2-4 > Decommission Status : Normal > Configured Capacity: 39374343905280 (35.81 TB) > DFS Used: 23908322375185 (21.74 TB) > Non DFS Used: 618808670703 (576.31 GB) > DFS Remaining: 14847212859392 (13.50 TB) > DFS Used%: 60.72% > DFS Remaining%: 37.71% > Configured Cache Capacity: 60 (5.59 GB) > Cache Used: 0 (0 B) > Cache Remaining: 60 (5.59 GB) > Cache Used%: 0.00% > Cache Remaining%: 100.00% > Xceivers: 38 > Last contact: Fri Aug 25 10:06:50 CST 2017 > Name: 172.168.129.2:50010 (zhd2-2) > Hostname: zhd2-2 > Decommission Status : Normal > Configured Capacity: 39374343905280 (35.81 TB) > DFS Used: 21142153003941 (19.23 TB) > Non DFS Used: 7107518921819 (6.46 TB) > DFS Remaining: 11124671979520 (10.12 TB) > DFS Used%: 53.70% > DFS Remaining%: 28.25% > Configured Cache Capacity: 60 (5.59 GB) > Cache Used: 0 (0 B) > Cache Remaining: 60 (5.59 GB) > Cache Used%: 0.00% > Cache Remaining%: 100.00% > Xceivers: 22 > Last contact: Fri Aug 25 10:06:50 CST 2017 > The first "DFS Used%" value on the top is DFS Used/Present Capacity,but "DFS > Used%" value in other live datanode reports is DFS Used/Configured Capacity. > The two calculation methods are inconsistent,misunderstanding may arise. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
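The inconsistency described above is pure arithmetic: the cluster summary divides DFS Used by Present Capacity, while each live-DataNode section divides DFS Used by Configured Capacity. A small calculation with the cluster totals quoted in the report shows how far apart the two conventions land for the same usage.

```java
// Sketch of the two percentage formulas the dfsadmin report mixes.
// The constants are the cluster totals (in bytes) from the report above.
public class DfsUsedPercent {
    public static double percent(long usedBytes, long capacityBytes) {
        return 100.0 * usedBytes / capacityBytes;
    }

    public static void main(String[] args) {
        long configured = 157497375621120L; // Configured Capacity
        long present    = 148541284228197L; // Present Capacity (excludes non-DFS use)
        long used       = 92074055728229L;  // DFS Used

        // Summary line convention: DFS Used / Present Capacity -> ~61.99%
        System.out.printf("vs present:    %.2f%%%n", percent(used, present));
        // Per-DataNode convention: DFS Used / Configured Capacity -> ~58.46%
        System.out.printf("vs configured: %.2f%%%n", percent(used, configured));
    }
}
```

The roughly 3.5-point gap comes entirely from the denominator choice, which is why mixing the two in one report invites misreading.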
[jira] [Commented] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout
[ https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156219#comment-16156219 ] Hadoop QA commented on HDFS-12323: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 
56s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 8 unchanged - 1 fixed = 9 total (was 9) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 35s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}136m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.server.namenode.TestReencryptionHandler | | | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12323 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12885680/HDFS-12323.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e0014e189b66 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / dd81494 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21027/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit |
[jira] [Commented] (HDFS-11939) Ozone : add read/write random access to Chunks of a key
[ https://issues.apache.org/jira/browse/HDFS-11939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156211#comment-16156211 ] Hadoop QA commented on HDFS-11939: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HDFS-11939 does not apply to HDFS-7240. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-11939 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872799/HDFS-11939-HDFS-7240.004.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21030/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone : add read/write random access to Chunks of a key > --- > > Key: HDFS-11939 > URL: https://issues.apache.org/jira/browse/HDFS-11939 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Chen Liang > Labels: OzonePostMerge > Attachments: HDFS-11939-HDFS-7240.001.patch, > HDFS-11939-HDFS-7240.002.patch, HDFS-11939-HDFS-7240.003.patch, > HDFS-11939-HDFS-7240.004.patch > > > In Ozone, the value of a key is a sequence of container chunks. Currently, > the only way to read/write the chunks is by using ChunkInputStream and > ChunkOutputStream. However, by the nature of streams, these classes are > currently implemented to only allow sequential read/write. > Ideally we would like to support random access of the chunks. For example, we > want to be able to seek to a specific offset and read/write some data. This > will be critical for key range read/write feature, and potentially important > for supporting parallel read/write. 
> This JIRA tracks adding support by implementing FileChannel class on top > Chunks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
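The seek support described above boils down to translating an absolute offset into a (chunk index, offset-within-chunk) pair. The class below is purely illustrative — it is not the Ozone API — and assumes fixed-size chunks, which keeps the mapping to simple division and remainder.

```java
// Illustrative sketch (not the actual Ozone classes): the core arithmetic a
// seekable, FileChannel-style view over a key's chunk sequence needs, under
// the simplifying assumption that every chunk has the same size.
public class ChunkSeek {
    private final long chunkSize;

    public ChunkSeek(long chunkSize) {
        this.chunkSize = chunkSize;
    }

    // Which chunk in the sequence holds byte `pos` of the key's value.
    public long chunkIndex(long pos) {
        return pos / chunkSize;
    }

    // Where `pos` falls inside that chunk.
    public long offsetInChunk(long pos) {
        return pos % chunkSize;
    }

    // Bytes readable from `pos` without crossing a chunk boundary; a longer
    // read must continue at offset 0 of the next chunk.
    public long remainingInChunk(long pos) {
        return chunkSize - offsetInChunk(pos);
    }
}
```

A random read at an arbitrary offset then becomes: open the chunk at chunkIndex(pos), skip offsetInChunk(pos) bytes, and read up to remainingInChunk(pos) bytes before moving to the next chunk.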
[jira] [Updated] (HDFS-12268) Ozone: Add metrics for pending storage container requests
[ https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12268: Labels: ozoneMerge (was: ) > Ozone: Add metrics for pending storage container requests > - > > Key: HDFS-12268 > URL: https://issues.apache.org/jira/browse/HDFS-12268 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: ozoneMerge > Attachments: HDFS-12268-HDFS-7240.001.patch, > HDFS-12268-HDFS-7240.002.patch, HDFS-12268-HDFS-7240.003.patch, > HDFS-12268-HDFS-7240.004.patch, HDFS-12268-HDFS-7240.005.patch > > > Since the storage container async interface was added in HDFS-11580, we > need to keep an eye on the queue depth of pending container requests. It can > help us more easily find out whether there are performance problems.
[jira] [Updated] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
[ https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12235: Labels: ozoneMerge (was: ) > Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions > --- > > Key: HDFS-12235 > URL: https://issues.apache.org/jira/browse/HDFS-12235 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: ozoneMerge > Attachments: HDFS-12235-HDFS-7240.001.patch, > HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, > HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch, > HDFS-12235-HDFS-7240.006.patch, HDFS-12235-HDFS-7240.007.patch, > HDFS-12235-HDFS-7240.008.patch, HDFS-12235-HDFS-7240.009.patch > > > KSM and SCM interaction for delete key operation, both KSM and SCM stores key > state info in a backlog, KSM needs to scan this log and send block-deletion > command to SCM, once SCM is fully aware of the message, KSM removes the key > completely from namespace. See more from the design doc under HDFS-11922, > this is task break down 2. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12000) Ozone: Container : Add key versioning support-1
[ https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12000: Labels: OzonePostMerge (was: ) > Ozone: Container : Add key versioning support-1 > --- > > Key: HDFS-12000 > URL: https://issues.apache.org/jira/browse/HDFS-12000 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Chen Liang > Labels: OzonePostMerge > Attachments: HDFS-12000-HDFS-7240.001.patch, > HDFS-12000-HDFS-7240.002.patch, HDFS-12000-HDFS-7240.003.patch, > HDFS-12000-HDFS-7240.004.patch, HDFS-12000-HDFS-7240.005.patch, > OzoneVersion.001.pdf > > > The rest interface of ozone supports versioning of keys. This support comes > from the containers and how chunks are managed to support this feature. This > JIRA tracks that feature. Will post a detailed design doc so that we can talk > about this feature. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12340) Ozone: C/C++ implementation of ozone client using curl
[ https://issues.apache.org/jira/browse/HDFS-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12340: Labels: ozoneMerge (was: ) > Ozone: C/C++ implementation of ozone client using curl > -- > > Key: HDFS-12340 > URL: https://issues.apache.org/jira/browse/HDFS-12340 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12340-HDFS-7240.001.patch, > HDFS-12340-HDFS-7240.002.patch, main.C, ozoneClient.C, ozoneClient.h > > > This Jira is introduced for implementation of ozone client in C/C++ using > curl library. > All these calls will make use of HTTP protocol and would require libcurl. The > libcurl API are referenced from here: > https://curl.haxx.se/libcurl/ > Additional details would be posted along with the patches. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12385) Ozone: OzoneClient: Refactoring OzoneClient API
[ https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12385: Labels: ozoneMerge (was: ) > Ozone: OzoneClient: Refactoring OzoneClient API > --- > > Key: HDFS-12385 > URL: https://issues.apache.org/jira/browse/HDFS-12385 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Labels: ozoneMerge > Attachments: HDFS-12385-HDFS-7240.000.patch, OzoneClient.pdf > > > This jira is for refactoring {{OzoneClient}} API. [^OzoneClient.pdf] will > give an idea on how the API will look. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12389) Ozone: oz commandline list calls should return valid JSON format output
[ https://issues.apache.org/jira/browse/HDFS-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12389: Labels: ozoneMerge (was: ) > Ozone: oz commandline list calls should return valid JSON format output > --- > > Key: HDFS-12389 > URL: https://issues.apache.org/jira/browse/HDFS-12389 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: ozoneMerge > Attachments: HDFS-12389-HDFS-7240.001.patch, > HDFS-12389-HDFS-7240.002.patch, json_output_test.log > > > At present the outputs of {{listVolume}}, {{listBucket}} and {{listKey}} are > hard to parse, for example following call > {code} > ./bin/hdfs oz -listVolume http://localhost:9864 -user wwei > {code} > lists all volumes in my cluster and it returns > {noformat} > { > "version" : 0, > "md5hash" : null, > "createdOn" : "Mon, 04 Sep 2017 03:25:22 GMT", > "modifiedOn" : "Mon, 04 Sep 2017 03:25:22 GMT", > "size" : 10240, > "keyName" : "key-0-22381", > "dataFileName" : null > } > { > "version" : 0, > "md5hash" : null, > "createdOn" : "Mon, 04 Sep 2017 03:25:22 GMT", > "modifiedOn" : "Mon, 04 Sep 2017 03:25:22 GMT", > "size" : 10240, > "keyName" : "key-0-22381", > "dataFileName" : null > } > ... > {noformat} > this is not a valid JSON format output hence it is hard to parse in clients' > script for further interactions. Propose to reformat them to valid JSON data. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
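The fix proposed above is a formatting change: instead of printing each record's JSON object back-to-back (which no JSON parser accepts as a whole), the list commands should emit one JSON array. The sketch below shows the shape of that change with plain string joining; the actual patch would presumably build the array with a JSON library such as Jackson, and this version assumes each element passed in is already a valid JSON object.

```java
import java.util.List;

// Sketch of turning N concatenated JSON objects into one valid JSON
// document: wrap the records in a single array. Assumes each input string
// is itself well-formed JSON; a real implementation would serialize the
// record objects with a JSON library rather than join strings.
public class JsonListOutput {
    public static String asJsonArray(List<String> jsonObjects) {
        return "[" + String.join(",", jsonObjects) + "]";
    }
}
```

With that change, a client script can feed the whole listVolume/listBucket/listKey output to any JSON parser (e.g. jq or a Python json.loads call) instead of splitting records heuristically.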
[jira] [Updated] (HDFS-12100) Ozone: KSM: Allocate key should honour volume quota if quota is set on the volume
[ https://issues.apache.org/jira/browse/HDFS-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12100: Labels: ozoneMerge (was: ) > Ozone: KSM: Allocate key should honour volume quota if quota is set on the > volume > - > > Key: HDFS-12100 > URL: https://issues.apache.org/jira/browse/HDFS-12100 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Lokesh Jain > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12100-HDFS-7240.001.patch, > HDFS-12100-HDFS-7240.002.patch, HDFS-12100-HDFS-7240.003.patch > > > KeyManagerImpl#allocateKey currently does not check the volume quota before > allocating a key, this can cause the volume quota overrun. > Volume quota needs to be check before allocating the key in the SCM. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
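The check requested above is small but easy to state precisely: before allocating a key, verify that the volume's current usage plus the requested key size stays within the volume quota, treating an unset quota as unlimited. The names below are illustrative, not the actual KSM classes.

```java
// Hypothetical sketch of the quota guard HDFS-12100 asks for in key
// allocation. Class and constant names are illustrative only.
public class VolumeQuotaCheck {
    // Sentinel meaning "no quota configured on this volume" (an assumption
    // of this sketch, not necessarily the value Ozone uses).
    public static final long QUOTA_UNSET = -1L;

    // Returns true if allocating keySize more bytes keeps the volume
    // within quota; always true when no quota is set.
    public static boolean canAllocate(long quotaBytes, long usedBytes, long keySize) {
        if (quotaBytes == QUOTA_UNSET) {
            return true;
        }
        return usedBytes + keySize <= quotaBytes;
    }
}
```

The allocation path would reject the request (rather than silently overrun the quota) whenever canAllocate returns false.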
[jira] [Updated] (HDFS-12329) Ozone: Ratis: Readonly calls in XceiverClientRatis should use sendReadOnly
[ https://issues.apache.org/jira/browse/HDFS-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12329: Labels: ozoneMerge (was: ) > Ozone: Ratis: Readonly calls in XceiverClientRatis should use sendReadOnly > -- > > Key: HDFS-12329 > URL: https://issues.apache.org/jira/browse/HDFS-12329 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12329-HDFS-7240.001.patch, > HDFS-12329-HDFS-7240.002.patch > > > Currently both write and readonly calls in Ratis use RaftClient.send which > enqueues the the request to the raft log and is processed later when the log > entry is consumed. > Readonly call can be optimized by using RaftClient.sendReadOnly which will > directly query the RaftServer for a particular request. > This jira will be used to discuss this issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12370) Ozone: Implement TopN container choosing policy for BlockDeletionService
[ https://issues.apache.org/jira/browse/HDFS-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12370: Labels: ozoneMerge (was: ) > Ozone: Implement TopN container choosing policy for BlockDeletionService > > > Key: HDFS-12370 > URL: https://issues.apache.org/jira/browse/HDFS-12370 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: ozoneMerge > Attachments: HDFS-12370-HDFS-7240.001.patch, > HDFS-12370-HDFS-7240.002.patch, HDFS-12370-HDFS-7240.003.patch > > > Implement TopN container choosing policy for BlockDeletionService. This is > discussed from HDFS-12354. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12321) Ozone : debug cli: add support to load user-provided SQL query
[ https://issues.apache.org/jira/browse/HDFS-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12321: Labels: ozoneMerge (was: ) > Ozone : debug cli: add support to load user-provided SQL query > -- > > Key: HDFS-12321 > URL: https://issues.apache.org/jira/browse/HDFS-12321 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Chen Liang > Labels: ozoneMerge > Fix For: ozone > > Attachments: HDFS-12321-HDFS-7240.001.patch, > HDFS-12321-HDFS-7240.002.patch, HDFS-12321-HDFS-7240.003.patch, > HDFS-12321-HDFS-7240.004.patch, HDFS-12321-HDFS-7240.005.patch, > HDFS-12321-HDFS-7240.006.patch, HDFS-12321-HDFS-7240.007.patch, > HDFS-12321-HDFS-7240.008.patch, HDFS-12321-HDFS-7240.009.patch, > HDFS-12321-HDFS-7240.010.patch > > > This JIRA extends SQL CLI to support loading a user-provided file that > includes any sql query the user wants to run on the SQLite db obtained by > converting Ozone metadata db. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later
[ https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12098: Labels: ozoneMerge (was: ) > Ozone: Datanode is unable to register with scm if scm starts later > -- > > Key: HDFS-12098 > URL: https://issues.apache.org/jira/browse/HDFS-12098 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, ozone, scm >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Critical > Labels: ozoneMerge > Attachments: disabled-scm-test.patch, HDFS-12098-HDFS-7240.001.patch, > HDFS-12098-HDFS-7240.002.patch, HDFS-12098-HDFS-7240.testcase-1.patch, > HDFS-12098-HDFS-7240.testcase.patch, Screen Shot 2017-07-11 at 4.58.08 > PM.png, thread_dump.log > > > Reproducing steps > 1. Start namenode > {{./bin/hdfs --daemon start namenode}} > 2. Start datanode > {{./bin/hdfs datanode}} > will see following connection issues > {noformat} > 17/07/13 21:16:48 INFO ipc.Client: Retrying connect to server: > ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 0 time(s); retry > policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 > SECONDS) > 17/07/13 21:16:49 INFO ipc.Client: Retrying connect to server: > ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 1 time(s); retry > policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 > SECONDS) > 17/07/13 21:16:50 INFO ipc.Client: Retrying connect to server: > ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 2 time(s); retry > policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 > SECONDS) > 17/07/13 21:16:51 INFO ipc.Client: Retrying connect to server: > ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 3 time(s); retry > policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 > SECONDS) > {noformat} > this is expected because scm is not started yet > 3. 
Start scm > {{./bin/hdfs scm}} > expecting datanode can register to this scm, expecting the log in scm > {noformat} > 17/07/13 21:22:30 INFO node.SCMNodeManager: Data node with ID: > af22862d-aafa-4941-9073-53224ae43e2c Registered. > {noformat} > but did *NOT* see this log. (_I debugged into the code and found the datanode > state was transited SHUTDOWN unexpectedly because the thread leaks, each of > those threads counted to set to next state and they all set to SHUTDOWN > state_) > 4. Create a container from scm CLI > {{./bin/hdfs scm -container -create -c 20170714c0}} > this fails with following exception > {noformat} > Creating container : 20170714c0. > Error executing > command:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.scm.exceptions.SCMException): > Unable to create container while in chill mode > at > org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:241) > at > org.apache.hadoop.ozone.scm.StorageContainerManager.allocateContainer(StorageContainerManager.java:392) > at > org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.allocateContainer(StorageContainerLocationProtocolServerSideTranslatorPB.java:73) > {noformat} > datanode was not registered to scm, thus it's still in chill mode. > *Note*, if we start scm first, there is no such issue, I can create container > from CLI without any problem. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12331) Ozone: Mini cluster can't start up on Windows after HDFS-12159
[ https://issues.apache.org/jira/browse/HDFS-12331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12331: Labels: ozoneMerge (was: ) > Ozone: Mini cluster can't start up on Windows after HDFS-12159 > -- > > Key: HDFS-12331 > URL: https://issues.apache.org/jira/browse/HDFS-12331 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: ozoneMerge > Attachments: HDFS-12331-HDFS-7240.001.patch > > > ozone mini cluster can't start up on Windows after HDFS-12159. > The error log: > {noformat} > java.net.URISyntaxException: Illegal character in opaque part at index 2: > D:\work-project\hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\dn0_data-1\3d3d5718-4219-4ec3-a9c5-e594801a1430 > at java.net.URI$Parser.fail(URI.java:2848) > at java.net.URI$Parser.checkChars(URI.java:3021) > at java.net.URI$Parser.parse(URI.java:3058) > at java.net.URI.(URI.java:588) > at org.apache.ratis.util.FileUtils.stringAsURI(FileUtils.java:133) > at > org.apache.ratis.server.storage.RaftStorage.(RaftStorage.java:49) > at org.apache.ratis.server.impl.ServerState.(ServerState.java:85) > at > org.apache.ratis.server.impl.RaftServerImpl.(RaftServerImpl.java:94) > at > org.apache.ratis.server.impl.RaftServerProxy.initImpl(RaftServerProxy.java:67) > at > org.apache.ratis.server.impl.RaftServerProxy.(RaftServerProxy.java:62) > at > org.apache.ratis.server.impl.ServerImplUtils.newRaftServer(ServerImplUtils.java:43) > at > org.apache.ratis.server.impl.ServerImplUtils.newRaftServer(ServerImplUtils.java:35) > at org.apache.ratis.server.RaftServer$Builder.build(RaftServer.java:70) > at > org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.(XceiverServerRatis.java:68) > at > org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.newXceiverServerRatis(XceiverServerRatis.java:130) > at > 
org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.(OzoneContainer.java:113) > at > org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.(DatanodeStateMachine.java:76) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.bpRegistrationSucceeded(DataNode.java:1592) > at > org.apache.hadoop.hdfs.server.datanode.BPOfferService.registrationSucceeded(BPOfferService.java:409) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:783) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:286) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816) > {noformat} > The root cause is that RatiServer instance was newly created in > {{OzoneContainer}} after HDFS-12159 but it can't recognize the path under > Windows. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12321) Ozone : debug cli: add support to load user-provided SQL query
[ https://issues.apache.org/jira/browse/HDFS-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12321: Component/s: ozone > Ozone : debug cli: add support to load user-provided SQL query > -- > > Key: HDFS-12321 > URL: https://issues.apache.org/jira/browse/HDFS-12321 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Chen Liang > Labels: ozoneMerge > Fix For: ozone > > Attachments: HDFS-12321-HDFS-7240.001.patch, > HDFS-12321-HDFS-7240.002.patch, HDFS-12321-HDFS-7240.003.patch, > HDFS-12321-HDFS-7240.004.patch, HDFS-12321-HDFS-7240.005.patch, > HDFS-12321-HDFS-7240.006.patch, HDFS-12321-HDFS-7240.007.patch, > HDFS-12321-HDFS-7240.008.patch, HDFS-12321-HDFS-7240.009.patch, > HDFS-12321-HDFS-7240.010.patch > > > This JIRA extends the SQL CLI to support loading a user-provided file that > includes any SQL query the user wants to run on the SQLite db obtained by > converting the Ozone metadata db. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
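The feature described above — run the statements from a user-supplied file against the SQLite db — can be sketched as follows. All names here are invented for illustration (the actual CLI has its own plumbing), and the statement splitter is deliberately naive: it skips {{--}} comment lines but does not handle semicolons inside string literals.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class SqlFileRunner {
    /** Split a user-provided SQL file into statements, skipping comment lines. */
    static List<String> parseStatements(String sqlText) {
        List<String> out = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (String line : sqlText.split("\n")) {
            String trimmed = line.trim();
            if (trimmed.isEmpty() || trimmed.startsWith("--")) {
                continue; // blank lines and SQL comments
            }
            current.append(trimmed).append(' ');
            if (trimmed.endsWith(";")) {        // statement boundary
                out.add(current.toString().trim());
                current.setLength(0);
            }
        }
        return out;
    }

    /** Run each query from the file against an already-open SQLite connection. */
    static void runFile(Connection conn, Path sqlFile) throws Exception {
        String text = new String(Files.readAllBytes(sqlFile));
        try (Statement stmt = conn.createStatement()) {
            for (String sql : parseStatements(text)) {
                try (ResultSet rs = stmt.executeQuery(sql)) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }
}
```

The {{Connection}} would come from the SQLite JDBC driver pointed at the db produced by the Ozone-metadata conversion step.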
[jira] [Updated] (HDFS-11939) Ozone : add read/write random access to Chunks of a key
[ https://issues.apache.org/jira/browse/HDFS-11939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11939: Labels: OzonePostMerge (was: ) > Ozone : add read/write random access to Chunks of a key > --- > > Key: HDFS-11939 > URL: https://issues.apache.org/jira/browse/HDFS-11939 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Chen Liang > Labels: OzonePostMerge > Attachments: HDFS-11939-HDFS-7240.001.patch, > HDFS-11939-HDFS-7240.002.patch, HDFS-11939-HDFS-7240.003.patch, > HDFS-11939-HDFS-7240.004.patch > > > In Ozone, the value of a key is a sequence of container chunks. Currently, > the only way to read/write the chunks is by using ChunkInputStream and > ChunkOutputStream. However, by the nature of streams, these classes are > currently implemented to only allow sequential read/write. > Ideally we would like to support random access to the chunks. For example, we > want to be able to seek to a specific offset and read/write some data. This > will be critical for the key range read/write feature, and potentially important > for supporting parallel read/write. > This JIRA tracks adding support by implementing a FileChannel-like class on top of > Chunks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
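The seek-then-read behavior described above reduces to mapping a logical offset in the key onto a (chunk index, offset within chunk) pair. A self-contained in-memory sketch — this is not the ChunkInputStream/ChunkOutputStream code, and it assumes a fixed chunk size, whereas real chunk lengths come from the key's metadata:

```java
import java.util.List;

/** Random-access reads over a key stored as a list of chunks (in-memory sketch). */
public class RandomChunkReader {
    private final List<byte[]> chunks;
    private final int chunkSize;
    private long position;

    public RandomChunkReader(List<byte[]> chunks, int chunkSize) {
        this.chunks = chunks;
        this.chunkSize = chunkSize;
    }

    /** Seek to an absolute offset within the key, like FileChannel.position(). */
    public void seek(long offset) {
        this.position = offset;
    }

    /** Read up to len bytes starting at the current position; -1 at end of key. */
    public int read(byte[] buf, int off, int len) {
        int total = 0;
        while (total < len) {
            int chunkIdx = (int) (position / chunkSize);   // which chunk
            int inChunk = (int) (position % chunkSize);    // offset inside it
            if (chunkIdx >= chunks.size()) break;          // past end of key
            byte[] chunk = chunks.get(chunkIdx);
            int avail = chunk.length - inChunk;
            if (avail <= 0) break;
            int n = Math.min(avail, len - total);
            System.arraycopy(chunk, inChunk, buf, off + total, n);
            position += n;
            total += n;
        }
        return total == 0 ? -1 : total;
    }
}
```

A read that straddles a chunk boundary simply loops, copying the tail of one chunk and the head of the next — the same decomposition a chunk-backed {{FileChannel}} would perform against remote chunk reads.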
[jira] [Updated] (HDFS-11939) Ozone : add read/write random access to Chunks of a key
[ https://issues.apache.org/jira/browse/HDFS-11939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11939: Component/s: ozone > Ozone : add read/write random access to Chunks of a key > --- > > Key: HDFS-11939 > URL: https://issues.apache.org/jira/browse/HDFS-11939 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Chen Liang > Labels: OzonePostMerge > Attachments: HDFS-11939-HDFS-7240.001.patch, > HDFS-11939-HDFS-7240.002.patch, HDFS-11939-HDFS-7240.003.patch, > HDFS-11939-HDFS-7240.004.patch > > > In Ozone, the value of a key is a sequence of container chunks. Currently, > the only way to read/write the chunks is by using ChunkInputStream and > ChunkOutputStream. However, by the nature of streams, these classes are > currently implemented to only allow sequential read/write. > Ideally we would like to support random access to the chunks. For example, we > want to be able to seek to a specific offset and read/write some data. This > will be critical for the key range read/write feature, and potentially important > for supporting parallel read/write. > This JIRA tracks adding support by implementing a FileChannel-like class on top of > Chunks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11997) ChunkManager functions do not use the argument keyName
[ https://issues.apache.org/jira/browse/HDFS-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11997: Labels: OzonePostMerge (was: ) > ChunkManager functions do not use the argument keyName > -- > > Key: HDFS-11997 > URL: https://issues.apache.org/jira/browse/HDFS-11997 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Chen Liang > Labels: OzonePostMerge > > {{ChunkManagerImpl}}'s functions, i.e. {{writeChunk}}, {{readChunk}} and > {{deleteChunk}}, all take a {{keyName}} argument, which is not being used by > any of them. > I think this makes sense because conceptually {{ChunkManager}} should not > have to know the keyName to do anything, except perhaps for some sort of sanity > check or logging, which is not there either. We should revisit whether we > need it here. I think we should remove it to make the Chunk abstraction and the > function signatures more cleanly abstracted. > Any comments? [~anu] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
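The proposed cleanup can be illustrated with a stripped-down version of the interface. The signatures below are simplified stand-ins — the real methods take Pipeline and protobuf-derived types — plus a toy in-memory implementation to show the keyName-free shape:

```java
import java.util.HashMap;
import java.util.Map;

/** Simplified sketch of the interface; not the actual ChunkManager API. */
public interface ChunkManager {
    // Current shape (keyName accepted but never used):
    //   void writeChunk(String containerName, String keyName, ChunkInfo info, byte[] data);

    // Proposed shape: a chunk is addressed by container + chunk descriptor
    // only, keeping ChunkManager ignorant of key-level concepts.
    void writeChunk(String containerName, ChunkInfo info, byte[] data);
    byte[] readChunk(String containerName, ChunkInfo info);
    void deleteChunk(String containerName, ChunkInfo info);
}

/** Minimal stand-in for the chunk descriptor. */
class ChunkInfo {
    final String chunkName;
    final long offset;
    final long len;
    ChunkInfo(String chunkName, long offset, long len) {
        this.chunkName = chunkName;
        this.offset = offset;
        this.len = len;
    }
}

/** Toy implementation: chunks keyed by container name + chunk name. */
class InMemoryChunkManager implements ChunkManager {
    private final Map<String, byte[]> store = new HashMap<>();
    private String key(String container, ChunkInfo info) {
        return container + "/" + info.chunkName;
    }
    public void writeChunk(String c, ChunkInfo i, byte[] d) { store.put(key(c, i), d); }
    public byte[] readChunk(String c, ChunkInfo i) { return store.get(key(c, i)); }
    public void deleteChunk(String c, ChunkInfo i) { store.remove(key(c, i)); }
}
```

Nothing in the implementation ever needs the key name, which is the point of the description above: the argument can be dropped without losing information.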
[jira] [Updated] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism
[ https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12387: Labels: ozoneMerge (was: ) > Ozone: Support Ratis as a first class replication mechanism > --- > > Key: HDFS-12387 > URL: https://issues.apache.org/jira/browse/HDFS-12387 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Critical > Labels: ozoneMerge > Attachments: HDFS-12387-HDFS-7240.001.patch > > > The Ozone container layer supports pluggable replication policies. This JIRA > brings Apache Ratis-based replication to Ozone. Apache Ratis is a Java > implementation of the Raft protocol. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11997) ChunkManager functions do not use the argument keyName
[ https://issues.apache.org/jira/browse/HDFS-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11997: Component/s: ozone > ChunkManager functions do not use the argument keyName > -- > > Key: HDFS-11997 > URL: https://issues.apache.org/jira/browse/HDFS-11997 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Chen Liang > > {{ChunkManagerImpl}}'s functions, i.e. {{writeChunk}}, {{readChunk}} and > {{deleteChunk}}, all take a {{keyName}} argument, which is not being used by > any of them. > I think this makes sense because conceptually {{ChunkManager}} should not > have to know the keyName to do anything, except perhaps for some sort of sanity > check or logging, which is not there either. We should revisit whether we > need it here. I think we should remove it to make the Chunk abstraction and the > function signatures more cleanly abstracted. > Any comments? [~anu] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10929) Ozone:SCM: explore if we need 3 maps for tracking the state of nodes
[ https://issues.apache.org/jira/browse/HDFS-10929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10929: Labels: OzonePostMerge (was: ) > Ozone:SCM: explore if we need 3 maps for tracking the state of nodes > > > Key: HDFS-10929 > URL: https://issues.apache.org/jira/browse/HDFS-10929 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer > Labels: OzonePostMerge > Fix For: HDFS-7240 > > > Based on comments from [~jingzhao], this jira tracks whether we really need 3 maps > in the SCMNodeManager class or if we should collapse them to a single one. > The reason why we have 3 maps is to reduce lock contention. We might be able > to collapse this into a single map. This JIRA is to track that action item. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
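The collapse being discussed can be sketched by keying a single concurrent map by datanode UUID and moving the lifecycle state into the value — per-key updates stay lock-free, at the cost of per-state counts becoming O(n) scans instead of three cheap {{size()}} calls. Names here are illustrative, not the SCMNodeManager API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** One map keyed by datanode UUID, with the lifecycle state in the value. */
public class NodeStateMap {
    enum NodeState { HEALTHY, STALE, DEAD }

    private final ConcurrentMap<String, NodeState> nodes = new ConcurrentHashMap<>();

    void register(String uuid) {
        nodes.put(uuid, NodeState.HEALTHY);
    }

    void transition(String uuid, NodeState next) {
        // Atomic per-key update; no lock spanning multiple maps, so a node
        // can never be observed in two states at once.
        nodes.computeIfPresent(uuid, (id, prev) -> next);
    }

    long count(NodeState state) {
        // The tradeoff: with three maps this was map.size(); now it scans.
        return nodes.values().stream().filter(s -> s == state).count();
    }
}
```

The three-map design avoids the scan but must move entries between maps under a lock to keep them consistent; the single-map design makes the state transition trivially atomic, which is the contention argument mentioned above.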
[jira] [Updated] (HDFS-11139) Ozone: SCM: Handle duplicate Datanode ID
[ https://issues.apache.org/jira/browse/HDFS-11139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11139: Labels: ozoneMerge tocheck (was: ) > Ozone: SCM: Handle duplicate Datanode ID > - > > Key: HDFS-11139 > URL: https://issues.apache.org/jira/browse/HDFS-11139 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Labels: ozoneMerge, tocheck > Fix For: HDFS-7240 > > > The Datanode ID is used when a data node registers. It is assumed that > datanodes are unique across the cluster. > However, due to operator error or other causes we might encounter a duplicate > datanode ID. SCM should be able to recognize this and handle it correctly. > Here is a sub-set of datanode scenarios it needs to handle. > 1. Normal Datanode > 2. Copy of a Datanode metadata by operator to another node > 3. A Datanode being renamed - hostname change > 4. Container Reports -- 2 machines with same datanode ID. SCM thinks they are > the same node. > 5. Decommission -- we decommission both nodes if IDs are same. > 6. Commands will be sent to both nodes. > So it is necessary that SCM identify when a datanode is reusing a datanode ID > that is already in use by another node. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
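A sketch of the registration-time check the description asks for. Class and method names are hypothetical — real SCM registration carries much more context — but the core idea is simply remembering which address each datanode UUID last registered from:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch: catch a reused datanode ID at registration time. */
public class RegistrationCheck {
    // datanode UUID -> network address it last registered from
    private final Map<String, String> registered = new ConcurrentHashMap<>();

    /**
     * Returns true if this registration is accepted; false if the UUID is
     * already in use by a node at a different address (e.g. scenario 2 or 4
     * in the list above). A legitimate hostname change (scenario 3) would
     * need extra evidence -- such as the old address having stopped
     * heartbeating -- before the new address should be accepted.
     */
    boolean register(String uuid, String address) {
        String prev = registered.putIfAbsent(uuid, address);
        return prev == null || prev.equals(address);
    }
}
```

With the UUID-to-address binding in place, container reports and commands can be matched against the registered address, so two machines sharing one ID stop being indistinguishable to SCM.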
[jira] [Updated] (HDFS-10357) Ozone: Replace Jersey container with Netty Container
[ https://issues.apache.org/jira/browse/HDFS-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10357: Labels: OzonePostMerge (was: ) > Ozone: Replace Jersey container with Netty Container > > > Key: HDFS-10357 > URL: https://issues.apache.org/jira/browse/HDFS-10357 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Labels: OzonePostMerge > Fix For: HDFS-7240 > > > In the ozone branch, we have implemented Web Interface calls using JAX-RS. > This was very useful when the REST interfaces were in flux. This JIRA > proposes to replace Jersey based code with pure netty and remove any > dependency that Ozone has on Jersey. This will create both faster and simpler > code in the Ozone web interface. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10356) Ozone: Container server needs enhancements to control of bind address for greater flexibility and testability.
[ https://issues.apache.org/jira/browse/HDFS-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10356: Labels: OzonePostMerge tocheck (was: ) > Ozone: Container server needs enhancements to control of bind address for > greater flexibility and testability. > -- > > Key: HDFS-10356 > URL: https://issues.apache.org/jira/browse/HDFS-10356 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chris Nauroth >Assignee: Anu Engineer > Labels: OzonePostMerge, tocheck > > The container server, as implemented in class > {{org.apache.hadoop.ozone.container.common.transport.server.XceiverServer}}, > currently does not offer the same degree of flexibility as our other RPC > servers for controlling the network interface and port used in the bind call. > There is no "bind-host" property, so it is not possible to select all > available network interfaces via the 0.0.0.0 wildcard address. If the > requested port is different from the actual bound port (i.e. setting port to > 0 in test cases), then there is no exposure of that actual bound port to > clients. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
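The two gaps called out above — a separate bind-host setting (so 0.0.0.0 can be requested) and exposure of the actual bound port (so tests can ask for port 0) — follow a standard server pattern. Sketched here with a plain {{ServerSocket}}; the real XceiverServer is Netty-based, so this is an illustration of the pattern, not its API:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

/** Sketch of the bind-host / ephemeral-port pattern the report asks for. */
public class BindableServer {
    private final ServerSocket socket;

    BindableServer(String bindHost, int port) throws IOException {
        socket = new ServerSocket();
        // bindHost may be the 0.0.0.0 wildcard to listen on all interfaces;
        // port may be 0 to let the OS pick one (useful in test cases).
        socket.bind(new InetSocketAddress(bindHost, port));
    }

    /** The port actually bound -- differs from the requested one when 0 was asked for. */
    int getBoundPort() {
        return socket.getLocalPort();
    }

    void close() throws IOException {
        socket.close();
    }
}
```

In Hadoop-style configuration this usually means a `*-bind-host` key consulted for the bind call, distinct from the advertised address, with the server publishing {{getBoundPort()}} after startup so clients and tests can discover the ephemeral port.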
[jira] [Issue Comment Deleted] (HDFS-8502) Ozone: Storage container data pipeline
[ https://issues.apache.org/jira/browse/HDFS-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-8502: --- Comment: was deleted (was: HDFS-12387 does exactly this. Resolving this as a duplicate. ) > Ozone: Storage container data pipeline > -- > > Key: HDFS-8502 > URL: https://issues.apache.org/jira/browse/HDFS-8502 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > > This jira lays out the basic framework of the data pipeline to replicate the > storage containers while writing. An important design goal is to keep the > pipeline semantics independent of the storage container implementation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-8502) Ozone: Storage container data pipeline
[ https://issues.apache.org/jira/browse/HDFS-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer resolved HDFS-8502. Resolution: Duplicate HDFS-12387 does exactly what is described in this JIRA. So resolving this as duplicate/done. > Ozone: Storage container data pipeline > -- > > Key: HDFS-8502 > URL: https://issues.apache.org/jira/browse/HDFS-8502 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > > This jira lays out the basic framework of the data pipeline to replicate the > storage containers while writing. An important design goal is to keep the > pipeline semantics independent of the storage container implementation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-8502) Ozone: Storage container data pipeline
[ https://issues.apache.org/jira/browse/HDFS-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156202#comment-16156202 ] Anu Engineer commented on HDFS-8502: HDFS-12387 does exactly this. Resolving this as a duplicate. > Ozone: Storage container data pipeline > -- > > Key: HDFS-8502 > URL: https://issues.apache.org/jira/browse/HDFS-8502 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > > This jira lays out the basic framework of the data pipeline to replicate the > storage containers while writing. An important design goal is to keep the > pipeline semantics independent of the storage container implementation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12401) Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout
[ https://issues.apache.org/jira/browse/HDFS-12401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12401: Summary: Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout (was: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout) > Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout > -- > > Key: HDFS-12401 > URL: https://issues.apache.org/jira/browse/HDFS-12401 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Xiaoyu Yao > > {code} > testBlockDeletionTimeout(org.apache.hadoop.ozone.container.common.TestBlockDeletingService) > Time elapsed: 100.383 sec <<< ERROR! > java.util.concurrent.TimeoutException: Timed out waiting for condition. > Thread diagnostics: > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12350) Support meta tags in configs
[ https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156176#comment-16156176 ] Hadoop QA commented on HDFS-12350: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
14m 13s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 50s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 3 new + 250 unchanged - 1 fixed = 253 total (was 251) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 1s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 71m 25s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | | | hadoop.security.TestRaceWhenRelogin | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12350 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12885685/HDFS-12350.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux bad0aa2a782d 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 22de944 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21029/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21029/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21029/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21029/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Support meta tags in configs > > > Key: HDFS-12350 > URL: https://issues.apache.org/jira/browse/HDFS-12350 >
[jira] [Commented] (HDFS-12400) Provide a way for NN to drain the local key cache before re-encryption
[ https://issues.apache.org/jira/browse/HDFS-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156175#comment-16156175 ] Hadoop QA commented on HDFS-12400: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 2s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 13s{color} | {color:orange} root: The patch generated 1 new + 192 unchanged - 0 fixed = 193 total (was 192) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 14s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}131m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}210m 15s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestShellBasedUnixGroupsMapping | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.tools.TestDFSHAAdminMiniCluster | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12400 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12885656/HDFS-12400.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 802b821529ef 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1f3bc63 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle |
[jira] [Updated] (HDFS-11614) Ozone: Cleanup javadoc issues
[ https://issues.apache.org/jira/browse/HDFS-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11614: Priority: Blocker (was: Major) > Ozone: Cleanup javadoc issues > - > > Key: HDFS-11614 > URL: https://issues.apache.org/jira/browse/HDFS-11614 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Priority: Blocker > Labels: ozoneMerge > > There are a bunch of JavaDoc issues in the Ozone API. It would be good to have > a clean code base before we merge. This JIRA tracks the cleanup of the javadoc > comments. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11442) Ozone: Fix the Cluster ID generation code in SCM
[ https://issues.apache.org/jira/browse/HDFS-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11442: Labels: ozoneMerge (was: ) > Ozone: Fix the Cluster ID generation code in SCM > > > Key: HDFS-11442 > URL: https://issues.apache.org/jira/browse/HDFS-11442 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Blocker > Labels: ozoneMerge > Fix For: HDFS-7240 > > > The Cluster ID is randomly generated right now when SCM is started, and we > avoid verifying that the client's cluster ID matches what SCM expects. This JIRA is > to track the comments in the code. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11563) Ozone: enforce DependencyConvergence uniqueVersions
[ https://issues.apache.org/jira/browse/HDFS-11563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11563: Labels: ozoneMerge tocheck (was: ) > Ozone: enforce DependencyConvergence uniqueVersions > --- > > Key: HDFS-11563 > URL: https://issues.apache.org/jira/browse/HDFS-11563 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build, ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Blocker > Labels: ozoneMerge, tocheck > > In HDFS-11519, we disabled DependencyConvergence uniqueVersions so that > Jenkins can test the branch with the public maven repo. We should re-enable it > before merging the branch. > {code} > // hadoop-project/pom.xml > @@ -1505,7 +1545,9 @@ > <DependencyConvergence> > - <uniqueVersions>true</uniqueVersions> > + <!-- > + <uniqueVersions>true</uniqueVersions> > + --> > </DependencyConvergence> > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11442) Ozone: Fix the Cluster ID generation code in SCM
[ https://issues.apache.org/jira/browse/HDFS-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11442: Priority: Blocker (was: Major) > Ozone: Fix the Cluster ID generation code in SCM > > > Key: HDFS-11442 > URL: https://issues.apache.org/jira/browse/HDFS-11442 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Blocker > Labels: ozoneMerge > Fix For: HDFS-7240 > > > The Cluster ID is randomly generated right now when SCM is started, and we > avoid verifying that the client's cluster ID matches what SCM expects. This JIRA is > to track the comments in the code. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11612) Ozone: Cleanup Checkstyle issues
[ https://issues.apache.org/jira/browse/HDFS-11612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11612: Labels: ozoneMerge (was: ) > Ozone: Cleanup Checkstyle issues > > > Key: HDFS-11612 > URL: https://issues.apache.org/jira/browse/HDFS-11612 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Blocker > Labels: ozoneMerge > > There are a bunch of checkstyle issues under the Ozone tree. We have to clean > them up before we call for a merge of this tree. This jira tracks that work > item. It would be a noisy but mostly mechanical change, hence it is easier > to track it in a separate patch -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11612) Ozone: Cleanup Checkstyle issues
[ https://issues.apache.org/jira/browse/HDFS-11612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11612: Priority: Blocker (was: Major) > Ozone: Cleanup Checkstyle issues > > > Key: HDFS-11612 > URL: https://issues.apache.org/jira/browse/HDFS-11612 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Blocker > > There are a bunch of checkstyle issues under the Ozone tree. We have to clean > them up before we call for a merge of this tree. This jira tracks that work > item. It would be a noisy but mostly mechanical change, hence it is easier > to track it in a separate patch -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11614) Ozone: Cleanup javadoc issues
[ https://issues.apache.org/jira/browse/HDFS-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11614: Labels: ozoneMerge (was: ) > Ozone: Cleanup javadoc issues > - > > Key: HDFS-11614 > URL: https://issues.apache.org/jira/browse/HDFS-11614 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Priority: Blocker > Labels: ozoneMerge > > There are a number of Javadoc issues in the Ozone API. It would be good to have > a clean code base before we merge. This JIRA tracks the cleanup of Javadoc > comments. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11734) Ozone: provide a way to validate ContainerCommandRequestProto
[ https://issues.apache.org/jira/browse/HDFS-11734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11734: Labels: ozoneMerge tocheck (was: ) > Ozone: provide a way to validate ContainerCommandRequestProto > - > > Key: HDFS-11734 > URL: https://issues.apache.org/jira/browse/HDFS-11734 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Anu Engineer >Priority: Critical > Labels: ozoneMerge, tocheck > > We need an API to check whether a ContainerCommandRequestProto is valid. > It is useful when the container pipeline is run with Ratis: the leader > could first check whether a ContainerCommandRequestProto is valid before the > request is propagated to the followers. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11734) Ozone: provide a way to validate ContainerCommandRequestProto
[ https://issues.apache.org/jira/browse/HDFS-11734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11734: Priority: Critical (was: Major) > Ozone: provide a way to validate ContainerCommandRequestProto > - > > Key: HDFS-11734 > URL: https://issues.apache.org/jira/browse/HDFS-11734 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Anu Engineer >Priority: Critical > Labels: ozoneMerge, tocheck > > We need an API to check whether a ContainerCommandRequestProto is valid. > It is useful when the container pipeline is run with Ratis: the leader > could first check whether a ContainerCommandRequestProto is valid before the > request is propagated to the followers. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
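The leader-side validation proposed in HDFS-11734 amounts to a precondition check on the request before it is replicated. A minimal sketch of that idea, with hypothetical names (ContainerRequest, RequestValidator) standing in for the real ContainerCommandRequestProto API, which this JIRA has not yet defined:

```java
// Hypothetical sketch, not the HDFS-11734 implementation: a leader-side
// precondition check run before a request is propagated to Ratis followers.
public class RequestValidator {

    // Minimal stand-in for the fields a ContainerCommandRequestProto carries.
    public static final class ContainerRequest {
        final String containerName;
        final String traceId;
        final String command;

        public ContainerRequest(String containerName, String traceId, String command) {
            this.containerName = containerName;
            this.traceId = traceId;
            this.command = command;
        }
    }

    // Reject malformed requests up front so followers never see them.
    public static boolean isValid(ContainerRequest req) {
        return req != null
                && req.containerName != null && !req.containerName.isEmpty()
                && req.traceId != null && !req.traceId.isEmpty()
                && req.command != null && !req.command.isEmpty();
    }
}
```

The real check would inspect the protobuf's required fields and command type; the value of doing it on the leader is that an invalid request fails once, locally, instead of being replicated and failing on every follower.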
[jira] [Commented] (HDFS-11715) Ozone: SCM : Add priority for datanode commands
[ https://issues.apache.org/jira/browse/HDFS-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156163#comment-16156163 ] Anu Engineer commented on HDFS-11715: - [~cheersyang], let us chat about this some time. Should we get this done before the ozone trunk merge? > Ozone: SCM : Add priority for datanode commands > --- > > Key: HDFS-11715 > URL: https://issues.apache.org/jira/browse/HDFS-11715 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Anu Engineer > Labels: OzonePostMerge, tocheck > > While reviewing HDFS-11493, [~cheersyang] commented that it would be a good > idea to support priority for datanode commands sent from SCM. > bq. The queue seems to be time ordered, I think it will be better to support > priority as well. Commands may have different priority, for example, > replicate a container priority is usually higher than delete a container > replica; replicate a container also may have different priorities according > to the number of replicas > This JIRA tracks that work item. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11715) Ozone: SCM : Add priority for datanode commands
[ https://issues.apache.org/jira/browse/HDFS-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11715: Labels: OzonePostMerge tocheck (was: ) > Ozone: SCM : Add priority for datanode commands > --- > > Key: HDFS-11715 > URL: https://issues.apache.org/jira/browse/HDFS-11715 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Anu Engineer > Labels: OzonePostMerge, tocheck > > While reviewing HDFS-11493, [~cheersyang] commented that it would be a good > idea to support priority for datanode commands sent from SCM. > bq. The queue seems to be time ordered, I think it will be better to support > priority as well. Commands may have different priority, for example, > replicate a container priority is usually higher than delete a container > replica; replicate a container also may have different priorities according > to the number of replicas > This JIRA tracks that work item. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
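The time-ordered-plus-priority queue discussed in HDFS-11715 can be sketched with a priority queue that breaks ties on arrival order, so commands of equal priority still drain FIFO. This is an illustrative sketch only; the class and method names (CommandQueue, submit, next) are hypothetical, not the SCM implementation:

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a priority-aware SCM command queue (HDFS-11715 idea):
// ordered first by priority, then by arrival sequence number.
public class CommandQueue {

    public static final class DatanodeCommand implements Comparable<DatanodeCommand> {
        final int priority;   // lower value = more urgent
        final long seq;       // tie-breaker: preserves FIFO order within a priority
        final String payload;

        DatanodeCommand(int priority, long seq, String payload) {
            this.priority = priority;
            this.seq = seq;
            this.payload = payload;
        }

        @Override
        public int compareTo(DatanodeCommand o) {
            int c = Integer.compare(priority, o.priority);
            return c != 0 ? c : Long.compare(seq, o.seq);
        }
    }

    private final PriorityBlockingQueue<DatanodeCommand> queue = new PriorityBlockingQueue<>();
    private final AtomicLong seq = new AtomicLong();

    public void submit(int priority, String payload) {
        queue.add(new DatanodeCommand(priority, seq.getAndIncrement(), payload));
    }

    /** Returns the payload of the most urgent pending command, or null if empty. */
    public String next() {
        DatanodeCommand c = queue.poll();
        return c == null ? null : c.payload;
    }
}
```

With this ordering, a replicate-container command submitted at priority 1 is dequeued ahead of delete-replica commands submitted earlier at priority 5, matching the example in the comment quoted above.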
[jira] [Updated] (HDFS-11699) Ozone:SCM: Add support for close containers in SCM
[ https://issues.apache.org/jira/browse/HDFS-11699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11699: Labels: OzonePostMerge tocheck (was: ) > Ozone:SCM: Add support for close containers in SCM > -- > > Key: HDFS-11699 > URL: https://issues.apache.org/jira/browse/HDFS-11699 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Labels: OzonePostMerge, tocheck > > Add support for closed containers in SCM. When a container is closed, SCM > needs to make a set of decisions like which pool and which machines are > expected to have this container. SCM also needs to issue a copyContainer > command to the target datanodes so that these nodes can replicate data from > the original locations. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11686) Ozone: Support CopyContainer
[ https://issues.apache.org/jira/browse/HDFS-11686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11686: Labels: OzonePostMerge (was: ) > Ozone: Support CopyContainer > > > Key: HDFS-11686 > URL: https://issues.apache.org/jira/browse/HDFS-11686 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Anu Engineer > Labels: OzonePostMerge > > Once a container is closed we need to copy the container to the correct pool > or re-encode the container to use erasure coding. The copyContainer allows > users to get the container as a tarball from the remote machine. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11613) Ozone: Cleanup findbugs issues
[ https://issues.apache.org/jira/browse/HDFS-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156161#comment-16156161 ] Anu Engineer commented on HDFS-11613: - Need to fix this before the merge. I will start filing many smaller JIRAs so they are easy to code review. > Ozone: Cleanup findbugs issues > -- > > Key: HDFS-11613 > URL: https://issues.apache.org/jira/browse/HDFS-11613 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Blocker > Labels: ozoneMerge > > Some of the ozone checkins happened before Findbugs started running on test > files. This will cause issues when we attempt to merge with trunk. This JIRA > tracks cleaning up all Findbugs issues under ozone. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11613) Ozone: Cleanup findbugs issues
[ https://issues.apache.org/jira/browse/HDFS-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11613: Priority: Blocker (was: Major) > Ozone: Cleanup findbugs issues > -- > > Key: HDFS-11613 > URL: https://issues.apache.org/jira/browse/HDFS-11613 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Blocker > Labels: ozoneMerge > > Some of the ozone checkins happened before Findbugs started running on test > files. This will cause issues when we attempt to merge with trunk. This JIRA > tracks cleaning up all Findbugs issues under ozone. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11613) Ozone: Cleanup findbugs issues
[ https://issues.apache.org/jira/browse/HDFS-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11613: Labels: ozoneMerge (was: ) > Ozone: Cleanup findbugs issues > -- > > Key: HDFS-11613 > URL: https://issues.apache.org/jira/browse/HDFS-11613 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Blocker > Labels: ozoneMerge > > Some of the ozone checkins happened before Findbugs started running on test > files. This will cause issues when we attempt to merge with trunk. This JIRA > tracks cleaning up all Findbugs issues under ozone. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11677) OZone: SCM CLI: Implement get container command
[ https://issues.apache.org/jira/browse/HDFS-11677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156160#comment-16156160 ] Anu Engineer commented on HDFS-11677: - We have this change done for the datanode; that is, closeContainer on datanodes is supported. We may need a corresponding change in SCM to support this fully. > OZone: SCM CLI: Implement get container command > --- > > Key: HDFS-11677 > URL: https://issues.apache.org/jira/browse/HDFS-11677 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Chen Liang > Labels: command-line, ozoneMerge, tocheck > > Implement get container > {code} > hdfs scm -container get -o > {code} > This command works only against a closed container. If the container is > closed, then SCM will return the address of the datanodes. The datanodes > support an API called copyContainer, which returns the container as a > tarball. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11678) Ozone: SCM CLI: Implement get container metrics command
[ https://issues.apache.org/jira/browse/HDFS-11678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11678: Labels: command-line ozoneMerge tocheck (was: command-line) > Ozone: SCM CLI: Implement get container metrics command > --- > > Key: HDFS-11678 > URL: https://issues.apache.org/jira/browse/HDFS-11678 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Yuanbo Liu > Labels: command-line, ozoneMerge, tocheck > > Implement the command to get container metrics > {code} > hdfs scm -container metrics > {code} > This command returns container metrics in a given format, e.g. JSON. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11676) Ozone: SCM CLI: Implement close container command
[ https://issues.apache.org/jira/browse/HDFS-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156159#comment-16156159 ] Anu Engineer commented on HDFS-11676: - We have the datanode side of this code in place now. We will need to add this on the SCM side, and then we can do the command line. > Ozone: SCM CLI: Implement close container command > - > > Key: HDFS-11676 > URL: https://issues.apache.org/jira/browse/HDFS-11676 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Chen Liang > Labels: ozoneMerge, tocheck > > Implement close container command > {code} > hdfs scm -container close > {code} > This command connects to SCM and closes a container. Once the container is > closed in the SCM, the corresponding container is closed at the appropriate > datanode. If the container does not exist, it will return an error. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11676) Ozone: SCM CLI: Implement close container command
[ https://issues.apache.org/jira/browse/HDFS-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11676: Labels: ozoneMerge tocheck (was: ozoneMerge) > Ozone: SCM CLI: Implement close container command > - > > Key: HDFS-11676 > URL: https://issues.apache.org/jira/browse/HDFS-11676 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Chen Liang > Labels: ozoneMerge, tocheck > > Implement close container command > {code} > hdfs scm -container close > {code} > This command connects to SCM and closes a container. Once the container is > closed in the SCM, the corresponding container is closed at the appropriate > datanode. If the container does not exist, it will return an error. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11677) OZone: SCM CLI: Implement get container command
[ https://issues.apache.org/jira/browse/HDFS-11677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11677: Labels: command-line ozoneMerge tocheck (was: command-line) > OZone: SCM CLI: Implement get container command > --- > > Key: HDFS-11677 > URL: https://issues.apache.org/jira/browse/HDFS-11677 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Chen Liang > Labels: command-line, ozoneMerge, tocheck > > Implement get container > {code} > hdfs scm -container get -o > {code} > This command works only against a closed container. If the container is > closed, then SCM will return the address of the datanodes. The datanodes > support an API called copyContainer, which returns the container as a > tarball. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11676) Ozone: SCM CLI: Implement close container command
[ https://issues.apache.org/jira/browse/HDFS-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11676: Labels: ozoneMerge (was: ) > Ozone: SCM CLI: Implement close container command > - > > Key: HDFS-11676 > URL: https://issues.apache.org/jira/browse/HDFS-11676 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Chen Liang > Labels: ozoneMerge > > Implement close container command > {code} > hdfs scm -container close > {code} > This command connects to SCM and closes a container. Once the container is > closed in the SCM, the corresponding container is closed at the appropriate > datanode. If the container does not exist, it will return an error. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12401) TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout
Xiaoyu Yao created HDFS-12401: - Summary: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout Key: HDFS-12401 URL: https://issues.apache.org/jira/browse/HDFS-12401 Project: Hadoop HDFS Issue Type: Sub-task Components: HDFS-7240 Affects Versions: HDFS-7240 Reporter: Xiaoyu Yao {code} testBlockDeletionTimeout(org.apache.hadoop.ozone.container.common.TestBlockDeletingService) Time elapsed: 100.383 sec <<< ERROR! java.util.concurrent.TimeoutException: Timed out waiting for condition. Thread diagnostics: {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11735) Ozone: In Ratis, leader should validate ContainerCommandRequestProto before propagating it to followers
[ https://issues.apache.org/jira/browse/HDFS-11735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11735: Labels: ozoneMerge tocheck (was: ) > Ozone: In Ratis, leader should validate ContainerCommandRequestProto before > propagating it to followers > --- > > Key: HDFS-11735 > URL: https://issues.apache.org/jira/browse/HDFS-11735 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Labels: ozoneMerge, tocheck > Attachments: HDFS-11735-HDFS-7240.20170501.patch > > > The leader should use the API provided by HDFS-11734 to check if a > ContainerCommandRequestProto is valid before propagating it to followers. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12131) Add some of the FSNamesystem JMX values as metrics
[ https://issues.apache.org/jira/browse/HDFS-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156149#comment-16156149 ] Adam Whitlock commented on HDFS-12131: -- [~shv] and [~xkrogen] - Thank you! > Add some of the FSNamesystem JMX values as metrics > -- > > Key: HDFS-12131 > URL: https://issues.apache.org/jira/browse/HDFS-12131 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, namenode >Reporter: Erik Krogen >Assignee: Erik Krogen >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 2.7.5 > > Attachments: HDFS-12131.000.patch, HDFS-12131.001.patch, > HDFS-12131.002.patch, HDFS-12131.002.patch, HDFS-12131.003.patch, > HDFS-12131.004.patch, HDFS-12131.005.patch, HDFS-12131.006.patch, > HDFS-12131-branch-2.006.patch, HDFS-12131-branch-2.7.006.patch, > HDFS-12131-branch-2.8.006.patch > > > A number of useful numbers are emitted via the FSNamesystem JMX, but not > through the metrics system. These would be useful to be able to track over > time, e.g. to alert on via standard metrics systems or to view trends and > rate changes: > * NumLiveDataNodes > * NumDeadDataNodes > * NumDecomLiveDataNodes > * NumDecomDeadDataNodes > * NumDecommissioningDataNodes > * NumStaleStorages > * VolumeFailuresTotal > * EstimatedCapacityLostTotal > * NumInMaintenanceLiveDataNodes > * NumInMaintenanceDeadDataNodes > * NumEnteringMaintenanceDataNodes > This is a simple change that just requires annotating the JMX methods with > {{@Metric}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12131) Add some of the FSNamesystem JMX values as metrics
[ https://issues.apache.org/jira/browse/HDFS-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156146#comment-16156146 ] Erik Krogen commented on HDFS-12131: Thank you Konstantin! > Add some of the FSNamesystem JMX values as metrics > -- > > Key: HDFS-12131 > URL: https://issues.apache.org/jira/browse/HDFS-12131 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, namenode >Reporter: Erik Krogen >Assignee: Erik Krogen >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 2.7.5 > > Attachments: HDFS-12131.000.patch, HDFS-12131.001.patch, > HDFS-12131.002.patch, HDFS-12131.002.patch, HDFS-12131.003.patch, > HDFS-12131.004.patch, HDFS-12131.005.patch, HDFS-12131.006.patch, > HDFS-12131-branch-2.006.patch, HDFS-12131-branch-2.7.006.patch, > HDFS-12131-branch-2.8.006.patch > > > A number of useful numbers are emitted via the FSNamesystem JMX, but not > through the metrics system. These would be useful to be able to track over > time, e.g. to alert on via standard metrics systems or to view trends and > rate changes: > * NumLiveDataNodes > * NumDeadDataNodes > * NumDecomLiveDataNodes > * NumDecomDeadDataNodes > * NumDecommissioningDataNodes > * NumStaleStorages > * VolumeFailuresTotal > * EstimatedCapacityLostTotal > * NumInMaintenanceLiveDataNodes > * NumInMaintenanceDeadDataNodes > * NumEnteringMaintenanceDataNodes > This is a simple change that just requires annotating the JMX methods with > {{@Metric}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
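The HDFS-12131 change works by annotating existing JMX getter methods so the metrics system picks them up. The real {{@Metric}} annotation lives in Hadoop's metrics2 framework; the following self-contained toy version only illustrates the underlying annotation-scanning mechanism (all class names here are hypothetical, and the values are made up for the demo):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Map;
import java.util.TreeMap;

// Toy illustration of metrics-by-annotation; NOT Hadoop's metrics2 @Metric.
public class MetricScan {

    // A runtime-visible marker, analogous in spirit to org.apache.hadoop.metrics2.annotation.Metric.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Metric { }

    // Stand-in for an FSNamesystem-like bean whose getters already exist for JMX.
    public static final class FsStats {
        @Metric public long getNumLiveDataNodes() { return 3; }
        @Metric public long getNumDeadDataNodes() { return 1; }
        public long notAMetric() { return 42; }  // unannotated, so not collected
    }

    // A collector scans for annotated getters and snapshots their values.
    public static Map<String, Long> collect(Object bean) {
        Map<String, Long> out = new TreeMap<>();
        for (Method m : bean.getClass().getMethods()) {
            if (m.isAnnotationPresent(Metric.class)) {
                try {
                    out.put(m.getName(), (Long) m.invoke(bean));
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        return out;
    }
}
```

This is why the patch is "simple": the getters already exist for JMX, and the annotation is the only thing the metrics collector needs to start exporting them.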
[jira] [Commented] (HDFS-11744) Ozone: Implement the trace ID generator
[ https://issues.apache.org/jira/browse/HDFS-11744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156143#comment-16156143 ] Anu Engineer commented on HDFS-11744: - [~linyiqun] We have a trace ID on each call, since the XceiverClient will fail if we do not have a valid ID. Is this JIRA still required? Can you please comment? cc: [~msingh] I think it was one of your patches that fixed the issue, care to comment? > Ozone: Implement the trace ID generator > --- > > Key: HDFS-11744 > URL: https://issues.apache.org/jira/browse/HDFS-11744 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > > Currently in ozone, when a client wants to issue a container operation command, > the client is required to create a unique ID. This is not convenient; we should > provide a dedicated ID generator to do this. We can also keep the original way, > since sometimes the client wants to use its own trace ID; in case of error this > will help debugging and tracing. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
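A trace ID generator of the kind HDFS-11744 asks for only needs to produce process-unique, human-readable IDs cheaply. A minimal sketch, with hypothetical names (the real generator might also embed a node or client identifier to guarantee cross-process uniqueness):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a client-side trace ID generator, not the
// HDFS-11744 implementation.
public final class TraceIds {

    // Monotonic counter makes IDs unique within this process.
    private static final AtomicLong COUNTER = new AtomicLong();

    // A per-process prefix so IDs from different client instances rarely collide.
    private static final String PREFIX = Long.toHexString(System.nanoTime());

    private TraceIds() { }

    /** Returns a process-unique trace ID of the form "<hexPrefix>-<counter>". */
    public static String next() {
        return PREFIX + "-" + COUNTER.getAndIncrement();
    }
}
```

Callers that want their own trace IDs for debugging can still pass one explicitly, as the description suggests; the generator is only the default.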
[jira] [Commented] (HDFS-11601) Ozone: Compact DB should be called on Open Containers.
[ https://issues.apache.org/jira/browse/HDFS-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156142#comment-16156142 ] Anu Engineer commented on HDFS-11601: - [~cheersyang] Since we have RocksDB now, I think we should wait until we get more testing at scale and see if we really need to do anything here. I have marked it as ozoneMerge so that we will be forced to look at this before the merge. > Ozone: Compact DB should be called on Open Containers. > -- > > Key: HDFS-11601 > URL: https://issues.apache.org/jira/browse/HDFS-11601 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Weiwei Yang > Labels: ozoneMerge, tocheck > > The discussion in HDFS-11594 pointed to a potential issue that we might run > into: too many delete-key operations can take place and make a DB slow. > Running compactDB in those cases is useful. Currently we run compactDB > when we close a container. This JIRA tracks a potential improvement of > running compactDB even on open containers. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11601) Ozone: Compact DB should be called on Open Containers.
[ https://issues.apache.org/jira/browse/HDFS-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11601: Labels: ozoneMerge tocheck (was: ) > Ozone: Compact DB should be called on Open Containers. > -- > > Key: HDFS-11601 > URL: https://issues.apache.org/jira/browse/HDFS-11601 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Weiwei Yang > Labels: ozoneMerge, tocheck > > The discussion in HDFS-11594 pointed to a potential issue that we might run > into: too many delete-key operations can take place and make a DB slow. > Running compactDB in those cases is useful. Currently we run compactDB > when we close a container. This JIRA tracks a potential improvement of > running compactDB even on open containers. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11910) Ozone:KSM: Add setVolumeAcls to allow adding/removing acls from a KSM volume
[ https://issues.apache.org/jira/browse/HDFS-11910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156139#comment-16156139 ] Anu Engineer commented on HDFS-11910: - [~msingh] Tagging this as a post-merge work item. Please let me know if you would like this to be tracked under the ozone merge goal. > Ozone:KSM: Add setVolumeAcls to allow adding/removing acls from a KSM volume > > > Key: HDFS-11910 > URL: https://issues.apache.org/jira/browse/HDFS-11910 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Labels: OzonePostMerge > > Creating a KSM volume sets the ACLs for the user creating the volume; however, > it would be desirable to have setVolumeAcls to change the set of ACLs on a > volume. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11897) Ozone: KSM: Changing log level for client calls in KSM
[ https://issues.apache.org/jira/browse/HDFS-11897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11897: Priority: Major (was: Critical) > Ozone: KSM: Changing log level for client calls in KSM > -- > > Key: HDFS-11897 > URL: https://issues.apache.org/jira/browse/HDFS-11897 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Nandakumar >Assignee: Nandakumar > Labels: ozoneMerge > > Whenever there is no Volume/Bucket/Key found in MetadataDB for a client call, > KSM logs an ERROR, which is not necessary. The level of these log messages can be > changed to DEBUG, which will be helpful in debugging. > Changes are to be made in the following classes > * VolumeManagerImpl > * BucketManagerImpl > * KeyManagerImpl -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11897) Ozone: KSM: Changing log level for client calls in KSM
[ https://issues.apache.org/jira/browse/HDFS-11897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11897: Priority: Critical (was: Minor) > Ozone: KSM: Changing log level for client calls in KSM > -- > > Key: HDFS-11897 > URL: https://issues.apache.org/jira/browse/HDFS-11897 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Nandakumar >Assignee: Nandakumar >Priority: Critical > Labels: ozoneMerge > > Whenever there is no Volume/Bucket/Key found in MetadataDB for a client call, > KSM logs an ERROR, which is not necessary. The level of these log messages can be > changed to DEBUG, which will be helpful in debugging. > Changes are to be made in the following classes > * VolumeManagerImpl > * BucketManagerImpl > * KeyManagerImpl -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12350) Support meta tags in configs
[ https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156137#comment-16156137 ]

Hadoop QA commented on HDFS-12350:
--

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 31s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 17m 32s | trunk passed |
| +1 | compile | 16m 46s | trunk passed |
| +1 | checkstyle | 0m 42s | trunk passed |
| +1 | mvnsite | 1m 12s | trunk passed |
| +1 | findbugs | 1m 41s | trunk passed |
| +1 | javadoc | 0m 55s | trunk passed |
|| || || || Patch Compile Tests ||
| -1 | mvninstall | 0m 27s | hadoop-common in the patch failed. |
| -1 | compile | 0m 40s | root in the patch failed. |
| -1 | javac | 0m 40s | root in the patch failed. |
| -0 | checkstyle | 0m 33s | hadoop-common-project/hadoop-common: The patch generated 2 new + 250 unchanged - 1 fixed = 252 total (was 251) |
| -1 | mvnsite | 0m 27s | hadoop-common in the patch failed. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| -1 | findbugs | 0m 24s | hadoop-common in the patch failed. |
| -1 | javadoc | 0m 52s | hadoop-common-project_hadoop-common generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
|| || || || Other Tests ||
| -1 | unit | 0m 25s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
| | | 44m 14s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12350 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12885681/HDFS-12350.03.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux adda7720ef91 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 22de944 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/21028/artifact/patchprocess/patch-mvninstall-hadoop-common-project_hadoop-common.txt |
| compile | https://builds.apache.org/job/PreCommit-HDFS-Build/21028/artifact/patchprocess/patch-compile-root.txt |
| javac | https://builds.apache.org/job/PreCommit-HDFS-Build/21028/artifact/patchprocess/patch-compile-root.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21028/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| mvnsite | https://builds.apache.org/job/PreCommit-HDFS-Build/21028/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/21028/artifact/patchprocess/patch-findbugs-hadoop-common-project_hadoop-common.txt |
| javadoc |
[jira] [Updated] (HDFS-11897) Ozone: KSM: Changing log level for client calls in KSM
[ https://issues.apache.org/jira/browse/HDFS-11897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer updated HDFS-11897:
Labels: ozoneMerge (was: )

> Ozone: KSM: Changing log level for client calls in KSM
> --
>
> Key: HDFS-11897
> URL: https://issues.apache.org/jira/browse/HDFS-11897
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Nandakumar
> Assignee: Nandakumar
> Priority: Minor
> Labels: ozoneMerge
>
> Whenever there is no Volume/Bucket/Key found in MetadataDB for a client call,
> KSM logs an ERROR, which is not necessary. The level of these log messages can
> be changed to DEBUG, which will be helpful in debugging.
> Changes are to be made in the following classes:
> * VolumeManagerImpl
> * BucketManagerImpl
> * KeyManagerImpl
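The change described above can be sketched as follows. This is a minimal illustration only: it uses java.util.logging as a stand-in (SEVERE/FINE for ERROR/DEBUG), and the lookup method and class name are hypothetical, not taken from the actual KSM code.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class VolumeLookupExample {
    private static final Logger LOG =
        Logger.getLogger(VolumeLookupExample.class.getName());

    // Simulated lookup: returns null when the volume is absent from MetadataDB.
    static String getVolumeInfo(String volumeName) {
        String info = null; // pretend MetadataDB returned nothing
        if (info == null) {
            // Before: a missing entry was logged at ERROR (SEVERE here),
            // even though "not found" is a normal outcome for client calls.
            // After: log at DEBUG (FINE here), so the message only appears
            // when debug logging is enabled.
            LOG.log(Level.FINE, "Volume {0} not found in MetadataDB", volumeName);
        }
        return info;
    }

    public static void main(String[] args) {
        // With the default INFO logger level, the FINE message is suppressed.
        System.out.println(getVolumeInfo("vol1") == null); // prints "true"
    }
}
```

The caller still learns about the missing entry through the return value (or, in the real code, an error response to the client); only the server-side log noise changes.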
[jira] [Updated] (HDFS-11910) Ozone:KSM: Add setVolumeAcls to allow adding/removing acls from a KSM volume
[ https://issues.apache.org/jira/browse/HDFS-11910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer updated HDFS-11910:
Labels: OzonePostMerge (was: )

> Ozone:KSM: Add setVolumeAcls to allow adding/removing acls from a KSM volume
> --
>
> Key: HDFS-11910
> URL: https://issues.apache.org/jira/browse/HDFS-11910
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Mukul Kumar Singh
> Assignee: Mukul Kumar Singh
> Labels: OzonePostMerge
>
> Creating a KSM volume sets the ACLs for the user creating the volume; however,
> it would be desirable to have setVolumeAcls to change the set of ACLs for the
> volume.
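One possible shape for the requested operation is sketched below. All names here (the class, the method signature, and the ACL string format) are hypothetical and are not taken from the actual KSM code; the sketch only illustrates keeping a mutable ACL set per volume that can be changed after creation.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: a volume whose ACL set can be changed after creation.
public class VolumeAclsExample {
    private final Set<String> acls = new HashSet<>();

    // A setVolumeAcls(add, remove) style API: apply removals first, then
    // additions, in one call.
    public void setVolumeAcls(Set<String> toAdd, Set<String> toRemove) {
        acls.removeAll(toRemove);
        acls.addAll(toAdd);
    }

    public Set<String> getAcls() {
        return new HashSet<>(acls); // defensive copy
    }

    public static void main(String[] args) {
        VolumeAclsExample vol = new VolumeAclsExample();
        // Volume creation grants the creator access...
        vol.setVolumeAcls(Set.of("user:alice:rw"), Set.of());
        // ...and setVolumeAcls can later grant or revoke others.
        vol.setVolumeAcls(Set.of("user:bob:r"), Set.of("user:alice:rw"));
        System.out.println(vol.getAcls());
    }
}
```

A real implementation would of course persist the updated ACL set back to MetadataDB and enforce authorization on the caller, which this sketch omits.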
[jira] [Commented] (HDFS-11909) Ozone: KSM : Support for simulated file system operations
[ https://issues.apache.org/jira/browse/HDFS-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156136#comment-16156136 ]

Anu Engineer commented on HDFS-11909:
-

This is a good-to-have feature, so moving it to the ozone post-merge target.

> Ozone: KSM : Support for simulated file system operations
> --
>
> Key: HDFS-11909
> URL: https://issues.apache.org/jira/browse/HDFS-11909
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Anu Engineer
> Assignee: Anu Engineer
> Labels: OzonePostMerge
> Attachments: simulation-file-system.pdf
>
> This JIRA adds a proposal that makes it easy to implement OzoneFileSystem.
> This makes the directory and file list operations simpler.
[jira] [Updated] (HDFS-11468) Ozone: SCM: Add Node Metrics for SCM
[ https://issues.apache.org/jira/browse/HDFS-11468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer updated HDFS-11468:
Labels: ozoneMerge (was: )

> Ozone: SCM: Add Node Metrics for SCM
> --
>
> Key: HDFS-11468
> URL: https://issues.apache.org/jira/browse/HDFS-11468
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Labels: ozoneMerge
>
> This ticket is opened to add node metrics in SCM based on heartbeats, node
> reports, and container reports from datanodes.
[jira] [Updated] (HDFS-11909) Ozone: KSM : Support for simulated file system operations
[ https://issues.apache.org/jira/browse/HDFS-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer updated HDFS-11909:
Labels: OzonePostMerge (was: )

> Ozone: KSM : Support for simulated file system operations
> --
>
> Key: HDFS-11909
> URL: https://issues.apache.org/jira/browse/HDFS-11909
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Anu Engineer
> Assignee: Anu Engineer
> Labels: OzonePostMerge
> Attachments: simulation-file-system.pdf
>
> This JIRA adds a proposal that makes it easy to implement OzoneFileSystem.
> This makes the directory and file list operations simpler.
[jira] [Updated] (HDFS-11468) Ozone: SCM: Add Node Metrics for SCM
[ https://issues.apache.org/jira/browse/HDFS-11468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer updated HDFS-11468:
Priority: Critical (was: Major)

> Ozone: SCM: Add Node Metrics for SCM
> --
>
> Key: HDFS-11468
> URL: https://issues.apache.org/jira/browse/HDFS-11468
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Priority: Critical
> Labels: ozoneMerge
>
> This ticket is opened to add node metrics in SCM based on heartbeats, node
> reports, and container reports from datanodes.
[jira] [Updated] (HDFS-11468) Ozone: SCM: Add Node Metrics for SCM
[ https://issues.apache.org/jira/browse/HDFS-11468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer updated HDFS-11468:
Component/s: ozone

> Ozone: SCM: Add Node Metrics for SCM
> --
>
> Key: HDFS-11468
> URL: https://issues.apache.org/jira/browse/HDFS-11468
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Priority: Critical
> Labels: ozoneMerge
>
> This ticket is opened to add node metrics in SCM based on heartbeats, node
> reports, and container reports from datanodes.
[jira] [Commented] (HDFS-11937) Ozone: Support range in getKey operation
[ https://issues.apache.org/jira/browse/HDFS-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156134#comment-16156134 ]

Anu Engineer commented on HDFS-11937:
-

[~msingh] I am marking this as a post-merge request, since you already have a working ozone fs for the time being.

> Ozone: Support range in getKey operation
> --
>
> Key: HDFS-11937
> URL: https://issues.apache.org/jira/browse/HDFS-11937
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Anu Engineer
> Assignee: Anu Engineer
> Labels: OzonePostMerge
>
> We need to support HTTP ranges so that users can get a key by ranges.
[jira] [Updated] (HDFS-11937) Ozone: Support range in getKey operation
[ https://issues.apache.org/jira/browse/HDFS-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer updated HDFS-11937:
Labels: OzonePostMerge (was: )

> Ozone: Support range in getKey operation
> --
>
> Key: HDFS-11937
> URL: https://issues.apache.org/jira/browse/HDFS-11937
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Anu Engineer
> Assignee: Anu Engineer
> Labels: OzonePostMerge
>
> We need to support HTTP ranges so that users can get a key by ranges.
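Serving a key by range means interpreting the standard HTTP `Range: bytes=start-end` request header against the key length. A minimal, hypothetical parser, not the actual Ozone implementation, might look like this:

```java
// Sketch of parsing a single "bytes=start-end" HTTP Range header value
// into absolute inclusive [start, end] offsets; not the actual Ozone code.
public class RangeHeaderExample {

    // Returns {start, end} (inclusive) clamped to the key length, or null
    // if the header is not a simple byte range we can satisfy.
    static long[] parseRange(String header, long keyLength) {
        if (header == null || !header.startsWith("bytes=")) {
            return null;
        }
        String spec = header.substring("bytes=".length());
        int dash = spec.indexOf('-');
        if (dash < 0) {
            return null;
        }
        String first = spec.substring(0, dash);
        String second = spec.substring(dash + 1);
        long start;
        long end;
        if (first.isEmpty()) {
            // Suffix range, e.g. "bytes=-500": the last 500 bytes of the key.
            long suffix = Long.parseLong(second);
            start = Math.max(0, keyLength - suffix);
            end = keyLength - 1;
        } else {
            start = Long.parseLong(first);
            // Open-ended range, e.g. "bytes=100-": from offset 100 to the end.
            end = second.isEmpty() ? keyLength - 1
                : Math.min(Long.parseLong(second), keyLength - 1);
        }
        return start > end ? null : new long[] {start, end};
    }

    public static void main(String[] args) {
        long[] r = parseRange("bytes=0-1023", 4096);
        System.out.println(r[0] + "-" + r[1]); // prints "0-1023"
    }
}
```

A server using such a parser would respond 206 Partial Content with the selected byte slice when the result is non-null, and fall back to the full key (or 416) otherwise; multi-range requests are deliberately out of scope for this sketch.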