[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555704#comment-14555704 ] Vinayakumar B commented on HDFS-8382: - Changes look good. Triggered Jenkins again now; will wait for one more report.

Remove chunkSize parameter from initialize method of raw erasure coder
--
Key: HDFS-8382
URL: https://issues.apache.org/jira/browse/HDFS-8382
Project: Hadoop HDFS
Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
Attachments: HDFS-8382-HDFS-7285-v1.patch, HDFS-8382-HDFS-7285-v2.patch, HDFS-8382-HDFS-7285-v3.patch, HDFS-8382-HDFS-7285-v4.patch, HDFS-8382-HDFS-7285-v5.patch

Per discussion in HDFS-8347, we need to support encoding/decoding data in variable-width units instead of a predefined fixed width like {{chunkSize}}. This issue removes chunkSize from the general raw erasure coder API. A specific coder can still support a fixed chunkSize, hard-coded or via schema customization if necessary, as in the HitchHiker coder.
--
This message was sent by Atlassian JIRA (v6.3.4#6332)
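The variable-width idea in the issue description can be sketched roughly as follows. This is a hypothetical illustration, not the actual Hadoop coder API: the class name {{XorRawEncoder}} and its signature are invented here. The point is that the coder is constructed with only its schema-level unit counts, while the unit width is taken from each encode() call instead of a stored chunkSize.

```java
// Hypothetical sketch of a chunkSize-free raw coder (illustrative names,
// not the Hadoop API). The schema fixes how many units there are; each
// encode() call decides how wide the units are this time.
class XorRawEncoder {
    private final int numDataUnits;  // fixed by the schema, set at construction

    XorRawEncoder(int numDataUnits) {
        this.numDataUnits = numDataUnits;
    }

    // No stored chunkSize: the width is whatever the caller passes on this
    // call, as long as all units within one call have equal length.
    byte[] encode(byte[][] inputs) {
        if (inputs.length != numDataUnits) {
            throw new IllegalArgumentException("expected " + numDataUnits + " data units");
        }
        byte[] parity = new byte[inputs[0].length];
        for (byte[] unit : inputs) {
            if (unit.length != parity.length) {
                throw new IllegalArgumentException("units must have equal width");
            }
            for (int i = 0; i < unit.length; i++) {
                parity[i] ^= unit[i];  // simple XOR parity for illustration
            }
        }
        return parity;
    }

    public static void main(String[] args) {
        XorRawEncoder enc = new XorRawEncoder(2);
        // 16-byte units on this call; a later call may use any other width.
        byte[] p = enc.encode(new byte[][]{new byte[16], new byte[16]});
        System.out.println("parity width = " + p.length);
    }
}
```

A coder that genuinely needs a fixed width (as the description notes for HitchHiker) could still hard-code or derive it from its own schema, without the general API carrying the parameter.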
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555718#comment-14555718 ] Kai Zheng commented on HDFS-8382: - Thanks Vinay for the good analysis. I think you're right; it's because the change spans multiple modules, particularly from the hadoop-common side to the HDFS side.
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555688#comment-14555688 ] Vinayakumar B commented on HDFS-8382: - [~drankye] and [~aw], I think the problem is this: when Jenkins runs *Precommit-HDFS-Build* on the HDFS-7285 branch and executes the hadoop-hdfs module tests, it depends on the hadoop-common and hadoop-hdfs-client jars present in the local maven repo. At the same time, those jars can be replaced by a *Precommit-HADOOP-Build* job, or by any other Hadoop job running on a different branch. So for HDFS-7285's tests, extra classes added on the same branch in another module (hadoop-common/hadoop-hdfs-client) will be missing. So far we have seen failures with missing classes from the hadoop-common and hadoop-hdfs-client modules while running hadoop-hdfs tests. I think the solution would be to have a separate maven repo for each Jenkins project to avoid the collisions, even though that results in duplicated repo contents. What do you say, [~aw]? A similar problem would have been seen earlier whenever a patch involved changes across multiple modules.
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555990#comment-14555990 ] Hadoop QA commented on HDFS-8382: -
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 37s | Pre-patch HDFS-7285 compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 4 new or modified test files. |
| {color:red}-1{color} | javac | 7m 26s | The applied patch generated 4 additional warning messages. |
| {color:green}+1{color} | javadoc | 9m 39s | There were no new javadoc warning messages. |
| {color:red}-1{color} | release audit | 0m 15s | The applied patch generated 1 release audit warnings. |
| {color:red}-1{color} | checkstyle | 1m 36s | The applied patch generated 4 new checkstyle issues (total was 51, now 41). |
| {color:green}+1{color} | whitespace | 0m 6s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs | 4m 52s | The patch appears to introduce 1 new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests | 23m 26s | Tests passed in hadoop-common. |
| {color:red}-1{color} | hdfs tests | 173m 11s | Tests failed in hadoop-hdfs. |
| | | | 237m 42s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| | Inconsistent synchronization of org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 88% of time. Unsynchronized access at DFSOutputStream.java:[line 146] |
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.server.namenode.TestFileTruncate |
| | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
| | hadoop.hdfs.server.namenode.TestAuditLogs |
| | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12734419/HDFS-8382-HDFS-7285-v5.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 24d0fbe |
| javac | https://builds.apache.org/job/PreCommit-HDFS-Build/11098/artifact/patchprocess/diffJavacWarnings.txt |
| Release Audit | https://builds.apache.org/job/PreCommit-HDFS-Build/11098/artifact/patchprocess/patchReleaseAuditProblems.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/11098/artifact/patchprocess/diffcheckstylehadoop-common.txt |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/11098/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html |
| hadoop-common test log | https://builds.apache.org/job/PreCommit-HDFS-Build/11098/artifact/patchprocess/testrun_hadoop-common.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/11098/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/11098/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11098/console |

This message was automatically generated.
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556307#comment-14556307 ] Hadoop QA commented on HDFS-8382: -
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 15m 2s | Pre-patch HDFS-7285 compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 4 new or modified test files. |
| {color:green}+1{color} | javac | 7m 41s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 51s | There were no new javadoc warning messages. |
| {color:red}-1{color} | release audit | 0m 15s | The applied patch generated 1 release audit warnings. |
| {color:green}+1{color} | checkstyle | 1m 48s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace | 0m 6s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs | 4m 57s | The patch appears to introduce 1 new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests | 24m 5s | Tests passed in hadoop-common. |
| {color:red}-1{color} | hdfs tests | 17m 44s | Tests failed in hadoop-hdfs. |
| | | | 83m 45s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| | Inconsistent synchronization of org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 88% of time. Unsynchronized access at DFSOutputStream.java:[line 146] |
| Failed unit tests | hadoop.hdfs.TestReservedRawPaths |
| | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
| | hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots |
| | hadoop.hdfs.TestSetrepIncreasing |
| | hadoop.hdfs.TestModTime |
| | hadoop.fs.TestUrlStreamHandler |
| | hadoop.hdfs.security.TestDelegationToken |
| | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolarent |
| | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
| | hadoop.hdfs.server.namenode.TestFileLimit |
| | hadoop.hdfs.TestParallelShortCircuitRead |
| | hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot |
| | hadoop.hdfs.TestDisableConnCache |
| | hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped |
| | hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter |
| | hadoop.hdfs.server.namenode.TestEditLogAutoroll |
| | hadoop.TestRefreshCallQueue |
| | hadoop.hdfs.protocolPB.TestPBHelper |
| | hadoop.hdfs.web.TestWebHdfsUrl |
| | hadoop.hdfs.TestECSchemas |
| | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
| | hadoop.hdfs.TestConnCache |
| | hadoop.cli.TestCryptoAdminCLI |
| | hadoop.hdfs.TestDFSClientRetries |
| | hadoop.hdfs.TestSetrepDecreasing |
| | hadoop.hdfs.server.datanode.TestDiskError |
| | hadoop.fs.viewfs.TestViewFsWithAcls |
| | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
| | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
| | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
| | hadoop.hdfs.server.namenode.TestHostsFiles |
| | hadoop.hdfs.server.datanode.TestTransferRbw |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
| | hadoop.fs.contract.hdfs.TestHDFSContractDelete |
| | hadoop.hdfs.server.namenode.TestFileContextAcl |
| | hadoop.fs.TestFcHdfsSetUMask |
| | hadoop.fs.TestUnbuffer |
| | hadoop.hdfs.server.namenode.TestClusterId |
| | hadoop.hdfs.server.namenode.TestDeleteRace |
| | hadoop.hdfs.TestPread |
| | hadoop.hdfs.server.namenode.TestFSDirectory |
| | hadoop.hdfs.server.namenode.TestLeaseManager |
| | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
| | hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing |
| | hadoop.hdfs.server.datanode.TestStorageReport |
| | hadoop.hdfs.server.datanode.TestBlockRecovery |
| | hadoop.hdfs.server.namenode.TestFileTruncate |
| | hadoop.hdfs.TestReadWhileWriting |
| | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
| | hadoop.fs.contract.hdfs.TestHDFSContractAppend |
| | hadoop.hdfs.server.datanode.TestFsDatasetCache |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation |
| | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
| | hadoop.hdfs.server.namenode.ha.TestQuotasWithHA |
| | hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA |
| | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
| | hadoop.hdfs.server.namenode.TestMalformedURLs |
| | hadoop.hdfs.server.namenode.TestAuditLogger |
| | hadoop.hdfs.server.namenode.TestRecoverStripedBlocks |
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556425#comment-14556425 ] Allen Wittenauer commented on HDFS-8382: - bq. I think the solution will be like having a separate maven repo for each jenkins project to avoid the collisions, even though results in duplicate contents of repo.
This is on my list of things to do, but it's going to be a while. HADOOP-11933 and HADOOP-11929 take first priority since they have much bigger impacts. (In fact, we pretty much can't do separate maven repos in any sane way until HADOOP-11933 anyway, without a ton of extra gymnastics in test-patch.sh.)
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556690#comment-14556690 ] Vinayakumar B commented on HDFS-8382: - bq. This is on my list of things to do, but it's going to be a while.
Thanks [~aw] for the many recent improvements in the area of building and testing on Jenkins. Great work.
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554004#comment-14554004 ] Vinayakumar B commented on HDFS-8382: - I think the patch still needs a rebase; it fails to apply on {{RSRawDecoder.java}}.
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554298#comment-14554298 ] Hadoop QA commented on HDFS-8382: -
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 58s | Pre-patch HDFS-7285 compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 4 new or modified test files. |
| {color:red}-1{color} | javac | 7m 48s | The applied patch generated 4 additional warning messages. |
| {color:green}+1{color} | javadoc | 9m 58s | There were no new javadoc warning messages. |
| {color:red}-1{color} | release audit | 0m 15s | The applied patch generated 1 release audit warnings. |
| {color:red}-1{color} | checkstyle | 1m 23s | The applied patch generated 4 new checkstyle issues (total was 51, now 41). |
| {color:green}+1{color} | whitespace | 0m 5s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs | 4m 59s | The patch appears to introduce 6 new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests | 23m 56s | Tests failed in hadoop-common. |
| {color:red}-1{color} | hdfs tests | 91m 28s | Tests failed in hadoop-hdfs. |
| | | | 157m 27s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| | Inconsistent synchronization of org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 89% of time. Unsynchronized access at DFSOutputStream.java:[line 146] |
| | Possible null pointer dereference of arr$ in org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long). Dereferenced at BlockInfoStripedUnderConstruction.java:[line 194] |
| | Unread field: should this field be static? At ErasureCodingWorker.java:[line 254] |
| | Should org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$StripedReader be a _static_ inner class? At ErasureCodingWorker.java:[lines 905-912] |
| | Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.constructInternalBlock(LocatedStripedBlock, int, int, int, int) At StripedBlockUtil.java:[line 108] |
| | Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.getStartOffsetsForInternalBlocks(ECSchema, int, LocatedStripedBlock, long) At StripedBlockUtil.java:[line 408] |
| Failed unit tests | hadoop.ipc.TestRPC |
| | hadoop.hdfs.TestFileAppend4 |
| | hadoop.hdfs.TestRead |
| | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
| | hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd |
| | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions |
| | hadoop.hdfs.server.datanode.TestRefreshNamenodes |
| | hadoop.hdfs.TestHdfsAdmin |
| | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN |
| | hadoop.hdfs.TestClientReportBadBlock |
| | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
| | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
| | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
| | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
| | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
| | hadoop.hdfs.TestAppendSnapshotTruncate |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
| | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
| | hadoop.hdfs.server.namenode.TestNameNodeRpcServer |
| | hadoop.hdfs.TestFileAppendRestart |
| | hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade |
| | hadoop.cli.TestErasureCodingCLI |
| | hadoop.hdfs.protocol.TestBlockListAsLongs |
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555176#comment-14555176 ] Kai Zheng commented on HDFS-8382: - Hello [~aw], do you have any idea why the build looks like this, with so many unit tests failing? We see this on other issues related to HDFS-7285 as well. Thanks!
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552079#comment-14552079 ] Vinayakumar B commented on HDFS-8382: - bq. Updated the patch also removing initialize method per the suggestion.
I still see {{initialize()}} and {{chunkSize}} in {{AbstractErasureCoder}}; currently this is the only place where chunkSize is passed from the ECSchema to the coder. So HDFS-8374 depends on this.
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552170#comment-14552170 ] Kai Zheng commented on HDFS-8382: - Thanks Vinay for the comments. Yes, I meant to get all of this done here, but it looks like I missed some places. Good catch! I will address them in the following patch. Thanks.
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552089#comment-14552089 ] Vinayakumar B commented on HDFS-8382: - I see that the patch covers the 'raw' coders, but similar treatment needs to be applied to the 'coder' classes as well. Could that be done in this patch itself?
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14551439#comment-14551439 ] Kai Zheng commented on HDFS-8382: - Thanks for the comment, Nicholas. While having the initialize method makes some things easy, I agree it's better to remove it. Previously I thought there was heavy work to be done that wouldn't be appropriate in a constructor, but now that I have finished the native coders, I agree the initialization work can be done well in a constructor too. I will remove the initialize method as well in the next updated patch.
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14551221#comment-14551221 ] Tsz Wo Nicholas Sze commented on HDFS-8382: - We should remove all the initialize methods, pass the parameters via constructors, and change all the fields to final.
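The suggested refactor can be sketched as a before/after comparison. This is a hypothetical illustration; the class names {{InitStyleCoder}} and {{CtorStyleCoder}} are invented here and are not the actual Hadoop coder classes.

```java
// Hypothetical before/after sketch of the suggested refactor
// (illustrative names, not the Hadoop classes).

// Before: two-phase setup via initialize(), so fields stay mutable and a
// coder can be observed half-configured.
class InitStyleCoder {
    private int numDataUnits;
    private int numParityUnits;
    private int chunkSize;

    void initialize(int numDataUnits, int numParityUnits, int chunkSize) {
        this.numDataUnits = numDataUnits;
        this.numParityUnits = numParityUnits;
        this.chunkSize = chunkSize;  // carried even by coders that never use it
    }
}

// After: everything arrives through the constructor, fields are final, and
// chunkSize disappears from the general API.
class CtorStyleCoder {
    private final int numDataUnits;
    private final int numParityUnits;

    CtorStyleCoder(int numDataUnits, int numParityUnits) {
        this.numDataUnits = numDataUnits;
        this.numParityUnits = numParityUnits;
    }

    int getNumDataUnits() { return numDataUnits; }

    int getNumParityUnits() { return numParityUnits; }

    public static void main(String[] args) {
        // e.g. a (6, 3) schema: 6 data units, 3 parity units
        CtorStyleCoder coder = new CtorStyleCoder(6, 3);
        System.out.println(coder.getNumDataUnits() + "+" + coder.getNumParityUnits());
    }
}
```

With final fields the coder is fully configured the moment the constructor returns, which removes the half-initialized state that a separate initialize() call allows.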
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14547700#comment-14547700 ]

Hadoop QA commented on HDFS-8382:
---------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 48s | Pre-patch HDFS-7285 compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 3 new or modified test files. |
| {color:green}+1{color} | javac | 7m 32s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 40s | There were no new javadoc warning messages. |
| {color:red}-1{color} | release audit | 0m 15s | The applied patch generated 1 release audit warnings. |
| {color:green}+1{color} | checkstyle | 1m 46s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace | 0m 1s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs | 4m 53s | The patch appears to introduce 9 new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests | 22m 58s | Tests passed in hadoop-common. |
| {color:red}-1{color} | hdfs tests | 187m 34s | Tests failed in hadoop-hdfs. |
| | | 251m 43s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| | Inconsistent synchronization of org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 89% of time. Unsynchronized access at DFSOutputStream.java:[line 146] |
| | Possible null pointer dereference of arr$ in org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long). Dereferenced at BlockInfoStripedUnderConstruction.java:[line 194] |
| | Unread field at ErasureCodingWorker.java:[line 252] |
| | Should org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$StripedReader be a _static_ inner class? At ErasureCodingWorker.java:[lines 913-915] |
| | Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String, ECSchema): String.getBytes() At ErasureCodingZoneManager.java:[line 117] |
| | Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath): new String(byte[]) At ErasureCodingZoneManager.java:[line 81] |
| | Dead store to dataBlkNum in org.apache.hadoop.hdfs.util.StripedBlockUtil.calcualteChunkPositionsInBuf(ECSchema, LocatedStripedBlock, byte[], int, int, int, int, long, int, StripedBlockUtil$AlignedStripe[]) At StripedBlockUtil.java:[line 467] |
| | Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.constructInternalBlock(LocatedStripedBlock, int, int, int, int) At StripedBlockUtil.java:[line 86] |
| | Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.planReadPortions(int, int, long, int, int) At StripedBlockUtil.java:[line 206] |
| Failed unit tests | hadoop.hdfs.server.datanode.TestIncrementalBlockReports |
| | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| | hadoop.hdfs.server.blockmanagement.TestBlockManager |
| | hadoop.hdfs.server.namenode.TestAuditLogs |
| | hadoop.hdfs.server.datanode.TestBlockReplacement |
| | hadoop.hdfs.server.datanode.TestTriggerBlockReport |
| | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
| | hadoop.hdfs.server.namenode.TestFileTruncate |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| Timed out tests | org.apache.hadoop.hdfs.TestDatanodeDeath |

|| Subsystem ||
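Two of the findbugs warnings above are the classic "result of integer multiplication cast to long" pattern: both operands are `int`, so the multiplication overflows before the widening cast happens. A minimal illustration, with made-up numbers rather than the actual StripedBlockUtil arithmetic:

```java
// Illustrative only; CELL_SIZE and the offset logic are not taken from
// StripedBlockUtil, they just demonstrate the findbugs pattern.
class CastBeforeMultiply {
    static final int CELL_SIZE = 1 << 20; // 1 MiB, hypothetical cell width

    // Buggy: int * int is computed in 32 bits and wraps around before the
    // implicit widening to long. 4096 * 2^20 == 2^32, which wraps to 0.
    static long offsetBuggy(int cellIndex) {
        return cellIndex * CELL_SIZE;
    }

    // Fixed: widen one operand first so the multiplication happens in long.
    static long offsetFixed(int cellIndex) {
        return (long) cellIndex * CELL_SIZE;
    }
}
```

The fix is a one-character-class change (cast one operand, or make the constant a `long`), which is why findbugs flags it rather than leaving it to review.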
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14547503#comment-14547503 ]

Hadoop QA commented on HDFS-8382:
---------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch | 0m 0s | The patch command could not apply the patch during dryrun. |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12732806/HDFS-8382-HDFS-7285-v1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / f346672 |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11020/console |

This message was automatically generated.
[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder
[ https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14541494#comment-14541494 ]

Kai Zheng commented on HDFS-8382:
---------------------------------

Oh bad. This one should be under HADOOP-11264. Would anyone help with the transition? Thanks.